\section{Introduction}
\label{sec:Introduction}
It is a well-established fact that our Universe is currently undergoing accelerated cosmological expansion \citep{Farooqetal2017, Scolnicetal2018, PlanckCollaboration2020, eBOSSCollaboration2021}. This observational fact can be explained by general relativistic cosmological models if we include dark energy in them. The simplest cosmological model that is consistent with this observation is the standard spatially-flat $\Lambda$CDM model \citep{Peebles1984}. In this model, dark energy in the form of the cosmological constant $\Lambda$ contributes $\sim 70\%$ of the current cosmological energy budget, non-relativistic cold dark matter (CDM) contributes $\sim 25\%$, and almost all of the remaining $\sim 5\%$ is contributed by non-relativistic baryons. This model is consistent with most observational data, but a little spatial curvature and mild dark energy dynamics are not ruled out. In this paper, therefore, in addition to the $\Lambda$CDM model, we consider two dynamical dark energy models: the widely-used but physically-incomplete XCDM parametrization, which parametrizes dynamical dark energy as an $X$-fluid, and the physically-complete $\phi$CDM model, which models dynamical dark energy as a scalar field. In each case we consider flat and non-flat spatial hypersurfaces to also allow for possibly non-zero spatial curvature of the Universe.\footnote{Recent observational constraints on spatial curvature are discussed in \citet{Farooqetal2015}, \citet{Chenetal2016}, \citet{Ranaetal2017}, \citet{Oobaetal2018a, Oobaetal2018b}, \citet{Yuetal2018}, \citet{ParkRatra2019a, ParkRatra2019b}, \citet{Wei2018}, \citet{DESCollaboration2019}, \citet{Lietal2020}, \citet{Handley2019}, \citet{EfstathiouGratton2020}, \citet{DiValentinoetal2021a}, \citet{VelasquezToribioFabris2020}, \citet{Vagnozzietal2020, Vagnozzietal2021}, \citet{KiDSCollaboration2021}, \citet{ArjonaNesseris2021}, \citet{Dhawanetal2021}, and references therein.}
These models are mostly tested using well-established cosmological probes such as cosmic microwave background (CMB) anisotropy data, baryon acoustic oscillation (BAO) observations, Hubble parameter [$H(z)$] measurements, and Type Ia supernova (SNIa) apparent magnitude data. CMB anisotropy data probe the $z \sim 1100$ part of redshift space and are the only high-redshift data. BAO data probe redshift space up to $z \sim 2.3$, the highest $z$ reached by the better-established lower-redshift probes. These are limited sets of cosmological data and a number of observationally-viable cosmological models make very similar predictions for these probes, so to establish a more accurate standard cosmological model and to obtain tighter cosmological parameter constraints we need to use other astronomical data.
A significant amount of work has been done to develop new cosmological probes. This work includes use of HII starburst galaxy observations which extend to $z \sim 2.4$ \citep{ManiaRatra2012, Chavezetal2014, GonzalezMoran2019, GonzalezMoranetal2021, Caoetal2020, Caoetal2021a, Johnsonetal2021}, quasar (QSO) angular size measurements which extend to $z \sim 2.7$ \citep{Caoetal2017, Ryanetal2019, Caoetal2020, Caoetal2021b, Zhengetal2021, Lianetal2021}, QSO X-ray and UV flux measurements which extend to $z \sim 7.5$ \citep{RisalitiLusso2015, RisalitiLusso2019, KhadkaRatra2020a, KhadkaRatra2020b, KhadkaRatra2021, Yangetal2020, Lussoetal2020, Lietal2021, Lianetal2021}, and gamma-ray burst (GRB) data that extend to $z \sim 8.2$ \citep{Amati2008, Amati2019, samushia_ratra_2010, Wang_2016, Demianski_2019, Dirirsa2019, KhadkaRatra2020c, Khadkaetal2021}.
An additional new method that can be used in cosmology is based on QSOs with a measured time delay between the quasar ionizing continuum and the Mg II line luminosity. This technique is referred to as reverberation mapping and it makes use of the tight correlation between the variable ionizing radiation powered by the accretion disc and the line emission that originates in the broad-line region (BLR) optically-thick material located farther away, which efficiently reprocesses the disc continuum radiation \citep{1982ApJ...255..419B}. We refer to these reverberation-mapped sources as Mg II QSOs. We use Mg II QSOs to constrain cosmological dark energy models for the following reasons. (i) There is now a reasonably large number, 78, of studied Mg II QSOs at intermediate $z$ \citep{2019ApJ...880...46C,Homayouni2020, Mary2020, Michal2020, Michal2021, Zhefu2021}, and the current Mg II QSO redshift range $0.0033\leq z \leq 1.89$ is more extended, especially towards higher redshifts, than that of the $117$ reverberation-mapped H$\beta$ quasars \citep[$0.002\leq z \leq 0.89$; ][]{Mary2019}. (ii) Some works using QSO X-ray and UV flux measurements show evidence for tension with predictions of the standard spatially-flat $\Lambda$CDM model with $\Omega_{m0}=0.3$ \citep{RisalitiLusso2019, KhadkaRatra2020b, KhadkaRatra2021, Lussoetal2020}, and the Mg II QSO sample is an alternative QSO data set that might help clarify this issue. (iii) For Mg II QSOs, the UV spectrum is not severely contaminated by starlight, in contrast to QSOs where reverberation mapping has been performed using the optical H$\beta$ line \citep{Bentz2013}. Hence, the measured Mg II QSO flux density at 3000\,\AA\, can be considered to largely represent the accretion-disc ionizing flux density at this wavelength that is reprocessed by BLR clouds located at the mean distance $R=c\tau$, where $\tau$ is the rest-frame time delay between the UV ionizing continuum and the broad-line material emitting Mg II, inferred, e.g., from the cross-correlation function.
The reverberation-measured rest-frame time-delay of the broad UV Mg II emission line (which is centered at 2798\,\AA\, in the rest frame) and the monochromatic luminosity of the QSO are correlated through the radius-luminosity correlation, also known as the $R-L$ relation, with the power-law scaling $R\propto L^{\gamma}$. Such a relation was first discovered for the broad H$\beta$ line in the optical domain \citep[the H$\beta$ rest-frame wavelength is 4860\,\AA;][]{kaspi2000,peterson2004,Bentz2013}, and the possibility of using such measurements to create a Hubble diagram and constrain cosmological parameters was discussed soon afterwards \citep{watson2011,haas2011,czerny2013,Bentz2013}. For the H$\beta$ broad component, the initially inferred power-law index, $\gamma=0.67 \pm 0.05$, deviated from the $\gamma=0.5$ expected from simple photoionization arguments\footnote{Using the definition of the ionization parameter for a BLR cloud, $U=Q(H)/[4\pi R^2 c n(H)]$, where $Q(H)$ is the number of hydrogen-ionizing photons emitted per second (in ${\rm s^{-1}}$), $R$ is the cloud distance from the continuum source, and $n(H)$ is the total hydrogen density. Assuming that $Un(H)=\mathrm{const}$ for BLR clouds in different sources, we obtain $R\propto L^{1/2}$.} \citep{2005ApJ...629...61K}. After extending the sample by including lower-redshift sources and correcting for host starlight contamination \citep{Bentz2013}, the updated H$\beta$ sample yielded a slope of $\gamma=0.533^{+0.035}_{-0.033}$, i.e. consistent with the simple photoionization theory, and a small intrinsic scatter of only $\sigma_{\rm ext}=0.13$ dex, which made these data attractive for cosmological applications. As the H$\beta$ quasar sample was enlarged by adding sources with a higher accretion rate, the overall scatter increased significantly \citep{2014ApJ...782...45D,2018ApJ...856....6D,2017ApJ...851...21G}. Using accretion-rate tracers, such as the Eddington ratio, the dimensionless accretion rate, the relative Fe II strength, or the fractional variability, it was found that this scatter is mostly driven by the accretion rate \citep{2018ApJ...856....6D,Mary2019,2020ApJ...903..112D,2020ApJ...899...73F}. Sources with a higher accretion rate have shortened time lags with respect to the $R-L$ relation, i.e. the higher the accretion rate, the larger the departure. The same trend was later confirmed for the Mg II QSO $R-L$ relation \citep{Michal2020,Mary2020,Michal2021}.
The $R-L$ correlation, although with a relatively large dispersion of $\sim 0.3$ dex for Mg II QSOs \citep{Mary2020,Michal2021}, in principle enables us to use reverberation-measured Mg II QSOs to determine constraints on cosmological parameters since the time delay measurement allows one to obtain the source absolute luminosity \citep[see][ for overviews]{2019FrASS...6...75P,2020mbhe.confE..10M}. Some attempts have previously been made to use reverberation-measured QSOs in cosmology \citep{Mary2019,Czerny2021, Michal2021}, and so far an overall agreement has been found with the standard $\Lambda$CDM cosmological model for H$\beta$ QSOs \citep{Mary2019}, combined H$\beta$ and Mg II sources \citep{Czerny2021}, and Mg II QSOs alone \citep{Michal2021}.
In this paper, we use 78 Mg II QSOs --- the largest set of such measurements to date --- to simultaneously constrain cosmological parameters and $R-L$ relation parameters (the intercept $\beta$ and the slope $\gamma$) in six different cosmological models. This simultaneous determination of cosmological parameters and $R-L$ relation parameters --- done here for Mg II QSOs for the first time --- allows us to avoid the circularity problem.\footnote{This is the problem of having to either assume $\beta$ and $\gamma$ to use the $R-L$ relation and data to constrain cosmological model parameters, or having to assume a cosmological model (and parameter values) to use the measurements to determine $\beta$ and $\gamma$.} Since we determine $\beta$ and $\gamma$ values in six different cosmological models, we are able to test whether Mg II QSOs are standardizable candles.\footnote{This is one reason why we study a number of different cosmological models.} We find that the $R-L$ relation parameters are independent of the cosmological model in which they were derived, thus establishing that current Mg II QSOs are standardizable candles. However, while cosmological parameter constraints obtained using these Mg II QSOs are consistent with those obtained from most other cosmological probes, they are significantly less restrictive. The Mg II QSO constraints are less restrictive because the $R-L$ relation, which is the basis of our method, has a large intrinsic dispersion ($\sigma_{\rm ext} \sim 0.29$ dex) and also involves two nuisance parameters, $\beta$ and $\gamma$. Cosmological constraints obtained using the Mg II QSO data set are consistent with those obtained using BAO + $H(z)$ data, so we also analyze these 78 Mg II QSO data in conjunction with BAO + $H(z)$ data. Results obtained from the joint analyses are consistent with the standard spatially-flat $\Lambda$CDM model but also do not rule out a little spatial curvature and mild dark energy dynamics.
This paper is structured as follows. In Sec.\ 2 we summarize the cosmological models we use. In Sec.\ 3 we describe the data sets we analyze. In Sec.\ 4 we summarize our analysis methods. In Sec.\ 5 we present our results. We conclude in Sec.\ 6. The Mg II QSO data sets we use are tabulated in the Appendix.
\section{Models}
\label{sec:models}
We constrain cosmological model parameters by comparing model predictions to cosmological measurements at known redshift $z$. We consider six different dark energy cosmological models, three with flat spatial geometry and three with non-flat spatial geometry. For the observations we consider, model predictions depend on the Hubble parameter --- the cosmological expansion rate --- a function that depends on $z$ and on the cosmological parameters of the model.
In these models the Hubble parameter can be expressed as
\begin{equation}
\label{eq:friedLCDM}
H(z) = H_0\sqrt{\Omega_{m0}(1 + z)^3 + \Omega_{k0}(1 + z)^2 + \Omega_{DE}(z)},
\end{equation}
where $H_0$ is the Hubble constant, $\Omega_{DE}(z)$ is the dark energy density parameter, and $\Omega_{m0}$ and $\Omega_{k0}$ are the current values of the non-relativistic matter and curvature energy density parameters. In the spatially-flat models $\Omega_{k0} = 0$. For analyses of the BAO + $H(z)$ and QSO + BAO + $H(z)$ data, we express $\Omega_{m0}$ in terms of the current values of the cold dark matter density parameter $(\Omega_{c})$ and the baryon density parameter $(\Omega_{b})$: $\Omega_{m0} = \Omega_{c} + \Omega_{b}$, and use $\Omega_b h^2$ and $\Omega_c h^2$ as free parameters [here $h = H_0/(100\, {\rm km}\, {\rm s}^{-1}\, {\rm Mpc}^{-1})$] instead of $\Omega_{m0}$. As discussed in Sec.\ 4, QSO data alone cannot constrain $H_0$, which in this case is set to $70$ ${\rm km}\hspace{1mm}{\rm s}^{-1}{\rm Mpc}^{-1}$; for the BAO + $H(z)$ and QSO + BAO + $H(z)$ data analyses $H_0$ is a free parameter to be determined from the data. The dark energy density evolves as a power of $(1+z)$ in four of the six models we study. In these models $\Omega_{DE}(z) = \Omega_{DE0}(1+z)^{3(1+\omega_X)}$, where $\omega_X$ is the dark energy equation of state parameter (defined below) and $\Omega_{DE0}$ is the current value of the dark energy density parameter.
In the $\Lambda$CDM model $\omega_X = -1$ so $\Omega_{DE}$ = $\Omega_{DE0}$ = $\Omega_{\Lambda}$, and dark energy is the standard cosmological constant. The current values of the three $\Lambda$CDM model energy density parameters obey the energy budget equation $\Omega_{m0} + \Omega_{k0} + \Omega_{\Lambda} = 1$. For the QSO-only data analyses we fix $H_0$; in the spatially-flat $\Lambda$CDM model we take $\Omega_{m0}$ to be the free parameter, while in the non-flat $\Lambda$CDM model $\Omega_{m0}$ and $\Omega_{k0}$ are the free parameters.
In the XCDM parametrization dark energy is parametrized as an ideal $X$-fluid with equation of state parameter $\omega_X$, the ratio of the $X$-fluid pressure to its energy density. Here $\Omega_{DE0}$ = $\Omega_{X0}$ is the current value of the $X$-fluid dynamical dark energy density parameter. The current values of the three XCDM parametrization energy density parameters obey the energy budget equation $\Omega_{m0} + \Omega_{k0} + \Omega_{X0} = 1$. The $X$-fluid energy density decreases with time when $\omega_X > -1$. For the QSO-only data analyses we fix $H_0$; in the spatially-flat XCDM parametrization we take $\Omega_{m0}$ and $\omega_X$ to be the free parameters, while in the non-flat XCDM parametrization $\Omega_{m0}$, $\Omega_{k0}$, and $\omega_X$ are the free parameters. In the limit $\omega_X \rightarrow -1$ the XCDM parametrization reduces to the $\Lambda$CDM model.
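For concreteness, eq.\ (\ref{eq:friedLCDM}) for the $\Lambda$CDM and XCDM cases can be evaluated in a few lines of \textsc{python}. The following is a minimal illustrative sketch (the function name and default parameter values are our own choices and are not part of the analysis code used in this paper):
\begin{verbatim}
import numpy as np

def hubble(z, H0=70.0, Om0=0.3, Ok0=0.0, wX=-1.0):
    """Eq. (1): H(z) for the LCDM (wX = -1) and XCDM cases.

    The dark energy density parameter scales as
    (1 + z)^(3(1 + wX)); Ode0 follows from the energy
    budget constraint Om0 + Ok0 + Ode0 = 1.
    """
    Ode0 = 1.0 - Om0 - Ok0
    E2 = (Om0 * (1.0 + z)**3 + Ok0 * (1.0 + z)**2
          + Ode0 * (1.0 + z)**(3.0 * (1.0 + wX)))
    return H0 * np.sqrt(E2)
\end{verbatim}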
In the $\phi$CDM model \citep{PeeblesRatra1988, RatraPeebles1988, Pavlovetal2013} dynamical dark energy is a scalar field $\phi$.\footnote{Recent observational constraints on the $\phi$CDM model are discussed in \citet{Avsajanishvilietal2015}, \citet{SolaPeracaulaetal2018, SolaPercaulaetal2019}, \citet{Zhaietal2017}, \citet{Oobaetal2018c, Oobaetal2019}, \citet{ParkRatra2018, ParkRatra2019c, ParkRatra2020}, \citet{Sangwanetal2018}, \citet{Singhetal2019}, \citet{UrenaLopezRoy2020}, \citet{SinhaBanerjee2021}, and references therein.} Here the dynamical dark energy density parameter $\Omega_{DE}$ is determined by the scalar field potential energy density. In this paper we assume an inverse power law scalar field potential energy density
\begin{equation}
\label{eq:phiCDMV}
V(\phi) = \frac{1}{2}\kappa m_{p}^2 \phi^{-\alpha}.
\end{equation}
In this equation $m_{p}$ is the Planck mass, $\alpha$ is a positive parameter [$\Omega_{DE}$ = $\Omega_{\phi}(z, \alpha)$ is the scalar field dynamical dark energy density parameter], and the constant $\kappa$ is determined by using the shooting method to ensure that the current energy budget constraint $\Omega_{m0} + \Omega_{k0} + \Omega_{\phi}(z = 0, \alpha) = 1$ is satisfied.
For this potential energy density, the equations of motion for a spatially homogeneous scalar field and FLRW metric tensor are
\begin{align}
\label{field}
& \ddot{\phi} + 3\frac{\dot{a}}{a}\dot\phi - \frac{1}{2}\alpha \kappa m_{p}^2 \phi^{-\alpha - 1} = 0, \\
\label{friedpCDM}
& \left(\frac{\dot{a}}{a}\right)^2 = \frac{8 \uppi}{3 m_{p}^2}\left(\rho_m + \rho_{\phi}\right) - \frac{k}{a^2}.
\end{align}
Here $a$ is the scale factor, an overdot denotes a derivative with respect to time, $k$ is negative, zero, or positive for open, flat, or closed spatial geometry (corresponding to $\Omega_{k0} > 0$, $\Omega_{k0} = 0$, and $\Omega_{k0} < 0$, respectively), $\rho_m$ is the non-relativistic matter energy density, and the scalar field energy density is
\begin{equation}
\rho_{\phi} = \frac{m^2_p}{32\uppi}\left[\dot{\phi}^2 + \kappa m^2_p \phi^{-\alpha}\right].
\end{equation}
We numerically integrate eqs.\ (\ref{field}) and (\ref{friedpCDM}), compute $\rho_{\phi}$, and then compute $\Omega_{\phi}(z, \alpha)$ from
\begin{equation}
\Omega_{\phi}(z, \alpha) = \frac{8 \uppi \rho_{\phi}}{3 m^2_p H^2_0}.
\end{equation}
For the QSO-only data analyses we fix $H_0$; in the spatially-flat $\phi$CDM model we take $\Omega_{m0}$ and $\alpha$ to be the free parameters, while in the non-flat $\phi$CDM model $\Omega_{m0}$, $\Omega_{k0}$, and $\alpha$ are the free parameters. In the limit $\alpha\rightarrow0$ the $\phi$CDM model reduces to the $\Lambda$CDM model.
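Schematically, the shooting-method determination of $\kappa$ described above can be implemented as follows. This is a simplified illustrative sketch, not the code used for our analysis: we work in units with $m_p = 1$ and $H_0 = 1$, start the integration on the matter-era tracker solution, and assume a bracket for $\kappa$; all numerical choices here are our own.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values; units with m_p = 1 and H0 = 1.
H0, Om0, Ok0, alpha = 1.0, 0.3, 0.0, 1.0
target = 1.0 - Om0 - Ok0                 # required Omega_phi(z=0)
rho_m0 = 3.0 * H0**2 * Om0 / (8.0 * np.pi)

def rho_phi(phi, psi, kappa):
    # Scalar field energy density, eq. (5), with psi = dphi/dt.
    return (psi**2 + kappa * phi**(-alpha)) / (32.0 * np.pi)

def hubble2(x, phi, psi, kappa):
    # Friedmann equation (4) with x = ln a; -k/a^2 = Ok0 H0^2 e^{-2x}.
    a = np.exp(x)
    return (8.0 * np.pi / 3.0) * (rho_m0 * a**-3
            + rho_phi(phi, psi, kappa)) + Ok0 * H0**2 * a**-2

def rhs(x, y, kappa):
    # Equation of motion (3), rewritten with x = ln a as time variable.
    phi, psi = y
    H = np.sqrt(hubble2(x, phi, psi, kappa))
    return [psi / H,
            (-3.0 * psi + 0.5 * alpha * kappa * phi**(-alpha - 1.0)) / H]

def omega_phi_today(kappa, a_i=1e-4):
    # Start on the matter-era tracker phi = A t^n, n = 2/(alpha + 2),
    # where A follows from inserting phi = A t^n into eq. (3).
    n = 2.0 / (alpha + 2.0)
    A = (alpha * kappa / (2.0 * n * (n + 1.0)))**(1.0 / (alpha + 2.0))
    t_i = 2.0 / (3.0 * H0 * np.sqrt(Om0) * a_i**-1.5)  # matter era: H = 2/(3t)
    y0 = [A * t_i**n, n * A * t_i**(n - 1.0)]
    sol = solve_ivp(rhs, [np.log(a_i), 0.0], y0, args=(kappa,), rtol=1e-8)
    phi, psi = sol.y[:, -1]
    return 8.0 * np.pi * rho_phi(phi, psi, kappa) / (3.0 * H0**2)  # eq. (6)

# Shooting: bisect kappa (assumed bracket) until the z = 0 budget closes.
lo, hi = 1e-3, 1e3
for _ in range(60):
    mid = np.sqrt(lo * hi)
    lo, hi = (lo, mid) if omega_phi_today(mid) > target else (mid, hi)
kappa = np.sqrt(lo * hi)
\end{verbatim}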
\section{Data}
\label{sec:data}
We use three different Mg II QSO compilations, as well as BAO and $H(z)$ data. The Mg II QSO data sets are summarized in Table \ref{tab:samples}, which lists the number of QSOs in each sample and the covered redshift range. These data are listed in Table \ref{tab:MgQSOdata}, which gives, for each source, the name, $z$, measured QSO flux at 3000\,\AA\ $(F_{3000})$, and rest-frame time-delay $(\tau)$.
\begin{table}
\centering
\caption{Summary of the Mg II QSO data sets.}
\label{tab:samples}
\begin{threeparttable}
\begin{tabular}{l|cc}
\hline
Data set & Number & Redshift range \\
\hline
Mg II QSO-69 & $69$ & $[0.0033, 1.89]$\\
Mg II QSO-9 & $9$ & $[1.06703, 1.7496]$\\
Mg II QSO-78 & $78$ & $[0.0033, 1.89]$\\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{itemize}
\item[]{\bf Mg II QSO-69 sample}.
This sample includes the first 69 QSOs listed in Table \ref{tab:MgQSOdata}. These data are described in detail in \citet{Mary2020} and \citet{Michal2021}. The sample combines sources from the most recent Mg II Sloan Digital Sky Survey Reverberation Mapping data set \citep[SDSS-RM, 57 sources;][]{Homayouni2020}, from previous SDSS-RM results \citep[6 sources; ][where one source is included in the more recent SDSS-RM sample]{2016ApJ...818...30S}, several luminous quasars, in particular CTS 252 \citep{2018ApJ...865...56L}, CTS C30.10 \citep{2019ApJ...880...46C}, HE 0413-4031 \citep{Michal2020}, and HE 0435-4312 \citep{Michal2021}, and two older \textit{International Ultraviolet Explorer (IUE)} measurements of the low-luminosity QSO NGC 4151 based on two separate campaigns in 1988 and 1991 \citep{2006ApJ...647..901M}\footnote{Since there were two campaigns, we keep both values of the rest-frame time delay. As the luminosity state changes over time, the rest-frame time-delay adjusts accordingly, $\tau\propto L^{1/2}$. The resulting virial black hole mass remains consistent within the uncertainties since the line width behaves as $\Delta V\propto L^{-1/4}$. For NGC 4151, the virial black hole mass is $M_{\rm BH}=(4.14 \pm 0.73) \times 10^7\,M_{\odot}$ \citep{2006ApJ...647..901M}.}. The redshift range of this sample is $0.0033 \leq z \leq 1.89$, while the 3000 {\AA} luminosity of QSOs in the Mg II QSO-69 sample covers four orders of magnitude, $42.83 \leq \log_{10}{(L_{3000} [{\rm erg\,s^{-1}}])} \leq 46.79$. Both the low- and high-luminosity sources are beneficial for better determining the $R-L$ relation. The Pearson correlation coefficient for the whole sample is $r=0.63$ with $p=5.60\times 10^{-9}$, while the Spearman correlation coefficient is $s=0.47$ with $p=4.52 \times 10^{-5}$, where $p$ is a two-sided $p$-value\footnote{The $p$-value relates to the hypothesis test whose null hypothesis is that the two data sets, $\tau$ and $L_{3000}$, are uncorrelated. The $p$-value then estimates the probability that two such uncorrelated data sets would yield a correlation coefficient at least as large as the one inferred here.}. The RMS intrinsic scatter reaches $\sigma_{\rm ext} \sim 0.30$ dex for the standard $R-L$ relation, but it drops for the highly-accreting subsample, especially for extended versions of the $R-L$ relation \citep{Michal2021}. The sample is relatively homogeneous, with $\sim 83\%$ of the sources coming from the most recent SDSS-RM sample and $\sim 9\%$ from the previous SDSS-RM sample. This means that for most of the sources a consistent approach was used to infer the time delays, mostly the JAVELIN method, which uses the damped random walk approach to fit the continuum light curve \citep{2009ApJ...698..895K,2010ApJ...721.1014M,2010ApJ...708..927K,2011ApJ...735...80Z,2013ApJ...765..106Z,2016ApJ...819..122Z}, while the remaining sources were typically analyzed by a combination of other methods, including standard interpolation and discrete cross-correlation functions, the $\chi^2$ method, and measures of data randomness/regularity \citep[see][ for overviews and applications to data]{czerny2013,2017ApJ...844..146C,2019AN....340..577Z,Michal2021}.
\item[]{\bf Mg II QSO-9 sample}. This sample includes the last 9 QSOs listed in Table \ref{tab:MgQSOdata}. These data are from \citet{Zhefu2021}, who measured 9 Mg II lags using the first five years of data from the Dark Energy Survey \citep[DES, e.g.,][]{Flaugher2015} -- Australian DES \citep[OzDES, e.g.,][]{Lidman2020} reverberation mapping project. The sample spans the redshift range $\sim 1.1$--$1.7$. The lags are consistent with both the H$\beta$ $R-L$ relation determined by \citet{Bentz2013} and the Mg II $R-L$ relation of \citet{Homayouni2020}.
\item[]{\bf Mg II QSO-78 sample.} This sample is the union of the Mg II QSO-69 and Mg II QSO-9 samples. For the combined sample, the Pearson correlation coefficient between $\tau$ and $L_{3000}$ is $r=0.63$ with $p=6.68\times 10^{-10}$ and the Spearman correlation coefficient is $s=0.50$ with $p=4.06\times 10^{-6}$, so the $R-L$ correlation is slightly strengthened by adding the Mg II QSO-9 sample to the Mg II QSO-69 sample (these coefficients can be computed as in the sketch following this list). After the sample enlargement, the RMS scatter decreases only by $\sim 1.68\%$, from $\sim 0.30$ dex to $\sim 0.29$ dex.
\end{itemize}
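The correlation coefficients and two-sided $p$-values quoted above can be reproduced with standard statistical tools. The following minimal sketch uses \textsc{scipy}; the input file name is a hypothetical placeholder for the data tabulated in the Appendix:
\begin{verbatim}
import numpy as np
from scipy import stats

# Hypothetical two-column file holding log10(tau/day) and
# log10(L3000 / erg s^-1) for the sources tabulated in the Appendix.
log_tau, log_L = np.loadtxt("mgii_qso78.txt", unpack=True)

r, p_r = stats.pearsonr(log_tau, log_L)    # two-sided p-value
s, p_s = stats.spearmanr(log_tau, log_L)   # two-sided p-value
print(f"Pearson r = {r:.2f} (p = {p_r:.2e}), "
      f"Spearman s = {s:.2f} (p = {p_s:.2e})")
\end{verbatim}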
In this paper, we also use 31 $H(z)$ and 11 BAO measurements. The $H(z)$ data redshift range is $0.07 \leq z \leq 1.965$ and the BAO data redshift range is $0.0106 \leq z \leq 2.33$. The $H(z)$ data are given in Table 2 of \cite{Ryanetal2018} and the BAO data are listed in Table 1 of \cite{KhadkaRatra2021}. Cosmological constraints obtained from the Mg II QSO samples are consistent with those obtained from the BAO + $H(z)$ data, so we also jointly analyze the Mg II QSO-78 and BAO + $H(z)$ data sets.
\section{Methods}
\label{sec:methods}
\begin{figure}
\includegraphics[width=\linewidth,right]{tau_L_corr_78_final3.pdf}\par
\caption{$R-L$ correlation for 78 Mg II QSOs using the flat $\Lambda$CDM model. Black crosses show the Mg II QSO-69 sample and red crosses show the Mg II QSO-9 sample. Blue solid line is the $R-L$ correlation with best-fit parameter values for the QSO-78 data set. Blue and light gray shaded regions are the $1\sigma$ and $3\sigma$ confidence regions around the best-fit $R-L$ relation accounting only for the uncertainties in $\beta$ and $\gamma$.}
\label{fig:tau_L}
\end{figure}
The $R-L$ correlation relates the rest-frame time-delay of the Mg II broad line and the monochromatic luminosity of the QSO. For the sources used in this paper, this correlation can be seen in Fig.\ \ref{fig:tau_L}. The $R-L$ relation is usually expressed in the form
\begin{equation}
\label{eq:corr}
\log \left({\frac{\tau} {\rm day}}\right) = \beta + \gamma \log\left({\frac{L_{3000}}{10^{44}\,{\rm erg\,s^{-1}}}}\right),
\end{equation}
where $\log$ = $\log_{10}$, $L_{3000}$ is the monochromatic luminosity of the quasar at 3000 {\AA} in the rest frame in units of ${\rm erg\,s^{-1}}$, and $\tau$ is the rest-frame time-delay of the Mg II line in units of days. Here $\beta$ and $\gamma$ are the free parameters of the correlation model and need to be determined from the data.
The measured quantities are the time delay and the quasar flux. Expressing the luminosity in terms of the flux we obtain
\begin{equation}
\label{eq:corr_f}
\log \left({\frac{\tau} {\rm day}}\right) = \beta + \gamma \log\left({\frac{F_{3000}}{10^{44}\,{\rm erg\,cm^{-2}\,s^{-1}}}}\right) + \gamma\log(4\pi) + 2\gamma\log\left(\frac{D_L}{\rm cm}\right),
\end{equation}
where $F_{3000}$ is the measured quasar flux at 3000 {\AA} in units of ${\rm erg\,cm^{-2}\,s^{-1}}$ and $D_L(z,p)$ is the luminosity distance in units of cm, which is a function of $z$ and the cosmological parameters $p$ of the cosmological model under study (see Sec.\ 2). The luminosity distance is
\begin{equation}
\label{eq:DM}
\frac{H_0\sqrt{\left|\Omega_{k0}\right|}\,D_L(z, p)}{(1+z)} =
\begin{cases}
\sinh\left[g(z)\right] & \text{if}\ \Omega_{k0} > 0, \\
\vspace{1mm}
g(z) & \text{if}\ \Omega_{k0} = 0,\\
\vspace{1mm}
\sin\left[g(z)\right] & \text{if}\ \Omega_{k0} < 0,
\end{cases}
\end{equation}
where
\begin{equation}
\label{eq:XCDM}
g(z) = H_0\sqrt{\left|\Omega_{k0}\right|}\int^z_0 \frac{dz'}{H(z')},
\end{equation}
and $H(z)$ is the Hubble parameter which is given in Sec.\ 2 for each cosmological model.
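A minimal illustrative sketch of eqs.\ (\ref{eq:corr_f})--(\ref{eq:XCDM}), for the $\Lambda$CDM/XCDM form of $H(z)$ and with the speed of light restored in $D_L$, is as follows (function names and default parameter values are our own choices, not part of the analysis code):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C = 299792.458        # speed of light [km/s]
MPC_CM = 3.0857e24    # 1 Mpc in cm

def lum_dist_cm(z, H0=70.0, Om0=0.3, Ok0=0.0, wX=-1.0):
    # Dimensionless E(z) = H(z)/H0 for the LCDM/XCDM cases of eq. (1).
    E = lambda zp: np.sqrt(Om0*(1+zp)**3 + Ok0*(1+zp)**2
                           + (1-Om0-Ok0)*(1+zp)**(3*(1+wX)))
    # I = integral dz'/E(z'), so g(z) of eq. (10) is sqrt(|Ok0|)*I.
    I, _ = quad(lambda zp: 1.0/E(zp), 0.0, z)
    if Ok0 > 0:
        S = np.sinh(np.sqrt(Ok0)*I) / np.sqrt(Ok0)
    elif Ok0 < 0:
        S = np.sin(np.sqrt(-Ok0)*I) / np.sqrt(-Ok0)
    else:
        S = I
    return (1+z) * (C/H0) * S * MPC_CM      # eq. (9), c restored

def log_tau_pred(z, F3000, beta, gamma, **cosmo):
    # Predicted log10(tau/day), eq. (8); F3000 in erg cm^-2 s^-1.
    DL = lum_dist_cm(z, **cosmo)
    return (beta + gamma*np.log10(F3000/1e44)
            + gamma*np.log10(4*np.pi) + 2*gamma*np.log10(DL))
\end{verbatim}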
In a given cosmological model, eqs.\ (\ref{eq:corr_f}) and (\ref{eq:DM}) can be used to predict the rest-frame time-delay of the Mg II line for a quasar at known redshift. We can then compare the predicted and observed time-delays by using the likelihood function \citep{Dago2005}
\begin{equation}
\label{eq:chi2}
\ln({\rm LF}) = -\frac{1}{2}\sum^{N}_{i = 1} \left[\frac{[\log(\tau^{\rm obs}_{X,i}) - \log(\tau^{\rm th}_{X,i})]^2}{s^2_i} + \ln(2\pi s^2_i)\right].
\end{equation}
Here $\ln$ = $\log_e$, and $\tau^{\rm th}_{X,i}(p)$ and $\tau^{\rm obs}_{X,i}$ are the predicted and observed time-delays at redshift $z_i$. The total variance is $s^2_i = \sigma^2_{\log{\tau_{\rm obs},i}} + \gamma^2 \sigma^2_{\log{F_{3000},i}} + \sigma_{\rm ext}^2$, where $\sigma_{\log{\tau_{\rm obs},i}}$ and $\sigma_{\log{F_{3000},i}}$ are the measurement errors on the observed time-delay $(\tau^{\rm obs}_{X,i})$ and the measured flux $(F_{3000})$, respectively, and $\sigma_{\rm ext}$ is the intrinsic dispersion of the $R-L$ relation.
QSO data alone cannot constrain $H_0$ because of the degeneracy between the correlation intercept parameter $\beta$ and $H_0$, so in this case we set $H_0$ to $70$ ${\rm km}\hspace{1mm}{\rm s}^{-1}{\rm Mpc}^{-1}$.
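For illustration, the likelihood function of eq.\ (\ref{eq:chi2}) takes only a few lines (a sketch that assumes the predicted $\log\tau$ values have been computed, e.g., with the \texttt{log\_tau\_pred} helper sketched above):
\begin{verbatim}
import numpy as np

def ln_LF(log_tau_obs, sig_log_tau, sig_log_F,
          log_tau_th, gamma, sigma_ext):
    # Eq. (11): Gaussian likelihood in log10(tau) with total variance
    # s_i^2 = sig_log_tau^2 + gamma^2 sig_log_F^2 + sigma_ext^2.
    s2 = sig_log_tau**2 + gamma**2 * sig_log_F**2 + sigma_ext**2
    return -0.5 * np.sum((log_tau_obs - log_tau_th)**2 / s2
                         + np.log(2.0 * np.pi * s2))
\end{verbatim}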
\begin{table}
\centering
\caption{Summary of the flat (top hat) prior parameter ranges; each prior is non-zero only over the range listed.}
\label{tab:prior}
\begin{threeparttable}
\begin{tabular}{l|c}
\hline
Parameter & Prior range \\
\hline
$\Omega_bh^2$ & $[0, 1]$ \\
$\Omega_ch^2$ & $[0, 1]$ \\
$\Omega_{m0}$ & $[0, 1]$ \\
$\Omega_{k0}$ & $[-2, 1]$ \\
$\omega_{X}$ & $[-5, 0.33]$ \\
$\alpha$ & $[0, 10]$ \\
$\sigma_{\rm ext}$ & $[0, 5]$ \\
$\beta$ & $[0, 10]$ \\
$\gamma$ & $[0, 5]$ \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
To determine cosmological model and $R-L$ parameter constraints from QSO-only data, we maximize the likelihood function given in eq.\ (\ref{eq:chi2}) and determine the best-fit values of all the free parameters and the corresponding uncertainties. The likelihood analysis for each data set and cosmological model is done using the Markov chain Monte Carlo (MCMC) method implemented in the \textsc{MontePython} code \citep{Brinckmann2019}. Convergence of the MCMC chains for each parameter is determined by using the Gelman-Rubin criterion $(R-1 < 0.05)$. For each free parameter we assume a top hat prior which is non-zero over the ranges given in Table \ref{tab:prior}.
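Purely as an illustration (our analysis uses \textsc{MontePython}, not the code below), the flat $\Lambda$CDM + $R-L$ posterior could be sampled with, e.g., the \texttt{emcee} package, reusing the \texttt{ln\_LF} and \texttt{log\_tau\_pred} sketches above; the data arrays are assumed to hold the Appendix measurements:
\begin{verbatim}
import numpy as np
import emcee

def ln_post(theta, z, log_tau_obs, sig_tau, log_F, sig_F):
    Om0, sigma_ext, beta, gamma = theta
    # Top hat priors, non-zero over the ranges of Table 2.
    if not (0 < Om0 < 1 and 0 < sigma_ext < 5
            and 0 < beta < 10 and 0 < gamma < 5):
        return -np.inf
    log_tau_th = np.array([log_tau_pred(zi, 10.0**lf, beta, gamma, Om0=Om0)
                           for zi, lf in zip(z, log_F)])
    return ln_LF(log_tau_obs, sig_tau, sig_F, log_tau_th, gamma, sigma_ext)

ndim, nwalkers = 4, 32
start = np.array([0.3, 0.3, 1.7, 0.3])   # rough starting point
p0 = start + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, ln_post,
                                args=(z, log_tau_obs, sig_tau, log_F, sig_F))
sampler.run_mcmc(p0, 5000)
\end{verbatim}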
To determine cosmological model parameter constraints from BAO + $H(z)$ data we use the method described in \cite{KhadkaRatra2021}. To determine cosmological model and $R-L$ relation parameter constraints from QSO + BAO + $H(z)$ data we maximize the sum of the ln likelihood function given in eq.\ (\ref{eq:chi2}) and the BAO + $H(z)$ ln likelihood function given in eqs.\ (12) and (13) of \cite{KhadkaRatra2021}.
For model comparisons, we compute the Akaike and Bayesian Information Criterion ($AIC$ and $BIC$) values,
\begin{align}
\label{eq:AIC}
AIC &= \chi^2_{\rm min} + 2d,\\
\label{eq:BIC}
BIC &= \chi^2_{\rm min} + d\ln{N},
\end{align}
where $\chi^2_{\rm min} = -2 \ln({\rm LF}_{\rm max})$. Here $N$ is the number of data points, $d$ is the number of free parameters, and the number of degrees of freedom is $dof = N - d$. $AIC$ and $BIC$ penalize free parameters, while $\chi^2_{\rm min}$ does not, with $BIC$ penalizing larger $d$ more severely than $AIC$ does when $N \gtrsim 7.4$, as is the case for all data sets we consider here. We also compute the differences, $\Delta AIC$ and $\Delta BIC$, with respect to the spatially-flat $\Lambda$CDM model $AIC$ and $BIC$ values. Positive $\Delta AIC$ or $\Delta BIC$ values indicate that the flat $\Lambda$CDM model is favored over the model under study; they provide weak, positive, or strong evidence for the flat $\Lambda$CDM model when they lie in $[0, 2]$, $(2, 6]$, or above 6, respectively. Negative $\Delta AIC$ or $\Delta BIC$ values indicate that the model under study is favored over the flat $\Lambda$CDM model.
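For example, for the flat $\Lambda$CDM fit to the Mg II QSO-78 data in Table \ref{tab:BFP}, $-2 \ln({\rm LF}_{\rm max}) = 30.16$ with $d = 4$ and $N = 78$, and a short helper reproduces the tabulated $AIC$ and $BIC$ values (up to rounding):
\begin{verbatim}
import numpy as np

def info_criteria(chi2_min, d, N):
    # Eqs. (12)-(13), with chi2_min = -2 ln(LF_max).
    return chi2_min + 2.0 * d, chi2_min + d * np.log(N)

aic, bic = info_criteria(30.16, d=4, N=78)
# -> 38.16 and 47.59, matching Table 3 up to rounding.
\end{verbatim}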
\begin{table*}
\centering
\small\addtolength{\tabcolsep}{-3.3pt}
\small
\caption{Unmarginalized one-dimensional best-fit parameters for Mg II QSO and BAO + $H(z)$ data sets. For each data set, $\Delta AIC$ and $\Delta BIC$ values are computed with respect to the $AIC$ and $BIC$ values of the flat $\Lambda$CDM model.}
\label{tab:BFP}
\begin{threeparttable}
\begin{tabular}{l|ccccccccccccccccc}
\hline
Model & Data set & $\Omega_{b}h^2$ & $\Omega_{c}h^2$& $\Omega_{\rm m0}$ & $\Omega_{\rm k0}$ & $\omega_{X}$ & $\alpha$ & $H_0$\tnote{a} & $\sigma_{\rm ext}$ & $\beta$ & $\gamma$ & $dof$ & $-2\ln({\rm LF}_{\rm max})$ & $AIC$ & $BIC$ & $\Delta AIC$ & $\Delta BIC$\\
\hline
& Mg II QSO-69 & - & -& 0.155 & - & - & - & - & 0.288 & 1.667 & 0.290 & 65 & 29.56 & 37.56 & 46.50 & - & -\\
Flat & Mg II QSO-78 & - & -& 0.138 & - & - & - &- & 0.281 & 1.666 & 0.283 & 74 & 30.16 & 38.16 & 47.58 & - & -\\
$\Lambda$CDM & Mg II QSO-9 & - & - & 0.804 & - & - & - &- & $0.207$ & 2.154 & 0.002 & 5 & $-0.874$ & 7.126 & 7.91 & - & -\\
& B+H\tnote{b} & 0.024 & 0.119 & 0.298 & - & - & - &69.119&-&-&-& 39 & 23.66&29.66&34.87 & - & -\\
& Q+B+H\tnote{c} & 0.024 & 0.119 & 0.300 & - & - & - &68.983&0.285&1.685&0.293& 115 & 53.96 & 63.96 & 77.90 & - & -\\
\hline
& Mg II QSO-69 & - & -& 0.357 & $-1.075$ & - &-&- & 0.274 & 1.612 & 0.364 & 64 & 23.50 & 33.50 & 44.67 & $-4.06$ & $-1.83$\\
Non-flat & Mg II QSO-78 & - & - & 0.391 & $-$1.119 & - & - &- & 0.270 & 1.623 & 0.354 & 73 & 25.40 & 35.40 & 47.18 & $-2.76$ & $-0.40$\\
$\Lambda$CDM & Mg II QSO-9 &- & -& 0.664 & $-$0.759 & - & - &- & 0.211 & 2.157 & 0.001 & 4 & $-$0.88 & 9.12 & 10.11 & 2.00 & 2.20\\
& B+H\tnote{b} & 0.025 & 0.114 & 0.294 & 0.021 & - & - &68.701&-&-&-&38&23.60&31.60&38.55 & 1.94 & 3.68\\
& Q+B+H\tnote{c} & 0.024 & 0.117 & 0.298 & 0.012 & - & - &68.667&0.278&1.679&0.291&114&53.988&65.98&82.70 & 2.02 & 4.80\\
\hline
& Mg II QSO-69 &- & -& 0.003 & - & $-$4.998 &-&- & 0.277 & 1.353 & 0.233 & 64 & 23.98 & 33.98 & 45.15 & $-3.58$ & $-1.35$\\
Flat & Mg II QSO-78 &- & -& 0.006 & - & $-$4.848 & - &- & 0.273 & 1.372 & 0.248 & 73 & 24.44 & 34.44 & 46.22 & $-3.72$ & $-1.36$\\
XCDM & Mg II QSO-9 &- & -& 0.021 & - & $-$2.683 & - &- & 0.213 & 2.154 & 0.007 & 4 & $-$0.88 & 9.12 & 10.11 & 2.00 & 2.20\\
& B+H\tnote{b} & 0.031 & 0.088 & 0.280 & - & $-$0.691 & - &65.036& - & - & -&38&19.66&27.66&34.61 & $-2.00$ & $-0.26$\\
& Q+B+H\tnote{c} & 0.030 & 0.089 & 0.280 & - & $-$0.705 & - &65.097& 0.282 & 1.678 & 0.295&114&50.26&62.26&78.98 & $-1.70$ & 1.08\\
\hline
& Mg II QSO-69 &- & -& 0.043 & $-$0.091 & $-$2.727 &-&- & 0.262 & 1.455 & 0.293 & 63 & 17.96 & 29.96 & 43.36 & $-7.60$ & $-3.14$\\
Non-flat & Mg II QSO-78 &- & - & 0.029 & $-$0.057 & $-$3.372 & - &- & 0.257 & 1.351 & 0.298 & 72 & 18.62 & 30.62 & 44.76 & $-7.54$ & $-2.82$\\
XCDM & Mg II QSO-9 &- & -& 0.044 & 0.505 & $-$0.953 & - &- & 0.211 & 2.152 & 0.002 & 3 & $-$0.88 & 11.12 & 12.30 & 4.00 & 4.39\\
& B+H\tnote{b} & 0.030 & 0.094 & 0.291 & $-$0.147 & $-$0.641 & - &65.204& - & - & -&37&18.34&28.34&37.03 & $-1.32$ & $2.16$\\
& Q+B+H\tnote{c} & 0.029 & 0.100 & 0.295 & $-$0.159 & $-$0.643 & - &65.264& 0.292 & 1.682 & 0.298&113&48.94&62.94& 82.45 & $-1.02$ & 4.55\\
\hline
& Mg II QSO-69 &- & -& 0.149 & - & - &9.150&- & 0.288 & 1.668 & 0.286 & 64 & 29.56 & 39.56 & 50.73 & 2.00 & 4.23\\
Flat & Mg II QSO-78 &- & -& 0.171 & - & - & 8.777 &- & 0.281 & 1.672 & 0.285 & 73 & 30.16 & 40.16 & 51.94 & 2.00 & 4.36\\
$\phi$CDM & Mg II QSO-9 &- & -& 0.377 & - & - & 7.795 &- & 0.208 & 2.148 & 0.003 & 4 & $-$0.88 & 9.12 & 10.11 & 2.00 & 2.20\\
& B+H\tnote{b} & 0.033 & 0.080 & 0.265 & - & - & 1.445 &65.272& - & - & -&38&19.56&27.56&34.51 & $-2.10$ & $-0.36$\\
& Q+B+H\tnote{c} & 0.031 & 0.086 & 0.272 & - & - & 1.212 &65.628&0.280&1.693&0.288& 114 & 50.12&62.12&78.84 & $-1.84$ & 0.94\\
\hline
& Mg II QSO-69 &- & -& 0.439 & $-$0.440 & - & 9.540 &- & 0.287 & 1.672 & 0.307 & 63 & 29.18 & 41.18 & 54.58 & 3.62 & 8.08\\
Non-flat & Mg II QSO-78 &- & - & 0.341 & $-$0.333 & - & 5.637 & - & 0.282 & 1.671 & 0.296 & 72 & 29.76 & 41.76 & 55.90 & 3.60 & 8.32\\
$\phi$CDM & Mg II QSO-9 &- & -& 0.879 & $-$0.185 & - & 7.644 &- & 0.212 & 2.155 & 0.001 & 3 & $-0.88$ & 11.12 & 12.30 & 4.00 & 4.39\\
& B+H\tnote{b} & 0.035 & 0.078 & 0.261 & $-$0.155 & - & 2.042 &65.720& - & - & -&37&18.16&28.16&36.85 & $-1.50$ & 1.98\\
& Q+B+H\tnote{c} & 0.033 & 0.082 & 0.265 & $-$0.160 & - & 1.902 &65.876& 0.284 & 1.682 & 0.297&113&48.72&62.72&82.23 & $-1.24$ & 4.33\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a]${\rm km}\hspace{1mm}{\rm s}^{-1}{\rm Mpc}^{-1}$. $H_0$ is set to $70$ ${\rm km}\hspace{1mm}{\rm s}^{-1}{\rm Mpc}^{-1}$ for the QSO-only data analyses.
\item[b]${\rm BAO}+H(z)$.
\item[c]Mg II QSO-78 + ${\rm BAO}+H(z)$.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\begin{sidewaystable*}
\begin{adjustbox}{angle=180}
\centering
\small\addtolength{\tabcolsep}{0.0pt}
\begin{threeparttable}
\caption{Marginalized one-dimensional best-fit parameters with 1$\sigma$ confidence intervals, or 1$\sigma$ or 2$\sigma$ limits, for the Mg II QSO and BAO + $H(z)$ data sets.}
\label{tab:1d_BFP2}
\setlength{\tabcolsep}{1.3mm}{
\begin{tabular}{lccccccccccccc}
\hline
Model & Data & $\Omega_{b}h^2$ & $\Omega_{c}h^2$& $\Omega_{m0}$ & $\Omega_{\Lambda}$\tnote{a} & $\Omega_{k0}$ & $\omega_{X}$ & $\alpha$ & $H_0$\tnote{b} & $\sigma_{\rm ext}$ & $\beta$ & $\gamma$ \\
\hline
Flat $\Lambda$CDM\ & Mg II QSO-69 &-&-& $0.240^{+0.450}_{-0.170}$ & $0.758^{+0.172}_{-0.448}$ & - & - & - &-& $0.301^{+0.024}_{-0.032}$ & $1.699^{+0.059}_{-0.059}$ & $0.300^{+0.049}_{-0.049}$\\
& Mg II QSO-78 &-&-& $0.270^{+0.400}_{-0.210}$ & $0.729^{+0.211}_{-0.399}$ & - & - & - &-& $0.292^{+0.022}_{-0.029}$ & $1.700^{+0.058}_{-0.058}$ & $0.297^{+0.046}_{-0.046}$\\
& Mg II QSO-9 &-&-& $> 0.088$ & $< 0.912$ & - & - & - &-& $0.257^{+0.113}_{-0.073}$ & $1.712^{+0.368}_{-0.732}$ & $0.296^{+0.414}_{-0.296}$\\
& BAO+H\tnote{c}& $0.024^{+0.003}_{-0.003}$ & $0.119^{+0.008}_{-0.008}$ & $0.299^{+0.015}_{-0.017}$ & - & - & - & - &$69.300^{+1.800}_{-1.800}$&-&-&-\\
& Q+B+H\tnote{d} & $0.024^{+0.003}_{-0.003}$ & $0.119^{+0.007}_{-0.008}$ & $0.299^{+0.015}_{-0.017}$ & - & - & - & - &$69.300^{+1.800}_{-1.800}$& $0.291^{+0.022}_{-0.029}$&$1.682^{+0.054}_{-0.054}$&$0.293^{+0.043}_{-0.043}$\\
\hline
Non-flat $\Lambda$CDM\ & Mg II QSO-69 &-&-& $0.681^{+0.219}_{-0.301}$ & $1.785^{+0.335}_{-0.985}$ & $-1.296^{+0.926}_{-0.684}$ & - &-&-& $0.297^{+0.025}_{-0.032}$ & $1.674^{+0.065}_{-0.065}$ & $0.324^{+0.052}_{-0.060}$\\
& Mg II QSO-78 &-&-& $0.726^{+0.153}_{-0.397}$ & $1.712^{+0.298}_{-1.122}$ & $-1.169^{+1.269}_{-0.511}$ & - &-&-& $0.289^{+0.023}_{-0.029}$ & $1.680^{+0.063}_{-0.063}$ & $0.317^{+0.048}_{-0.055}$\\
& Mg II QSO-9 &-&-& $> 0.126$ & $0.661^{+0.639}_{-0.660}$ & $> -1.51$ & - &-&-& $0.256^{+0.112}_{-0.076}$ & $1.678^{+0.412}_{-0.668}$ & $0.317^{+0.433}_{-0.277}$\\
& BAO+H\tnote{c}& $0.025^{+0.004}_{-0.004}$ & $0.113^{+0.019}_{-0.019}$ & $0.292^{+0.023}_{-0.023}$ & $0.667^{+0.093}_{-0.081}$ & $-0.014^{+0.075}_{-0.075}$ & - & - &$68.700^{+2.300}_{-2.300}$&-&-&-\\
& Q+B+H\tnote{d} & $0.025^{+0.004}_{-0.005}$ & $0.115^{+0.018}_{-0.018}$ & $0.294^{+0.023}_{-0.023}$ & $0.675^{+0.092}_{-0.079}$ & $0.031^{+0.094}_{-0.110}$ & - & - &$68.800^{+2.200}_{-2.200}$&$0.292^{+0.022}_{-0.029}$&$1.681^{+0.055}_{-0.055}$&$0.293^{+0.044}_{-0.044}$\\
\hline
Flat XCDM & Mg II QSO-69 &-&-& (< 0.496, 1$\sigma$) & - & - & $< -0.393$ & - &-& $0.298^{+0.025}_{-0.032}$ & $1.675^{+0.085}_{-0.109}$ & $0.297^{+0.049}_{-0.049}$\\
& Mg II QSO-78 &-&-& (< 0.500, 1$\sigma$) & - & - & $< -0.367$ & - &-& $0.291^{+0.024}_{-0.030}$ & $1.640^{+0.120}_{-0.074}$ & $0.294^{+0.046}_{-0.046}$\\
& Mg II QSO-9 &-&-& --- & - & - & --- & - &-& $0.261^{+0.113}_{-0.082}$ & $1.614^{+0.476}_{-0.624}$ & $0.294^{+0.046}_{-0.046}$\\
& BAO+H\tnote{c} & $0.030^{+0.005}_{-0.005}$ & $0.093^{+0.019}_{-0.017}$ & $0.282^{+0.021}_{-0.021}$ & - & - & $-0.744^{+0.140}_{-0.097}$ & - &$65.800^{+2.200}_{-2.500}$& - & - & -\\
&Q+B+H\tnote{d}& $0.030^{+0.004}_{-0.005}$ & $0.093^{+0.019}_{-0.016}$ & $0.283^{+0.023}_{-0.020}$ & - & - & $-0.750^{+0.150}_{-0.100}$ & - &$65.800^{+2.200}_{-2.600}$& $0.292^{+0.022}_{-0.029}$ & $1.680^{+0.055}_{-0.055}$ & $0.294^{+0.044}_{-0.044}$\\
\hline
Non-flat XCDM & Mg II QSO-69&-&- & $0.287^{+0.513}_{-0.087}$ & - & $-0.339^{+0.559}_{-0.681}$ & $-1.138^{+0.738}_{-2.362}$ & - &-& $0.297^{+0.025}_{-0.032}$ & $1.672^{+0.088}_{-0.107}$ & $0.318^{+0.051}_{-0.057}$\\
& Mg II QSO-78 &-&-& $0.373^{+0.407}_{-0.133}$ & - & $-0.303^{+0.523}_{-0.677}$ & $< 0.246$ & - &-& $0.289^{+0.023}_{-0.029}$ & $1.640^{+0.120}_{-0.079}$ & $0.314^{+0.048}_{-0.053}$\\
& Mg II QSO-9 &-&-& --- & - & $0.000^{+0.810}_{-0.540}$ & $-0.728^{+0.788}_{-2.262}$ & - &-& $0.256^{+0.111}_{-0.077}$ & $1.802^{+0.318}_{-0.702}$ & $0.197^{+0.493}_{-0.177}$\\
& BAO+H\tnote{c} & $0.029^{+0.005}_{-0.005}$ & $0.099^{+0.021}_{-0.021}$ & $0.293^{+0.027}_{-0.027}$ & - & $-0.120^{+0.130}_{-0.130}$ & $-0.693^{+0.130}_{-0.077}$ & - &$65.900^{+2.400}_{-2.400}$& - & - & -\\
& Q+B+H\tnote{d} & $0.029^{+0.005}_{-0.006}$ & $0.099^{+0.021}_{-0.021}$ & $0.293^{+0.027}_{-0.027}$ & - & $-0.120^{+0.130}_{-0.130}$ & $-0.700^{+0.140}_{-0.079}$ & - &$66.000^{+2.200}_{-2.500}$& $0.292^{+0.022}_{-0.029}$ & $1.682^{+0.055}_{-0.055}$ & $0.296^{+0.044}_{-0.044}$\\
\hline
Flat $\phi$CDM & Mg II QSO-69 &-&-& $0.264^{+0.406}_{-0.214}$ & - & - & - & --- &-& $0.301^{+0.025}_{-0.033}$ & $1.697^{+0.063}_{-0.057}$ & $0.299^{+0.049}_{-0.049}$\\
& Mg II QSO-78 &-&-& $0.276^{+0.394}_{-0.216}$ & - & - & - & --- &-& $0.293^{+0.022}_{-0.029}$ & $1.699^{+0.061}_{-0.055}$ & $0.296^{+0.046}_{-0.046}$\\
& Mg II QSO-9 &-&-& --- & - & - & - & --- &-& $0.247^{+0.106}_{-0.067}$ & $1.831^{+0.279}_{-0.651}$ & $0.167^{+0.443}_{-0.147}$\\
& BAO+H\tnote{c} & $0.032^{+0.006}_{-0.003}$ & $0.081^{+0.017}_{-0.017}$ & $0.266^{+0.023}_{-0.023}$ & - & - & - & $1.530^{+0.620}_{-0.850}$ &$65.100^{+2.100}_{-2.100}$& - & - & -\\
& Q+B+H\tnote{d} & $0.032^{+0.006}_{-0.003}$ & $0.081^{+0.018}_{-0.018}$ & $0.266^{+0.024}_{-0.024}$ & - & - & - & $1.510^{+0.620}_{-0.890}$ &$65.200^{+2.100}_{-2.100}$& $0.292^{+0.022}_{-0.029}$ & $1.680^{+0.055}_{-0.055}$ & $0.295^{+0.044}_{-0.044}$\\
\hline
Non-flat $\phi$CDM & Mg II QSO-69 &-&-& --- & - & $-0.009^{+0.399}_{-0.361}$ & - & --- &-& $0.301^{+0.025}_{-0.033}$ & $1.700^{+0.058}_{-0.058}$ & $0.301^{+0.049}_{-0.049}$\\
& Mg II QSO-78 &-&-& --- & - & $-0.011^{+0.401}_{-0.359}$ & - & --- &-& $0.292^{+0.022}_{-0.029}$ & $1.702^{+0.055}_{-0.055}$ & $0.298^{+0.045}_{-0.045}$\\
& Mg II QSO-9 &-&-& --- & - & $0.000^{+0.430}_{-0.330}$ & - & --- &-& $0.254^{+0.113}_{-0.074}$ & $1.793^{+0.317}_{-0.793}$ & $0.214^{+0.516}_{-0.194}$\\
& BAO+H\tnote{c} & $0.032^{+0.006}_{-0.004}$ & $0.085^{+0.017}_{-0.021}$ & $0.271^{+0.024}_{-0.028}$ & - & $-0.080^{+0.100}_{-0.100}$ & - & $1.660^{+0.670}_{-0.830}$ &$65.500^{+2.500}_{-2.500}$& - & - & -\\
& Q+B+H\tnote{d} & $0.032^{+0.007}_{-0.004}$ & $0.086^{+0.018}_{-0.022}$ & $0.272^{+0.024}_{-0.029}$ & - & $-0.090^{+0.100}_{-0.120}$ & - & $1.660^{+0.670}_{-0.850}$ &$65.600^{+2.200}_{-2.200}$& $0.292^{+0.022}_{-0.029}$ & $1.681^{+0.055}_{-0.055}$ & $0.295^{+0.044}_{-0.044}$\\
\hline
\end{tabular}}
\begin{tablenotes}
\item[a]In our analyses $\Omega_{\Lambda}$ is a derived parameter and in each case $\Omega_{\Lambda}$ chains are derived using the current energy budget equation $\Omega_{\Lambda}= 1-\Omega_{m0}-\Omega_{k0}$ (where $\Omega_{k0}=0$ in the flat $\Lambda$CDM model). From these chains, using the \textsc{python} package \textsc{getdist} \citep{Lewis_2019}, we determine best-fit values and uncertainties for $\Omega_{\Lambda}$. We also use this \textsc{python} package to plot the likelihoods and compute the best-fit values and uncertainties of the free parameters.
\item[b]${\rm km}\hspace{1mm}{\rm s}^{-1}{\rm Mpc}^{-1}$. $H_0$ is set to $70$ ${\rm km}\hspace{1mm}{\rm s}^{-1}{\rm Mpc}^{-1}$ for the QSO-only data analyses.
\item[c]BAO + $H(z)$.
\item[d]Mg II QSO-78 + BAO + $H(z)$.
\end{tablenotes}
\end{threeparttable}
\end{adjustbox}
\end{sidewaystable*}
\section{Results}
\label{sec:QSO}
\subsection{Mg II QSO-69, Mg II QSO-9, and Mg II QSO-78 data constraints}
\label{MgIIresults}
Results for the Mg II QSO-69, QSO-9, and QSO-78 data sets are given in Tables \ref{tab:BFP} and \ref{tab:1d_BFP2}. The unmarginalized best-fit parameter values are listed in Table \ref{tab:BFP} and the marginalized one-dimensional best-fit parameter values and limits are given in Table \ref{tab:1d_BFP2}. The one-dimensional likelihood distributions and the two-dimensional likelihood contours for the Mg II QSO-69 and Mg II QSO-78 data sets are shown in blue and olive, respectively, in Figs.\ 2--4 and corresponding plots for the Mg II QSO-9 data set are shown in blue in Figs.\ 5--7.
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FLCDM_Mg_78_BAO.pdf}\par
\includegraphics[width=\linewidth]{NLCDM_Mg_QSO_69_78.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-69 (blue), Mg II QSO-78 (olive), and BAO + $H(z)$ (red) data for all free parameters. Left panel shows the flat $\Lambda$CDM model. The black dotted vertical lines are the zero acceleration lines with currently accelerated cosmological expansion occurring to the left of the lines. Right panel shows the non-flat $\Lambda$CDM model. The black dotted sloping line in the $\Omega_{k0}-\Omega_{m0}$ subpanel is the zero acceleration line with currently accelerated cosmological expansion occurring to the lower left of the line. The black dashed horizontal and vertical lines in the $\Omega_{k0}$ subpanels correspond to $\Omega_{k0} = 0$.}
\label{fig:LCDM_QSO}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FXCDM_Mg_78_BAO.pdf}\par
\includegraphics[width=\linewidth]{NXCDM_Mg_78_BAO.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-69 (blue), Mg II QSO-78 (olive), and BAO + $H(z)$ (red) data for all free parameters. Left panel shows the flat XCDM parametrization. The black dotted curved line in the $\omega_X-\Omega_{m0}$ subpanel is the zero acceleration line with currently accelerated cosmological expansion occurring below the line and the black dashed straight lines correspond to the $\omega_X = -1$ $\Lambda$CDM model. Right panel shows the non-flat XCDM parametrization. The black dotted lines in the $\Omega_{k0}-\Omega_{m0}$, $\omega_X-\Omega_{m0}$, and $\omega_X-\Omega_{k0}$ subpanels are the zero acceleration lines with currently accelerated cosmological expansion occurring below the lines. Each of the three lines is computed with the third parameter set to the BAO + $H(z)$ data best-fit value given in Table 3. The black dashed straight lines correspond to the $\omega_X = -1$ $\Lambda$CDM model. The black dotted-dashed straight lines correspond to $\Omega_{k0} = 0$.}
\label{fig:XCDM_QSO}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FSCF_Mg_78_BAO.pdf}\par
\includegraphics[width=\linewidth]{NSCF_Mg_78_BAO.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-69 (blue), Mg II QSO-78 (olive), and BAO + $H(z)$ (red) data for all free parameters. The $\alpha = 0$ axes correspond to the $\Lambda$CDM model. Left panel shows the flat $\phi$CDM model. The black dotted curved line in the $\alpha - \Omega_{m0}$ subpanel is the zero acceleration line with currently accelerated cosmological expansion occurring to the left of the line. Right panel shows the non-flat $\phi$CDM model. The black dotted lines in the $\Omega_{k0}-\Omega_{m0}$, $\alpha-\Omega_{m0}$, and $\alpha-\Omega_{k0}$ subpanels are the zero acceleration lines with currently accelerated cosmological expansion occurring below the lines. Each of the three lines is computed with the third parameter set to the BAO + $H(z)$ data best-fit value given in Table 3. The black dashed straight lines correspond to $\Omega_{k0} = 0$.}
\label{fig:phiCDM_QSO}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FLCDM_Mg_QSO_osu.pdf}\par
\includegraphics[width=\linewidth]{NLCDM_Mg_QSO_osu.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-9 (blue), and BAO + $H(z)$ (red) data for all free parameters. Left panel shows the flat $\Lambda$CDM model. The black dotted vertical lines are the zero acceleration lines with currently accelerated cosmological expansion occurring to the left of the lines. Right panel shows the non-flat $\Lambda$CDM model. The black dotted sloping line in the $\Omega_{k0}-\Omega_{m0}$ subpanel is the zero acceleration line with currently accelerated cosmological expansion occurring to the lower left of the line. The black dashed horizontal and vertical lines in the $\Omega_{k0}$ subpanels correspond to $\Omega_{k0} = 0$.}
\label{fig:LCDM_QSO9}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FXCDM_Mg_QSO_osu.pdf}\par
\includegraphics[width=\linewidth]{NXCDM_Mg_QSO_osu.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-9 (blue), and BAO + $H(z)$ (red) data for all free parameters. Left panel shows the flat XCDM parametrization. The black dotted curved line in the $\omega_X-\Omega_{m0}$ subpanel is the zero acceleration line with currently accelerated cosmological expansion occurring below the line and the black dashed straight lines correspond to the $\omega_X = -1$ $\Lambda$CDM model. Right panel shows the non-flat XCDM parametrization. The black dotted lines in the $\Omega_{k0}-\Omega_{m0}$, $\omega_X-\Omega_{m0}$, and $\omega_X-\Omega_{k0}$ subpanels are the zero acceleration lines with currently accelerated cosmological expansion occurring below the lines. Each of the three lines is computed with the third parameter set to the BAO + $H(z)$ data best-fit value given in Table 3. The black dashed straight lines correspond to the $\omega_X = -1$ $\Lambda$CDM model. The black dotted-dashed straight lines correspond to $\Omega_{k0} = 0$.}
\label{fig:XCDM_QSO9}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FSCF_Mg_QSO_osu.pdf}\par
\includegraphics[width=\linewidth]{NSCF_Mg_QSO_osu.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-9 (blue), and BAO + $H(z)$ (red) data for all free parameters. The $\alpha = 0$ axes correspond to the $\Lambda$CDM model. Left panel shows the flat $\phi$CDM model. The black dotted curved line in the $\alpha - \Omega_{m0}$ subpanel is the zero acceleration line with currently accelerated cosmological expansion occurring to the left of the line. Right panel shows the non-flat $\phi$CDM model. The black dotted lines in the $\Omega_{k0}-\Omega_{m0}$, $\alpha-\Omega_{m0}$, and $\alpha-\Omega_{k0}$ subpanels are the zero acceleration lines with currently accelerated cosmological expansion occurring below the lines. Each of the three lines is computed with the third parameter set to the BAO + $H(z)$ data best-fit value given in Table 3. The black dashed straight lines correspond to $\Omega_{k0} = 0$.}
\label{fig:phiCDM_QSO9}
\end{figure*}
The Mg II QSO-9 data set is small and so constraints derived using these data have larger error bars than those determined from the QSO-69 data. From Table \ref{tab:1d_BFP2} and Figs.\ 2--7, we see that the QSO-9 and QSO-69 constraints are consistent and so it is reasonable to use the combined QSO-78 data to constrain parameters.
From Table \ref{tab:1d_BFP2} we see that the $R-L$ relation parameters $\beta$ and $\gamma$ for each data set, QSO-9, QSO-69, and QSO-78, have values that are independent of the cosmological model assumed in the analysis. This validates the basic assumption of the $R-L$ relation and means that these sources can be used as standardizable candles to constrain cosmological model parameters. For these three data sets, the best-fit values of $\beta$ are $\sim 1.7$ and the best-fit values of $\gamma$ are $\sim 0.3$.\footnote{The Mg II $R-L$ relation is thus shallower than the value predicted by the simple photoionization model ($\gamma = 0.5$). This can happen because the broad Mg II line is emitted towards the outer part of the BLR and it exhibits a weaker response to the continuum variation than do the Balmer emission lines \citep{guo2020}. In addition, the Mg II line is a resonance line that is mostly collisionally excited, while Balmer lines are recombination lines. This can qualitatively affect the slope of the $R-L$ relation for the Mg II line in comparison with the Balmer lines. However, \citet{Mary2020} and \citet{Michal2021} found that by separating the sample into low and high accretors, it is possible to recover the expected value in both cases, i.e. the slope increases from $\sim 0.3$. This result supports the existence of the $R-L$ correlation for Mg II QSOs, which is also consistent with the theoretical findings of \citet{guo2020}, who predict the existence of the global Mg II $R-L$ correlation, while the weaker response of Mg II to the continuum variations can affect the $R-L$ correlation slope for individual sources. Given that there is a significant Mg II QSO $R-L$ correlation, as long as there are no significant unaccounted-for errors, an $R-L$ relation slope $\sim 0.3$ (instead of $\sim 0.5$) does not invalidate the cosmological usage of Mg II QSOs.} Another free parameter of the $R-L$ relation is the intrinsic dispersion ($\sigma_{\rm ext}$). The minimum value of $\sigma_{\rm ext}$, $\sim 0.25$ dex, is obtained using the Mg II QSO-9 data set and the maximum value of $\sigma_{\rm ext}$, $\sim 0.3$ dex, is obtained using the Mg II QSO-69 data set.
For the combined Mg II QSO-78 data, $\sigma_{\rm ext} \sim 0.29$ dex. This is smaller than the $\sigma_{\rm ext} \sim 0.39$ dex for the best available gamma-ray burst data set of 118 standardizable-candle GRBs spanning $0.3399 \leq z \leq 8.2$ \citep{Khadkaetal2021} and a little larger than the $\sigma_{\rm ext} \sim 0.24$ dex for the best available QSO X-ray and UV flux data set of 1019 standardizable-candle QSOs spanning $0.009 \leq z \leq 1.479$ \citep{KhadkaRatra2021}.
From Figs.\ 2--4 we see that for the Mg II QSO-78 data set the likelihoods favor the part of cosmological model parameter space that is consistent with currently-accelerating cosmological expansion, with the non-flat $\phi$CDM model being somewhat of an outlier.
From Table \ref{tab:1d_BFP2}, for the Mg II QSO-69 data set, the minimum value of $\Omega_{m0}$, $0.240^{+0.450}_{-0.170}$, is obtained in the spatially-flat $\Lambda$CDM model and the maximum value of $\Omega_{m0}$, $0.681^{+0.219}_{-0.301}$, is obtained in the spatially non-flat $\Lambda$CDM model. These data cannot constrain $\Omega_{m0}$ in the flat XCDM parametrization or the non-flat $\phi$CDM model. For the Mg II QSO-9 data, the value of $\Omega_{m0}$ is determined to be $> 0.088$ and $> 0.126$, at $2\sigma$, in the flat and non-flat $\Lambda$CDM models respectively. These data cannot constrain $\Omega_{m0}$ in the four other models. For the Mg II QSO-78 data, the minimum value of $\Omega_{m0}$, $0.270^{+0.400}_{-0.210}$, is obtained in the flat $\Lambda$CDM model and the maximum value of $\Omega_{m0}$, $0.726^{+0.153}_{-0.397}$, is obtained in the non-flat $\Lambda$CDM model. These data cannot constrain $\Omega_{m0}$ in the flat XCDM parametrization or the non-flat $\phi$CDM model. All $\Omega_{m0}$ values obtained using these QSO data sets are consistent with those from BAO + $H(z)$ data or other well-established cosmological probes such as CMB anisotropy or Type Ia supernova measurements. In Fig.\ \ref{fig:Hubble_diagram} we plot the Hubble diagram of the 78 Mg II QSOs, which shows that the QSO Hubble diagram is consistent with that of the flat $\Lambda$CDM model with $\Omega_{m0} = 0.3$.
\begin{figure}
\includegraphics[width=\linewidth,right]{Hubble_diagram_QSO_78.pdf}\par
\caption{Hubble diagram of 78 Mg II QSOs in the best-fit flat $\Lambda$CDM model. Magenta solid line is the prediction for the best-fit flat $\Lambda$CDM model with $\Omega_{m0}=0.27$ from the Mg II QSO-78 data set. Black and red data points are the observed distance moduli and corresponding uncertainties for the Mg II QSO-69 and Mg II QSO-9 samples respectively in the best-fit QSO-78 flat $\Lambda$CDM model. The blue dotted line shows the standard flat $\Lambda$CDM model with $\Omega_{m0}=0.3$.}
\label{fig:Hubble_diagram}
\end{figure}
From Table \ref{tab:1d_BFP2} and Figs.\ 2--4, we see that currently-available Mg II QSO data provide at most only weak constraints on $\Omega_{\Lambda}$, $\Omega_{k0}$, $\omega_X$, and $\alpha$.\footnote{In the spatially non-flat $\phi$CDM model, $\Omega_{\phi}(z, \alpha)$ is obtained from the numerical solutions of the equations of motion and its current value always lies in the range $0 \leq \Omega_{\phi}(0, \alpha) \leq 1$. This restriction on $\Omega_{\phi}(0,\alpha)$ can be seen in the non-flat $\phi$CDM model plots in Figs.\ 4 and 7 in the form of straight-line contour boundaries in the $\Omega_{m0}-\Omega_{k0}$ subpanels.}
Table \ref{tab:BFP} lists, for all three QSO data sets, the values of $AIC$, $BIC$, and their differences, $\Delta AIC$ and $\Delta BIC$, with respect to the $AIC$ and $BIC$ values for the spatially-flat $\Lambda$CDM model. For the Mg II QSO-69 and Mg II QSO-78 data sets, from the $AIC$ and $BIC$ values, the most favored case is the non-flat XCDM parametrization while the non-flat $\phi$CDM model is the least favored one. For the Mg II QSO-9 data set, from the $AIC$ and $BIC$ values, the most favored case is the flat $\Lambda$CDM model while the non-flat XCDM parametrization and the non-flat $\phi$CDM model are the least favored. From the $\Delta AIC$ values, only in the non-flat XCDM parametrization do the Mg II QSO-69 and Mg II QSO-78 data sets provide strong evidence against the spatially-flat $\Lambda$CDM model. From the $\Delta BIC$ values, the Mg II QSO-69 and Mg II QSO-78 data sets provide strong evidence against only the non-flat $\phi$CDM model.
\subsection{BAO + $H(z)$ and Mg II QSO-78 + BAO + $H(z)$ data constraints}
\label{com_con}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FLCDM_Mg_78_BAO_1.pdf}\par
\includegraphics[width=\linewidth]{NLCDM_Mg_78_BAO_1.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-78 (gray), BAO + $H(z)$ (red), and Mg II QSO-78 + BAO + $H(z)$ (blue) data for all free parameters. Left panel shows the flat $\Lambda$CDM model and right panel shows the non-flat $\Lambda$CDM model. The black dashed straight lines in the right panel correspond to $\Omega_{k0} = 0$.}
\label{fig:LCDM_comb}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FXCDM_Mg_78_BAO_1.pdf}\par
\includegraphics[width=\linewidth]{NXCDM_Mg_78_BAO_1.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-78 (gray), BAO + $H(z)$ (red), and Mg II QSO-78 + BAO + $H(z)$ (blue) data for all free parameters. Left panel shows the flat XCDM parametrization. Right panel shows the non-flat XCDM parametrization. The black dashed straight lines in both panels correspond to the $\omega_X = -1$ $\Lambda$CDM models. The black dotted straight lines in the $\Omega_{k0}$ subpanels in the right panel correspond to $\Omega_{k0} = 0$.}
\label{fig:XCDM_comb}
\end{figure*}
\begin{figure*}
\begin{multicols}{2}
\includegraphics[width=\linewidth]{FSCF_Mg_78_BAO_1.pdf}\par
\includegraphics[width=\linewidth]{NSCF_Mg_78_BAO_1.pdf}\par
\end{multicols}
\caption{One-dimensional likelihood distributions and two-dimensional likelihood contours at 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence levels using Mg II QSO-78 (gray), BAO + $H(z)$ (red), and Mg II QSO-78 + BAO + $H(z)$ (blue) data for all free parameters. Left panel shows the flat $\phi$CDM model and right panel shows the non-flat $\phi$CDM model. The $\alpha = 0$ axes correspond to the $\Lambda$CDM models. The black dashed straight lines in the $\Omega_{k0}$ subpanels in the right panel correspond to $\Omega_{k0} = 0$.}
\label{fig:phiCDM_comb}
\end{figure*}
The BAO + $H(z)$ data results listed in Tables \ref{tab:BFP} and \ref{tab:1d_BFP2} are from \cite{KhadkaRatra2021} and are discussed in Sec.\ 5.3 of that paper. These BAO + $H(z)$ results are shown in red in Figs.\ 2--7 and 9--11. In this paper, we use these BAO + $H(z)$ results to compare with cosmological constraints obtained from the Mg II QSO data sets to see whether the Mg II QSO results are consistent or not with the BAO + $H(z)$ ones. This provides us with a qualitative idea of the consistency (inconsistency) between the Mg II QSO results and those obtained using better-established cosmological probes which favor $\Omega_{m0}\sim 0.3$.
In Figs.\ 2--4 we see that the cosmological constraints from QSO-78 data and those from BAO + $H(z)$ data are mutually consistent. It is therefore not unreasonable to jointly analyze these data. Since the Mg II QSO-78 data cosmological constraints are significantly less restrictive than those that follow from BAO + $H(z)$ data, adding the QSO-78 data to the mix will not significantly tighten the BAO + $H(z)$ cosmological constraints. Results from the Mg II QSO-78 + BAO + $H(z)$ data set are given in Tables \ref{tab:BFP} and \ref{tab:1d_BFP2}. The unmarginalized best-fit parameter values are listed in Table \ref{tab:BFP} and the one-dimensional marginalized best-fit parameter values and limits are given in Table \ref{tab:1d_BFP2}. Corresponding one-dimensional likelihood distributions and two-dimensional likelihood contours are plotted in blue in Figs.\ 9--11.
From Table \ref{tab:1d_BFP2}, the minimum value of $\Omega_bh^2$ is found to be $0.024^{+0.003}_{-0.003}$ and is obtained in the spatially-flat $\Lambda$CDM model while the maximum value of $\Omega_bh^2$ is $0.032^{+0.007}_{-0.004}$ in the spatially non-flat $\phi$CDM model. The minimum value of $\Omega_ch^2$ is $0.081^{+0.018}_{-0.018}$ and is obtained in the spatially-flat $\phi$CDM model while the maximum value of $\Omega_ch^2$ is found to be $0.119^{+0.007}_{-0.008}$ in the spatially-flat $\Lambda$CDM model. The minimum value of $\Omega_{m0}$ is $0.266^{+0.024}_{-0.024}$ in the spatially-flat $\phi$CDM model while the maximum value of $\Omega_{m0}$ is $0.299^{+0.015}_{-0.017}$ in the spatially-flat $\Lambda$CDM model. As expected, these results are almost identical to those obtained using BAO + $H(z)$ data.
From Table \ref{tab:1d_BFP2}, in the flat $\Lambda$CDM model, the value of $\Omega_{\Lambda}$ is $0.700^{+0.017}_{-0.015}$. In the non-flat $\Lambda$CDM model, the value of $\Omega_{\Lambda}$ is $0.675^{+0.092}_{-0.079}$.
For analyses that involve the BAO + $H(z)$ data, the Hubble constant $H_0$ is a free parameter. From the Mg II QSO-78 + BAO + $H(z)$ data, the minimum value of $H_0$ is $65.2 \pm 2.1$ ${\rm km}\hspace{1mm}{\rm s}^{-1}{\rm Mpc}^{-1}$ in the spatially-flat $\phi$CDM model while the maximum value of $H_0$ is $69.3 \pm 1.8$ ${\rm km}\hspace{1mm}{\rm s}^{-1}{\rm Mpc}^{-1}$ in the spatially-flat $\Lambda$CDM model.
From Table \ref{tab:1d_BFP2}, the values of the spatial curvature energy density parameter $\Omega_{k0}$ are $0.031^{+0.094}_{-0.110}$, $-0.120^{+0.130}_{-0.130}$, and $-0.090^{+0.100}_{-0.120}$ in the non-flat $\Lambda$CDM, XCDM, and $\phi$CDM model respectively. These are consistent with flat spatial hypersurfaces and also with mildly open or closed ones.
From Table \ref{tab:1d_BFP2}, in the flat XCDM parametrization, the value of the dynamical dark energy equation of state parameter ($\omega_X$) is $-0.750^{+0.150}_{-0.100}$ while in the non-flat XCDM parametrization $\omega_X$ is $-0.700^{+0.140}_{-0.079}$. In the flat $\phi$CDM model, the value of the scalar field potential energy density parameter $(\alpha)$ is $1.510^{+0.620}_{-0.890}$ while in the non-flat $\phi$CDM model $\alpha$ is $1.660^{+0.670}_{-0.850}$. In these four dynamical dark energy models, dynamical dark energy is favored at $1.7\sigma - 3.8\sigma$ statistical significance over the cosmological constant.
From Table \ref{tab:BFP}, from the $AIC$ and $BIC$ values, the most favored model is the flat $\phi$CDM model while the non-flat $\Lambda$CDM model is the least favored. From the $\Delta AIC$ values, all models are almost indistinguishable from the spatially-flat $\Lambda$CDM model. From the $\Delta BIC$ values, there is positive evidence in favor of the spatially-flat $\Lambda$CDM model over the non-flat $\Lambda$CDM, XCDM, and $\phi$CDM models.
\section{Conclusion}
\label{con}
In this paper, we use the $R-L$ relation to standardize Mg II QSOs. Analyses of different Mg II QSO data sets using six different cosmological dark energy models show that the $R-L$ relation parameters are model-independent and that the intrinsic dispersion of the $R-L$ relation for the whole Mg II QSO data set is $\sim 0.29$ dex which is not very large for only 78 QSOs. So, for the first time, we have shown that one can use the $R-L$ relation to standardize available Mg II QSOs and thus use them as a cosmological probe.
We determined constraints on cosmological model parameters using these Mg II QSO data and found that these constraints are significantly weaker than, and consistent with, those obtained using BAO + $H(z)$ data. In Fig.\ \ref{fig:Hubble_diagram} we show that the 78 Mg II QSOs have a Hubble diagram consistent with what is expected in the standard spatially-flat $\Lambda$CDM model with $\Omega_{m0} = 0.3$. This differs from the results of the QSO X-ray and UV flux data compiled by \citet{RisalitiLusso2019} and \citet{Lussoetal2020}.\footnote{\citet{KhadkaRatra2021} found that only about half of the \citet{Lussoetal2020} QSO flux sources, about 1000 QSOs at $z \lesssim 1.5$, were standardizable and that cosmological constraints from these QSOs were consistent with what is expected in the standard $\Lambda$CDM model.}
The constraints obtained from the joint analyses of Mg II QSO data and the BAO + $H(z)$ measurements are consistent with the current standard spatially-flat $\Lambda$CDM model but also do not rule out slight spatial curvature. These data weakly favor dynamical dark energy over the cosmological constant.
The current Mg II QSO data set contains only 78 sources and covers the redshift range $0.0033 \leq z \leq 1.89$. Future detections of significant time-delays of the BLR emission of Mg II QSOs will increase the number of sources over a larger redshift extent, which will further constrain the Mg II QSO $R-L$ relation, in particular its slope. A large increase of suitable sources is expected from the Rubin Observatory Legacy Survey of Space and Time that will monitor about 10 million quasars in six photometric bands during its 10-year lifetime. We hope that such an improved data set will soon provide tighter cosmological constraints, as well as allow for a comparison with constraints from QSO X-ray and UV flux measurements which currently are exhibiting some tension with standard flat $\Lambda$CDM model expectations.
\section{ACKNOWLEDGEMENTS}
This research was supported in part by US DOE grant DE-SC0011840, US NSF grant No.\ 161553, and by the Polish Funding Agency National Science Centre, project 2017/26/A/ST9/00756 (MAESTRO 9). Part of the computation for this project was performed on the Beocat Research Cluster at Kansas State University. Time delays for quasars CTS C30.10, HE 0413-4031, and HE 0435-4312 were determined with the SALT telescope, and Polish participation in SALT is funded by grant No.\ MNiSW DIR/WK/2016/07.
\section*{Data availability}
The data analysed in this article are listed in Table \ref{tab:MgQSOdata} of this paper.
\bibliographystyle{mnras}
\section{Two-dimensional channel models}\label{sec:background}
As in \cite{molkaraie2013information} we consider the $2$-D $(1,\infty)$ run-length limited
constrained channel. The $2$-D $(1,\infty)$ run-length limited constraint implies that no two
horizontally or vertically adjacent bits on a $2$-D lattice may both be equal to $1$.
An example is given below:
\begin{align}
{\footnotesize
\begin{array}{ccccc}
\cdots & \cdots & \cdots & \cdots & \cdots \\
\cdots & 0 & 1 & 0 & \cdots \\
\cdots & 0 & 0 & 1 & \cdots \\
\cdots & 0 & 1 & 0 & \cdots \\
\cdots & \cdots & \cdots & \cdots & \cdots
\end{array}\nonumber}
\end{align}
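In code, the constraint is a one-line check; the following Python sketch (our own illustration) tests whether a given binary array is admissible:
\begin{verbatim}
import numpy as np

def satisfies_rll(x):
    # True iff no two horizontally or vertically adjacent bits are both 1
    x = np.asarray(x, dtype=int)
    horiz = np.any(x[:, :-1] & x[:, 1:])
    vert = np.any(x[:-1, :] & x[1:, :])
    return not (horiz or vert)

print(satisfies_rll([[0, 1, 0], [0, 0, 1], [0, 1, 0]]))  # True
\end{verbatim}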
This channel can be modelled as a probabilistic graphical model (PGM). A PGM is a probabilistic
model which {\em factorizes} according to the structure of an underlying graph $\G = \{\V, \E\}$,
with vertex set $\V$ and edge set $\E$. In this article we will focus on square lattice graphical
models with pair-wise interactions, see Figure~\ref{fig:pgm}.
\begin{figure}[h!]
\centering
\resizebox{0.29\textwidth}{!}{%
\input{pgmfig}
}
\caption{$M \times M$ square lattice graphical model with pair-wise interactions. The nodes
correspond to random variables $x_{\ell,j}$ and the edges encodes the interactions $\psi
(x_{\ell,j},x_{m,n} )$.}\label{fig:pgm}
\end{figure}
That means that the joint probability mass function (\pmf) of the set of random variables, $\xV
\eqdef \{x_{1,1},\ldots,x_{1,M},x_{2,1},\ldots,x_{M,M} \}$, can be represented as a product of
factors over the pairs of variables in the graph:
\begin{equation}
p( \xV ) = \frac{1}{Z} \prod_{(\ell j,m n) \in \E} \psi (x_{\ell, j}, x_{m, n}).
\label{eq:pfactors}
\end{equation}
Here, $Z$---the partition function---is given by
\begin{equation}
Z = \sum_{\xV} \prod_{(\ell j,m n) \in \E} \psi (x_{\ell, j}, x_{m, n}),
\label{eq:Z}
\end{equation}
and $\psi (x_{\ell, j}, x_{m, n})$ denotes the so-called potential function encoding the pairwise
interaction between $x_{\ell, j}$ and $x_{m, n}$. For a more in-depth exposition of graphical models
we refer the reader to \cite{koller2009probabilistic}.
\subsection{Constrained channels and \pgm}
The noiseless $2$-D $(1,\infty)$ run-length limited constrained channel can be described by a square
lattice graphical model as in Figure~\ref{fig:pgm}, with binary variables $x_{\ell,j} \in \{0,1\}$
and pair-wise interactions between adjacent variables. Defining the factors as
\begin{equation}
\psi (x_{\ell,j}, x_{m,n}) =
\begin{cases}
0, & \text{if } x_{\ell,j} = x_{m,n} = 1, \\
1, & \text{otherwise,}
\end{cases}
\label{eq:interaction}
\end{equation}
results in a joint \pmf given by
\begin{align}
p( \xV ) = \frac{1}{Z} \prod_{(\ell j,m n) \in \E} \psi (x_{\ell, j}, x_{m, n}),
\end{align}
where the partition function $Z$ is the number of satisfying configurations or, equivalently, the cardinality of the support of $p(\xV)$. For a channel of dimension $M \times M$ we can write the finite-size noiseless capacity as
\begin{align}
\label{eq:bkg:capacity}
C_M = \frac{1}{M^2} \log_2{Z}.
\end{align}
Hence, to compute the capacity of the channel we need to compute the partition function
$Z$. Unfortunately, calculating $Z$ exactly is in general computationally intractable.
This means that we need a way to approximate the partition function. Note
that for this particular model, known upper and lower bounds of the infinite-size noiseless capacity,
$M \to \infty$, agree on more than eight decimal digits \cite{kato1999capacity,
nagy2000capacity}. However, our proposed method is applicable in the finite-size case, as well as to other models where no tight
bounds are known.
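For small $M$ the partition function can in fact be computed exactly by a transfer matrix acting on admissible column configurations; the Python sketch below (our own illustration, feasible only for moderate $M$ since the number of admissible columns grows exponentially) makes the quantity in \eqref{eq:bkg:capacity} concrete and serves as a ground truth for the methods discussed later:
\begin{verbatim}
import numpy as np
from itertools import product

def capacity_exact(M):
    """Exact finite-size capacity C_M of the 2-D (1,inf) RLL channel."""
    # admissible single columns: no two vertically adjacent ones
    cols = [c for c in product((0, 1), repeat=M)
            if not any(a & b for a, b in zip(c, c[1:]))]
    K = len(cols)
    T = np.zeros((K, K))
    for i, c1 in enumerate(cols):
        for j, c2 in enumerate(cols):
            # between-column constraint: no two horizontally adjacent ones
            T[i, j] = 0.0 if any(a & b for a, b in zip(c1, c2)) else 1.0
    v = np.ones(K)
    for _ in range(M - 1):
        v = T @ v
    return np.log2(v.sum()) / M**2  # log2(Z) / M^2

print(capacity_exact(10))  # close to 0.6082
\end{verbatim}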
\subsection{High-dimensional undirected chains}\label{sec:hmm}
In the previous section we described how we can calculate the noiseless capacity for $2$-D channel models by casting the problem as a partition function estimation problem in the \pgm framework. In our running example the corresponding graph is the $M \times M$ square lattice \pgm
depicted in Figure~\ref{fig:pgm}. We now show how we can turn these models into high-dimensional undirected chains
by introducing a specific new set of variables. We will see that this idea, although simple, is a
key enabler of our proposed algorithm.
We define
$\x_{k}$ to be the $M$-dimensional variable corresponding to all the original variables in column $k$, \ie
\begin{align}
\x_{k} = \{ x_{1,k},\ldots,x_{M,k} \}, \qquad k=1,\ldots,M.
\end{align}
The resulting graphical model in the $\x_{k}$'s will be an undirected chain with joint \pmf given by
\begin{align}
p(\xV) = \frac{1}{Z} \prod_{k=1}^M \bphi (\x_{k}) \prod_{k=2}^M \bpsi( \x_{k}, \x_{k-1}),
\end{align}
where the partition function $Z$ is the same as for the original model and the $\bphi (\x_{k})$'s and $\bpsi( \x_{k}, \x_{k-1})$'s are the in-column and between-column interaction potentials, respectively. In terms of the original factors of the $2$-D $(1,\infty)$ run-length limited constrained channel model we get
\begin{subequations}
\label{eq:psiDef}
\begin{align}
\bphi (\x_{k}) &= \prod_{j=1}^{M-1} \psi(x_{j+1,k}, x_{j,k}), \label{eq:inrow}\\
\bm{\psi}( \x_{k}, \x_{k-1}) &= \prod_{j = 1}^M \psi (x_{j,k}, x_{j,k-1}). \label{eq:betweenrow}
\end{align}
\end{subequations}
We illustrate this choice of auxiliary variables and the resulting undirected chain in Figure~\ref{fig:hmm}.
\begin{figure}[h!]
\centering
\subfloat[$M \times M$ square lattice \pgm]{
\resizebox{0.29\textwidth}{!}{%
\input{rowfig}
\label{fig:hmmfig_a}
}
}\\
\subfloat[Corresponding $M$-dimensional chain]{
\resizebox{0.29\textwidth}{!}{%
\input{hmmfig}
\label{fig:hmmfig_b}
}
}
\caption{$M \times M$ square lattice graphical model converted to an $M$-dimensional undirected chain model.}\label{fig:hmm}
\end{figure}
This transformation of the \pgm is a key enabler for the partition function estimation algorithm we propose in the subsequent section.
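A direct Python transcription of the column potentials \eqref{eq:psiDef} (our own sketch) reads:
\begin{verbatim}
def psi(a, b):
    # pair-wise (1,inf) RLL interaction: zero iff a = b = 1
    return 0.0 if (a == 1 and b == 1) else 1.0

def phi_col(xk):
    # in-column potential: product of psi over vertically adjacent pairs
    out = 1.0
    for a, b in zip(xk[1:], xk[:-1]):
        out *= psi(a, b)
    return out

def psi_col(xk, xkm1):
    # between-column potential: product of psi over row-wise pairs
    out = 1.0
    for a, b in zip(xk, xkm1):
        out *= psi(a, b)
    return out
\end{verbatim}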
\section{Conclusions}\label{sec:conclusions}
We have introduced an \smc method to compute the noiseless capacity of two-dimensional channel models. The proposed algorithm was shown to improve upon a
state-of-the-art Monte Carlo estimation method by more than an order-of-magnitude. Furthermore, while this improvement was obtained
using a sequential implementation, the \smc method is easily parallelizable over the particles
(which is not the case for the \mcmc-based tree sampler), offering further improvements by making use of modern computational architectures. This gain is of significant importance because the running time can be on the order of days for realistic scenarios.
Extensions to calculate the information rate of noisy $2$-D source/channel
models by the method proposed in \cite{molkaraie2013information} are straightforward.
\section{Experiments}\label{sec:experiments}
We compare our algorithm to the state-of-the-art Monte Carlo approximation algorithm proposed in \cite{molkaraie2013information} on the same example that they consider as explained in Section \ref{sec:background}. Since the key enabler to the algorithm proposed in \cite{molkaraie2013information} is tree sampling according to \cite{HamzeF:2004}---a specific type of blocked Gibbs sampling---we will in the sequel refer to this algorithm as the {\em tree sampler}. All results are compared versus average wall-clock execution time. We run each algorithm $10$ times independently to estimate error bars as well as mean-squared-errors (MSE) compared to the true value (computed using a long run of the tree sampler).
For the \mcmc-based tree sampler, we use a burn-in of $10 \%$ of the generated samples when estimating the capacity.
The tree sampler actually gives two estimates of the capacity at each iteration; we use the average of these two when comparing to the \smc algorithm.
Consider first a channel with dimension $M=10$. Figure~\ref{fig:10x10est} shows the results of both algorithms, with error bars from $10$ independent runs. The rightmost data point corresponds to approximately $20$k iterations/particles. Both algorithms converge to the value $C_{10} \approx 0.6082$. However, the \smc algorithm is clearly more efficient, with less error for a fixed computation time. We estimated the true value by running $10$ independent tree samplers for $100$k iterations, removing the burn-in and taking the mean as our estimate.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{10x10est}
\caption{Estimates of the capacity $C_{10}$, with error bars, based on $10$ independent runs of our proposed \smc-based method and the tree sampler \cite{molkaraie2013information}. Plotted versus wall-clock time in log-scale. Note that this is also an upper bound on the infinite-size capacity, \ie $C_M \geq C_{\infty} \approx 0.5879$.}\label{fig:10x10est}
\end{figure}
The estimated true value was subsequently used to calculate the MSE as displayed in Figure~\ref{fig:10x10err}. The central limit theorem for the \smc sampler (see \eqref{eq:smc:clt}) tells us that the error should decrease at a rate of $1/N$, which is supported by this experiment. Furthermore, we can see that the \smc algorithm on average gives an order of magnitude more accurate estimate than the tree sampler for a fixed computation time.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{10x10error}
\caption{Mean-squared-error of the capacity $C_{10}$ estimates based on $10$ independent runs of our proposed \smc-based method and the tree sampler \cite{molkaraie2013information}. Plotted versus wall-clock time in log-log-scale.}\label{fig:10x10err}
\end{figure}
In our second example we scale up the model to $M=60$, \ie a total of $3600$ nodes as opposed to $100$ in the previous example. The basic tree sampler performs poorly on this large model with very slow mixing and convergence. To remedy this problem \cite{molkaraie2013information} propose to aggregate every $W$ columns in the tree sampler and sample these exactly by simple enumeration, resulting in further blocking of the underlying Gibbs sampler. However, this results in an algorithm with a computational complexity exponential in $W$ \cite{molkaraie2013information}. The same strategy can be applied to our algorithm and we compare the tree sampler and \smc for widths $W=1$ and $3$.
There seems to be no gain in increasing the width higher than this for either method.
The resulting MSEs\footnote{For this model
the basic tree sampler converges too slowly and the tree sampler with $W=3$ was too computationally demanding to provide an accurate estimate
of the ``true'' value. For this reason, we estimate the true value by averaging $10$ independent runs of \smc with $N = 200$k.}
based on $10$ independent runs of the tree sampler and the \smc algorithm are presented in Figure~\ref{fig:60x60err}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{60x60error}
\caption{Mean-squared-error of the capacity $C_{60}$ estimates based on $10$ independent runs of our proposed \smc-based method and the tree sampler \cite{molkaraie2013information} for strip widths $1$ (standard) and $3$ respectively. Plotted versus wall-clock time in log-log-scale.}\label{fig:60x60err}
\end{figure}
As we can see the basic tree sampler converges very slowly, in line with results from \cite{molkaraie2013information}. On the other hand, our proposed \smc sampling method performs very well,
even with $W=1$, and on average it has more than an order-of-magnitude smaller error than the tree sampler with $W=3$. In comparing the two different \smc methods there seems to be no apparent gain in increasing the width of the strips added at each iteration in this case.
\section{Introduction}\label{sec:intro}
With ever increasing demands on storage system capacity and reliability there has been increasing interest in page-oriented storage solutions. For these types of systems variations of two-dimensional constraints can be imposed to help with, amongst other things, timing control and reduced intersymbol interference \cite{immink2004codes}. This has sparked an interest in analyzing information theoretic properties of two-dimensional channel models
for use in \eg holographic data storage \cite{siegel2006information}.
Our main contribution is a new algorithm, based on sequential Monte Carlo (\smc) methods, for numerically estimating the capacity of two-dimensional channels. We show how we can utilize structure in the model to sample the auxiliary target distributions in the \smc algorithm exactly. The focus in this paper is on computing the noiseless capacity of constrained finite-size two-dimensional models. However, the proposed algorithm works also for various generalizations and noisy channel models.
Recently, several approaches have been proposed to solve the capacity estimation problem in two-dimensional constrained channels. These methods rely either on variational approximations \cite{sabato2012generalized} or on Markov chain Monte Carlo \cite{loeliger2009estimating, molkaraie2013information}. Compared to these methods our algorithm is fundamentally different; samples are drawn sequentially from a sequence of probability distributions of increasing dimensions using \smc coupled with a finite state-space forward-backward procedure. We compare our proposed algorithm to a state-of-the-art Monte Carlo estimation algorithm proposed in \cite{loeliger2009estimating, molkaraie2013information}. Using \smc algorithms has earlier been proposed to compute the information rate of one-dimensional continuous channel models with memory \cite{dauwels2008computation}. Although both approaches are based on \smc, the methods, implementation and goals are very different.
\section*{Acknowledgment}
Supported by the projects \emph{Probabilistic modeling of dynamical systems} (Contract number:
621-2013-5524) and \emph{Learning of complex dynamical systems} (Contract number: 637-2014-466),
both funded by the Swedish Research Council.
We would also like to thank Lukas Bruderer for suggesting the application.
\bibliographystyle{IEEEtran}
\section{Sequential Monte Carlo}\label{sec:smc}
Sequential Monte Carlo methods, also known as particle filters, are designed to sample sequentially from some sequence of target
distributions: $\bar{\gamma}_k(\x_{1:k})$, $k = 1,\,2\,\dots$. While SMC is most commonly used for inference on directed chains, in particular for state-space
models, these methods are in fact much more generally applicable. Specifically, as we shall see below,
SMC can be used to simulate from the joint \pmf specified by an undirected chain.
Consequently, by using the representation introduced in Section~\ref{sec:background}
it is possible to apply SMC to estimate the partition function of the
$2$-D $(1,\infty)$ run-length limited constrained channel.
We start this section with a short introduction to \smc
samplers with some known theoretical results. These results are then used to compute an unbiased
estimate of the partition function. We leverage the undirected chain model with the \smc sampler and
show how we can perform the necessary steps using {\em Forward Filtering/Backward Sampling} (\ffbs)
\cite{carter1994gibbs, fruhwirth1994data}. For a more thorough description of \smc methods see \eg
\cite{DoucetJ:2011,doucet2001sequential}.
\subsection{Estimating the partition function using fully adapted \smc}
We propose to use a {\em fully adapted} \smc algorithm \cite{PittS:1999}.
That the sampler is \emph{fully adapted} means that the proposal distributions
for the resampling and propagation steps are optimally chosen with respect to minimizing the
variance of the importance weights, \ie the importance weights for a fully adapted sampler
are all equal. Using the optimal proposal distributions---which can significantly reduce the variance
of estimators derived from the sampler---is not tractable in general. However, as we shall see below,
this is in fact possible for the square lattice \pgm described above.
For the undirected chain model (see Figure~\ref{fig:hmmfig_b}), we let $\bar{\gamma}_k(\x_{1:k})$ be the \pmf induced by the sub-graph corresponding
to the first $k$ variables.
Specifically, $\bar{\gamma}_k(\x_{1:k}) = \frac{\gamma_k(\x_{1:k})}{Z_k}$, where the unnormalized distributions $\gamma_k(\x_{1:k})$ are given by
\begin{subequations}
\begin{align}
\gamma_1 (\xk[1]) &= \bphi(\xk[1]),\\
\gammak (\Xk) &= \gamma_{k-1}(\Xk[1:k-1]) \bphi(\xk) \bpsi(\xk,\xk[k-1]),
\end{align}
\end{subequations}
with $\bphi (\cdot), \bpsi (\cdot)$ as defined in~\eqref{eq:psiDef} and $Z_k$ being the normalizing constant for $\gamma_k(\x_{1:k})$.
We take the sequence of distributions $\bar\gamma_k(\x_{1:k})$ for $k = \range{1}{M}$ as the target distributions for the \smc sampler.
Note that $\bar{\gamma}_k(\x_{1:k})$ for $k < M$ is \emph{not}, in general, a marginal distribution under $p(\xV)$.
This is, however, not an issue since by construction $\bar\gamma_M(\x_{1:M}) = p(\xV)$ (where $\x_{1:M}$ coincides with $\xV$),
\ie at iteration $k=M$ we still recover the correct target distribution.
At iteration $k$, the \smc
sampler approximates $\bar{\gamma}_k(\x_{1:k})$ by a collection of particles
$\{\Xk^i\}_{i=1}^N$, where $\Xk = \{\x_1,\ldots, \xk \}$ is the set of all variables in column $1$
through $k$ of the \pgm. These samples define an empirical point-mass approximation of
the target distribution,
\begin{align*}
\widehat\gamma_k^N(\Xk) \eqdef \frac{1}{N}\sum_{i=1}^N \delta(\Xk-\Xk^i),
\end{align*}
where $\delta(x)$ is the Kronecker delta.
The standard \smc algorithm produces a collection of weighted particles. However, as mentioned above, in the fully adapted setting we use a specific choice of proposal distribution and resampling probabilities, resulting in
equally weighted particles \cite{PittS:1999}.
Consider first the initialization at iteration $k=1$. The auxiliary probability distribution $\bar\gamma_1(\x_1)$ corresponds to the \pgm induced by the first column of the original square lattice model. That is, the graphical model for $\bar\gamma_1(\x_1)$ is a chain (the first column of Figure~\ref{fig:hmmfig_a}). Consequently, we can sample from this distribution exactly, as well as compute the normalizing constant $Z_1$, using \ffbs. The details are given in the subsequent section. Simulating $\Np$ times from $\bar\gamma_1(\x_1)$
results in an \emph{equally weighted} sample $\{\x_1^i\}_{i=1}^\Np$ approximating this distribution.
We proceed inductively and assume that we have at hand a sample $\{\Xk[1:k-1]^i\}_{i=1}^N$,
approximating $\bar{\gamma}_{k-1}(\Xk[1:k-1])$. This sample is propagated forward by simulating,
conditionally independently given the particle generation up to iteration $k-1$, as follows: We
decide which particle among $\{\x_{1:k-1}^{j}\}_{j=1}^\Np$ that should be used to generate a new particle $\x_{1:k}^{i}$
(for each $i \in \set{1}{N}$) by drawing an {\em ancestor index} $a_k^i$ with probability
\begin{align}
\label{eq:smc:resampling}
\Prb(a_k^i = j) = \frac{ \nu_{k-1}^j }{ \sum_{l} \nu_{k-1}^l }, \qquad j \in \set{1}{N},
\end{align}
where $\nu_{k-1}^i$ are resampling weights. The variable $a_k^i$ is the index of the particle at iteration $k-1$ that will be
used to construct $\Xk^i$.
Generating the ancestor indices corresponds to a selection---or
resampling---process that will put emphasis on the most likely particles.
This is a crucial step of the \smc sampler.
For the fully adapted sampler, the resampling weights $\nu_{k-1}^i = \nu_{k-1}(\Xk[k-1]^{i})$
are chosen in order to adapt the resampling to the \emph{consecutive target distribution} $\bar{\gamma}_k$ \cite{PittS:1999}.
Intuitively, a particle $\x_{1:k-1}^i$ that is probable under the marginal distribution $\sum_{\x_k}\bar\gamma_k(\x_{1:k})$
will be assigned a large weight. Specifically, in the fully adapted algorithm we pick the resampling weights according to
\begin{align}
\label{eq:smc:adjustmult}
\nu_{k-1}(\Xk[k-1]) = \sum_{\xk} \frac{\gammak (\Xk)}{\gamma_{k-1}(\Xk[1:k-1])}
= \sum_{\xk} \bphi(\xk) \bpsi(\xk,\xk[k-1]).
\end{align}
Given the ancestors, we simulate $\xk^i$
from the optimal proposal distribution: $\x_k^i \sim q(\cdot \mid \x_{k-1}^{a_k^i})$ for $i=\range{1}{\Np}$, where
\begin{align}
\label{eq:smc:propagation}
q(\x_k \mid \x_{k-1}) = \frac{\bphi(\xk)\bpsi(\xk,\xk[k-1])}{\sum_{\xk'} \bphi(\xk') \bpsi(\xk',\xk[k-1])}.
\end{align}
Again, simulating from this distribution, as well as computing the resampling weights \eqref{eq:smc:adjustmult},
can be done exactly by running \ffbs on the $k$th column of the model. Finally, we augment the particles as,
$\Xk^i \eqdef (\Xk[1:k-1]^{a_k^i}, \xk^i)$. As pointed out above, with the choices \eqref{eq:smc:adjustmult} and \eqref{eq:smc:propagation}
we obtain a collection of equally weighted particles $\{ \x_{1:k}^i \}_{i=1}^\Np$, approximating $\bar\gamma_k(\x_{1:k})$.
At iteration $k=M$, the \smc sampler provides a Monte Carlo approximation of the joint \pmf $p(\xV) = \bar\gamma_M(\x_{1:M})$.
While this can be of interest on its own, we are primarily interested in the normalizing constant $Z$ (\ie the partition function). However,
it turns out that the \smc algorithm in fact provides an estimator of $Z_k$ as a byproduct, given by
\begin{equation}
\label{eq:smc:Zhat}
\widehat Z_k^N \eqdef Z_1 \prod_{\ell = 1}^{k-1} \left( \frac{1}{N} \sum_{i=1}^N \nu_\ell^i \right).
\end{equation}
It may not be obvious to see why \eqref{eq:smc:Zhat} is a natural estimator of the normalizing constant $Z_k$.
However, it has been shown that this \smc-based estimator is unbiased for any $N \geq 1$ and $k=1,\ldots,M$.
This result is due to \cite[Proposition~7.4.1]{DelMoral:2004}.
Specifically, for our $2$-D constrained channel example, it follows that at the last iteration $k=M$ we have an unbiased estimator of the partition function
\begin{align}
\label{eq:smc:unbiased}
\EE[\widehat Z_M^N] = Z.
\end{align}
Furthermore, under a weak integrability condition the estimator is asymptotically normal with a rate $\sqrt{\Np}$:
\begin{align}
\label{eq:smc:clt}
\sqrt{\Np}( \widehat Z_M^N - Z) \stackrel{d}{\rightarrow} \Normal(0, \sigma^2),
\end{align}
where an explicit expression for $\sigma^2$ is given in \cite[Proposition~9.4.1]{DelMoral:2004}.
\subsection{\smc samplers and Forward Filtering/Backward Sampling}
To implement the fully adapted \smc sampler
described above we are required to compute the sums involved in equations \eqref{eq:smc:adjustmult} and
\eqref{eq:smc:propagation}. By brute force calculation our method would be
computationally prohibitive, as the complexity is exponential in the dimensionality $M$ of the chain.
However, as we show below, it is possible to use \ffbs to efficiently carry out these summations.
This development is key to our proposed solution to the problem of estimating the partition
function, since the computational complexity of estimating the channel capacity is reduced from $\ordo (\Np M 2^M)$ (brute force) to
$\ordo (\Np M^2)$ (\ffbs).
Initially, at $k=1$, the graph describing the target distribution $\bar\gamma_1(\x_1)$
is trivially a chain which can be sampled from exactly by using \ffbs. Additionally, the normalizing constant
$Z_1$ can be computed in the forward pass of the \ffbs algorithm.
However, this is true for any consecutive iteration $k$ as well. Indeed, simulating $\x_k$ under $\bar\gamma_{k}$, conditionally on
$\Xk[1:k-1]$, again corresponds to doing inference on a chain. This means we can
employ \ffbs to compute the resampling weights \eqref{eq:smc:adjustmult} (corresponding to a conditional normalizing constant computation)
and to simulate $\x_k$ from the optimal proposal \eqref{eq:smc:propagation}.
Let $k$ be a fixed iteration of the \smc algorithm. The forward filtering step of \ffbs is performed by sending messages
\begin{align}
\label{eq:forwardfiltering}
m^i_{j+1}(x_{j+1,k}) = \sum_{x_{j,k}} \psi (x_{j+1,k},x_{j,k}) \psi(x_{j,k}, x_{j,k-1}^i) m^i_j(x_{j,k}),
\end{align}
for $j=1,\ldots, M-1$, with $m^i_1(x_{1,k}) \equiv 1$, \ie from the top to the bottom of column~$k$. The resampling weights
are given as a byproduct from the message passing as
\begin{align}
\label{eq:ffbs:adjustmult}
\nu_{k-1}(\Xk[k-1]^i) &= \sum_{\xk} \bphi(\xk) \bpsi(\xk,\xk[k-1]^i) \nonumber \\
&= \sum_{x_{M,k}} \psi(x_{M,k}, x_{M,k-1}^i) m^i_M(x_{M,k}).
\end{align}
After sampling the ancestor indices $a_k^i$ as in \eqref{eq:smc:resampling}, we perform backward sampling
to sample the full column of variables $\xk$, one at a time in reverse order $j = M,\ldots,1$,
\begin{align}
\label{eq:backwardsampling}
x_{j,k}^i \sim \frac{\psi(x_{j,k},x_{j+1,k}^i) \psi(x_{j,k}, x_{j,k-1}^{a_k^i}) m_j^{a_k^i}(x_{j,k})}{\sum_{x_{j,k}'} \psi(x_{j,k}',x_{j+1,k}^i) \psi(x_{j,k}', x_{j,k-1}^{a_k^i}) m_j^{a_k^i}(x_{j,k}')},
\end{align}
with straightforward modifications for $j = 1$ and $M$.
This results in a draw $\xk^i = \prange{x_{1,k}^i}{x_{M,k}^i}$ from the optimal proposal $q(\cdot \mid \x_{k-1}^{a_k^i})$ defined in \eqref{eq:smc:propagation}.
A summary of the resulting solution is provided in Algorithm~\ref{alg:smcffbs}.
\begin{algorithm}[htb]
\caption{Channel capacity estimation}
\label{alg:smcffbs}
\begin{algorithmic}
\STATE {\em Perform each step for $i = 1,\ldots,N$, except setting $\widehat Z_k^N$.}
\STATE Sample $\Xk[1]^i$ using \ffbs \eqref{eq:forwardfiltering}, \eqref{eq:backwardsampling}.
\STATE Set $\widehat Z_1^N = Z_1$.
\FOR{$k=2$ {\bfseries to} $M$}
\STATE Calculate $\nu_{k-1}(\x_{k-1}^i)$ using forward filtering \eqref{eq:forwardfiltering}-\eqref{eq:ffbs:adjustmult}.
\STATE Sample $a_k^{i}$ according to~\eqref{eq:smc:resampling}.
\STATE Sample $\xk^i$ using backward sampling \eqref{eq:backwardsampling}.
\STATE Set $\Xk^i = (\Xk[1:k-1]^{a_k^i}, \xk^i)$.
\STATE Set $\widehat Z_k^N = \widehat Z_{k-1}^N \left(\frac{1}{N} \sum_{i=1}^N \nu_{k-1}(\x_{k-1}^i)\right)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
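A compact reference implementation of Algorithm~\ref{alg:smcffbs} may help to see how the pieces fit together. The Python sketch below is our own illustration, not the code used for the experiments; it folds the between-column factors into the forward messages, an equivalent normalization of \eqref{eq:forwardfiltering}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
PSI = np.array([[1.0, 1.0], [1.0, 0.0]])  # psi(a, b) = 0 iff a = b = 1

def forward(unary):
    # unary[j, x]: between-column factor for row j (all ones at k = 1)
    M = unary.shape[0]
    msgs = np.empty((M, 2))
    log2_nu = 0.0
    alpha = unary[0].astype(float)
    for j in range(M):
        if j > 0:
            alpha = unary[j] * (PSI @ msgs[j - 1])
        c = alpha.sum()
        log2_nu += np.log2(c)      # product of the c_j's is the constant
        msgs[j] = alpha / c        # normalized filter message
    return msgs, log2_nu

def backward_sample(msgs):
    M = msgs.shape[0]
    x = np.empty(M, dtype=int)
    x[M - 1] = int(rng.random() < msgs[M - 1, 1])
    for j in range(M - 2, -1, -1):
        p = msgs[j] * PSI[x[j + 1]]
        x[j] = int(rng.random() < p[1] / p.sum())
    return x

def smc_capacity(M, N):
    # k = 1: exact sampling from the first-column distribution
    msgs1, log2_Z = forward(np.ones((M, 2)))   # Z_1, same for every particle
    cols = np.array([backward_sample(msgs1) for _ in range(N)])
    for _ in range(M - 1):
        msgs, log2_nu = [], np.empty(N)
        for i in range(N):
            unary = PSI[:, cols[i]].T          # unary[j, x] = psi(x, xprev[j])
            m, log2_nu[i] = forward(unary)
            msgs.append(m)
        c = log2_nu.max()                      # log-domain trick
        w = 2.0 ** (log2_nu - c)
        anc = rng.choice(N, size=N, p=w / w.sum())
        cols = np.array([backward_sample(msgs[a]) for a in anc])
        log2_Z += np.log2(w.mean()) + c
    return log2_Z / M**2

print(smc_capacity(10, 2000))  # should approach C_10 ~ 0.6082
\end{verbatim}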
\subsection{Practical implementation details}
For numerical stability it is important to use a few tricks when implementing
Algorithm~\ref{alg:smcffbs}. First, the magnitude of the messages~\eqref{eq:forwardfiltering} grows quickly
with the chain dimension $M$ and the risk of overflow is large for realistic graph sizes. This can be resolved by instead working with the normalized messages
$\mu$,
\begin{align}
\label{eq:normalizedmessage}
\mu_{j+1}^i(x_{j+1,k}) = \frac{1}{c_{j+1}^i} \sum_{x_{j,k}} \psi (x_{j+1,k},x_{j,k}) \psi(x_{j,k}, x_{j,k-1}^i) \mu_{j}^i(x_{j,k}),
\end{align}
where $c_{j+1}^i = \displaystyle \sum_{x_{j:j+1,k}} \psi (x_{j+1,k},x_{j,k}) \psi(x_{j,k},
x_{j,k-1}^i) \mu_j^i(x_{j,k})$ is just the normalization constant of the message. We can see that
using the normalized message $\mu_j^{i}$ instead of $m_j^i$ in \eqref{eq:backwardsampling} does not change the
distribution that we are sampling from. Furthermore, it is easy to verify that the resampling weights
are given by
\begin{align}
\label{eq:efficient-implementation-weights}
\nu_{k-1}(\Xk[k-1]^i) = \left(\prod_{j=1}^{M-1}c_{j+1}^i \right) \sum_{x_{M,k}} \psi(x_{M,k}, x_{M,k-1}^i) \mu^i_M(x_{M,k}).
\end{align}
Secondly, since we are actually interested in calculating the capacity, which is proportional to
$\log_2 Z$, we estimate the log-partition function as follows
\begin{align}
\log_2 \widehat Z_k^N = \log_2 \widehat Z_{k-1}^N + \log_2 \left( \sum_{i=1}^N \nu_{k-1}(\x_{k-1}^i)\right) - \log_2 \Np.
\end{align}
Note that in taking the logarithm of $\widehat Z_k^N$ we introduce a negative bias (\cf \eqref{eq:smc:unbiased}). However, the
estimator of the log-partition function (and thus also the capacity~\eqref{eq:bkg:capacity}) is nevertheless consistent and the bias decreases at a rate $\ordo(1/N)$.
Indeed, as we will see, in practice the bias is negligible and the error is dominated by the variance.
Thirdly, in \smc implementations it is advisable to work with the logarithms of the
resampling weights. This will usually lead to increased numerical stability and help to
combat underflow/overflow issues.
With $\log_2 \nu_{k-1}(\Xk[k-1]^i)$ being the logarithm of \eqref{eq:efficient-implementation-weights}, we update the weights as:
\begin{subequations}
\label{eq:num:adjustmult}
\begin{align}
c &\gets \underset{i}{\text{max}} \left\{ \log_2 \nu_{k-1}(\Xk[k-1]^i) \right\},\\
\label{eq:newadjustmult}
\nu_{k-1}(\Xk[k-1]^i) &\gets 2^{\log_2 \nu_{k-1}(\Xk[k-1]^i) - c}.
\end{align}
\end{subequations}
where $c$ is the maximum of the logarithms of the adjustment multipliers.
Subtracting the maximum value $c$ from all the log-weights improves numerical stability and it does not change the
resampling probabilities \eqref{eq:smc:resampling} due to the normalization. However, we must add the constant $c$ to the
sequential estimate of the log-partition function, \ie
\begin{multline}
\label{eq:num:partition}
\log_2 \widehat Z_k^N = \log_2 \widehat Z_{k-1}^N + \log_2 \left( \sum_{i=1}^N \nu_{k-1}(\x_{k-1}^i)\right)
\\- \log_2 \Np + c,
\end{multline}
where $\nu_{k-1}(\x_{k-1}^i)$ are the modified weights given by \eqref{eq:newadjustmult}.
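In code, one step of the stable updates \eqref{eq:num:adjustmult} and \eqref{eq:num:partition} may look as follows (a Python sketch of ours):
\begin{verbatim}
import numpy as np

def update_log2_Z(log2_Z_prev, log2_nu):
    # log2_nu: array of N log-domain resampling weights at iteration k
    c = log2_nu.max()
    w = 2.0 ** (log2_nu - c)        # safe from overflow/underflow
    return log2_Z_prev + np.log2(w.sum()) - np.log2(len(w)) + c
\end{verbatim}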
\section{Introduction}
Diffractive scattering is a process in which the colliding particles scatter at very small angles and without any color flux in the final state. It involves a propagator carrying the vacuum quantum numbers, called the Pomeron, which is described, in the soft regime, within the Regge theory. Since the first operation period in 1992, ZEUS and H1, the two experiments dedicated to DIS physics at HERA, observed that a sizeable fraction ($\sim 10 \%$) of lepton-proton DIS events had a diffractive origin. This opened a new area of study of the diffractive production mechanism, providing a hard scale which can be
varied over a wide range and which therefore makes it an ideal testing ground for QCD models.
Moreover, the diffractive production of Vector Mesons (VMs) and real photons allows one to study the transition from the soft to the hard regime in strong interactions. The hard regime (high energy and low Bjorken-$x$, $x_{Bj}$) is sensitive to the gluon content and well described by perturbative QCD, while in the soft regime (low $Q^2$) the interaction is well described within the Regge phenomenology. Denoting by $Q^2$ the virtuality of the exchanged photon and by $M^2$ the squared mass of the produced VM, HERA data suggested a universal hard scale, $Q^2+M^2$, for the diffractive exclusive production of VMs and real photons, which marks the transition from the soft to the hard regime.
Moreover, the diffractive production of real photons, a process known as Deeply Virtual Compton Scattering (DVCS), leads to the extraction of the Generalized Parton Distribution functions (GPDs), containing combined information about the longitudinal momentum distribution of partons and their position in the transverse plane. The GPD-based calculations will be very helpful in the description of the Higgs boson diffractive production mechanism, which will be experimentally studied at the LHC accelerator.
The following Sections will present the most recent results achieved at HERA together with a short outlook on the future exclusive diffraction program at the LHC. An introduction to a new phenomenological model for the description of the VM and DVCS amplitudes in the framework of the Regge theory will also be given.
\section{Exclusive diffraction at HERA}
\begin{figure}[t]
\centering
{\includegraphics[width=9.0cm]{vm-kin.eps}}
\caption {Typical Feynman diagram for the exclusive diffractive VM production at HERA, showing the relevant kinematic variables.}
\label{diff_diag}
\end{figure}
The diffractive processes are characterized by the presence of a leading proton
in the final state carrying most of the proton beam energy and by a large rapidity gap (LRG) in the forward (proton) direction.
Figure~\ref{diff_diag} shows a schematic diagram of the exclusive diffractive process at HERA, $ep\rightarrow eVp$, together with the relevant kinematic variables: the photon virtuality, $Q^2$, the photon-proton centre-of-mass energy, $W$,
and the square of the four-momentum transfer at the proton vertex, $t$. Relevant kinematic variables in diffraction are also the fraction
of the proton longitudinal momentum carried by the exchanged
colour singlet object, $x_{IP}$, and the fraction of the exchanged momentum carried by the quark coupling to the virtual photon, $\beta$.
\subsection{The $Q^2$ and $W$ dependence of the cross section}
Recently, a precision measurement of the reaction $\gamma^*p\rightarrow\rho^0 p$ was published by ZEUS~\cite{zeus_rho}. It was found that the cross section falls steeply with increasing $Q^2$ but, unlike what was observed for $J/\psi$ electroproduction~\cite{zeus_jpsi,h1_jpsi}, it cannot be described by a simple propagator term like $\sigma\propto (Q^2+M^2)^{-n}$; in particular, an $n$ value increasing with $Q^2$ appears to be favored. Figure~\ref{q2_rho} reports the cross section for $\rho^0$ electroproduction versus $Q^2$ compared with several theoretical predictions: the KMW model~\cite{KMW} based on the saturation model, the FSS model~\cite{FSS} with and without saturation, and the DF model~\cite{DF}.
\begin{figure}[h]
\centering
{\includegraphics[width=0.7\textwidth]{DESY-07-118_25.eps}}
\caption {The $\gamma^*p\rightarrow\rho^0p$ cross section as a function of $Q^2$ measured at $W=90\;GeV$ and compared in (a) and (b) with different models as described in the text.}
\label{q2_rho}
\end{figure}
The soft to hard transition can be observed by looking at the $W$-dependence of VM photoproduction ($Q^2=0$), where the scale is provided by $M^2$. Figure~\ref{W_php} collects $\sigma ( \gamma^* p\rightarrow V p )$ as a function of $W$ from the lightest vector meson, $\rho^0$, to the heaviest, $\Upsilon$, compared to the total cross section.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.55\textwidth,angle=0]{W_php.eps}
\caption{The $W$ dependence of the cross section for exclusive VM photoproduction together with the total photoproduction cross section. Lines are the result of a $W^{\delta}$ fit to the data at high $W$-energy values. \label{W_php}}
\end{figure}
The cross section rises with the energy as $W^{\delta}$, where the $\delta$ exponent increases with the hard scale $M^2$ as expected for a transition from the soft to the hard regime. New results on the $\Upsilon$ photoproduction~\cite{upsilon}, recently published by ZEUS, confirmed the steeper rise of $\sigma(W)$ for higher vector meson masses.
The transition from the soft to the hard regime can also be studied varying $Q^2$. Recent results were achieved by H1~\cite{h1_dvcs} and ZEUS~\cite{zeus_dvcs} for the exclusive production of a real photon, the Deeply Virtual Compton Scattering (DVCS), where the hard scale is provided only by the photon virtuality, $Q^2$. Figure~\ref{W_dvcs} shows the H1 (left) and the ZEUS (right) results. A similar result was obtained for the $J/\psi$ electroproduction~\cite{zeus_jpsi,h1_jpsi}.
\begin{figure}[htbp]
\begin{tabular}{lr}
\includegraphics[width=0.5\textwidth,angle=0]{d07-142f3b.eps} &
\includegraphics[width=0.5\textwidth,angle=0]{DESY-08-132_3.eps}
\end{tabular}
\caption{The $W$ dependence of the cross section for a DVCS process. Lines come from a $W^{\delta}$ fit to the data. Left: the H1 measurement of the $\delta$ slope as a function of $Q^2$. Right: the new ZEUS measurement at low $Q^2$ (dots) together with the published measurements (squares). \label{W_dvcs}}
\end{figure}
The electroproduction of a large variety of VMs was studied at different $Q^2$ values and the corresponding slope $\delta$ is reported in Fig.~\ref{W_dis} (left) versus the scale $Q^2+M^2$, including the DVCS measurements.
The behaviour of the data seems to be universal, with $\delta$ rising from the value of 0.2 expected from a soft Pomeron exchange~\cite{Wsoft} and following a logarithmic shape $\delta\propto \ln(Q^2+M^2)$.
The steep rise with $W$ of the cross section even at low $Q^2$ seems to suggest that the part most sensitive to the soft scale comes from the wave function of the produced VM.
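Such power-law fits are straightforward to reproduce. The Python sketch below (with purely illustrative, invented data points) extracts $\delta$ from a fit $\sigma = A\,W^{\delta}$ by linear regression in log-log space:
\begin{verbatim}
import numpy as np

# hypothetical (W, sigma) points, for illustration only
W = np.array([30.0, 50.0, 80.0, 120.0, 170.0])   # GeV
sigma = np.array([4.1, 6.0, 8.9, 12.6, 16.8])    # arbitrary units

delta, logA = np.polyfit(np.log(W), np.log(sigma), 1)
print("delta =", delta)  # ~0.2 in the soft regime, larger at hard scales
\end{verbatim}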
\begin{figure}[htbp]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.53\textwidth,angle=0]{W_dis.eps}
\includegraphics[width=0.46\textwidth,angle=0]{b_collec.eps}
\end{tabular}
\caption{The dependence on the hard scale $Q^2+M^2$ of the value $\delta$ (left) extracted from a fit $W^{\delta}$ and of the slope $B$ ($b$ in the figure label) (right) extracted from a fit $\frac{d\sigma}{d|t|}\propto e^{-B|t|}$ for the exclusive VM electroproduction. DVCS is also included. \label{W_dis}}
\end{figure}
\subsection{$t$ dependence of the cross section and GPDs}
The differential cross section as a function of $t$ can be parametrised by an exponential fit: $\frac{d\sigma}{d|t|}\propto e^{-b|t|}$. Figure~\ref{W_dis} (right) reports the collection of the $b$ values versus the scale $Q^2+M^2$ for the electroproduction of VMs and DVCS, with $b$ decreasing from $\sim 11\; GeV^{-2}$ to $\sim 5\; GeV^{-2}$ as expected in the hard regime.
Since the $b$ value can be related via a Fourier transform to the impact parameter and assuming that the exclusive process in the hard regime is dominated by gluons, the relation $\langle r^2\rangle=2b(\hbar c)^2$ can be used to obtain the radius of the gluon confinement area in the proton. $b\sim 5\;GeV^{-2}$ corresponds to $\sqrt{\langle r^2\rangle}\sim 0.6\;fm$, smaller than the proton radius ($\sim 0.8\;fm$), indicating that the gluons are well contained within the charge-radius of the proton.
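This conversion is simple to evaluate numerically; in the Python sketch below (ours) $\hbar c = 0.1973$ GeV fm:
\begin{verbatim}
HBARC = 0.1973  # GeV * fm

def rms_radius(b):
    # transverse rms radius [fm] from the t-slope b [GeV^-2],
    # using <r^2> = 2 b (hbar c)^2
    return (2.0 * b)**0.5 * HBARC

print(rms_radius(5.0))   # ~0.62 fm, cf. proton charge radius ~0.8 fm
print(rms_radius(11.0))  # ~0.93 fm
\end{verbatim}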
Measurements of the $t$-slope parameters $b$ are key measurements
for almost all exclusive processes. Indeed, a Fourier
transform from momentum to impact parameter space
readily shows that the $t$-slope is related to the typical
transverse distance between the colliding objects. At
high scale, the $q\bar{q}$ dipole is almost point-like, and the $t$
dependence of the cross section is given by the transverse
extension of the gluons (or sea quarks) in the proton for
a given $x_{Bj}$ range. In particular for DVCS, interpretation
of t-slope measurements does not suffer from
the lack of knowledge of the VM wave function.
Then, a DVCS cross section, differential in $t$, is directly related to
GPDs~\cite{b_slope_papers}.
The measurement of $d\sigma/d|t|$ for the DVCS process, recently published by the H1 Collaboration~\cite{h1_dvcs}, where $t$ was obtained from the transverse momentum distribution of the photon, studied $b$ versus $Q^2$ and $W$ as shown in Fig.~\ref{b_dvcs}. $b$ seems to decrease with $Q^2$ down to the value expected for a hard process, while it does not depend on $W$.
\begin{figure}[htbp]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth,angle=0]{d09-109f4a.eps}
\includegraphics[width=0.5\textwidth,angle=0]{d09-109f4b.eps}
\end{tabular}
\caption{The $t$ slope parameter, $b$, as a function of $Q^2$ (left) and $W$ (right).}
\label{b_dvcs}
\end{figure}
A new ZEUS measurement~\cite{zeus_dvcs} of $d\sigma/d|t|$ has been achieved from a direct measurement of the final-state proton using a spectrometer based on the Roman-pot technique (see Fig.~\ref{b_dvcs}, right). The result $b=4.5\pm 1.3~(stat.)\pm 0.4~(syst.)~GeV^{-2}$, measured at $Q^2=3.2~GeV^2$ and $W=104~GeV$, is consistent, within the large uncertainties due to the low acceptance of the spectrometer, with the H1 result~\cite{h1_dvcs} of $b=5.45\pm 0.19~(stat.)\pm 0.34~(syst.)~GeV^{-2}$ at $Q^2=8~GeV^2$ and $W=82~GeV$.
Complete parton imaging in the nucleon would require measurements of $b$ over a wide range of $x_{Bj}$ values, $0.001 < x_{Bj} < 0.1$, which appears to be experimentally very difficult. There is, however, one way to recover the $x_{Bj}$ and $t$ correlations over the whole $x_{Bj}$ domain: to measure a Beam Charge Asymmetry (BCA).
A determination of a cross section asymmetry with
respect to the beam charge has been realized by the H1~\cite{h1_dvcs} and HERMES~\cite{hermes_dvcs}
experiments by measuring the ratio $(d\sigma^+ - d\sigma^-) / (d\sigma^+ + d\sigma^-)$ as a function of the azimuthal angle, $\phi$, between the production and the scattering plane. The results are shown in Fig.~\ref{BCA}.
\begin{figure}[htbp]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth,angle=0]{frank.bca_prl.eps}
\includegraphics[width=0.5\textwidth,angle=0]{d09-109f6.eps}
\end{tabular}
\caption{The BCA as a function of the azimuthal angle between the production and the scattering plane, measured by the HERMES (left) and H1 (right) experiments at HERA.}
\label{BCA}
\end{figure}
\section{Exclusive diffraction at LHC}
Interest in diffraction at the LHC has been greatly boosted recently by theoretical predictions~\cite{KMR} that identified central exclusive production (CEP) as a potential discovery channel for the Higgs boson. In the last years, both ATLAS and CMS have set up forward physics programs. Both experiments have Zero Degree Calorimeters with acceptance for neutral particles for $|\eta| > 8.3$. Apart from those, their forward instrumentation is quite complementary. ATLAS is equipped with a dedicated luminosity system
consisting of ALFA, Roman-pot housed tracking detectors at 240 m from the interaction point (I.P.), which will perform an absolute luminosity measurement in runs with special LHC optics, and of LUCID, Cherenkov detectors ($5.6 < |\eta| < 6.0$) for the primary purpose of luminosity monitoring during routine LHC data taking. At the CMS I.P., the task of an absolute luminosity determination will be carried out by an independent experiment, TOTEM, with Roman-pot housed silicon detectors at 220 m distance from the I.P. and two tracking
telescopes inside of the CMS volume. CMS in addition has the CASTOR calorimeter which extends the CMS calorimetric coverage to rapidity values of 6.5. CASTOR gives access to the QCD parton evolution dynamics at very low-$x$.
In addition, FP420, a joint R\&D program of ATLAS, CMS and the LHC machine group has investigated the
feasibility of an upgrade of the forward detector instrumentation to make possible the direct observation of the scattered protons in CEP of a Higgs boson.
\section{The Pomeron at HERA, a two-pole model}
A simple factorized Regge-pole model~\cite{DVCS} for the description of the DVCS amplitude was suggested and
successfully tested against the HERA data.
The authors are now working to include VM production in the analysis by using and extending the main ideas of the model.
It follows from perturbative QCD that asymptotically the Pomeron
is an infinite sum of poles. This result is far from practical
realization; however, in a minimal version, it is legitimate to
assume that the Pomeron is a sum of two Regge poles
\begin{equation}\label{TwoP}
A(s,t,\tilde Q^2)=f_s(s,t,\tilde
Q^2)(-is/s_0)^{\alpha_s(t)}+f_h(s,t,\tilde
Q^2)(-is/s_0)^{\alpha_h(t)},
\end{equation}
where the subscripts indicate that the first term is ``soft'', with
$\alpha_s(0)\approx 1.08$, and the second one is ``hard'', with
$\alpha_h(0)\approx 1.4$, these numbers coming from fits to
soft hadronic and hard deep inelastic reactions. Such a
two-component Pomeron was first suggested by Landshoff \cite{L} and it
accounts for both soft and hard processes in the framework of
a universal $Q^2$-independent Pomeron, which implies that the
trajectories $\alpha_s$ and $\alpha_h$ are the same in all
reactions, differing only by their weights, determined by the
residue functions $f_i(s,t,\tilde Q^2),\ \ i=s,h$.
According to arguments~\cite{Kaidalov} based on
unitarity, $f_h$ is progressively damped more than $f_s$ at low
$Q^2+M^2$; hence the hard term is masked at low
$Q^2$ (soft reactions) while it dominates at high
$Q^2$.
\begin{figure}[h]
\begin{center}
\includegraphics[clip,scale=0.7]{Fig0.eps}
\end{center}
\caption{\small\it{Sum of two Regge-Pomeron exchanges,
approximated by an ``effective Pomeron'' (rightmost diagram).}}
\label{fig:diagram2}
\end{figure}
In the model the two (soft and hard) terms have identical
functional form, differing only by the parameters of two trajectories,
$\alpha_s(t)$ and $\alpha_h(t)$. This is
a universal Pomeron in the sense that its trajectories are
the same in any (soft or hard) process, varying only the
relative weight of the two terms
\begin{eqnarray}\label{2}
A(s,t,Q^2)=A^s+h A^h=-[e^{b_1\alpha^s(t)}e^{b^s_2
\beta^s(z)}(-is/s_0)^{\alpha^s(t)}+\nonumber \\
h e^{b_1\alpha^h(t)}e^{b^h_2
\beta^h(z)}(-is/s_0)^{\alpha^h(t)}].
\end{eqnarray}
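As a numerical illustration, the amplitude \eqref{2} can be evaluated directly. In the Python sketch below only the intercepts are taken from the text; the trajectory slopes, the residue parameters and the weight $h$ are placeholder values of ours:
\begin{verbatim}
import numpy as np

def amplitude(s, t, h, s0=1.0,
              a_s=(1.08, 0.25), a_h=(1.4, 0.1),   # intercepts from the text,
              b1=1.0, b2s=1.0, b2h=1.0,           # slopes/residues assumed
              beta_s=0.0, beta_h=0.0):
    alpha_s = a_s[0] + a_s[1] * t   # linear trajectories alpha(t)
    alpha_h = a_h[0] + a_h[1] * t
    w = -1j * s / s0
    A_s = np.exp(b1 * alpha_s + b2s * beta_s) * w**alpha_s
    A_h = np.exp(b1 * alpha_h + b2h * beta_h) * w**alpha_h
    return -(A_s + h * A_h)

print(abs(amplitude(1.0e4, -0.1, h=0.05)))
\end{verbatim}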
\section{Introduction}
Increasing the sample size is the most common way to improve the performance of statistical estimators.
In some cases (see, for instance, the experiments of \citet{Evgeniou} on customer data analysis or those of \citet{jacob:clustered} on molecule binding problems), having access to some new data may be impossible, often due to experimental limitations.
One way to circumvent those constraints is to use datasets from several related (and, hopefully, ``similar'') problems, as if it gave additional (in some sense) observations on the initial problem.
The statistical methods using this heuristic are called ``multi-task'' techniques, as opposed to ``single-task'' techniques, where every problem is treated one at a time.
In this paper, we study kernel ridge regression in a multi-task framework and try to understand when multi-task can improve over single-task.
The first trace of a multi-task estimator can be found in the work of \citet{stein1956inadmissibility}.
In this article, Charles Stein showed that the usual maximum-likelihood estimator of the mean of a Gaussian vector (of dimension larger than 3, every dimension representing here a task) is not admissible---that is, there exists another estimator that has a lower risk for every parameter.
He showed the existence of an estimator that uniformly attains a lower quadratic risk by shrinking the estimators along the different dimensions towards an arbitrary point.
An explicit form of such an estimator was given by \citet{james1961estimation}, yielding the famous James-Stein estimator.
This phenomenon, now known as the ``Stein's paradox'', was widely studied in the following years and the behaviour of this estimator was confirmed by empirical studies, in particular the one from \citet{efron1977stein}.
This first example clearly shows the goals of the multi-task procedure: an advantage is gained by borrowing information from different tasks (here, by shrinking the estimators along the different dimensions towards a common point), the improvement being scored by the global (averaged) squared risk.
Therefore, this procedure does not guarantee individual gains on every task, but a global improvement on the sum of those task-wise risks.
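The estimator itself is a one-line shrinkage rule; the following Python sketch (our illustration) implements it for a $p$-dimensional Gaussian mean with known variance:
\begin{verbatim}
import numpy as np

def james_stein(y, sigma2=1.0, target=0.0):
    # James-Stein estimator of theta from y ~ N(theta, sigma2 * I_p), p >= 3,
    # shrinking towards the arbitrary point `target`
    p = len(y)
    d = y - target
    shrink = 1.0 - (p - 2) * sigma2 / np.sum(d**2)  # positive-part variant:
    return target + max(shrink, 0.0) * d            # clip the factor at zero

rng = np.random.default_rng(0)
theta = np.zeros(10)
y = theta + rng.standard_normal(10)
print(np.sum((james_stein(y) - theta)**2), np.sum((y - theta)**2))
\end{verbatim}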
\medskip
We consider here $p \geq 2$ different regression tasks, a framework we refer to as ``multi-task'' regression, and where the performance of the estimators is measured by the fixed-design quadratic risk.
Kernel ridge regression is a classical framework to work with and comes with a natural norm, which often has desirable properties (such as, for instance, links with regularity).
This norm is also a natural ``similarity measure'' between the regression functions.
\citet{Evgeniou} showed how to extend kernel ridge regression to a multi-task setting, by adding a regularization term that binds the regression functions along the different tasks together.
One of the main questions is whether the multi-task estimator has a lower risk than the single-task estimator.
It was recently proved by \citet{solnon:hal-00610534} that a fully data-driven calibration of this procedure is possible, given some assumptions on the set of matrices used to regularize---which correspond to prior knowledge on the tasks.
Under those assumptions, the estimator is showed to verify an \emph{oracle inequality}, that is, its risk matches (up to constants) the best possible one, the \emph{oracle risk}.
Thus, it suffices to compare the oracle risks for the multi-task procedure and the single-task one to provide an answer to this question.
\medskip
The multi-task regression setting, which could also be called ``multivariate regression'', has already been studied in different papers.
It was first introduced by \citet{Brown_Zidek_1980} in the case of ridge regression, and then adapted by \citet{Evgeniou} in its kernel form.
Another view of the meaning of ``task similarity'' is that the functions all share a few common features, and can be expressed by a similar regularization term.
This idea was expressed in a linear setup (also known as the group Lasso) by \citet{Obozinski_Wainwright_Jordan_2011} and \citet{Lounici:1277345}, in multiple kernel learning by \citet{koltchinskii2010sparsity} and in semi-supervised learning by \citet{Ando:2005:FLP:1046920.1194905}.
The kernel version of this was also studied \citep{DBLP:journals/ml/ArgyriouEP08,jacob:clustered}, a convex relaxation leading to a trace norm regularization and allowing the calibration of parameters.
Another point of view was brought by \citet{ben2003exploiting}, who defined a multi-task framework in classification, two classification problems being similar if, given a group of permutations of the input set, a dataset of one problem can be mapped to a dataset of the other by such a permutation.
They followed the analysis of \citet{Baxter2000}, which shows very general bounds on the risk of a multi-task estimator in a model-selection framework, the sets of all models reflecting the insight the statistician has on the multi-task setting.
\medskip
Advantages of the multi-task procedure over the single task one were first shown experimentally in various situations by, for instance, \citet{Thrun96g}, \citet{Caruana:1997:ML:262868.262872} or \citet{Bakker:2003:TCG:945365.945370}.
For classification, \citet{ben2003exploiting} compared upper bounds on multi-task and single-task classification errors, and showed that the multi-task estimator could, in some settings, need less training data to reach the same upper bounds.
The low-dimensional linear regression setting was analysed by \citet{2009arXiv0912.5338R}, who showed that, under sparsity assumptions, restricted isometry conditions and trace-norm regularization, the multi-task estimator achieves the rates of a single-task estimator with an $np$-sample.
\citet{liang10regularization} also obtained a theoretical criterion, applicable to the linear regression setting but unfortunately not observable, which tells when the multi-task estimator asymptotically has a lower risk than the single-task one.
A further step was recently taken by \citet{FeldmanGF12MTav} in a kernel setting where every function is estimated by a constant.
They give a closed-form expression of the oracle for two tasks and run simulations to compare the risk of the multi-task estimator to the risk of the single-task estimator.
\medskip
In this article we study the oracle multi-task risk and compare it to the oracle single-task risk.
We then find situations where the multi-task oracle is proved to have a lower risk than the single-task oracle.
This allows us to better understand which situation favors the multi-task procedure and which does not.
After having defined our model (Section~\ref{sec.model}), we write down the risk of a general multi-task ridge estimator and see that it admits a convenient decomposition using two key elements: the mean of the tasks and the resulting variance (Section~\ref{sec.risk}).
This decomposition allows us to optimize this risk and get a precise estimation of the oracle risk, in settings where the ridge estimator is known to be minimax optimal (Section~\ref{sec.MTrisk}).
We then explore several configurations of the tasks that attain the aforementioned multi-task rates, study their single-task oracle risk (Section~\ref{sec.STrisk}) and compare it to their respective multi-task rates.
This allows us to distinguish several situations, depending on whether the multi-task oracle outperforms its single-task counterpart, underperforms it, or behaves similarly (Section~\ref{sec.MTSTcomp}).
We also show that, in the cases favorable to the multi-task oracle detailed in the previous sections, the estimator proposed by \citet{solnon:hal-00610534} behaves accordingly and achieves a lower risk than the single-task oracle (Section~\ref{sec.riskest}).
We finally study settings where we can no longer explicitly study the oracle risk, by running simulations, and we show that the multi-task oracle continues to retain the same virtues and disadvantages as before (Section~\ref{sec.simus}).
\medskip
\noindent We now introduce some notations, which will be used throughout the article.
\begin{itemize}
\item The integer $n$ is the sample size, the integer $p$ is the number of tasks.
\item For any $n \times p$ matrix $Y$, we define
\begin{equation*}
y = \vect(Y) := \paren{ Y_{1,1}, \ldots, Y_{n,1}, Y_{1,2}, \ldots, Y_{n,2}, \ldots, Y_{1,p}, \ldots, Y_{n,p} } \in \mathbb{R}^{np},
\end{equation*}
that is, the vector in which the columns $Y^j := (Y_{i,j})_{1 \leq i \leq n}$ are stacked.
\item $\mathcal{M}_n(\mathbb{R})$ is the set of all real square matrices of size $n$.
\item $\mathcal{S}_p(\mathbb{R})$ is the set of symmetric matrices of size $p$.
\item $\mathcal{S}_p^+(\mathbb{R})$ is the set of symmetric positive-semidefinite matrices of size $p$.
\item $\mathcal{S}_p^{++}(\mathbb{R})$ is the set of symmetric positive-definite matrices of size $p$.
\item $\mathbf{1}$ is the vector of size $p$ whose components are all equal to $1$.
\item $\norr{\cdot}$ is the usual Euclidean norm on $\mathbb{R}^{k}$ for any $k \in \mathbb{N}$: $\forall u \in \mathbb{R}^k$, $\norr{u}^2 := \sum_{i=1}^k u_i^2$.
\item For two real sequences $(u_n)$ and $(v_n)$ we write $u_n \asymp v_n$ if there exist positive constants $\ell$ and $L$ such that, for a large enough $n$, $\ell v_n \leq u_n \leq L v_n$.
\item For $(a,b)\in\mathbb{R}^2$, $a\wedge b$ denotes the minimum of $a$ and $b$.
\end{itemize}
\section{Kernel ridge regression in a multi-task setting \label{sec.model}}
We consider here that each task is treated as a kernel ridge-regression problem; we then extend the single-task ridge-regression estimator to a multi-task setting.
\subsection{Model and estimator}
Let $\Omega$ be a set, $\mathcal{A}$ be a $\sigma$-algebra on $\Omega$ and $\mathbb{P}$ be a probability measure on $\mathcal{A}$.
We observe $\mathcal{D}_n = (X_i,y_i^1,\ldots,y_i^p)_{i=1}^n \in (\mathcal{X} \times \mathbb{R}^p)^{n}$.
For each task $j \in \{ 1,\ldots,p\}$, $\mathcal{D}_n^j = (X_i,y_i^j)_{i=1}^n$ is a sample with distribution $\P^j$, whose first marginal distribution is $\P$, for which a simple regression problem has to be solved.
We assume that, for every $j\in\{1,\dots,p\}$, $F^j \in L^2(\PP)$, that $\S$ is a symmetric positive-definite matrix of size $p$ such that the vectors $(\varepsilon_i^j)_{j=1}^p$, $i\in\{1,\dots,n\}$, are independent and identically distributed (i.i.d.) with normal distribution $\mathcal{N}(0,\S)$, and that
\begin{equation} \label{modele}
\forall i \in \{ 1,\ldots,n\},\forall j \in \{ 1,\ldots,p\}, ~ y_i^j = F^j(X_i) + \varepsilon_i^j \enspace .
\end{equation}
We suppose here, for simplicity, that $\S = \sigma^2 I_p$, with $\sigma^2 \in \mathbb{R}_+^{\star}$.
\begin{rem}
This implies that the outputs of the different tasks are independent, which slightly simplifies the setting and allows lighter calculations.
It is to be noted, though, that the analysis carried afterwards can still take place without this assumption.
This can be dealt with by diagonalizing $\S$, upper bounding the quantities of interest using the largest eigenvalue of $\S$ and lower bounding them using its smallest eigenvalue.
The comparisons shown in Section~\ref{sec.MTSTcomp} then remain valid, up to a factor given by the condition number of $\S$. A fully data-driven estimation of~$\S$ was proposed by \citet{solnon:hal-00610534}.
\end{rem}
We consider here a fixed-design setting, that is, we consider the input points as fixed and want to predict the output of the functions $F^j$ on those input points only.
The analysis could be transferred to the random-design setting by using tools developed by \citet{hsu2011analysis}.
For an estimator $(\widehat{F}^1,\dots,\widehat{F}^p)$, the natural quadratic risk to consider is
\begin{equation*}
\esp{\frac{1}{np}\sum_{j=1}^p\sum_{i=1}^n(\widehat{F}^j(X_i)-F^j(X_i))^2 | (X_1,\dots,X_n)} \enspace .
\end{equation*}
For the sake of simplicity, all the expectations that follow will implicitly be written conditionally on $(X_1,\dots,X_n)$, in accordance with the fixed-design setting.
\begin{rem}
We will use the following notations from now on:
\begin{equation*}
f = \vect\paren{(F^j(X_i))_{i,j}},~ f^j = \vect\paren{(F^j(X_i))_{i=1}^n} ~\textrm{ and }~ y = \vect\paren{(y_i^j)_{i,j}} \enspace ,
\end{equation*}
so that, when using such vectorized notations, the elements are stacked task by task, the elements referring to the first task always being stored in the first entries of the vector, and so on.
\end{rem}
We want to estimate $f$ using elements of a particular function set.
Let $\mathcal{F}\subset L^2(\PP)$ be a reproducing kernel Hilbert space (RKHS) \citep{Aronszajn}, with kernel~$k$ and feature map $\Phi: \mathcal{X} \to \mathcal{F}$, which gives us the positive semidefinite kernel matrix $K = (k(X_i,X_{\ell}))_{1 \leq i , \ell \leq n} \in \mathcal{S}_n^+(\mathbb{R})$.
As done by \citet{solnon:hal-00610534}, we extend the multi-task estimators generalizing the ridge regression used by \citet{Evgeniou}. Given a positive-definite matrix $M \in \mathcal{S}_p^{++}(\mathbb{R})$, we consider the estimator
\begin{equation}\label{min.multi.M}
\widehat{F}_M \in \argmin_{g \in \mathcal{F}^p} \Bigg\{ \underbrace{\frac{1}{np} \sum_{i=1}^n \sum_{j=1}^p (y_i^j - g^j(X_i))^2}_{\textrm{Empirical risk}} + \underbrace{\sum_{j=1}^p \sum_{\ell=1}^p M_{j,\ell}\scal{g^j}{g^{\ell}}_{\mathcal{F}}}_{\textrm{Regularization term}} \Bigg\} \enspace .
\end{equation}
This leads to the fixed-design estimator
\begin{equation*}
\widehat{f}_M = A_{M}y \in \mathbb{R}^{np}\enspace ,
\end{equation*}
with
\begin{equation*}
A_{M} = A_{M,K}
:= \tilde{K}_{M}(\tilde{K}_{M}+np I_{np})^{-1} = (M^{-1} \otimes K)\left((M^{-1} \otimes K)+np I_{np}\right)^{-1} \enspace ,
\end{equation*}
where $\otimes$ denotes the Kronecker product (see the textbook of \citet{Horn_Johnson_Matrix_analysis} for simple properties of the Kronecker product).
\begin{rem}
This setting also captures the single-task setting.
Taking $j\in\{1,\dots,p\}$, with $f^j = (F^j(X_1),\dots,F^j(X_n))^{\top}$ the target signal for the $j$th task and $y^j = (y_1^j,\dots,y_n^j)^{\top}$ the observed output of the $j$th task, the single-task estimator for the $j$th task is (for $\l\in\mathbb{R}_+$)
\begin{equation*}
\widehat{f}^j_{\l} = A_{\l}y^j = K(K+n\l I_n)^{-1} y^j\enspace .
\end{equation*}
\end{rem}
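For concreteness, the following minimal NumPy sketch (ours, for illustration only; the function names are not from the literature) computes $\widehat{f}_M = A_M y$ by forming the Kronecker product explicitly, together with its single-task special case; it assumes $M$ is positive definite so that $M^{-1}$ exists.
\begin{verbatim}
import numpy as np

def vect(Y):
    # Stack the columns of the n x p matrix Y task by task,
    # as in the definition of vect(Y) above.
    return Y.flatten(order="F")

def multi_task_ridge(K, Y, M):
    # Return f_hat_M = A_M y for the kernel matrix K (n x n), the
    # observations Y (n x p) and the positive-definite matrix M (p x p).
    n, p = Y.shape
    K_M = np.kron(np.linalg.inv(M), K)  # M^{-1} (x) K
    A_M = K_M @ np.linalg.inv(K_M + n * p * np.eye(n * p))
    return A_M @ vect(Y)

def single_task_ridge(K, y, lam):
    # Special case p = 1: f_hat_lambda = K (K + n lam I_n)^{-1} y.
    n = len(y)
    return K @ np.linalg.solve(K + n * lam * np.eye(n), y)
\end{verbatim}
In practice one would rather solve the linear system than invert $\tilde{K}_{M}+np I_{np}$ explicitly; the direct form is kept here only to match the formulas above.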
\subsection{Two regularization terms for one problem}
A common hypothesis that motivates the use of multi-task estimators is that all the target functions of the different tasks lie in a single cluster (that is, the $p$ functions that are estimated are all close with respect to the norm defined on~$\mathcal{F}$).
Two different regularization terms are usually considered in this setting:
\begin{itemize}
\item one that penalizes the norms of the $p$ functions and of their differences, introduced by \citet{Evgeniou}, leading to the criterion (with $(g^j)_{j=1}^p \in \mathcal{F}^p,~ (\a,\b) \in (\mathbb{R}_+)^2$)
\begin{equation}\label{eq.MSD}
\frac{1}{np} \sum_{i=1}^n \sum_{j=1}^p (y_i^j - g^j(X_i))^2 + \frac{\a}{p} \sum_{j=1}^p \norm{g^j}_{\mathcal{F}}^2 + \frac{\b}{2p} \sum_{j=1}^p \sum_{k=1}^p \norm{g^j-g^k}_{\mathcal{F}}^2 \enspace ;
\end{equation}
\item one that penalizes the norms of the average of the $p$ functions and the resulting variance, leading to the criterion (with $(g^j)_{j=1}^p \in \mathcal{F}^p,~ (\l,\mu) \in (\mathbb{R}_+)^2$)
\begin{equation} \label{eq.MAV}
\frac{1}{np} \sum_{i=1}^n \sum_{j=1}^p (y_i^j - g^j(X_i))^2 + \l \norm{\frac{\sum_{j=1}^p g^j}{p}}_{\mathcal{F}}^2 + \mu\croch{\frac{\sum_{j=1}^p \norm{g^j}_{\mathcal{F}}^2}{p}-\norm{\frac{\sum_{j=1}^p g^j}{p}}_{\mathcal{F}}^2} \enspace .
\end{equation}
\end{itemize}
As we will see, those two penalties are closely related. Lemma~\ref{lemma.M} indeed shows that the two former penalties can be obtained as a special case of Equation~\eqref{min.multi.M}, the matrix $M$ being respectively
\begin{equation*}
M_{\SD}(\a,\b) := \frac{\a}{p} \frac{\boldsymbol{1}\boldsymbol{1}^{\top}}{p} + \frac{\a+p\b}{p}\paren{I_p-\frac{\boldsymbol{1}\boldsymbol{1}^{\top}}{p}}
\end{equation*}
and
\begin{equation*}
M_{\AV}(\l,\mu) := \frac{\l}{p}\frac{\boldsymbol{1}\boldsymbol{1}^{\top}}{p} + \frac{\mu}{p}\paren{I_p-\frac{\boldsymbol{1}\boldsymbol{1}^{\top}}{p}}\enspace .
\end{equation*}
Thus, we see that those two criteria are related, since $M_{\SD}(\a,\b) = M_{\AV}(\a,\a+p\b)$ for every $(\a,\b)$. Minimizing Equations~\eqref{eq.MSD} and \eqref{eq.MAV} over $\mathcal{F}^p$ respectively gives the ridge estimators $\widehat{f}_{\mathrm{SD}}(\a,\b) = A_{M_{\SD}(\a,\b)}y$ and $\widehat{f}_{\mathrm{AV}}(\l,\mu) = A_{M_{\AV}(\l,\mu)} y$.
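This identity is elementary but easy to get wrong; the following short numerical check (ours, illustrative only) constructs both matrices and verifies it:
\begin{verbatim}
import numpy as np

def M_AV(lam, mu, p):
    J = np.ones((p, p)) / p  # orthogonal projection onto span(1)
    return (lam / p) * J + (mu / p) * (np.eye(p) - J)

def M_SD(alpha, beta, p):
    J = np.ones((p, p)) / p
    return (alpha / p) * J + ((alpha + p * beta) / p) * (np.eye(p) - J)

p, alpha, beta = 5, 0.3, 0.7
assert np.allclose(M_SD(alpha, beta, p), M_AV(alpha, alpha + p * beta, p))
\end{verbatim}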
\begin{rem}
We can now see that the regularization terms used in Equations~\eqref{eq.MSD} and \eqref{eq.MAV} are equivalent when the parameters are not constrained to be positive. However, if one desires to use the regularization \eqref{eq.MSD} (that is, with $\l = \a$ and $\mu = \a+p\b$) and seeks to calibrate those parameters by taking them to be nonnegative (which is to be expected if they are seen as regularization parameters), the following problems could occur:
\begin{itemize}
\item if the optimization is carried over $(\l,\mu)$, then the selected parameter $\b = \frac{\mu-\l}{p}$ may be negative;
\item conversely, if the risk of the estimator defined by Equation~\eqref{eq.MSD} is optimized over the parameters $(\a,\a+p\b)$ with the constraints $\a \geq 0$ and $\b \geq 0$, then the infimum over $\mathbb{R}_+^2$ could never be approached.
\end{itemize}
\end{rem}
We will also show in the next section that the risk of $\widehat{f}_{\mathrm{AV}}(\l,\mu)$ nicely decomposes into two parts, the first depending only on $\l$ and the second only on $\mu$, which is not the case for $\widehat{f}_{\mathrm{SD}}(\a,\b)$ because of the aforementioned phenomenon.
This makes us prefer the second formulation and use the matrices $M_{\AV}$ instead of the matrices $M_{\SD}$.
\section{Decomposition of the risk \label{sec.risk}}
A fully data-driven selection of the hyper-parameters was proposed by \citet{Arl_Bac:2009:minikernel_long}, for the single-task ridge estimator, and by \citet{solnon:hal-00610534} for the multi-task estimator.
The single-task estimator is shown to have a risk which is close to the single-task oracle risk (in a fixed design)
\begin{equation*}
\mathfrak{R}^{\star}_{\mathrm{ST}} = \inf_{(\l^1,\dots,\l^p)\in\mathbb{R}_+^p}\set{\frac{1}{np}\esp{\sum_{j=1}^p \norr{\widehat{f}^j_{\l^j}-f^j}^2}} \enspace ,
\end{equation*}
while the multi-task estimator is shown to have a risk which is close to the multi-task oracle risk
\begin{equation*}
\mathfrak{R}^{\star}_{\mathrm{MT}} = \inf_{(\l,\mu)\in\mathbb{R}_+^2}\set{\frac{1}{np}\esp{\norr{\widehat{f}_{M_{\AV}(\l,\mu)} -f}^2}} \enspace .
\end{equation*}
The purpose of this paper is to closely study both oracle risks and, ultimately, to compare them.
We show in this section how to decompose the risk of an estimator obtained by minimizing Equation~\eqref{eq.MAV} over $(g^j)_{j=1}^p \in \mathcal{F}^p$.
A key point of this analysis is that the matrix $M_{\AV}(\l,\mu)$ naturally decomposes over two orthogonal vector-subspaces of $\mathbb{R}^p$.
By exploiting this decomposition we can simply use the classical bias-variance decomposition to analyse the Euclidean risk of those linear estimators.
\subsection{Eigendecomposition of the matrix $M_{\AV}(\l,\mu)$}
In this section we show that all the matrices $M_{\AV}(\l,\mu)$ have the same eigenvectors, which gives us a simple decomposition of the matrices $A_{M_{\AV}(\l,\mu)}$.
Let us denote by $(e_1,\dots,e_p)$ the canonical basis of $\mathbb{R}^p$. The eigenspaces of $p^{-1}\boldsymbol{1}\boldsymbol{1}^{\top}$ are orthogonal and correspond to:
\begin{itemize}
\item $\linspan{e_1+\dots+e_p}$ associated to eigenvalue $1$,
\item $\linspan{e_2-e_1,\dots,e_p-e_1}$ associated to eigenvalue $0$.
\end{itemize}
Thus, with $(\l,\mu)\in(\mathbb{R}_+)^2$, we can diagonalize in an orthonormal basis any matrix $M_{\AV}(\l,\mu)$ as $M = M_{\AV}(\l,\mu)=P^{\top} D_{\frac{\l}{p},\frac{\mu}{p}}P$, with $D = D_{\frac{\l}{p},\frac{\mu}{p}} = \diag\{\frac{\l}{p},\frac{\mu}{p},\dots,\frac{\mu}{p}\}$.
Let us also diagonalize $K$ in an orthonormal basis: $K = Q^{\top} \Delta Q$, with $\Delta = \diag\{\gamma_1,\dots,\gamma_n\}$. Then
\begin{equation*}
A_M = A_{M_{\AV}(\l,\mu)} = (P^{\top} \otimes Q^{\top}) \left[(D^{-1}\otimes \Delta)\left((D^{-1}\otimes \Delta)+npI_{np} \right)^{-1} \right](P \otimes Q) \enspace .
\end{equation*}
We can then note that $(D^{-1}\otimes \Delta)\left((D^{-1}\otimes \Delta)+npI_{np} \right)^{-1}$ is a diagonal matrix, whose diagonal entry of index $(j-1)n+i$ ($i\in\{1,\dots,n\}$, $j\in\{1,\dots,p\}$) is
\begin{equation*}
\left\{ \begin{array}{lr} &\frac{\gamma_i}{\gamma_i+n\l} \textrm{ if } j=1\enspace ,\\
&\frac{\gamma_i}{\gamma_i+n\mu} \textrm{ if } j>1\enspace . \end{array}\right.
\end{equation*}
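This diagonal structure can be checked numerically; in the following sketch (ours, on a randomly generated Gaussian kernel matrix), the spectrum of $A_{M_{\AV}(\l,\mu)}$ is compared with the entries given above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, mu = 6, 4, 0.1, 0.5
X = rng.standard_normal((n, 3))
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
gamma = np.linalg.eigvalsh(K)

J = np.ones((p, p)) / p
M = (lam / p) * J + (mu / p) * (np.eye(p) - J)  # M_AV(lam, mu)
K_M = np.kron(np.linalg.inv(M), K)
A_M = K_M @ np.linalg.inv(K_M + n * p * np.eye(n * p))

spectrum = np.sort(np.linalg.eigvalsh((A_M + A_M.T) / 2))
expected = np.sort(np.concatenate(
    [gamma / (gamma + n * lam)] + [gamma / (gamma + n * mu)] * (p - 1)))
assert np.allclose(spectrum, expected)
\end{verbatim}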
In what follows we will use the following notations:
\begin{itemize}
\item for every $j\in\{1,\dots,p\}$, $(h^j_i)_{i=1}^n$ denotes the coordinates of $(f^j(X_i))_{i=1}^n$ in the basis that diagonalizes $K$,
\item for every $i\in\{1,\dots,n\}$, $(\nu^j_i)_{j=1}^p$ denotes the coordinates of $(h^j_i)_{j=1}^p$ in the basis that diagonalizes $M$.
\end{itemize}
To sum up, we have:
\begin{equation*}
\forall j\in\{1,\dots,p\},~\begin{pmatrix} h^j_1\\\vdots\\h^j_n \end{pmatrix} = Q \begin{pmatrix} f^j(X_1)\\\vdots\\f^j(X_n) \end{pmatrix}
\end{equation*}
and
\begin{equation*}
\forall i\in\{1,\dots,n\},~\begin{pmatrix} \nu^1_i\\\vdots\\\nu^p_i \end{pmatrix} = P ~\begin{pmatrix} h^1_i\\\vdots\\h^p_i \end{pmatrix} \enspace .
\end{equation*}
With the usual notation $\nu^j = (\nu^j_1,\dots,\nu^j_n)^{\top}$ and with $f$ defined as above, we get, by using elementary properties of the Kronecker product,
\begin{equation*}
\nu = \begin{pmatrix} \nu^1\\\vdots\\\nu^p \end{pmatrix} = (P\otimes Q) f \enspace .
\end{equation*}
\subsection{Bias-variance decomposition}
We now use a classical bias-variance decomposition of the risk of $\widehat{f}_{\mathrm{AV}}(\l,\mu)$ and show that the quantities introduced above allow a simple expression of this risk.
For any matrix $M \in \mathcal{S}_p^{++}(\mathbb{R})$, the classical bias-variance decomposition for the linear estimator $\widehat{f}_M = A_My$ is
\begin{align*}
\frac{1}{np}\esp{\norr{\widehat{f}_{M} -f}^2} &= \frac{1}{np}\|(A_{M}-I_{np})f\|_2^2 + \frac{1}{np}\tr(A_{M}^{\top} A_{M}\cdot(\S \otimes I_n)) \\
&= \underbrace{\frac{1}{np}\|(A_{M}-I_{np})f\|_2^2}_{\textrm{Bias}} +\underbrace{\frac{\sigma^2}{np} \tr(A_{M}^{\top} A_{M})}_{\textrm{Variance}} \enspace .
\end{align*}
We can now compute both bias and variance of the estimator $\widehat{f}_{\mathrm{AV}}(\l,\mu)$ by decomposing $A_{M_{\AV}(\l,\mu)}$ on the eigenbasis introduced in the previous section.
\begin{description}
\item[$np\times$Variance:]
\begin{align*}
&\sigma^2\tr(A_{M}^{\top} A_{M}) \\
=~~&\sigma^2 \tr\bigg( (P \otimes Q)^{\top}\left[(D^{-1}\otimes \Delta)\left((D^{-1}\otimes \Delta)+npI_{np} \right)^{-1}\right]^2 (P \otimes Q) \bigg) \\
=~~&\sigma^2 \tr\bigg(\left[(D^{-1}\otimes \Delta)\left((D^{-1}\otimes \Delta)+npI_{np} \right)^{-1}\right]^2\bigg)\\
=~~&\sigma^2 \sum_{i=1}^n \left[\left(\frac{\gamma_i}{\gamma_i+n\l}\right)^2+(p-1)\left(\frac{\gamma_i}{\gamma_i+n\mu}\right)^2\right] \enspace .
\end{align*}
\item[$np\times$Bias:]
\begin{align*}
&\|(A_{M}-I_{np})f\|_2^2 \\
=~~& \|(P \otimes Q)^{\top} \left[(D^{-1}\otimes K)\left((D^{-1}\otimes K)+npI_{np} \right)^{-1} -I_{np}\right](P \otimes Q)f\|_2^2 \\
=~~& \|\left[(D^{-1}\otimes \Delta)\left((D^{-1}\otimes \Delta)+npI_{np} \right)^{-1} -I_{np}\right] \nu\|_2^2 \\
=~~& (n\l)^2\sum_{i=1}^n\frac{(\nu_i^1)^2}{(\gamma_i+n\l)^2} +(n\mu)^2 \sum_{i=1}^n\sum_{j=2}^p \frac{(\nu^j_i)^2}{(\gamma_i+n\mu)^2} \\
=~~& (n\l)^2\sum_{i=1}^n\frac{(\nu_i^1)^2}{(\gamma_i+n\l)^2} +(n\mu)^2 \sum_{i=1}^n \frac{\sum_{j=2}^p(\nu^j_i)^2}{(\gamma_i+n\mu)^2} \enspace .
\end{align*}
\end{description}
\noindent Thus, the risk of $\widehat{f}_{\mathrm{AV}}(\l,\mu)$ becomes
\begin{equation}\label{optim.ind}
\begin{split}
n\l^2\sum_{i=1}^n\frac{\frac{(\nu_i^1)^2}{p}}{(\gamma_i+n\l)^2} + \frac{\sigma^2}{np} \sum_{i=1}^n \left(\frac{\gamma_i}{\gamma_i+n\l}\right)^2 \\
+ n\mu^2 \sum_{i=1}^n \frac{\frac{\sum_{j=2}^p(\nu^j_i)^2}{p}}{(\gamma_i+n\mu)^2} + \frac{\sigma^2(p-1)}{np} \sum_{i=1}^n\left(\frac{\gamma_i}{\gamma_i+n\mu}\right)^2 \enspace .
\end{split}
\end{equation}
This decomposition has two direct consequences:
\begin{itemize}
\item the oracle risk of the multi-task procedure can be obtained by optimizing Equation~\eqref{optim.ind} independently over $\l$ and $\mu$;
\item the estimator $\widehat{f}_{\mathrm{AV}}$ can be calibrated by independently calibrating two parameters.
\end{itemize}
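The following sketch (ours, illustrative only) checks Equation~\eqref{optim.ind} on a randomly generated example, by comparing it with the direct bias-variance computation of the risk:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, p, lam, mu, sigma2 = 6, 4, 0.1, 0.5, 1.0
X = rng.standard_normal((n, 3))
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
gamma, V = np.linalg.eigh(K)     # K = V diag(gamma) V^T, i.e. Q = V^T
F = rng.standard_normal((n, p))  # arbitrary signal values F^j(X_i)
f = F.flatten(order="F")

# Direct bias-variance computation of the risk.
J = np.ones((p, p)) / p
M = (lam / p) * J + (mu / p) * (np.eye(p) - J)
K_M = np.kron(np.linalg.inv(M), K)
A = K_M @ np.linalg.inv(K_M + n * p * np.eye(n * p))
risk_direct = (np.linalg.norm((A - np.eye(n * p)) @ f) ** 2
               + sigma2 * np.trace(A.T @ A)) / (n * p)

# Decomposition of Equation (optim.ind).
H = V.T @ F                                        # coordinates h_i^j
m_i = H.sum(axis=1) / np.sqrt(p)                   # nu_i^1 (mean part)
vs2 = (H ** 2).mean(axis=1) - H.mean(axis=1) ** 2  # varsigma_i^2
risk_dec = (n * lam ** 2 * np.sum((m_i ** 2 / p) / (gamma + n * lam) ** 2)
            + sigma2 / (n * p) * np.sum((gamma / (gamma + n * lam)) ** 2)
            + n * mu ** 2 * np.sum(vs2 / (gamma + n * mu) ** 2)
            + sigma2 * (p - 1) / (n * p)
            * np.sum((gamma / (gamma + n * mu)) ** 2))
assert np.isclose(risk_direct, risk_dec)
\end{verbatim}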
It is now easy to optimize over the quantities in Equation~\eqref{optim.ind}. An interesting fact is that both sides have a natural and interesting interpretation, which we give now.
\subsection{Remark}
To avoid further ambiguities and to simplify the formulas we introduce the following notations for every $i\in\{1,\dots,n\}$:
\begin{equation*}
\mu_i = \nu_i^1 = \frac{h_i^1+\dots + h_i^p}{\sqrt{p}}
\end{equation*}
and
\begin{equation*}
\varsigma_i^2 = \frac{\sum_{j=1}^p(h_i^j)^2}{p} -\left( \frac{\sum_{j=1}^p h_i^j}{p}\right)^2 = \frac{1}{p} \sum_{j=1}^p \paren{h_i^j-\frac{\sum_{j=1}^p h_i^j}{p}}^2 \enspace ,
\end{equation*}
so that
\begin{equation*}
p\varsigma_i^2 = \sum_{j=2}^p(\nu_i^j)^2\enspace .
\end{equation*}
\begin{rem}
We can see that for every $i\in\{1,\dots,n\}$, $\mu_i/\sqrt{p}$ is the average of the $p$ target functions $f^j$, expressed on the basis diagonalizing $K$. Likewise, $\varsigma_i^2$ can be seen as the variance between the $p$ target functions $f^j$ (which does not come from the noise).
\end{rem}
\noindent Hence, the risk of $\widehat{f}_{\mathrm{AV}}(\l,\mu)$ over $(\l,\mu)$ is decoupled into two parts.
\begin{itemize}
\item With the parameter $\l$, a part which corresponds to the risk of a single-task ridge estimator, which regularizes the mean of the task functions, with a noise variance $\sigma^2/p$:
\begin{equation} \label{risque.moy.lambda}
n\l^2\sum_{i=1}^n\frac{\frac{\mu_i^2}{p}}{(\gamma_i+n\l)^2} + \frac{\sigma^2}{np} \sum_{i=1}^n \left(\frac{\gamma_i}{\gamma_i+n\l}\right)^2 \enspace .
\end{equation}
\item With the parameter $\mu$, a part which corresponds to the risk of a single-task ridge estimator, which regularizes the variance of the task functions, with a noise variance $(p-1)\sigma^2/p$:
\begin{equation}\label{risque.var.lambda}
n\mu^2 \sum_{i=1}^n \frac{\varsigma_i^2}{(\gamma_i+n\mu)^2} + \frac{(p-1)\sigma^2}{np} \sum_{i=1}^n\left(\frac{\gamma_i}{\gamma_i+n\mu}\right)^2 \enspace .
\end{equation}
\end{itemize}
\begin{rem}
Our analysis can also be applied to any set $\mathcal{M}$ of positive semi-definite matrices that are jointly diagonalizable in an orthonormal basis, as is the case for $\set{M_{\AV}(\l,\mu),(\l,\mu)\in\mathbb{R}^2_+}$.
The elements of interest then become the norms of the projections of the task functions on the different eigenspaces (here, the mean and the resulting variance of the $p$ tasks).
An example of such a set arises when the tasks are known to be split into several clusters, the assignment of each task to its cluster being known to the statistician.
The matrices that can be used then regularize the mean of the tasks and, for each cluster, the variance of the tasks belonging to this cluster.
\end{rem}
\section{Precise analysis of the multi-task oracle risk \label{sec.MTrisk}}
In the previous section we showed that, in order to obtain the multi-task oracle risk, we only have to optimize several functions, each having the form of the risk of a kernel ridge estimator.
The risk of those estimators has already been widely studied.
\citet{Johnstone1994} (see also the article of \citet{CapDeVito} for random design) showed that, for a single-task ridge estimator, if the coefficients of the decomposition of the input function on the eigenbasis of the kernel decrease as $i^{-2\d}$, with $2\d >1$, then the minimax rate for the estimation of this input function is of order $n^{1/2\d-1}$.
The kernel ridge estimator is then known to be minimax optimal, under certain regularity assumptions (see the work of \citet{bach2012_column_sampling} for more details). If the eigenvalues of the kernel are known to decrease as $i^{-2\b}$, then a single-task ridge estimator is minimax optimal under the following assumption:
\begin{equation} \tag{\bf$\textrm{H}_{\textrm{M}}(\b,\d)$}\label{hyp.minimax}
1 < 2\d < 4\b+1 \enspace .
\end{equation}
The analysis carried out in the previous section shows that the key elements to express this risk are the components of the average of the signals ($\mu_i$) and the components of the variance of the signals ($\varsigma_i$) on the basis that diagonalizes the kernel matrix $K$, together with the eigenvalues of this matrix ($\gamma_i$).
It is then natural to impose the same natural assumptions that make the single-task ridge estimator optimal on those elements.
We first suppose that the eigenvalues of the kernel matrix have a polynomial decrease rate:
\begin{equation} \tag{\bf$\textrm{H}_{\textrm{K}}(\b)$}\label{hyp.K}
\forall i \in \{1,\dots,n\},~\gamma_i = ni^{-2\b} \enspace .
\end{equation}
Then, we assume that the components of the average of the signals and of the variance of the signals also have a polynomial decrease rate:
\begin{equation} \tag{\bf$\textrm{H}_{\textrm{AV}}(\d,C_1,C_2)$}\label{hyp.AV}
\forall i \in \{1,\dots,n\},~\left\{ \begin{array}{rcl} \frac{\mu_i^2}{p} &=& C_1 n i^{-2\d} \\ \varsigma_i^2 &=& C_2 n i^{-2\d} \end{array} \right. \enspace .
\end{equation}
\begin{rem}
We assume for simplicity that both Assumptions~\eqref{hyp.K} and \eqref{hyp.AV} hold with equality, although only the equivalence $\asymp$ is needed.
\end{rem}
\begin{ex}
This example, related to Assumptions \eqref{hyp.AV} and \eqref{hyp.K} by taking $\b=m$ and $2\d = k+2$, is detailed by \citet{Wah:1990} and by \citet{gu2002smoothing}.
Let $\mathcal{P}\paren{2\pi}$ be the set of all square-integrable $2\pi$-periodic functions on $\mathbb{R}$, let $m\in\mathbb{N}^{\star}$ and define $\mathcal{H}=\set{f\in\mathcal{P}\paren{2\pi},~f_{|\croch{0,2\pi}}^{(m)}\in L^2\croch{0,2\pi}}$.
This set $\mathcal{H}$ has a RKHS structure, with a reproducing kernel having the Fourier base functions as eigenvectors.
The $i$-th eigenvalue of this kernel is $i^{-2m}$.
For any function $f\in\mathcal{P}\paren{2\pi}\cap\mathcal{C}^k\croch{0,2\pi}$, its Fourier coefficients are $O(i^{-k})$.
For instance, if $f\in\mathcal{P}\paren{2\pi}$ is such that $\forall x \in \croch{-\pi,\pi},~ f^{(k)}(x) = \absj{x}$, then its Fourier coefficients are $\asymp i^{-(k+2)}$.
\end{ex}
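The decay claimed in this example can be checked numerically; the following sketch (ours) estimates the Fourier coefficients of $x\mapsto\absj{x}$ by a discrete Fourier transform and verifies that they decrease as $i^{-2}$ (so that a $k$-fold antiderivative has coefficients of order $i^{-(k+2)}$):
\begin{verbatim}
import numpy as np

N = 2 ** 14
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
c = np.fft.rfft(np.abs(x)) / N      # approximate Fourier coefficients
i = np.arange(1, 50, 2)             # odd frequencies (even ones vanish)
print(np.abs(c[i]) * i ** 2)        # approximately constant (2/pi)
\end{verbatim}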
Under Assumptions~\eqref{hyp.K} and \eqref{hyp.AV}, we can now more precisely express the risk of a multi-task estimator. Equation~\eqref{risque.moy.lambda} thus becomes
\begin{align*}
& n\l^2\sum_{i=1}^n\frac{\frac{\mu_i^2}{p}}{(\gamma_i+n\l)^2} + \frac{\sigma^2}{np} \sum_{i=1}^n \left(\frac{\gamma_i}{\gamma_i+n\l}\right)^2 \\
=~~ & n\l^2\sum_{i=1}^n\frac{C_1ni^{-2\d}}{(ni^{-2\b}+n\l)^2} + \frac{\sigma^2}{np} \sum_{i=1}^n \left(\frac{ni^{-2\b}}{ni^{-2\b}+n\l}\right)^2 \\
=~~ & C_1\l^2 \sum_{i=1}^n\frac{i^{4\b-2\d}}{(1+\l i^{2\b})^2}+ \frac{\sigma^2}{np} \sum_{i=1}^n \frac{1}{\left(1+\l i^{2\b}\right)^2} \\
=~~ & R(n,p,\sigma^2,\l,\b,\d,C_1) \enspace ,
\end{align*}
while Equation~\eqref{risque.var.lambda} becomes
\begin{align*}
& n\mu^2 \sum_{i=1}^n \frac{\varsigma_i^2}{(\gamma_i+n\mu)^2} + \frac{(p-1)\sigma^2}{np} \sum_{i=1}^n\left(\frac{\gamma_i}{\gamma_i+n\mu}\right)^2 \\
=~~ &n\mu^2\sum_{i=1}^n\frac{C_2ni^{-2\d}}{(ni^{-2\b}+n\mu)^2} + \frac{(p-1)\sigma^2}{np} \sum_{i=1}^n \left(\frac{ni^{-2\b}}{ni^{-2\b}+n\mu}\right)^2 \\
=~~ & C_2\mu^2 \sum_{i=1}^n\frac{i^{4\b-2\d}}{(1+\mu i^{2\b})^2}+ \frac{(p-1)\sigma^2}{np} \sum_{i=1}^n \frac{1}{\left(1+\mu i^{2\b}\right)^2} \\
=~~ & R(n,p,(p-1)\sigma^2,\mu,\b,\d,C_2) \enspace ,
\end{align*}
with
\begin{equation}\label{eq.def.R}
R(n,p,\sigma^2,x,\b,\d,C) = Cx^2 \sum_{i=1}^n\frac{i^{4\b-2\d}}{(1+x i^{2\b})^2}+ \frac{\sigma^2}{np} \sum_{i=1}^n \frac{1}{\left(1+x i^{2\b}\right)^2} \enspace .
\end{equation}
\begin{rem}
It is to be noted that the function $R$ corresponds to the risk of a single-task ridge estimator when $p=1$ and when the squared coefficients of the input function on the eigenbasis of $K$ decrease as $i^{-2\d}$.
Thus, studying $R$ will allow us to derive both single-task and multi-task oracle rates.
\end{rem}
\subsection{Study of the optimum of $R(n,p,\sigma^2,\cdot,\b,\d,C)$}
We just showed that the function $R(n,p,\sigma^2,\cdot,\b,\d,C)$ is suited to deriving both the single-task and the multi-task oracle risks.
\citet{bach2012_column_sampling} showed how to obtain an upper bound on the function $R(n,p,\sigma^2,\cdot,\b,\d,C)$, whose infimum was shown to match the minimax rates under Assumption~\eqref{hyp.minimax}.
In this section, we first propose a slightly more precise upper bound of this risk function.
We then show how to obtain a lower bound on this infimum that matches the aforementioned upper bound.
This will be done by precisely localizing the parameter minimizing $R(n,p,\sigma^2,\cdot,\b,\d,C)$.
Let us first introduce the following notation:
\begin{definition}
\begin{equation*}
R^{\star}(n,p,\sigma^2,\b,\d,C) = \inf_{\l\in\mathbb{R}_+} \set{R(n,p,\sigma^2,\l,\b,\d,C)} \enspace .
\end{equation*}
\end{definition}
We now give the upper bound on $R^{\star}(n,p,\sigma^2,\b,\d,C)$. For simplicity, we will denote by $\kappa(\b,\d)$ a constant, defined in Equation~\eqref{def.kbd}, which only depends on $\b$ and $\d$.
\begin{property}\label{prop.maj.risk}
Let $n$ and $p$ be positive integers, $\sigma$, $\b$ and $\d$ positive real numbers such that \eqref{hyp.minimax}, \eqref{hyp.K} and \eqref{hyp.AV} hold. Then,
\begin{equation}\label{eq.maj.R.opt}
R^{\star}(n,p,\sigma^2,\b,\d,C) \leq \paren{2^{1/2\d}\paren{\frac{np}{\sigma^2}}^{1/2\d-1}C^{1/2\d}\kappa(\b,\d)}\wedge \frac{\sigma^2}{p} \enspace .
\end{equation}
\end{property}
\begin{proof}
Property~\ref{prop.maj.risk} is proved in Section~\ref{app.proof.prop.maj.risk} of the appendix.
\end{proof}
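These bounds can be illustrated numerically; the following sketch (ours, assuming NumPy and SciPy are available) implements the function $R$ of Equation~\eqref{eq.def.R}, minimizes it over the regularization parameter and normalizes the minimum by the rate $(np/\sigma^2)^{1/2\d-1}$, which should stabilize as $n$ grows, consistently with Property~\ref{prop.maj.risk} above and Property~\ref{prop.min.risk} below:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def R(n, p, sigma2, x, beta, delta, C):
    # Risk function of Equation (eq.def.R).
    i = np.arange(1, n + 1)
    bias = C * x ** 2 * np.sum(i ** (4 * beta - 2 * delta)
                               / (1 + x * i ** (2 * beta)) ** 2)
    var = sigma2 / (n * p) * np.sum(1.0 / (1 + x * i ** (2 * beta)) ** 2)
    return bias + var

beta, delta, p, sigma2, C = 2.0, 1.5, 1, 1.0, 1.0  # 1 < 2*delta < 4*beta
for n in [100, 1000, 10000]:
    res = minimize_scalar(lambda x: R(n, p, sigma2, x, beta, delta, C),
                          bounds=(1e-12, 1.0), method="bounded")
    rate = (n * p / sigma2) ** (1 / (2 * delta) - 1)
    print(n, res.fun, res.fun / rate)  # last column roughly constant
\end{verbatim}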
In the course of showing Property~\ref{prop.maj.risk}, we obtained an upper bound on the risk function $R$ that holds uniformly on $\mathbb{R}_+$.
Obtaining a similar (up to multiplicative constants) lower bound that also holds uniformly on $\mathbb{R}_+$ is unrealistic.
However, we will be able to lower bound $R^{\star}$ by showing that $R$ is minimized by an optimal parameter $\l^{\star}$ that goes to 0 as $n$ goes to $+\infty$.
\begin{property}\label{prop.param.reg.maj}
If Assumption~\eqref{hyp.minimax} holds, the risk $R(n,p,\sigma^2,\cdot,\b,\d,C)$ attains its global minimum over $\mathbb{R}_+$ on $[0,\varepsilon\paren{\frac{np}{\sigma^2}}]$, with
\begin{equation*}
\varepsilon\paren{\frac{np}{\sigma^2}} = \sqrt{ C^{(1/2\d)-1}2^{1/2\d}\kappa(\b,\d)} \times\frac{1}{\paren{\frac{np}{\sigma^2}}^{1/2-(1/4\d)}}\paren{1+\eta\paren{\frac{np}{\sigma^2}}} \enspace ,
\end{equation*}
where $\eta(x)$ goes to $0$ as $x$ goes to $+\infty$.
\end{property}
\begin{proof}
Property~\ref{prop.param.reg.maj} is shown in Section~\ref{proof.prop.param.reg.maj} of the appendix.
\end{proof}
\begin{rem}
Thanks to the assumption made on $\d$, $\frac{1}{2\d}-1<0$ so that $\paren{\frac{np}{\sigma^2}}^{\frac{1}{2\d}-1}$ goes to 0 as $\frac{np}{\sigma^2}$ goes to $+\infty$.
This allows us to state that, if the other parameters are constant, $\l^{\star}$ goes to 0 as the quantity $\frac{np}{\sigma^2}$ goes to $+\infty$.
\end{rem}
We can now give a lower bound on $R^{\star}(n,p,\sigma^2,\b,\d,C)$.
We will give two versions of this lower bound. First, we state a general result.
\begin{property}\label{prop.min.risk}
For every $(C,\b,\d)$ such that $1<2\d<4\b$ holds, there exist an integer $N$ and a constant $\a\in(0,1)$ such that, for every $(n,p,\sigma^2)$ verifying $\frac{np}{\sigma^2} \geq N$, we have
\begin{equation}\label{eq.min.R.opt}
R^{\star}(n,p,\sigma^2,\b,\d,C) \geq \paren{\a\paren{\frac{np}{\sigma^2}}^{1/2\d-1}C^{1/2\d}\kappa(\b,\d)}\wedge\frac{\sigma^2}{4p} \enspace .
\end{equation}
\end{property}
\begin{proof}
Property~\ref{prop.min.risk} is proved in Section~\ref{sec.proof.prop.min.risk} of the appendix.
\end{proof}
\begin{rem}
It is to be noted that $N$ and $\a$ only depend on $\b$ and $\d$. We can also remark that $\a$ can be taken arbitrarily close to
\begin{equation*}
\frac{\int_{0}^{1} \frac{u^{\frac{1}{2\b}-1}}{(1+u)^2}du}{\int_{0}^{+\infty} \frac{u^{\frac{1}{2\b}-1}}{(1+u)^2}du} \wedge \frac{\int_{0}^{1} \frac{u^{\frac{1-2\d}{2\b}+1}}{(1+u)^2}du}{\int_{0}^{+\infty} \frac{u^{\frac{1-2\d}{2\b}+1}}{(1+u)^2}du} \enspace .
\end{equation*}
Numerical computations show that, by taking $\b=\d=2$, this constant is larger than $0.33$.
\end{rem}
\begin{rem}
The assumption made on $\b$ and $\d$ is slightly more restrictive than \eqref{hyp.minimax}, under which the upper bound is shown to hold and under which the single-task estimator is shown to be minimax optimal.
\end{rem}
We are now ensured that $R$ attains its global minimum on $\mathbb{R}_+$, thus we can give the following definition.
\begin{definition}
For every $n$, $p$, $\sigma^2$, $\d$, $\b$ and $C$, under the assumption of Property~\ref{prop.param.reg.maj}, we introduce
\begin{equation*}
\l^{\star}_R \in \argmin_{\l\in\mathbb{R}_+}\set{R(n,p,\sigma^2,\l,\b,\d,C)} \enspace .
\end{equation*}
\end{definition}
We now give a slightly refined version of Property~\ref{prop.min.risk}, by discussing whether the optimal parameter $\l^{\star}_R$ is larger or smaller than the threshold $n^{-2\b}$.
This allows us to better understand the effect of regularization on the oracle risk $R^{\star}$.
\begin{property}\label{prop.param.reg.min}
For every $(\b,\d)$ such that $4\b>2\d>1$, integers $N_1$ and $N_2$ exist such that
\begin{enumerate}
\item for every $(n,p,\sigma^2)$ verifying $\frac{np}{\sigma^2} \geq N_1$ and $n^{1-2\d} \times \frac{p}{\sigma^2} \leq \frac{1}{N_2}$, then
\begin{equation*}
\l^{\star}_R \geq \frac{1}{n^{2\b}}
\end{equation*}
and
\begin{equation*}
R^{\star}(n,p,\sigma^2,\b,\d,C) \asymp \paren{\frac{\sigma^2}{np}}^{1-1/2\d} \enspace .
\end{equation*}
\item for every $(n,p,\sigma^2)$ verifying $\frac{np}{\sigma^2} \geq N_1$ and $n^{1-2\d} \times \frac{p}{\sigma^2} \geq N_2$, then
\begin{equation*}
\l^{\star}_R \leq \frac{1}{n^{2\b}}
\end{equation*}
and
\begin{equation*}
R^{\star}(n,p,\sigma^2,\b,\d,C) \asymp R(n,p,\sigma^2,0,\b,\d,C) \asymp \frac{\sigma^2}{p} \enspace ;
\end{equation*}
\end{enumerate}
\end{property}
\begin{proof}
Property~\ref{prop.param.reg.min} is proved in Section~\ref{sec.proof.prop.param.reg.min} of the appendix.
\end{proof}
\begin{rem}
If $p \leq n\sigma^2$ and $\d > 1$ then we are in the first case, for a large enough $n$.
This is a case where regularization has to be employed in order to obtain optimal convergence rates.
\end{rem}
\begin{rem}
If $\sigma^2$ and $n$ are fixed and $p$ goes to $+\infty$ then we are in the second case.
It is then almost useless to regularize, since regularization can lower the risk by at most a factor $4$.
This also corresponds to a single-task setting where the noise variance $\sigma^2$ is very small and where the estimation problem becomes trivial.
\end{rem}
\subsection{Multi-task oracle risk}
We can now use the upper and lower bounds on $R^{\star}$ to control the oracle risk of the multi-task estimator.
We define
\begin{equation*}
\l^{\star} \in \argmin_{\l\in\mathbb{R}_+}\set{ R(n,p,\sigma^2,\l,\b,\d,C_1)}
\end{equation*}
and
\begin{equation*}
\mu^{\star} \in \argmin_{\mu\in\mathbb{R}_+}\set{R(n,p,(p-1)\sigma^2,\mu,\b,\d,C_2)} \enspace .
\end{equation*}
Property~\ref{prop.param.reg.maj} ensures that $\l^{\star}$ and $\mu^{\star}$ exist, even though they are not necessarily unique. The oracle risk then is
\begin{equation*}
\mathfrak{R}^{\star}_{\mathrm{MT}} = \inf_{(\l,\mu)\in\mathbb{R}_+^2}\set{\frac{1}{np}\esp{\norr{\widehat{f}_{M_{\AV}(\l,\mu)} -f}^2}} = \frac{1}{np}\esp{\norr{\widehat{f}_{M_{\AV}(\l^{\star},\mu^{\star})} -f}^2} \enspace .
\end{equation*}
We now state the main result of this paper, which simply comes from the analysis of $R^{\star}$ performed above.
\begin{theorem}\label{thm.MT.oracle}
For every $n$, $p$, $C_1$, $C_2$, $\sigma^2$, $\b$ and $\d$ such that Assumption~\eqref{hyp.minimax} holds, we have
\begin{equation}\label{eq.maj.oracle.MT}
\mathfrak{R}^{\star}_{\mathrm{MT}} \leq2^{1/2\d} \paren{\frac{np}{\sigma^2}}^{1/2\d-1}\kappa(\b,\d) \croch{C_1^{1/2\d} + (p-1)^{1-(1/2\d)}C_2^{1/2\d}} \enspace .
\end{equation}
Furthermore, constants $N$ and $\a \in (0,1)$ exist such that, if $n \geq N$, $p/\sigma^2 \leq n$ and $2<2\d<4\b$, we have
\begin{equation}\label{eq.min.oracle.MT}
\mathfrak{R}^{\star}_{\mathrm{MT}} \geq \a\paren{\frac{np}{\sigma^2}}^{1/2\d-1}\kappa(\b,\d) \croch{C_1^{1/2\d} + (p-1)^{1-(1/2\d)}C_2^{1/2\d}} \enspace .
\end{equation}
\end{theorem}
\begin{proof}
The risk of the multi-task estimator $\widehat{f}_{M_{\AV}(\l,\mu)}$ can be written as
\begin{equation*}
R(n,p,\sigma^2,\l,\b,\d,C_1) + R(n,p,(p-1)\sigma^2,\mu,\b,\d,C_2) \enspace .
\end{equation*}
We then apply Properties~\ref{prop.maj.risk} and \ref{prop.min.risk}, since $p/\sigma^2 \leq n$ implies that $p/(p-1)\sigma^2 \leq n$.
The assumption $\d >1$ ensures that the first setting of Property~\ref{prop.param.reg.min} holds.
\end{proof}
\begin{rem}
An interesting fact is that the oracle multi-task risk is of the order $(np/\sigma^2)^{1/2\d-1}$.
This corresponds to the risk of a single-task ridge estimator with sample size $np$.
\end{rem}
\begin{rem}
As noted before, the assumption under which the lower bound holds is slightly stronger than Assumption~\eqref{hyp.minimax}.
\end{rem}
\section{Single-task oracle risk \label{sec.STrisk}}
In the previous section we obtained a precise approximation of the multi-task oracle risk $\mathfrak{R}^{\star}_{\mathrm{MT}}$.
We would now like to obtain a similar approximation for the single-task oracle risk $\mathfrak{R}^{\star}_{\mathrm{ST}}$.
In the light of Section~\ref{sec.risk}, the only element we need to obtain the oracle risk of task $j\in\{1,\dots,p\}$ is the expression of $(h_i^j)_{i=1}^n$, that is, the coordinates of $(f^j(X_i))_{i=1}^n$ on the eigenbasis of $K$.
Unfortunately, Assumption~\eqref{hyp.AV} does not correspond to a single set of task functions $(f^1,\dots,f^p)$.
Thus, since several single-task settings can lead to the same multi-task oracle risk, we now explicitly define two configurations of the task functions $(f^1,\dots,f^p)$, for which the single-task oracle risk will be computed.
\begin{itemize}
\item ``2 points'': suppose, for simplicity, that $p$ is even and that
\begin{equation} \label{2Points} \tag{2Points}
f^1 = \dots = f^{p/2} ~\textrm{ and }~ f^{p/2+1} = \dots = f^p\enspace .
\end{equation}
\item ``1 outlier'':
\begin{equation} \label{1Out} \tag{1Out}
f^1 = \dots = f^{p-1} \enspace .
\end{equation}
\end{itemize}
Both assumptions correspond to settings in which the multi-task procedure would legitimately be used.
Assumption~\eqref{2Points} models the fact that all the functions lie in a cluster of small radius.
It supposes that the functions are split into two groups of equal size, in order to be able to explicitly derive the single-task oracle risk.
Assumption~\eqref{1Out} supposes that all the functions are grouped in one cluster, with one outlier.
In order to make the calculations possible, all the functions in one group are assumed to be equal.
Since this is not a fully convincing situation to study the behaviour of the multi-task oracle, simulation experiments were also run on less restrictive settings.
The results of those experiments are shown in Section~\ref{sec.simus}.
\begin{rem}
The hypotheses \eqref{2Points} and \eqref{1Out} made on the functions $f^j$ can be expressed on $(h_i^j)$. Assumption~\eqref{2Points} becomes
\begin{equation*}
\forall i \in \{1,\dots,n\},~ h^1_i = \dots = h^{p/2}_i ~\textrm{ and }~ h^{p/2+1}_i = \dots = h^p_i \enspace ,
\end{equation*}
while Assumption ~\eqref{1Out} becomes
\begin{equation*}
\forall i \in \{1,\dots,n\},~h^1_i = \dots = h^{p-1}_i \enspace .
\end{equation*}
\end{rem}
Under those hypotheses we now want to derive an expression of $(h_i^1,\dots,h_i^p)$ given $(\mu_i,\varsigma_i)$ so that we can exactly compute the single-task oracle risk.
Remember we defined for every $i \in \{1,\dots,n\}$,
\begin{equation*}
\mu_i = \frac{1}{\sqrt{p}} \sum_{j=1}^p h_i^j
\end{equation*}
and
\begin{equation*}
\varsigma_i^2 = \frac{1}{p} \sum_{j=1}^p \paren{h_i^j}^2 - \frac{\mu_i^2}{p} = \frac{1}{p} \sum_{j=1}^p\paren{h_i^j - \frac{\mu_i}{\sqrt{p}}}^2 \enspace .
\end{equation*}
We also re-introduce the single-task oracle risk:
\begin{equation*}
\mathfrak{R}^{\star}_{\mathrm{ST}} = \inf_{(\l^1,\dots,\l^p)\in\mathbb{R}_+^p}\set{\frac{1}{np}\esp{\sum_{j=1}^p \norr{\widehat{f}^j_{\l^j}-f^j}^2}} \enspace .
\end{equation*}
We now want to closely study this single-task oracle risk, in both settings.
\subsection{Analysis of the oracle single-task risk for the ``2 points'' case \eqref{2Points}}
In this section we write the single-task oracle risk when Assumption~\eqref{2Points} holds.
As shown in Lemma~\ref{lemma.risk.2Points}, the risk of the estimator $\widehat{f}^j_{\l}=A_{\l}y^j$ for the $j$th task, which we denote by $R^j(\l)$, verifies
\begin{equation*}
R(n,1,\sigma^2,\l,\b,\d,\paren{\sqrt{C_1}-\sqrt{C_2}}^2) \leq R^j(\l) \leq R(n,1,\sigma^2,\l,\b,\d,\paren{\sqrt{C_1}+\sqrt{C_2}}^2) \enspace .
\end{equation*}
Both the upper and the lower bounds eventually behave similarly.
In order to simplify notations and to avoid having to constantly write two risks, we will assume that half of the tasks have a risk equal to the right-hand side of the latter inequality and the other half a risk equal to its left-hand side.
This leads to the following assumption:
\begin{equation} \tag{\bf H$_{\textrm{2Points}}$}\label{hyp.2Points}
\forall i \in \{1,\dots,n\},~ \left\{ \begin{array}{rcl} h_i^1 &=& \sqrt{n}i^{-\d}(\sqrt{C_1}+\sqrt{C_2}) \\ h_i^p &=& \sqrt{n}i^{-\d}(\sqrt{C_1}-\sqrt{C_2}) \end{array} \right. \enspace .
\end{equation}
This minor change does not affect the convergence rates of the estimator.
Consequently, if $1 \leq j \leq p/2$ the risk for task $j$ is $R(n,1,\sigma^2,\l,\b,\d,\paren{\sqrt{C_1}+\sqrt{C_2}}^2)$ so that the oracle risk for task $j$ is, given that $n\sigma^2 \geq 1$,
\begin{equation*}
\asymp \paren{\frac{n}{\sigma^2}}^{1/2\d-1}\kappa(\b,\d)\times \paren{\sqrt{C_1}+\sqrt{C_2}}^{1/\d}\enspace ,
\end{equation*}
and if $p/2+1\leq j \leq p$ the risk for task $j$ is $R(n,1,\sigma^2,\l,\b,\d,\paren{\sqrt{C_1}-\sqrt{C_2}}^2)$ so that the oracle risk for task $j$ is, given that $n\sigma^2 \geq 1$,
\begin{equation*}
\asymp \paren{\frac{n}{\sigma^2}}^{1/2\d-1}\kappa(\b,\d)\times \absj{\sqrt{C_1}-\sqrt{C_2}}^{1/\d}\enspace .
\end{equation*}
\begin{rem}
We can remark that \eqref{hyp.2Points} implies \eqref{2Points} and that \eqref{hyp.2Points} implies \eqref{hyp.AV}, as shown in Lemma \ref{lemma.hyp.2Points}. Consequently, if \eqref{hyp.2Points} holds, we have, for every $i\in\{1,\dots,n\}$, $h_i^1 = \frac{\mu_i}{\sqrt{p}} + \varsigma_i$ and $h_i^p = \frac{\mu_i}{\sqrt{p}} - \varsigma_i$.
\end{rem}
\begin{cor}
For every $n$, $p$, $C_1$, $C_2$, $\sigma^2$, $\b$ and $\d$ such that $2<2\d<4\b$ and $n\sigma^2>1$, and such that Assumptions~\eqref{hyp.2Points} and \eqref{hyp.K} hold, we have
\begin{equation} \label{ST_risk_2Points}
\mathfrak{R}^{\star}_{\mathrm{ST}} \asymp \paren{\frac{np}{\sigma^2}}^{1/2\d-1}\frac{\kappa(\b,\d)}{2} \times p^{1-1/2\d} \croch{\paren{\sqrt{C_1}+\sqrt{C_2}}^{1/\d}+\absj{\sqrt{C_1}-\sqrt{C_2}}^{1/\d}}\enspace .
\end{equation}
\end{cor}
\subsection{Analysis of the oracle single-task risk for the ``1 outlier'' case \eqref{1Out}}
In this section we suppose that Assumption~\eqref{1Out} holds.
As shown in Lemma~\ref{lemma.risk.1Out}, we can lower and upper bound the risks of the single-tasks estimators by functions of the shape $R(n,p,\sigma^2,\l,\b,\d,C)$.
As in the previous section, to avoid the burden of writing two long risk terms at every step, and since all those risks have the same convergence rates, we make from now on the following assumption:
\begin{equation}\label{hyp.1Out}\tag{\bf H$_{\textrm{1Out}}$}
\forall i \in \{1,\dots,n\} \left \{ \begin{array}{rcl} h_i^1 &=& \sqrt{n}i^{-\d}\paren{\sqrt{C_1} + \frac{1}{\sqrt{p-1}}\sqrt{C_2}} \\
h_i^p &=& \sqrt{n}i^{-\d}\paren{\sqrt{C_1} -\sqrt{p-1}\sqrt{C_2}} \end{array} \right.\enspace .
\end{equation}
This minor change does not affect the convergence rates of the estimator.
Consequently, if $1 \leq j \leq p-1$ the risk for task $j$ is $R(n,1,\sigma^2,\l,\b,\d,\paren{\sqrt{C_1} + \sqrt{\frac{C_2}{p-1}}}^2)$ so that the oracle risk for task $j$ is, given that $n\sigma^2 \geq 1$,
\begin{equation*}
\asymp \paren{\frac{n}{\sigma^2}}^{1/2\d-1}\kappa(\b,\d) \times \paren{\sqrt{C_1} + \sqrt{\frac{C_2}{p-1}}}^{1/\d}\enspace ,
\end{equation*}
while the risk for task $p$ is $R(n,1,\sigma^2,\l,\b,\d,\paren{\sqrt{C_1} -\sqrt{(p-1)C_2}}^2)$ so that the oracle risk for task $p$ is, given that $n\sigma^2 \geq 1$,
\begin{equation*}
\asymp \paren{\frac{n}{\sigma^2}}^{1/2\d-1}\kappa(\b,\d)\times \absj{\sqrt{C_1} -\sqrt{(p-1)C_2}}^{1/\d}\enspace .
\end{equation*}
\begin{rem}
We can also remark here that \eqref{hyp.1Out} implies \eqref{1Out} and that \eqref{hyp.1Out} implies \eqref{hyp.AV}, as shown in Lemma~\ref{lemma.risk.1Out}. Consequently, if \eqref{hyp.1Out} holds, we have, for every $i\in\{1,\dots,n\}$, $h_i^1 = \frac{\mu_i}{\sqrt{p}} + \frac{1}{\sqrt{p-1}}\varsigma_i$ and $h_i^p = \frac{\mu_i}{\sqrt{p}} - \sqrt{p-1}\varsigma_i$.
\end{rem}
\begin{cor}
For every $n$, $p$, $C_1$, $C_2$, $\sigma^2$, $\b$ and $\d$ such that $2<2\d<4\b$ and $n\sigma^2>1$, and such that Assumptions~\eqref{hyp.1Out} and \eqref{hyp.K} hold, we have
\begin{multline} \label{ST_risk_1Out}
\mathfrak{R}^{\star}_{\mathrm{ST}} \asymp \paren{\frac{np}{\sigma^2}}^{1/2\d-1}\kappa(\b,\d)\\ \times p^{1-1/2\d}\croch{\frac{p-1}{p}\paren{\sqrt{C_1} + \sqrt{\frac{C_2}{p-1}}}^{1/\d} +\frac{1}{p}\absj{\sqrt{C_1} -\sqrt{(p-1)C_2}}^{1/\d}}\enspace .
\end{multline}
\end{cor}
\section{Comparison of multi-task and single-task \label{sec.MTSTcomp}}
In the two previous sections we obtained precise approximations of the multi-task oracle risk, $\mathfrak{R}^{\star}_{\mathrm{MT}}$, and of the single-task oracle risk, $\mathfrak{R}^{\star}_{\mathrm{ST}}$, under either Assumption~\eqref{hyp.2Points} or \eqref{hyp.1Out}.
We can now compare both risks in either setting, by studying their ratio
\begin{equation*}
\rho = \frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}} \enspace .
\end{equation*}
We will express the quantity $\rho$ as a function of
\begin{equation*}
r = \frac{C_2}{C_1} \enspace .
\end{equation*}
The parameter $r$ controls the amount of the signal which is contained in the mean of the functions.
When $r$ is small, the mean of the tasks contains much more signal than the variance of the tasks, so that the tasks should be ``similar''.
This is a case where the multi-task oracle is expected to perform better than the single-task oracle.
On the contrary, when $r$ is large, the variance of the tasks is more important than the mean of the tasks.
This is a case where the tasks would be described as ``non-similar''.
It is then harder to conjecture whether the single-task oracle performs better than the multi-task oracle and, as we will see later, the answer to this greatly depends on the setting.
\subsection{Analysis of the oracle multi-task improvement for the ``2 points'' case \eqref{2Points}}
We now express $\rho$ as a function of $r$ when the tasks are split into two groups.
\begin{cor}\label{cor.rho.2Points}
For every $n$, $p$, $C_1$, $C_2$, $\sigma^2$, $\b$ and $\d$ such that $2<2\d<4\b$ and $n\sigma^2>p$, and such that Assumptions~\eqref{hyp.2Points} and \eqref{hyp.K} hold, we have
\begin{equation} \label{rho_2Points}
\rho \asymp \frac{p^{1/2\d-1}+(\frac{p-1}{p})^{1-(1/2\d)}r^{1/2\d}}{\paren{1+\sqrt{r}}^{1/\d}+\absj{1 -\sqrt{r}}^{1/\d}} \enspace .
\end{equation}
\end{cor}
\begin{rem}
The right-hand side of Equation~\eqref{rho_2Points} is always smaller than $\frac{1}{2}$.
Thus, under the assumptions of Corollary~\ref{cor.rho.2Points}, the multi-task oracle risk can never be arbitrarily worse than the single-task oracle risk.
\end{rem}
We can first see that, under the assumptions of Corollary~\ref{cor.rho.2Points}, $\rho = \Theta\paren{p^{1/2\d-1}}$ as $r$ goes to 0.
This is the same improvement that we would get by multiplying the sample size by $p$.
We also have $\rho = \Theta\paren{\paren{\frac{p-1}{p}}^{1-(1/2\d)}}$ as $r$ goes to $+\infty$, so that the multi-task oracle and the single-task oracle behave similarly.
Finally, $\rho =\Theta\paren{\frac{r^{1/2\d}}{\paren{1+\sqrt{r}}^{1/\d}+\absj{1 -\sqrt{r}}^{1/\d}}}$ as $p$ goes to $+\infty$, so that the behaviours we just discussed are still valid with a large number of tasks.
\subsection{Analysis of the oracle multi-task improvement for the ``1 outlier'' case \eqref{1Out} \label{sub.sec.an1Out}}
We now express $\rho$ as a function of $r$ when the tasks form a single group, with one outlier.
\begin{cor}\label{cor.rho.1Out}
For every $n$, $p$, $C_1$, $C_2$, $\sigma^2$, $\b$ and $\d$ such that $2<2\d<4\b$ and $n\sigma^2>p$, and such that Assumptions~\eqref{hyp.1Out} and \eqref{hyp.K} hold, we have
\begin{equation}\label{rho_1Out}
\rho \asymp\frac{p^{1/2\d-1}+\paren{\frac{p-1}{p}}^{1-(1/2\d)}r^{1/2\d}}{\frac{p-1}{p}\paren{1+\sqrt{\frac{r}{p-1}}}^{1/\d} +\frac{1}{p}\absj{1-\sqrt{r(p-1)}}^{1/\d}} \enspace .
\end{equation}
\end{cor}
We can see that, under the assumptions of Corollary~\ref{cor.rho.1Out}, $\rho = \Theta\paren{p^{1/2\d-1}}$ as $r$ goes to 0.
As in the previous section, this is the same improvement that we would get by multiplying the sample size by $p$.
However, $\rho = \Theta\paren{\paren{\frac{p-1}{p}}^{1-1/2\d} \times \frac{p(p-1)^{-1/2\d}}{1+(p-1)^{1-1/\d}}}$ as $r$ goes to $+\infty$.
This quantity goes to $+\infty$ as $p\longrightarrow +\infty$, so that the multi-task oracle performs arbitrarily worse than the single-task one in this asymptotic setting.
Finally, $\rho=\Theta\paren{r^{1/2\d}}$ as $p$ goes to $+\infty$.
This quantity goes to $+\infty$ as $r$ goes to $+\infty$, so that the behaviours we just mentioned stay valid with a large number of tasks.
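The right-hand sides of Equations~\eqref{rho_2Points} and \eqref{rho_1Out} are easy to evaluate numerically; the following sketch (ours) computes them for a few values of $r$, illustrating the regimes discussed above and in the next section:
\begin{verbatim}
import numpy as np

def rho_2points(r, p, delta):
    e = 1 / (2 * delta)
    num = p ** (e - 1) + ((p - 1) / p) ** (1 - e) * r ** e
    den = (1 + np.sqrt(r)) ** (2 * e) + np.abs(1 - np.sqrt(r)) ** (2 * e)
    return num / den

def rho_1out(r, p, delta):
    e = 1 / (2 * delta)
    num = p ** (e - 1) + ((p - 1) / p) ** (1 - e) * r ** e
    den = ((p - 1) / p * (1 + np.sqrt(r / (p - 1))) ** (2 * e)
           + np.abs(1 - np.sqrt(r * (p - 1))) ** (2 * e) / p)
    return num / den

p, delta = 20, 2.0
for r in [1e-3, 1.0, 1e3]:
    # rho << 1 for small r in both cases; for large r it stays bounded
    # in the "2 points" case but grows with p in the "1 outlier" case.
    print(r, rho_2points(r, p, delta), rho_1out(r, p, delta))
\end{verbatim}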
\subsection{Discussion \label{subsec.diff}}
When $r$ is small, either under Assumption~\eqref{2Points} or \eqref{1Out}, the mean of the signal is much stronger than the variance.
Thus, the multi-task procedure performs better than the single-task one.
\begin{ex}
If $r=0$, then all the tasks are equal. The improvement of the multi-task procedure over the single-task one is then $p^{1/2\d-1}$. This was expected: it corresponds to the risk of a ridge regression with an $np$-sample.
\end{ex}
As $r$ goes to $0$, the multi-task oracle outperforms its single-task counterpart by a factor $p^{1/2\d-1}$. When $p$ is large (but, remember, this only holds when $p/\sigma^2 \leq n$, so $n$ also has to be large), this leads to a substantial improvement. It is easily seen that, for any constant $C>1$, if $r \leq (C-1)^{2\d} (p-1)^{1-2\d}$, then the right-hand side of Equation~\eqref{rho_2Points} becomes smaller than $Cp^{1/2\d-1}$. Thus, if the tasks are similar enough, the multi-task oracle performs as well as the oracle for an $np$-sample, up to a constant.
\medskip
On the contrary, when $r$ is large, the variance carries most of the signal, so that the tasks differ from one another. As $r$ goes to $+\infty$, the two settings behave differently:
\begin{itemize}
\item under Assumption~\eqref{2Points} (that is, when we are faced with two equally-sized groups), the oracle risks of the multi-task and of the single-task estimators are of the same order: they can only differ by a multiplicative constant;
\item under Assumption~\eqref{1Out} (that is, when we are faced with one cluster and one outlier), the single-task oracle outperforms the multi-task one, by a factor which is approximately $p^{1/2\d}$ when $p$ is large (see Section~\ref{sub.sec.an1Out}).
\end{itemize}
Finally, Assumption~\eqref{2Points} presents no drawback for the multi-task oracle, since under those hypotheses its performance cannot be significantly worse than that of the single-task oracle.
On the contrary, Assumption~\eqref{1Out} presents a case where the use of a multi-task technique greatly increases the oracle risk, when the variance between the tasks is important, while it gives an advantage to the multi-task oracle when this variance is small.
The location where the multi-task improvement stops corresponds to the barrier $\rho = 1$.
Studying this object seems difficult, since we only know $\rho$ up to a multiplicative constant.
Also, finding the contour lines of the right-hand side of Equation~\eqref{rho_1Out} does not seem to be an easy task.
In Section~\ref{sec.simus}, we will run simulations in situations where the oracle risk can no longer be explicitly derived.
We will show that the behaviours found in these two examples still appear in the simulated examples.
\section{Risk of a multi-task estimator \label{sec.riskest}}
\citet{solnon:hal-00610534} introduced an entirely data-driven estimator to calibrate $M_{\AV}(\l,\mu)$ over $\mathbb{R}_+^2$.
One of their main results is an oracle inequality, that compares the risk of this estimator to the oracle risk.
Thus, $\mathfrak{R}^{\star}_{\mathrm{MT}}$ is attainable by a fully data-driven estimator.
We now show that our estimation of the multi-task oracle risk is precise enough so that we can plug it into the aforementioned oracle inequality and still obtain a risk lower than the single-task oracle one.
The following assumption will be used, with $\df(\l) = \tr(A_{\l})$ and $A_{\l} = K(K+n\l I_n)^{-1}$:
\begin{equation}\tag{Hdf}\label{Hdf}
\left.
\begin{aligned}
& \forall j \in \set{1, \ldots, p} , \, \exists \l_{0,j} \in (0,+\infty) \, , \\
& \qquad \df(\l_{0,j}) \leq \sqrt{n} \quad \mbox{and} \quad \frac{1}{n} \norm{ (A_{\l_{0,j}}-I_n) f^j }_2^2 \leq \sigma^2 \sqrt{\frac{\ln n}{n}}
\end{aligned}
\enspace \right\}
\end{equation}
We will also write $\mathcal{M} = \set{M_{\AV}(\l,\mu),~ (\l,\mu)\in\mathbb{R}_+^2}$ and denote by $\widehat{M}_{\textrm{HM}}$ the estimator introduced by \citet{solnon:hal-00610534}, which belongs to $\mathcal{M}$. Theorem 29 of \citet{solnon:hal-00610534} thus states:
\begin{theorem} \label{thm.oracle.HM}
Let $\a = 2$, $\th \geq 2$, $p\in\mathbb{N}^{\star}$ and assume~\eqref{Hdf} holds true. An absolute constant $L>0$ and a constant $n_1(\th)$ exist such that the following holds as soon as $n \geq n_1(\th)$.
\begin{equation} \label{resultatoracle.esp.HM}
\begin{split}
\esp{\frac{1}{np}\norr{\widehat{f}_{\widehat{M}_{\textrm{HM}}}-f}^2} \leq \left( 1+\frac{1}{\ln(n)} \right)^2 \esp{\inf_{M \in \mathcal{M}}\left\{\frac{1}{np} \norr{ \widehat{f}_{M}-f}^2 \right\}} \\+L\sigma^2 (2 + \th)^2p\frac{\ln(n)^3}{n}
+ \frac{p}{n^{\th/2}} \frac{\norr{f}^2 }{n p}
\enspace .
\end{split}
\end{equation}
\end{theorem}
We first remark that
\begin{equation*}
\esp{\inf_{M \in \mathcal{M}}\left\{\frac{1}{np} \norr{ \widehat{f}_{M}-f}^2 \right\}} \leq \mathfrak{R}^{\star}_{\mathrm{MT}}\enspace .
\end{equation*}
We can now plug the oracle risk in the oracle inequality~\eqref{resultatoracle.esp.HM}.
Then, if we suppose that, for $i\in\{1,\dots,n\}$ and $j\in\{1,\dots,p\}$, $(h_i^j)^2 = n C^j i^{-2\d}$, we have
\begin{equation*}
\norr{f}^2 = \sum_{j=1}^p\sum_{i=1}^n (h_i^j)^2 = n\sum_{j=1}^p C^j \sum_{i=1}^n i^{-2\d} \leq n\zeta(2\d)\sum_{j=1}^p C^j \enspace .
\end{equation*}
\begin{rem}
Assumption \eqref{2Points} means that for every $i\in\{1,\dots,n\}$, if $1\leq j\leq p/2$,
\begin{equation*}
C^j = \paren{\sqrt{C_1}+\sqrt{C_2}}^2
\end{equation*}
and if $p/2+1\leq j\leq p$,
\begin{equation*}
C^j = \paren{\sqrt{C_1}-\sqrt{C_2}}^2 \enspace .
\end{equation*}
Assumption $\eqref{1Out}$ means that for every $i\in\{1,\dots,n\}$, if $1\leq j \leq p-1$,
\begin{equation*}
C^j=\paren{\sqrt{C_1} + \sqrt{\frac{C_2}{p-1}}}^2
\end{equation*}
while
\begin{equation*}
C^p = \paren{\sqrt{C_1} -\sqrt{(p-1)C_2}}^2 \enspace .
\end{equation*}
\end{rem}
\begin{property}
Under Assumptions \eqref{hyp.K} and \eqref{hyp.AV} with $2\d>2$, there exists a constant $N_1$ such that for every $n\geq N_1$, Assumption \eqref{Hdf} holds.
\end{property}
\begin{proof}
We can see that Assumption \eqref{Hdf} is made independently for every task. Thus we can suppose that $p=1$. Let us denote $b(\l) = n^{-1} \norr{(A_\l-I_n)f}^2$. We can see that if there exist constants $c>0$ and $d>1$ such that for every $\l\in\mathbb{R}_+$, $b(\l) \leq c\sigma^2\df(\l)^{-d}$, then Assumption \eqref{Hdf} holds for $n$ large enough. Indeed, let $\l \in \mathbb{R}_+$ be such that $\df(\l) \leq \sqrt{n}$. Then, if $b(\l) \leq c\sigma^2\df(\l)^{-d}$, $b(\l) \leq \sigma^2 c (\sqrt{n})^{-d} \leq \sigma^2 c \frac{n^{(-d+1)/2}}{\sqrt{n}}$. It then suffices to note that, for $n$ large enough, $c^2n^{-d+1} \leq \ln(n)$.
Using Lemmas \ref{lemma.S1} and \ref{lemma.S2} we can see that, for every $\l\in\mathbb{R}_+$,
\begin{equation*}
b(\l) \leq \frac{\l^{\frac{2\d-1}{2\b}}}{\b}I_1(\b,\d)
\end{equation*}
and, for $n$ large enough, there exists a constant $\a$ such that, for every $\l\in\mathbb{R}_+$,
\begin{equation*}
\df(\l) = \tr{A_{\l}} \geq \a\frac{\l^{\frac{-1}{2\b}}}{2\b}I_2(\b)
\end{equation*}
Thus, for $n$ large enough, there exists a constant $c$ (depending on $\sigma^2$, $\b$ and $\d$) such that, for every $\l\in\mathbb{R}_+$,
\begin{equation*}
b(\l) \leq c\sigma^2 \tr(A_{\l})^{-(2\d-1)} \enspace .
\end{equation*}
Hence, if $2\d>2$, there exists a constant $N_1$ such that for every $n\geq N_1$, Assumption \eqref{Hdf} holds.
\end{proof}
Thus, we can apply Theorem~\ref{thm.oracle.HM} to the estimator $\widehat{f}_{\widehat{M}_{\textrm{HM}}}$ under either Assumption \eqref{2Points} or \eqref{1Out} (and we denote by $\rho$ either $\rho_{2Points}$ or $\rho_{1Out}$).
\begin{property}
For all positive numbers $(\b,\d,\th,C_1,C_2)$ verifying $4\b>2\d>2$ and $\th>1$, there exist positive constants $(N(\b,\d,\th),L)$ such that, for every $(n,p,\sigma^2)$ verifying $n\geq N$ and $\frac{p}{\sigma^2} \leq n$, if Assumption \eqref{hyp.K} holds and if either Assumption~\eqref{hyp.2Points} or Assumption~\eqref{hyp.1Out} holds, the ratio between the risk of the estimator $\widehat{f}_{\widehat{M}_{\textrm{HM}}}$ and the single-task oracle risk verifies
\begin{equation*}
\begin{split}
\frac{ \esp{\frac{1}{np}\norr{\widehat{f}_{\widehat{M}_{\textrm{HM}}}-f}^2}}{\mathfrak{R}^{\star}_{\mathrm{ST}}} \leq \left( 1+\frac{1}{\ln(n)} \right)^2\rho \\ + Cst \times \frac{L\sigma^2(2 +\th)^2 p\frac{\ln(n)^3}{n} + \frac{p\zeta(2\d)}{n^{\th/2}}\frac{1}{p}\sum_{j=1}^pC^j}{\paren{\frac{n}{\sigma^2}}^{1/2\d-1}\kappa(\b,\d)\times\frac{1}{p}\sum_{j=1}^p (C^j)^{1/2\d}} \enspace .
\end{split}
\end{equation*}
\end{property}
\begin{proof}
This is a straightforward application of the preceding results.
\end{proof}
We now show that this fully data-driven multi-task ridge estimator achieves a lower risk than the single-task ridge oracle, in both settings \eqref{2Points} and \eqref{1Out}.
\begin{cor}\label{cor.est}
For all positive numbers $(\b,\d,\th,\sigma^2,\varepsilon)$ verifying $4\b>2\d>2$ and $\th>2$, there exist positive constants $(N,r)$ such that, for every $(n,p,C_1,C_2)$ verifying $n\geq N$, $\frac{p}{\sigma^2}\leq n^{1/4\d}$ and $\frac{C_2}{C_1} \leq r$, if Assumption \eqref{hyp.K} holds and if either Assumption~\eqref{hyp.2Points} or Assumption~\eqref{hyp.1Out} holds, the ratio between the risk of the estimator $\widehat{f}_{\widehat{M}_{\textrm{HM}}}$ and the single-task oracle risk verifies
\begin{equation*}
\frac{ \esp{\frac{1}{np}\norr{\widehat{f}_{\widehat{M}_{\textrm{HM}}}-f}^2}}{\mathfrak{R}^{\star}_{\mathrm{ST}}} < \varepsilon \enspace .
\end{equation*}
\end{cor}
\begin{proof}
First, we can see that under either Assumption~\eqref{2Points} or Assumption~\eqref{1Out}, both $\frac{1}{p}\sum_{j=1}^pC^j$ and $\frac{1}{p}\sum_{j=1}^p (C^j)^{1/2\d}$ converge, as $p$ goes to $+\infty$, to quantities only depending on $C_1$, $C_2$ and $\d$ and are thus bounded with respect to $p$.
Then, as was shown in the previous section, both $\rho_{2Points}$ and $\rho_{1Out}$ go to $0$ as $\frac{C_2}{C_1}$ goes to $0$. Finally, we can see that $\frac{p}{\sigma^2}\leq n^{1/4\d}$ implies that $\frac{p}{\sigma^2}\leq n$ and that
\begin{equation*}
\frac{\sigma^2 p\frac{\ln(n)^3}{n}}{\paren{\frac{n}{\sigma^2}}^{1/2\d-1}} = \paren{\sigma^2}^{1/2\d}\times p \times \frac{\ln(n)^3}{n^{1/2\d}} \leq \paren{\sigma^2}^{1+1/2\d} \times \frac{\ln(n)^3}{n^{1/4\d}} \converge_{n\to +\infty}0
\end{equation*}
together with
\begin{equation*}
\frac{\frac{p}{n^{\th/2}}}{\paren{\frac{n}{\sigma^2}}^{1/2\d-1}} \leq \paren{\sigma^2}^{1/2\d} \times n^{1-\th/2-1/4\d}\converge_{n\to +\infty}0 \enspace .
\end{equation*}
\end{proof}
\begin{rem}
The result shown in Corollary~\ref{cor.est} establishes that a fully data-driven multi-task estimator outperforms an oracle single-task estimator, which is minimax optimal.
\end{rem}
\section{Numerical experiments \label{sec.simus}}
The hypotheses we used in the previous sections, although sufficient to precisely derive the risk of the estimator, do not reflect realistic situations.
In this section we study less restrictive settings.
However, we are no longer able to obtain simple formulas for the oracle risk as we did before.
Thus, we resort to numerical simulations to illustrate the behaviour of both single-task and multi-task oracles.
\subsection{Setting A: relaxation of Assumptions~\eqref{hyp.AV} and \eqref{2Points} in order to get one general group of tasks}
In the previous sections we modeled the fact that the $p$ target functions are close.
However, due to technical constraints we were only able to deal with cases where the functions are split into two groups and are then equal inside each group, thus introducing Assumptions~\eqref{2Points} and \eqref{1Out}.
We propose here to extend this setting by simulating a more general group of tasks.
Those tasks should all be at a comparable distance from a centroid function.
We suppose that $(\varepsilon_i^j)_{i\in\{1,\dots,n\},j\in\{1,\dots,p\}}$ is a sequence of i.i.d. random variables, independent of $(X_i)_{i\in\{1,\dots,n\}}$, following a Rademacher distribution (that is, such that $\mathbb{P}(\varepsilon_i^j=1) = \mathbb{P}(\varepsilon_i^j=-1) = 1/2$).
The target functions are then defined by
\begin{equation}
\forall i\in\{1,\dots,n\},~\forall j \in \{1,\dots,p\},~ h_i^j = \sqrt{n}i^{-\d}\paren{\sqrt{C_1}+\varepsilon_i^j\sqrt{C_2}} \enspace .
\end{equation}
Thus, all the $p$ target functions are ``close'' to a centroid function, whose coordinates on the eigenvectors of the kernel matrix are $\sqrt{n}i^{-\d}\sqrt{C_1}$, with a ``dispersion factor'' $\sqrt{C_2}$. In this setting, we can easily express the key elements for the analysis of this risk:
\begin{equation*}
\frac{\mu_i^2}{p} = ni^{-2\d}\paren{\sqrt{C_1}+\frac{\sum_{j=1}^{p}\varepsilon_i^j}{p}\sqrt{C_2}}^2
\end{equation*}
and
\begin{equation*}
\varsigma_i^2 = ni^{-2\d} \paren{\frac{1}{p}\sum_{j=1}^p\paren{\sqrt{C_1}+\varepsilon_i^j\sqrt{C_2}}^2 -\paren{\sqrt{C_1}+\frac{\sum_{j=1}^{p}\varepsilon_i^j}{p}\sqrt{C_2}}^2} \enspace .
\end{equation*}
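For concreteness, this simulation can be written in a few lines; the following Python sketch (ours, using NumPy, with illustrative names, and not necessarily how the original experiments were coded) generates the coefficients $h_i^j$ of Setting~A together with the two quantities above:
\begin{verbatim}
import numpy as np

def setting_A(n, p, C1, C2, delta, rng):
    # Rademacher signs eps[i, j] in {-1, +1}, i.i.d. over i and j
    eps = rng.choice([-1.0, 1.0], size=(n, p))
    i = np.arange(1.0, n + 1)[:, None]      # frequencies 1, ..., n
    h = np.sqrt(n) * i**(-delta) * (np.sqrt(C1) + eps * np.sqrt(C2))
    mu2_over_p = h.mean(axis=1)**2                      # mu_i^2 / p
    varsig2 = (h**2).mean(axis=1) - h.mean(axis=1)**2   # varsigma_i^2
    return h, mu2_over_p, varsig2

h, mu2, vs2 = setting_A(n=50, p=5, C1=1.0, C2=0.5, delta=2.0,
                        rng=np.random.default_rng(0))
\end{verbatim}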
\begin{rem}
The theoretical analysis developed previously cannot be applied here, due to the presence of random terms, which depend on $i$, in front of the decay term $ni^{-2\d}$.
\end{rem}
\subsection{Setting B: random drawing of the input points and functions}
Assumptions~\eqref{hyp.K} and \eqref{hyp.AV} model the behaviour of the spectral elements of $f$ and $K$ as if they exactly followed those of the kernel operator and of the target function.
Although convenient for the analysis, this setting is unlikely to hold in practice and we propose here to draw the input points $(X_i)_{i=1}^n$ and compute the risk using the eigenvalues of the kernel matrix.
We suppose here that $(X_i)_{i=1}^n$ is a sequence of i.i.d. random variables uniformly drawn on $\croch{-\pi,\pi}$.
As in the previous section, we also suppose that we have an i.i.d. sequence of random variables $(\varepsilon_i^j)_{i\in\{1,\dots,n\},j\in\{1,\dots,p\}}$, independent of $(X_i)_{i\in\{1,\dots,n\}}$, following a Rademacher distribution. The target functions are then defined by
\begin{equation}
\forall i\in\{1,\dots,n\},~\forall j \in \{1,\dots,p\},~ f^j(X_i) =\paren{\sqrt{C_1}+\varepsilon_i^j\sqrt{C_2}}\absj{X_i} \enspace .
\end{equation}
As stated in \citet{Wah:1990} and in \citet{gu2002smoothing}, a natural kernel to use is, with $m \in \mathbb{N}^{\star}$,
\begin{equation*}
R(x,y) = 2\sum_{i=1}^{+\infty}\frac{\cos\paren{i(x-y)}}{i^{2m}} \enspace .
\end{equation*}
In this setting, the coefficients of the decomposition of $f : x \mapsto \absj{x}$ on the Fourier basis are known to be asymptotically equivalent to $i^{-2}$. Thus, this setting is a natural extension of Assumptions~\eqref{hyp.K} and \eqref{hyp.AV}, with $\b = m$---since the eigenvalues of the kernel $R$ are $i^{-2m}$---and $\d = 2$.
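As an illustration, the kernel matrix and the sampled target values of Setting~B can be computed as follows (a sketch of ours: the infinite series is truncated at an arbitrary level, and all names and numerical values are placeholders):
\begin{verbatim}
import numpy as np

def kernel_matrix(X, m, n_terms=1000):
    # Truncated series R(x, y) = 2 * sum_{k>=1} cos(k (x - y)) / k^(2m)
    k = np.arange(1.0, n_terms + 1)
    D = X[:, None] - X[None, :]            # pairwise differences x - y
    return 2.0 * np.einsum('k,abk->ab', k**(-2.0 * m),
                           np.cos(D[:, :, None] * k))

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=50)       # i.i.d. uniform inputs
eps = rng.choice([-1.0, 1.0], size=(50, 5))   # Rademacher signs
F = (np.sqrt(1.0) + eps * np.sqrt(0.5)) * np.abs(X)[:, None]
K = kernel_matrix(X, m=2)
\end{verbatim}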
\subsection{Setting C: further relaxation of Assumptions~\eqref{hyp.AV} and \eqref{2Points} in one group of tasks}
We consider the same tasks as in Setting A, but also allow the regularity of the variance to vary.
This gives the following model, supposing that $(\varepsilon_i^j)_{i\in\{1,\dots,n\},j\in\{1,\dots,p\}}$ is a sequence of i.i.d. random variables, independent of $(X_i)_{i\in\{1,\dots,n\}}$, following a Rademacher distribution:
\begin{equation*}
\forall i\in\{1,\dots,n\},~\forall j \in \{1,\dots,p\},~ h_i^j = \sqrt{n}\paren{\sqrt{C_1}i^{-\d_1}+\varepsilon_i^j\sqrt{C_2}i^{-\d_2}} \enspace .
\end{equation*}
We allow the variance to have a varying regularity and intensity by changing $C_2$ and $\d_2$. This gives us the following quantities of interest: for every $i\in\{1,\dots,n\}$,
\begin{equation*}
\frac{\mu_i^2}{p} = ni^{-2\d_1}\paren{\sqrt{C_1}+\frac{\sum_{j=1}^{p}\varepsilon_i^j}{p}\sqrt{C_2}i^{-(\d_2-\d_1)}}^2
\end{equation*}
and
\begin{equation*}
\begin{split}
\varsigma_i^2 = ni^{-2\d_1} \Bigg(\frac{1}{p}\sum_{j=1}^p\paren{\sqrt{C_1}+\varepsilon_i^j\sqrt{C_2}i^{-(\d_2-\d_1)}}^2 \\ -\paren{\sqrt{C_1}+\frac{\sum_{j=1}^{p}\varepsilon_i^j}{p}\sqrt{C_2}i^{-(\d_2-\d_1)}}^2 \Bigg) \enspace .
\end{split}
\end{equation*}
\subsection{Setting D: relaxation of Assumptions~\eqref{1Out} and \eqref{hyp.AV}}
Assumption~\eqref{1Out} states that we have one of $p-1$ identical tasks and one outlier.
We now simulate a slightly more general setting by having one cluster of $p-1$ tasks around $0$ and one outlier.
This gives the following model, supposing that $(\varepsilon_i^j)_{i\in\{1,\dots,n\},j\in\{1,\dots,p\}}$ is a sequence of i.i.d. random variables, independent of $(X_i)_{i\in\{1,\dots,n\}}$, following a Rademacher distribution:
\begin{equation*}
\forall i \in \set{1,\dots,n},~ \forall j \in \set{1,\dots,p-1},~ h^j_i = \sqrt{n}\varepsilon_i^j i^{-2}
\end{equation*}
and
\begin{equation*}
\forall i \in \set{1,\dots,n},~ \ h^p_i = \sqrt{nC_2}\varepsilon_i^p i^{-\d_2}\enspace .
\end{equation*}
We allow the outlier to have a varying regularity and intensity by changing $C_2$ and $\d_2$. This gives us the following quantities of interest: for every $i\in\{1,\dots,n\}$,
\begin{equation*}
\frac{\mu_i^2}{p} = ni^{-2\d_1}\paren{\frac{\sqrt{C_1}}{p}\sum_{j=1}^{p-1}\varepsilon_i^j+\frac{\varepsilon_i^p}{p}\sqrt{C_2}i^{-(\d_2-\d_1)}}^2
\end{equation*}
and
\begin{equation*}
\varsigma_i^2 = ni^{-2\d_1} \paren{\frac{p-1}{p}C_1+\frac{1}{p}C_2i^{-2(\d_2-\d_1)} -\paren{\frac{\sqrt{C_1}}{p}\sum_{j=1}^{p-1}\varepsilon_i^j+\frac{\varepsilon_i^p}{p}\sqrt{C_2}i^{-(\d_2-\d_1)}}^2} \enspace .
\end{equation*}
\subsection{Methodology}
In every setting, we computed the oracle risks of both the multi-task estimator and the single-task one.
As shown before, for instance in Equation~\eqref{optim.ind}, both the multi-task risk (which has two hyper-parameters, $\l$ and $\mu$) and the single-task risk (which has $p$ hyper-parameters, $\l^1$ to $\l^p$) can be decomposed as a sum of several functions, each depending on a unique hyper-parameter.
We used Newton's method to optimize each of those $p+2$ functions over, respectively, $\l$, $\mu$, $\l^1,\dots,\l^p$.
Our stopping criterion was that the derivative of the function being optimized was smaller than $10^{-5}$ in absolute value.
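In pseudo-code, each of these one-dimensional optimizations follows the sketch below (a generic transcription of ours; \texttt{df} and \texttt{d2f} stand for the first two derivatives of the function of one hyper-parameter being minimized, which we do not spell out here):
\begin{verbatim}
def newton_minimize(df, d2f, x0, tol=1e-5, max_iter=100):
    # One-dimensional Newton descent with the stopping rule
    # |derivative| < tol.
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:
            break
        x = x - g / d2f(x)    # Newton step
        x = max(x, 0.0)       # keep the regularization parameter >= 0
    return x
\end{verbatim}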
We replicated each experiment $N=100$ times.
This gives us $N$ independent realisations of $(\mathfrak{R}^{\star}_{\mathrm{MT}},\mathfrak{R}^{\star}_{\mathrm{ST}})$, the randomness coming from the random draw of the tasks and, in Setting~B, from the drawing of the input points $(X_i)_{i=1}^n$.
In Settings A and B, we first test the hypothesis $ \mathbb{H}_0 = \set{\mathbb{P}\paren{\mathfrak{R}^{\star}_{\mathrm{MT}} < \mathfrak{R}^{\star}_{\mathrm{ST}}} < 0.5 }$ against $ \mathbb{H}_1 = \set{\mathbb{P}\paren{\mathfrak{R}^{\star}_{\mathrm{MT}} < \mathfrak{R}^{\star}_{\mathrm{ST}}} \geq 0.5 }$.
This amounts to testing the null hypothesis that the median of $\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}$ is larger than one.
For every iteration $i\in\set{1,\dots,N}$, we observe $B_i = \mathbf{1}_{\mathfrak{R}^{\star}_{\mathrm{MT}} < \mathfrak{R}^{\star}_{\mathrm{ST}}}$.
Since the random variables $(B_i)_{i\in\set{1,\dots,N}}$ follow a Bernoulli distribution with parameter $\mathbb{P}\paren{\mathfrak{R}^{\star}_{\mathrm{MT}} < \mathfrak{R}^{\star}_{\mathrm{ST}}}$, we can apply Hoeffding's inequality \citep{Massart} and see that, for every $\varepsilon >0$, $[\bar{B}_N-\varepsilon,1]$ is a confidence interval of level $1-e^{-2N\varepsilon^2}$ for $\mathbb{P}\paren{\mathfrak{R}^{\star}_{\mathrm{MT}} < \mathfrak{R}^{\star}_{\mathrm{ST}}}$.
This leads to the following p-value:
\begin{equation*}
\pi_1 = \left\{ \begin{array}{lr} e^{-2N\paren{\bar{B}_N-0.5}^2} & \textrm{if } \bar{B}_N \geq 0.5 \enspace , \\ 0 & \textrm{otherwise} \enspace . \end{array} \right.
\end{equation*}
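This p-value is immediate to compute from the $N$ replications; a direct transcription in Python (names ours) reads:
\begin{verbatim}
import numpy as np

def p_value_sign(mt, st):
    # mt, st: the N realisations of the multi-task and
    # single-task oracle risks
    B = (np.asarray(mt) < np.asarray(st)).mean()     # \bar{B}_N
    N = len(mt)
    return np.exp(-2.0 * N * (B - 0.5)**2) if B >= 0.5 else 0.0
\end{verbatim}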
In those two settings, we also test the hypothesis $\mathbb{H}_0 = \set{\esp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}} > 1}$ against $\mathbb{H}_1 = \set{\esp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}} \leq 1}$.
Let us denote by $\empesp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$ the empirical mean of the random variables $\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}$, $\hat{\textrm{Std}}\croch{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$ the corresponding empirical standard deviation and $\Phi$ the cumulative distribution function of a standard Gaussian distribution.
Then, a classical use of the central limit theorem and of Slutsky's Lemma gives that
\begin{equation*}
\croch{ 0, \empesp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}+\frac{\varepsilon}{\sqrt{N}}\hat{\textrm{Std}}\croch{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}}
\end{equation*}
is an asymptotic confidence interval of level $\Phi(\varepsilon)$ for $\esp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$.
This leads to the following asymptotic p-value:
\begin{equation*}
\pi_2 = \Phi\croch{\sqrt{N}\paren{\empesp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}-1} \hat{\textrm{Std}}\croch{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}^{-1} } \enspace .
\end{equation*}
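Likewise, $\pi_2$ can be computed directly from the replications (a sketch using SciPy's Gaussian cumulative distribution function; names ours):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def p_value_mean(mt, st):
    ratio = np.asarray(mt) / np.asarray(st)
    N = len(ratio)
    return norm.cdf(np.sqrt(N) * (ratio.mean() - 1.0)
                    / ratio.std(ddof=1))
\end{verbatim}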
The results of those tests are shown in Table~\ref{table.A} for Setting A and in Table~\ref{table.B} for Setting B.
In Settings C and D, we use the same asymptotic framework and show error bars corresponding to the asymptotic confidence interval
\begin{equation*}
\croch{ \empesp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}} - \frac{z_{0.975}}{\sqrt{N}} \hat{\textrm{Std}}\croch{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}, \empesp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}} + \frac{z_{0.975}}{\sqrt{N}} \hat{\textrm{Std}}\croch{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}}
\end{equation*}
of level $95\%$, where $z_{\a}$ denotes the quantile of order $\a$ of the standard Gaussian distribution.
The results of those simulations are shown in Figure~\ref{fig.var} for Setting~C and in Figure~\ref{fig.out} for Setting~D.
We used the following values for the parameters: $n=50$, $p=5$, $\sigma^2=1$ and $C_1 = 1$. We finally set $\d= 2$ in Settings A and B and $\d_1 = 2$ in Settings C and D.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$C_2$&$r = \frac{C_2}{C_1} $ & $\b$ & $\bar{B}_{100} $ & $\pi_1$ & $\empesp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$ & $\hat{\textrm{Std}}\croch{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$ & $ \pi_2$ \\
\hline
$0.01$ & $0.01$ & 2 & $1$ & $< 10^{-15}$ & $0.434 $ & $0.0324$ & $< 10^{-15}$ \\
\hline
$0.1$ & $0.1$ & 2 & $1$ & $< 10^{-15}$ & $0.672 $ & $0.0747$ & $< 10^{-15}$\\
\hline
$0.5$ & $0.5$ & 2 & $0.94$ & $< 10^{-15}$ & $0.898$ & $0.0913$ & $< 10^{-15}$\\
\hline
$1$ & $1$ & 2 & $ 0.51$ & $9.80 \times 10^{-1}$ & $1.01$ & $ 0.129$ & $0.773$\\
\hline
$5$ & $5$ & 2 & $ 0.38$ & $1$ & $0.998$ & $ 0.0292$ & $0.302$\\
\hline
$10$ & $10$ & 2 & $ 0.42$ & $1$ & $0.996$ & $ 0.0172$ & $9.90\times 10^{-3}$ \\
\hline
$100$ & $100$ & 2 & $ 0.76$ & $1.35\times 10^{-6}$ & $0.997$ & $ 5.44\times 10^{-3}$ & $5.97\times 10^{-10}$ \\
\hline
$0.01$ & $0.01$ & 4 &$1$ & $< 10^{-15}$ & $0.426$ & $0.0310$ & $< 10^{-15}$ \\
\hline
$0.1$ & $0.1$ & 4 & $1$ & $< 10^{-15}$ & $0.703$ & $0.0737$ & $< 10^{-15}$\\
\hline
$0.5$ & $0.5$ & 4 & $0.75$ & $3.73\times 10^{-6}$ & $0.934$ & $0.113$ & $1.80\times 10^{-9}$ \\
\hline
$1$ & $1$ & 4 & $ 0.31$ & $1$ & $1.08$ & $ 0.163$ & $1.00$ \\
\hline
$5$ & $5$ & 4 & $ 0.38$ & $1$ & $1.01$ & $ 0.0439$ & $0.965$ \\
\hline
$10$ & $10$ & 4 & $ 0.43$ & $1$& $0.993$ & $ 0.0304$ & $0.0113$ \\
\hline
$100$ & $100$ & 4 & $ 0.83$ & $3.48\times 10^{-10}$ & $0.992$ & $ 0.0103$ & $1.22\times 10^{-14}$ \\
\hline
\end{tabular}
\caption{Comparison of the multi-task oracle risk to the single-task oracle risk in Setting A.}
\label{table.A}
\end{table}
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$C_2$&$r = \frac{C_2}{C_1} $ & $m$ & $\bar{B}_{100} $ & $\pi_1$ & $\empesp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$ & $\hat{\textrm{Std}}\croch{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$ & $\pi_2$ \\
\hline
$0.01$ & $0.01$ & 2 & $1$ & $< 10^{-15}$ & $0.570 $ & $0.0409$ & $< 10^{-15}$\\
\hline
$0.1$ & $0.1$ & 2 & $1$ & $< 10^{-15}$ & $0.745$ & $0.0333$ & $< 10^{-15}$ \\
\hline
$0.5$ & $0.5$ & 2 & $0.99$ & $< 10^{-15}$ & $0.907$ & $0.0406$ & $ < 10^{-15}$\\
\hline
$1$ & $1$ & 2 & $ 0.80$ & $1.52 \times 10^{-8}$ & $0.961$ & $ 0.0459 $ & $< 10^{-15}$\\
\hline
$5$ & $5$ & 2 & $ 0.55$ & $0.607 $ & $0.995$ & $0.205 $ & $2.59\times 10^{-3}$\\
\hline
$10$ & $10$ & 2 & $ 0.53$ & $0.835 $ & $0.996$ & $ 0.114 $ & $6.23\times 10^{-4}$ \\
\hline
$100$ & $100$ & 2 & $ 0.81$ & $4.50\times 10^{-9}$ & $0.996$ & $ 6.35\times 10^{-3}$ & $1.03\times 10^{-11}$\\
\hline
$0.01$ & $0.01$ & 4 &$1$ & $< 10^{-15}$& $0.527$ & $0.0409$ & $< 10^{-15}$ \\
\hline
$0.1$ & $0.1$ & 4 & $1$ & $< 10^{-15}$ & $0.756$ & $0.0534$ & $< 10^{-15}$\\
\hline
$0.5$ & $0.5$ & 4 & $0.93$ & $< 10^{-15}$ & $0.917$ & $0.0650$ & $ < 10^{-15}$\\
\hline
$1$ & $1$ & 4 & $ 0.49$ & $1$& $1.01$ & $0.0896$ & $ 0.855 $ \\
\hline
$5$ & $5$ & 4 & $ 0.40$ & $1$ & $0.997$ & $0.0295$ & $ 0.170$\\
\hline
$10$ & $10$ & 4 & $ 0.41$ & $1$ & $0.998$ & $0.0179$ & $ 0.114 $ \\
\hline
$100$ & $100$ & 4 & $ 0.84$ & $9.10\times 10^{-11}$ & $0.994$ & $ 8.71\times 10^{-3}$ & $7.36\times 10^{-14}$ \\
\hline
\end{tabular}
\caption{Comparison of the multi-task oracle risk to the single-task oracle risk in Setting B.}
\label{table.B}
\end{table}
\begin{figure}
\hbox{\hspace{-1cm}\includegraphics[width=1.15\textwidth]{fig_var.eps}}
\caption{Further relaxation of Assumption~\eqref{2Points} (Experiment~C), improvement of multi-task compared to single-task: $\esp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$. Best seen in colour.
}
\label{fig.var}
\end{figure}
\begin{figure}
\hbox{\hspace{-1cm}\includegraphics[width=1.15\textwidth]{fig_out.eps}}
\caption{Relaxation of Assumption~\eqref{1Out} (Experiment~D), improvement of multi-task compared to single-task: $\esp{\frac{\mathfrak{R}^{\star}_{\mathrm{MT}}}{\mathfrak{R}^{\star}_{\mathrm{ST}}}}$. Best seen in colour.
}
\label{fig.out}
\end{figure}
\subsection{Interpretation}
When all the tasks are grouped in one cluster (Settings A, B and C), the same phenomenon as under Assumption~\eqref{2Points} appears.
In situations where the mean component of the signal has more weight than the variance component (in Settings A and B, this is when $r$ is small; in Setting C, this occurs when $\d_2$ is large and $C_2$ is small), the multi-task oracle seems to outperform the single-task one.
On the contrary, when the mean component of the signal is negligible compared to the variance component (likewise, this occurs in Settings A and B when $r$ is large and in Setting C when $\d_2$ is small or when $C_2$ is large), both oracles seem to perform similarly.
Settings adversarial to the multi-task oracle appear when one task is added outside of a cluster (Setting D).
When this outlier is less regular than the tasks belonging to the cluster (that is, when $\d_2$ is large), the single-task oracle performs better than the multi-task one, which confirms the theoretical analysis performed in Section~\ref{sub.sec.an1Out}.
\section{Conclusion}
This paper shows the existence of situations where the multi-task kernel ridge regression, with a perfect parameter calibration, can perform better than the single-task one.
This happens when the tasks are distributed according to simple specifications, which are studied both theoretically and on simulated examples.
\medskip
The analysis performed here allows us to have a precise estimation of the risk of the multi-task oracle (Theorem~\ref{thm.MT.oracle}), this result holding under a few hypotheses on the regularity of the kernel, of the mean of the tasks and of its resulting variance.
Several simple settings for the tasks are then investigated, with the constraint that they respect these assumptions.
This theoretical grounding, backed up by our simulated examples, allows us to better understand when and where the multi-task procedure outperforms the single-task one.
\begin{itemize}
\item The situation where all the regression functions are close in the RKHS (that is, their differences are extremely regular) is favorable to the multi-task procedure, when using the matrices $\mathcal{M} = \set{M_{\AV}(\l,\mu),(\l,\mu)\in\mathbb{R}_+^2}$.
In this setting, the multi-task procedure can do much better than the single-task one (as if it had $p$ times more input points).
It is also shown to never do worse (up to a multiplicative constant)!
\item On the contrary, when one outlier lies far apart from this cluster, this multi-task procedure suddenly performs badly, that is, arbitrarily worse than the single-task one.
This comes as no surprise, since the addition of a far less regular task naturally destroys the joint learning of a group of tasks.
In this case, the use of a multi-task procedure which clusters the tasks together (because of the choice of $\mathcal{M}$) is ill-suited to the situation.
\end{itemize}
\medskip
Our analysis can easily be adapted to a slightly wider set of assumptions on the tasks than the one presented here (all the tasks are grouped together, in one cluster).
It is for instance possible to treat the case where the tasks are grouped in two (or more) clusters---when the allocation of each task to its cluster is known to the statistician, at the price of introducing more hyperparameters.
We are still limited, though, to certain cases of hypotheses, reflected on the set of matricial hyperparameters $\mathcal{M}$.
The failure of the multi-task oracle in the case where one outlier lies outside of one group of tasks can be seen, not as the impossibility of using multi-task techniques in this situation, but rather as the fact that the set of matrices used here, $\mathcal{M} = \set{M_{\AV}(\l,\mu),(\l,\mu)\in\mathbb{R}_+^2}$, is ill-suited to the situation.
We can see at least two different solutions to this kind of mismatch.
First, the use of prior knowledge can help the statistician to craft an \emph{ad hoc} set $\mathcal{M}$.
Second, we could seek to automatically adapt to the situation in order to learn a good set $\mathcal{M}$ from data.
Learning more complex sets $\mathcal{M}$ is an important---but complex---challenge, that we want to address in the future.
This question can at least be split into three (not necessarily independent) problems, that call for the elaboration of new tools:
\begin{itemize}
\item a careful study of the risk, to find a set $\mathcal{M}^{\star} \subset \mathcal{S}_p^{++} (\mathbb{R})$ of candidate matrices;
\item optimization tools, to derive an algorithm able to select a matrix in this set $\mathcal{M}^{\star}$;
\item new concentration of measure results, to be able to show oracle inequalities that control the risk of the output of the algorithm.
\end{itemize}
\medskip
Our estimation of the multi-task oracle risk is also shown to be precise enough so that we can plug it in an oracle inequality, hereby showing the existence of a multi-task estimator that has a lower risk than the single-task oracle (under the same favorable circumstances as before).
\medskip
Finally, it would be interesting to extend the analysis developed here to the random-design setting.
This could be done, for instance, by using the tools brought by \citet{hsu2011analysis}, that link random-design convergence rates to fixed-design ones.
\paragraph{Acknowledgments:} The author thanks Sylvain Arlot and Francis Bach for insightful discussions and their valuable comments, which greatly helped to improve the quality of this paper.
\bibliographystyle{plainnat}
\bibpunct{(}{)}{;}{n}{,}{,}
|
2,869,038,155,832 | arxiv | \section{Introduction}
A variety of population dynamics and physiological processes can be described as the following equation
\begin{equation}\label{eq2}
x'(t)=-a(t)x(t) + \l b(t)f(x(t)).
\end{equation}
Periodic solutions of this type of problem have attracted much attention, see e.g. \cite{JIANGWEI2002,OReganWang2005,HWJDE1,WAZLASTA} and
references therein.
On the other hand, there has recently been considerable interest in the existence of positive periodic solutions of singular systems
of second order differential
equations, see Chu, Torres and Zhang \cite{chujde2007}, Franco and Webb \cite{Franco2006}, Jiang, Chu, O'Regan, and Agarwal \cite{jcoa}, and the author \cite{Wang2010} and references therein. It has been shown that many results for nonsingular systems still valid for
singular cases. In particular, the author \cite{Wang2010} demonstrates that the Krasnoselskii fixed point theorem on compression and expansion of cones
can be effectively used to deal with singular problems. In fact, by choosing appropriate cones, the singularity of the systems is essentially removed and the associated operator becomes well-defined for
certain ranges of functions even when there are negative terms.
Agarwal and O'Regan \cite{Agarwal2003} provided some results on solutions of singular first order differential equations.
Chu and Nieto \cite{chu2008} showed
the existence of periodic solutions for singular first order differential equations with impulses based on a nonlinear alternative of Leray-Schauder.
The results in \cite{Agarwal2003,chu2008} for first order differential equations deal with a single equation.
In this paper, by employing the Krasnoselskii fixed point theorem on compression and expansion of a cone,
we shall establish the existence and multiplicity of positive $\o$-periodic solutions for the following singular non-autonomous $n$-dimensional system
\begin{equation}\label{eq1}
x_i'(t)=-a_i(t)x_i(t) + \l b_i(t)f_i(x_1(t),..., x_n(t)), i=1,...,n.
\end{equation}
where $\l>0$ is a positive parameter. Our results give an almost complete picture of the existence
of positive periodic solutions of (\ref{eq1}) as the parameter $\l$ varies. Our results further show that
there are analogous results between first order and second order ordinary differential equations.
First we make assumptions for (\ref{eq1}). Let $\mathbb{R}=(-\infty, \infty)$, $\mathbb{R}_+=[0, \infty)$,
$\mathbb{R}_+^n=\Pi_{i=1}^n \mathbb{R}_+$ and for any
$\vect{u}=(u_1,...,u_n) \in \mathbb{R}^n_+$,
$\norm{\vect{u}}=\sum_{i=1}^n \abs{u_i}$.
{(H1)~~$a_i, b_i$ $\in C(\mathbb{R}, [0, \infty))$ are
$\o$-periodic functions such that $\int_0^{\o}a_i(t)dt >0$,
$\int_0^{\o}b_i(t)dt >0$, $i=1,\dots,n$.}
{(H2)~~$f_i: \mathbb{R}_+^n \setminus \{0\} \to (0,\infty)$ is continuous, $i=1,\dots,n$}\\
Our main results are:
\sthm\mlabel{th1} Let (H1),(H2) hold. Assume that $\lim_{ \norm{\vect{u}} \to 0} f_i(\vect{u})=\infty$ for some $ i=1,...,n.$ \\
(a). If $\lim_{ \norm{\vect{u}} \to \infty} \frac{f_i(\vect{u})}{\norm{\vect{u}}}=0$, $i=1,\dots,n$, then, for all $\l>0$, (\ref{eq1}) has
a positive periodic solution. \\
(b). If $\lim_{ \norm{\vect{u}} \to \infty} \frac{f_i(\vect{u})}{\norm{\vect{u}}}=\infty$ for $i=1,\dots,n$, then, for all sufficiently small $\l>0$, (\ref{eq1}) has
two positive periodic solutions. \\
(c). There exists a $\l_0>0$ such that (\ref{eq1}) has a positive periodic solution for $0 < \l < \l_0$.
\ethm
\srmark\label{rem1}
As discussed in \cite{Wang2010}, we can extend Theorem \ref{th1} to the following singular systems with possible negative $e_i$,
\begin{equation}\label{eq11}
x_i'(t)=-a_i(t)x_i(t) + \l b_i(t)f_i(x_1(t),..., x_n(t)) + \l e_i(t), i=1,...,n.
\end{equation}
where $e_i(t), i=1,...,n,$ are continuous $\o$-periodic functions. When $e_i(t)$ takes negative values, we shall need
a stronger condition on $b_i (b_i>0).$
Such a result can be proved in the same way as in \cite{Wang2010}. We will not give a detailed proof here. The idea to deal with negative $e_i$ is to split $b_i(s) f_i(x(s))+e_i(t)$ into the two terms $\frac{1}{2}b_i(s) f_i(x(s))$
and $\frac{1}{2}b_i(s) f_i(x(s))+e_i(t)$.
The first term is always nonnegative and used to carry out the estimates of the operator. We will make the second term
$\frac{1}{2}b_i(s) f_i(x(s))+e_i(t)$ nonnegative by choosing appropriate domains of $f_i$. This is possible because
$\lim_{ \norm{\vect{u}} \to 0} f_i(\vect{u})=\infty$ or $\lim_{ \norm{\vect{u}} \to \infty} f_i(\vect{u})=\infty$.
The choice of the even split
of $b_i(s) f_i(x(s))$ here is not necessarily optimal in terms of obtaining maximal $\l$-intervals for the existence of periodic solutions of the systems.
\ermark
\srmark\label{rem2}
O'Regan and the author \cite{OReganWang2005}, and the author \cite{HWJDE1} established the existence, multiplicity and nonexistence of positive
periodic solutions of the first order ODE
\begin{equation}\label{eq90}
x_i'(t)=a_i(t)g_i(x(t))x_i(t) - \l b_i(t)f_i(x(t-\t(t))), i=1,...,n.
\end{equation}
where $g_i$ are positive bounded functions and $\t \in C(\mathbb{R}, [0, \infty))$ is a $\o$-periodic function. These results can also be extended to (\ref{eq90})
if $f_i$ has a singularity at zero.
\ermark
\setcounter{equation}{0}
\section{Preliminaries}
We recall some concepts and conclusions of an operator in a cone. Let $E$
be a Banach space and $K$ be a closed, nonempty subset of $E$. $K$
is said to be a cone if $(i)$~$\alpha u+\beta v\in K $ for all
$u,v\in K$ and all $\alpha,\beta \geq 0$ and $(ii)$~$u,-u\in K$ imply
$u=0$. The following well-known result of the fixed
point theorem is crucial in our arguments.
\slm\mlabel{lm1} {\rm (\cite{KRAS})} Let $X$ be a Banach
space and $K\ (\subset X)$ be a cone. Assume that $\Omega_1,\
\Omega_2$ are bounded open subsets of $X$ with $0 \in \Omega_1,\bar\Omega_1 \subset \Omega_2$, and let
$$
\mathcal{T}: K \cap (\bar{\Omega}_2\setminus \Omega_1 ) \rightarrow K
$$
be completely continuous operator such that either
\begin{itemize}
\item[{\rm (i)}] $\| \mathcal{T}u \| \geq \| u \|,\ u\in K\cap \partial
\Omega_1$ and $ \| \mathcal{T}u \| \leq \| u \|,\ u\in K\cap \partial
\Omega_2$; or
\item[{\rm (ii)}] $\| \mathcal{T}u \| \leq \| u \|,\ u\in K\cap \partial
\Omega_1$ and $\| \mathcal{T}u \| \geq \| u \|,\ u\in K\cap \partial
\Omega_2$.
\end{itemize}
Then $\mathcal{T}$ has a fixed point in $K \cap ( \bar \Omega_2 \backslash
\Omega_1)$.
\elm
We now introduce some notation. For $r>0$, let
$$\begin{array}{l}
\s=\min\limits_{i=1,\dots,n}\{\s_i \}>0,~~{\rm where}~~\s_i =
e^{-\int_0^{\o}a_i(t)dt}, i=1,\dots,n,\\
M(r)=\max\{f_i(\vect{u}): \vect{u} \in
\mathbb{R}_+^n, \s r\leq \norm{\vect{u}} \leq r,i=1,\dots,n\}>0,\\
m(r)=\min\{f_i(\vect{u}): \vect{u} \in
\mathbb{R}_+^n, \s r\leq \norm{\vect{u}} \leq r,i=1,\dots,n\}>0,\\
\G=\s \min\limits_{i=1,\dots,n}\{\frac{\int^{\o}_{0} b_i(s)
ds}{\s^{-1}_i-1}\}>0,~~\chi = \sum_{i=1}^n
\frac{\s^{-1}_i}{\s^{-1}_i-1} \int^{\o}_{0} b_i(s)ds>0.
\end{array}
$$
In order to apply Lemma \ref{lm1} to (\ref{eq1}), let $X$ be the
Banach space defined by
$$X=\{\vect{u}(t)\in C(\mathbb{R},\mathbb{R}^n):
\vect{u}(t+\o)=\vect{u}(t), t \in \mathbb{R}\}$$ with
a norm $\displaystyle{\norm{\vect{u}}= \sum_{i=1}^n
\sup_{t\in[0,\o]} \abs{u_i(t)}},$ for $\vect{u}=(u_1,...,u_n) \in
X.$ For $\vect{u} \in X$ or $\mathbb{R}^n_+$, $\norm{\vect{u}}$
denotes the norm of $\vect{u}$ in $X$ or $\mathbb{R}^n_+$,
respectively.
Define
$$
K = \{\vect{u}=(u_1,...,u_n) \in X: u_i(t) \geq \s_i
\sup_{t\in[0,\o]} \abs{u_i(t)}, i=1,\dots,n, t \in [0, \o] \}.
$$
It is clear that $K$ is a cone in $X$ and that $\min_{ t \in [0,\o]} \sum_{i=1}^n |u_i(t)| \geq \s \norm{\vect{u}}$ for $\vect{u}=(u_1,...,u_n) \in K$.
For $r>0$, define $\O_r =
\{\vect{u} \in K: \norm{\vect{u}} < r \}. $ It is clear that
$\partial \O_r = \{\vect{u} \in K: \norm{\vect{u}}=r\}$. Let
$\vect{T}_{\l}: K \setminus \{0\}\to X$ be a map with components
$(T_{\l}^1,...,T_{\l}^n)$:
\begin{equation}\label{T_def}
T_{\l}^i\vect{u}(t) = \l \int^{t+\o}_t G_i(t, s) b_i(s)
f_i(\vect{u}(s))ds,~~i=1,\dots,n,
\end{equation}
where
$$
G_i(t,s)=\frac{e^{\int_t^s a_i(\theta)d\theta}}{\s^{-1}_i-1}
$$
satisfying
$$
\frac{1}{\s^{-1}_i-1} \leq G_i(t,s) \leq
\frac{\s^{-1}_i}{\s^{-1}_i-1}\; \text{for} \; t \leq s \leq t+\o.
$$
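Although our arguments below are purely analytical, the operator defined by (\ref{T_def}) is straightforward to evaluate numerically; the following Python sketch (ours, with simple quadrature and illustrative names) computes one component at a point $t$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def T_component(t, a, bf, lam, omega, n_quad=400):
    # T^i_lam u(t) = lam * int_t^{t+omega} G(t, s) b(s) f(u(s)) ds,
    # with G(t, s) = exp(int_t^s a) / (sigma^{-1} - 1) and
    # sigma = exp(-int_0^omega a).  Here bf(s) = b(s) * f(u(s)).
    sigma = np.exp(-quad(a, 0.0, omega)[0])
    s = np.linspace(t, t + omega, n_quad)
    G = np.array([np.exp(quad(a, t, si)[0]) for si in s])
    G = G / (1.0 / sigma - 1.0)
    vals = G * np.array([bf(si) for si in s])
    return lam * np.trapz(vals, s)
\end{verbatim}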
\slm\mlabel{lm-compact} Assume \rm{(H1)-(H2)} hold. Then $\vect{T}
_{\l}(K \setminus \{0\}) \subset K$ and $\vect{T}_{\l}: K\setminus \{0\} \to K$ is compact and
continuous. \elm
\pf If $ \vect{u}=(u_1,...,u_n) \in K \setminus \{0\}$, then $\min_{ t \in [0,\o]} \sum_{i=1}^n |u_i(t)| \geq \s \norm{\vect{u}}>0$,
so that $T_{\l}^i$ is well-defined. In view of the definition of $K$, for
$\vect{u} \in K \setminus \{0\}$, we have, $i=1,\dots,n,$
\begin{equation*}
\begin{split}
(T_{\l}^i\vect{u})(t+\o)
& =\l \int^{t+2\o}_{t+\o} G_i(t+\o, s) b_i(s) f_i(\vect{u}(s))ds \\
& =\l \int^{t+\o}_{t} G_i(t, s) b_i(s) f_i(\vect{u}(s))ds
=(T_{\l}^i\vect{u})(t).
\end{split}
\end{equation*}
It is easy to see that $\int^{t+\o}_{t} b_i(s)
f_i(\vect{u}(s))ds$ is a constant because of the periodicity
of $b_i(t) f_i(\vect{u}(t))$. One can show that, for
$\vect{u} \in K \setminus \{0\} $ and $t \in [0,\o]$, $i=1,\dots,n,$
\begin{equation*}
\begin{split}
T_{\l}^i\vect{u}(t)
& \geq \frac{1}{\s^{-1}_i-1}\l \int^{t+\o}_{t} b_i(s) f_i(\vect{u}(s))ds \\
& = \s_i \frac{\s^{-1}_i}{\s^{-1}_i-1} \l \int^{\o}_{0} b_i(s)
f_i(\vect{u}(s))ds\geq \s_i \sup_{t\in[0,\o]}
\abs{T_{\l}^i\vect{u}(t)}.
\end{split}
\end{equation*}
Thus $\vect{T} _{\l}(K \setminus \{0\}) \subset K$ and it is easy to show that
$\vect{T}_{\l}: K \setminus \{0\}\to K$ is compact and continuous. \epf
\slm\mlabel{lm-fixed-equation-equal} Assume that \rm{(H1)-(H2)}
hold. Then $\vect{u}\in K \setminus \{0\} $ is a positive periodic solution of
(\ref{eq1}) if and only if it is a fixed point of $\vect{T} _{\l}$
in $K \setminus \{0\}$.
\elm \pf If $\vect{u}=(u_1,\dots,u_n) \in K \setminus \{0\}$ and
$\vect{T}_{\l}\vect{u}=\vect{u}$, then, for $i=1,\dots,n,$
\begin{equation*}
\begin{split}
u_i'(t)
& = \frac{d}{dt} (\l \int^{t+\o}_t G_i(t, s) b_i(s) f_i(\vect{u}(s))ds)\\
& = \l G_i(t, t+\o) b_i(t+\o)f_i(\vect{u}(t+\o))- \l G_i(t,t)b_i(t)f_i(\vect{u}(t)) \\
& \quad - a_i(t) T_{\l}^iu(t) \\
& = \l [G_i(t, t+\o)-G_i(t,t)]b_i(t)f_i(\vect{u}(t))-a_i(t) T_{\l}^iu(t)\\
& = - a_i(t) u_i(t)+ \l b_i(t)f_i(\vect{u}(t)).
\end{split}
\end{equation*}
Thus $\vect{u}$ is a positive $\o$-periodic solution of
(\ref{eq1}). On the other hand, if $\vect{u}=(u_1,\dots,u_n)$ is a
positive $\o$-periodic solution of (\ref{eq1}), then $ \l
b_i(t)f_i(\vect{u}(t))= a_i(t) u_i(t)+u_i'(t)$ and
\begin{equation*}
\begin{split}
T_{\l}^i\vect{u}(t)
& = \l \int^{t+\o}_t G_i(t, s) b_i(s) f_i(\vect{u}(s))ds\\
& = \int^{t+\o}_t G_i(t, s) (a_i(s) u_i(s)+u_i'(s))ds \\
& = \int^{t+\o}_t G_i(t, s) a_i(s) u_i(s)ds +\int^{t+\o}_t G_i(t, s) u'_i(s)ds \\
& = \int^{t+\o}_t G_i(t, s) a_i(s) u_i(s)ds + G_i(t, s) u_i(s)|^{t+\o}_t - \int^{t+\o}_t G_i(t, s) a_i(s) u_i(s)ds \\
& = u_i(t).
\end{split}
\end{equation*}
Thus $\vect{T}_{\l}\vect{u}=\vect{u}$. Furthermore, in view of
the proof of Lemma \ref{lm-compact}, we also have $u_i(t) \geq
\s_i \sup_{t\in[0,\o]} u_i(t)$ for $t \in [0,\o].$ That is,
$\vect{u}$ is a fixed point of $\vect{T}_{\l}$ in $K \setminus \{0\}$. \epf
\slm\mlabel{f_estimate_>*} Assume that \rm{(H1)-(H2)} hold. For
any $\eta > 0$ and $ \vect{u}=(u_1, \dots, u_n) \in K \setminus \{0\} $, if there
exists a $f_i$ such that
\mbox{$f_i(\vect{u}(t)) \geq \sum_{j=1}^n u_j(t)\eta$ } for $ t
\in [0, \o]$, then $ \norm{\vect{T}_{\l}\vect{u}} \geq \l \G
\eta \norm{\vect{u}}. $ \elm
\pf Since $\vect{u} \in K\setminus \{0\}$ and
\mbox{$f_i(\vect{u}(t)) \geq \sum_{j=1}^n u_j(t)\eta$ } for $ t
\in [0, \o]$, we have
\begin{equation*}
\begin{split}
(T_{\l}^i\vect{u})(t)
& \geq \frac{1}{\s^{-1}_i-1}\l \int^{\o}_{0} b_i(s) f_i(\vect{u}(s))ds \\
& \geq \frac{1}{\s^{-1}_i-1}\l \int^{\o}_{0} b_i(s) \sum_{j=1}^n u_j(s) \eta ds \\
& \geq \frac{1}{\s^{-1}_i-1}\l \int^{\o}_{0} b_i(s) ds \sum_{j=1}^n \s_j\sup_{t\in[0,\o]} u_j(t) \eta \\
& \geq \l \min_{i=1,\dots,n}\{\s_i\} \frac{\int^{\o}_{0} b_i(s)
ds}{\s^{-1}_i-1} \eta \norm{\vect{u}}.
\end{split}
\end{equation*}
Thus $\norm{\vect{T}_{\l}\vect{u}} \geq \l \G \eta
\norm{\vect{u}}$. \epf
Let $\hat{f}_i: [1, \infty) \to \mathbb{R}_+$ be the function given by
$$\hat{f}_i(\theta) =\max \{f_i(u):u \in \mathbb{R}_+^n \; \rm{and}\; 1 \leq \abs{u} \leq \theta \}, i=1,...,n.$$
It is easy to see that $\hat{f}_i(\theta)$ is a nondecreasing function on $[1,\infty)$. The following lemma is essentially the same as \cite[Lemma 3.6]{Wang2010} and \cite[Lemma 2.8]{HWJMAA1}.
\slm (\cite{Wang2010,HWJMAA1})\mlabel{lm6}
Assume \rm{(H1)} holds. If $\lim_{ \abs{x} \to \infty} \frac{f_i(x)}{\abs{x}}$ exists (which can be infinity), then $\lim_{\theta \to \infty} \frac{\hat{f}_i(\theta)}{\theta}$ exists and
$\lim_{\theta \to \infty} \frac{\hat{f}_i(\theta)}{\theta}=\lim_{ \abs{x} \to \infty} \frac{f_i(x)}{\abs{x}}$.
\elm
\slm\mlabel{f_estimate_<*} Assume that
\rm{(H1)-(H2)} hold. Let $r > \frac{1}{\s}$. If there exists an $\e > 0$ such that
$$
\hat{f}_i(r) \leq \e r,\;\; i=1,...,n,
$$
then
$$
\norm{\vect{T}_{\l}\vect{u}} \leq \l \chi \varepsilon
\norm{\vect{u}} $$
for $\vect{u}=(u_1, \dots, u_n) \in \partial\O_{r}$.
\elm
\pf From the definition of $\vect{T}$, for
$\vect{u} \in \partial\O_{r}$, we have
\begin{equation*}
\begin{split}
\norm{\vect{T}_{\l}\vect{u}}
& \leq \sum_{i=1}^n \frac{\s^{-1}_i}{\s^{-1}_i-1} \l \int^{\o}_{0} b_i(s) f_i(\vect{u}(s))ds \\
& \leq \sum_{i=1}^n \frac{\s^{-1}_i}{\s^{-1}_i-1} \l \int^{\o}_{0} b_i(s) \hat{f}_i(r)ds \\
& \leq \sum_{i=1}^n \frac{\s^{-1}_i}{\s^{-1}_i-1} \l \int^{\o}_{0} b_i(s)ds \varepsilon \norm{\vect{u}}= \l \chi \varepsilon \norm{\vect{u}}.
\end{split}
\end{equation*}
\epf
In view of the definitions of $m(r)$ and $M(r)$, it follows that
$ M(r) \geq f_i(\vect{u}(t)) \geq m(r)$ \; $\rm{for}\; t \in [0,
\o]$, $i=1,\dots,n$ if $\vect{u} \in \partial \O_{r}$, $r>0$ .
Thus it is easy to see that the following two lemmas can be shown
in a similar manner to Lemmas \ref{f_estimate_>*} and
\ref{f_estimate_<*}.
\slm\mlabel{lm8} Assume \rm{(H1)-(H2)} hold. If $ \vect{u} \in
\partial \O_{r}$, $r
>0$, then $ \norm{\vect{T}_{\l}\vect{u}} \geq \l \frac{\G}{\s} m(r). $\elm
\slm\mlabel{lm9} Assume \rm{(H1)-(H2)} hold. If $ \vect{u}\in
\partial \O_{r}$, $r >0$, then $ \norm{\vect{T}_{\l}\vect{u}}
\leq \l \chi M(r). $ \elm
\setcounter{equation}{0}
\section{Proof of Theorem \ref{th1}}
Part (a). From the assumptions,
there is an $r_1 > 0$ such that
$$
f_i(\vect{u}) \geq \eta \norm{\vect{u}}
$$
for $ \vect{u}=(u_1,...,u_n) \in \mathbb{R}_+^n$ and $ 0<\norm{\vect{u}} \leq r_1,$
where $\eta > 0$ is chosen so that
$$
\l \G \eta > 1.
$$
If $ \vect{u}=(u_1,...,u_n) \in \partial \O_{r_1}$, then
$$f_i(\vect{u}(t)) \geq \eta\sum_{j=1}^n u_j(t), \;\; {\rm for } \;\; t \in [0,\o].$$
Lemma ~\ref{f_estimate_>*} implies that
$$
\norm{\vect{T}_{\l}\vect{u}} \geq \l \G \eta \norm{\vect{u}} > \norm{\vect{u}} \quad \textrm{for} \quad \vect{u} \in \partial\O_{r_1}.
$$
We now determine $\O_{r_2}$. Since $\lim_{ \norm{\vect{u}} \to \infty} \frac{f_i(\vect{u})}{\norm{\vect{u}}}=0$, $i=1,\dots,n$,
it follows from Lemma ~\ref{lm6} that $\lim_{\theta \to \infty }\frac{\hat{f}_i(\theta)}{\theta}=0$, $i=1,...,n.$
Therefore there is an $r_2>\max\{2r_1,\frac{1}{\s}\}$ such that
$$
\hat{f}_i(r_2) \le \e r_2,\;i=1,...,n,
$$
where the constant $\e > 0$ satisfies
$$
\l \e \chi < 1.
$$
Thus, we have by Lemma ~\ref{f_estimate_<*} that
$$
\norm{\vect{T}_{\l}\vect{u}} \leq \l \e \chi \norm{\vect{u}} < \norm{\vect{u}} \quad \textrm{for} \quad \vect{u} \in \partial\O_{r_2}.
$$
It follows from Lemma~\ref{lm1} that
$\vect{T}_{\l}$ has a fixed point in $\O_{r_2} \setminus \bar{\O}_{r_1}$, which is the desired positive solution of (\ref{eq1}).
\epf
Part (b).
Fix a number $r_1 > 0$. Lemma \ref{lm9} implies that there exists a $\l_0 >0$ such that
$$
\norm{\vect{T}_{\l}\vect{u}} < \norm{\vect{u}}, \; {\rm for} \; \vect{u} \in \partial \O_{r_1},\; 0< \l < \l_0.
$$
In view of $\lim_{ \norm{\vect{u}} \to 0}f_i(\vect{u}) = \infty$, there is a positive number $r_2 < r_1$ such that
$$
f_i(\vect{u}) \geq \eta \norm{\vect{u}}
$$
for $ \vect{u}=(u_1,...,u_n) \in \mathbb{R}_+^n$ and $ 0<\norm{\vect{u}} \leq r_2,$
where $\eta > 0$ is chosen so that
$$
\l \G \eta > 1.
$$
Then
$$f_i(\vect{u}(t)) \geq \eta\sum_{j=1}^n u_j(t),$$
for $\vect{u}=(u_1,...,u_n) \in \partial \O_{r_2}, \;\;t \in [0,\o].$
Lemma ~\ref{f_estimate_>*} implies that
$$
\norm{\vect{T}_{\l}\vect{u}} \geq \l \G \eta \norm{\vect{u}} > \norm{\vect{u}} \quad \textrm{for} \quad \vect{u} \in \partial\O_{r_2}.
$$
On the other hand, since $\lim_{\norm{\vect{u}} \to \infty} \frac{f_i(\vect{u})}{\norm{\vect{u}}}=\infty$, there is
an $\hat{H} > 0$ such that
$$
f_i(\vect{u}) \geq \eta \norm{\vect{u}}
$$
for $ \vect{u}=(u_1,...,u_n) \in \mathbb{R}_+^n$ and $\norm{\vect{u}} \geq \hat{H}$ ,
where $\eta > 0$ is chosen so that
$$
\l \G \eta > 1.
$$
Let $r_3 = \max\{2r_1, \frac{\hat{H}}{\s} \}$. If $ \vect{u}=(u_1 ,...,u_n) \in \partial \O_{r_3}$, then
$$ \min_{0\leq t \leq\o} \sum_{i=1}^n u_i(t) \geq \s
\norm{\vect{u}}= \s r_3 \geq \hat{H},$$
which implies that
$$
f_i(\vect{u}(t)) \geq \eta\sum_{j=1}^n u_j(t)\; \rm{for} \; t \in [0,\o].
$$
It follows from Lemma ~\ref{f_estimate_>*} that
$$
\norm{\vect{T}_{\l}\vect{u}} \geq \l\G \eta \norm{\vect{u}} > \norm{\vect{u}} \quad \rm{for} \quad \vect{u} \in \partial\O_{r_3}.
$$
It follows from Lemma ~\ref{lm1} that
$\vect{T}_{\l}$ has two fixed points $\vect{u}_1 \in \O_{r_1} \setminus \bar{\O}_{r_2}$ and $\vect{u}_2 \in \O_{r_3} \setminus \bar{\O}_{r_1}$ such that
$$
r_2 < \norm{\vect{u}_1} < r_1 < \norm{\vect{u}_2} < r_3.
$$
Consequently, (\ref{eq1})
has two positive solutions for $ 0< \l < \l_0$.
Part (c).
Fix a number $r_1 > 0$. Lemma \ref{lm8} implies that there exists a $\l_0 >0$ such that
$$
\norm{\vect{T}_{\l}\vect{u}} < \norm{\vect{u}}, \; {\rm for} \; \vect{u} \in \partial \O_{r_1},\; 0< \l < \l_0.
$$
In view of $\lim_{\norm{\vect{u}}\to 0}f_i(\vect{u}) = \infty$, there is a positive number $r_2 < r_1$ such that
$$
f_i(\vect{u}) \geq \eta \norm{\vect{u}}
$$
for $ \vect{u}=(u_1,...,u_n) \in \mathbb{R}_+^n$ and $ 0<\norm{\vect{u}} \leq r_2,$
where $\eta > 0$ is chosen so that
$$
\l \G \eta > 1.
$$
Then
$$f_i(\vect{u}(t)) \geq \eta\sum_{j=1}^n u_j(t),$$
for $\vect{u}=(u_1,...,u_n) \in \partial \O_{r_2}, \;\;t \in [0,\o].$
Lemma ~\ref{f_estimate_>*} implies that
$$
\norm{\vect{T}_{\l}\vect{u}} \geq \l \G \eta \norm{\vect{u}} > \norm{\vect{u}} \quad \textrm{for} \quad \vect{u} \in \partial\O_{r_2}.
$$
It follows from Lemma ~\ref{lm1} that
$\vect{T}_{\l}$ has a fixed point in $\O_{r_1} \setminus \bar{\O}_{r_2}$. Consequently, (\ref{eq1})
has a positive solution for $ 0< \l < \l_0$. \epf
|
2,869,038,155,833 | arxiv | \section{Introduction}}
Finding the ground state of a quantum many-body system is a fundamental problem with far-reaching consequences for physics, materials science, and chemistry.
Many powerful methods \cite{HohenbergKohn, NobelKohn, CEPERLEY555,SandvikSSE,becca_sorella_2017, DMRG1,DMRG2} have been proposed,
but classical computers still struggle to solve many general classes of the ground state problem.
To extend the reach of classical computers, classical machine learning (ML) methods have recently been adapted to study this problem~\cite{CarleoRMP,APXReview, dassarma2017, carrasquilla2017nature,Carleo_2017,torlai_learning_2016,Nomura2017, evert2017nature,leiwang2016,gilmer2017neural,torlai_Tomo,vargas2018extrapolating,schutt2019unifying,Glasser2018,caro2022out,rodriguez2019identifying,qiao2020orbnet,choo_fermionicnqs2020,kawai2020predicting,moreno2020deep,Kottmann2021}.
A recent work \cite{huang2021provably} proposes a polynomial-time classical ML algorithm that can efficiently predict ground state properties of gapped geometrically local Hamiltonians, after learning from data obtained by measuring other Hamiltonians in the same quantum phase of matter.
Furthermore, \cite{huang2021provably} shows that under a widely accepted conjecture, no polynomial-time classical algorithm can achieve the same performance guarantee.
However, although the ML algorithm given in \cite{huang2021provably} uses a polynomial amount of training data and computational time, the polynomial scaling $\mathcal{O}(n^c)$ has a very large degree~$c$.
Moreover, when the prediction error $\epsilon$ is small, the amount of training data grows exponentially in $1 / \epsilon$, indicating that a very small prediction error cannot be achieved efficiently.
In this work, we present an improved ML algorithm for predicting ground state properties.
We consider an $m$-dimensional vector $x \in [-1, 1]^{m}$ that parameterizes an $n$-qubit gapped geometrically local Hamiltonian given as
\begin{equation}
H(x) = \sum_{j} h_j(\vec{x}_j),
\end{equation}
where $x$ is the concatenation of constant-dimensional vectors $\vec{x}_1, \ldots, \vec{x}_L$ parameterizing the few-body interaction $h_j(\vec{x}_j)$.
Let $\rho(x)$ be the ground state of $H(x)$ and $O$ be a sum of geometrically local observables with $\norm{O}_\infty \leq 1$.
We assume that the geometry of the $n$-qubit system is known, but we do not know how $h_j(\vec{x}_j)$ is parameterized or what the observable $O$ is.
The goal is to learn a function $h^*(x)$ that approximates the ground state property $\Tr(O \rho(x))$ from a classical dataset,
\begin{equation}
\big( x_\ell, y_\ell \big), \quad \forall \ell = 1, \ldots, N,
\end{equation}
where $y_\ell \approx \Tr(O \rho(x_\ell))$ records the ground state property for $x_\ell \in [-1, 1]^m$ sampled from an arbitrary unknown distribution~$\mathcal{D}$.
The setting considered in this work is very similar to that in \cite{huang2021provably}, but we assume the geometry of the $n$-qubit system to be known, which is necessary to overcome the sample complexity lower bound of $N = n^{\Omega(1 / \epsilon)}$ given in \cite{huang2021provably}.
One may compare the setting to that of finding ground states using adiabatic quantum computation \cite{farhi2000quantum,mizel2007simple,childs2001robustness,aharonov2008adiabatic,barends2016digitized,albash2018adiabatic,du2010nmr,wan2020fast}.
To find the ground state property $\Tr(O \rho(x))$ of $H(x)$,
this class of quantum algorithms requires the ground state $\rho_0$ of another Hamiltonian $H_0$ stored in quantum memory, explicit knowledge of a gapped path connecting $H_0$ and $H(x)$, and an explicit description of~$O$.
In contrast, here we focus on ML algorithms that are entirely classical, have no access to quantum state data, and have no knowledge about the Hamiltonian $H(x)$, the observable $O$, or the gapped paths between $H(x)$ and other Hamiltonians.
The proposed ML algorithm uses a nonlinear feature map $x \mapsto \phi(x)$ with a geometric inductive bias built into the mapping.
At a high level, the high-dimensional vector $\phi(x)$ contains nonlinear functions for each geometrically local subset of coordinates in the $m$-dimensional vector~$x$.
Here, the geometry over coordinates of the vector $x$ is defined using the geometry of the $n$-qubit system.
The ML algorithm learns a function $h^*(x) = \mathbf{w}^* \cdot \phi(x)$ by training an $\ell_1$-regularized regression (LASSO) \cite{doi:10.1137/0907087, tibshirani1996regression, mohri2018foundations} in the feature space.
We prove that given $\epsilon = \Theta(1)$, the improved ML algorithm can use a dataset size of
\begin{equation}
N = \mathcal{O}\left(\log\left(n\right)\right),
\end{equation}
to learn a function $h^*(x)$ with an average prediction error of at most $\epsilon$,
\begin{equation}
\E_{x \sim \mathcal{D}} \left| h^*(x) - \Tr(O \rho(x)) \right|^2 \leq \epsilon,
\end{equation}
with high success probability.
The sample complexity $N = \mathcal{O}\left(\log\left(n\right)\right)$ of the proposed ML algorithm improves substantially over the sample complexity of $N = \mathcal{O}(n^c)$ in the previously best-known classical ML algorithm \cite{huang2021provably}, where $c$ is a very large constant.
The computational time of both the improved ML algorithm and the ML algorithm in \cite{huang2021provably} is $\mathcal{O}(n N)$.
Hence, the logarithmic sample complexity $N$ immediately implies a nearly linear computational time.
In addition to the reduced sample complexity and computational time,
the proposed ML algorithm works for any distribution over $x$, while the best previously known algorithm \cite{huang2021provably} works only for the uniform distribution over $[-1, 1]^m$.
Furthermore, when we consider the scaling with the prediction error $\epsilon$, the best known classical ML algorithm in \cite{huang2021provably} has a sample complexity of $N = n^{\mathcal{O}(1 / \epsilon)}$, which is exponential in $1 / \epsilon$.
In contrast, the improved ML algorithm has a sample complexity of $N = \log(n) 2^{\mathrm{polylog}(1 / \epsilon)}$, which is quasi-polynomial in $1 / \epsilon$.
In combination with the classical shadow formalism \cite{huang2020predicting, elben2020mixed, elben2022randomized, wan2022matchgate, bu2022classical}, the proposed ML algorithm also yields the same reduction in sample and time complexity compared to \cite{huang2021provably} for predicting ground state representations.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/overview.pdf}
\caption{\textbf{Overview of the proposed machine learning algorithm.} Given a vector $x \in [-1, 1]^m$ that parameterizes a quantum many-body Hamiltonian $H(x)$, the algorithm uses a geometric structure to create a high-dimensional vector $\phi(x) \in \mathbb{R}^{m_\phi}$. The ML algorithm then predicts properties or a representation of the ground state $\rho(x)$ of the Hamiltonian $H(x)$ using the $m_\phi$-dimensional vector $\phi(x)$. }
\label{fig:overview}
\end{figure}
\vspace{2em}
{\renewcommand\addcontentsline[3]{}\section{ML algorithm and rigorous guarantee}}
The central component of the improved ML algorithm is the geometric inductive bias built into our feature mapping $x \in [-1, 1]^m \mapsto \phi(x) \in \mathbb{R}^{m_\phi}$.
To describe the ML algorithm, we first need to present some definitions relating to this geometric structure.
\vspace{2em}
{\renewcommand\addcontentsline[3]{} \subsection{Definitions}}
We consider $n$
qubits arranged at locations, or sites, in a $d$-dimensional space, e.g., a spin chain ($d=1$), a square lattice ($d=2$), or a cubic lattice ($d=3$).
This geometry is characterized by the distance $d_{\mathrm{qubit}}(i, i')$ between any two qubits $i$ and $i'$.
Using the distance $d_{\mathrm{qubit}}$ between qubits, we can define the geometry of local observables.
Given any two observables $O_A, O_B$ on the $n$-qubit system,
we define the distance $d_{\mathrm{obs}}(O_A, O_B)$ between the two observables as the minimum distance between the qubits that $O_A$ and $O_B$ act on.
We also say an observable is geometrically local if it acts nontrivially only on nearby qubits under the distance metric $d_{\mathrm{qubit}}$.
We then define $S^{\mathrm{(geo)}}$ as the set of all geometrically local Pauli observables, i.e., geometrically local observables that belong to the set $\{I, X, Y, Z\}^{\otimes n}$.
The size of $S^{\mathrm{(geo)}}$ is $\mathcal{O}(n)$, linear in the total number of qubits.
With these basic definitions in place, we now define a few more geometric objects.
The first object is the set of coordinates in the $m$-dimensional vector $x$ that are close to a geometrically local Pauli observable $P$. This is formally given by,
\begin{equation}
\label{eq:ip}
I_P \triangleq \left\{ c \in \{1, \dots, m\} : d_{\mathrm{obs}}(h_{j(c)}, P) \leq \delta_1 \right\},
\end{equation}
where $h_{j(c)}$ is the few-body interaction term in the $n$-qubit Hamiltonian $H(x)$ that is parameterized by the variable $x_c \in [-1, 1]$, and $\delta_1$ is an efficiently computable hyperparameter that is determined later.
Note that, by definition, each variable $x_c$ parameterizes one of the interaction terms $h_{j(c)}$.
Intuitively, $I_P$ is the set of coordinates that have the strongest influence on the function $\Tr(P \rho(x))$.
The second geometric object is a discrete lattice over the space $[-1, 1]^m$ associated to each subset $I_P$ of coordinates.
For any geometrically local Pauli observable $P \in S^{\mathrm{(geo)}}$, we define $X_P$ to contain all vectors $x$ that take on value $0$ for coordinates outside $I_P$ and take on a set of discrete values for coordinates inside $I_P$. Formally, this is given by
\begin{equation}
X_P \triangleq \left.\begin{cases}
x \in [-1,1]^m : \text{if } c \notin I_P, \,\, x_{c} = 0\\
\hspace{62pt} \text{if } c \in I_P, \,\, x_{c} \in \left\{0, \pm \delta_2, \pm 2\delta_2,\dots, \pm 1\right\}
\end{cases}\right\},
\end{equation}
where $\delta_2$ is an efficiently computable hyperparameter to be determined later.
The definition of $X_P$ is meant to enumerate all sufficiently different vectors for coordinates in the subset $I_P \subseteq \{1, \ldots, m\}$.
Now given a geometrically local Pauli observable $P$ and a vector $x$ in the discrete lattice $X_P \subseteq [-1, 1]^m$, the third object is a set $T_{x, P}$ of vectors in $[-1, 1]^m$ that are close to $x$ for coordinates in $I_P$. This is formally defined as,
\begin{equation}
T_{x, P} \triangleq \left\{ x' \in [-1,1]^m : -\frac{\delta_2}{2} < x_{c} - x_{c}' \leq \frac{\delta_2}{2}, \forall c \in I_P \right\}.
\end{equation}
The set $T_{x, P}$ is defined as a thickened affine subspace close to the vector $x$ for coordinates in $I_P$.
If a vector $x'$ is in $T_{x, P}$, then $x'$ is close to $x$ for all coordinates in $I_P$, but $x'$ may be far away from $x$ for coordinates outside of $I_P$.
\vspace{2em}
{\renewcommand\addcontentsline[3]{} \subsection{Feature mapping and ML model}\label{subsec:ML-model}}
We can now define the feature map $\phi$ taking an $m$-dimensional vector $x$ to an $m_\phi$-dimensional vector $\phi(x)$ using the thickened affine subspaces $T_{x', P}$ for every geometrically local Pauli observable $P \in S^{\mathrm{(geo)}}$ and every vector $x'$ in the discrete lattice $X_P$.
The dimension of the vector $\phi(x)$ is given by $m_{\phi} = \sum_{P \in S^{\mathrm{(geo)}}} |X_P|$.
Each coordinate of the vector $\phi(x)$ is indexed by $x' \in X_P$ and $P \in S^{\mathrm{(geo)}}$ with
\begin{equation} \label{eq:phi-def}
\phi(x)_{x', P} \triangleq \mathds{1}\left[x \in T_{x', P}\right],
\end{equation}
which is the indicator function checking if $x$ belongs to the thickened affine subspace.
Recall that this means each coordinate of the $m_\phi$-dimensional vector $\phi(x)$ checks if $x$ is close to a point $x'$ on a discrete lattice $X_P$ for the subset $I_P$ of coordinates close to a geometrically local Pauli observable $P$.
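A minimal sketch of this feature map, assuming the pairs $(x', I_P)$ for all $P \in S^{\mathrm{(geo)}}$ and $x' \in X_P$ have been precomputed and stored in a list \texttt{cells} (a hypothetical data structure, not part of the definition above):
\begin{verbatim}
import numpy as np

# A sketch of the indicator feature map phi; `cells` is a
# hypothetical precomputed list of (x_prime, I_P) pairs, one per
# coordinate (x', P) of phi, with a common lattice spacing delta2.
def phi(x, cells, delta2):
    feats = np.zeros(len(cells))
    for i, (x_prime, I_P) in enumerate(cells):
        if all(-delta2 / 2 < x_prime[c] - x[c] <= delta2 / 2
               for c in I_P):
            feats[i] = 1.0     # x lies in T_{x', P}
    return feats
\end{verbatim}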
The classical ML model we consider is an $\ell_1$-regularized regression (LASSO) over the $\phi(x)$ space.
More precisely, given an efficiently computable hyperparameter $B > 0$, the classical ML model finds an $m_\phi$-dimensional vector $\mathbf{w}^*$ from the following optimization problem,
\begin{equation}
\min_{\substack{\mathbf{w} \in \mathbb{R}^{m_\phi}\\ \norm{\mathbf{w}}_1 \leq B} } \, \frac{1}{N} \sum_{\ell=1}^N \left| \mathbf{w} \cdot \phi(x_\ell) - y_\ell \right|^2,
\end{equation}
where $\{(x_\ell, y_\ell)\}_{\ell=1}^N$ is the training data.
Here, $x_\ell \in [-1,1]^m$ is an $m$-dimensional vector that parameterizes a Hamiltonian $H(x)$ and $y_\ell$ approximates $\Tr(O\rho(x_\ell))$.
The learned function is given by $h^*(x) = \mathbf{w}^* \cdot \phi(x)$.
The optimization does not have to be solved exactly.
We only need to find a $\mathbf{w}^*$ whose objective value is at most $\mathcal{O}(\epsilon)$ larger than the minimum.
There is an extensive literature \cite{efron2004least, daubechies2004iterative, combettes2005signal, cesa2011efficient, friedman2010regularization, hazan2012linear, chen2021quantum} improving the computational time for the above optimization problem.
The best known classical algorithm \cite{hazan2012linear} has a computational time scaling linearly in $m_\phi / \epsilon^2$ up to a log factor, while the best known quantum algorithm \cite{chen2021quantum} has a computational time scaling linearly in $\sqrt{m_\phi} / \epsilon^2$ up to a log factor.
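In practice, one often solves the penalized (Lagrangian) form of this constrained problem. A minimal sketch using \texttt{scikit-learn}, where the penalty strength \texttt{alpha} stands in for the constraint level $B$ and would be tuned, e.g., by cross-validation:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

# A sketch of the l1-regularized regression in penalized form;
# alpha stands in for the constraint B and would be tuned.
def train_lasso(Phi, y, alpha=1e-3):
    """Phi: (N, m_phi) matrix with rows phi(x_l); y: N targets."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(Phi, y)
    return model.coef_         # the learned weight vector w*

# Prediction for new inputs: h(x) = w* . phi(x), i.e., Phi_new @ w_star.
\end{verbatim}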
\vspace{2em}
{\renewcommand\addcontentsline[3]{} \subsection{Rigorous guarantee}}
\noindent The classical ML algorithm given above yields the following sample and computational complexity.
This theorem improves substantially upon the result in \cite{huang2021provably}, which requires $N = n^{\mathcal{O}(1 / \epsilon)}$.
The proof idea is given in Section~\ref{sec:proofidea}, and the detailed proof is given in Appendices~\ref{sec:simple},~\ref{sec:norminequality},~\ref{sec:algorithm}.
Using the proof techniques presented in this work, one can show that the sample complexity $N = \log(n / \delta) 2^{\mathrm{polylog}(1 / \epsilon)}$ also applies to any sum of few-body observables $O = \sum_j O_j$ with $\sum_j \norm{O_j}_\infty \leq 1$, even if the operators $\{O_j\}$ are not geometrically local.
\vspace{0.5em}
\begin{theorem}[Sample and computational complexity] \label{thm:rigor-guarantee}
Given $n, \delta > 0$, $\tfrac{1}{e} > \epsilon > 0$ and a training data set $\{x_\ell, y_\ell\}_{\ell = 1}^N$ of size
\begin{equation}
N = \log(n / \delta) 2^{\mathrm{polylog}(1 / \epsilon)},
\end{equation}
where $x_\ell$ is sampled from an unknown distribution $\mathcal{D}$ and $|y_\ell - \Tr(O \rho(x_\ell))| \leq \epsilon$ for any observable~$O$ with eigenvalues between $-1$ and $1$ that can be written as a sum of geometrically local observables, and given a proper choice of the efficiently computable hyperparameters $\delta_1, \delta_2$, and $B$,
the learned function $h^*(x) = \mathbf{w}^* \cdot \phi(x)$ satisfies
\begin{equation}
\E_{x \sim \mathcal{D}} \left| h^*(x) - \Tr(O \rho(x)) \right|^2 \leq \epsilon
\end{equation}
with probability at least $1 - \delta$.
The training and prediction time of the classical ML model are bounded by $\mathcal{O}(n N) = \mathcal{O}\!\left(n \log(n / \delta)\, 2^{\mathrm{polylog}(1 / \epsilon)}\right)$.
\end{theorem}
The output $y_\ell$ in the training data can be obtained by measuring $\Tr(O \rho(x_\ell))$ for the same observable~$O$ multiple times and averaging the outcomes.
Alternatively, we can use the classical shadow formalism~\cite{huang2020predicting, elben2020mixed, elben2022randomized, wan2022matchgate, bu2022classical, van2022hardware} that performs randomized Pauli measurements on $\rho(x_\ell)$ to predict $\Tr(O \rho(x_\ell))$ for a wide range of observables $O$.
Theorem~\ref{thm:rigor-guarantee} and the classical shadow formalism together yield the following corollary for predicting ground state representations.
We present the proof of Corollary~\ref{corollary:ground-state-rep} in Appendix~\ref{sec:rigor-guarantee}.
\begin{corollary}
\label{corollary:ground-state-rep}
Given $n, \delta > 0$, $\tfrac{1}{e} > \epsilon > 0$ and a training data set $\{x_\ell, \sigma_T(\rho(x_\ell))\}_{\ell = 1}^N$ of size
\begin{equation}
N = \log(n / \delta) 2^{\mathrm{polylog}(1 / \epsilon)},
\end{equation}
where $x_\ell$ is sampled from an unknown distribution $\mathcal{D}$ and $\sigma_T(\rho(x_\ell))$ is the classical shadow representation of the ground state $\rho(x_\ell)$ using $T$ randomized Pauli measurements, with $T = \tilde{\mathcal{O}}(\log(n)/\epsilon^2)$, the proposed ML algorithm can learn a ground state representation $\hat{\rho}_{N, T}(x)$ that achieves
\begin{equation}
\E_{x \sim \mathcal{D}}|\Tr(O\hat{\rho}_{N, T}(x)) - \Tr(O\rho(x))|^2 \leq \epsilon
\end{equation}
with probability at least $1-\delta$, for any observable $O$ with eigenvalues between $-1$ and $1$ that can be written as a sum of geometrically local observables.
\end{corollary}
We can also show that the problem of estimating ground state properties for the class of parameterized Hamiltonians $H(x) = \sum_j h_j(\vec{x}_j)$ considered in this work is hard for non-ML algorithms that cannot learn from data.
This is a manifestation of the computational power of data studied in \cite{huang2020power}.
The proof of Proposition~1 in~\cite{huang2021provably} constructs a parameterized Hamiltonian $H(x)$ that belongs to the family of parameterized Hamiltonians considered in this work and hence establishes the following.
\begin{prop}[A variant of Proposition 1 in~\cite{huang2021provably}]
Consider a randomized polynomial-time classical algorithm $\mathcal{A}$ that does not learn from data. Suppose that for any smooth family of gapped 2D Hamiltonians $H(x) = \sum_j h_j(\vec{x}_j)$ and any single-qubit observable $O$, $\mathcal{A}$ can compute ground state properties $\Tr(O\rho(x))$ up to a constant error, averaged over $x$ drawn uniformly from $[-1, 1]^m$.
Then, $\NP$-complete problems can be solved in randomized polynomial time.
\end{prop}
{\renewcommand\addcontentsline[3]{} \section{Proof ideas} \label{sec:proofidea} }
We describe the key ideas behind the proof of Theorem~\ref{thm:rigor-guarantee}.
The proof is separated into three parts.
The first part in Appendix~\ref{sec:simple} describes the existence of a simple functional form that approximates the ground state property $\Tr(O \rho(x))$.
The second part in Appendix~\ref{sec:norminequality} gives a new bound for the $\ell_1$-norm of the Pauli coefficients of the observable $O$ when written in the Pauli basis.
The third part in Appendix~\ref{sec:algorithm} combines the first two parts, using standard tools from learning theory to establish the sample complexity corresponding to the prediction error bound given in Theorem~\ref{thm:rigor-guarantee}.
In the following, we discuss these three parts in detail.
\vspace{2em}
{\renewcommand\addcontentsline[3]{} \subsection{Simple form for ground state property}}
Using the spectral flow formalism \cite{bachmann2012automorphic,hastings2005quasiadiabatic,osborne2007simulating}, we first show that the ground state property can be approximated by a sum of local functions.
First, we write $O$ in the Pauli basis as $O = \sum_{P \in \{I, X, Y, Z\}^{\otimes n}} \alpha_P P$.
Then, we show that for every geometrically local Pauli observable $P$, we can construct a function $f_P(x)$ that depends only on coordinates in the subset $I_P$ of coordinates that parameterizes interaction terms $h_j$ near the Pauli observable $P$.
The function $f_P(x)$ is given by
\begin{equation} \label{eq:fP-def}
f_P(x) = \alpha_P \Tr(P \rho(\chi_P(x))),
\end{equation}
where $\chi_P(x) \in [-1, 1]^m$ is defined as $\chi_P(x)_c = x_c$ for coordinate $c \in I_P$ and $\chi_P(x)_c = 0$ for coordinates $c \not\in I_P$.
The sum of these local functions $f_P$ can be used to approximate the ground state property,
\begin{equation}
\Tr(O \rho(x)) \approx \sum_{P \in S^{\mathrm{(geo)}}} f_P(x).
\end{equation}
The approximation only incurs an $\mathcal{O}(\epsilon)$ error if we consider $\delta_1 = \Theta(\log^2(1 / \epsilon))$ in the definition of $I_P$.
The key point is that correlations decay exponentially with distance in the ground state of a gapped local Hamiltonian; therefore, the properties of the ground state in a localized region are not sensitive to the details of the Hamiltonian at points far from that localized region.
Furthermore, the local function $f_P$ is smooth.
The smoothness property allows us to approximate each local function $f_P$ by a simple discretization,
\begin{equation}
f_P(x) \approx \sum_{x' \in X_P} f_P(x') \mathds{1}\left[x \in T_{x', P}\right].
\end{equation}
One could also use other approximations for this step, such as Fourier approximation or polynomial approximation.
For simplicity, we consider a discretization-based approximation with $\delta_2 = \Theta(\epsilon)$
in the definition of $T_{x',P}$ to incur at most an $\mathcal{O}(\epsilon)$ error.
The point is that, for a sufficiently smooth function $f_P(x)$ that depends only on coordinates in $I_P$ and a sufficiently fine lattice over the coordinates in $I_P$, replacing $x$ by the nearest lattice point (based only on coordinates in $I_P$) causes only a small error.
Using the definition of the feature map $\phi(x)$ in Eq.~\eqref{eq:phi-def}, we have
\begin{equation} \label{eq:w-prime-def}
\Tr(O \rho(x)) \approx \sum_{P \in S^{\mathrm{(geo)}}} \sum_{x' \in X_P} f_P(x') \phi(x)_{x', P} = \mathbf{w}' \cdot \phi(x),
\end{equation}
where $\mathbf{w}'$ is an $m_\phi$-dimensional vector indexed by $x' \in X_P$ and $P \in S^{\mathrm{(geo)}}$, given by $\mathbf{w}_{x', P}' = f_P(x')$.
The approximation is accurate if we consider $\delta_1 = \Theta(\log^2(1 / \epsilon))$ and $\delta_2 = \Theta(\epsilon)$.
Thus, we can see that the ML algorithm with the proposed feature mapping indeed has the capacity to approximately represent the target function $\Tr(O \rho(x))$.
As a result, we have the following lemma.
\vspace{0.5em}
\begin{lemma}[Training error bound]
\label{lemma:main-training}
The function given by $\mathbf{w}' \cdot \phi(x)$ achieves a small training error:
\begin{equation}
\frac{1}{N} \sum_{\ell=1}^N \left| \mathbf{w}' \cdot \phi(x_\ell) - y_\ell \right|^2 \leq 0.53 \epsilon.
\end{equation}
\end{lemma}
\noindent This lemma follows from the two facts that $\mathbf{w}' \cdot \phi(x) \approx \Tr(O \rho(x))$ and $\Tr(O \rho(x_\ell)) \approx y_\ell$.
{\renewcommand\addcontentsline[3]{} \subsection{Norm inequality for observables}}
The efficiency of an $\ell_1$-regularized regression depends greatly on the $\ell_1$ norm of the vector $\mathbf{w}'$.
Moreover, the $\ell_1$-norm of $\mathbf{w}'$ is closely related to the observable $O = \sum_j O_j$ given as a sum of geometrically local observables with $\norm{O}_\infty \leq 1$.
In particular, again writing $O$ in the Pauli basis as $O = \sum_{Q \in \{I, X, Y, Z\}^{\otimes n}} \alpha_Q Q$, the $\ell_1$-norm $\norm{\mathbf{w}'}_1$ is closely related to
$\sum_{Q} \left| \alpha_Q \right|,$
which we refer to as the Pauli $1$-norm of the observable $O$.
While it is well known that
\begin{equation}
\sum_{Q} \left| \alpha_Q \right|^2 = \Tr(O^2) / 2^n \leq \norm{O}_\infty^2,
\end{equation}
there do not seem to be many known results characterizing $\sum_{Q} \left| \alpha_Q \right|$.
To understand the Pauli $1$-norm,
we prove the following theorem.
\vspace{0.5em}
\begin{theorem}[Pauli $1$-norm bound]
\label{thm:main-normineq}
Let $O = \sum_{Q \in \{I, X, Y, Z\}^{\otimes n}} \alpha_Q Q$ be an observable that can be written as a sum of geometrically local observables. We have,
\begin{equation}
\sum_Q |\alpha_Q| \leq C \norm{O}_\infty,
\end{equation}
for some constant $C$.
\end{theorem}
\noindent A series of related norm inequalities are also established in \cite{huang2022learning}.
However, the techniques used in this work differ significantly from those in~\cite{huang2022learning}.
{\renewcommand\addcontentsline[3]{}\subsection{Prediction error bound for the ML algorithm}}
\noindent Using the construction of the local function $f_P(x_{c}, c \in I_P)$ given in Eq.~\eqref{eq:fP-def} and the vector $\mathbf{w}'$ defined in Eq.~\eqref{eq:w-prime-def}, we can show that
\begin{equation}
\norm{\mathbf{w}'}_1 \leq \max_{P \in S^{\mathrm{(geo)}}} \left| X_P \right| \left( \sum_{Q} \left| \alpha_Q \right| \right) \leq \left(1 + \frac{2}{\delta_2}\right)^{\mathrm{poly}(\delta_1)} \left( \sum_{Q} \left| \alpha_Q \right| \right).
\end{equation}
The second inequality follows by bounding the size of our discrete subset $X_P$ and noticing that $|I_P| = \mathrm{poly}(\delta_1)$.
The norm inequality in Theorem~\ref{thm:main-normineq} then implies
\begin{equation}
\norm{\mathbf{w}'}_1 \leq C \norm{O}_\infty \left(1 + \frac{2}{\delta_2}\right)^{\mathrm{poly}(\delta_1)} \leq 2^{\mathrm{poly} \log(1 / \epsilon)},
\end{equation}
because $\norm{O}_\infty \leq 1$ and $\delta_1 = \Theta(\log^2(1 / \epsilon)), \delta_2 = \Theta(\epsilon)$.
This shows that there exists a vector $\mathbf{w}'$ that has a bounded $\ell_1$-norm and achieves a small training error.
The existence of $\mathbf{w}'$ guarantees that the vector $\mathbf{w}^*$ found by the optimization problem with the hyperparameter $B \geq \norm{\mathbf{w}'}_1$ will yield an even smaller training error.
Using the norm bound on $\mathbf{w}'$, we can choose the hyperparameter $B$ to be $B = 2^{\mathrm{poly} \log(1/\epsilon)}$.
Using standard learning theory \cite{tibshirani1996regression, mohri2018foundations}, we can thus obtain
\begin{equation}
\E_{x \sim \mathcal{D}} \left| h^*(x) - \Tr(O \rho(x)) \right|^2 \leq \frac{1}{N} \sum_{\ell=1}^N \left| \mathbf{w}^* \cdot \phi(x_\ell) - y_\ell \right|^2 + \mathcal{O}\left( B \sqrt{\frac{\log(m_{\phi} / \delta)}{N}} \right)
\end{equation}
with probability at least $1 - \delta$.
The first term is the training error for $\mathbf{w}^*$, which is smaller than the training error of $0.53 \epsilon$ for $\mathbf{w}'$ from Lemma~\ref{lemma:main-training}.
Thus, the first term is bounded by $0.53 \epsilon$.
The second term is determined by $B$ and $m_\phi$, where we know that $m_\phi \leq |S^{\mathrm{(geo)}}| \left(1 + \frac{2}{\delta_2}\right)^{\mathrm{poly}(\delta_1)}$ and $|S^{\mathrm{(geo)}}| = \mathcal{O}(n)$.
Hence, with a training data size of
\begin{equation}
N = \mathcal{O}\left( \log(n / \delta) 2^{\mathrm{polylog}(1 / \epsilon)} \right),
\end{equation}
we can achieve a prediction error of $\epsilon$ with probability at least $1 - \delta$ for any distribution $\mathcal{D}$ over $[-1, 1]^m$.
\vspace{2em}
{\renewcommand\addcontentsline[3]{} \section{Numerical experiments}}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/Heisenberg.pdf}
\caption{\textbf{Predicting ground state properties in 2D antiferromagnetic random Heisenberg models. (A)} Prediction error. Each point indicates the root-mean-square error for predicting the correlation function in the ground state (averaged over Heisenberg model instances and over each pair of neighboring spins).
The left panel fixes the training set size at $N = 50$ and the system size at $n = 9 \times 5 = 45$. The center panel fixes the shadow size at $T = 500$ and $n = 45$. The right panel fixes $N = 50$ and $T = 500$.
The shaded regions show the standard deviation over different spin pairs.
\textbf{(B)} Visualization. We plot how much each coupling $J_{ij}$ contributes, in the trained ML model, to the prediction of the correlation function over different pairs of qubits. Thicker and darker edges correspond to higher contributions. We see that the ML model learns to utilize the local geometric structure. }
\label{fig:heisenberg}
\end{figure}
In this section, we present numerical experiments to assess the performance of the classical ML algorithm in practice.
The results illustrate the improvement of the algorithm presented in this work compared to those considered in~\cite{huang2021provably}, the mild dependence of the sample complexity on the system size~$n$, and the inherent geometry exploited by the ML models.
We consider the classical ML models described in Section~\ref{subsec:ML-model}, utilizing a random Fourier feature map~\cite{rahimi2007random}.
While the indicator function feature map was a useful tool to obtain our rigorous guarantees, random Fourier features are more robust and commonly used in practice.
Furthermore, we determine the optimal hyperparameters using cross-validation to minimize the root-mean-square error (RMSE) and then evaluate the performance of the chosen ML model using a test set.
The models and hyperparameters are further detailed in Appendix~\ref{sec:numerics}.
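For reference, a random Fourier feature map approximating a Gaussian kernel can be written in a few lines. This is a sketch following \cite{rahimi2007random}; the bandwidth \texttt{gamma} and feature count are illustrative choices, not the values used in our experiments.
\begin{verbatim}
import numpy as np

# A sketch of random Fourier features for the Gaussian kernel
# k(x, y) = exp(-gamma * ||x - y||^2); gamma and n_features are
# illustrative hyperparameters.
def random_fourier_features(X, n_features=1000, gamma=1.0, seed=0):
    """X: (N, m) inputs; returns an (N, n_features) feature matrix."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma),
                   size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
\end{verbatim}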
For these experiments, we consider the two-dimensional antiferromagnetic random Heisenberg model consisting of $4\times 5 = 20$ to $9\times 5 = 45$ spins.
In this setting, the spins are placed on sites in a 2D lattice.
The Hamiltonian is
\begin{equation}
H = \sum_{\langle ij \rangle} J_{ij} (X_i X_j + Y_i Y_j + Z_i Z_j),
\end{equation}
where the summation ranges over all pairs $\langle ij \rangle$ of neighboring sites on the lattice and the couplings $\{J_{ij}\}$ are sampled uniformly from the interval $[0,2]$.
Here, the vector $x$ is a list of all couplings $J_{ij}$ so that the dimension of the parameter space is $m = \mathcal{O}(n)$, where $n$ is the system size.
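As an illustration, the parameter vector for one Hamiltonian instance can be generated as follows. This is a sketch; the edge bookkeeping is one of several equivalent conventions.
\begin{verbatim}
import numpy as np

# A sketch of sampling the couplings J_ij for a rows x cols
# lattice; each nearest-neighbor edge contributes one entry of x.
def random_couplings(rows, cols, seed=0):
    rng = np.random.default_rng(seed)
    edges = []
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows:
                edges.append(((r, c), (r + 1, c)))  # vertical edge
            if c + 1 < cols:
                edges.append(((r, c), (r, c + 1)))  # horizontal edge
    J = rng.uniform(0.0, 2.0, size=len(edges))      # J_ij in [0, 2]
    return edges, J
\end{verbatim}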
We trained a classical ML model using randomly chosen values of the parameter vector $x = \{J_{ij}\}$.
For each parameter vector of random couplings sampled uniformly from $[0,2]$, we approximated the ground state using the same method as in~\cite{huang2021provably}, namely with the density-matrix renormalization group (DMRG)~\cite{white1992density} based on matrix product states (MPS)~\cite{SCHOLLWOCK201196}.
The classical ML model was trained on a data set $\{x_\ell, \sigma_T(\rho(x_\ell))\}_{\ell = 1}^N$ with $N$ randomly chosen vectors $x$, where each $x$ corresponds to a classical representation $\sigma_T(\rho(x_\ell))$ created from $T$ randomized Pauli measurements \cite{huang2020predicting}.
The ML algorithm predicted the classical representation of the ground state for a new vector $x$. These predicted classical representations were used to estimate two-body correlation functions, i.e., the expectation value of
\begin{equation}
C_{ij} = \frac{1}{3}(X_i X_j + Y_i Y_j + Z_i Z_j),
\end{equation}
for each pair of qubits $\langle ij \rangle$ on the lattice.
In Figure~\ref{fig:heisenberg}A, we can clearly see that the ML algorithm proposed in this work consistently outperforms the ML models implemented in~\cite{huang2021provably}, which include the rigorous polynomial-time learning algorithm based on the Dirichlet kernel proposed in \cite{huang2021provably}, Gaussian kernel regression \cite{cortes1995support, murphy2012machine}, and infinite-width neural networks \cite{jacot2018neural, neuraltangents2020}.
Figure~\ref{fig:heisenberg}A (Left) and \ref{fig:heisenberg}A (Center) show that as the number $T$ of measurements per data point or the training set size $N$ increases, the prediction performance of the proposed ML algorithm improves faster than the other ML algorithms.
This observation reflects the improvement in the sample complexity dependence on prediction error $\epsilon$. The sample complexity in \cite{huang2021provably} depends exponentially on $1 / \epsilon$, but Theorem~\ref{thm:rigor-guarantee} establishes a quasi-polynomial dependence on $1 / \epsilon$.
From Figure~\ref{fig:heisenberg}A (Right), we can see that the ML algorithms do not yield a substantially worse prediction error as the system size $n$ increases.
This observation matches with the $\log(n)$ sample complexity in Theorem~\ref{thm:rigor-guarantee}, but not with the $\mathrm{poly}(n)$ sample complexity proven in \cite{huang2021provably}.
An important step for establishing the improved sample complexity in Theorem~\ref{thm:rigor-guarantee} is that a property on a local region $R$ of the quantum system only depends on parameters in the neighborhood of region $R$.
In Figure~\ref{fig:heisenberg}B, we visualize what the trained ML model focuses on when predicting the correlation function over a pair of qubits.
Edges drawn thicker and darker are those the trained ML model considers more important.
Each edge of the 2D lattice corresponds to a coupling $J_{ij}$.
For each edge, we sum the absolute values of the coefficients in the ML model that correspond to a feature that depends on the coupling $J_{ij}$.
We can see that the ML model learns to focus only on the neighborhood of a local region $R$ when predicting the ground state property.
\vspace{2em}
{\renewcommand\addcontentsline[3]{}\section{Outlook}}
The classical ML algorithm and the advantage over non-ML algorithms as proven in \cite{huang2021provably} illustrate the potential of using ML algorithms to solve challenging quantum many-body problems.
However, the classical ML model given in~\cite{huang2021provably} requires a large amount of training data.
Although the need for a large dataset is a common trait in contemporary ML algorithms \cite{brown2020language, deng2009imagenet, saharia2022photorealistic},
one would have to perform an equally large number of physical experiments to obtain such data.
This makes the advantage of ML over non-ML algorithms challenging to realize in practice.
The sample complexity $N = \mathcal{O}(\log n)$ of the ML algorithm proposed here illustrates that this advantage could potentially be realized after training with data from a small number of physical experiments.
The existence of a theoretically backed ML algorithm with a $\log(n)$ sample complexity raises the hope of designing good ML algorithms to address practical problems in quantum physics, chemistry, and materials science by learning from the relatively small amount of data that we can gather from real-world experiments.
Despite the progress in this work, many questions remain to be answered.
Recently, powerful machine learning models such as graph neural networks have been used to empirically demonstrate a favorable sample complexity when leveraging the local structure of Hamiltonians in the 2D random Heisenberg model~\cite{wang2022predicting, tran2022shadows}.
Is it possible to obtain rigorous theoretical guarantees for the sample complexity of neural-network-based ML algorithms for predicting ground state properties?
An alternative direction is to notice that the current results have an exponential scaling in the inverse of the spectral gap. Is the exponential scaling a fundamental feature of this problem? Or do there exist more efficient ML models that can efficiently predict ground state properties for gapless Hamiltonians?
We have focused on the task of predicting local observables in the ground state, but many other physical properties are also of high interest.
Can ML models predict low-energy excited state properties? Could we achieve a sample complexity of $N = \mathcal{O}(\log n)$ for predicting any observable~$O$?
Another important question is whether there is a provable quantum advantage in predicting ground state properties.
Could we design quantum ML algorithms that can predict ground state properties by learning from far fewer experiments than any classical ML algorithm? Perhaps this could be shown by combining ideas from adiabatic quantum computation \cite{farhi2000quantum,mizel2007simple,childs2001robustness,aharonov2008adiabatic,barends2016digitized,albash2018adiabatic,du2010nmr,wan2020fast} and recent techniques for proving quantum advantages in learning from experiments \cite{aharonov2021quantum, chen2022exponential, huang2022foundations, huang2021information, huang2022quantum}.
It remains to be seen if quantum computers could provide an unconditional super-polynomial advantage over classical computers in predicting ground state properties.
\vspace{2em}
{\renewcommand\addcontentsline[3]{} \subsection*{Acknowledgments:}}
{ The authors thank Chi-Fang Chen, Sitan Chen, Johannes Jakob Meyer, and Spiros Michalakis for valuable input and inspiring discussions.
We thank Emilio Onorati, Cambyse Rouz\'e, Daniel Stilck Fran\c ca, and James D. Watson for sharing a draft of their new results \cite{onorati2023learning} on efficiently predicting properties of states in thermal phases of matter with exponential decay of correlation and in quantum phases of matter with local topological quantum order.
LL is supported by Caltech Summer Undergraduate Research Fellowship (SURF), Barry M. Goldwater Scholarship, and Mellon Mays Undergraduate Fellowship.
HH is supported by a Google PhD fellowship and a MediaTek Research Young Scholarship. JP acknowledges funding from the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research, (DE-NA0003525, DE-SC0020290), and the National Science Foundation (PHY-1733907). The Institute for Quantum Information and Matter is an NSF Physics Frontiers Center. }
\vspace{2em}
\bibliographystyle{unsrt}
\section{Introduction}
On general grounds, it is thought that metallicity will affect the
endpoints of stellar evolution, e.g., the relative outcomes in terms of
different supernova types and the observed properties of each. Metals
are a source of opacity that affects supernova progenitors
\citep[e.g.,][]{kudritzki00} and also the supernova explosions
themselves \citep[e.g.,][]{heger03}. However, the hypothesized
metallicity effects have been rather difficult to measure directly. The
number of supernova progenitors that have been identified directly from
pre-explosion imaging is small and limited to core-collapse events
\citep[e.g.,][]{hendry06, li07}. Previous works have either used
population studies with only observational proxies for metallicity
\citep[e.g.,][]{prantzos03} or have considered direct metallicity
measurements with only relatively small numbers of events
\citep[e.g.,][]{hamuy00, gallagher05, stanek06, modjaz07}.
A new approach is now possible, which we employ in this paper, that
takes advantage of the large sample of well-observed and typed
supernovae. Due to a fortuitous match in coverage, many of
these supernovae were in galaxies for which the Sloan Digital Sky Survey
(SDSS) has identified the host galaxies and measured their oxygen
abundances from emission lines in their spectra \citep{tremonti04}.
While these are central metallicities for the host galaxies, and are not
measured for each supernova site, they are much more directly connected
to the latter than proxies like the host luminosity. To further sharpen
our tests, we compare the metallicity distributions of the host galaxies
of SN~Ib/c and SN~Ia to those of SN~II, which are taken as a control
sample.
The progenitors of core-collapse supernovae (SN~II and Ib/c) are massive
stars, either single or in binaries, with initial main sequence masses
$\ga 8\,\mathrm{M}_{\odot}$ \citep[e.g.,][]{heger03}. The presence of hydrogen in
the spectra of SN~II indicates that the massive envelopes are retained
by the progenitors, of which red supergiants are probably the most
common. However, SN~Ib/c lack hydrogen (SN~Ib) or both hydrogen and
helium (SN~Ic) in their spectra, and are therefore thought to have
Wolf-Rayet (WR) stars as progenitors (see \citealt{crowther07} for a
review). The latter originate from the most massive stars, and have had
their outer layers stripped off by strong winds. Thus SN~Ib/c are
thought to have main sequence masses $\ga 30\,\mathrm{M}_{\odot}$, which would make
them $\simeq (8/30)^{1.35} \simeq 20\%$ of all core-collapse supernovae,
assuming a Salpeter slope in the high-mass end of the initial mass
function.
Based on theoretical considerations, the effects of line-driven winds
are expected to introduce a metallicity dependence in the minimum mass
necessary to produce WR stars
\citep[e.g.,][]{heger03,eldridge04,vink05}, which in turn can change the
fractions of core-collapse supernovae that explode as SN~II and
SN~Ib/c. Due to the relative frequencies, SN~Ib/c will be more affected
than SN~II. These metallicity effects on the progenitor winds may
strongly affect the rate at which radioactive $^{26}$Al is expelled into
the interstellar medium before decaying
\citep[e.g.,][]{prantzos04,palacios05}, in which case the decays
contribute to the observed diffuse 1.809 MeV gamma-ray line emission
from the Milky Way \citep[e.g.,][]{diehl06}. While $^{26}$Al appears to
originate in massive stars, it is not yet known how much comes from the
progenitors or the different core-collapse supernova types
\citep[e.g.,][]{prantzos96,higdon04}. For the most massive stars, GRB
progenitors in the collapsar model (e.g., \citealt{macfadyen99},
\citealt{yoon05}), the interplay between metallicity-dependent mass loss
through winds and rotation may be crucial (e.g.,
\citealt{hirschi05}). In all cases, binary progenitors may be more
complicated \citep[e.g.,][]{eldridge07}.
\citet{prantzos03} used the absolute magnitudes of galaxies as a proxy
for their average metallicities, from the luminosity-metallicity
relationship, and found that the number ratio of SN~Ib/c to SN~II
increases with metallicity; they argued that their result is consistent
with stellar evolution models of massive stars with rotation
\citep[e.g.,][]{meynet06}. If so, then one would expect a more robust
signature if the host metallicities were known directly. Ideally, in
the latter approach, one would use the metallicities as measured from
follow-up spectra obtained at the supernova sites, but this is difficult
in practice. This approach of using measured as opposed to estimated
metallicities was used by \citet{stanek06} (with compiled results from
the literature) to study nearby long-duration GRBs with subsequent
supernovae, finding that all of them had very low metallicity
environments and that this appeared to be key to forming powerful GRB
jets, and by \citet{modjaz07} to study nearby broad-lined SN~Ic (without
GRBs), finding in contrast that the metallicities of these environments
were much higher. The main caveats associated with these results are the
low statistics, five and twelve events, respectively. We try to combine
the virtues of these two approaches, with higher statistics and mostly
direct metallicity measurements.
The likely progenitors of SN~Ia are white dwarfs, forming from stars
with initial main-sequence masses $\la 8\,\mathrm{M}_{\odot}$, which accrete mass
from a companion (single-degenerate model) until they reach the
Chandrasekhar mass ($\simeq 1.4\,\mathrm{M}_{\odot}$) and produce a thermonuclear
explosion that completely disrupts the star \citep[e.g.,][]{whelan73}.
During the accretion process, white dwarfs could have strong winds when
the accretion rate reaches a critical value \citep[e.g.,][]{hachisu96},
which would allow them to burn hydrogen steadily and grow in mass. At
low metallicities (${\rm [Fe/H]} \la -1$), SN~Ia may be inhibited
through the single-degenerate channel \citep{kobayashi98}, as the white
dwarf wind is thought to be weak and the system passes through a common
envelope phase before reaching the Chandrasekhar mass. Metallicity also
affects the CNO abundances of white dwarfs, which can affect the
production of $^{56}$Ni in the explosion, and therefore the peak
luminosities of SN~Ia
\citep[e.g.,][]{umeda99,hoeflich00,timmes03,roepke06}. Studies of the
integrated metallicities of nearby SN~Ia hosts
\citep{hamuy00,gallagher05} have shown that metallicity does not seem to
be the main factor regulating their peak luminosities, which is
consistent with some theoretical models
\citep[e.g.,][]{podsiadlowski06}. Instead, the age of the stellar
population where SN~Ia progenitors originate seems to be very important:
{\it prompt} (SN~Ia explode $\sim 10^{8}$~yr after star formation) and
{\it delayed} (SN~Ia explode $> 10^{9}$~yr after star formation)
components were suggested to explain the high rates of SN~Ia in actively
star-forming galaxies (late type spirals and irregulars) compared with
SN~Ia in old, elliptical galaxies
\citep[e.g.,][]{mannucci05,scannapieco05,neill06}.
In this work, to our knowledge for the first time, we compare the
directly measured oxygen abundances of the hosts of SN~Ib/c and SN~Ia
with SN~II. We use the Sternberg Astronomical Institute (SAI) supernova
catalog and match it with the SDSS-DR4 catalog of oxygen abundances of a
large sample of star-forming galaxies from SDSS. Using the supernova
classifications presented in the literature, we can separate the sample
according to different supernova types and make statistical comparisons
of the metallicity distributions of their host galaxies. We also
investigate some individual cases in metal-poor environments that are
especially interesting and which can be used to test the strong
predictions made by some theoretical models. We create a second catalog
by matching the positions of all supernovae with images from
SDSS-DR6, independent of the host galaxy association. This allows us to
investigate significantly fainter SNe hosts, and we identify some even
more extreme hosts for follow-up observations. To enable their further
use in other studies, we make both catalogs available online, and will
update them regularly.
\section{First Catalog: Supernova-Host Pairs with Known Host
Metallicities (SAI~$\cap$~SDSS-DR4)}
\label{sec:data1}
We use the SAI supernova catalog\footnote{\scriptsize{{\tt
http://www.sai.msu.su/sn/sncat/}}} \citep{tsvetkov04} to obtain the
main properties of supernovae (name, classification, RA, DEC,
redshift) and their host information when available (galaxy
name, RA, DEC, redshift). The SAI catalog is a compilation of
information about supernova discoveries, obtained mainly
from reports in the International Astronomical Union Circulars (IAUC),
which include the coordinates and classification of the supernovae
from the IAUCs and also basic information about the host galaxies in
the cases where the galaxies can be identified in online galaxy
catalogs (e.g., HyperLEDA, NED and SDSS). The version of the catalog
we use contains 4,169 entries\footnote{Version updated on June 15,
2007.}, of which we have selected 3,050 supernovae discovered between
1909 and 2007 classified as SN~Ia, II, and Ib/c, including their
sub-types. Supernovae in the catalog with no classification or only
classified as Type~I are not considered for further analysis since we
want to be able to distinguish between SN~Ia and the core-collapse
types SN~Ib/c.
\citet{tremonti04} determined metallicities for a sample of star-forming
galaxies in the SDSS Data Release 2 (SDSS-DR2; \citealt{abazajian04})
from their spectra. Here we use a larger sample of 141,317 star-forming
galaxies (excluding AGN) from the SDSS-DR4 \citep{adelman06}, with
metallicities derived in the same consistent fashion, and which are
available online\footnote{\scriptsize{{\tt
http://www.mpa-garching.mpg.de/SDSS/DR4/}}}. The metallicities are
derived by a likelihood analysis which compares multiple nebular
emission lines ([\ion{O}{2}], H$\beta$, [\ion{O}{3}], H$\alpha$,
[\ion{N}{2}], [\ion{S}{2}]) to the predictions of the hybrid
stellar-population plus photoionization models of \citet{charlot01}. A
particular combination of nebular emission line ratios arises from a
model galaxy that is characterized by a galaxy-averaged metallicity,
ionization parameter, dust-to-metal ratio, and 5500\AA\ dust
attenuation. For each galaxy, a likelihood distribution for metallicity
is constructed by comparison to a large library of model galaxies. We
use the median of the oxygen abundance distributions in this paper. The
metallicities derived by \citet{tremonti04} are essentially on the
\citet{kewley02} abundance scale ($\Delta[\loh] < 0.05$~dex;
\citealt{ellison05}). For further reference in this paper, we call this
galaxy metallicity catalog SDSS-DR4.
We restrict the initial sample of galaxies to 125,958 by applying two of
the cuts that \citet{tremonti04} used for their final cleaned sample:
(1) the redshifts of the galaxies have to be reliable by SDSS standards;
and (2) H$\beta$, H$\alpha$, and [\ion{N}{2}]~$\lambda$6584 should be
detected at $> 5\sigma$ confidence, and
[\ion{S}{2}]~$\lambda\lambda$6717,6731 and [\ion{O}{3}]~$\lambda$5007
should at least have detections. While in our analysis we directly
compare nebular oxygen abundance within the SDSS-DR4 catalog for the
supernova hosts, when referring to ``Solar metallicity,'' we adopt the
Solar oxygen abundance of $\loh = 8.86$ \citep{delahaye06}.
We cross-matched the SAI catalog with the galaxy metallicity catalog
SDSS-DR4 using a matching radius of 60$\arcsec$ ($\sim 48$~kpc at
$z=0.04$). We used the coordinates of the host galaxies in the cases
where they are known and identified in the SAI catalog, and the
supernova coordinates were used otherwise. We also required the
redshifts reported in the SAI catalog, which were taken from galaxy
catalogs and the IAUCs, to be consistent within $20\%$ with the
redshifts of the closest galaxy from the SDSS catalog that passed the
proximity cut. After selecting the supernovae that passed the proximity
and redshift criteria, we visually inspected the SDSS images around the
galaxies to identify the ones that were wrongly selected as hosts (e.g.,
close galaxy pairs). The number of supernovae that passed
all these cuts is 254 in total: 95 SN~Ia, 123 SN~II, and 36
SN~Ib/c. There were some galaxies that hosted more than one supernova:
five galaxies had three supernovae each (NGC~1084, NGC~3627, NGC~3631,
NGC~3938, and NGC~5457) and 15 galaxies had two supernovae (NGC~2532,
NGC~2608, NGC~3627, NGC~3780, NGC~3811, NGC~3913, NGC~4012, NGC~4568,
NGC~5584, NGC~5630, NGC~6962, UGC~4132, UGC~5695, IC~4229, and
MCG~+07-34-134).
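For reproducibility, the positional part of this cross-match can be sketched with \texttt{astropy}; the \texttt{sn\_*} and \texttt{gal\_*} arrays are placeholders for the catalog columns, and the cuts mirror the $60\arcsec$ and $20\%$ criteria above.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# A sketch of the SAI-SDSS cross-match: 60 arcsec matching radius
# plus a 20% redshift-consistency cut; sn_* and gal_* are
# placeholder arrays standing in for the catalog columns.
sn  = SkyCoord(ra=sn_ra * u.deg,  dec=sn_dec * u.deg)
gal = SkyCoord(ra=gal_ra * u.deg, dec=gal_dec * u.deg)
idx, sep2d, _ = sn.match_to_catalog_sky(gal)
matched = (sep2d < 60 * u.arcsec) & \
          (np.abs(sn_z - gal_z[idx]) / gal_z[idx] < 0.2)
\end{verbatim}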
In Table~\ref{tab1} we present the final matched sample of supernovae
and host galaxy metallicities from SDSS-DR4, as well as the absolute
M$_B$ magnitudes of the galaxies obtained from the HyperLEDA database
and SDSS. The absolute magnitudes for SDSS galaxies were calculated
using Petrosian $gr$ magnitudes transformed to $B$ magnitudes using the
transformation of \citet{lupton05}, corrected by Galactic extinction
\citep{sfd98} and internal extinction to a face-on geometry
\citep{tully98}, and $k$-corrections \citep{blanton03}. To calculate the
absolute magnitudes, we use a flat cosmology with ${\rm
H}_{0}=70\,\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_{M}=0.3$,
$\Omega_{\Lambda}=0.7$. The typical 1$\sigma$ uncertainties in the
oxygen abundances are 0.05~dex at $\loh > 8.5$, and 0.15~dex at $\loh <
8.5$. Our estimated uncertainty in the absolute magnitudes of the hosts
is $\sim 0.3$~mag, calculated from a sub-sample of galaxies in the
catalog with reliable absolute magnitudes from SDSS and HyperLEDA.
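The absolute magnitudes follow from the adopted cosmology through the distance modulus; a minimal sketch, where the extinction and $k$-correction terms are inputs computed as cited above:
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

# A sketch of the absolute-magnitude calculation; A_gal, A_int,
# and K are the Galactic extinction, internal extinction, and
# k-correction supplied from the references in the text.
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def abs_mag(m_app, z, A_gal=0.0, A_int=0.0, K=0.0):
    mu = cosmo.distmod(z).value  # distance modulus in mag
    return m_app - mu - A_gal - A_int - K
\end{verbatim}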
Our first catalog, SAI~$\cap$~SDSS-DR4, is available
online\footnote{{\scriptsize{\tt
http://www.astronomy.ohio-state.edu/$\sim$prieto/snhosts/}}} and will be
updated as new supernovae are discovered with host galaxy metallicities
in the SDSS-DR4 catalog. It includes the information presented in
Table~\ref{tab1}, as well as images around the supernovae obtained from
SDSS-DR6.
Figure~\ref{fig1} shows the distribution of metallicities as a function
of redshift and M$_B$ of the supernova host galaxies, as well as the
distribution of star-forming galaxies in the SDSS-DR4 catalog. The
apparent ``stripes'' in the plots, regions with very few oxygen
abundance measurements, are an effect of the grid of model parameters
(metallicity, ionization parameters, attenuation, etc.) used to
calculate the metallicities (see \citealt{brinchmann04} for details). As
can be seen, the redshift distribution of supernovae varies for
different types, with the median redshifts of the samples at $z=0.014$
(II), 0.018 (Ib/c), and 0.031 (Ia). This is a combination of several
effects. First, SN~Ia are, on average, $\sim 2$~mag brighter
at peak luminosity than core-collapse events \citep{richardson02},
therefore, they can be found at larger distances in magnitude-limited
surveys. Second, the local rate of core-collapse supernovae in
late-type galaxies is $\sim 3$ times higher than the SN~Ia rate
\citep{cappellaro99,mannucci05}. Finally, the great interest in SN~Ia
as cosmological distance indicators makes most of the supernovae
searches concentrate their limited spectroscopic follow-up resources on
likely Type~Ia supernovae (as determined by their light curves).
As shown in Figure~\ref{fig1}, the distribution of host galaxy
metallicities follows the distribution of galaxies from SDSS, with a
wide range spanning $\sim 1.4$~dex ($7.9 < \loh < 9.3$). However, there
appear to be significant differences between the hosts of
different supernova types. In particular, most of the SN~Ib/c hosts are
concentrated in the higher metallicity/luminosity end of the
distribution ($\loh \ga 8.7$), while the metallicities of SN~II and
SN~Ia hosts are more evenly distributed and appear to be tracing each
other fairly well.
Figure~\ref{fig2} shows a mosaic of SDSS-DR6 \citep{adelman07}
images\footnote{{\scriptsize{{\tt
http://cas.sdss.org/dr6/en/tools/chart/chart.asp}}}} of the host
galaxies with the highest and lowest metallicities in the sample,
including two supernovae of each type. This figure shows the wide range
of host galaxy environments present in the sample, from big spirals
(e.g., SN~2000dq, SN~2004cc, SN~2005bc, SN~2005mb, SN~2002cg, and
SN~2006iq) to small dwarfs (e.g., SN~1997bo, SN~2006jc, SN~2004hw,
SN~1998bv, SN~2007I, and SN~2007bk), and that all types of supernovae
can be found in metal-rich and metal-poor star-forming galaxies.
\subsection{Testing Supernova Trends with Metallicity}
\label{sec:analysis}
Is the tendency of SN~Ib/c hosts towards higher metallicity, compared
with SN~II and SN~Ia, clearly seen in Figure~\ref{fig1}, a real physical
effect? To answer this question we identify and try to reduce some of
the biases present in the sample.
The supernova sample studied in this work is far from homogeneous. The
supernovae have been discovered by a variety of supernova surveys,
including amateur searches that look repeatedly at bright galaxies in
the local universe, professional searches that look at a number of
cataloged galaxies to a certain magnitude limit (e.g., LOSS), and
professional searches that look at all the galaxies in a given volume
(e.g., SDSS-II, The Supernova Factory), among others. The host galaxies
of supernovae discovered by amateur searches tend to have higher
metallicities due to the luminosity-metallicity relation (see
Figure~\ref{fig1}), while the metallicities of galaxies observed by
professional searches span a wider range.
As an example of a possible bias in the supernovae in our catalog, we
note that the median metallicity decreases by $\sim 0.1$~dex for the
hosts of supernovae discovered between 1970 and 2007. Ideally, all the
supernovae for the current study would be selected from galaxy-impartial
surveys. However, the numbers of different supernova types found by such
surveys in our catalog are still small (especially core-collapse
events), and do not allow a statistical comparison (see the discussion
in \citealt{modjaz07}).
Another bias present in the galaxy data that we use is the so-called
aperture bias \citep{kewley05,ellison05}. The SDSS spectra are taken
with a fixed fiber aperture of $3\arcsec$ (2.4~kpc at $z=0.04$). Since
galaxies have radial metallicity gradients (e.g., \citealt{zaritsky94}),
for nearby galaxies we are, on average, only measuring the higher
central metallicity, while for more distant galaxies we are covering a
larger fraction of the galaxy light with the SDSS fiber. This effect
also depends on galaxy luminosity, as for dwarf galaxies the fiber will
cover a larger fraction of the total light than in large
spirals. \citet{kewley05} find a mean difference of $\sim 0.1$~dex,
although with a large scatter ($0.1-0.3$~dex), between the central and
integrated metallicities of a sample of $\sim 100$ galaxies of all types
(S0-Im) in the redshift range $z = 0.005-0.014$.
In order to reduce these and other biases, we limit the comparison of
supernova types to host galaxies in the redshift range $0.01 < z <
0.04$, where there are 115 supernovae. In this {\it pseudo}
volume-limited sample, the median redshifts of the 58 SN~II (0.020), 19
SN~Ib/c (0.021) and 38 SN~Ia (0.024) hosts are consistent, while the
number of galaxies in each sub-sample still allows us to make a
meaningful statistical comparison. By using a small redshift slice we
are, effectively, reducing the aperture biases when comparing the galaxy
metallicity measurements, such that they are now comparable to or
smaller than the statistical errors in the metallicity determination.
We made additional checks of relative biases between supernova types in
our redshift-limited sample. First, the ratios of the numbers of
SN~Ib/c and SN~Ia to the total number of core-collapse supernovae are
reasonably consistent with the ratios obtained from the local supernovae
rates \citep[e.g.,][]{cappellaro99,mannucci05}. Second, the fact that
the SN--host separation distributions for SN~Ia and SN~II agree,
particularly at small radii (see below), suggests that our supernova
samples are not biased (relatively, one supernova type to another) by
obscuration effects.
We compare the metallicity distributions of the hosts of SN~Ib/c and
SN~Ia to SN~II, which are taken as the control sample. Given that SN~II
are the most common type of supernovae \citep[e.g.,][]{mannucci05} and
that they come from massive stars from a wide range of masses that
explode in all environments, presumably independent of metallicity, they
are effectively giving us the star-formation-rate weighting of the
luminosity-metallicity (or mass-metallicity) relationship for
star-forming galaxies. It would be of interest to test if indeed the
SN~II rates are independent of metallicity, but this is outside the
scope of the current paper.
Figure~\ref{fig3} shows the cumulative distribution of metallicities for
hosts of different supernova types in the redshift ranges $z<0.04$ and
$0.01 < z < 0.04$. Two important results are immediately apparent:
\begin{itemize}
\item The metallicities of SN~Ib/c hosts tend to be higher than those
of SN~II hosts.
\item The SN~Ia and SN~II hosts have very similar metallicity
distributions.
\end{itemize}
Kolmogorov-Smirnov (KS) tests between the metallicity distributions of
different supernova types in the redshift range $0.01 < z < 0.04$
strengthen these findings. The KS probabilities of two host metallicity
samples being drawn from the same distribution are: 5\% (II--Ib/c), 3\%
(Ia--Ib/c) and 56\% (II--Ia). We obtain a similar result if we compare
the mean metallicities of the host samples: 8.94$\pm$0.04 (SN~II),
8.94$\pm$0.04 (SN~Ia) and 9.06$\pm$0.04 (SN~Ib/c), where the errors are
the RMS of similar-sized samples obtained using bootstrap resampling.
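Both statistics are straightforward to reproduce; a sketch, with \texttt{Z\_II} and \texttt{Z\_Ibc} standing in for the arrays of host oxygen abundances:
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

# A sketch of the two-sample KS test and the bootstrap error on
# the mean; Z_II and Z_Ibc are placeholder arrays of host
# 12 + log(O/H) values for each supernova type.
stat, p_value = ks_2samp(Z_II, Z_Ibc)

rng = np.random.default_rng(0)
boot = [rng.choice(Z_Ibc, size=len(Z_Ibc), replace=True).mean()
        for _ in range(10000)]
mean, err = np.mean(Z_Ibc), np.std(boot)  # mean and its RMS error
\end{verbatim}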
The metallicity distribution of the SDSS-DR4 star-forming galaxies in
our redshift range, weighted only by galaxy counts, is also shown in
Figure~\ref{fig3}. This should not be used in any comparisons, as it
does not take into account the weighting with star formation rate or the
supernova and galaxy selection criteria. We take all of these into
account by only making relative comparisons between supernova types.
If we restrict the sample of SN~Ib/c to only SN~Ic and broad-lined Ic in
the same redshift range, leaving out supernovae classified as SN~Ib/c
and SN~Ib, the difference in metallicity distributions of the hosts of
SN~II and SN~Ic$+$hypernovae (13 SN) becomes smaller, with a KS
probability of 19\%. If only the three supernova classified as SN~Ib
(SN~2003I, SN~2005O, and SN~2005hl) in the {\it pseudo} volume-limited
sample are not considered, then the KS probability of SN~II and
SN~Ic$+$hypernovae$+$SNIb/c being drawn from the same sample is 10\%.
In Figure~\ref{fig4} we show the number ratio of SN~Ib/c to SN~II as a
function of the metallicities of the host galaxies. This ratio is very
important because the rates of core-collapse SNe are expected to change
as a function of the progenitor mass and metallicity and, therefore, it
can help to put constraints on massive stellar evolution models
\citep[e.g.,][]{eldridge07}. We have calculated the ratio in bins of
equal number of SN~II+SN~Ib/c, with 11 SNe per bin, to do a direct
comparison with the results of \citet{prantzos03}. Our smaller sample size
compared with \citet{prantzos03}, who used the absolute magnitudes of
the hosts as a proxy for the average metallicities through the
luminosity-metallicity relationship, is reflected in the large errors of
the ratio. The large error bars do not allow us to put constraints on
progenitor models; however, the general trend observed in the cumulative
distribution (see Figure~\ref{fig3}) is confirmed with the number
counts: SN~Ib/c are more common at higher metallicities compared with
SN~II. Our results are consistent with those of \citet{prantzos03}.
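The binning of Figure~\ref{fig4} can be sketched as follows, with Poisson error propagation on the counts in each bin; this is an illustration of the procedure, not the exact script used.
\begin{verbatim}
import numpy as np

# A sketch of the number ratio N(Ib/c)/N(II) in bins of equal
# total core-collapse counts, with Poisson error propagation.
def ratio_in_bins(Z_II, Z_Ibc, per_bin=11):
    Z = np.concatenate([Z_II, Z_Ibc])
    is_ibc = np.r_[np.zeros(len(Z_II)), np.ones(len(Z_Ibc))]
    order = np.argsort(Z)
    Z, is_ibc = Z[order], is_ibc[order]
    for i in range(0, len(Z) - per_bin + 1, per_bin):
        n_ibc = is_ibc[i:i + per_bin].sum()
        n_ii = per_bin - n_ibc
        if n_ibc > 0 and n_ii > 0:
            r = n_ibc / n_ii
            err = r * np.sqrt(1.0 / n_ibc + 1.0 / n_ii)
            yield Z[i:i + per_bin].mean(), r, err
\end{verbatim}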
Figure~\ref{fig5} shows the cumulative distributions of projected
host-supernova distances for the reduced sample of 115 SNe used to
compare the host metallicities ($0.01 < z < 0.04$). Clearly, the SN~Ib/c
in the sample are found more towards the centers of their hosts when
compared with SN~II and SN~Ia
\citep[e.g.,][]{vandenbergh97,tsvetkov04,petrosian05}, which have
similar distributions to each other (as also in Figure~\ref{fig3}). A
galactocentric concentration of SN~Ib/c and their progenitors may be
important for the angular distributions of diffuse gamma-ray line
emission from the Milky Way. Besides the 1.809 MeV line from $^{26}$Al,
the 0.511 MeV line from positron annihilation is poorly understood, in
terms of its high flux and very strong central concentration
\citep[e.g.,][]{casse04, knodlseder05,beacom06}. Since the SN~Ib/c are
found at small separation, the central galaxy metallicities determined
by the SDSS should be representative of the local environments of the
supernovae. Taking into account the existence of negative metallicity
gradients in increasing galactocentric radii, the local metallicities of
the SN~II and SN~Ia, if anything, are even {\em lower} than deduced from
the SDSS central metallicities. The tendency for SN~Ib/c to prefer
higher metallicity relative to SN~II and SN~Ia is probably even stronger
than shown in Figure~\ref{fig3}.
\subsection{Supernovae in Low-Metallicity Hosts}
Even though we have shown that there is a strong preference of SN~Ib/c
for high-metallicity environments, compared with SN~II and SN~Ia, there
are four SN~Ib/c with relatively metal-poor host galaxies ($\loh <
8.6$). These events, and also some SN~Ia found in low-metallicity
dwarfs, made us investigate more carefully a number of individual cases.
We found that among the lowest-metallicity host galaxies in the sample,
there were supernovae that stood out because of their unusual properties
(all shown in Figure~\ref{fig2}).
\begin{description}
\item [SN~2006jc:] Peculiar SN~Ib/c supernova with strong \ion{He}{1}
lines in emission in the spectrum \citep{crotts06}, thought to arise
from the interaction of the supernova ejecta with a He-rich
circumstellar medium \citep{foley07,pastorello07}. Its host galaxy,
UGC~4904 at $z=0.006$, is a low luminosity, blue, and relatively
low-metallicity starburst (M$_{B}=-16.0$, $\loh=8.5$). Interestingly,
the host galaxy of SN~2002ao at $z=0.005$ (in UGC~9299), another
peculiar SN~Ib/c with spectral properties very similar to SN~2006jc
\citep{benetti06} that is also present in our first catalog, has low
metallicity compared with the majority of the SN~Ib/c hosts, and
shares similar morphological properties with the host of SN~2006jc.
\item [SN~2007I:] Broad-lined Ic (or hypernova) with a spectrum similar
to SN~1997ef \citep{blondin07} at $z=0.022$. Its host galaxy is a
star-forming, low-metallicity dwarf (M$_{B}=-16.7$, $\loh=8.3$),
unlike other broad-lined Ic supernovae observed in higher-metallicity
galaxies \citep{modjaz07}, and somewhat similar to the host galaxies
of long GRBs associated with supernovae \citep{stanek06,fruchter06};
however, see a detailed discussion in \citet{modjaz07}. The other
four broad-lined Ic supernovae in our sample that have been reported
in the literature are: SN~1997ef \citep{iwamoto00}, 2004ib
\citep{sako05}, 2005ks \citep{frieman05}, 2006qk \citep{frieman06}.
\item [SN~2007bk:] Type~Ia supernova with a spectrum similar to the slow
decliner/luminous SN~1991T \citep{dennefeld07} at $z=0.032$. The host
galaxy is a low metallicity/luminosity dwarf, with M$_B=-18.2$ and
$\loh=8.3$, similar to the Large Magellanic Cloud. The supernova was
found very far from the center of its dwarf host, at a projected
separation of $\sim9$~kpc. The magnitude of the supernova at discovery
($R=16.7$, \citealt{mikuz07}) and the phase obtained from the spectrum
($+50$~days, \citealt{dennefeld07}, although S.~Blondin finds equally
good matches with templates at $+30$~days, private communication),
imply that this was a very luminous Type~Ia event. If the reported
discovery magnitude and spectral phases are accurate, SN~2007bk was
$\sim 0.5-1.5$~mag brighter than SN~1991T at the same phase after
maximum light.
\end{description}
\section{Second Catalog: Supernova-Host Pairs with Unknown Host
Metallicities (SAI~$\cap$~SDSS-DR6)}
\label{sec:data2}
The existence of supernovae with unusual properties among the most
metal-poor, low-luminosity galaxies in the first catalog prompted us to
investigate a much larger sample of supernovae. We constructed a second
catalog with images around the positions of supernovae using SDSS,
matching the SAI catalog with SDSS-DR6. We included the redshifts
obtained from the SAI catalog to produce images in physical units around
the supernovae. The total number of matches is 1225 for supernovae at
$z<0.3$. This catalog is also available online, with the first catalog
described earlier in \S~\ref{sec:data1}.
This extended second catalog (SAI~$\cap$~SDSS-DR6) does not have
information about metallicities or luminosities of the hosts. It is a
visual tool that can be used to explore the environments around
supernovae found in the SDSS area, independent of the host galaxy
association. Identification of the supernovae hosts and their integrated
properties obtained from SDSS will be added in a future study.
Visually inspecting the images of the second catalog, we identified a
number of supernovae in what appear to be very faint galaxies, and which
are likely to be low-luminosity, metal-poor galaxies not present in the
first catalog. Some examples in the redshift range $0.01 \la z \la 0.05$
are (supernova types shown in parentheses): SN~1997ab (II), SN~1997az
(II), SN~2001bk (II), SN~2003cv (II), SN~2004gy (II), SN~2004hx (II),
SN~2005cg (Ia), SN~2005gi (II), SN~2005hm (Ib), SN~2005lb (II), SN~2006L
(IIn), SN~2006bx (II), SN~2006fg (II), SN~2007bg (Ic), SN~2007bu (II), and
SN~2007ce (Ic). In this incomplete sample, which was selected by noting
some especially low-luminosity galaxies, the SN~Ia/SN~II ratio is lower
than expected. Similarly, in our catalog of hosts with known
metallicities, SN~Ia may also be relatively underabundant at the lowest
host luminosities and metallicities, as shown in Figure~\ref{fig1}. We
caution that the small statistics make these only hints, and we discuss
these issues further below.
One of the most interesting supernovae in our second catalog is
SN~2007bg, a recently discovered broad-lined Ic
\citep{quimby07,harutyunyan07} at $z=0.03$, which has an extremely faint
galaxy coincident with the position of the supernova. Using photometry
and images from SDSS-DR6, we estimate the luminosity of the apparent
host galaxy to be M$_B \approx -12$, most likely a very metal-poor
dwarf ($\loh \sim 7.5$, or $\sim 1/20$~solar; see the
metallicity-luminosity relationship extended to dwarf galaxies by
\citealt{lee06}). Due to the extremely low luminosity of that galaxy, in
fact one of the lowest-luminosity supernova hosts ever seen, and also
fainter than most if not all GRB hosts \citep[see e.g.,][]{fruchter06},
this event may represent the missing link between broad-lined SN~Ic and
GRBs. This event is therefore an excellent candidate for a search for
an off-axis GRB jet in radio \citep{soderberg06} and possibly other
wavelengths. Follow-up spectroscopic observations and deep photometry
to determine the metallicity of the host and study the supernova
environment are strongly encouraged in this and other cases of very
low-luminosity SN hosts.
\section{Discussion and Conclusions}
\label{sec:discussion}
We find that SN~Ib/c tend to be in high-metallicity host galaxies,
compared to SN~II, our control sample that traces the underlying star
formation rates. This is the first time that such a trend has been found
using the directly-measured oxygen abundances of the supernova host
galaxies. This confirms and greatly strengthens an earlier result of
\citet{prantzos03}, who reached a similar conclusion using the absolute
magnitudes of the host galaxies as an indirect estimate of their
metallicities through the luminosity-metallicity relationship. This can
be interpreted in terms of relative supernova rates: the ratio of SN~Ib/c
to SN~II increases with increasing metallicity and hence also cosmic age.
We also find that SN~Ib/c are consistently located towards the centers of
their hosts compared with SN~II and SN~Ia, a trend also found in
previous studies \citep[e.g.,][]{vandenbergh97,tsvetkov04,petrosian05}.
This suggests that direct measurements of metallicities at the explosion
sites, as opposed to the central host metallicities used here, would
reveal an even stronger effect, due to the radial metallicity gradients
observed in spiral galaxies. The local metallicities at SN~Ib/c sites
would be less reduced with respect to the central values than those of
SN~II and SN~Ia, which would widen the separation seen in
Figure~\ref{fig3}.
The tendency towards high metallicity of SN~Ib/c environments compared
to those of SN~II supports, in general terms, theoretical models of the
effects of metallicity in stellar evolution and the massive stars that
are core-collapse supernova progenitors (e.g.,
\citealt{heger03,meynet06,eldridge07,fryer07}). Also, models of stellar
evolution that include rotation, from \citet{meynet06}, predict that at
high metallicity Wolf-Rayet stars enter the WC phase earlier, while
they are still rich in helium, and that these stars would explode as
SN~Ib. The fact that we do see both SN~Ib and SN~Ic in hosts at high
metallicity should not be taken as inconsistent with these models,
mainly because the number of supernovae is small and the sample has not
been homogeneously selected. There is an indication, although not
statistically significant, that SN~Ib may be more common in higher
metallicity environments than SN~Ic and broad-lined SN~Ic in our sample.
The agreement between the metallicity distributions of the hosts of
SN~II and SN~Ia shows that their hosts are sampling a wide range of
properties of star-forming galaxies, from the relatively metal-poor
dwarfs to metal-rich grand-design spirals. Using models of white dwarf
winds in the framework of single-degenerate progenitors of SN~Ia
\citep{hachisu96}, \citet{kobayashi98} made a strong prediction that
SN~Ia would not be found in low metallicity environments, such as dwarf
galaxies and the outskirts of spiral galaxies. However, we do observe
SN~Ia in metal poor dwarfs (e.g., SN~2004hw, SN~2006oa, and SN~2007bk,
with host metallicities between $\sim 0.2$ and 0.5 solar) and at large
projected distances ($> 10$~kpc) from their star-forming hosts (e.g.,
SN~1988Y, SN~1992B, SN~1993I, SN~2001bg, SN~2002gf, SN~2004ia,
SN~2004ig, SN~2005ms, SN~2006fi, and SN~2006gl). There are also extreme
cases that have been pointed out in previous studies, like the
low-luminosity dwarf (M$_B \approx -12.2$) host galaxy of the luminous
and slow decliner SN~1999aw \citep{strolger02}, which is most likely
very metal-poor ($\loh \sim 7.5$, or $\sim 1/20$~solar; see
\citealt{lee06}). Also, SN~2005cg was found in a dwarf with subsolar
gas metallicity \citep{quimby06}.
We do not find a statistically significant low-metallicity threshold in
the metallicities of SN~Ia compared with SN~II hosts, as predicted from
theory by \citet{kobayashi98} for single-degenerate progenitors of SN~Ia
with winds. However, there is a preference for finding more SN~II in
very faint galaxies compared with SN~Ia in our second catalog, which is
suggestive of a luminosity or metallicity threshold for the main channel
that produces SN~Ia. This will have to be explored in the future with a
larger sample that includes good luminosity information for the hosts
and actual metallicities measured from spectra. If no metallicity
threshold is found in larger samples, it means that the models and
predictions of white dwarf winds will have to be revisited. This would
have implications for models of galactic chemical evolution that
include the effects of white dwarf winds to shut down SN~Ia production
at low metallicities (e.g., \citealt{kobayashi07}). Interestingly,
modeling the X-ray spectra of supernova remnants from probable SN~Ia
explosions in our Galaxy, the LMC and M31, \citet{badenes07} did not
find the strong effects of white dwarf winds predicted from theory.
On the other hand, independent of the existence (or not) of a mechanism
that can shut down the production of SN~Ia in low-metallicity
environments, we have noted examples of SN~Ia that explode in
low-metallicity dwarf galaxies, like SN~2007bk. Also, supernova remnants
from probable SN~Ia have been identified in the LMC
\citep[e.g.,][]{hughes95} and SMC \citep[e.g.,][]{vanderheyden04}. Is
this SN~Ia population dominated by a different kind of progenitor, such
as double-degenerate mergers, compared to the main progenitor channel? Is
the expected trend between progenitor metallicities and peak luminosity
starting to appear as we extend the sample to even lower metallicity
hosts? It is suggestive that the small number of SN~Ia in
low-metallicity hosts, like SN~2007bk, SN~2005cg and SN~1999aw, were all
luminous events compared with normal SN~Ia. Also, the very luminous
SN~Ia events that have spectral signatures of a strong ejecta-CSM
interaction, like SN~2005gj, are mostly associated with low-luminosity,
and most likely low-metallicity, hosts \citep{prieto07}. Is low
metallicity necessary to produce this extreme class of SN~Ia? Detailed
comparison studies of the observational properties of supernovae in
these extreme environments are encouraged.
In the course of this work, we have prepared two new catalogs that
should be useful for other studies. We used the SAI supernova catalog
and the SDSS-DR4 sample of metallicities of star-forming galaxies from
\citet{tremonti04} to produce a catalog of supernovae hosts with
metallicities derived in a consistent fashion. From this first catalog,
we found several interesting core-collapse (e.g., SN~2002ao, SN~2006jc,
and SN~2007I) and SN~Ia events (e.g., SN~2007bk) in low-metallicity
galaxies. We constructed a second catalog by matching the SAI supernova
catalog with images obtained from SDSS-DR6. The second catalog does not
contain information on host metallicities, but it can be used to
investigate the environments of supernovae independent of the host
association. In that second catalog, we found several examples of
core-collapse supernovae in faint galaxies. One of the most interesting
cases is SN~2007bg, a broad-lined SN~Ic in an extremely low-luminosity
and very likely low-metallicity host. These catalogs will allow
researchers to select interesting candidates for further follow-up
observations. Also, as more homogeneous light curve and spectroscopic
data become available for supernovae in the first catalog, this will
allow us to test possible correlations between supernova properties and
the metallicities of their hosts, which may turn out to be crucial for
improving our understanding of the nature of different supernova
explosions. Another possible use of our catalogs is for systematically
characterizing the morphologies of supernova hosts.
We stress the great importance of galaxy-impartial surveys for finding
and studying the properties of all supernova types. Some very
interesting and potentially informative supernovae have been found in
very low-luminosity, low-metallicity galaxies, hosts which are not
included in supernova surveys based on catalogs of normal galaxies.
These unusual supernovae and hosts may help probe the relationship
between the SN~Ib/c and SN~II core-collapse supernova types, the
progenitors of SN~Ia as well as the possible correlations between
observed SN~Ia properties and host metallicities, the supernova-GRB
connection \citep[e.g.,][]{stanek03} and its possible metallicity
dependence \citep[e.g.,][]{stanek06,modjaz07}, and also to test the
consistency between the cosmic stellar birth and death rates
\citep[e.g.,][]{hopkins06}. As we pointed out in \S~\ref{sec:analysis},
presently the comparison of host metallicities using supernovae
discovered by galaxy-impartial surveys is limited by their small
numbers, especially for core-collapse events, since SN~Ia receive much
more attention when decisions about spectroscopic follow-up are made.
This is also true for the study of their observational properties (e.g.,
light curves and spectra). However, in order to better understand all
types of cosmic explosions and put further constraints on the
predictions of stellar evolution theory, a larger effort on other
supernova types is greatly needed.
\vspace{0.5in}
We are grateful to C.~Tremonti for help with the extended SDSS-DR4
catalog of galaxy metallicities. We thank C.~Badenes, S.~Blondin,
A.~Hopkins, J.~Johnson, M.~Kistler, M.~Modjaz, and G.~Pojmanski for
helpful discussions and suggestions. JFB is supported by the National
Science Foundation CAREER Grant PHY-0547102.
Funding for the SDSS and SDSS-II has been provided by the Alfred P.
Sloan Foundation, the Participating Institutions, the National Science
Foundation, the U.S. Department of Energy, the National Aeronautics and
Space Administration, the Japanese Monbukagakusho, the Max Planck
Society, and the Higher Education Funding Council for England.
The SDSS is managed by the Astrophysical Research Consortium for the
Participating Institutions. The Participating Institutions are the
American Museum of Natural History, Astrophysical Institute Potsdam,
University of Basel, Cambridge University, Case Western Reserve
University, University of Chicago, Drexel University, Fermilab, the
Institute for Advanced Study, the Japan Participation Group, Johns
Hopkins University, the Joint Institute for Nuclear Astrophysics, the
Kavli Institute for Particle Astrophysics and Cosmology, the Korean
Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos
National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the
Max-Planck-Institute for Astrophysics (MPA), New Mexico State
University, Ohio State University, University of Pittsburgh, University
of Portsmouth, Princeton University, the United States Naval
Observatory, and the University of Washington.
\newpage
\section{Introduction}\label{sec:introduction}
\begin{figure*}[htbp]
\centering
\begin{center}
\scriptsize
\includegraphics*[width=7.1in]{./photo/Motivation.pdf}
\caption{\textcolor{black}{\textbf{Motivation for the CGTP framework.}}
\textcolor{black}{In (a), the goal-oriented trajectory prediction model, a representative marginal prediction method, generates diverse goal-oriented future trajectories for each agent independently and forms joint predictions by combining self-consistent predictions over individual agents, which can produce unrealistic or colliding behaviors. In (b), the CGTP framework, a novel multimodal joint prediction method, accounts for future interactions via conditional modeling and outputs scene-compliant future trajectories. Goal interactive prediction is realized by a combined form of a marginal goal predictor for agent \textit{A} and a conditional goal predictor for agent \textit{B}, and the joint predictions are then generated by the conditional goal-oriented trajectory predictors.}}
\label{fig:Motivation}
\end{center}
\vspace{-1.3em}
\end{figure*}
Predicting future trajectories of multiple agents is a mission-critical component of autonomous driving systems \cite{LKQ2021, DynaNet2021}, playing an important role in the subsequent motion planning \cite{Brain2016, Lihaoran2020} and decision \cite{Wangjunjie1, Wangjunjie2} modules. In particular, interactive behavior prediction, which aims to jointly predict the behaviors
of interacting agents in critical situations such as cut-in and yielding, has received increasing attention in recent years \cite{WaymoDataset2021}.
\textcolor{black}{Fig.~\ref{fig:Motivation} presents two research pipelines for joint prediction.
A naive approach to joint prediction is to use a marginal prediction method. This class of models \cite{Desire2017, MultiPath2019, VectorNet2020, TNT2020} generates diverse predictions independently for each agent, and then combines the marginal predictions to output joint realizations. \textcolor{black}{Notably, goal-oriented trajectory prediction (GTP) \cite{TNT2020} is a representative marginal prediction method, achieving great success in multimodal trajectory prediction by first identifying endpoints via a goal predictor and then predicting multiple trajectories via a goal-oriented trajectory predictor.} However, the weakness
of such a method lies in the absence of future interaction modeling with the other interacting agent. As a result, while the goal-oriented prediction method produces self-consistent predictions over individual agents, the joint prediction-pairs may exhibit unrealistic or colliding behaviors. For instance, despite the non-colliding predictions in mode 1 of Fig.~\ref{fig:Motivation} (a), agent $B$ should aggressively speed up when changing lanes to keep a safe distance from agent $A$. Such scene-compliant interactive behaviors can hardly be captured by a pure marginal prediction method. To overcome this issue, recent advances have shown great success in predicting scene-compliant trajectories with multimodal joint prediction methods that adopt conditional prediction models to consider interactions between agent future predictions \cite{CBP2021, mfp2019, Precog2019, ILVM2020}.}
\textcolor{black}{In particular, Waymo provides a large-scale
interactive motion forecasting dataset \cite{WaymoDataset2021} for autonomous driving, on which several multimodal joint prediction methods \cite{SceneTrans2021, sun2022m2i, ProspectNet} achieve better performance.}
Hence, multimodal joint prediction rather than marginal prediction is required for interactive driving scenarios.
\textcolor{black}{In the literature, researchers characterize the underlying intents of interacting agents in various forms of future prediction information, which serve as conditional dependencies for multimodal joint prediction methods.
A family of methods \cite{CBP2021, mfp2019, Precog2019, SceneTrans2021, sun2022m2i, ProspectNet} leverages the future overall trajectories of interacting agents as explicit future intents, focusing on interactive behavior prediction conditioned on them. Instead, ILVM \cite{ILVM2020} adopts implicit latent variables to describe the future intents of interacting agents. Different from the recent studies above, in this paper we propose a novel conditional goal-oriented trajectory prediction (CGTP) framework} \textcolor{black}{that combines the significant advantages of the goal-oriented trajectory prediction method and the conditional prediction method, as indicated in Fig.~{\ref{fig:Motivation}} (b). On one hand, compared with the goal-oriented trajectory prediction method, CGTP conducts conditional inference in both the goal predictor and the goal-oriented trajectory predictor. On the other hand, compared with recent studies on multimodal joint prediction, CGTP focuses more on future interactions at the goal level via a combined form of marginal and conditional goal predictors.}
We factorize the CGTP framework into three parts: context encoding, goal interactive prediction and trajectory interactive prediction. To summarize, we list the main contributions of CGTP framework as follows:
\begin{itemize}
\item For the context encoding, we design a Goals-of-Interest Network (GoINet) that combines the advantages of Graph Neural Networks (GNNs) and Transformers to hierarchically capture interactive features over prior knowledge (agent trajectories and future lanes), and then obtains the structural representations of fine-grained future goals for each interacting agent by aggregating interactive features from the individual, local and global levels.
\item For the goal interactive prediction and trajectory interactive prediction, we propose the Conditional Goal Prediction Network (CGPNet) and the Goal-oriented Trajectory Forecasting Network (GTFNet) by embedding the conditional prediction into the goal-oriented trajectory prediction method.
Based on the CGPNet and GTFNet, the diverse future interactions between two interacting agents can be captured by the learned joint distribution.
\item In addition, a goal interactive loss is established for the CGPNet, which aims to better learn the joint probability distribution over future goal candidates for the two interacting agents.
\item \textcolor{black}{
Comparison experiments on the Argoverse motion forecasting dataset, the In-house cut-in dataset, and the Waymo open motion dataset verify the superiority of our CGTP framework over mainstream marginal prediction models and a state-of-the-art conditional prediction model.
}
\end{itemize}
\section{Related Work}\label{sec:relatedWork}
In this section, we provide a detailed trajectory prediction literature review with a particular emphasis on deep learning methods from the perspective of context encoding, anchor-based prediction and conditional prediction, respectively.
\subsection{Context Encoding}\label{sec:Contex Information Encoding}
There is a family of work on trajectory prediction via convolutional neural networks (CNNs), with the input rendered as a multi-channel rasterized bird's-eye view (BEV) image \cite{MTP2019, TPCN2021}. \textcolor{black}{However, such rasterized approaches have difficulty modeling long-range interactions and representing continuous physical states. A popular alternative is to use a vectorized method.
With this approach, the history of agent motion is typically encoded via sequence modeling techniques like RNNs \cite{MATF2019}, while the elements of the road graph are approximately treated as piecewise-linear segments with additional attributes such as current states and semantic type. Furthermore, information aggregation techniques are utilized to learn relationships between the agent dynamics in the context of the road graph. The Transformer \cite{Transformer2017} is one popular choice for interaction-aware motion modeling based on the attention mechanism, capturing relationships along three different axes: timesteps, agents and road elements. For instance, \cite{zhao2020spatial} focuses on temporal encoding and decoding by applying a self-attention module along the timestep axis, while \cite{Jean2019} employs a new multi-head attention architecture to model interactions between all agents. Unlike past work using independent self-attention for each axis, SceneTransformer \cite{SceneTrans2021} is designed to handle the interactive modeling among timesteps, agents and road graph elements in a unified way. Alternatively, GNN-based methods have recently shown promise in motion prediction tasks, learning interactive graph representations from vectorized features via operators like graph convolution and message passing \cite{GraphReview2018, Pedestrain2021, Chaochenzhuolei}.}
VectorNet \cite{VectorNet2020} introduces a hierarchical graph method that first processes agent histories and map features in the form of polylines and then fuses them using a global interactive graph. Different from VectorNet, LaneGCN \cite{LaneGCN2020} merely constructs lane graph using graph convolution before capturing all possible agent-map interactions. However, the interaction-aware models above concentrate their focus more on interactive modeling between coarse-scale objects such as agent past trajectories or lanes \cite{zhang2022trajgen}, rather than the fine-grained elements of map topology such as the goals.
\subsection{Anchor-based Multimodal Trajectory prediction}\label{sec:anchor-based method}
The multimodal trajectory prediction models are largely realized by anchor-based methods. This class of methods chooses different types of possible intents, including diverse future trajectories, goals, relevant lanes and regions, to represent the modes of the agent trajectory distribution. A family of studies leverages future trajectories as good prior anchors, and then produces the final trajectory predictions using a learning-based method. For example, MultiPath \cite{MultiPath2019} and CoverNet \cite{CoverNet2019} generate a candidate set of predefined future trajectories as hypothesis proposals, while PRIME \cite{PRIME2021} and TPNet \cite{TPNet2020} produce feasible future trajectories with a model-based generator instead. Further, TNT \cite{TNT2020} generates goal-oriented trajectories to diversify the prediction modes, with the goal anchor candidates sampled from the High-Definition (HD) map. Besides, since the topological structure of lanes can be thought of as guidance for the motion of drivers, a vast majority of recent work leverages a set of instance-level lane entities as spatial proposals to generate multimodal plausible trajectories \cite{LaPred2021, GoalNet2020}. Unlike the anchor-based models above, \cite{mmTransformer2021} constructs a novel region-based training method in order to cover all the possible prediction modes in a given scene with limited training samples. This approach first divides the surrounding space into a small number of regions, and then refines the prediction outcomes to the specific region where the ground truth is located. In conclusion, these anchor-based approaches commonly have two stages, anchor selection and trajectory regression, which are trained end-to-end with stronger interpretability. To date, however, anchor-based methods have largely been used only for marginal prediction.
\subsection{Conditional Trajectory Prediction}
As illustrated in Section I, the marginal prediction approach can be hardly applied in the interactive driving scenarios, since such a model ignores the fact that the action made by interacting agent $A$ in the future may have a critical effect on the interacting agent $B$ and vice versa. Hence, a minimal number of studies have made explorations on modeling joint future trajectories based on the conditional prediction models \cite{CBP2021, SceneTrans2021, mfp2019, Precog2019, sun2022m2i, ProspectNet, ILVM2020}. These methods output future agent trajectories by conditioning on other interacting agents' explicit future trajectories or implicit latent variables.
By comparison, we develop a CGPNet that first completes goal interactive prediction based on the conditional method, taking the potential future goals of agent $A$ as queries and predicting the probability distribution over the future goal candidates of agent $B$ conditioned on each query.
Following this, we consider interactions over the future trajectories timestep-by-timestep, designing the GTFNet to predict interactive future behaviors in a rollout manner.
\textcolor{black}{\section{Problem Formulation}}\label{sec:backgound}
\textcolor{black}{\subsection{Variable Definition}}\label{sec:formulation}
\begin{figure*}[!t]
\centering
\begin{center}
\scriptsize
\includegraphics*[width=7.1in]{./photo/CGITP_modified.pdf}\\
\caption{\textbf{An overview of the CGTP framework.} The proposed GoINet is first used to extract the hierarchical features over each interacting agent and its future goal candidates. Then, we select the interactive future goal-pairs via a novel CGPNet. Finally, the proposed GTFNet conducts the trajectory interactive prediction process to produce the goal-oriented predictions of the two interacting agents, generating multimodal interactive behaviors such as cut-in, yielding, lane-keeping and right turns.}
\label{fig:CGITP_framwork}
\end{center}
\vspace{-1.3em}
\end{figure*}
\textcolor{black}{Given the scene information in a combined form as $\boldsymbol{C} = \boldsymbol{C}_A \cup \boldsymbol{C}_B$, our objective is to predict the future joint states $\boldsymbol{Y} = \boldsymbol{Y}_A \cup \boldsymbol{Y}_B$ of two interacting agents up to a finite horizon $T$, modeled as a joint distribution $p( \boldsymbol{Y} \mid \boldsymbol{C})$.
\textcolor{black}{For each interacting agent $i$ \textcolor{black}{$\in$ $\{A, B\}$}, the scene information $\boldsymbol{C}_i: \{\boldsymbol{X}_i, \boldsymbol{L}_i\}$ contains dynamic and static representations \textcolor{black}{normalized in its reference frame}, where the agent trajectory set $\boldsymbol{X}_i=\{\boldsymbol{X}_i^{m}, m\in[0, O]\}$ includes the observed trajectory of the predicted agent $\boldsymbol{X}_i^{0}$ and other agents' trajectories $\{\boldsymbol{{X}_i^{m}},{m\in[1,O]}\}$,
and $\boldsymbol{L}_i=\{\boldsymbol{L}_i^{m}, m\in[1, P]\}$ describes $P$ coarse-scale lanes that the agent $i$ is likely to reach in the future.} }
\textcolor{black}{\subsection{Conditional Goal-oriented Trajectory Prediction}}\label{sec:formulation}
For better comprehension, the marginal prediction method is first introduced, which lays a solid foundation for our proposed CGTP framework. In general, the marginal prediction methods are commonly built based on two assumptions.
\newtheorem{assumption}{Assumption}
\begin{assumption}\label{Agent Independence}
The agent's future states evolve independently from another interacting agent \cite{Desire2017, MultiPath2019, VectorNet2020, TNT2020}.
\end{assumption}
\textcolor{black}{This independence assumption implies that marginal prediction methods predict the marginal distributions over individual agents without considering future interactions.} \textcolor{black}{Under \textit{Assumption \ref{Agent Independence}}, the joint distribution factorizes into two marginal distributions}:
\begin{equation} \label{eq:n3-1}
\begin{gathered}
p(\boldsymbol{Y} \mid\boldsymbol{C}) = p( \boldsymbol{Y}_A \mid \boldsymbol{C}) p( \boldsymbol{Y}_B \mid \boldsymbol{C}).
\end{gathered}
\end{equation}
Furthermore, \textcolor{black}{we adopt the goal-oriented trajectory prediction method TNT, a representative marginal prediction method, to effectively produce multimodal future trajectories.}
For each agent $i\in\{A,B\}$, the marginal distribution $p(\boldsymbol{Y}_i \mid \boldsymbol{C})$ can be decomposed based on future goal anchors and then marginalized over them:
\begin{equation} \label{eq:n3-2}
\begin{aligned}
p( \boldsymbol{Y}_i \mid \boldsymbol{C}) &= p(\boldsymbol{G}_{i}\mid \boldsymbol{C})p(\boldsymbol{Y}_i \mid \boldsymbol{G}_{i}, \boldsymbol{C}) \\
&= \sum_{\boldsymbol{g}_{i}^k \in \boldsymbol{G}_{i}}p(\boldsymbol{g}_{i}^k\mid \boldsymbol{C})p(\boldsymbol{Y}_i \mid \boldsymbol{g}_{i}^k, \boldsymbol{C}),
\end{aligned}
\end{equation}
where $\boldsymbol{G}_i = \left\{\boldsymbol{g}_{i}^1, \boldsymbol{g}_{i}^2, \cdots, \boldsymbol{g}_{i}^K \right\}$ represents the location space of plausible future goal candidates for agent $i$, which captures $K$ uncertainties by relying on the known road information $\boldsymbol{L}_i$.
\begin{assumption}\label{time Independence}
For each agent $i$, the generation of future states is performed in an independent rollout manner \cite{TNT2020, Jean2019}.
\end{assumption}
Based on \textit{Assumption \ref{time Independence}}, the future distribution for each agent $i$ can be factorized across time steps, referring only to its own previous states.
\begin{equation} \label{eq:n3-3}
\begin{gathered}
p( \boldsymbol{Y}_i \mid \boldsymbol{g}_{i}^k, \boldsymbol{C}) = \prod_{\delta=t+1}^{\delta=t+T} p(\boldsymbol{y}_i^{\delta} \mid \boldsymbol{y}_i^{t : \delta-1}, \boldsymbol{g}_{i}^k, \boldsymbol{C}),
\end{gathered}
\end{equation}
where $\boldsymbol{y}_i^\delta$ describes the future state of agent $i$ at time step $\delta$.
\textcolor{black}{The analysis above shows that the goal-oriented trajectory prediction method ignores future interactions during the joint trajectory prediction process. To bridge the gap between marginal prediction and interactive behavior prediction, we propose a novel CGTP framework \textcolor{black}{that considers conditional modeling in both the goal predictor and the goal-oriented trajectory predictor},
collaboratively outputting scene-compliant joint future trajectories.
In this way, we first approximate the joint distribution as the factorization over a marginal distribution and a conditional distribution:}
\begin{equation} \label{eq:n4-10}
\begin{gathered}
p( \boldsymbol{Y} \mid \boldsymbol{C}) = p( \boldsymbol{Y}_A \mid \boldsymbol{C}) p( \boldsymbol{Y}_B \mid \boldsymbol{Y}_A, \boldsymbol{C}).
\end{gathered}
\end{equation}
\textcolor{black}{Different from Eq.~(\ref{eq:n3-1}), the factorization in Eq.~(\ref{eq:n4-10}) considers that agent $A$'s future intents have a potential influence on agent $B$. On one hand, the realization of agent $A$'s future trajectory can be roughly regarded as a marginal modeling process via a goal-oriented trajectory prediction method. On the other hand, we make a further approximate assumption to implement the conditional trajectory prediction for agent $B$.}
\begin{figure*}[htbp]
\centering
\begin{center}
\scriptsize
\includegraphics*[width=7.1in]{./photo/GoINet_modified.pdf}
\caption{\textbf{The structure of the GoINet.} Towards each interacting agent, a unified graph-based representation is first formulated based on the scene information at its reference frame. Then, the hierarchical interactions are modeled from the individual, local and global three levels. Finally, we obtain the hierarchical features over the interacting agent and its fine-grained future goal candidates by means of a concatenation operator.}
\label{fig:GoINet}
\end{center}
\vspace{-1.3em}
\end{figure*}
\begin{assumption}\label{Conditional Dependence}
For interactive behavior prediction, the conditional distribution over agent $B$ can be largely determined by the agent $A$'s future goals instead of its overall future trajectories.
\end{assumption}
Based on the \textit{Assumption \ref{Conditional Dependence}} and the goal-oriented trajectory prediction method, the conditional distribution over agent $B$ is decomposed in a similar manner of agent $A$:
\begin{equation} \label{eq:n4-11}
\begin{gathered}
p(\boldsymbol{Y}_B \mid \boldsymbol{Y}_A, \boldsymbol{C}) = p( \boldsymbol{Y}_B \mid \boldsymbol{G}_A, \boldsymbol{C}) \\ = p(\boldsymbol{G}_{B} \mid \boldsymbol{G}_{A}, \boldsymbol{C})p(\boldsymbol{Y}_B \mid \boldsymbol{G}_{B}, \boldsymbol{G}_{A}, \boldsymbol{C}) \\
= \sum_{\small{\boldsymbol{g}_{A}^q, \boldsymbol{g}_{B}^k \in \boldsymbol{G}}}p(\boldsymbol{g}_{B}^k \mid \boldsymbol{g}_{A}^q, \boldsymbol{C})p(\boldsymbol{Y}_B \mid \boldsymbol{g}_{B}^k, \boldsymbol{g}_{A}^q, \boldsymbol{C}).
\end{gathered}
\end{equation}
\textcolor{black}{\textcolor{black}{There exists an obvious difference in the goal prediction process between Eq.~(\ref{eq:n3-2}) and Eq.~(\ref{eq:n4-11})}, indicating that the conditional modeling over agent $B$ tackles the pairwise interactive trajectory prediction problem through conditional modeling over future goal candidates, as described by $p(\boldsymbol{g}_{B}^k \mid \boldsymbol{g}_{A}^q, \boldsymbol{C})$.}
Here, in order to distinguish the indexes of future goal candidates for two interacting agents, we use $q$ to describe the index of the future goal candidates for agent $A$ instead.
\textcolor{black}{Besides, our proposed CGTP framework conducts the goal-oriented trajectory prediction for each interacting agent via conditional modeling by referring to \cite{mfp2019}.} In the following, we take the interacting agent $A$ as an instance to describe the realization process of trajectory forecasting:
\begin{equation} \label{eq:n4-18}
\begin{aligned}
p( \boldsymbol{Y}_A \mid \boldsymbol{g}_{A}^q, \boldsymbol{C}) = \prod_{\delta=t+1}^{\delta=t+T} p(\boldsymbol{y}_A^{\delta} \mid \boldsymbol{y}^{t : \delta-1}, \boldsymbol{g}_{A}^q, \boldsymbol{C}) \\ =
\prod_{\delta=t+1}^{\delta=t+T} p(\boldsymbol{y}_A^{\delta} \mid \boldsymbol{y}_{A}^{t : \delta-1}, \boldsymbol{y}_{B}^{t : \delta-1}, \boldsymbol{g}_{A}^q, \boldsymbol{C}).
\end{aligned}
\end{equation}
As shown in Eq.~(\ref{eq:n4-18}), the conditional trajectory distribution of each interacting agent is explicitly dependent on its own future goal but implicitly dependent on the predicted future states of another interacting agent, which can be considered as a trajectory interactive prediction process in a step-wise rollout manner.
\textcolor{black}{\section{Methodology}}\label{sec:ETCNet}
An overview of our CGTP framework is shown in Fig.~\ref{fig:CGITP_framwork}. In the following, we first present the GoINet which summarizes the structural interactive representations over fine-grained future goal candidates.
Then, we develop a CGPNet to conduct future interactions at the goal-based level, and to select the goal-pair candidates with future interactive intents. Further, a GTFNet is developed to generate goal-oriented trajectory-pairs in a step-wise rollout manner. Finally, we introduce the optimization process of our CGTP framework.
\subsection{GoINet-based Context Encoding}\label{sec:architecture}
The GoINet has three core steps: (1) establish a unified graph-based formulation for two typical types of vectorized representations, $i.e.$, agent history trajectories and future lanes; (2) leverage GNN, max-pooling and Transformer to construct hierarchical interactions at individual, local and global levels, respectively; (3) concatenate the features from three levels above to obtain the structural features over fine-grained future goal candidates, as shown in Fig.~\ref{fig:GoINet}.
\textbf{Graph-based Representation Formulation.} \textcolor{black}{Inspired by VectorNet \cite{VectorNet2020}, we first abstract the scene elements $\{\boldsymbol{X}_i, \boldsymbol{L}_i\}_{i\in\{A,B\}}$ (including agent history trajectories and future lanes) as polylines $\mathcal{P}_i$. All of these polylines can be approximated as sequences of vectors: for future lanes, we uniformly sample key points from the polylines at a fixed spatial interval to approximately represent the fine-grained goals, and sequentially connect neighboring key points into vectors; for agent history trajectories, we sample key points at a fixed temporal interval and connect them into vectors. Each vector can be denoted by
\begin{equation}
\begin{gathered}
\boldsymbol{v}_{i}^{\mathcal{P}} = [\boldsymbol{d}_{start}, \boldsymbol{d}_{end}, j],
\end{gathered}
\end{equation}
where $\boldsymbol{d}_{start}$ and $\boldsymbol{d}_{end}$ are coordinates of the start and end point of the vector; $j$ is the integer index of polyline $\mathcal{P}_i^j$. Then, for each polyline, we build a local graph as $\boldsymbol{\mathcal{G}}_{i}^{\mathcal{P}}=(\boldsymbol{\mathcal{V}}_{i}^{\mathcal{P}}, \boldsymbol{\mathcal{E}}_{i}^\mathcal{P})$, where $\boldsymbol{\mathcal{V}}_{i}^{\mathcal{P}}$ denotes a set of nodes with vector features $\boldsymbol{v}_{i}^{\mathcal{P}}$ and $\boldsymbol{\mathcal{E}}_{i}^\mathcal{P}$ is a set of edges encoding pairwise relations between nodes.
}
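To make this vectorization step concrete, a minimal sketch is given
below; the array layout is an illustrative assumption rather than the
exact preprocessing used in our experiments.
\begin{verbatim}
# Minimal sketch: turn one resampled polyline into vector nodes
# [d_start, d_end, j]. The layout is illustrative.
import numpy as np

def polyline_to_vectors(points, polyline_index):
    """points: (N, 2) ordered key points of one polyline, already
    resampled at a fixed spatial or temporal interval."""
    starts, ends = points[:-1], points[1:]        # consecutive key points
    j = np.full((len(starts), 1), polyline_index) # integer polyline id
    return np.concatenate([starts, ends, j], axis=1)

# Example: a straight 3-point lane polyline with index 0 yields two
# vectors, each of the form [x_s, y_s, x_e, y_e, 0].
lane = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(polyline_to_vectors(lane, 0))
\end{verbatim}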
\textbf{Modeling Hierarchical Interactions.}
To capture the temporal-spatial and semantic locality of nodes, we first deploy a general GNN approach to extract the individual features of the vectors in each polyline. For each local graph $\boldsymbol{\mathcal{G}}_{i}^{\mathcal{P}}$, we formulate the update of every node representation $\boldsymbol{v}^{(l)} = (\boldsymbol{v}_{i}^{\mathcal{P}})^{(l)} \in \mathbb{R}^{2 l d_h }$ with the max-pooling operator $f_{mp}(\cdot)$ and concatenation operator $f_{cc}(\cdot)$ at the $l$-th layer as
\begin{equation} \label{eq:n4-4}
\begin{gathered}
\boldsymbol{v}^{(l)} = f_{cc}\left( \left\{ h^{(l)}\left(\boldsymbol{v}^{(l-1)}\right), f_{mp} \left( \left\{ h^{(l)}\left(\boldsymbol{v'}^{(l-1)}\right) \right\} \right) \right\} \right), \\
\forall l \in [1, L],
\end{gathered}
\end{equation}
where $\boldsymbol{v'}$ denotes the remaining nodes in $\boldsymbol{\mathcal{V}}_{i}^{\mathcal{P}}$ except for $\boldsymbol{v}$; $d_h$ represents the initial dimension of the hidden units at the first layer of GNN. In addition, $h^{(l)}(\cdot)$ denotes a mapping function at the $l$-th layer to iteratively encode the individual node embedding, which shares weights across all nodes. The mapping function $h^{(l)}(\cdot)$ is realized by a single fully connected layer with Layer Normalization \cite{LayerNorm2016} and ReLU non-linearity. Specifically, we initialize $(\boldsymbol{v}_{i}^{\mathcal{P}})^{(0)} = \boldsymbol{v}_{i}^{\mathcal{P}}$.
After $L$ layers of aggregation, we obtain the individual feature of nodes in each local graph. Second, the local-level representation over each entire polyline can be summarized by
\begin{equation} \label{eq:n4-5}
\begin{gathered}
\boldsymbol{h}_{i}^{\mathcal{P}} = f_{mp} \left( \left\{ \left(\boldsymbol{v}_{i}^{\mathcal{P}}\right)^{(L)} \mid \forall \boldsymbol{v}_{i}^{\mathcal{P}} \in \boldsymbol{\mathcal{V}}_{i}^{\mathcal{P}} \right\}\right),
\end{gathered}
\end{equation}
which models interactions among all nodes' individual representations in each local graph via max-pooling operator. More formally, we stack these local features into a matrix as $\boldsymbol{H}_i \in \mathbb{R}^{(O+P) \times d_H}$, where $d_H = 2Ld_h$.
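A minimal sketch of one aggregation layer and the polyline read-out in
Eq.~(\ref{eq:n4-4}) and Eq.~(\ref{eq:n4-5}) follows. The PyTorch
realization and layer sizes are illustrative assumptions; in particular,
for brevity the pooled context below includes the node itself, whereas
Eq.~(\ref{eq:n4-4}) pools over the remaining nodes.
\begin{verbatim}
# Minimal sketch of the local aggregation and read-out equations above.
# Hyperparameters are illustrative; the pooled context includes the
# node itself for simplicity.
import torch
import torch.nn as nn

class LocalGraphLayer(nn.Module):
    def __init__(self, d_in, d_h):
        super().__init__()
        # h^{(l)}: shared fully connected layer + LayerNorm + ReLU
        self.h = nn.Sequential(nn.Linear(d_in, d_h),
                               nn.LayerNorm(d_h), nn.ReLU())

    def forward(self, v):              # v: (num_nodes, d_in), one polyline
        enc = self.h(v)                # individual node encoding
        pooled = enc.max(dim=0).values # max-pool over the local graph
        pooled = pooled.expand_as(enc) # broadcast context back to nodes
        return torch.cat([enc, pooled], dim=-1)  # (num_nodes, 2*d_h)

def polyline_feature(v_final):         # local-level read-out
    return v_final.max(dim=0).values
\end{verbatim}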
\textcolor{black}{Finally, a Transformer layer is employed to draw global dependencies between the local-level features of agent trajectories and future lanes.} The output of the self-attention computation for each interacting agent $i$ is as follows:
\begin{equation} \label{eq:n4-6}
\begin{gathered}
\boldsymbol{Att}_i = {\rm{softmax}} \left( \frac{\boldsymbol{Q}_i \left(\boldsymbol{K}_i\right) ^ \mathrm{T}}{\sqrt{d_k}} \right) \boldsymbol{V}_i,
\end{gathered}
\end{equation}
where each row of the matrix $\boldsymbol{Att}_i$ is a global feature of a specific polyline, i.e. $\boldsymbol{att}_{i}^{\mathcal{P}}$, and $d_k=d_H$.
\textcolor{black}{In Eq.~(\ref{eq:n4-6}), the set of queries $\boldsymbol{Q}_i$, keys $\boldsymbol{K}_i$ and values $\boldsymbol{V}_i$ are obtained by making linear projections to the local representation matrix $\boldsymbol{H}_i$.}
\textbf{Obtaining Structural Representations.} \textcolor{black}{As shown in the right part of Fig.~\ref{fig:CGITP_framwork}, the hierarchical encoding information $\boldsymbol{s}_i^X$ for each interacting agent $i$ is a concatenation of the local and global representations of its observed history $\boldsymbol{X}_i^0$. Since it is necessary to consider the node features of future lanes in the representations of future goals, we combine future lane features from the individual, local and global views to encode the fine-grained structural representations $\{\boldsymbol{s}_{i}^{g,k}, k \in [1, K]\}$ of the goal candidates $\{\boldsymbol{g}_{i}^{k}, k \in [1, K]\}$.}
In the end, the structural interactive features above serve as input to the following modules.
\subsection{CGPNet-based Goal Interactive Prediction}\label{sec:definition}
In this section, we focus on introducing an implementation paradigm for estimating the conditional probability distribution over the future goal candidates for agent $B$, $i.e.$ $p\left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right)$, which takes as conditional queries the potential future goals from agent $A$. As illustrated in TNT \cite{TNT2020}, the determination of future goals relies on discrete future goal candidates $\boldsymbol{g}_{B}^{k}$ and their corresponding continuous offsets $\boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}$ to the real endpoint $\boldsymbol{y}_{B}^{t+T}$. Hence, the conditional probability distribution over future goal candidates can be factorized into these two influential elements above:
\begin{equation} \label{eq:n4-12}
\begin{aligned}
p\left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right) &= \pi \left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right) \\ &\cdot \mathcal{N}\left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k} \mid \boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right), \boldsymbol{\Sigma} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right) \right),
\end{aligned}
\end{equation}
where $\pi \left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right)$ describes the uncertainty across a candidate set of agent $B$'s future goals using a softmax distribution:
\begin{equation} \label{eq:n4-13}
\begin{gathered}
\pi \left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right) = \frac{ {\rm exp} \ f_B^{seg}\left( \boldsymbol{s}_{B}^{g,k}, \boldsymbol{g}_{B}^{k}, \phi_{B} \left(\boldsymbol{g}_{A}^{q}\right) \right)}{\sum \limits_{ \small{\boldsymbol{g}_{B}^{ k'} }} {\rm exp} \ f_B^{seg}\left( \boldsymbol{s}_{B}^{g,k'}, \boldsymbol{g}_{B}^{k'}, \phi_{B} \left(\boldsymbol{g}_{A}^{q} \right)\right)}.
\end{gathered}
\end{equation}
This conditional probability distribution is learned by a segmentation task.
Subsequently, we obtain the corresponding offset from a generalized Gaussian distribution $\mathcal{N}\left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^k \mid \boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^k\right), \boldsymbol{\Sigma}\left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^k\right)\right)$, where $\boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^k\right)$ denotes mean as
\begin{equation} \label{eq:n4-14}
\begin{gathered}
\boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right) = f_B^{reg}\left( \boldsymbol{s}_{B}^{g,k}, \boldsymbol{g}_{B}^k, \phi_B \left(\boldsymbol{g}_{A}^q\right) \right)
\end{gathered}
\end{equation}
which is primarily modeled by a regression task. Besides, let $\boldsymbol{\Sigma} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right)$ denote the variance, assumed to be an identity matrix in this paper. Both $f_B^{seg}(\cdot)$ and $f_B^{reg}(\cdot)$ are implemented by three-layer multilayer perceptrons (MLPs) to predict the conditional distribution and offsets over future goal candidates. Concretely, the input of these two mapping functions is derived from two sources. One class of input is related to each future goal candidate $\boldsymbol{g}_{B}^k$ and its corresponding structural representation $\boldsymbol{s}_{B}^{g,k}$ extracted from the GoINet. The other class of input is obtained by an agent $B$-centric transformation function $\phi_{B} (\cdot)$, which maps the potential future goals $\boldsymbol{g}_{A}^q$ of agent $A$ into agent $B$'s reference frame; we refer to these as conditional queries.
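A minimal sketch of the conditional scoring and offset heads is shown
below; the MLP widths and the exact feature concatenation are
illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the CGPNet heads: a score per goal candidate
# (softmax over K candidates) and an offset mean per candidate,
# conditioned on agent A's query goal. Widths and the feature layout
# are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(d_in, d_h, d_out):
    return nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU(),
                         nn.Linear(d_h, d_h), nn.ReLU(),
                         nn.Linear(d_h, d_out))

class CGPNetHeads(nn.Module):
    def __init__(self, d_s, d_h=64):
        super().__init__()
        d_in = d_s + 2 + 2             # [s_B^{g,k}, g_B^k, phi_B(g_A^q)]
        self.f_seg = mlp(d_in, d_h, 1) # candidate score
        self.f_reg = mlp(d_in, d_h, 2) # offset mean (dx, dy)

    def forward(self, s_g, g_B, g_A_query):  # (K, d_s), (K, 2), (2,)
        query = g_A_query.expand(g_B.shape[0], 2)  # broadcast query
        x = torch.cat([s_g, g_B, query], dim=-1)
        pi = torch.softmax(self.f_seg(x).squeeze(-1), dim=0)
        return pi, self.f_reg(x)       # probabilities and offset means
\end{verbatim}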
\textcolor{black}{Before obtaining the conditional probability distribution in Eq.~(\ref{eq:n4-12}), we also need to estimate the marginal probability distribution over the future goal candidates for agent $A$, denoted $p\left( \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right)$, by turning off the conditional query inputs in the model. Thus, the marginal probability distribution is described by the simplified expressions $\pi \left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C} \right)$ and $\boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{A}^{q} \mid \emptyset \right)$, whose realizations follow Eq.~(\ref{eq:n4-13}) and Eq.~(\ref{eq:n4-14}) with the conditional query removed.}
Given the marginal and conditional probability distributions for the goal interactive prediction process, we can now compute the joint probability distribution $p\left(\boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right)$. For simplicity, suppose that the joint probability distribution above can be approximately replaced by $\pi \left( \boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right)$, described as
\begin{equation} \label{eq:n4-17}
\begin{gathered}
p\left( \boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right) = \pi\left( \boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right) \\
= \pi\left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C}\right) \pi\left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}\right).
\end{gathered}
\end{equation}
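In practice this factorization reduces to an outer product of the two
categorical distributions, as in the following sketch.
\begin{verbatim}
# Minimal sketch of the joint goal distribution: entry (q, k) is the
# product of agent A's marginal and agent B's conditional probability.
import torch

def joint_goal_distribution(pi_A, pi_B_given_A):
    """pi_A: (K,) marginal over agent A's goals; pi_B_given_A: (K, K),
    row q is the conditional over agent B's goals given goal q."""
    return pi_A.unsqueeze(1) * pi_B_given_A

pi_A = torch.tensor([0.7, 0.3])
pi_B = torch.tensor([[0.9, 0.1], [0.2, 0.8]])
print(joint_goal_distribution(pi_A, pi_B))  # entries sum to 1
\end{verbatim}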
\subsection{GTFNet-based Trajectory Interactive Prediction}\label{sec:definition}
After obtaining a candidate set of future goal-pairs, we build a trajectory interactive prediction module to output joint future trajectories in a synchronized rollout manner.
Different from Eq.~(\ref{eq:n3-3}), this module predicts the joint states of the two interacting agents at time step $\delta + 1$ by taking into account each other's predicted state at time step $\delta$.
Given the determined future goal, we take agent $A$ as an instance to introduce the implementation of the unimodal conditional distribution for the future trajectory, as described by Eq.~(\ref{eq:n4-18}). We design a GRU-based encoder-decoder neural network to generate the future trajectory, which shares trainable parameters across the two interacting agents. In detail, both encoder and decoder use a GRU mapping $f_{gru}(\cdot)$ to recursively update the hidden state along the temporal axis, with input representations that differ between encoding and decoding. On one hand, the encoding GRU captures temporal relationships by taking the observed history $\boldsymbol{X}_i^0$ as input. On the other hand, the decoding GRU updates the hidden state, after which a trajectory predictor $f_{traj}(\cdot)$, implemented as a 1-layer MLP, predicts the future location at the current time step; this location is transformed via $\phi_{B}(\cdot)$ and subsequently fed as input to the prediction process of agent $B$ at the next time step. Meanwhile, the concatenation of a goal candidate $\boldsymbol{g}_{A}^{q}$ and the hierarchical interactive representation $\boldsymbol{s}_A^X$ also serves as input to determine the future intent at which agent $A$ will arrive.
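A minimal sketch of one decoding step of this synchronized rollout is
given below; the feature layout and the use of a GRU cell are
illustrative assumptions.
\begin{verbatim}
# Minimal sketch of one step of the synchronized rollout: each agent's
# decoder consumes the other agent's last predicted position
# (transformed into its own frame), its goal anchor, and its encoded
# history. Shapes and layout are illustrative assumptions.
import torch
import torch.nn as nn

class RolloutStep(nn.Module):
    def __init__(self, d_s, d_h):
        super().__init__()
        self.gru = nn.GRUCell(2 + 2 + d_s, d_h)  # shared across agents
        self.f_traj = nn.Linear(d_h, 2)          # 1-layer MLP predictor

    def step(self, h, other_xy, goal, s_X):      # all (batch, .)
        x = torch.cat([other_xy, goal, s_X], dim=-1)
        h = self.gru(x, h)
        return h, self.f_traj(h)                 # new hidden, position

# Per time step: (h_A, y_A) <- step(h_A, phi_A(y_B), g_A, s_A), and
# symmetrically for agent B, feeding predictions at inference time.
\end{verbatim}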
\subsection{Optimization Design for CGTP Framework}\label{sec:algorithm}
The proposed CGTP framework is trained via supervised learning in an end-to-end way.
The total learning loss function contains goal prediction loss and trajectory prediction loss, defined as:
\begin{equation} \label{eq:n4-19}
\begin{gathered}
L^{total} = L^{g} + L^{traj}.
\end{gathered}
\end{equation}
In the following, we illustrate the training strategy for the two components above from the perspectives of individual interacting agents and joint modes. A detailed training pseudocode is provided in Algorithm~\ref{alg:algrithm1}.
\textbf{Training on Goal Interactive Prediction.} To effectively model the joint distribution of future goal candidates, our goal prediction loss $L^{g}$ supervises the combination of future goal candidate sets from the two interacting agents, in addition to each single agent's future goal candidate set. Thus, the goal prediction loss can be decomposed into three parts, in sequential order, according to their single forms and joint form:
\begin{equation} \label{eq:n4-20}
\begin{gathered}
L^{g} =L_A^{g} + L_B^{g} + L_{Joint}^{g}.
\end{gathered}
\end{equation}
First, we introduce the marginal goal prediction loss $L_A^g$ of agent $A$. On one hand, the binary cross entropy loss $L_{BCE}(\cdot)$ is used to learn the marginal probability distribution $\pi \left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}\right)$.
On the other hand, the mean square error loss $L_{MSE}(\cdot)$ is employed to learn the offset mean $ \boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{A}^{q} \mid \emptyset \right)$.
Hence, the marginal goal prediction loss $L_A^g$ is given by
\begin{equation} \label{eq:n4-21}
\begin{gathered}
L_A^{g} = \frac{1}{K}\sum_{q=1}^{K} \left[ L_{BCE}\left( \pi\left(\boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}\right), \mathbb{1}\left(q \in \boldsymbol{K}_A\right)\right)
\right.
\\
\left.
+ L_{MSE}\left( \boldsymbol{\mu}\left(\boldsymbol{ \Delta} \boldsymbol{g}_{A}^{q} \mid \emptyset \right), \mathbb{1}\left(q \in \boldsymbol{K}_A\right) \boldsymbol{\Delta} \boldsymbol{g}_{A}^{q}\right)\right],
\end{gathered}
\end{equation}
where $\mathbb{1}(\cdot)$ is an indicator function, and $\boldsymbol{K}_A$ represents the index set of goal candidates which cover the top $\mathcal{K}$ candidates closest to the real endpoint $\boldsymbol{y}_A^{t+T}$ of agent $A$.
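A minimal sketch of this marginal term is shown below; the explicit
top-$\mathcal{K}$ selection by distance to the real endpoint, as well as
the shapes and reductions, are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the marginal goal loss: BCE on candidate scores
# plus MSE on offset means, with positives being the top-K candidates
# nearest the real endpoint. Shapes/reductions are illustrative.
import torch
import torch.nn.functional as F

def marginal_goal_loss(pi, mu, goals, endpoint, top_k=3):
    """pi: (K,) scores in [0, 1]; mu: (K, 2) offset means;
    goals: (K, 2) candidates; endpoint: (2,) ground truth."""
    dist = (goals - endpoint).norm(dim=-1)
    pos = torch.zeros_like(pi)
    pos[dist.topk(top_k, largest=False).indices] = 1.0  # index set K_A
    target = (endpoint - goals) * pos.unsqueeze(-1)     # masked offsets
    return F.binary_cross_entropy(pi, pos) + F.mse_loss(mu, target)
\end{verbatim}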
Subsequently, once the top $\mathcal{K}$ future modes of agent $A$ have been determined as estimated by $\pi \left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}\right)$, we accordingly optimize the conditional goal prediction loss over them for the other interacting agent $B$:
\begin{equation} \label{eq:n4-22}
\begin{gathered}
L_B^{g} =
\frac{1}{\mathcal{K} \cdot K}\sum_{q=1}^{\mathcal{K}}\sum_{k=1}^{K} \left[ L_{BCE}\left( \pi\left(\boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right), \mathbb{1}\left(k \in \boldsymbol{K}_B\right)\right)
\right.
\\
\left.
+ L_{MSE}\left( \boldsymbol{\mu}\left(\boldsymbol{ \Delta} \boldsymbol{g}_{B}^{k}\right), \mathbb{1}\left(k \in \boldsymbol{K}_B\right) \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right)\right].
\end{gathered}
\end{equation}
\noindent Furthermore, to enable a smooth training process, we employ a teacher forcing technique \cite{teacherforcing1989} by using the real endpoint of agent $A$ as the conditional query. Similarly, we obtain the top $\mathcal{K}$ potential goal candidates of agent $B$ at each conditional mode $q$, yielding $\mathcal{K}^2$ goal-pairs that reflect different future interactive intents.
Finally, we design a novel goal interactive loss to accurately learn the joint probability distribution among the two classes of selected goal candidate sets:
\begin{equation} \label{eq:n4-23}
\begin{gathered}
L_{Joint}^{g} = \frac{1}{\mathcal{K}^2}\sum_{q=1}^{\mathcal{K}}\sum_{k=1}^{\mathcal{K}} L_{BCE}\left( \pi\left(\boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right), \mathbb{1}\left(\kappa = k^{J}\right)\right), \\
\kappa = \mathcal{K}(q-1)+k,
\end{gathered}
\end{equation}
\noindent where $\kappa$ represents the index of a selected goal-pair, which also denotes the index of the joint mode for the goal-oriented trajectory-pairs later. Different from Eq.~(\ref{eq:n4-21}) and Eq.~(\ref{eq:n4-22}), $k^{J}$ is the index of the specific case where both agents' future goal candidates most closely match their corresponding ground truth endpoints.
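A minimal sketch of this joint term follows; the indexing convention is
an illustrative assumption.
\begin{verbatim}
# Minimal sketch of the goal interactive loss: binary cross entropy
# over the K^2 selected goal-pairs, with a single positive pair whose
# candidates best match both ground-truth endpoints. Indexing is an
# illustrative assumption.
import torch
import torch.nn.functional as F

def joint_goal_loss(pi_joint, q_star, k_star):
    """pi_joint: (K_sel, K_sel) joint probabilities over the selected
    goal-pairs; (q_star, k_star) indexes the positive pair k^J."""
    target = torch.zeros_like(pi_joint)
    target[q_star, k_star] = 1.0        # the single positive joint mode
    return F.binary_cross_entropy(pi_joint.flatten(), target.flatten())
\end{verbatim}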
\textbf{Training on Trajectory Interactive Prediction.} After obtaining $\mathcal{K}^2$ diverse combinations of future intents, we accordingly obtain the corresponding $\mathcal{K}^2$ goal-oriented trajectory-pairs. Let $\hat{\boldsymbol{Y}}_{i}^{\kappa} = \left\{ \hat{\boldsymbol{y}}_{i}^{\kappa,t+1}, \hat{\boldsymbol{y}}_{i}^{\kappa,t+2}, \cdots, \hat{\boldsymbol{y}}_{i}^{\kappa,t+T} \right\}$ represent the goal-oriented trajectory prediction of interacting agent $i$ at joint mode $\kappa$. We also adopt the mean square error loss to minimize the Euclidean distance between the most likely predicted joint states and the ground truth at each future time step:
\begin{equation}
\label{eq:n4-24}
\begin{gathered}
L^{traj} = \frac{1}{2\mathcal{K}^2 \cdot T}\sum_{\kappa=1}^{\mathcal{K}^2}\sum_{\delta=t+1}^{t+T} \sum_{i} L_{MSE}\left( \hat{\boldsymbol{y}}_{i}^{\kappa,\delta}, \mathbb{1}\left(\kappa = k^{J}\right) \boldsymbol{y}_i^{\delta}\right).
\end{gathered}
\end{equation}
\noindent
Moreover, the teacher forcing approach is also utilized during the trajectory predictive rollouts, feeding one agent's ground truth observation at time step $\delta$, serving as a conditional interaction, to the other agent at time step $\delta+1$. In addition, during training, we also use the real endpoint of each interacting agent as the goal anchor to guide the prediction of the goal-oriented future trajectory.
At inference time, we replace the real endpoint of agent $A$ with its predicted future goals, serving as conditional queries, to estimate the conditional distribution over the future goal candidates for agent $B$. In addition, during the trajectory interactive prediction process, each interacting agent generates trajectories using its predicted future goals as anchors instead of its real endpoint. Also, the future ground truth is substituted by the predicted states to guide the future interactive rollouts between the two interacting agents in a step-wise manner.
\begin{algorithm*}[t]
\caption{{Optimization for CGTP Framework}}
\label{alg:algrithm1}
\LinesNumbered
\textbf{{Input:}}
scene information $\boldsymbol{C}=\left(\boldsymbol{C}_A, \boldsymbol{C}_B \right)$ and future joint states $\boldsymbol{Y}=\left(\boldsymbol{Y}_A, \boldsymbol{Y}_B \right)$
\textbf{{Initialize:}} Randomly initialize the GoINet parameters $\theta_{GoINet}$, all CGPNet parameters $\theta_{A}^{seg}$, $\theta_{A}^{reg}$, $\theta_{B}^{seg}$, $\theta_{B}^{reg}$, and all GTFNet parameters $\theta^{Enc}_{gru}$, $\theta^{Dec}_{gru}$, and $\theta_{traj}$
\While{ \rm{not \ convergence}}
{
\For{agent $i$ in \{A, B\}}
{$\boldsymbol{s}_i^X \gets$ encode the structural representation over history states via GoINet
$\boldsymbol{s}_i^{g, k} \gets$ encode the structural representations over future goal candidates via GoINet
$\boldsymbol{u}_i^X \gets f_{gru}^{Enc}(\boldsymbol{X}_i^0 \mid \theta^{Enc}_{gru}) $: encode the temporal representation over history states via GRU-based encoder}
\textbf{Training on the marginal prediction over future goal candidates for agent $A$}
$ \pi \left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}; \theta_A^{seg}\right) \gets$ estimate the marginal probability distribution
$\boldsymbol{\mu}\left( \boldsymbol{\Delta} \boldsymbol{g}_{A}^{q} \mid \emptyset; \theta_A^{reg}\right) \gets$ predict the offset mean
Update $\theta_{GoINet}$, $\theta_{A}^{seg}$ and $\theta_{A}^{reg}$ by minimizing $L_A^g$
\For{ conditional queries $q = 1 \ to \ \mathcal{K} $ }
{
\textbf{Training on the conditional prediction over future goal candidates for agent $B$ at per query $q$}
$ \pi \left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C}; \theta_B^{seg}\right) \gets$ estimate the conditional probability distribution
$\boldsymbol{\mu}\left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k} \mid \theta_B^{reg}\right) \gets$ predict the offset mean
\textbf{Training on the joint prediction over $\mathcal{K}$ goal-pair candidates at per query $q$}
$\pi \left( \boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}; \theta_A^{seg}, \theta_B^{seg}\right) \gets$ estimate the joint probability distribution
\For{ conditional intents $k= 1 \ to \ \mathcal{K} $ }
{
$\kappa = \mathcal{K}(q-1) + k \gets$ calculate the prediction mode
\textbf{Training on the joint prediction over goal-oriented future trajectories at per mode $\kappa$ }
\For{ $Timesteps \ \delta= t+1 \ to \ t+T $ }
{
$\boldsymbol{r}_{A}^{\kappa,\delta} \gets f^{Dec}_{gru}\left( \phi_A \left( \hat{\boldsymbol{y}}_{B}^{\kappa,\delta-1}\right), \boldsymbol{g}_{A}^{q}, \boldsymbol{s}_A^X, \boldsymbol{u}_A^X \mid \theta^{Dec}_{gru}\right)$
$\boldsymbol{r}_{B}^{\kappa,\delta} \gets f^{Dec}_{gru}\left( \phi_B \left( \hat{\boldsymbol{y}}_{A}^{\kappa,\delta-1}\right), \boldsymbol{g}_{B}^{k}, \boldsymbol{s}_B^X, \boldsymbol{u}_B^X \mid \theta^{Dec}_{gru}\right)$
$\hat{\boldsymbol{y}}_{A}^{\kappa,\delta}, \hat{\boldsymbol{y}}_{B}^{\kappa,\delta} \gets f_{traj}\left(\boldsymbol{r}_{A}^{\kappa,\delta}, \boldsymbol{r}_{B}^{\kappa,\delta} \mid \theta_{traj} \right)$
}
}
}
Update $\theta_{GoINet}$, $\theta_B^{seg}$ and $\theta_B^{reg}$ by minimizing $L_B^g$
Update $\theta_{GoINet}$, $\theta_A^{seg}$ and $\theta_B^{seg}$ by minimizing $L_{Joint}^g$
Update $\theta_{GoINet}$, $\theta^{Enc}_{gru}$, $\theta^{Dec}_{gru}$ and $\theta_{traj}$ by minimizing $L^{traj}$
}
The $\mathcal{K}^2$ multimodal future trajectories $\left\{ \hat{\boldsymbol{Y}}_{i}^1, \hat{\boldsymbol{Y}}_{i}^2, \cdots, \hat{\boldsymbol{Y}}_{i}^{\mathcal{K}^2}\right\}$ are obtained for each interacting agent $i$
\textbf{return} All trainable parameters of GoINet, CGPNet and GTFNet
\end{algorithm*}
\section {Experiment}\label{sec:Experiment}
In this section, we first introduce the experimental settings, including datasets, metrics and implementation details. Subsequently, we compare our CGTP framework against existing trajectory prediction methods.
In addition, ablation studies are conducted to validate the effectiveness of the key designs of our novel approach. In the end, qualitative analyses are performed with respect to multimodality and future interactivity.
\subsection{Experimental Settings}
\emph{(1) Datasets:} \textcolor{black}{We evaluate our CGTP framework on three large-scale complex driving datasets: Argoverse motion forecasting dataset \cite{Argoverse2019}, In-house cut-in dataset and Waymo open motion dataset \cite{WaymoDataset2021}.}
\noindent\textbf{Argoverse motion forecasting dataset} is a widely-used trajectory prediction dataset recorded in over 30K traffic scenarios from Pittsburgh and Miami. These scenarios produce a series of frames sampled at 10Hz, which are further split into training and validation sets with 205942 and 39472 frames, respectively. Different from prior literature, our work focuses on joint trajectory prediction for two agents. Thus, given the positions of all agents in each frame within the past 2 seconds of observation, we consider the two agents of interest, with types 'agent' and 'av', as agent $A$ and agent $B$, respectively, whose 3-second future trajectories need to be evaluated. Besides, this dataset provides a friendly interface to conveniently retrieve lane segments and their connection relationships for each frame. However, one limitation of this dataset is that scenarios in which the two agents interact with each other in the future are rare. \textcolor{black}{To overcome this issue, the following interactive datasets are also taken into account.}
\noindent\textbf{In-house cut-in dataset} is a Baidu internal dataset that supports the interactive trajectory prediction task with two interacting agents. This large-scale dataset was collected in specific cut-in scenarios from Beijing, China, and is divided into two branches in terms of junction and non-junction environments. For the junction environment, there are 180201 interactive frames in total, extracted from over 11K unique cut-in scenarios, which are then split into 162381 frames for training and 17820 frames for validation. For the non-junction environment, we provide 193401 interactive frames recorded in more than 12K cut-in scenarios, with the training and validation sets containing 162556 and 30845 frames, respectively.
Further, in this paper, the interactive pair consists of agents $A$ and $B$, and we choose agent $A$ as the query agent that influences the cut-in reactions of agent $B$. Given 2 seconds of observed history, our objective is to predict 3 seconds of joint future trajectories for the two interacting agents in each cut-in frame. The agent trajectories are sampled at 10Hz, and the road topology is provided in the form of centerlines and lane boundaries.
\noindent\textcolor{black}{\textbf{Waymo open motion dataset} (WOMD) is, to the best of our knowledge, by far the most diverse interactive motion dataset. It contains more than 570 hours of unique data over 1750 km of roadways. Since WOMD provides specific labels for interacting vehicles, 158810 and 33686 interactive frames can be extracted from the training and validation sets, respectively. Further, we leverage the relation predictor in M2I \cite{sun2022m2i} to provide the influencer-reactor relationships of each interactive pair, with the influencer and reactor determined as agent $A$ and agent $B$, respectively. Given 1.1 seconds of agent states sampled at 10 Hz, we focus on the interactive prediction task of predicting the joint future positions of the two interacting agents for the next 8 seconds. In addition to history trajectories, the map features, represented by lane polylines, are also included in the prior observations of each frame.}
\emph{(2) Metrics:} Referring to the evaluation settings of \cite{WaymoDataset2021} and \cite{Argoverse2019}, we use minimum Average Displacement Error (minADE), minimum Final Displacement Error (minFDE) and Miss Rate (MR) to measure the distance error of joint trajectory predictions, and these metrics are applied to all three datasets. For WOMD and the In-house dataset, we also report the Overlap Rate (OR) to measure the collision frequency between the two interacting agents. \textcolor{black}{Besides, mean Average Precision (mAP) is considered on WOMD to measure the quality of the confidence scores over joint predictions, which is the official ranking metric used by the WOMD benchmark.} On the other hand, due to the unique cut-in attributes of the In-house dataset, we design the Cut-in Rate (CR) metric to identify whether agent $B$ is able to merge early into the lane that agent $A$ occupies, on the condition that no collision occurs. In the following, we give detailed expressions for these metrics.
\noindent\textbf{minADE}. The minimum Average Displacement Error is defined as the $\ell_2$ distance between the ground truth and the closest predicted trajectory-pair:
\begin{equation} \label{eq:n5-1}
\begin{gathered}
minADE=\frac{1}{2T} \min\limits_{\kappa}\sum_{i}\sum_{\delta=t+1}^{t+T} \Vert \hat{\boldsymbol{y}}_{i}^{\kappa,\delta}-\boldsymbol{y}_i^{\delta} \Vert _ 2.
\end{gathered}
\end{equation}
\noindent\textbf{minFDE}. The minimum Final Displacement Error is obtained by merely computing the minADE at the last future time step:
\begin{equation} \label{eq:n5-2}
\begin{gathered}
minFDE=\frac{1}{2} \min\limits_{\kappa}\sum_{i} \Vert \hat{\boldsymbol{y}}_{i}^{\kappa, t+T}-\boldsymbol{y}_i^{t+T} \Vert _ 2.
\end{gathered}
\end{equation}
\noindent\textbf{MR}. The Miss Rate is computed by evaluating an indicator function $IsMiss(\cdot)$ for each frame in turn and then averaging over the whole dataset. \textcolor{black}{For a specific frame, a miss is assigned if none of the joint predictions are within the given threshold(s) of the ground truth:}
\textcolor{black}{\begin{equation} \label{eq:n5-3}
\begin{gathered}
IsMiss(\cdot) = \min_{\kappa} \vee_{i} \mathbb{1}\left(Dist_{i}^{\kappa} > Dist_{thre} \right),
\end{gathered}
\end{equation}}
\textcolor{black}{where $Dist_{i}^{\kappa}$ and $Dist_{thre}$ have different definitions for different datasets. For the Argoverse and In-house datasets, $Dist_i^{\kappa}$ calculates the final displacement error between the ground truth and the future trajectory of agent $i$ at joint mode $\kappa$, and $Dist_{thre}$ is a single distance threshold set to 2 meters. Different from the above, WOMD adopts separate criteria for lateral versus longitudinal deviation depending on the initial velocity of the predicted agents. In this way, $Dist_{i}^{\kappa}$ and $Dist_{thre}$ are set as trajectory displacement errors and thresholds in both the lateral and longitudinal directions.}
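For concreteness, the following minimal NumPy sketch evaluates minADE, minFDE and the single-threshold miss indicator for one frame; the array shapes and the helper name \texttt{joint\_metrics} are our own illustrative assumptions rather than part of any benchmark implementation.
\begin{verbatim}
import numpy as np

def joint_metrics(pred, gt, dist_thre=2.0):
    # pred: (K2, 2, T, 2) trajectory-pairs (modes, agents, steps, xy)
    # gt:   (2, T, 2) ground-truth trajectory-pair
    err = np.linalg.norm(pred - gt[None], axis=-1)   # (K2, 2, T)
    min_ade = err.mean(axis=(1, 2)).min()            # 1/(2T) sum, min over modes
    min_fde = err[:, :, -1].mean(axis=1).min()       # last step only
    # Miss: in every mode at least one agent exceeds the threshold.
    is_miss = bool(np.min(np.any(err[:, :, -1] > dist_thre, axis=1)))
    return min_ade, min_fde, is_miss
\end{verbatim}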
\begin{table*}[htbp]
\centering
\caption{Comparison with marginal prediction methods and ablations on Argoverse motion forecasting dataset}
\begin{tabular}{c|c|ccc}
\toprule
\textbf{Joint Prediction} & \textbf{Methods} & \textbf{minADE $\downarrow$} & \textbf{minFDE $\downarrow$} & \textbf{MR $\downarrow$} \\
\midrule
\multicolumn{1}{c|}{\multirow{4}[2]{*}{\makecell[c]{Marginal \\ Predictions}}} & LSTM-ED \cite{Argoverse2019} & 1.2221 & 2.7970 & 0.6868 \\
& VectorNet (noGNN) \cite{VectorNet2020} & 1.1327 & 2.6005 & 0.6522 \\
& VectorNet \cite{VectorNet2020} & 1.0959 & 2.3197 & 0.5592 \\
& TNT \cite{TNT2020} & 1.1320 & 2.5341 & 0.5664 \\
\midrule
\multicolumn{1}{c|}{\multirow{2}[2]{*}{\makecell[c]{Ablations}}} & CGTP (wo interactive loss) & 1.0063 & 2.2847 & 0.4900 \\
& CGTP (w interactive loss) & \textbf{0.7533} & \textbf{1.6140} & \textbf{0.3369} \\
\bottomrule
\end{tabular}%
\label{tab:Argoverse}%
\end{table*}%
\begin{table*}[htbp]
\centering
\caption{Comparison with marginal prediction methods and ablations on In-house cut-in dataset}
\begin{tabular}{c|c|c|ccccc}
\toprule
\textbf{Scenarios} & \textbf{Joint Prediction} & \textbf{Methods} & \textbf{minADE $\downarrow$} & \textbf{minFDE $\downarrow$} & \textbf{MR $\downarrow$} & \textbf{OR $\downarrow$} & \textbf{CR $\uparrow$}\\
\midrule
\multicolumn{1}{c|}{\multirow{6}[4]{*}{Junction}} & \multicolumn{1}{c|}{\multirow{4}[2]{*}{\makecell[c]{Marginal \\ Predictions}}} & LSTM-ED \cite{Argoverse2019} & 0.9089 & 2.2090 & 0.6581 & \textbf{0.008817 } & 0.7047 \\
& & VectorNet (noGNN) \cite{VectorNet2020} & 0.8560 & 1.9331 & 0.5753 & 0.012480 & 0.7253 \\
& & VectorNet \cite{VectorNet2020} & 0.7130 & 1.7041 & 0.4538 & 0.011643 & 0.7368 \\
& & TNT \cite{TNT2020} & 0.8180 & 1.8980 & 0.4625 & 0.012556 & 0.7161 \\
\cmidrule{2-8} & \multicolumn{1}{c|}{\multirow{2}[2]{*}{\makecell[c]{Ablations}}} & CGTP (wo interactive loss) & 0.7022 & 1.6974 & 0.4308 & 0.011217 & 0.7575 \\
& & CGTP (w interactive loss) & \textbf{0.6454} & \textbf{1.5481} & \textbf{0.3736} & 0.010379 & \textbf{0.7639} \\
\midrule
\midrule
\multirow{6}[4]{*}{Non-Junction} & \multicolumn{1}{c|}{\multirow{4}[2]{*}{\makecell[c]{Marginal \\ Predictions}}} & LSTM-ED \cite{Argoverse2019} & 0.9337 & 2.1816 & 0.6077 & 0.002002 & 0.7965 \\
& & VectorNet (noGNN) \cite{VectorNet2020} & 0.9318 & 2.0643 & 0.5887 & 0.002486 & 0.8151 \\
& & VectorNet \cite{VectorNet2020} & 0.8933 & 1.9584 & 0.5180 & 0.002518 & 0.8018 \\
& & TNT \cite{TNT2020} & 0.9090 & 2.0577 & 0.5285 & 0.002970 & 0.7153 \\
\cmidrule{2-8} & \multicolumn{1}{c|}{\multirow{2}[2]{*}{\makecell[c]{Ablations}}} & CGTP (wo interactive loss) & 0.7814 & 1.9149 & 0.4721 & 0.002098 & 0.8341 \\
& & CGTP (w interactive loss) & \textbf{0.6209} & \textbf{1.4780} & \textbf{0.3544} & \textbf{0.001793} & \textbf{0.8583} \\
\bottomrule
\end{tabular}%
\label{tab:In-house}%
\vspace{-1em}
\end{table*}%
\noindent\textbf{OR}. A single overlap is defined as a frame in which the bounding boxes of the two interacting agents overlap at any future time step in the highest-confidence trajectory-pair prediction. The average over all frames gives the overlap rate.
Here, we use $\bar{\kappa}$ to denote the index of the predicted trajectory-pair with the highest confidence score, and define a single overlap indicator over it as
\begin{equation} \label{eq:n5-4}
\begin{gathered}
IsOverlap(\cdot) = \mathbb{1}\left(\sum_{\delta=t+1}^{t+T} IOU\left( b\left(\hat{\boldsymbol{y}}_{A}^{\bar{\kappa},\delta}\right), b\left(\hat{\boldsymbol{y}}_{B}^{\bar{\kappa},\delta}\right)
\right) > 0\right),
\end{gathered}
\end{equation}
where $b(\cdot)$ is a function to obtain the bounding box information (length, width and heading) from the predicted state of each interacting agent at any future time step $\delta$. Subsequently, inspired by \cite{TrafficSim2021}, $IOU(\cdot)$ computes the intersection-over-union between two bounding boxes of interacting agents.
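As an illustration of $b(\cdot)$ and $IsOverlap(\cdot)$, the sketch below tests the top-scoring trajectory-pair for any future-step overlap using Shapely polygons; the function names and array layouts are assumptions made for this example only.
\begin{verbatim}
import numpy as np
from shapely.geometry import Polygon

def box_polygon(x, y, heading, length, width):
    # b(.): corners of an oriented bounding box in world coordinates.
    c, s = np.cos(heading), np.sin(heading)
    half = np.array([[length, width], [length, -width],
                     [-length, -width], [-length, width]]) / 2.0
    rot = np.array([[c, s], [-s, c]])          # row-vector rotation
    return Polygon(half @ rot + np.array([x, y]))

def is_overlap(traj_a, traj_b, size_a, size_b):
    # traj_*: (T, 3) rows of (x, y, heading); size_*: (length, width).
    for pa, pb in zip(traj_a, traj_b):
        box_a = box_polygon(pa[0], pa[1], pa[2], *size_a)
        box_b = box_polygon(pb[0], pb[1], pb[2], *size_b)
        if box_a.intersection(box_b).area > 0:  # IOU > 0 at this step
            return True
    return False
\end{verbatim}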
\noindent\textcolor{black}{\textbf{mAP}. Given the confidence score of joint predictions estimated by $\pi\left(\boldsymbol{g}_B^k, \boldsymbol{g}_A^{q} \mid \boldsymbol{C}\right)$, mAP calculates the area under the precision-recall curve, where the definition of MR is employed to determine true positives, false positives, etc. }
\noindent\textbf{CR}. The Cut-in Rate is computed as the total number of cut-in frames divided by the total number of safe frames, where we use the definition of OR above to identify safe frames. In addition to the absence of overlap between the two interacting agents, a cut-in frame is determined by a cut-in indicator:
\begin{equation} \label{eq:n5-5}
\begin{gathered}
IsCutin(\cdot) = \mathbb{1}\left( lane\left(\hat{\boldsymbol{y}}_{A}^{\bar{\kappa},t+T}\right)=lane\left(\hat{\boldsymbol{y}}_{B}^{\bar{\kappa},t+T}\right)\right)
\\
\wedge \mathbb{1}\left( y\left(\hat{\boldsymbol{y}}_{B}^{\bar{\kappa},t+T}\right)>y\left(\hat{\boldsymbol{y}}_{A}^{\bar{\kappa},t+T}\right)\right),
\end{gathered}
\end{equation}
where $lane(\cdot)$ denotes a function that returns the index of the lane in which each interacting agent is located at the last future time step, and $y(\cdot)$ returns the longitudinal coordinate of the endpoint of each predicted trajectory.
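Putting the two indicators together, a minimal sketch of the CR computation over a set of frames might read as follows, assuming the lane indices, longitudinal endpoint coordinates and overlap flags have already been extracted per frame:
\begin{verbatim}
def cutin_rate(frames):
    # frames: iterable of dicts with keys "overlap", "lane_a",
    # "lane_b", "y_a", "y_b" for the top-scoring trajectory-pair.
    safe = [f for f in frames if not f["overlap"]]
    cutin = [f for f in safe
             if f["lane_a"] == f["lane_b"] and f["y_b"] > f["y_a"]]
    return len(cutin) / max(len(safe), 1)
\end{verbatim}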
\emph{(3) Implementation Details:} This section introduces the implementation details from the aspects of pre-processing, network architecture design and learning scheme.
\noindent \textbf{Pre-processing}. For each interacting agent, both past and future trajectories are normalized to its own reference frame, with the origin centered at its location at the last observed time step. Further, the other observations, including agent trajectories and future lanes, are transformed accordingly to the reference frame of each interacting agent. For the dynamic information, heuristic rules are applied to select $O$=14 nearby vehicles as surrounding agents. For the static information, we use the Depth-First-Search algorithm to find $P$=6 potential future lanes that each interacting agent is likely to reach, and every future lane has 200 goal candidates sampled every 0.5 meters. In total, we obtain $K$=1200 fine-grained goal candidates to represent diverse uncertainties. If the number of surrounding agents or future lanes is insufficient, the corresponding locations are masked out with zeros.
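The fine-grained goal sampling along a lane centerline can be sketched as an arc-length resampling step; the helper below is a simplified NumPy illustration rather than the actual pre-processing code, and it zero-masks candidates beyond the lane's end as described above.
\begin{verbatim}
import numpy as np

def sample_goal_candidates(centerline, step=0.5, num=200):
    # centerline: (N, 2) ordered points of one future lane.
    seg = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative length
    targets = np.arange(num) * step                # 0, 0.5, 1.0, ... m
    xs = np.interp(targets, arc, centerline[:, 0])
    ys = np.interp(targets, arc, centerline[:, 1])
    goals = np.stack([xs, ys], axis=1)             # (num, 2) candidates
    goals[targets > arc[-1]] = 0.0                 # mask beyond lane end
    return goals
\end{verbatim}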
\noindent \textbf{Network Architecture Design}. In the context encoding aspect, GoINet extracts the individual-level feature over every node via $L$=3 graph layers.
Due to the concatenation operator $f_{cc}(\cdot)$, the number of hidden units at graph layer $l$ is twice that at graph layer $l-1$, and its initial value $d_h$ is set to 16.
Subsequently, the goal distribution and offset predictions of the two interacting agents, realized by $f_i^{seg}(\cdot)$ and $f_i^{reg}(\cdot)$, are 3-layer MLPs with 128 hidden units. Based on the goal interactive prediction, the proposed CGPNet selects the top $\mathcal{K}$=5 future goal candidates of agent $A$ from its marginal goal distribution, each of which in turn serves as a conditional query to determine the same number of future goal candidates of agent $B$ from the conditional goal distribution.
Further, during the trajectory interactive prediction process, 2-layer bidirectional GRUs with hidden dimension 128 are used by both the encoder and the decoder. Given the $\mathcal{K}^2$=25 goal-pairs, 25 goal-oriented trajectory-pairs are then produced jointly by the proposed GTFNet. \textcolor{black}{Different from the Argoverse and In-house cut-in datasets, we further reduce the number of joint trajectory-pairs to 6 to satisfy the evaluation requirements of the Waymo motion prediction benchmark. In our CGTP framework, we filter 6 joint predictions from the $\mathcal{K}^2$=25 candidates using the non-maximum suppression method \cite{TNT2020}.}
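A greedy endpoint-based variant of this filtering step is sketched below; the 2-meter suppression radius and the use of joint endpoints as the distance key are illustrative assumptions, not the exact procedure of \cite{TNT2020}.
\begin{verbatim}
import numpy as np

def nms_trajectory_pairs(endpoints, scores, keep=6, radius=2.0):
    # endpoints: (25, 2, 2) last-step (x, y) of both agents per mode;
    # scores: (25,) joint confidence scores.
    order = np.argsort(-scores)
    kept = []
    for idx in order:
        if all(np.linalg.norm(endpoints[idx] - endpoints[j]) > radius
               for j in kept):
            kept.append(idx)
        if len(kept) == keep:
            break
    return kept
\end{verbatim}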
\noindent \textbf{Learning Scheme}. Our proposed CGTP framework is trained on 8 A100 GPUs with the Adam optimizer \cite{Adam2014}. The learning rate is initialized to 5e-3 and decayed by a factor of 0.5 every 30 epochs. Our model requires approximately 200 epochs to train with a batch size of 64.
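In PyTorch, the stated schedule corresponds to the following minimal setup, where the model and data loop are placeholders:
\begin{verbatim}
import torch

model = torch.nn.Linear(128, 2)   # placeholder for the CGTP network
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=30, gamma=0.5)
for epoch in range(200):
    for _ in range(1):            # placeholder batch loop
        optimizer.zero_grad()
        loss = model(torch.randn(64, 128)).pow(2).mean()  # dummy loss
        loss.backward()
        optimizer.step()
    scheduler.step()              # halve the lr every 30 epochs
\end{verbatim}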
\begin{table*}[t]
\centering
\caption{\textcolor{black}{Comparison with interactive prediction benchmark of WOMD}}
\begin{tabular}{c|c|ccccc}
\toprule
\textbf{Joint Prediction} & \textbf{Methods} & \textbf{minADE $\downarrow$} & \textbf{minFDE $\downarrow$} & \textbf{MR $\downarrow$} & \textbf{OR $\downarrow$} &\textbf{mAP $\uparrow$}\\
\midrule
\multicolumn{1}{c|}{\multirow{2}[1]{*}{\makecell[c]{Marginal \\ Predictions}}} & Waymo LSTM Baseline \cite{WaymoDataset2021} & 2.420 & 6.070 & 0.660 & - & 0.070\\
& TNT \cite{TNT2020} & 2.585 & 6.136 & 0.605 & 0.186 & 0.167 \\
\midrule
\multicolumn{1}{c|}{\multirow{2}[1]{*}{\makecell[c]{Conditional \\ Predictions}}}
& ProspectNet \cite{ProspectNet} & 3.012 & 8.118 & 0.826 & 0.416 & 0.115 \\
& M2I \cite{sun2022m2i} & 2.399 & 5.477 & 0.552 & 0.174 & 0.177 \\
\midrule
\multicolumn{1}{c|}{\multirow{2}[2]{*}{\makecell[c]{Ablations}}} & CGTP (wo interactive loss) & 2.414 & 5.531 & \textbf{0.551} & 0.173 & 0.179 \\
& CGTP (w interactive loss) & \textbf{2.371} & \textbf{5.395} & 0.559 &\textbf{0.169} & \textbf{0.180}\\
\bottomrule
\multicolumn{7}{l}{{$^{\star}$mAP is the official ranking metric.}}
\end{tabular}%
\label{tab:Waymo}%
\end{table*}%
\subsection{Quantitative Results}
\noindent\textbf{\textcolor{black}{Comparisons on the Argoverse and In-house datasets.}} We first compare the performance of our CGTP framework with the existing mainstream marginal prediction approaches on the Argoverse and In-house cut-in datasets. In this paper, we extend the LSTM-based encoder-decoder (LSTM-ED) \cite{Argoverse2019}, VectorNet \cite{VectorNet2020} and TNT \cite{TNT2020} to the joint prediction task, producing marginal predictions for both agents without considering their future interactions. Among them, LSTM-ED and VectorNet predict trajectory-pairs via a pure regression method.
Especially for VectorNet, to validate the effectiveness of message passing in a graph, we provide a variant of VectorNet, named VectorNet (noGNN), whose context representations are captured purely by MLP and max-pooling. \textcolor{black}{On the other hand, we compare our CGTP framework with the goal-oriented trajectory prediction model TNT to further verify the significance of the proposed goal interactive predictor.} As shown in Tables~\ref{tab:Argoverse} and \ref{tab:In-house}, our proposed model outperforms all marginal approaches on the Argoverse and In-house cut-in datasets by a large margin in all distance error metrics (minADE, minFDE and MR). More specifically, for the In-house non-junction environment, the comparative results show that the proposed CGTP framework significantly outperforms the TNT model with a 32.9$\%$ reduction in MR. This enhancement can be attributed to the accurate estimation of the joint distribution over future goals in the goal interactive prediction stage.
Also note that VectorNet achieves significant improvements over VectorNet (noGNN), demonstrating that GNNs can aggregate interactive features from context information via message passing.
In terms of interactive metrics such as CR and OR, compared with the marginal prediction methods, our CGTP framework achieves on-par or better performance on the In-house cut-in dataset, as shown in Table~\ref{tab:In-house}.
Specifically, in the non-junction environment, the conditional model trained with our proposed framework beats the marginal model trained with TNT, achieving a 39.6$\%$ relative reduction in OR and a 20.0$\%$ relative gain in CR. Unlike TNT, the proposed CGTP framework models the future interactions at the goal-based level,
and is thus capable of learning the joint distribution of cut-in interactive behaviors. In contrast, the TNT-based marginal model assumes the goal predictions of the two interacting agents are independent of each other, and hence hardly generates reasonable cut-in trajectory-pairs in scenarios with complex future interactions. \textcolor{black}{For the junction environment, our CGTP framework greatly outperforms LSTM-ED in CR, while its OR performance is slightly worse. This phenomenon results from the poor imitation ability of the simple regression-based marginal model, which may output
inaccurate behaviors far away from the ground truth cut-in behaviors yet with a safety margin between the two interacting agents, leading to the illusion of a lower collision rate. }
\noindent \textcolor{black}{\textbf{Comparisons on the interactive prediction benchmark of WOMD.} In Table \ref{tab:Waymo}, we compare our model with both marginal and conditional prediction methods. On one hand, the marginal approaches include the Waymo LSTM Baseline \cite{WaymoDataset2021} and TNT \cite{TNT2020}, where the former is the official baseline provided by the benchmark and the latter is a typical goal-oriented prediction method that also served as a comparison model on the two datasets above. On the other hand, we take ProspectNet \cite{ProspectNet} and M2I \cite{sun2022m2i} as conditional comparative approaches, which are state-of-the-art models on the WOMD interactive prediction benchmark. Such conditional models commonly build conditional dependencies on explicit overall future trajectories while differing in the process of context encoding. In detail, ProspectNet leverages attention to aggregate vectorized features both spatially and temporally,
while M2I learns features from both rasterized and vectorized representations. In addition, the unique novelty of M2I lies in its relation predictor, which infers the influencer-reactor relations of the two interacting agents and then leverages marginal and conditional trajectory predictors in turn to generate the joint trajectory-pairs. In our work, we adopt the relation predictor of M2I to determine agents $A$ and $B$ before training and validation, yet focus on goal interactive prediction in a combined form of marginal and conditional methods.}
\begin{figure*}[htbp]
\centering
\begin{center}
\scriptsize
\includegraphics*[width=0.85\textwidth]{./photo/v.jpg}
\caption{\textcolor{black}{Qualitative examples from TNT, M2I, and our CGTP framework in four classes of pairwise interactive scenarios, including (a) cut-in, (b) yielding, (c) merging, and (d) intersection left-turn. Each pairwise interactive scenario is demonstrated by a group of examples. Compared with TNT (upper row) and M2I (medium row), our CGTP framework (lower row) accounts for future interactions at the goal-based level and achieves better prediction accuracy and scene compliance.
} }
\label{fig:Future-Interaction}
\end{center}
\vspace{-1.3em}
\end{figure*}
\textcolor{black}{The comparison results demonstrate that our CGTP model outperforms the Waymo LSTM Baseline in terms of all metrics. Similar to the observations on the two datasets above, the performance of our CGTP framework is superior to the goal-oriented trajectory prediction method TNT.
\textcolor{black}{When compared to the conditional model ProspectNet, our CGTP framework improves mAP by 56.52$\%$, meaning that the combined design of CGPNet and GTFNet is capable of learning more accurate joint distribution in the future.}
\textcolor{black}{We further validate the effectiveness of modeling future interactions between sparse explicit future goals by comparing the joint predictions of our model with those of the state-of-the-art model M2I, which instead considers future interactions between redundant explicit future trajectories; our model achieves a 2.87$\%$ reduction in OR and a 1.69$\%$ gain in mAP, the official ranking metric.}
}
\noindent\textcolor{black}{\textbf{Comparisons in the ablation study.}} We conduct ablation studies to analyze the contribution of the interactive loss in the proposed CGTP framework. As shown in Tables \ref{tab:Argoverse} and \ref{tab:In-house}, our CGTP framework achieves on-par or better results in all metrics by adding the novel interactive loss. The improvements indicate that our model with the interactive loss can obtain high-quality goal-pairs estimated by the learned joint distribution, and then produce scene-compliant goal-oriented trajectory-pairs that most closely match the ground truth. \textcolor{black}{On WOMD, the most likely trajectory-pair learned with the interactive loss characterizes a more reasonable future interactive behavior, improving the mAP metric by 0.56$\%$.} Similar observations are also present in the interactive metrics of the In-house cut-in dataset.
More specifically, in the junction environment, OR and CR are 7.5$\%$ and 0.8$\%$ better compared to our model without the interactive loss. In the non-junction environment, our model with the interactive loss is 14.5$\%$/2.9$\%$ better in OR/CR compared to the one without it.
\subsection{Qualitative Results}
\textcolor{black}{In Fig.~\ref{fig:Future-Interaction}, we present four classes of challenging pairwise interactive scenarios in WOMD, including cut-in, yielding, merging and intersection left-turn, and visualize the most likely trajectory-pair from the goal-oriented trajectory prediction method TNT, the SOTA method M2I, and our CGTP framework, respectively. In Fig.~\ref{fig:Future-Interaction}.(a), a group of examples depicts a pairwise interactive scenario where agent $B$ is cutting in front of agent $A$. The goal-oriented trajectory prediction model TNT fails to capture the interaction and predicts overlapping trajectories, as shown in the first column of Fig.~\ref{fig:Future-Interaction}.(a). Although no overlap is exhibited in the remaining cut-in examples, TNT still produces less accurate predictions that mismatch the ground truth interactive behaviors. M2I also hardly captures accurate aggressive cut-in interactive behaviors by considering the overall predicted trajectory of agent $A$. Instead, our CGTP framework is sensitive to the underlying interaction between the future goals of the two interacting agents: it predicts an accurate endpoint of agent $B$ conditioned on the predicted endpoint of agent $A$, and then outputs a scene-compliant goal-oriented trajectory-pair given an accurate cut-in goal-pair prediction. Different from Fig.~\ref{fig:Future-Interaction}.(a), Fig.~\ref{fig:Future-Interaction}.(d) provides a set of examples where agent $B$ turns left while the oncoming agent $A$ goes straight at the intersection, which represents a more challenging pairwise interactive scenario. In each example, our CGTP framework successfully improves prediction accuracy and scene compliance, while TNT predicts trajectories far away from the ground truth without considering the future interaction between the two agents.}
\section {Conclusion}\label{sec:Conclusion}
In this paper, we propose a novel CGTP framework for the interactive behavior prediction of two agents. We build hierarchical representations of fine-grained future goals, and focus on the goal interactive prediction stage through a combined form of marginal and conditional goal predictors, where we predict the future goals of agent $A$ via the marginal goal predictor and then perform future goal prediction of agent $B$ conditioned on each marginal prediction. Once the goal-pairs of the two interacting agents are determined, a trajectory interactive prediction module generates the goal-oriented trajectory-pairs in a step-wise rollout manner. \textcolor{black}{The experimental results on the Argoverse motion forecasting dataset, the In-house cut-in dataset and the Waymo open motion dataset show the superiority of our proposed method in prediction accuracy and scene compliance. As future work, joint prediction for more interacting agents with low computational burden is an interesting and important frontier.}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}
\begin{figure*}[htbp]
\centering
\begin{center}
\scriptsize
\includegraphics*[width=7.1in]{./photo/Motivation.pdf}
\caption{\textcolor{black}{\textbf{Motivation demonstration of the CGTP framework.}}
\textcolor{black}{In (a), the goal-oriented trajectory prediction model, a representative marginal prediction method, generates diverse goal-oriented future trajectories for each agent independently, producing joint predictions by combining self-consistent predictions of individual agents, with unrealistic or colliding behaviors possibly occurring. In (b), the CGTP framework, a novel multimodal joint prediction method, accounts for future interactions via conditional modeling and outputs scene-compliant future trajectories. It focuses on goal interactive prediction, realized by a combined form of a marginal goal predictor for agent \textit{A} and a conditional goal predictor for agent \textit{B}, and then the joint predictions are generated by the conditional goal-oriented trajectory predictors.}
\label{fig:Motivation}
\end{center}
\vspace{-1.3em}
\end{figure*}
Predicting or forecasting the future trajectories of multiple agents is a mission-critical component of an autonomous driving system \cite{LKQ2021, DynaNet2021}, and plays an important role in the subsequent motion planning \cite{Brain2016, Lihaoran2020} and decision \cite{Wangjunjie1, Wangjunjie2} modules. In particular, interactive behavior prediction has received increasing attention in recent years \cite{WaymoDataset2021}; it aims to jointly predict the behaviors
of interacting agents in critical situations such as cut-in and yielding.
\textcolor{black}{Fig.~\ref{fig:Motivation} presents two research pipelines for joint prediction.
A naive approach to joint prediction is to use a marginal prediction method. This class of models \cite{Desire2017, MultiPath2019, VectorNet2020, TNT2020} generates diverse predictions independently for each agent, and then combines the marginal predictions to output joint realizations. \textcolor{black}{Notably, goal-oriented trajectory prediction (GTP) \cite{TNT2020} is a representative marginal prediction method, achieving great success in multimodal trajectory prediction by first identifying endpoints via a goal predictor and then predicting multiple trajectories via a goal-oriented trajectory predictor.} However, the weakness
of such a method lies in the absence of modeling future interactions with the other interacting agent. As a result, while the goal-oriented prediction method produces self-consistent predictions for individual agents, the joint prediction-pairs may exhibit unrealistic or colliding behaviors. For instance, despite the non-colliding predictions in mode 1 of Fig.~\ref{fig:Motivation} (a), agent $B$ should aggressively speed up when changing lanes to keep a safe distance from agent $A$. Such scene-compliant interactive behaviors can hardly be captured by a pure marginal prediction method. To overcome this issue, recent advances have shown great success in predicting scene-compliant trajectories by learning from multimodal joint prediction methods, which adopt conditional prediction models to consider interactions between agents' future predictions \cite{CBP2021, mfp2019, Precog2019, ILVM2020}.}
\textcolor{black}{In particular, Waymo also provides a large-scale interactive motion forecasting dataset \cite{WaymoDataset2021} for autonomous driving, and some multimodal joint prediction methods \cite{SceneTrans2021, sun2022m2i, ProspectNet} achieve better performance on this dataset.}
Hence, the multimodal joint prediction rather than marginal prediction is required for interactive driving scenarios.
\textcolor{black}{In the literature, researchers characterize the underlying intents of interacting agents in various forms of future prediction information, which serve as conditional dependencies for multimodal joint prediction methods.
A family of methods \cite{CBP2021, mfp2019, Precog2019, SceneTrans2021, sun2022m2i, ProspectNet} leverages the overall future trajectories of interacting agents as explicit future intents, focusing on interactive behavior prediction conditioned on them. Instead, ILVM \cite{ILVM2020} adopts implicit latent variables to describe the future intents of interacting agents. Different from the recent studies above, in this paper we propose a novel conditional goal-oriented trajectory prediction (CGTP) framework} \textcolor{black}{that combines the significant advantages of the goal-oriented trajectory prediction method and the conditional prediction method, as indicated in Fig.~{\ref{fig:Motivation}} (b). On one hand, compared with the goal-oriented trajectory prediction method, CGTP conducts conditional inference both in the goal predictor and in the goal-oriented trajectory predictor. On the other hand, compared with recent studies on multimodal joint prediction, CGTP focuses more on the future interactions at the goal-based level via a combined form of marginal and conditional goal predictors.}
We factorize the CGTP framework into three parts: context encoding, goal interactive prediction and trajectory interactive prediction. To summarize, we list the main contributions of CGTP framework as follows:
\begin{itemize}
\item For the context encoding, we design a Goals-of-Interest Network (GoINet) that combines the advantages of Graph Neural Networks (GNNs) and Transformers to hierarchically capture interactive features over the prior knowledge (agent trajectories and future lanes), and then obtains the structural representations of fine-grained future goals for each interacting agent by aggregating interactive features from the individual, local and global levels.
\item For the goal interactive prediction and trajectory interactive prediction, we propose the Conditional Goal Prediction Network (CGPNet) and the Goal-oriented Trajectory Forecasting Network (GTFNet) by embedding the conditional prediction into the goal-oriented trajectory prediction method.
Based on CGPNet and GTFNet, the future diverse interactions between two interacting agents can be captured by the learned joint distribution.
\item In addition, a goal interactive loss is established for the CGPNet, which aims to better learn the joint probability distribution over future goal candidates for the two interacting agents.
\item \textcolor{black}{
Comparison experiments on Argoverse motion forecasting dataset, In-house cut-in dataset, and Waymo open motion dataset verify the superiority of our CGTP framework over the mainstream marginal prediction models and state-of-the-art conditional prediction model.
}
\end{itemize}
\section{Related Work}\label{sec:relatedWork}
In this section, we provide a detailed trajectory prediction literature review with a particular emphasis on deep learning methods, from the perspectives of context encoding, anchor-based prediction and conditional prediction.
\subsection{Context Encoding}\label{sec:Contex Information Encoding}
There is a family of work on trajectory prediction that uses convolutional neural networks (CNNs) and renders the input as a multi-channel rasterized bird's-eye view (BEV) image \cite{ MTP2019, TPCN2021}. \textcolor{black}{However, such rasterized approaches have difficulty modeling long-range interactions and representing continuous physical states. A popular alternative is the vectorized method.
With this approach, the history of agent motion is typically encoded via sequence modeling techniques like RNNs \cite{MATF2019}, while the elements of the road graph are approximated as piecewise-linear segments with additional attributes such as current states and semantic type. Furthermore, information aggregation techniques are utilized to learn relationships between the agent dynamics in the context of the road graph. The Transformer \cite{Transformer2017} is one popular choice for interaction-aware motion modeling based on the attention mechanism, capturing relationships along three different axes: time steps, agents and road elements. For instance, \cite{zhao2020spatial} focuses on temporal encoding and decoding by applying a self-attention module along the time axis, while \cite{Jean2019} employs a new multi-head attention architecture to model interactions between all agents. Unlike past work using independent self-attention for each axis, SceneTransformer \cite{SceneTrans2021} is designed to handle interactive modeling among time steps, agents and road graph elements in a unified way. Alternatively, GNN-based methods have recently shown promise in motion prediction tasks by learning interactive graph representations from vectorized features via operators like graph convolution and message passing \cite{GraphReview2018, Pedestrain2021, Chaochenzhuolei}.}
VectorNet \cite{VectorNet2020} introduces a hierarchical graph method that first processes agent histories and map features in the form of polylines and then fuses them using a global interactive graph. Different from VectorNet, LaneGCN \cite{LaneGCN2020} constructs only a lane graph using graph convolution before capturing all possible agent-map interactions. However, the interaction-aware models above focus more on interactive modeling between coarse-scale objects such as agent past trajectories or lanes \cite{zhang2022trajgen}, rather than fine-grained elements of the map topology such as goals.
\subsection{Anchor-based Multimodal Trajectory prediction}\label{sec:anchor-based method}
The multimodal trajectory prediction models are largely realized by anchor-based methods. This class of methods chooses different types of possible intents, including diverse future trajectories, goals, relevant lanes and regions, to represent the modes of the agent trajectory distribution. A family of studies leverages future trajectories as prior anchors, and then produces the final trajectory predictions using a learning-based method. For example, MultiPath \cite{MultiPath2019} and CoverNet \cite{CoverNet2019} generate a candidate set of predefined future trajectories as hypothesis proposals, while PRIME \cite{PRIME2021} and TPNet \cite{TPNet2020} produce feasible future trajectories with a model-based generator instead. Further, TNT \cite{TNT2020} generates goal-oriented trajectories to diversify the prediction modes, with the goal anchor candidates sampled from the High-Definition (HD) map. Besides, since the topological structure of lanes can be thought of as guidance for the motion of drivers, a vast majority of recent work leverages a set of instance-level lane entities as spatial proposals to generate multimodal plausible trajectories \cite{LaPred2021, GoalNet2020}. Unlike the anchor-based models above, \cite{mmTransformer2021} constructs a novel region-based training method in order to cover all possible prediction modes of a scene with limited training samples. This approach first divides the surrounding space into a small number of regions, and then refines the prediction outcomes to the specific region where the ground truth locates. In conclusion, these anchor-based approaches commonly have two stages, anchor selection and trajectory regression, which are trained end-to-end with stronger interpretability. Unfortunately, anchor-based methods have so far largely been used in the marginal prediction process.
\subsection{Conditional Trajectory Prediction}
As illustrated in Section I, the marginal prediction approach can hardly be applied in interactive driving scenarios, since such a model ignores the fact that the action made by interacting agent $A$ in the future may have a critical effect on interacting agent $B$ and vice versa. Hence, a small number of studies have explored modeling joint future trajectories based on conditional prediction models \cite{CBP2021, SceneTrans2021, mfp2019, Precog2019, sun2022m2i, ProspectNet, ILVM2020}. These methods output future agent trajectories by conditioning on other interacting agents' explicit future trajectories or implicit latent variables.
By comparison, we develop a CGPNet to first complete goal interactive prediction based on the conditional method; it takes as queries the potential future goals of agent $A$ and predicts the probability distribution over the future goal candidates of agent $B$ conditioned on each query.
Following this, we consider interactions over the future trajectories timestep-by-timestep, designing a GTFNet to predict interactive future behaviors in a rollout manner.
\textcolor{black}{\section{Problem Formulation}}\label{sec:backgound}
\textcolor{black}{\subsection{Variable Definition}}\label{sec:formulation}
\begin{figure*}[!t]
\centering
\begin{center}
\scriptsize
\includegraphics*[width=7.1in]{./photo/CGITP_modified.pdf}\\
\caption{\textbf{An overview of the CGTP framework.} The proposed GoINet is first used to extract the hierarchical features over each interacting agent and its future goal candidates. Then, we select the interactive future goal-pairs via a novel CGPNet. Finally, the proposed GTFNet conducts the trajectory interactive prediction process to produce the goal-oriented predictions of the two interacting agents, generating multimodal interactive behaviors such as cut-in, yielding, lane-keeping and turning right.}
\label{fig:CGITP_framwork}
\end{center}
\vspace{-1.3em}
\end{figure*}
\textcolor{black}{Given the scene information in a combined form as $\boldsymbol{C} = \boldsymbol{C}_A \cup \boldsymbol{C}_B$, our objective is to predict the future joint states $\boldsymbol{Y} = \boldsymbol{Y}_A \cup \boldsymbol{Y}_B$ of two interacting agents up to a finite horizon $T$, modeled as a joint distribution $p( \boldsymbol{Y} \mid \boldsymbol{C})$.
\textcolor{black}{Towards each interacting agent $i$ \textcolor{black}{$\in$ $\{A, B\}$}, the scene information $\boldsymbol{C}_i: \{\boldsymbol{X}_i, \boldsymbol{L}_i\}$ contains dynamic and static representations \textcolor{black}{normalized at its reference frame}, where the agent trajectory set $\boldsymbol{X}_i=\{\boldsymbol{X}_i^{m}, m\in[0, O]\}$ includes the observed trajectory of predicted agent $\boldsymbol{X}_i^{0}$ and other agents' trajectories $\{\boldsymbol{{X}_i^{m}},{m\in[1,O]}\}$,
and $\boldsymbol{L}_i=\{\boldsymbol{L}_i^{m}, m\in[1, P]\}$ describes $P$ coarse-scale lanes that the agent $i$ is likely to reach in the future.} }
\textcolor{black}{\subsection{Conditional Goal-oriented Trajectory Prediction}}\label{sec:formulation}
For better comprehension, the marginal prediction method is introduced first, which lays a solid foundation for our proposed CGTP framework. In general, marginal prediction methods are built on two assumptions.
\newtheorem{assumption}{Assumption}
\begin{assumption}\label{Agent Independence}
The agent's future states evolve independently from another interacting agent \cite{Desire2017, MultiPath2019, VectorNet2020, TNT2020}.
\end{assumption}
\textcolor{black}{Such an independence assumption implies that marginal prediction methods predict the marginal distributions of individual agents without considering their interactions in the future.} \textcolor{black}{Once \textit{Assumption \ref{Agent Independence}} is adopted, the factorization of the joint distribution simplifies to two marginal distributions}:
\begin{equation} \label{eq:n3-1}
\begin{gathered}
p(\boldsymbol{Y} \mid\boldsymbol{C}) = p( \boldsymbol{Y}_A \mid \boldsymbol{C}) p( \boldsymbol{Y}_B \mid \boldsymbol{C}).
\end{gathered}
\end{equation}
Furthermore, \textcolor{black}{we adopt the goal-oriented trajectory prediction method TNT, a representative marginal prediction method, to effectively produce future trajectories with multimodality.}
Towards each agent $i\in\{A,B\}$, the marginal distribution $p(\boldsymbol{Y}_i \mid \boldsymbol{C})$ can be decomposed based on future goal anchors, and then is marginalized over them:
\begin{equation} \label{eq:n3-2}
\begin{aligned}
p( \boldsymbol{Y}_i \mid \boldsymbol{C}) &= p(\boldsymbol{G}_{i}\mid \boldsymbol{C})p(\boldsymbol{Y}_i \mid \boldsymbol{G}_{i}, \boldsymbol{C}) \\
&= \sum_{\boldsymbol{g}_{i}^k \in \boldsymbol{G}_{i}}p(\boldsymbol{g}_{i}^k\mid \boldsymbol{C})p(\boldsymbol{Y}_i \mid \boldsymbol{g}_{i}^k, \boldsymbol{C}),
\end{aligned}
\end{equation}
where $\boldsymbol{G}_i = \left\{\boldsymbol{g}_{i}^1, \boldsymbol{g}_{i}^2, \cdots, \boldsymbol{g}_{i}^K \right\}$ represents the location space of plausible future goal candidates for agent $i$, which captures $K$ uncertainties by relying on the known road information $\boldsymbol{L}_i$.
\begin{assumption}\label{time Independence}
As for each agent $i$, the generation of future states is performed in an independent rollout manner \cite{TNT2020, Jean2019}.
\end{assumption}
Based on \textit{Assumption \ref{time Independence}}, the future distribution of each agent $i$ can be factorized across time steps, referring only to its own previous states:
\begin{equation} \label{eq:n3-3}
\begin{gathered}
p( \boldsymbol{Y}_i \mid \boldsymbol{g}_{i}^k, \boldsymbol{C}) = \prod_{\delta=t+1}^{\delta=t+T} p(\boldsymbol{y}_i^{\delta} \mid \boldsymbol{y}_i^{t : \delta-1}, \boldsymbol{g}_{i}^k, \boldsymbol{C}),
\end{gathered}
\end{equation}
where $\boldsymbol{y}_i^\delta$ describes the future state of agent $i$ at time step $\delta$.
\textcolor{black}{The analysis above shows that the goal-oriented trajectory prediction method ignores future interactions during the joint trajectory prediction process. To bridge the gap between marginal prediction and interactive behavior prediction, we propose a novel CGTP framework that \textcolor{black}{considers conditional modeling both in the goal predictor and in the goal-oriented trajectory predictor},
collaboratively outputting scene-compliant joint future trajectories.
In this way, we first approximate the joint distribution as the factorization over a marginal distribution and a conditional distribution:}
\begin{equation} \label{eq:n4-10}
\begin{gathered}
p( \boldsymbol{Y} \mid \boldsymbol{C}) = p( \boldsymbol{Y}_A \mid \boldsymbol{C}) p( \boldsymbol{Y}_B \mid \boldsymbol{Y}_A, \boldsymbol{C}).
\end{gathered}
\end{equation}
\textcolor{black}{Different from Eq.~(\ref{eq:n3-1}), the factorization in Eq.~(\ref{eq:n4-10}) considers that agent $A$'s future intents have a potential influence on agent $B$. On one hand, the realization of agent $A$'s future trajectory can be roughly regarded as a marginal modeling process via the goal-oriented trajectory prediction method. On the other hand, we make a further approximating assumption to implement the conditional trajectory prediction for agent $B$. }
\begin{figure*}[htbp]
\centering
\begin{center}
\scriptsize
\includegraphics*[width=7.1in]{./photo/GoINet_modified.pdf}
\caption{\textbf{The structure of the GoINet.} Towards each interacting agent, a unified graph-based representation is first formulated based on the scene information at its reference frame. Then, the hierarchical interactions are modeled from the individual, local and global three levels. Finally, we obtain the hierarchical features over the interacting agent and its fine-grained future goal candidates by means of a concatenation operator.}
\label{fig:GoINet}
\end{center}
\vspace{-1.3em}
\end{figure*}
\begin{assumption}\label{Conditional Dependence}
For interactive behavior prediction, the conditional distribution over agent $B$ can be largely determined by the agent $A$'s future goals instead of its overall future trajectories.
\end{assumption}
Based on the \textit{Assumption \ref{Conditional Dependence}} and the goal-oriented trajectory prediction method, the conditional distribution over agent $B$ is decomposed in a similar manner of agent $A$:
\begin{equation} \label{eq:n4-11}
\begin{gathered}
p(\boldsymbol{Y}_B \mid \boldsymbol{Y}_A, \boldsymbol{C}) = p( \boldsymbol{Y}_B \mid \boldsymbol{G}_A, \boldsymbol{C}) \\ = p(\boldsymbol{G}_{B} \mid \boldsymbol{G}_{A}, \boldsymbol{C})p(\boldsymbol{Y}_B \mid \boldsymbol{G}_{B}, \boldsymbol{G}_{A}, \boldsymbol{C}) \\
= \sum_{\small{\boldsymbol{g}_{A}^q, \boldsymbol{g}_{B}^k \in \boldsymbol{G}}}p(\boldsymbol{g}_{B}^k \mid \boldsymbol{g}_{A}^q, \boldsymbol{C})p(\boldsymbol{Y}_B \mid \boldsymbol{g}_{B}^k, \boldsymbol{g}_{A}^q, \boldsymbol{C}).
\end{gathered}
\end{equation}
\textcolor{black}{There exists an obvious difference in the goal prediction process between Eq.~(\ref{eq:n3-2}) and Eq.~(\ref{eq:n4-11}), indicating that the conditional modeling of agent $B$ tackles the pairwise interactive trajectory prediction problem through conditional modeling over future goal candidates, as described by $p(\boldsymbol{g}_{B}^k \mid \boldsymbol{g}_{A}^q, \boldsymbol{C})$.}
Here, in order to distinguish the indexes of the future goal candidates of the two interacting agents, we use $q$ to denote the index of the future goal candidates of agent $A$.
\textcolor{black}{Besides, our proposed CGTP framework conducts the goal-oriented trajectory prediction for each interacting agent via conditional modeling by referring to \cite{mfp2019}.} In the following, we take the interacting agent $A$ as an instance to describe the realization process of trajectory forecasting:
\begin{equation} \label{eq:n4-18}
\begin{aligned}
p( \boldsymbol{Y}_A \mid \boldsymbol{g}_{A}^q, \boldsymbol{C}) = \prod_{\delta=t+1}^{\delta=t+T} p(\boldsymbol{y}_A^{\delta} \mid \boldsymbol{y}^{t : \delta-1}, \boldsymbol{g}_{A}^q, \boldsymbol{C}) \\ =
\prod_{\delta=t+1}^{\delta=t+T} p(\boldsymbol{y}_A^{\delta} \mid \boldsymbol{y}_{A}^{t : \delta-1}, \boldsymbol{y}_{B}^{t : \delta-1}, \boldsymbol{g}_{A}^q, \boldsymbol{C}).
\end{aligned}
\end{equation}
As shown in Eq.~(\ref{eq:n4-18}), the conditional trajectory distribution of each interacting agent is explicitly dependent on its own future goal and implicitly dependent on the predicted future states of the other interacting agent, which can be considered a trajectory interactive prediction process in a step-wise rollout manner.
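A minimal PyTorch sketch of this step-wise rollout is given below; it keeps only the goal and the other agent's previous state as decoder inputs (the actual GTFNet also conditions on the structural and temporal encodings $\boldsymbol{s}_i^X$ and $\boldsymbol{u}_i^X$), and all layer sizes are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class RolloutDecoder(nn.Module):
    # Each agent's next state depends on its own goal and on the
    # other agent's previously predicted state (step-wise rollout).
    def __init__(self, d=128):
        super().__init__()
        self.cell = nn.GRUCell(input_size=4, hidden_size=d)
        self.head = nn.Linear(d, 2)    # hidden state -> (x, y)

    def forward(self, h_a, h_b, y_a, y_b, g_a, g_b, T=30):
        # h_*: (B, d) encoder hidden states; y_*, g_*: (B, 2).
        traj_a, traj_b = [], []
        for _ in range(T):
            h_a = self.cell(torch.cat([y_b, g_a], dim=-1), h_a)
            h_b = self.cell(torch.cat([y_a, g_b], dim=-1), h_b)
            y_a, y_b = self.head(h_a), self.head(h_b)
            traj_a.append(y_a)
            traj_b.append(y_b)
        return torch.stack(traj_a, 1), torch.stack(traj_b, 1)
\end{verbatim}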
\textcolor{black}{\section{Methodology}}\label{sec:ETCNet}
An overview of our CGTP framework is shown in Fig.~\ref{fig:CGITP_framwork}. In the following, we first present the GoINet which summarizes the structural interactive representations over fine-grained future goal candidates.
Then, we develop a CGPNet to conduct future interactions at the goal-based level, and to select the goal-pair candidates with future interactive intents. Further, a GTFNet is developed to generate goal-oriented trajectory-pairs in a step-wise rollout manner. Finally, we introduce the optimization process of our CGTP framework.
\subsection{GoINet-based Context Encoding}\label{sec:architecture}
The GoINet has three core steps: (1) establish a unified graph-based formulation for two typical types of vectorized representations, $i.e.$, agent history trajectories and future lanes; (2) leverage GNN, max-pooling and Transformer to construct hierarchical interactions at individual, local and global levels, respectively; (3) concatenate the features from three levels above to obtain the structural features over fine-grained future goal candidates, as shown in Fig.~\ref{fig:GoINet}.
\textbf{Graph-based Representation Formulation.} \textcolor{black}{Inspired by VectorNet \cite{VectorNet2020}, we first abstract the scene elements $\{X_i, L_i\}_{|i\in\{A,B\}}$ (including agent history trajectories and future lanes) as polylines $\mathcal{P}_i$. All of these polylines can be approximated as sequences of vectors: for future lanes, we uniformly sample key points from the polylines at the same spatial distance to approximately represent the fine-grained goals, and sequentially connect the neighboring key points into vectors; for agent history trajectories, we can just sample key points with a fixed temporal interval, and connect them into vectors. Each vector can be denoted by
\begin{equation}
\begin{gathered}
\boldsymbol{v}_{i}^{\mathcal{P}} = [\boldsymbol{d}_{start}, \boldsymbol{d}_{end}, j],
\end{gathered}
\end{equation}
where $\boldsymbol{d}_{start}$ and $\boldsymbol{d}_{end}$ are coordinates of the start and end point of the vector; $j$ is the integer index of polyline $\mathcal{P}_i^j$. Then, for each polyline, we build a local graph as $\boldsymbol{\mathcal{G}}_{i}^{\mathcal{P}}=(\boldsymbol{\mathcal{V}}_{i}^{\mathcal{P}}, \boldsymbol{\mathcal{E}}_{i}^\mathcal{P})$, where $\boldsymbol{\mathcal{V}}_{i}^{\mathcal{P}}$ denotes a set of nodes with vector features $\boldsymbol{v}_{i}^{\mathcal{P}}$ and $\boldsymbol{\mathcal{E}}_{i}^\mathcal{P}$ is a set of edges encoding pairwise relations between nodes.
}
\textbf{Modeling Hierarchical Interactions.}
To extract the temporal-spatial and semantic locality of nodes, we first deploy a general GNN approach to extract the individual features of vectors in each polyline. Towards each local graph $\boldsymbol{\mathcal{G}}_{i}^{\mathcal{P}}$, we formulate the learning scheme of every node representation $\boldsymbol{v}^{(l)} = (\boldsymbol{v}_{i}^{\mathcal{P}})^{(l)} \in \mathbb{R}^{2 l d_h }$ with max-pooling operator $f_{mp}(\cdot)$ and concatenation operator $f_{cc}(\cdot)$ in the $l$-th layer as
\begin{equation} \label{eq:n4-4}
\begin{gathered}
\boldsymbol{v}^{(l)} = f_{cc}\left( \left\{ h^{(l)}\left(\boldsymbol{v}^{(l-1)}\right), f_{mp} \left( \left\{ h^{(l)}\left(\boldsymbol{v'}^{(l-1)}\right) \right\} \right) \right\} \right), \\
\forall l \in [1, L],
\end{gathered}
\end{equation}
where $\boldsymbol{v'}$ denotes the remaining nodes in $\boldsymbol{\mathcal{V}}_{i}^{\mathcal{P}}$ except for $\boldsymbol{v}$; $d_h$ represents the initial dimension of the hidden units at the first layer of GNN. In addition, $h^{(l)}(\cdot)$ denotes a mapping function at the $l$-th layer to iteratively encode the individual node embedding, which shares weights across all nodes. The mapping function $h^{(l)}(\cdot)$ is realized by a single fully connected layer with Layer Normalization \cite{LayerNorm2016} and ReLU non-linearity. Specifically, we initialize $(\boldsymbol{v}_{i}^{\mathcal{P}})^{(0)} = \boldsymbol{v}_{i}^{\mathcal{P}}$.
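For illustration, one such graph layer can be sketched in PyTorch as follows, assuming each local graph is fully connected and contains at least two nodes; the class name and tensor shapes are our own assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class GoINetLayer(nn.Module):
    # One layer of the aggregation above: shared MLP h(.),
    # max-pooling over the other nodes, then concatenation,
    # so the output width doubles at every layer.
    def __init__(self, d_in):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(d_in, d_in),
                               nn.LayerNorm(d_in), nn.ReLU())

    def forward(self, v):              # v: (num_nodes, d_in)
        x = self.h(v)
        n = x.size(0)
        mask = ~torch.eye(n, dtype=torch.bool)
        pooled = torch.stack([x[mask[i]].max(dim=0).values
                              for i in range(n)])
        return torch.cat([x, pooled], dim=-1)  # (num_nodes, 2*d_in)
\end{verbatim}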
After $L$ layers of aggregation, we obtain the individual feature of nodes in each local graph. Second, the local-level representation over each entire polyline can be summarized by
\begin{equation} \label{eq:n4-5}
\begin{gathered}
\boldsymbol{h}_{i}^{\mathcal{P}} = f_{mp} \left( \left\{ \left(\boldsymbol{v}_{i}^{\mathcal{P}}\right)^{(L)} \mid \forall \boldsymbol{v}_{i}^{\mathcal{P}} \in \boldsymbol{\mathcal{V}}_{i}^{\mathcal{P}} \right\}\right),
\end{gathered}
\end{equation}
which models interactions among all nodes' individual representations in each local graph via max-pooling operator. More formally, we stack these local features into a matrix as $\boldsymbol{H}_i \in \mathbb{R}^{(O+P) \times d_H}$, where $d_H = 2Ld_h$.
\textcolor{black}{Finally, a Transformer layer is employed to draw global dependencies between the local-level features over agent trajectories and future lanes.} The output of the self-attention computation for each interacting agent $i$ is given by
\begin{equation} \label{eq:n4-6}
\begin{gathered}
\boldsymbol{Att}_i = {\rm{softmax}} \left( \frac{\boldsymbol{Q}_i \left(\boldsymbol{K}_i\right) ^ \mathrm{T}}{\sqrt{d_k}} \right) \boldsymbol{V}_i,
\end{gathered}
\end{equation}
where each row of the matrix $\boldsymbol{Att}_i$ is the global feature of a specific polyline, i.e., $\boldsymbol{att}_{i}^{\mathcal{P}}$, and $d_k=d_H$.
\textcolor{black}{In Eq.~(\ref{eq:n4-6}), the set of queries $\boldsymbol{Q}_i$, keys $\boldsymbol{K}_i$ and values $\boldsymbol{V}_i$ are obtained by making linear projections to the local representation matrix $\boldsymbol{H}_i$.}
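The computation in Eq.~(\ref{eq:n4-6}) amounts to single-head scaled dot-product attention over the stacked local features; a minimal sketch follows, with the projection matrices taken as assumed inputs.
\begin{verbatim}
import torch

def polyline_self_attention(H, Wq, Wk, Wv):
    # H: (O+P, d_H) stacked local polyline features of one agent;
    # Wq, Wk, Wv: (d_H, d_H) learned projection matrices.
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.transpose(-2, -1) / (K.size(-1) ** 0.5)
    return torch.softmax(scores, dim=-1) @ V  # rows: global features
\end{verbatim}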
\textbf{Obtaining Structural Representations.} \textcolor{black}{As shown in the right part of Fig.~\ref{fig:CGITP_framwork}, the hierarchical encoding information $\boldsymbol{s}_i^X$ for each interacting agent $i$ is a concatenation of the local and global representations of its observed history $\boldsymbol{X}_i^0$. Since it is necessary to consider the node features of future lanes in the representations of future goals, we combine future lane features from the individual, local and global views to encode the fine-grained structural representations $\{\boldsymbol{s}_{i}^{g,k}, k \in [1, K]\}$ of the goal candidates $\{\boldsymbol{g}_{i}^{k}, k \in [1, K]\}$.}
In the end, we take as input the structural interactive features above to the following modules.
\subsection{CGPNet-based Goal Interactive Prediction}\label{sec:definition}
In this section, we focus on introducing an implementation paradigm for estimating the conditional probability distribution over the future goal candidates for agent $B$, $i.e.$ $p\left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right)$, which takes as conditional queries the potential future goals from agent $A$. As illustrated in TNT \cite{TNT2020}, the determination of future goals relies on discrete future goal candidates $\boldsymbol{g}_{B}^{k}$ and their corresponding continuous offsets $\boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}$ to the real endpoint $\boldsymbol{y}_{B}^{t+T}$. Hence, the conditional probability distribution over future goal candidates can be factorized into these two influential elements above:
\begin{equation} \label{eq:n4-12}
\begin{aligned}
p\left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right) &= \pi \left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right) \\ &\cdot \mathcal{N}\left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k} \mid \boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right), \boldsymbol{\Sigma} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right) \right),
\end{aligned}
\end{equation}
where $\pi \left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right)$ describes the uncertainty across a candidate set of agent $B$'s future goals using a softmax distribution:
\begin{equation} \label{eq:n4-13}
\begin{gathered}
\pi \left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right) = \frac{ {\rm exp} \ f_B^{seg}\left( \boldsymbol{s}_{B}^{g,k}, \boldsymbol{g}_{B}^{k}, \phi_{B} \left(\boldsymbol{g}_{A}^{q}\right) \right)}{\sum \limits_{ \small{\boldsymbol{g}_{B}^{ k'} }} {\rm exp} \ f_B^{seg}\left( \boldsymbol{s}_{B}^{g,k'}, \boldsymbol{g}_{B}^{k'}, \phi_{B} \left(\boldsymbol{g}_{A}^{q} \right)\right)}.
\end{gathered}
\end{equation}
This conditional probability distribution is learned by a segmentation task.
Subsequently, we obtain the corresponding offset from a generalized Gaussian distribution $\mathcal{N}\left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^k \mid \boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^k\right), \boldsymbol{\Sigma}\left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^k\right)\right)$, where the mean $\boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^k\right)$ is given by
\begin{equation} \label{eq:n4-14}
\begin{gathered}
\boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right) = f_B^{reg}\left( \boldsymbol{s}_{B}^{g,k}, \boldsymbol{g}_{B}^k, \phi_B \left(\boldsymbol{g}_{A}^q\right) \right)
\end{gathered}
\end{equation}
which is modeled by a regression task. The variance $\boldsymbol{\Sigma} \left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right)$ is assumed to be an identity matrix in this paper. Both $f_B^{seg}(\cdot)$ and $f_B^{reg}(\cdot)$ are implemented by three-layer multilayer perceptrons (MLPs) to predict the conditional distribution and offsets over future goal candidates. Concretely, the input of these two mapping functions is derived from two sources. One class of input is related to each future goal candidate $\boldsymbol{g}_{B}^k$ and its corresponding structural representation $\boldsymbol{s}_{B}^{g,k}$ extracted from the GoINet. For the other class of input, an agent-$B$-centric transformation function $\phi_{B} (\cdot)$ is deployed to express the potential future goals $\boldsymbol{g}_{A}^q$ of agent $A$ in agent $B$'s reference frame; these transformed goals are what we refer to as conditional queries.
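A minimal sketch of these two heads follows; the hidden width of 128 matches the implementation details given later, while the input dimensions, goal dimensionality and class name are illustrative assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class GoalHeads(nn.Module):
    """Three-layer MLP heads for the goal score f^seg and offset f^reg."""
    def __init__(self, d_s, d_goal=2, hidden=128):
        super().__init__()
        d_in = d_s + 2 * d_goal  # s_B^{g,k} + g_B^k + phi_B(g_A^q)
        def mlp(d_out):
            return nn.Sequential(
                nn.Linear(d_in, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, d_out))
        self.f_seg = mlp(1)       # unnormalized score per candidate
        self.f_reg = mlp(d_goal)  # offset mean mu(delta g_B^k)

    def forward(self, s, g, query):
        # s: (K, d_s), g: (K, d_goal), query: (K, d_goal) broadcast copies
        x = torch.cat([s, g, query], dim=-1)
        pi = torch.softmax(self.f_seg(x).squeeze(-1), dim=0)  # softmax scores
        mu = self.f_reg(x)                                    # offset means
        return pi, mu
\end{verbatim}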
\textcolor{black}{Before obtaining the conditional probability distribution via Eq.~(\ref{eq:n4-12}), we must also estimate the marginal probability distribution over the future goal candidates for agent $A$, denoted $p\left( \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right)$, by turning off inputs from the conditional query in the model. Thus, the marginal probability distribution is described by the simplified expressions $\pi \left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C} \right)$ and $\boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{A}^{q} \mid \emptyset \right)$, whose realizations follow Eq.~(\ref{eq:n4-13}) and Eq.~(\ref{eq:n4-14}) with the conditional query removed.}
Given the marginal and conditional probability distributions of the goal interactive prediction process, we can now compute the joint probability distribution $p\left(\boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right)$. For simplicity, suppose that this joint probability distribution can be approximated by $\pi \left( \boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right)$, described as
\begin{equation} \label{eq:n4-17}
\begin{gathered}
p\left( \boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right) = \pi\left( \boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right) \\
= \pi\left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C}\right) \pi\left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}\right).
\end{gathered}
\end{equation}
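In code, this factorization is just a broadcasted product of the marginal scores and the conditional score matrix; a minimal sketch (shapes and names are our assumptions):
\begin{verbatim}
import torch

def joint_goal_distribution(pi_A, pi_B_given_A):
    """Joint scores over goal-pairs from the factorization above.

    pi_A         : (K,) marginal scores pi(g_A^q | empty, C)
    pi_B_given_A : (K, K) conditional scores; row q conditions on g_A^q
    returns      : (K, K) joint scores pi(g_B^k, g_A^q | C)
    """
    return pi_A.unsqueeze(1) * pi_B_given_A  # broadcasted product
\end{verbatim}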
\subsection{GTFNet-based Trajectory Interactive Prediction}\label{sec:trajectory}
After obtaining a candidate set of future goal-pairs, we then build a trajectory interactive prediction module to output joint future trajectories in a synchronized rollout manner.
Different from Eq.~(\ref{eq:n3-3}), this module predicts the joint states of the two interacting agents at time step $\delta + 1$ by taking into account each other's predicted states at time step $\delta$.
Given the determined future goal, we take agent $A$ as an instance to introduce the implementation paradigm of the unimodal conditional distribution over the future trajectory, as described by Eq.~(\ref{eq:n4-18}). We design a GRU-based encoder-decoder neural network to generate the future trajectory, which shares its trainable parameters across the two interacting agents. In detail, both the encoder and decoder use a GRU mapping $f_{gru}(\cdot)$ to recursively update the hidden unit along the temporal axis. The input representations of the GRU mapping are defined differently according to the encoding and decoding needs. On one hand, the encoding GRU captures temporal relationships by taking as input the observed history $\boldsymbol{X}_i^0$. On the other hand, the decoding GRU updates the hidden state, and a trajectory predictor $f_{traj}(\cdot)$, implemented by a 1-layer MLP, then predicts the future location at the current time step, which is transformed via $\phi_{B}(\cdot)$ and subsequently fed as input to the prediction process of agent $B$ at the next future time step. Meanwhile, the concatenation of a goal candidate $\boldsymbol{g}_{A}^{q}$ and the hierarchical interactive representation $\boldsymbol{s}_A^X$ also serves as input to determine the future intent at which agent $A$ will arrive.
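The rollout logic for one agent can be sketched as follows; this is a simplified single-agent view (in the full model the two agents' decoders run synchronously and exchange transformed positions at every step), and all names and dimensions are illustrative assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class TrajectoryRollout(nn.Module):
    """GRU encoder-decoder rollout for one agent (parameters shared)."""
    def __init__(self, d_in, d_ctx, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(d_in, hidden, batch_first=True)
        # decoder input: other agent's previous position + goal + s^X
        self.decoder_cell = nn.GRUCell(2 + 2 + d_ctx, hidden)
        self.f_traj = nn.Linear(hidden, 2)  # 1-layer MLP predictor

    def forward(self, history, other_prev, goal, s_x, T):
        _, h = self.encoder(history)  # encode observed history X_i^0
        h = h[-1]                     # (B, hidden) last-layer state
        preds = []
        for _ in range(T):
            x = torch.cat([other_prev, goal, s_x], dim=-1)
            h = self.decoder_cell(x, h)
            pos = self.f_traj(h)      # location at the current step
            preds.append(pos)
            # in the full model this would be the *other* agent's
            # output, transformed into this agent's frame via phi
            other_prev = pos
        return torch.stack(preds, dim=1)  # (B, T, 2)
\end{verbatim}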
\subsection{Optimization Design for CGTP Framework}\label{sec:algorithm}
The proposed CGTP framework is trained via supervised learning in an end-to-end way.
The total learning loss function contains goal prediction loss and trajectory prediction loss, defined as:
\begin{equation} \label{eq:n4-19}
\begin{gathered}
L^{total} = L^{g} + L^{traj}.
\end{gathered}
\end{equation}
In the following, we describe the training strategy for the two components above in terms of the individual interacting agents and the joint modes. Detailed training pseudocode is provided in Algorithm~\ref{alg:algrithm1}.
\textbf{Training on Goal Interactive Prediction.} To effectively model the joint distribution of future goal candidates, our goal prediction loss $L^{g}$ supervises the joint combination of the future goal candidate sets of the two interacting agents, in addition to each single agent's future goal candidate set. Thus, the goal prediction loss is decomposed into three parts, in sequential order, according to their single forms and joint form:
\begin{equation} \label{eq:n4-20}
\begin{gathered}
L^{g} =L_A^{g} + L_B^{g} + L_{Joint}^{g}.
\end{gathered}
\end{equation}
First, we introduce the marginal goal prediction loss $L_A^g$ of agent $A$. On one hand, the binary cross entropy loss $L_{BCE}(\cdot)$ is used to learn the marginal probability distribution $\pi \left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}\right)$.
On the other hand, the mean square error loss $L_{MSE}(\cdot)$ is employed to learn the offset mean $ \boldsymbol{\mu} \left( \boldsymbol{\Delta} \boldsymbol{g}_{A}^{q} \mid \emptyset \right)$ instead.
Hence, the marginal goal prediction loss $L_A^g$ is represented by
\begin{equation} \label{eq:n4-21}
\begin{gathered}
L_A^{g} = \frac{1}{K}\sum_{q=1}^{K} \left[ L_{BCE}\left( \pi\left(\boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}\right), \mathbb{1}\left(q \in \boldsymbol{K}_A\right)\right)
\right.
\\
\left.
+ L_{MSE}\left( \boldsymbol{\mu}\left(\boldsymbol{ \Delta} \boldsymbol{g}_{A}^{q} \mid \emptyset \right), \mathbb{1}\left(q \in \boldsymbol{K}_A\right) \boldsymbol{\Delta} \boldsymbol{g}_{A}^{q}\right)\right],
\end{gathered}
\end{equation}
where $\mathbb{1}(\cdot)$ is an indicator function, and $\boldsymbol{K}_A$ represents the index set of the top $\mathcal{K}$ goal candidates closest to the real endpoint $\boldsymbol{y}_A^{t+T}$ of agent $A$.
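A minimal sketch of this marginal loss follows; the offset dimensionality is an assumption, and the probabilities are assumed to come from the softmax of Eq.~(\ref{eq:n4-13}):
\begin{verbatim}
import torch
import torch.nn.functional as F

def marginal_goal_loss(pi_A, mu_A, offsets_A, topk_idx):
    """Marginal goal loss L_A^g sketched from the equation above.

    pi_A      : (K,) predicted marginal probabilities (softmax output)
    mu_A      : (K, 2) predicted offset means
    offsets_A : (K, 2) true offsets delta g_A^q to the real endpoint
    topk_idx  : index set K_A of the candidates closest to the endpoint
    """
    target = torch.zeros_like(pi_A)
    target[topk_idx] = 1.0  # indicator 1(q in K_A)
    bce = F.binary_cross_entropy(pi_A, target)
    mse = F.mse_loss(mu_A, target.unsqueeze(-1) * offsets_A)
    return bce + mse
\end{verbatim}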
Subsequently, once the top $\mathcal{K}$ future modes of agent $A$ are determined, as estimated by $\pi \left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}\right)$, we optimize the conditional goal prediction loss over them for the other interacting agent $B$:
\begin{equation} \label{eq:n4-22}
\begin{gathered}
L_B^{g} =
\frac{1}{\mathcal{K} \cdot K}\sum_{q=1}^{\mathcal{K}}\sum_{k=1}^{K} \left[ L_{BCE}\left( \pi\left(\boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C} \right), \mathbb{1}\left(k \in \boldsymbol{K}_B\right)\right)
\right.
\\
\left.
+ L_{MSE}\left( \boldsymbol{\mu}\left(\boldsymbol{ \Delta} \boldsymbol{g}_{B}^{k}\right), \mathbb{1}\left(k \in \boldsymbol{K}_B\right) \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}\right)\right].
\end{gathered}
\end{equation}
\noindent Furthermore, to enable a smooth training process, we employ a teacher forcing technique \cite{teacherforcing1989} by using the real endpoint of agent $A$ as the conditional query. Similarly, we obtain the top $\mathcal{K}$ potential goal candidates of agent $B$ at each conditional mode $q$, yielding $\mathcal{K}^2$ goal-pairs that reflect different future interactive intents.
Finally, we design a novel goal interactive loss to accurately learn the joint probability distribution among the two classes of selected goal candidate sets:
\begin{equation} \label{eq:n4-23}
\begin{gathered}
L_{Joint}^{g} = \frac{1}{\mathcal{K}^2}\sum_{q=1}^{\mathcal{K}}\sum_{k=1}^{\mathcal{K}} L_{BCE}\left( \pi\left(\boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}\right), \mathbb{1}\left(\kappa = k^{J}\right)\right), \\
\kappa = \mathcal{K}(q-1)+k,
\end{gathered}
\end{equation}
\noindent where $\kappa$ represents the index of the selected goal-pairs, which later also denotes the index of joint modes for the goal-oriented trajectory-pairs. Different from Eq.~(\ref{eq:n4-21}) and Eq.~(\ref{eq:n4-22}), $k^{J}$ is the index of the specific case where both agents' future goal candidates most closely match their corresponding ground truth endpoints.
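A minimal sketch of this interactive loss, including the flattened index $\kappa$ (0-based below, while the text uses 1-based indices), is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def joint_goal_loss(pi_joint, k_J):
    """Goal interactive loss over the selected goal-pairs.

    pi_joint : (K_sel, K_sel) joint probabilities; row q is agent A's
               mode, column k is agent B's mode
    k_J      : flat index kappa of the pair whose goals best match
               both ground-truth endpoints (0-based here)
    """
    flat = pi_joint.reshape(-1)   # enumerate goal-pairs by kappa
    target = torch.zeros_like(flat)
    target[k_J] = 1.0             # indicator 1(kappa = k_J)
    return F.binary_cross_entropy(flat, target)
\end{verbatim}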
\textbf{Training on Trajectory Interactive Prediction.} After attaining $\mathcal{K}^2$ diverse combinations of future intents, we accordingly obtain their $\mathcal{K}^2$ goal-oriented trajectory-pairs. Let $\hat{\boldsymbol{Y}}_{i}^{\kappa} = \left\{ \hat{\boldsymbol{y}}_{i}^{\kappa,t+1}, \hat{\boldsymbol{y}}_{i}^{\kappa,t+2}, \cdots, \hat{\boldsymbol{y}}_{i}^{\kappa,t+T} \right\}$ represent the goal-oriented trajectory prediction of interacting agent $i$ at joint mode $\kappa$. We also adopt the mean square error loss to minimize the Euclidean distance between the most likely predicted joint states and the ground truth at each future time step:
\begin{equation}
\label{eq:n4-24}
\begin{gathered}
L^{traj} = \frac{1}{2\mathcal{K}^2 \cdot T}\sum_{\kappa=1}^{\mathcal{K}^2}\sum_{\delta=t+1}^{t+T} \sum_{i} L_{MSE}\left( \hat{\boldsymbol{y}}_{i}^{\kappa,\delta}, \mathbb{1}\left(\kappa = k^{J}\right) \boldsymbol{y}_i^{\delta}\right).
\end{gathered}
\end{equation}
\noindent
Moreover, the teacher forcing approach is also utilized during the trajectory predictive rollouts, by feeding one agent's ground truth observation at time step $\delta$, which serves as a conditional interaction, to the other agent at time step $\delta+1$. In addition, during training, we also use the real endpoint of each interacting agent as the goal anchor to guide the prediction of the goal-oriented future trajectory.
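The masked trajectory loss can be sketched as follows; the shapes are our assumptions, and the indicator mask reproduces Eq.~(\ref{eq:n4-24}) above, which supervises only the mode matching both endpoints:
\begin{verbatim}
import torch
import torch.nn.functional as F

def trajectory_loss(pred, gt, k_J):
    """Trajectory loss over K^2 joint modes, two agents, T steps.

    pred : (K2, 2, T, 2) predicted trajectory-pairs
    gt   : (2, T, 2) ground-truth future trajectory-pair
    k_J  : joint mode matching both ground-truth endpoints
    """
    mask = torch.zeros(pred.size(0), 1, 1, 1)
    mask[k_J] = 1.0  # indicator 1(kappa = k_J)
    return F.mse_loss(pred, mask * gt.unsqueeze(0))
\end{verbatim}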
At inference time, we replace the real endpoint of agent $A$ with its predicted future goals, which serve as conditional queries, to estimate the conditional distribution over the future goal candidates for agent $B$. In addition, during the trajectory interactive prediction process, each interacting agent generates trajectories by considering its predicted future goals as anchors instead of its real endpoint. Likewise, the future ground truth is substituted by the predicted states to guide the future interactive rollouts between the two interacting agents in a step-wise manner.
\begin{algorithm*}[t]
\caption{{Optimization for CGTP Framework}}
\label{alg:algrithm1}
\LinesNumbered
\textbf{{Input:}}
scene information $\boldsymbol{C}=\left(\boldsymbol{C}_A, \boldsymbol{C}_B \right)$ and future joint states $\boldsymbol{Y}=\left(\boldsymbol{Y}_A, \boldsymbol{Y}_B \right)$
\textbf{{Initialize:}} Randomly initialize GoINet's parameter $\theta_{GoINet}$, the CGPNet's all parameters $\theta_{A}^{seg}$, $\theta_{A}^{reg}$, $\theta_{B}^{seg}$, $\theta_{B}^{reg}$ and the GTFNet's all parameters $\theta^{Enc}_{gru}$, $\theta^{Dec}_{gru}$, and $\theta_{traj}$
\While{ \rm{not \ converged}}
{
\For{agent $i$ in \{A, B\}}
{$\boldsymbol{s}_i^X \gets$ encode the structural representation over history states via GoINet
$\boldsymbol{s}_i^{g, n} \gets$ encode the structural representations over future goal candidates via GoINet
$\boldsymbol{u}_i^X \gets f_{gru}^{Enc}(\boldsymbol{X}_i^0 \mid \theta^{Enc}_{gru}) $: encode the temporal representation over history states via GRU-based encoder}
\textbf{Training on the marginal prediction over future goal candidates for agent $A$}
$ \pi \left( \boldsymbol{g}_{A}^{q} \mid \emptyset, \boldsymbol{C}; \theta_A^{seg}\right) \gets$ estimate the marginal probability distribution
$\boldsymbol{\mu}\left( \boldsymbol{\Delta} \boldsymbol{g}_{A}^{q} \mid \emptyset; \theta_A^{reg}\right) \gets$ predict the offset mean
Update $\theta_{GoINet}$, $\theta_{A}^{seg}$ and $\theta_{A}^{reg}$ by minimizing $L_A^g$
\For{ conditional queries $q = 1 \ to \ \mathcal{K} $ }
{
\textbf{Training on the conditional prediction over future goal candidates for agent $B$ at per query $q$}
$ \pi \left( \boldsymbol{g}_{B}^{k} \mid \boldsymbol{g}_{A}^{q}, \boldsymbol{C}; \theta_B^{seg}\right) \gets$ estimate the conditional probability distribution
$\boldsymbol{\mu}\left( \boldsymbol{\Delta} \boldsymbol{g}_{B}^{k}; \theta_B^{reg}\right) \gets$ predict the offset mean
\textbf{Training on the joint prediction over $\mathcal{K}$ goal-pair candidates at per query $q$}
$\pi \left( \boldsymbol{g}_{B}^{k}, \boldsymbol{g}_{A}^{q} \mid \boldsymbol{C}; \theta_A^{seg}, \theta_B^{seg}\right) \gets$ estimate the joint probability distribution
\For{ conditional intents $k= 1 \ to \ \mathcal{K} $ }
{
$\kappa = \mathcal{K}(q-1) + k \gets$ calculate the prediction mode
\textbf{Training on the joint prediction over goal-oriented future trajectories at per mode $\kappa$ }
\For{ $Timesteps \ \delta= t+1 \ to \ t+T $ }
{
$\boldsymbol{r}_{A}^{\kappa,\delta} \gets f^{Dec}_{gru}\left( \phi_A \left( \hat{\boldsymbol{y}}_{B}^{\kappa,\delta-1}\right), \boldsymbol{g}_{A}^{q}, \boldsymbol{s}_A^X, \boldsymbol{u}_A^X \mid \theta^{Dec}_{gru}\right)$
$\boldsymbol{r}_{B}^{\kappa,\delta} \gets f^{Dec}_{gru}\left( \phi_B \left( \hat{\boldsymbol{y}}_{A}^{\kappa,\delta-1}\right), \boldsymbol{g}_{B}^{k}, \boldsymbol{s}_B^X, \boldsymbol{u}_B^X \mid \theta^{Dec}_{gru}\right)$
$\hat{\boldsymbol{y}}_{A}^{\kappa,\delta}, \hat{\boldsymbol{y}}_{B}^{\kappa,\delta} \gets f_{traj}\left(\boldsymbol{r}_{A}^{\kappa,\delta}, \boldsymbol{r}_{B}^{\kappa,\delta} \mid \theta_{traj} \right)$
}
}
}
Update $\theta_{GoINet}$, $\theta_B^{seg}$ and $\theta_B^{reg}$ by minimizing $L_B^g$
Update $\theta_{GoINet}$, $\theta_A^{seg}$ and $\theta_B^{seg}$ by minimizing $L_{Joint}^g$
Update $\theta_{GoINet}$, $\theta^{Enc}_{gru}$, $\theta^{Dec}_{gru}$ and $\theta_{traj}$ by minimizing $L^{traj}$
}
The $\mathcal{K}^2$ multimodal future trajectories $\left\{ \hat{\boldsymbol{Y}}_{i}^1, \hat{\boldsymbol{Y}}_{i}^2, \cdots, \hat{\boldsymbol{Y}}_{i}^{\mathcal{K}^2}\right\}$ are obtained for each interacting agent $i$
\textbf{return} All trainable parameters of GoINet, CGPNet and GTFNet
\end{algorithm*}
\section {Experiment}\label{sec:Experiment}
In this section, we first introduce the experimental settings, including datasets, metrics and implementation details. Subsequently, we compare our CGTP framework against existing trajectory prediction methods.
In addition, ablation studies are conducted to validate the effectiveness of the key designs of our novel approach. Finally, qualitative analyses are performed regarding multimodality and future interactivity.
\subsection{Experimental Settings}
\emph{(1) Datasets:} \textcolor{black}{We evaluate our CGTP framework on three large-scale complex driving datasets: Argoverse motion forecasting dataset \cite{Argoverse2019}, In-house cut-in dataset and Waymo open motion dataset \cite{WaymoDataset2021}.}
\noindent\textbf{Argoverse motion forecasting dataset} is a widely-used trajectory prediction dataset recorded in over 30K traffic scenarios from Pittsburgh and Miami. These scenarios produce a series of frames sampled at 10Hz, which are further split into training and validation sets with 205942 and 39472 frames, respectively. Different from prior literature, our work focuses on joint trajectory prediction for two agents. Thus, given the positions of all agents in each frame within the past 2 seconds of observation, we consider the two interacting agents, with types 'agent' and 'av', as agent $A$ and agent $B$, respectively, whose 3-second future trajectories need to be evaluated. Besides, this dataset provides a friendly interface to conveniently retrieve lane segments and their connection relationships for each frame. However, one limitation of this dataset is that scenarios in which the two agents interact with each other in the future are rare. \textcolor{black}{To overcome this issue, the following interactive datasets are taken into account.}
\noindent\textbf{In-house cut-in dataset} is a Baidu internal dataset supporting the trajectory interactive prediction task with two interacting agents. This large-scale dataset was collected in specific cut-in scenarios from Beijing, China, and is divided into two branches in terms of junction and non-junction environments. On one hand, for the junction environment, there are 180201 interactive frames in total extracted from over 11K unique cut-in scenarios, which are then split into 162381 frames for training and 17820 frames for validation. On the other hand, we provide 193401 interactive frames recorded in more than 12K non-junction cut-in scenarios, where the training and validation sets contain 162556 and 30845 frames, respectively.
Further, in this paper, the interactive pair consists of agents $A$ and $B$, and we choose agent $A$ as the query agent that influences the cut-in reactions of agent $B$. Given 2 seconds of observed history, our objective is to predict 3 seconds of joint future trajectories for the two interacting agents in each cut-in frame. The agent trajectories are sampled at 10Hz and the road topology is provided in the form of centerlines and lane boundaries.
\noindent\textcolor{black}{\textbf{Waymo open motion dataset} (WOMD) is, to the best of our knowledge, by far the most diverse interactive motion dataset. It contains more than 570 hours of unique data over 1750 km of roadways. Since WOMD provides specific labels for interacting vehicles, 158810 and 33686 interactive frames can be extracted from the training and validation datasets, respectively. Further, we leverage the relation predictor in M2I \cite{sun2022m2i} to provide the influencer-reactor relationships in each interactive pair, with the influencer and reactor designated as agents $A$ and $B$, respectively. Given 1.1 seconds of agent states sampled at 10 Hz, we focus on the interactive prediction task of predicting the joint future positions of the two interacting agents for the next 8 seconds. In addition to history trajectories, the map features, represented by lane polylines, are also included in the prior observations of each frame.}
\emph{(2) Metrics:} Following the evaluation settings of \cite{WaymoDataset2021} and \cite{Argoverse2019}, we use minimum Average Displacement Error (minADE), minimum Final Displacement Error (minFDE) and Miss Rate (MR) to measure the distance error of joint trajectory predictions, and these metrics are applied to all three datasets. For WOMD and the In-house dataset, we also report the Overlap Rate (OR) to measure the collision frequency between the two interacting agents. \textcolor{black}{Besides, mean Average Precision (mAP) is considered for WOMD to measure the quality of the confidence scores of joint predictions; it is the official ranking metric used by the WOMD benchmark.} On the other hand, due to the unique cut-in attributes recorded in the In-house dataset, we design the Cut-in Rate (CR) metric to identify whether agent $B$ is able to merge early into the lane that agent $A$ keeps, on the condition that no collision occurs. In the following, we give detailed expressions for these metrics.
\noindent\textbf{minADE}. The minimum Average Displacement Error is defined as the $\ell_2$ distance between the ground truth and the closest predicted trajectory-pair:
\begin{equation} \label{eq:n5-1}
\begin{gathered}
minADE=\frac{1}{2T} \min\limits_{\kappa}\sum_{i}\sum_{\delta=t+1}^{t+T} \Vert \hat{\boldsymbol{y}}_{i}^{\kappa,\delta}-\boldsymbol{y}_i^{\delta} \Vert _ 2.
\end{gathered}
\end{equation}
\noindent\textbf{minFDE}. The minimum Final Displacement Error is obtained by evaluating the minADE only at the last future time step:
\begin{equation} \label{eq:n5-2}
\begin{gathered}
minFDE=\frac{1}{2} \min\limits_{\kappa}\sum_{i} \Vert \hat{\boldsymbol{y}}_{i}^{\kappa, t+T}-\boldsymbol{y}_i^{t+T} \Vert _ 2.
\end{gathered}
\end{equation}
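For concreteness, both metrics can be computed in a few lines; the array shapes below are our own conventions:
\begin{verbatim}
import numpy as np

def min_ade_fde(pred, gt):
    """minADE and minFDE over K^2 joint modes.

    pred : (K2, 2, T, 2) predicted trajectory-pairs
    gt   : (2, T, 2) ground-truth trajectory-pair
    """
    dist = np.linalg.norm(pred - gt[None], axis=-1)  # (K2, 2, T) l2 errors
    ade = dist.mean(axis=(1, 2))       # average over agents and steps
    fde = dist[:, :, -1].mean(axis=1)  # last future step only
    return ade.min(), fde.min()
\end{verbatim}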
\noindent\textbf{MR}. The Miss Rate is obtained by calculating an indicator function $IsMiss(\cdot)$ for each frame in turn and then averaging over the whole dataset. \textcolor{black}{For a specific frame, a miss is assigned if none of the joint predictions are within the given threshold(s) of the ground truth:}
\textcolor{black}{\begin{equation} \label{eq:n5-3}
\begin{gathered}
IsMiss(\cdot) = {\rm{min}}_{\kappa} \vee_{i} \mathbb{1}\left(Dist_{i}^{\kappa} > Dist_{thre} \right),
\end{gathered}
\end{equation}}
\textcolor{black}{where $Dist_{i}^{\kappa}$ and $Dist_{thre}$ are defined differently for the different datasets. For the Argoverse and In-house datasets, $Dist_i^{\kappa}$ is the final displacement error between the ground truth and the future trajectory of agent $i$ at joint mode $\kappa$, and $Dist_{thre}$ is a single distance threshold set to 2 meters. Differently, WOMD adopts separate criteria for lateral and longitudinal deviations depending on the initial velocity of the predicted agents. In this case, $Dist_{i}^{\kappa}$ and $Dist_{thre}$ are trajectory displacement errors and thresholds in both the lateral and longitudinal directions.}
\begin{table*}[htbp]
\centering
\caption{Comparison with marginal prediction methods and ablations on Argoverse motion forecasting dataset}
\begin{tabular}{c|c|ccc}
\toprule
\textbf{Joint Prediction} & \textbf{Methods} & \textbf{minADE $\downarrow$} & \textbf{minFDE $\downarrow$} & \textbf{MR $\downarrow$} \\
\midrule
\multicolumn{1}{c|}{\multirow{4}[2]{*}{\makecell[c]{Marginal \\ Predictions}}} & LSTM-ED \cite{Argoverse2019} & 1.2221 & 2.7970 & 0.6868 \\
& VectorNet (noGNN) \cite{VectorNet2020} & 1.1327 & 2.6005 & 0.6522 \\
& VectorNet \cite{VectorNet2020} & 1.0959 & 2.3197 & 0.5592 \\
& TNT \cite{TNT2020} & 1.1320 & 2.5341 & 0.5664 \\
\midrule
\multicolumn{1}{c|}{\multirow{2}[2]{*}{\makecell[c]{Ablations}}} & CGTP (wo interactive loss) & 1.0063 & 2.2847 & 0.4900 \\
& CGTP (w interactive loss) & \textbf{0.7533} & \textbf{1.6140} & \textbf{0.3369} \\
\bottomrule
\end{tabular}%
\label{tab:Argoverse}%
\end{table*}%
\begin{table*}[htbp]
\centering
\caption{Comparison with marginal prediction methods and ablations on In-house cut-in dataset}
\begin{tabular}{c|c|c|ccccc}
\toprule
\textbf{Scenarios} & \textbf{Joint Prediction} & \textbf{Methods} & \textbf{minADE $\downarrow$} & \textbf{minFDE $\downarrow$} & \textbf{MR $\downarrow$} & \textbf{OR $\downarrow$} & \textbf{CR $\uparrow$}\\
\midrule
\multicolumn{1}{c|}{\multirow{6}[4]{*}{Junction}} & \multicolumn{1}{c|}{\multirow{4}[2]{*}{\makecell[c]{Marginal \\ Predictions}}} & LSTM-ED \cite{Argoverse2019} & 0.9089 & 2.2090 & 0.6581 & \textbf{0.008817 } & 0.7047 \\
& & VectorNet (noGNN) \cite{VectorNet2020} & 0.8560 & 1.9331 & 0.5753 & 0.012480 & 0.7253 \\
& & VectorNet \cite{VectorNet2020} & 0.7130 & 1.7041 & 0.4538 & 0.011643 & 0.7368 \\
& & TNT \cite{TNT2020} & 0.8180 & 1.8980 & 0.4625 & 0.012556 & 0.7161 \\
\cmidrule{2-8} & \multicolumn{1}{c|}{\multirow{2}[2]{*}{\makecell[c]{Ablations}}} & CGTP (wo interactive loss) & 0.7022 & 1.6974 & 0.4308 & 0.011217 & 0.7575 \\
& & CGTP (w interactive loss) & \textbf{0.6454} & \textbf{1.5481} & \textbf{0.3736} & 0.010379 & \textbf{0.7639} \\
\midrule
\midrule
\multirow{6}[4]{*}{Non-Junction} & \multicolumn{1}{c|}{\multirow{4}[2]{*}{\makecell[c]{Marginal \\ Predictions}}} & LSTM-ED \cite{Argoverse2019} & 0.9337 & 2.1816 & 0.6077 & 0.002002 & 0.7965 \\
& & VectorNet (noGNN) \cite{VectorNet2020} & 0.9318 & 2.0643 & 0.5887 & 0.002486 & 0.8151 \\
& & VectorNet \cite{VectorNet2020} & 0.8933 & 1.9584 & 0.5180 & 0.002518 & 0.8018 \\
& & TNT \cite{TNT2020} & 0.9090 & 2.0577 & 0.5285 & 0.002970 & 0.7153 \\
\cmidrule{2-8} & \multicolumn{1}{c|}{\multirow{2}[2]{*}{\makecell[c]{Ablations}}} & CGTP (wo interactive loss) & 0.7814 & 1.9149 & 0.4721 & 0.002098 & 0.8341 \\
& & CGTP (w interactive loss) & \textbf{0.6209} & \textbf{1.4780} & \textbf{0.3544} & \textbf{0.001793} & \textbf{0.8583} \\
\bottomrule
\end{tabular}%
\label{tab:In-house}%
\vspace{-1em}
\end{table*}%
\noindent\textbf{OR}. A single overlap is defined for a frame where the bounding boxes of the two interacting agents overlap with each other at any future time step in the highest-confidence trajectory-pair prediction. The average over all frames constitutes the overlap rate.
Here, we use $\bar{\kappa}$ to represent the index of the predicted trajectory-pair with the highest confidence score, and define a single overlap indicator over it as
\begin{equation} \label{eq:n5-4}
\begin{gathered}
IsOverlap(\cdot) = \mathbb{1}\left(\sum_{\delta=t+1}^{t+T} IOU\left( b\left(\hat{\boldsymbol{y}}_{A}^{\bar{\kappa},\delta}\right), b\left(\hat{\boldsymbol{y}}_{B}^{\bar{\kappa},\delta}\right)
\right) > 0\right),
\end{gathered}
\end{equation}
where $b(\cdot)$ is a function that obtains the bounding box information (length, width and heading) from the predicted state of each interacting agent at any future time step $\delta$. Inspired by \cite{TrafficSim2021}, $IOU(\cdot)$ computes the intersection-over-union between the two bounding boxes of the interacting agents.
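As a simplified sketch, the check below uses axis-aligned boxes (heading is ignored for brevity, unlike the oriented boxes returned by $b(\cdot)$; names and shapes are our assumptions), relying on the fact that $IOU>0$ exactly when the two boxes intersect:
\begin{verbatim}
import numpy as np

def is_overlap(traj_A, traj_B, length, width):
    """Overlap indicator over the most-confident trajectory-pair.

    traj_A, traj_B : (T, 2) predicted center positions of each agent
    """
    half = np.array([length / 2.0, width / 2.0])
    for pa, pb in zip(traj_A, traj_B):
        lo = np.maximum(pa - half, pb - half)
        hi = np.minimum(pa + half, pb + half)
        inter = np.prod(np.clip(hi - lo, 0.0, None))  # overlap area
        if inter > 0.0:  # IOU > 0 iff the boxes intersect
            return True
    return False
\end{verbatim}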
\noindent\textcolor{black}{\textbf{mAP}. Given the confidence scores of joint predictions estimated by $\pi\left(\boldsymbol{g}_B^k, \boldsymbol{g}_A^{q} \mid \boldsymbol{C}\right)$, mAP calculates the area under the precision-recall curve, where the definition of MR is employed to determine true positives, false positives, etc.}
\noindent\textbf{CR}. The Cut-in Rate is computed as the total number of cut-in frames divided by the total number of safety frames. In this paper, we use the definition of OR above to define safety frames. In addition to requiring no overlap between the two interacting agents, a cut-in frame is determined by a cut-in indicator:
\begin{equation} \label{eq:n5-5}
\begin{gathered}
IsCutin(\cdot) = \mathbb{1}\left( lane\left(\hat{\boldsymbol{y}}_{A}^{\bar{\kappa},t+T}\right)=lane\left(\hat{\boldsymbol{y}}_{B}^{\bar{\kappa},t+T}\right)\right)
\\
\wedge \mathbb{1}\left( y\left(\hat{\boldsymbol{y}}_{B}^{\bar{\kappa},t+T}\right)>y\left(\hat{\boldsymbol{y}}_{A}^{\bar{\kappa},t+T}\right)\right)
\end{gathered}
\end{equation}
where $lane(\cdot)$ denotes a function that returns the index of the future lane in which each interacting agent is located at the last future time step, and $y(\cdot)$ derives the longitudinal coordinate of the endpoints of the predicted trajectory-pair.
\emph{(3) Implementation Details:} This section introduces the implementation details from the aspects of pre-processing, network architecture design and learning scheme.
\noindent \textbf{Pre-processing}. For each interacting agent, both past and future trajectories are normalized in its own reference frame, with the origin centered at its location at the last observed time step. Further, the other observations, including agent trajectories and future lanes, are transformed accordingly into the reference frame of each interacting agent. For the dynamic information, heuristic rules are applied to select $O$=14 nearby vehicles as surrounding agents. For the static information, we use the Depth-First-Search algorithm to find $P$=6 potential future lanes that each interacting agent is likely to reach, and every future lane has 200 goal candidates sampled every 0.5 meters. Thus, we obtain $K$=1200 fine-grained goal candidates in total to represent diverse uncertainties. If the number of surrounding agents or future lanes is insufficient, the corresponding locations are masked out with zeros.
\noindent \textbf{Network Architecture Design}. For context encoding, GoINet extracts the individual-level feature of every node via $L$=3 graph layers.
Due to the concatenation operator $f_{cc}(\cdot)$, the number of hidden units at graph layer $l$ is twice that at graph layer $l-1$, and its initial value $d_h$ is set to 16.
Subsequently, the goal distribution and offset predictions of the two interacting agents, realized by $f_i^{seg}(\cdot)$ and $f_i^{reg}(\cdot)$, are 3-layer MLPs with 128 hidden units. Based on the goal interactive prediction, the proposed CGPNet selects the top $\mathcal{K}$=5 future goal candidates of agent $A$ from its marginal goal distribution, which in turn serve as conditional queries to determine the same number of future goal candidates of agent $B$ from the conditional goal distribution.
Further, during the trajectory interactive prediction process, 2-layer bidirectional GRUs with hidden dimension 128 are used by both the encoder and decoder. Given $\mathcal{K}^2$=25 goal-pairs, 25 goal-oriented trajectory-pairs are produced jointly by the proposed GTFNet. \textcolor{black}{Different from the Argoverse and In-house cut-in datasets, we further reduce the number of joint trajectory-pairs to 6 to satisfy the evaluation requirements of the Waymo motion prediction benchmark. In our CGTP framework, we filter 6 joint predictions from the $\mathcal{K}^2$=25 candidates using the non-maximum suppression method \cite{TNT2020}.}
\noindent \textbf{Learning Scheme}. Our proposed CGTP framework is trained on 8 A100 GPUs with the Adam optimizer \cite{Adam2014}. The learning rate is initialized to 5e-3 and decayed by a factor of 0.5 every 30 epochs. Our model requires approximately 200 epochs to train with a batch size of 64.
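In PyTorch terms, the stated schedule corresponds to a sketch like the following; the model is a placeholder, and only the optimizer, decay schedule, epoch count and batch size come from the text:
\begin{verbatim}
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the full CGTP network
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)
# halve the learning rate every 30 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=30, gamma=0.5)
for epoch in range(200):
    # ... iterate over batches of size 64, backprop, optimizer.step() ...
    scheduler.step()
\end{verbatim}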
\begin{table*}[t]
\centering
\caption{\textcolor{black}{Comparison with interactive prediction benchmark of WOMD}}
\begin{tabular}{c|c|ccccc}
\toprule
\textbf{Joint Prediction} & \textbf{Methods} & \textbf{minADE $\downarrow$} & \textbf{minFDE $\downarrow$} & \textbf{MR $\downarrow$} & \textbf{OR $\downarrow$} &\textbf{mAP $\uparrow$}\\
\midrule
\multicolumn{1}{c|}{\multirow{2}[1]{*}{\makecell[c]{Marginal \\ Predictions}}} & Waymo LSTM Baseline \cite{WaymoDataset2021} & 2.420 & 6.070 & 0.660 & - & 0.070\\
& TNT \cite{TNT2020} & 2.585 & 6.136 & 0.605 & 0.186 & 0.167 \\
\midrule
\multicolumn{1}{c|}{\multirow{2}[1]{*}{\makecell[c]{Conditional \\ Predictions}}}
& ProspectNet \cite{ProspectNet} & 3.012 & 8.118 & 0.826 & 0.416 & 0.115 \\
& M2I \cite{sun2022m2i} & 2.399 & 5.477 & 0.552 & 0.174 & 0.177 \\
\midrule
\multicolumn{1}{c|}{\multirow{2}[2]{*}{\makecell[c]{Ablations}}} & CGTP (wo interactive loss) & 2.414 & 5.531 & \textbf{0.551} & 0.173 & 0.179 \\
& CGTP (w interactive loss) & \textbf{2.371} & \textbf{5.395} & 0.559 &\textbf{0.169} & \textbf{0.180}\\
\bottomrule
\multicolumn{7}{l}{{$^{\star}$mAP is the official ranking metric.}}
\end{tabular}%
\label{tab:Waymo}%
\end{table*}%
\subsection{Quantitative Results}
\noindent\textbf{\textcolor{black}{Comparisons on the Argoverse and In-house datasets.}} We first compare our CGTP framework with existing mainstream marginal prediction approaches on the Argoverse and In-house cut-in datasets. In this paper, we extend the LSTM-based encoder-decoder (LSTM-ED) \cite{Argoverse2019}, VectorNet \cite{VectorNet2020} and TNT \cite{TNT2020} to the joint prediction task, producing marginal predictions for both agents without considering their future interactions. Among them, LSTM-ED and VectorNet predict trajectory-pairs via a pure regression method.
Especially for VectorNet, to validate the effectiveness of message passing in a graph, we provide a variant of VectorNet, named VectorNet (noGNN), whose context representations are captured purely by MLP and max-pooling. \textcolor{black}{On the other hand, we compare our CGTP framework with the goal-oriented trajectory prediction model TNT to further verify the significance of the proposed goal interactive predictor in our model.} As shown in Tables~\ref{tab:Argoverse} and \ref{tab:In-house}, our proposed model outperforms all marginal approaches on the Argoverse and In-house cut-in datasets by a large margin in all distance error metrics (minADE, minFDE and MR). More specifically, for the In-house non-junction environment, the comparative results show that the proposed CGTP framework significantly outperforms the TNT model, with a 32.9$\%$ reduction in MR. This large improvement can be attributed to the accurate estimation of the joint distribution over future goals in the goal interactive prediction stage.
Also note that VectorNet achieves significant improvements over VectorNet (noGNN), demonstrating that a GNN can aggregate interactive features from context information via message passing.
In terms of interactive metrics like CR and OR, compared with the marginal prediction methods, our CGTP framework achieves on-par or better performance on the In-house cut-in dataset, as shown in Table~\ref{tab:In-house}.
Specifically, in the non-junction environment, the conditional model trained with our proposed framework beats the marginal model trained with TNT, achieving a 39.6$\%$ relative reduction in OR and a 20.0$\%$ relative gain in CR. Unlike TNT, the proposed CGTP framework models future interactions at the goal-based level,
and is thus capable of learning the joint distribution of cut-in interactive behaviors. By contrast, the TNT-based marginal model assumes the goal predictions of the two interacting agents are independent of each other, and hence hardly generates reasonable cut-in trajectory-pairs in some scenarios with complex future interactions. \textcolor{black}{In the junction environment, our CGTP framework greatly outperforms LSTM-ED in CR, while its OR is slightly worse. This phenomenon results from the poor imitation ability of the simple regression-based marginal model, which may output
inaccurate behaviors far away from the ground truth cut-in behaviors yet with a safety margin between the two interacting agents, leading to the illusion of a lower collision rate.}
\noindent \textcolor{black}{\textbf{Comparisons on the interactive prediction benchmark of WOMD.} In Table~\ref{tab:Waymo}, we compare our model with both marginal and conditional prediction methods. On one hand, the marginal approaches include the Waymo LSTM Baseline \cite{WaymoDataset2021} and TNT \cite{TNT2020}, where the former is the official baseline provided by the benchmark and the latter is a typical goal-oriented prediction method that also served as a comparison model on the two datasets above. On the other hand, we take ProspectNet \cite{ProspectNet} and M2I \cite{sun2022m2i} as conditional comparison approaches, which are state-of-the-art models on the WOMD interactive prediction benchmark. Such conditional models commonly build conditional dependencies on explicit overall future trajectories while differing in the process of context encoding. In detail, ProspectNet leverages attention to aggregate vectorized features both spatially and temporally,
while M2I learns features from both rasterized and vectorized representations. In addition, the unique novelty of M2I lies in its relation predictor, which infers the influencer-reactor relations of the two interacting agents and then applies marginal and conditional trajectory predictors in turn to generate the joint trajectory-pairs. In our work, we adopt the relation predictor of M2I to determine agents $A$ and $B$ before training and validation, yet focus on goal interactive prediction in a combined format of marginal and conditional methods.}
\begin{figure*}[htbp]
\centering
\begin{center}
\scriptsize
\includegraphics*[width=0.85\textwidth]{./photo/v.jpg}
\caption{\textcolor{black}{Qualitative examples from TNT, M2I, and our CGTP framework in four classes of pairwise interactive scenarios, including (a) cut-in, (b) yielding, (c) merging, and (d) intersection left-turn. Each pairwise interactive scenario is demonstrated by a group of examples. Compared with TNT (upper row) and M2I (medium row), our CGTP framework (lower row) accounts for future interactions at the goal-based level and achieves better prediction accuracy and scene compliance.
} }
\label{fig:Future-Interaction}
\end{center}
\vspace{-1.3em}
\end{figure*}
\textcolor{black}{The comparison results demonstrate that our CGTP model outperforms the Waymo LSTM Baseline in all metrics. Similar to the observations on the two datasets above, the performance of our CGTP framework is superior to that of the goal-oriented trajectory prediction method TNT.
Compared to the conditional model ProspectNet, our CGTP framework improves mAP by 56.52$\%$, showing that the combined design of CGPNet and GTFNet is capable of learning a more accurate joint distribution over the future.
We further validate the effectiveness of modeling future interactions between sparse explicit future goals by comparing the joint predictions of our model and the state-of-the-art model M2I, which instead considers future interactions between redundant explicit future trajectories; our model achieves a 2.87$\%$ reduction in OR and a 1.69$\%$ gain in mAP, the official ranking metric.}
\noindent\textcolor{black}{\textbf{Comparisons on the ablation study.}} We conduct ablation studies to analyze the contribution of the interactive loss in the proposed CGTP framework. As shown in Tables~\ref{tab:Argoverse}--\ref{tab:In-house}, our full CGTP framework achieves on-par or better results in all metrics by adding the novel interactive loss. The improvements indicate that our model with the interactive loss can obtain the high-quality goal-pair estimated by the learned joint distribution, and then produce the scene-compliant goal-oriented trajectory-pair most closely matching the ground truth. \textcolor{black}{On WOMD, the most likely trajectory-pair learned with the interactive loss characterizes a more reasonable future interactive behavior, improving the mAP metric by 0.56$\%$.} Similar observations also appear in the interactive metrics of the In-house cut-in dataset.
More specifically, in the junction environment, OR and CR are 7.5$\%$ and 0.8$\%$ better compared to our model without the interactive loss. In the non-junction environment, our model with the interactive loss is 14.5$\%$/2.9$\%$ better in OR/CR compared to the one without it.
\subsection{Qualitative Results}
\textcolor{black}{In Fig.~\ref{fig:Future-Interaction}, we present four classes of challenging pairwise interactive scenarios in WOMD, including cut-in, yielding, merging and intersection left-turn, and visualize the most likely trajectory-pair from the goal-oriented trajectory prediction method TNT, the SOTA method M2I, and our CGTP framework, respectively. In Fig.~\ref{fig:Future-Interaction}(a), a group of examples depicts a pairwise interactive scenario where agent $B$ is cutting in front of agent $A$. The goal-oriented trajectory prediction model TNT fails to capture the interaction and predicts overlapping trajectories, as shown in the first column of Fig.~\ref{fig:Future-Interaction}(a). Although no overlap is exhibited in the remaining cut-in examples, TNT still produces less accurate predictions that mismatch the ground truth interactive behaviors. Likewise, M2I hardly captures accurate, aggressive cut-in interactive behaviors by considering the overall predicted trajectory of agent $A$. Instead, our CGTP framework is aware of the underlying interaction between the future goals of the two interacting agents: it predicts an accurate endpoint of agent $B$ conditioned on the predicted endpoint of agent $A$, and then outputs a scene-compliant goal-oriented trajectory-pair given an accurate cut-in goal-pair prediction. Different from Fig.~\ref{fig:Future-Interaction}(a), Fig.~\ref{fig:Future-Interaction}(d) provides a set of examples where agent $B$ turns left while its opposite agent $A$ goes straight at the intersection, which represents a more challenging pairwise interactive scenario. In each example, our CGTP framework successfully improves prediction accuracy and scene compliance, while TNT predicts trajectories far away from the ground truth without considering the future interaction between the two agents.}
\section {Conclusion}\label{sec:Conclusion}
In this paper, we propose a novel CGTP framework for interactive behavior prediction of two agents. We build hierarchical representations of fine-grained future goals, and focus on the goal interactive prediction stage via a combined form of marginal and conditional goal predictors, where we predict the future goals of agent $A$ via a marginal goal predictor and then perform future goal prediction of agent $B$ conditioned on each marginal prediction. Once the goal-pairs of the two interacting agents are determined, a trajectory interactive prediction module is designed to generate the goal-oriented trajectory-pairs in a step-wise rollout manner. \textcolor{black}{The experimental results on the Argoverse motion forecasting dataset, the In-house cut-in dataset and the Waymo open motion dataset show the superiority of our proposed method in prediction accuracy and scene compliance. As future work, joint prediction for more interacting agents with a low computational burden is an interesting and important frontier.}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
2,869,038,155,836 | arxiv | \section{Introduction}
The elementary fermions that make up the proton and the neutron are called quarks. These fundamental particles are strongly interacting and have been successfully described by a non-Abelian color gauge theory called quantum chromodynamics, or QCD for short. In QCD the gauge group is the color $SU(3)_{c}$, and the quarks $\psi$ and the quanta of the gauge fields (gluons) $A^{a}_{\mu}$ belong to its fundamental and adjoint representations, respectively.
A remarkable feature of QCD is the self-interactions among the $SU(3)_{c}$ gauge fields. It is a well-known fact that these self-interactions of the gauge fields are the main source of its asymptotic freedom. In the high energy regime, the quarks interact weakly, and one can safely use the standard techniques of perturbation theory to discuss QCD in this regime. In the low energy regime, however, the quarks are bound inside the mesons and baryons as quark-antiquark or three-quark states with net color charge zero. These bound states of quarks are usually called hadrons. The quarks inside the hadrons form a strongly interacting system. Consequently, there is a lack of analytical methods to discuss the low energy properties of QCD from first principles, since the standard techniques of perturbation theory only apply to a weakly interacting system.
The solvability of QCD in the low energy region is plagued by the lack of an expansion parameter. To overcome this issue, 't Hooft \cite{tH} suggested that one should learn properties of QCD from $SU(N)$ gauge theory in the limit $N\mapsto \infty$, since in this limit $1/N$ can be used as an expansion parameter and a perturbative study of QCD can be performed. In addition, studies of this large-$N$ expansion suggested that there must be a correspondence between strongly coupled conformal field theories and gauge theories living in a higher dimensional AdS background geometry. This duality, originally proposed by Maldacena \cite{M}, asserts the equivalence between the low energy approximation of type $IIB$ string theory on $AdS_{5}\bigotimes S^{5}$ and $\cN=4$ U(N) SYM theory at large $N$ in four dimensions. This Anti-de Sitter/conformal field theory (AdS/CFT) correspondence \cite{GKP, OGO, W} has been studied at length in the literature; it posits the correspondence between a strongly coupled conformal theory in $d$-dimensional spacetime and a theory of gravity formulated in an $AdS_{d+1}$ background. Although QCD is not a conformal theory, we believe that we can still apply the AdS/CFT recipe to it due to its conformal behavior in the high energy regime. This phenomenological cousin of AdS/CFT has been termed AdS/QCD.
In many respects, AdS/QCD is an ideal pedagogical device for learning about the low energy properties of QCD (the mass spectrum, the form factors, the correlation functions, the coupling constants and the decay constants of the bound states of quarks). The fundamental lesson of the AdS/QCD duality is the equivalence between a theory of gravity living in a five-dimensional AdS geometry and low energy QCD at the boundary of this geometrical background. This rests on two basic tenets: on one hand, the boundary of the five-dimensional AdS model is a four-dimensional spacetime that looks like flat spacetime with three spatial directions and one time direction; on the other hand, Yang-Mills theory at the boundary of $AdS_{5}$ is equivalent to gravitational physics in the $AdS_{5}$ geometry. There are two approaches to studying QCD through AdS/QCD: the hard-wall model \cite{dTB, EKSS, DP1} and the soft-wall model \cite{KKSS}.
In this paper, we limit ourselves to the soft-wall model. Specifically, we analyze the mass spectrum of the vector ($\rho$) mesons. The virtue of the soft-wall AdS/QCD model is that it achieves linear confinement \cite{KKSS, Z, SWY} and chiral symmetry breaking \cite{GKK, DP1, VS2} in a very simple way.
Over recent years, using the soft-wall AdS/QCD device, several researchers have theoretically investigated \cite{KKSS, Z, VS1, VS2, GKK, SWY, WF} the mass spectra of the resonance meson sectors of low energy QCD. Nevertheless, the discrepancy between the theoretical and experimental values of some of the masses is still very large.
The potential of the Schr\"{o}dinger-like equation that determines the mass spectrum of the $\rho$ mesons is the simplest of all the meson sectors, since it only depends on the dilaton field $\Phi(z)$ and the conformal factor $a(z)$. So we believe that if one manages to reduce the discrepancy for the $\rho$ meson masses, the discrepancies in the other sectors should be reducible by a suitable parameterization of the vev of the scalar field $X(z)$. Therefore, it seems reasonable to redo the analysis of the $\rho$ meson sector.
The rest of this paper is organized as follows: The next section opens with a review of the soft-wall AdS/QCD. In section 3, we present our model and the mass spectrum it gives. The final section is devoted to the conclusion.
\section{The Soft-Wall AdS/QCD model}
The soft-wall AdS/QCD model is the bottom-up approach to QCD in the AdS/QCD correspondence. That is, one starts from QCD and constructs its gravity dual theory. For an understanding of how it works, see Ref.~\cite{JP}.
For an illustration of this bottom-up approach, let us consider the Dirac Lagrangian that governs the motion of quarks, that is,
\begin{equation}\label{dirac}
L_{Dirac}=\bar{\psi}(i\gamma^{\mu}\partial_{\mu}-m)\psi.
\end{equation}
In terms of the chiral components,
\begin{eqnarray}
\psi=\begin{pmatrix} \psi_{R} \\[0.2cm] \psi_{L} \end{pmatrix},\nonumber
\end{eqnarray}
equation (\ref{dirac}) can be rewritten as:
\begin{equation}\label{chiral}
L_{Dirac}=\bar{\psi}_{L}i\gamma^{\mu}\partial_{\mu}\psi_{L}+\bar{\psi}_{R}i\gamma^{\mu}\partial_{\mu}\psi_{R}-m(\bar{\psi}_{R}\psi_{L}+\bar{\psi}_{L}\psi_{R}).
\end{equation}
It can be demonstrated that this Lagrangian has a global $SU(N_f)_L \times SU(N_f)_R$ symmetry in the massless limit ($m=0$)\footnote{$N_{f}$ is the number of quark flavors.}. This chiral symmetry is explicitly broken by the mass term. Additionally, there is another source, the so-called quark condensate $\langle\bar{\psi}(x)\psi(x)\rangle$, that spontaneously breaks the chiral symmetry. Furthermore, the meson spectra of low energy QCD are linear and the quarks are confined inside the hadrons. This linear confinement feature has to be realized in any realistic model of low energy QCD.
The building of the holographic (gravity) dual of QCD starts from these basic ingredients of low energy QCD. According to the dictionary, each global symmetry of QCD becomes a local symmetry on the gravity side. Thus, on the gravity side, there is one left-handed gauge vector field ($A_{L}$) for the global $SU(N_{f})_{L}$ symmetry of QCD and one right-handed gauge vector field ($A_{R}$) for the global $SU(N_{f})_{R}$ symmetry of QCD. The spontaneous breaking of the chiral symmetry is achieved by introducing a scalar field $X(z)$, which transforms in the bifundamental representation of the 5D gauge group $SU(N_f)_L \times SU(N_f)_R$. To realize the linear confinement feature of QCD, one simply has to turn on a dilaton field $\Phi(z)$ depending on the holographic coordinate, with the limit $\Phi(z\rightarrow\infty)\sim \,\lambda^2 z^2\, $ \cite{KKSS}. Finally and most importantly, the background in which these bulk fields live is assumed to be a slice of AdS spacetime. The exact mapping between the fields on the two sides is depicted in Table 1. The holographic coordinate $z$, which corresponds to the energy scale in the four-dimensional theory, is defined within the range $0<z<\infty$.
\begin{table}
\begin{center}
\caption{The mapping between the fields on the two sides and their dimensions.}
\begin{tabular}{ c c c | c c c | c c c | c c c }
\hline
\hline
& 4D : \textit{O}(x) & & & 5D : $\Phi^{bulk}(x,z)$ & & & $\Delta$ & & & $m_{5}^{2}$ & \\
\hline
& $\overline{\psi}_{L} \gamma^{\mu} t^{a} \psi_{L}$ & & & $A_{L \mu}^{a}$ & & & 3 & & & 0 & \\
& $\overline{\psi}_{R} \gamma^{\mu} t^{a} \psi_{R}$ & & & $A_{R \mu}^{a}$ & & & 3 & \\
& $\overline{\psi}_{R}^{\alpha} \psi_{L}^{\beta}$ & & & $\frac{1}{z} X^{\alpha\beta}$& & & $3$ & & & $-3$ & \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
The 5D non-renormalizable bulk action\footnote{We limit ourselves to the quadratic parts of the fields.}, written in terms of these fields, that describes the meson sector reads
\begin{eqnarray}
S_M=\int d^4x\,dz\, \sqrt{G}\,e^{-\Phi}\, \mathrm{Tr}\left\{-\frac{1}{4g_5^2}(\,F_L^2+F_R^2)+
|DX|^2-m_X^2|X|^2\,\right\}\, \label{SM},
\end{eqnarray}
where $F_L$ and $F_R$ are the nonabelian field strengths formed from the gauge potentials $A_{L}$ and $A_{R}$, respectively, and are defined by
\begin{equation}
F_{L,R}^{MN} = \partial^{M} A_{L,R}^{N} - \partial^{N} A_{L,R}^{M} - i [A_{L,R}^{M},A_{L,R}^{N}]. \nonumber
\end{equation}
The symbol $D$ is the Yang-Mills covariant derivative containing the gauge fields $A_{L}$ and $A_{R}$; that is, $D_M{X}=\p_M{X}-iA_{LM}{X}+i{X}A_{RM}$. The AdS/QCD dictionary fixes the gauge coupling $g_5$ by matching to QCD, giving $g_5^2=12\pi^2/N_c=4\pi^2$ (for $N_c=3$ as in QCD).
The metric is an AdS geometrical background
\begin{eqnarray}
ds^2=\,G_{MN}\,dx^M dx^N=a(z)\,(\,\eta_{\mu\nu}dx^{\mu}dx^{\nu}-dz^2),\label{metric}
\end{eqnarray}
where the 4D Minkowski metric is given by $(\eta_{\mu\nu})=diag(1,-1,-1,-1)$, and $a(z)$ is the conformal factor, also called the warp factor.
The vacuum expectation value (vev) of the scalar field ($\langle{X}\rangle=\frac{1}{2}\,v(z)$) is assumed to have a $z$-dependence with the limit
\begin{eqnarray}
v(z\rightarrow0)\sim \,\alpha z+\beta z^3\,. \label{vUV}\nonumber
\end{eqnarray}
From (\ref{SM}), the equation of motion (EOM) of the vev $v(z)$, in the axial gauge to be defined below, reads
\begin{eqnarray}
\p_z(\,a^3 e^{-\Phi}\p_z v)+3a^5 e^{-\Phi}v=0\,. \label{EOMv}
\end{eqnarray}
The hadrons are identified with the normalizable modes of the 5D gauge fields. For instance,
the vector mesons ($V$) and the axial-vector mesons ($A$) are respectively defined by:
\begin{eqnarray}
V=(A_{L}+A_{R})/2\,,\qquad A=(A_{L}-A_{R})/2
\nonumber
\end{eqnarray}
The analysis of a gauge theory is usually simplified by choosing a gauge fixing condition. Here, we choose the most commonly used one, that is, the axial gauge $V_z=0$.
It is a well-known fact that the Kaluza-Klein dimensional reduction of a 5D vector field gives rise to an infinite tower of 4D massive vector fields called the Kaluza-Klein (KK) modes. This KK decomposition is usually done by breaking the field of interest into an infinite tower of 4D components satisfying the Proca equation and purely extra-dimension-dependent parts. For example, one can decompose the vector meson field ($V$) as $V_\mu(x,z)=\sum_{n}\,\rho_\mu^{(n)}(x)h_V^{(n)}(z)$. The $z$-dependent parts satisfy the constraint \cite{KM}
\begin{equation}\label{eigen}
-\p_5(ae^{-\Phi}\p_5h_V^{(n)})=ae^{-\Phi}M_V^{(n)2}h _V^{(n)},
\end{equation} where $M_V^{(n)2}$ are the squared masses of the vector fields $\rho_\mu^{(n)}(x)$.
The infinite tower of 4D massive vector fields $\rho_\mu^{(n)}(x)$ resulting from the Kaluza-Klein decomposition of the vector field $V$ is identified with the vector $\rho$ mesons of low energy QCD, and equation (\ref{eigen}) determines their mass spectrum.
Equation (\ref{eigen}) can be brought, by setting $h_V^{(n)}=e^{[\Phi(z)-\log{a(z)}]/2}\chi_V^{(n)}$, into the Schr\"{o}dinger form
\begin{equation}\label{sch}
-\chi_V^{(n)\prime\prime}+V_V\chi_V^{(n)}=M_V^{(n)2}\chi_V^{(n)}
\end{equation}
with the potential
\begin{eqnarray}
V_V=\frac{1}{4}[\Phi(z)-\log{a(z)}]'^{\,2}-\frac{1}{2}[\Phi(z)-\log{a(z)}]''\,,\label{VV}
\end{eqnarray} where a prime ($'$) denotes differentiation with respect to $z$.
\section{The Model and its Parameters}
The analysis we intend to perform is based on a modified version of the soft-wall model advocated by Erlich et al. \cite{EKSS} and further scrutinized in \cite{GKK, Z, VS1, KKSS2}. We assume that the bulk fields propagate in a slice of the 5D AdS background geometry, where the metric in the Poincar\'{e} coordinates is given by equation (\ref{metric}).
With the constraint on the dilaton field in mind, and the fact that we should recover the AdS geometry in the UV regime, that is, $a(z\rightarrow0)\sim \,L/z\, $, we define our model in Table 2.
\begin{table}
\begin{center}
\caption{The form of the dilaton field and the conformal factor used in this model.}
\begin{tabular}{|cc|cc|cc|cc|}
\hline
A & & $\Phi_{A}(z)= \lambda^2 z^2+\lambda z$ & & $a_{A}(z)=1/z$ & \\
B && $\Phi_{B}(z)= \lambda^2 z^2+\lambda z$ && $a_{B}(z)=(m+n z^2)/z$ &\\
C && $\Phi_{C}(z)= \lambda^2 z^2+\lambda z$ && $a_{C}(z)=\sqrt{\alpha^2+d z^2}/z$ & \\
\hline
\end{tabular}
\end{center}
\end{table}
Comparatively, the form of our dilaton field $\Phi(z)= \lambda^2 z^2+\lambda z$ differs from that of \cite{HG} in two points. The first thing to notice is that our model needs only one parameter, in contrast to two in \cite{HG}. More importantly, the sign of the $z$-dependent term is positive in our model. This is very important, since it has been shown in \cite{KKSS2} that, in order to avoid the presence of a spurious massless state in the vector sector, the sign of the exponential profile defining the wall should be positive.
Using (\ref{VV}) and the parametrizations given in Table 2, one can easily evaluate the potentials of the Schr\"{o}dinger-like problem; the results are:
\begin{flushleft}
\begin{equation}\label{Model A}
V_{VA}(x)=\lambda^4 x^2+\lambda^3 x+\frac{\lambda^2}{4}+\frac{\lambda}{2 x}+\frac{3}{4 x^2},
\end{equation}
\end{flushleft}
\begin{multline}\label{Model B}
V_{VB}(x)=\lambda^4 x^2+\lambda^3 x-\lambda^2\frac{n x^2-m}{n x^2+m}-
\frac{3 \lambda^2}{4}-\frac{n \lambda x^2-m \lambda-n x}{2 n x^3+2 m x}\\+
\frac{m}{2 x^2(n x^2+m)}-\frac{3 n^2 x^2}{4 (n x^2+m)^2}+\frac{mn}{2 (n x^2+m)^2}+\frac{m^2}{4 x^2(n x^2+m)^2},
\end{multline}
\begin{equation}\label{Model c}
V_{VC}(x)=\lambda^4 x^2+\lambda^3 x+\frac{\lambda^2}{4}+\frac{\lambda}{2 x}+\frac{3}{4 x^2}-d x \lambda\frac{(2 x \lambda+1)}{2 \alpha^2+2 d x^2}-3 \frac{d^2 x^2}{4(\alpha^2+d x^2)^2}.
\end{equation}
To obtain the mass spectrum from the Schr\"{o}dinger equation (\ref{sch}) with the potentials (\ref{Model A})--(\ref{Model c}), we used the shooting method with the boundary conditions $\chi_V^{(n)}(z\to 0) =0$ and $\partial_z
\chi_V^{(n)}(z\to \infty) =0$; the resulting mass spectra are presented in table 3.
The values of the five parameters that fit the experimental data are:
\begin{eqnarray}
&\lambda=401.57\,\mathrm{MeV}\,,\quad \alpha=0.7\,\mathrm{MeV}\,;& \nonumber \\
&m=1.1\,\mathrm{MeV}\,,\quad n=6000\,\mathrm{MeV}\,,\quad d=5900\,\mathrm{MeV}\,.& \label{para_m}
\end{eqnarray}
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\
\hline
Model A &983.6 & 1297 & 1544 & 1755 & 1941.6 & 2140 & 2266.8 \\
\hline
Model B &970 & 1280& 1525 & 1734 & 1920 & 2113 & 2244 \\
\hline
Model C &970 & 1281& 1527 &1737 & 1924 & 2093 & 2287\\
\hline
experimental &775.5$\pm$ 1& 1282$\pm$ 37 & 1465$\pm$ 25 & 1720$\pm$ 20 & 1909$\pm$30 & 2149$\pm$ 17 & 2265$\pm$ 40 \\
\hline
error (Model A) &26.8\%& 1.2\% &5.4\% & 2\% & 1.7\% & 0.4\% & 0.1\% \\
\hline
error (Model B) &25\%& 0.2\% &4.1\% & 0.8\% & 0.6\% & 1.7\% & 0.9\% \\
\hline
error (Model C) &25\%& 0.1\% &4.2\% & 1\% & 0.8\% & 2.6\% & 1\%\\
\hline
\end{tabular}
\caption{\small{Theoretical and experimental masses (in MeV) of the radial excitations of the vector $\rho$ meson for the three models, together with the relative errors.}}\label{rho}
\end{table}
\section{Conclusions}
In this paper, using the device of the soft-wall AdS/QCD model, we analyzed the $\rho$ meson mass spectrum with a dilaton field of the form $\Phi(z)= \lambda^2 z^2+\lambda z$. This form of the dilaton field satisfies the constraint for the correct Regge behavior as well as the constraint for the absence of a spurious massless scalar mode in the spectrum. Moreover, as can be seen in table 3, our models agree well with the experimental data. It therefore seems worthwhile to extend this analysis by revisiting the other sectors with our dilaton field profile. Comparatively, figure 1 shows that model B, as well as the back-reacted geometry \cite{WF} (case C), approaches the experimental data very well. Consequently, in the soft-wall AdS/QCD model, the back-reaction of the geometry has to be taken into account.
\subsection*{Acknowledgments}
We wish to express our thanks to Alfredo Vega, Thomas Kelley, and Peng Zhang for useful correspondence. Keita is indebted to Wu Feng for his encouragement. He would also like to thank all the members of the Department of Physics of Nanchang University, and the teams of the Center for Relativistic Astrophysics and High Energy Physics of Nanchang University.
\section{Introduction}
Dynamical studies of the central regions of nearby inactive galaxies have
revealed that supermassive black holes (BHs; {$M_{\rm BH}$}\ $\approx 10^6-10^9$
$M_\odot$) are ubiquitous, and that their masses are strongly coupled to the
host galaxy's bulge luminosity (Kormendy \& Richstone 1995; Magorrian et al.
1998) and stellar velocity dispersion ($M_{\rm BH}-\sigma_*$\ relation: Gebhardt et al.
2000a; Ferrarese \& Merritt 2000; Tremaine et al. 2002). The occupation
fraction of central BHs in bulgeless or very late-type galaxies is poorly
constrained, but at least some such systems have been found to contain BHs
with masses as low as $\sim 10^5$ $M_\odot$\ (Filippenko \& Ho 2003; Barth et
al. 2004; Greene \& Ho 2004, 2007b, 2007c), which continue to obey the
$M_{\rm BH}-\sigma_*$\ relation (Barth et al. 2005; Greene \& Ho 2006b). These empirical
correlations, which strongly suggest that BH growth is closely coupled to
galaxy formation and evolution, have inspired considerable observational and
theoretical attention in the last few years (see, e.g., reviews in Ho 2004a).
BH growth, its observational manifestation as nuclear activity, and the
consequences of feedback from active galactic nuclei (AGNs) are now widely
viewed as unavoidable pieces of the overall puzzle of cosmological structural
formation (e.g., Granato et al. 2004; Springel et al. 2005; Hopkins et al.
2006).
The BH-bulge correlations beg several pressing, unanswered questions. Do BHs
grow predominantly by radiatively efficient accretion during the luminous (Yu
\& Tremaine 2002) or obscured (Fabian 1999) quasar phase, by low accretion
rates in moderately luminous AGNs at recent times (e.g., Cowie et al. 2003),
or by mergers (e.g., Yoo \& Miralda-Escud\'e 2004)? Which came first, BH or
galaxy? Must the growth of the BH and its host be finely synchronized so as
to preserve the small intrinsic scatter observed in the local BH-host scaling
relations?
These issues can be observationally tackled in two steps: first by extending
the BH-host scaling relations to {\it active}\ galaxies, wherein the BHs are
still growing, and second by probing the evolution of the scaling relations by
stepping back in redshift. To this end, two ingredients are needed---BH
masses and host galaxy parameters. Fortunately, {$M_{\rm BH}$}\ can be estimated
readily in broad-line (type 1) AGNs, solely from basic spectroscopic data
using the mass-luminosity-line width relation (also known as the ``virial
method''; Kaspi et al. 2000; Greene \& Ho 2005b; see Peterson 2007, and
references therein). Based on such mass estimates, local AGNs do seem to obey
roughly the same BH-host scaling relations as in inactive galaxies (Gebhardt
et al. 2000b; Ferrarese et al. 2001; McLure \& Dunlop 2001; Nelson et al.
2004; Onken et al. 2004; Greene \& Ho 2006b; Shen et al. 2008). It is crucial
to recognize, however, that at the moment this claim is based on {\it very}\
limited, and possibly biased, data. The bright AGN core renders measurement
of bulge luminosity and, in particular, central stellar velocity dispersion
extremely challenging (see detailed discussion in Greene \& Ho 2006a).
Notwithstanding these worries, many people now routinely assume that the
BH-host scaling relations can be directly applied to AGNs, of arbitrarily high
luminosity and redshift.
Here we propose a new angle to investigate the relationship between BH masses
and the host galaxies of AGNs. In normal, inactive galaxies, it is well known
that the stellar velocity dispersion of the bulge tracks the maximum rotation
velocity of the disk, roughly as ${\upsilon_m}$\ = $(\sqrt{2}-\sqrt{3}) \times \sigma_*$
(Whitmore et al. 1979). Although the theoretical
underpinnings of this correlation are still murky, and the $\upsilon_m-\sigma_*$\ relation
is not as tight as has been claimed (Ferrarese 2002; Baes et al. 2003;
Pizzella et al. 2005), nevertheless an empirical relation between ${\upsilon_m}$\ and
$\sigma_*$\ {\it does}\ exist (Courteau et al. 2007; Ho 2007a), which implies that
$\sigma_*$\ can be estimated from ${\upsilon_m}$. Now, ${\upsilon_m}$\ can be measured straightforwardly
through \ion{H}{1}\ observations for galaxies that are sufficiently gas-rich. In the
absence of a resolved rotation curve, ${\upsilon_m}$\ can be estimated from a single-dish
measurement of the \ion{H}{1}\ profile, provided that we have some constraint on the
inclination angle of the disk. With the rotation velocity in hand, we can
deduce immediately two physical quantities of interest: the total galaxy
luminosity through the Tully-Fisher relation (Tully \& Fisher 1977), and,
given an estimate of the size of the disk (e.g., from optical imaging), the
dynamical mass of the system. The advantages of this approach are clear.
Since the \ion{H}{1}\ is distributed mostly at large radii (e.g., Broeils \& Rhee
1997), it should be largely ``blind'' to the AGN core. This allows us to
circumvent the problems encountered in the optical, where attempts to
disentangle the bulge from the active nucleus are maximally impacted.
Applying this principle, Ho (2007b) has recently used the integrated profile
of the rotational CO line to infer certain properties of high-redshift quasar
host galaxies.
Apart from dynamical constraints, the \ion{H}{1}\ observations yield another piece
of information of significant interest---a first-order measurement of the cold
gas content. As the raw material responsible for fueling not only the growth
of the BH but ultimately also the galaxy, one can hardly think of a more
fundamental quantity to ascertain. How does the gas content of AGN host
galaxies compare with that in inactive galaxies? Depending on one's view on
the evolutionary path of AGNs and the efficacy of gas removal by AGN feedback,
active galaxies may be more or less gas-rich than inactive galaxies. Does
the gas content scale with the degree of nuclear activity? Does it vary with
the nebular properties of the central source? One of the major goals of this
paper is to try to address some of these basic questions, which to date
have been largely unanswered.
\section{The Database}
The analysis in this paper is based on the database presented in the companion
paper by Ho et al. (2008), to which the reader is referred for full details.
In brief, a comprehensive sample of 154 nearby type~1 (broad-lined) AGNs was
assembled, consisting of new Arecibo\footnote{The Arecibo Observatory
is part of the National Astronomy and Ionosphere Center, which is operated by
Cornell University under a cooperative agreement with the National Science
Foundation.} observations of 101 sources with $z$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\
0.11, mostly selected from the Fourth Data Release of the Sloan Digital Sky
Survey (SDSS; Adelman-McCarthy et al. 2006), and a supplementary sample of 53
other nearby sources collected from the literature. In addition to basic \ion{H}{1}\
properties (line fluxes, line widths, and radial velocities), we also
assembled optical data for the AGN (emission-line strengths, line widths, line
centroid, nuclear luminosity) and the host galaxy (image, concentration index,
size, axial ratio, total magnitude, central stellar velocity dispersion).
From this material a number of important physical parameters were derived,
including BH mass, Eddington ratio, morphological type, inclination angle,
deprojected rotation velocity, \ion{H}{1}\ mass, dynamical mass, estimated host galaxy
luminosity, and certain rudimentary properties of the narrow-line region.
Distance-dependent quantities were calculated assuming $H_0$ = 70
km s$^{-1}$~Mpc$^{-1}$, $\Omega_{m} = 0.3$, and $\Omega_{\Lambda} = 0.7$.
\section{Analysis}
\subsection{The $\upsilon_m-\sigma_*$\ Relation in Active Galaxies}
To set the stage for using ${\upsilon_m}$\ as a surrogate dynamical variable to
investigate BH-host scaling relations, we first examine the correlation
between ${\upsilon_m}$\ and $\sigma_*$\ for the active galaxies in our sample. Including all
objects that have measurements of both quantities, the scatter is
discouragingly large (Fig.~1{\it a}). Closer inspection, however, reveals
that many of the extreme outliers correspond to objects that have potentially
untrustworthy deprojected rotation velocities because of uncertain inclination
corrections, as well as a handful of sources whose central stellar velocity
dispersions were estimated indirectly from the width of the [\ion{O}{2}]\ \lamb3727
emission line (following Greene \& Ho 2005a). Removing these objects improves
the correlation. As further discussed in \S 3.5, a sizable fraction of the
objects in our sample contain non-classical \ion{H}{1}\ velocity profiles, which are
either single-peaked, highly asymmetric, or both, indicative of strongly
perturbed or dynamically unrelaxed \ion{H}{1}\ distributions. If we further remove
these cases---a cut that, unfortunately, drastically reduces the sample to
only 40 objects---a much cleaner correlation between ${\upsilon_m}$\ and $\sigma_*$\ emerges
(Fig.~1{\it b}). Because of the small number of objects and their limited
dynamic range, the present sample is not well suited to define the ${\upsilon_m}$--$\sigma_*$\
relation. Nevertheless, the present sample still reveals a statistically
significant correlation between ${\upsilon_m}$\ and $\sigma_*$; the Kendall's $\tau$
correlation coefficient is $r = 0.53$, which rejects the null hypothesis
of no correlation with a probability of $P$ = 98.2\%. We overplot on the
figure the ${\upsilon_m}$--$\sigma_*$\ relation obtained from the 550 ``kinematically normal,''
inactive spiral galaxies from the study of Ho (2007a). The ordinary
least-square bisector fit for the inactive objects,
\begin{equation}
\log \upsilon_m = (0.80\pm0.029) \log \sigma_* + (0.62\pm0.062),
\end{equation}
\noindent
provides a reasonably good match to the AGN sample. According to the
Kolmogorov-Smirnov test, the probability of rejecting the null hypothesis that
the AGN and non-AGN samples are drawn from the same parent population is $P$ =
69.2\%. We conclude that the two populations are not significantly different.
The scatter is still substantial, but recall that the zeropoint of the $\upsilon_m-\sigma_*$\
relation depends on morphological type, such that early-type systems have a
lower value of ${\upsilon_m}$/$\sigma_*$\ than late-type spirals (${\upsilon_m}$/$\sigma_*$\ $\approx$ $1.2-1.4$
for E and S0, compared with ${\upsilon_m}$/$\sigma_*$\ $\approx$ $1.6-1.8$ for spirals of type
Sc and later; see Ho 2007a). That the zeropoint for inactive spirals seems to
roughly match the current sample of AGNs suggests that their host galaxies are
mostly disk galaxies with modest bulge components, consistent with the actual
estimated BH masses (median {$M_{\rm BH}$}\ = $1.6\times10^7$ $M_\odot$) and
morphological types (median $T$ = 2.5, corresponding to Sab) of the objects.
Since local AGN host galaxies obey the $M_{\rm BH}-\sigma_*$\ relation, and we have just
shown that they roughly follow the same
\vskip 0.3cm
\figurenum{1}
\begin{figure*}[t]
\centerline{\psfig{file=f1.eps,width=19.5cm,angle=-90}}
\figcaption[fig1.ps]{
Correlation between the central stellar velocity dispersion and the maximum
rotation velocity for the active galaxies in our sample. Panel ({\it a})
plots all available data, including those that have uncertain inclination
corrections ({\it open circles}) and [\ion{O}{2}]-based estimates of the central
velocity dispersion ({\it open triangles}); objects with
both reliable rotation velocities and stellar velocity dispersions are plotted
as filled circles. No selection has been done with regards to \ion{H}{1}\ profile
type. Panel ({\it b}) excludes all objects with single-peaked and/or
asymmetric \ion{H}{1}\ profiles, as well as those with uncertain rotation velocities
and [\ion{O}{2}]-based estimates of the central velocity dispersion. Note the
significant reduction in scatter. The dashed line represents the fit to the
$\upsilon_m-\sigma_*$\ relation for the 550 ``kinematically normal'' inactive spiral
galaxies from the sample of Ho (2007a); see text.
\label{fig1}}
\end{figure*}
\vskip 0.3cm
\noindent
$\upsilon_m-\sigma_*$\ relation as in inactive galaxies, we expect {$M_{\rm BH}$}\
to be correlated with ${\upsilon_m}$. Not surprisingly, this is confirmed in
Figure~2, which additionally reemphasizes that a tighter relation results
after removing objects with single-peaked and/or asymmetric \ion{H}{1}\
profiles (panel {\it a})\footnote{For the rest of the paper, we have also
excluded two objects from the literature sample that have suspiciously low
values of ${\upsilon_m}$\ (Mrk 359, ${\upsilon_m}$\ = 38 km s$^{-1}$; Mrk 493, ${\upsilon_m}$\ =
24.7 km s$^{-1}$). Closer inspection of the original \ion{H}{1}\ data shows that Mrk 359
has a highly unusual profile consisting of a narrow peak and a very broad base
(Springob et al. 2005); moreover, the inclination angle given in Hyperleda,
$i$ = 39.5$^{\circ}$, seems inconsistent with the nearly face-on appearance of its
Digital Sky Survey image (see Appendix in Ho 2007a for a discussion of the
uncertainties associated with inclination angles listed in Hyperleda),
suggesting that we have seriously underestimated the inclination correction.
Mrk 493 may suffer from the same problem; the optical (SDSS) image of the
object appears to be much more face-on than Hyperleda's value of $i$ =
45$^{\circ}$.}. From the Kendall's $\tau$ test, $r = 0.58$ and $P$ = 99.9\%.
Overplotted on the figure is the relation expected from combining equation (1)
with the $M_{\rm BH}-\sigma_*$\ relation for local AGNs obtained by Greene \& Ho (2006b),
$\log (M_{\rm BH}/M_\odot) = 4.02 \log (\sigma_*/200\,{\rm km~s^{-1}}) + 7.96$:
\begin{equation}
\log (M_{\rm BH}/M_\odot) = 5.1 \log \upsilon_m - 4.4.
\end{equation}
\noindent
As with Figure~1, we ran a Kolmogorov-Smirnov test to see whether the present
sample of AGNs is drawn from the same population of objects used to
define the predicted correlation between $M_{\rm BH}$ and ${\upsilon_m}$; the probability
of rejecting the null hypothesis of no correlation is $P$ = 68.6\%. Again,
the two populations are not significantly different.
Nevertheless, the $M_{\rm BH}-\upsilon_m$\ relation still
contains significant scatter, even after omitting the kinematically peculiar
objects. What is responsible for this? We believe that most of the scatter
is intrinsic. The virial BH mass estimates for nearby AGNs, based on broad
H$\alpha$\ or H$\beta$, currently have an uncertainty of $\sim 0.3-0.5$ dex (Greene \&
Ho 2006b; Peterson 2007), and errors associated with the inclination
correction applied to the \ion{H}{1}\ line widths do not seem to be the dominant
source of scatter (Fig.~2{\it b}). Part of the scatter can be attributed
to an effect related to Hubble type. As mentioned above and
discussed at length in Ho (2007a), ${\upsilon_m}$/$\sigma_*$\ varies systematically with galaxy
concentration or morphological type, and because our sample contains a wide
range of morphological types (see Ho et al. 2008), we expect the zeropoint of
the $M_{\rm BH}-\upsilon_m$\ relation to be smeared by the actual mixture of galaxy types.
This is illustrated in Figure~2{\it c}, wherein the sample is divided into
four broad bins in Hubble type. The small subsamples make it difficult to see
the trend in detail, but comparison of the E+S0 galaxies with the later-type
spirals clearly shows that the two subgroups are offset from each other, in
the sense expected from the variation of ${\upsilon_m}$/$\sigma_*$\ with morphological type.
Taken at face value, another contribution to the scatter seems to be
connected to variations in AGN properties, as shown in Figures~2{\it d} and
2{\it e}, where we examine possible dependences on broad-line region line
width and Eddington ratio. Objects with broad H$\alpha$\ or H$\beta$\ full-width at
half maximum (FWHM) $\leq 2000$ km s$^{-1}$, commonly designated ``narrow-line''
Seyfert 1 (NLS1) galaxies, have a slight tendency to be offset toward lower
{$M_{\rm BH}$}\ for a given ${\upsilon_m}$\ (Fig.~2{\it d}). Since many NLS1 galaxies tend to have
elevated accretion rates (e.g., Collin \& Kawaguchi 2004), the weak trend
with FWHM may be more clearly discerned by dividing the sample according to
Eddington ratio. Indeed this expectation seems to hold, as shown in
Figure~2{\it e}, where, within the limited statistics, there appears to be a
monotonic decrease of {$M_{\rm BH}$}/${\upsilon_m}$\ with increasing $L_{{\rm bol}}$/$L_{{\rm Edd}}$. The trend with
Eddington
\vskip 0.3cm
\figurenum{2}
\begin{figure*}[t]
\centerline{\psfig{file=f2.eps,width=19.5cm,angle=-90}}
\figcaption[fig2.ps]{
Correlation between BH mass and the maximum rotation velocity deduced from
\ion{H}{1}\ observations. The different panels test for possible dependences on
({\it a}) \ion{H}{1}\ profile type, ({\it b}) inclination angle correction, ({\it c})
Hubble type, ({\it d}) FWHM of broad H$\alpha$\ or H$\beta$, and ({\it e}) Eddington
ratio. In panel ({\it a}), objects with uncertain inclination corrections
were removed; in panels ({\it b})--({\it e}), objects with single and/or
asymmetric \ion{H}{1}\ profiles were additionally removed. Panel ({\it f}) is the
same as panel ({\it e}), but the different Hubble types have been adjusted to
a common zeropoint by shifting ${\upsilon_m}$. The solid line represents the predicted
correlation for the whole sample, equation (2), given the $M_{\rm BH}-\sigma_*$\ relation of
Greene \& Ho (2006b) and the $\upsilon_m-\sigma_*$\ relation (eq. 1); the dotted and
dashed lines show the zeropoint offset (in {$M_{\rm BH}$}) for the objects with
$L_{\rm bol}/L_{\rm Edd} \leq 0.1$ and $L_{\rm bol}/L_{\rm Edd} > 0.1$,
respectively.
\label{fig2}}
\end{figure*}
\vskip 0.3cm
\noindent
ratio is stronger than that with luminosity alone (not shown). Could
the apparent variation of {$M_{\rm BH}$}/${\upsilon_m}$\ with FWHM and $L_{{\rm bol}}$/$L_{{\rm Edd}}$\ be a secondary
effect related to the dependence of {$M_{\rm BH}$}/${\upsilon_m}$\ on morphological type? The most
active AGNs in the nearby Universe live in moderate-mass, relatively late-type
disk galaxies (e.g., Heckman et al. 2004; Greene \& Ho 2007b); these are
precisely the hosts of high-$L_{{\rm bol}}$/$L_{{\rm Edd}}$\ systems, such as NLS1s. We attempt
to remove the Hubble type dependence by shifting ${\upsilon_m}$\ for the different Hubble
type subgroups to a common reference zeropoint (defined by the entire sample).
As expected, the scatter goes down, but, interestingly, the trend with
$L_{{\rm bol}}$/$L_{{\rm Edd}}$\ persists (Fig.~2{\it f}).
\subsection{Correlation between {$M_{\rm BH}$}\ and Galaxy Dynamical Mass}
Although our \ion{H}{1}\ observations yield no information on the spatial distribution
of the gas, we can still obtain a rough estimate of the characteristic
dynamical mass of the galaxy by combining the deprojected rotation velocity
from the \ion{H}{1}\ line width with the optical diameter. This approach is justified
for the following reason. The size of the \ion{H}{1}\ disk in spiral galaxies over
a wide range of Hubble types and luminosities tightly scales with the size
of the optical disk; within 30\%--40\%, $D_{\rm H~{\tiny I}}/D_{\rm 25}
\approx 1.7$ (Broeils \& Rhee 1997; Noordermeer et al. 2005), where
$D_{\rm 25}$ is the optical isophotal diameter at a surface brightness level
of $\mu_B = 25$ mag~arcsec$^{-2}$. Thus, even if the absolute dynamical mass
may be uncertain because of our ignorance of the size of the \ion{H}{1}-emitting
disk, the {\it relative}\ masses should be reasonably accurate. Following
Casertano \& Shostak (1980), Ho et al. (2008) calculated dynamical masses for
our AGN sample, which, interestingly, show a fairly strong correlation with
BH mass (Fig.~3). In particular, BH mass correlates more strongly with
$M_{\rm dyn}$, which scales as $D_{\rm 25} \upsilon_m^2$, than with
$\upsilon_m$ alone. An ordinary least-squares bisector fit to the entire
sample, after making the quality cut as above, gives
\begin{equation}
\log (M_{\rm BH}/M_\odot) = (1.29\pm0.11) \log (M_{\rm dyn}/M_\odot) -
(6.96\pm1.17),
\end{equation}
\noindent
with an rms scatter of 0.61 dex. The slope is formally steeper than unity,
but this result should be regarded as provisional considering that most of the
objects span a relatively narrow range of $M_{\rm dyn}$. Closer inspection
reveals that the zeropoint of the $M_{\rm BH}-M_{\rm dyn}$ correlation also
depends on Hubble type (Fig.~3{\it a}). At a given $M_{\rm dyn}$, early-type
galaxies have a higher {$M_{\rm BH}$}\ than late-type galaxies. Relative to the
best-fitting line for the whole sample, at a fixed log~{$M_{\rm BH}$}\ the shift in
$\log M_{\rm dyn}$ is $+0.19$, $-0.05$, $-0.10$, and $-0.59$ for galaxies of
type E/S0, Sa, Sb, and Sc/Sd,
\vskip 0.3cm
\figurenum{3}
\begin{figure*}[t]
\centerline{\psfig{file=f3.eps,width=19.5cm,angle=-90}}
\figcaption[fig3.ps]{
Correlation between BH mass and dynamical mass of the host galaxy. Objects
with uncertain inclination corrections and that have single and/or asymmetric
\ion{H}{1}\ profiles are excluded. Panel ({\it a}) divides the sample into Hubble
types, and panel ({\it b}) bins the sample by Eddington ratio, after adjusting
the different Hubble types to a common zeropoint by shifting $M_{\rm dyn}$.
The solid line shows the ordinary least-squares bisector fit given in equation
(3) for the entire sample; the dotted and dashed lines show the zeropoint
offset (in {$M_{\rm BH}$}) for the objects with $L_{\rm bol}/L_{\rm Edd} \leq 0.1$ and
$L_{\rm bol}/L_{\rm Edd} > 0.1$, respectively.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\noindent
respectively. If we adjust the zeropoint of
these groups by their corresponding offsets (in $M_{\rm dyn}$), the rms scatter
of the resulting correlation decreases to 0.55 dex (Fig.~3{\it b}). As in the
case of the $M_{\rm BH}-\upsilon_m$\ diagram, we find that at a given $M_{\rm dyn}$ objects with
higher $L_{{\rm bol}}$/$L_{{\rm Edd}}$\ tend to have smaller {$M_{\rm BH}$}.
\subsection{The Tully-Fisher Relation for Active Galaxies}
The empirical correlation between ${\upsilon_m}$\ and total galaxy luminosity, first
introduced by Tully \& Fisher (1977), provides yet another avenue to assess
potential differences between active and inactive galaxies. As described in
Ho et al. (2008), approximate optical host galaxy luminosities can be obtained
by comparing the integrated photometry with the nuclear spectroscopy. Figure~4
plots the host galaxy absolute magnitude (after removing the AGN contribution)
converted to the $B$ band, versus the maximum rotation velocity; overlaid for
comparison is the corresponding Tully-Fisher relation for inactive (spiral)
galaxies in the Ursa Major cluster (Verheijen 2001). Considering first the
entire sample (Fig.~4{\it a}), $M_{B,{\rm host}}$ loosely traces ${\upsilon_m}$, but
the scatter is again considerable and the apparent correlation is formally
statistically insignificant ($r = -0.23$, $P$ = 93.3\%). Pruning, as before,
the objects with unreliable inclination corrections and kinematically peculiar
profiles filters out most of the egregious outliers, such that the remaining
sample falls mostly within the locus of inactive spirals (Figs.~4{\it b} and
4{\it c}). The formal correlation between $M_{B,{\rm host}}$ and ${\upsilon_m}$\ is
still low ($r = -0.30$) and only marginally significant ($P$ = 95.0\%),
probably because of the small sample size and limited dynamic range
in ${\upsilon_m}$\ (90\% of the sample has ${\upsilon_m}$\ = 200$\pm$100 km s$^{-1}$).
Apart from the larger scatter exhibited by the active sample, an
effect that can be attributed, at least partly, to its broad mixture of Hubble
types, the approximate nature in which the host luminosities were estimated,
and the sensitivity of the blue bandpass to extinction and stellar population
variations (e.g., De~Rijcke et al. 2007), there are no other gross differences
between the Tully-Fisher relation of active and inactive galaxies. Different
Hubble types define slightly offset, parallel sequences (Fig.~4{\it b}), an
effect that has been well-documented in the $B$ band. For a given $B$-band
luminosity, an early-type spiral has a larger characteristic rotation
amplitude than a late-type spiral (e.g., Rubin et al. 1985; De~Rijcke et al.
2007). To separate out this effect, we applied small corrections to the
zeropoints (in host galaxy luminosity) of each Hubble type bin. The adjusted
distribution shows no obvious segregation by AGN properties, such as Eddington
ratio (Fig.~4{\it c}).
\subsection{\ion{H}{1}\ Content}
We next examine the \ion{H}{1}\ content of our sample, with special emphasis on
whether AGN hosts differ in any noticeable way from inactive galaxies. In
absolute terms, our survey objects are quite gas-rich in neutral atomic
hydrogen. The \ion{H}{1}\ masses for the detected sources range from $M_{{\rm H~I}}$\
$\approx\,10^9$ to $4\times10^{10}$ $M_\odot$\ (Ho et al. 2008). Taking into
account the upper limits for the nondetections, the Kaplan-Meier product-limit
estimator (Feigelson
\vskip 0.3cm
\figurenum{4}
\begin{figure*}[t]
\centerline{\psfig{file=f4.eps,width=19.5cm,angle=-90}}
\figcaption[fig4.ps]{
The $B$-band Tully-Fisher relation for the active galaxies in our sample.
In panel ({\it a}), solid symbols denote reliable deprojected rotation
velocities, while less reliable values are marked with open symbols; crosses
indicate lower limits on ${\upsilon_m}$\ that lack estimates of the inclination angle.
In panel ({\it b}), the sample is divided according to Hubble type (open
circles = E/S0; triangles = Sa; squares = Sb; filled circles = Sc/Sd), and
panel ({\it c}) bins the sample by Eddington ratio (solid, open, and star
symbols denote objects with $L_{\rm bol}/L_{\rm Edd} \leq 0.1$, $0.1-0.3$,
and $> 0.3$, respectively), after adjusting the different Hubble types to a
common zeropoint by shifting $M_{B,{\rm host}}$. Objects with uncertain
deprojected rotation velocities and single-peaked and/or asymmetric \ion{H}{1}\
profiles were removed from panels ({\it b}) and ({\it c}). The {\it solid}\
line represents the $B$-band Tully-Fisher relation for inactive spiral
galaxies, as derived by Verheijen (2001) for galaxies in the Ursa Major
cluster; the {\it dashed}\ lines mark the region that has twice the rms
scatter of the Ursa Major sample.
\label{fig4}}
\end{figure*}
\vskip 0.3cm
\noindent
\& Nelson 1985) yields a mean of $M_{{\rm H~I}}$\ =
$(7.0\pm0.66)\times10^{9}$ $M_\odot$\ and a median of $M_{{\rm H~I}}$\ =
$5.0\times10^{9}$ $M_\odot$. This is almost identical to the total
\ion{H}{1}\ mass of the Milky Way ($5.5 \times 10^9$ $M_\odot$; Hartmann \& Burton
1997), and comparable to the average value for Sb spirals (Roberts \& Haynes
1994). The above statistics pertain just to the newly surveyed SDSS sources.
The literature sample is both heterogeneous and possibly biased because the
database from which the \ion{H}{1}\ data were compiled,
Hyperleda\footnote{{\tt http://leda.univ-lyon1.fr/}}, does not report
upper limits. Nonetheless, we confirm that combining both samples does not
significantly alter these statistics.
The \ion{H}{1}\ content of galaxies varies greatly and systematically across the
Hubble sequence (Haynes \& Giovanelli 1984; Roberts \& Haynes 1994). Thus,
in order to conduct a meaningful comparison of the \ion{H}{1}\ budget of active
and inactive galaxies, we must know their morphological types. Among the
154 sources in our sample that have \ion{H}{1}\ detections or meaningful upper
limits, 148 have estimates of both their morphological type and host
galaxy optical luminosity. Figure~5 shows the distribution of \ion{H}{1}\ masses
normalized to the $B$-band luminosity of the host galaxy, subdivided into six
bins of Hubble types. For reference, we computed the distribution of
$M_{{\rm H~I}}/L_{B,{\rm host}}$ for 13,262 inactive galaxies culled from
Hyperleda. These represent all galaxies in the database, reported to be
current up to the end of 2003, that have reliable entries of \ion{H}{1}\ flux,
morphological type, and $B$-band magnitude; known AGNs were excluded, as
discussed in Ho et al. (2008; Appendix). As previously mentioned, Hyperleda
does not record upper limits for \ion{H}{1}\ nondetections, so these distributions
should be viewed strictly as upper bounds. Since the majority of the galaxies
in Hyperleda are relatively bright and nearby, the detection rate of \ion{H}{1}\
among the spirals should be very high ($\sim$90\%; Haynes \& Giovanelli
1984), such that the observed distribution can be regarded as being quite
close to the true distribution. The situation for early-type galaxies,
however, is quite different, because the detection rate is only $\sim$15\% for
ellipticals (Knapp et al. 1985) and $\sim$30\% for S0s (Wardle \& Knapp 1986),
and so the distributions shown in the figure are highly biased.
Comparison of the active and inactive distributions reveals two interesting
points. Among mid- to late-type spirals (Sb and later), the frequency
distribution of $M_{{\rm H~I}}/L_{B,{\rm host}}$ is roughly the same for the
two populations. However, for the bulk of the sample, which comprises
bulge-dominated Sa spirals and S0s, active galaxies appear to be {\it more}\
gas-rich than their inactive counterparts. The most dramatic manifestation of
this effect shows up among the ellipticals, although here we
are handicapped somewhat by the small number of sources (9) in the active
sample. Note that among the E and S0 subgroups, the difference between
the active and inactive samples is far greater than portrayed (although
we have no rigorous statistical way to quantify this) because, as mentioned
above, the inactive distribution is highly biased by the omission of upper
limits. (As noted, the literature data for the active sample are likely
biased as well, but we verified that our conclusions are essentially
unaffected if we exclude these data from the AGN sample.) The host galaxies of
nearby type~1 AGNs across all Hubble types are at least as gas-rich as
inactive galaxies, and among early-type systems their
\vskip 0.3cm
\figurenum{5}
\psfig{file=f5.eps,width=8.5cm,angle=0}
\figcaption[fig5.ps]{
The distribution of \ion{H}{1}\ masses normalized to the $B$-band luminosity of the
host galaxy, as a function of its Hubble type. The black histograms show the
distributions for 13,262 inactive galaxies with \ion{H}{1}\ detections listed in
Hyperleda. The AGN sample from our study is plotted in red histograms, with
upper limits indicated by dotted lines. The entire sample of 154 objects is
plotted in the second panel from the bottom, while the top panels show
the sample sorted by Hubble type. The number of inactive galaxies in
each group is labeled, with the number of active objects shown in
parentheses. The bottom-most panel isolates the quasars. The red
histograms correspond to objects from the current \ion{H}{1}\ sample; the blue
histograms correspond to PG quasars observed in CO (Ho 2005a); the green
histograms correspond to $z < 0.5$ PG quasars with dust masses (Haas et al.
2003); see text for details.
\label{fig5}}
\vskip 0.3cm
\noindent
gas content appears to
be markedly enhanced.
\subsection{Environmental Effects}
Crude information on the spatial distribution or dynamical state of the gas
can be ascertained from the degree to which the observed \ion{H}{1}\ line profiles
deviate from the classical double-horned signature of an optically thin
rotating disk. Experience with nearby galaxies indicates that tidally
disturbed systems often exhibit asymmetric, single-peaked, or otherwise highly
irregular line profiles (e.g., Gallagher et al. 1981; Haynes et al. 1998).
Given the modest signal-to-noise ratio of our data, Ho et al. (2008) decided
not to implement a rigorous scheme to quantify the line morphology, but rather
adopted a qualitative classification scheme in which obviously non-classical
profiles were simply labeled as ``A'' (asymmetric), ``S'' single-peaked, or
``AS'' (combination of both). Among the 66 objects detected in our new survey,
eight (12\%) are classified as ``A,'' five (8\%) are classified as ``S,''
and 16 (24\%) are classified as ``AS,'' for an overall frequency of
non-classical profiles of 44\%. Ho et al. (2008) performed a systematic
census of physical neighbors within a search radius of 7\farcm5 around each
object. There seems to be no clear association between profile peculiarity
and the presence of nearby neighbors. While some objects with single-peaked
and/or asymmetric profiles have plausible nearby companions, many do not;
at the same time, a number of objects with apparent companions exhibit
seemingly regular line profiles. The presence or absence of kinematic
irregularity also appears to be uncorrelated with any of the global or AGN
parameters that we have at our disposal. We searched for possible differences
in the following quantities, but found none that was statistically
significant: morphology type, galaxy luminosity, total and relative \ion{H}{1}\ mass,
AGN luminosity, broad-line region FWHM, BH mass, and Eddington ratio.
\subsection{Connections with AGN Properties}
The availability of optical data affords us an opportunity to explore possible
connections between the \ion{H}{1}\ and AGN properties of our sample. Concentrating
on the SDSS objects, for which we have homogeneous optical spectroscopic
measurements, we find that their narrow emission-line ratios place the
majority of them in the territory of Seyfert nuclei (Fig.~6). This is not
surprising, since most of the SDSS-selected type~1 AGNs tend to have
relatively high accretion rates (Greene \& Ho 2007b), which generally
correspond to high-ionization sources (Ho 2004b). The objects detected in
\ion{H}{1}\ do not stand out in any noticeable way from the nondetections. The
same holds for the electron densities of the narrow-line region, as can be
inferred from the line ratio [\ion{S}{2}]\ \lamb6716/[\ion{S}{2}]\ \lamb6731: the \ion{H}{1}\
detections are statistically indistinguishable from the \ion{H}{1}\ nondetections
(Fig.~7). Next, we searched for a possible dependence of AGN
luminosity or Eddington ratio on \ion{H}{1}\ content, either in absolute ($M_{{\rm H~I}}$) or
relative ($M_{{\rm H~I}}/L_{B,{\rm host}}$) terms, but again found
none (Fig.~8). There is, at best, a mild trend of increasing H$\alpha$\ luminosity
with increasing \ion{H}{1}\ mass (Fig.~8{\it a}), but given the mutual dependence of
the two quantities on distance, we regard this result as highly suspect.
\section{Discussion}
\subsection{Black Hole-Host Galaxy Scaling Relations}
The relative ease with which integrated \ion{H}{1}\ emission can be detected in
nearby AGNs opens up the possibility of using the global \ion{H}{1}\ line width as
a new dynamical variable to investigate the scaling between BH mass and host
galaxy potential. A quantity that can be readily measured from single-dish
spectra, the \ion{H}{1}\ line width has been widely used in a variety of
extragalactic contexts as an effective shortcut to estimate ${\upsilon_m}$, the maximum
rotation velocity of galaxy disks. If ${\upsilon_m}$\ correlates with bulge velocity
dispersion, $\sigma_*$, as originally suggested by Whitmore et al. (1979), ${\upsilon_m}$\ can
be used in place of $\sigma_*$\ to define a $M_{\rm BH}-\upsilon_m$\ relation to substitute for the more
traditional $M_{\rm BH}-\sigma_*$\ relation. A number of authors have suggested that
galaxies over a wide range in morphological types obey a tight correlation
between ${\upsilon_m}$\ and $\sigma_*$, to the point that ${\upsilon_m}$\ can actually replace, and
perhaps should be regarded as more fundamental than, $\sigma_*$\ (Ferrarese 2002;
Baes et al. 2003; Pizzella et al. 2005). This result has been challenged by
Courteau et al. (2007) and Ho (2007a), who found, using much larger samples,
that the ratio ${\upsilon_m}$/$\sigma_*$\ shows significant intrinsic scatter and systematic
variation across the Hubble sequence. Nevertheless, a $\upsilon_m-\sigma_*$\ relation
{\it does}\ exist, at least statistically, and in circumstances when $\sigma_*$\ is
difficult or impossible to measure (e.g., in very bright or very distant
AGNs), ${\upsilon_m}$\ may offer the best or possibly the {\it only}\ means of
constraining the host galaxy. Such was the motivation behind the recent study
of Ho (2007b; see also
\vskip 0.3cm
\figurenum{6}
\begin{figure*}[t]
\centerline{\psfig{file=f6.eps,width=19.5cm,angle=-90}}
\figcaption[fig6.ps]{
The location of the \ion{H}{1}\ sample on the line intensity ratio diagrams
({\it a}) log [\ion{O}{3}]\ \lamb5007/H$\beta$\ vs. log [\ion{N}{2}]\ \lamb6583/H$\alpha$\ and
({\it b}) log [\ion{O}{3}]\ \lamb5007/H$\beta$\ vs. log [\ion{S}{2}]\ $\lambda$\lamb6716, 6731/H$\alpha$.
Objects detected and undetected in \ion{H}{1}\ are plotted as solid circles and
triangles, respectively. The crosses represent \ion{H}{2}\ nuclei and the open
squares AGNs from the Palomar survey of nearby galaxies (Ho et al. 1997a).
The \ion{H}{1}\ sample mostly occupies the locus of Seyfert galaxies, with no
apparent variation with \ion{H}{1}\ detection.
\label{fig6}}
\end{figure*}
\vskip 0.3cm
\noindent
Shields et al. 2006), who used the width of the
rotational CO line, locally calibrated against \ion{H}{1}, to infer the masses of the
host galaxies of high-redshift quasars. The line width method, however, can
prove to be useful even under less extreme conditions. As discussed at length
in Greene \& Ho (2006a), a variety of factors conspire to make measurement of
$\sigma_*$\ very challenging in galaxies containing bright AGNs, regardless of their
redshift. To bypass this difficulty, many studies resort to using gas velocity
dispersions measured from narrow nebular lines, but this shortcut has its own
set of complications (Greene \& Ho 2005a). Others skip the kinematical route
altogether and instead use the host galaxy (or bulge, if available) luminosity
as the variable to relate to the BH mass. However, cleanly
\vskip 0.3cm
\figurenum{7}
\psfig{file=f7.eps,width=8.5cm,angle=0}
\figcaption[fig7.ps]{
Distribution of the line intensity ratio
[\ion{S}{2}]\ \lamb6716/[\ion{S}{2}]\ \lamb6731, which is inversely proportional to the
electron density, for objects detected (hashed histograms) and
undetected (open, dashed histograms) in \ion{H}{1}.
\label{fig7}}
\vskip 0.3cm
\noindent
decomposing the
underlying galaxy, much less its bulge, in the presence of a bright AGN core
is often a nontrivial task, even under the most ideal conditions (e.g., Kim
et al. 2007).
Within this backdrop, we explored the correlation between BH mass and ${\upsilon_m}$.
As expected, a loose correlation exists between these two quantities, but the
scatter is enormous, $\sim 1$ dex (Fig.~2{\it a}). Many of the extreme
outliers correspond to objects with single-peaked and/or clearly asymmetric
\ion{H}{1}\ profiles, while others have especially questionable inclination
corrections; removing these produces a cleaner correlation.
Note that apart from being inefficient (roughly 40\% of the sample was
rejected), this step imposes no serious complication, since classical
double-horned line profiles are easy to recognize. Nonetheless, even after
this filtering, the scatter is still substantial (0.9 dex; Fig.~2{\it b}). We
identified a number of parameters associated with the AGN that seem to
contribute to the scatter (FWHM of the broad emission lines, nuclear
luminosity, and Eddington ratio), but an important contribution comes from
the morphological type variation within the sample (Fig.~2{\it c}). As
discussed in Courteau et al. (2007) and Ho (2007a), the zeropoint of the
$\upsilon_m-\sigma_*$\ relation varies systematically with galaxy concentration (which is
loosely related to galaxy morphology or bulge-to-disk ratio). If we
arbitrarily shift the zeropoints of the different Hubble type bins to the
average relation for all Hubble types, the scatter decreases somewhat to
$\sim 0.7$ dex (Fig.~2{\it f}).
More promising still seems to be the correlation between BH mass and
galaxy dynamical mass, which requires knowledge of one additional parameter,
namely the galaxy's optical diameter. Retaining, as before, only the objects
with robust inclination corrections and kinematically normal \ion{H}{1}\ profiles,
the $M_{\rm BH}-M_{\rm dyn}$ has an rms scatter of 0.61 dex, which further
improves to 0.55 dex after shifting the different Hubble types to a common
reference point (Fig.~3). While this scatter is larger
\vskip 0.3cm
\figurenum{8}
\begin{figure*}[t]
\centerline{\psfig{file=f8.eps,width=19.5cm,angle=0}}
\figcaption[fig8.ps]{
The variation of nuclear H$\alpha$\ luminosity (panels {\it a}\ and {\it b})
and Eddington ratio (panels {\it c}\ and {\it d}) with \ion{H}{1}\ content. Objects
detected in \ion{H}{1}\ are plotted as filled symbols, while the nondetections are
marked as open symbols with arrows.
\label{fig8}}
\end{figure*}
\vskip 0.3cm
\noindent
than that of the
local AGN $M_{\rm BH}-\sigma_*$\ relation (0.4 dex; Greene \& Ho 2006b), our estimates
of the dynamical masses leave much room for improvement, both in terms of
getting higher signal-to-noise ratio line profiles and deeper optical images,
which will allow more accurate measurements of inclination angles and
isophotal diameters and better estimates of morphological types.
\vskip 1.0cm
\subsection{The Chicken or the Egg}
One of the most intriguing results from our analysis is the tentative
detection, both in the $M_{\rm BH}-\upsilon_m$\ (Fig.~2{\it f}) and $M_{\rm BH}-M_{\rm dyn}$
(Fig.~3{\it b}) diagrams, of differential growth between the central BH and
the host galaxy. For a given host galaxy potential (${\upsilon_m}$\ or $M_{\rm dyn}$),
AGNs with higher accretion rates (Eddington ratios) have systematically
{\it less}\ massive BHs, implying that in these systems the central BH, still
vigorously accreting near its maximum rate, has yet to reach its final mass,
which, from observations of inactive systems, we know to be a well-defined
constant fraction of the total bulge luminosity or mass (Kormendy \& Richstone
1995; Magorrian et al. 1998; H\"aring \& Rix 2004). Many (but not all) of the
objects with still-growing BHs turn out to have narrower broad Balmer lines
(FWHM {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\ 2000 km s$^{-1}$) because NLS1 galaxies tend to have higher accretion
rates. At a fixed ${\upsilon_m}$, the AGNs in our sample with $L_{{\rm bol}}$/$L_{{\rm Edd}}$\ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$}\ 0.1 on
average have BHs 0.19 dex (factor of 1.5) less massive than those with
$L_{{\rm bol}}$/$L_{{\rm Edd}}$\ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\ 0.1; in terms of fixed $M_{\rm dyn}$, the difference in BH
masses between the low- and high-accretion rate subgroups is $\sim$0.41
dex (factor of 2.6). In their study of type 1 AGNs with stellar velocity
dispersion measurements, Greene \& Ho (2006a) also noticed that the
best-fitting $M_{\rm BH}-\sigma_*$\ relation for AGNs has a small zeropoint offset of
$-$0.17 dex relative to Tremaine et al.'s (2002) fit of the $M_{\rm BH}-\sigma_*$\ relation
for inactive galaxies. A result similar to that of Greene \& Ho (2006a) has
been reported by Shen et al. (2008), who additionally note that the amplitude
of the $M_{\rm BH}-\sigma_*$\ relation for AGNs depends on Eddington ratio, in the same
sense that we find in our study. The overall qualitative agreement among the
three independent studies lends credence to the idea that the most highly
accreting BHs are still actively growing.
Mathur et al. (2001; see also Grupe \& Mathur 2004; Mathur \& Grupe 2005),
using the width of the [\ion{O}{3}]\ $\lambda$ 5007 line as a substitute for $\sigma_*$, a
shortcut that allows large numbers of AGNs to be placed on the $M_{\rm BH}-\sigma_*$\
relation, proposed that NLS1s contain BHs that are undermassive with respect
to their bulges, by as much as an order of magnitude. Employing the same
technique, Bian \& Zhao (2004) arrived at a similar result, as did Wandel
(2002), who used bulge luminosities as a reference. The claim that NLS1s
are ``young'' AGNs, however, has not been universally embraced, in part
because different authors, using the same methods, have arrived at conflicting
conclusions (e.g., Wang \& Lu 2001), because of worries concerning the
reliability of the [\ion{O}{3}]-based velocity dispersions (Botte et al. 2004; Greene
\& Ho 2005a), and, most seriously, because direct measurements of stellar
$\sigma_*$\ in NLS1s find no compelling evidence that they deviate from the $M_{\rm BH}-\sigma_*$\
relation (Barth et al. 2005; Botte et al. 2005; Greene \& Ho 2006b). Part of
the confusion stems from the very definition of the NLS1 class. Strictly
speaking, NLS1s are classified on the basis of optical spectroscopic criteria
(e.g., Pogge 2000), which, apart from the (fairly arbitrary) line width limit
of FWHM $\leq$ 2000 km s$^{-1}$, are neither rigorously defined nor universally
followed. That many cataloged and well-studied NLS1s have high accretion
rates (e.g., Pounds et al. 1995; Collin \& Kawaguchi 2004) is a statistical
reflection of the fact that bright AGNs in the local Universe tend to have
moderately low-mass BHs (Heckman et al. 2004; Greene \& Ho 2007b), and that
soft X-ray selection preferentially favors systems in their high-state (Greene
\& Ho 2007a). There is no requirement that NLS1s must have high accretion
rates. The present sample, in fact, provides two useful examples. By their
line widths alone, both NGC 4051 and NGC 4395 qualify as NLS1s, and yet the
former has $L_{{\rm bol}}$/$L_{{\rm Edd}}$\ = $6.3\times10^{-2}$ while the latter, with
$L_{{\rm bol}}$/$L_{{\rm Edd}}$\ = $2.7\times10^{-3}$, formally has the lowest Eddington ratio in
the sample.
That {$M_{\rm BH}$}/${\upsilon_m}$\ and {$M_{\rm BH}$}/$M_{\rm dyn}$ decrease {\it systematically}\ with
increasing $L_{{\rm bol}}$/$L_{{\rm Edd}}$\ suggests not only that the growth of the BH and the
growth of the galaxy are not finely synchronized, but also that there is a
preferential sequence to their coevolution: during any particular common
growth event, star formation {\it precedes}\ BH growth. Thus, at least for
the local sample under consideration, we have a tentative solution to the
proverbial ``the chicken or the egg problem.'' Phenomenologically, starburst
activity comes first, followed by AGN activity---not the other way around, and
probably not concurrently. This resembles the scenario envisioned by Ho
(2005a, 2005b; see also Kim et al. 2006) in his analysis of the star formation
rate in nearby quasars. Interestingly, the sequence of events seems to be
reversed at high redshifts, where the best evidence to date suggests that
galaxy growth actually {\it lags}\ BH growth (Peng et al. 2006a, 2006b; Shields
et al. 2006; Ho 2007b). One might speculate that the pathway for BH growth
and its interdependence on the host galaxy could be radically different for
luminous quasars in massive, early-type galaxies, compared to the much more
modest AGNs under consideration here, which reside mostly in later type, disk
systems (e.g., Hopkins \& Hernquist 2006). On the other hand, we note that
moderate-redshift ($z=0.36$) AGNs, mostly Seyfert galaxies not too dissimilar
from those in our sample, also follow the trend of high-redshift quasars in
showing a higher ratio of BH mass to host galaxy (bulge) mass (Woo et al.
2006; Treu et al. 2007). We cannot, at the moment, reconcile these
conflicting results, which urgently need to be verified.
We end the discussion on BH growth with some cautionary remarks. Throughout
this study we take at face value that the virial BH masses contain no major
sources of systematic error, other than the $\sim$0.5 dex uncertainty in the
zeropoint (Greene \& Ho 2005b; Peterson 2007). As discussed in Greene \& Ho
(2007c; Appendix), however, our assumption that the virial mass formalism is
invariant with respect to AGN properties is, at the moment, more an article of
faith than an indisputable fact. The size-luminosity relation on which the
virial formalism depends continues to undergo revision and is especially poorly
constrained at the low-luminosity end (Bentz et al. 2006). The very structure
of the broad-line region, and hence the geometric (``$f$'') factor of the
virial relation, may depend on accretion rate or Eddington ratio, but
presently we have no reliable means to quantify this effect (Greene \& Ho
2007c). Strictly speaking, therefore, the trend of decreasing {$M_{\rm BH}$}/${\upsilon_m}$\ and
{$M_{\rm BH}$}/$M_{\rm dyn}$ with increasing $L_{{\rm bol}}$/$L_{{\rm Edd}}$\ can be interpreted as evidence
that {$M_{\rm BH}$}\ is systematically underestimated in high-$L_{{\rm bol}}$/$L_{{\rm Edd}}$\ AGNs.
Resolving this degeneracy lies beyond the scope of this work. At the
moment we also cannot prove that the identified trends are not the result of
subtle mass-dependent selection effects. In their investigation of the local
BH mass function of nearby AGNs, Greene \& Ho (2007b) note that low-mass BHs,
which even at their Eddington limit are still relatively faint AGNs,
preferentially make it into a magnitude-limited survey such as SDSS if they
reside in more luminous (more massive) galaxies.
\subsection{Host Galaxies and Environment}
Even though most sizable nearby galaxies harbor central BHs, they exhibit
very feeble traces of nuclear activity (Ho et al. 1997b). Only a tiny
fraction of the local galaxy population accrete at rates high enough to
qualify them as respectable AGNs (Ho 2004b; Greene \& Ho 2007b). Why is this?
The low space density of luminous AGNs in the local Universe surely reflects
some combination of the overall reduced gas supply in present-day galaxies
as well as their more placid dynamical environments, but precisely which
physical parameters---internal or external to the system---actually trigger the
onset of nuclear activity in any given galaxy remains largely an unsolved
problem (Ho et al. 1997c, 2003; Ho 2004b). In a similar vein, and especially
given the recent discussion of the purported link between BH and galaxy
growth, it is of interest to know what impact AGN activity actually has on the
properties of the host galaxy.
Our new \ion{H}{1}\ survey offers an opportunity to examine these issues from some
fresh angles. Despite the limited quality of the data, our visual examination
of the optical images does not give the impression that the galaxies or their
immediate environments are particularly unusual. Some clear cases of
interacting or binary systems exist within a separation of 50 kpc (Ho et al.
2008; see Fig.~3{\it g}, SDSS~J165601.61+211241.2; Fig.~4{\it b},
SDSS~J130241.53+040738.6), but overall the optical morphologies of the hosts
appear undisturbed, and large (i.e., nearly equal-mass) physical companions are
rare. There are also no discernible differences between the objects
detected and undetected in \ion{H}{1}\ (cf. Fig.~3 vs. Fig.~4 in Ho et al. 2008).
Excluding the objects with anomalous \ion{H}{1}\ profiles, the rest of the sample is
not just morphologically normal but also dynamically normal. We draw this
inference from our analysis of the Tully-Fisher relation
for the active galaxies, which, besides the anticipated larger scatter due
to various measurement uncertainties, reveals no striking differences compared
to regular disk galaxies (Fig.~4). As an aside, it is worth making an obvious
point: the fact that our objects obey the Tully-Fisher relation implies that
the \ion{H}{1}\ must be roughly regularly distributed, at least regular enough so
that its integrated line width traces the flat part of the galaxy rotation
curve. Hutchings et al. (1987) and Hutchings (1989) previously failed to
see a clear Tully-Fisher relation for their sample of AGNs because of the
high frequency of asymmetric \ion{H}{1}\ profiles. By contrast, the more extensive
study by Whittle (1992) found that Seyfert galaxies do define a Tully-Fisher
relation, but one that is offset toward lower velocities compared to normal
spiral galaxies. He interpreted this as an indication that Seyfert galaxies
have lower mass-to-light ratios (by a factor of $1.5-2$), possibly as a
result of enhanced star formation. Our results do not agree with Whittle's.
We have verified that the discrepancy between the two studies does not stem
from the different fiducial relations chosen for inactive spirals. While
Whittle's reference Tully-Fisher relation has a much steeper slope than the
one we adopted (from Verheijen 2001), the two zeropoints are very similar. We
speculate, but cannot prove, that the low-velocity offset seen by Whittle may
be caused by contamination from objects with kinematically peculiar,
preferentially narrow \ion{H}{1}\ profiles. Inspection of our data (Fig.~4) reveals
that prior to excluding the objects with asymmetric or single-peaked profiles
our sample also has a mild excess of low-velocity points. Indeed, a
nonnegligible fraction of all galaxies, irrespective of their level of nuclear
activity, show this behavior, which Ho (2007a) attributes to dynamically
unrelaxed gas acquired either through minor mergers or primordial accretion.
Enhanced star formation frequently seems to precede nuclear activity (Kauffmann
et al. 2003; see discussion in Ho 2005b), but at least for the AGN luminosities
probed in this study, the extent of the young stellar population has left no
visible imprint on the mass-to-light ratio of host galaxies, insofar as
we can gauge from the Tully-Fisher relation.
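As an illustration of this point, the short Python sketch below converts an
integrated \ion{H}{1}\ line width into an inclination-corrected rotation
velocity and then into a Tully-Fisher magnitude prediction. The slope and
zeropoint in the sketch are illustrative placeholders only, not the Verheijen
(2001) calibration adopted above.
\begin{verbatim}
import math

def vrot_from_width(w20_kms, incl_deg):
    # Inclination-corrected rotation velocity from an integrated
    # HI line width; valid when the width traces the flat part of
    # the galaxy rotation curve.
    return w20_kms / (2.0 * math.sin(math.radians(incl_deg)))

def tf_absolute_mag(vrot_kms, slope=-7.8, zeropoint=-20.5):
    # Tully-Fisher prediction M = zeropoint + slope*(log10 v - 2.3).
    # Placeholder coefficients; not the calibration used in the text.
    return zeropoint + slope * (math.log10(vrot_kms) - 2.3)

v = vrot_from_width(400.0, 60.0)    # ~231 km/s
print(v, tf_absolute_mag(v))        # magnitude is illustrative only
\end{verbatim}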
Do AGN host galaxies possess a higher fraction of kinematically anomalous
\ion{H}{1}\ profiles than inactive galaxies? Our study, as in previous ones
(Mirabel \& Wilson 1984; Hutchings et al. 1987), tentatively suggests that the
answer is yes, but a quantitative comparison is difficult. Single-dish
surveys of isolated, (mostly) inactive disk galaxies also report frequent
detections of asymmetric \ion{H}{1}\ line profiles (e.g., Baldwin et al. 1980; Lewis
et al. 1985; Matthews et al. 1998), which are attributed to noncircular
motions, unresolved companions, or genuine perturbations in the spatial
distribution of the gas. Richter \& Sancisi (1994) and Haynes et al. (1998),
for example, find asymmetric profiles in roughly 50\% of the spiral galaxies
they surveyed. Similarly, Lewis (1987) studied a large sample of face-on
spiral galaxies and found narrow \ion{H}{1}\ profiles to be rare. He interprets this
result to imply that a substantial fraction of the galaxies must have
distorted \ion{H}{1}\ distributions, whose large-scale motion is misaligned with
respect to the plane of the stellar disk. Thus, in absolute terms the
frequency of non-classical \ion{H}{1}\ profiles in our sample (44\%; \S3.5) is very
similar to the frequency reported for inactive galaxies, but we are wary of
drawing any firm conclusions from this because the signal-to-noise ratio of our
spectra is generally much lower than those of the control samples, and
because of the qualitative nature of our profile classification.
The kinematically anomalous objects account for 17\% of the 792 galaxies studied
by Ho (2007a), but this fraction depends on Hubble type, being more common
among early-type systems. Among S0 galaxies, the fraction reaches 40\% (Ho
2007a), nearly identical to the frequency found among the active systems
studied here, which, although strictly not all S0 galaxies, nonetheless tend
to be bulge-dominated disk galaxies. As in the case of inactive galaxies
(Ho 2007a), the kinematic peculiarity for the AGN sample cannot be attributed
to tidal interactions or comparable-sized nearby neighbors.
\vspace{0.3cm}
\subsection{Gas Content and Implications for AGN Feedback Models}
Insofar as their \ion{H}{1}\ content is concerned, the host galaxies of the AGNs in
our sample are endowed with plenty of gas. The most meaningful metric is the
specific rather than the absolute gas mass, and because this quantity spans
a wide range across the Hubble sequence (Roberts \& Haynes 1994), we can
sharpen the comparison even further by specifying the morphological type of
the galaxy. This exercise (Fig.~5) convinces us that type~1 AGNs are {\it at
least}\ as gas-rich as inactive galaxies, and among early-type hosts,
beginning with Sa spirals and certainly by the time we reach S0s and Es, AGN
hosts appear to be even {\it more}\ gas-rich than their inactive counterparts.
For example, the sensitive survey of 12 inactive E and S0 galaxies by Morganti
et al. (2006) detected \ion{H}{1}\ emission in nine objects, of which only one has
$M_{{\rm H~I}}/L_{B,{\rm host}} > 0.1$. By contrast, among the 30 active E
and S0 galaxies in our survey that were detected in \ion{H}{1}, 26 (87\%) have
$M_{{\rm H~I}}/L_{B,{\rm host}} > 0.1$. Our results are qualitatively
consistent with those of Bieging \& Biermann (1983) and Mirabel \& Wilson
(1984). Because of the nature of the control sample of inactive objects (only
detections are available), we hesitate to be more specific, but E and S0
galaxies hosting AGNs may be overabundant in \ion{H}{1}\ by as much as an order of
magnitude. Previous studies of nearby elliptical and S0 galaxies have hinted
at a possible statistical connection between \ion{H}{1}-richness and radio activity
(Dressel et al. 1982; Jenkins 1983; Morganti et al. 2006), two well-known
examples being the low-power radio galaxies NGC~1052 (Knapp et al. 1978) and
NGC~4278 (Raimond et al. 1981). The validity of our conclusion, of course,
critically depends on the accuracy of the morphological types. However, to
bring the average $M_{{\rm H~I}}/L_{B,{\rm host}}$ ratios of the E and S0
subgroups to conform to the average values of inactive galaxies, we would have
to shift the morphological types to as late as Sb or even Sc spirals. It
seems unlikely that the morphologies could be so blatantly wrong.
Alternatively, perhaps we have grossly miscalculated the host galaxy optical
luminosities. Without improved optical imaging (\S5), however, these
alternative explanations remain purely hypothetical.
The global gas content bears no relationship to the level of AGN activity,
either in terms of absolute luminosity or Eddington ratio (Fig.~8). Ho et al.
(2003) had come to the same conclusion for nearby LINERs and Seyferts, but now
the same holds for AGNs on average 2--3 orders of magnitude more luminous.
Whether \ion{H}{1}\ is detected or not also makes no impact whatsoever on the optical
spectrum (Figs.~6 and 7). Perhaps none of this should come as a surprise.
The \ion{H}{1}\ data, after all, probe spatial scales vastly disproportionate to those
relevant for the AGN central engine. Moreover, the accretion rates required
to power the observed activity are quite modest. The H$\alpha$\ luminosities of
our sample range from $\sim 10^{40}$ to $10^{44}$ ergs s$^{-1}$, with a median value of
$4\times 10^{41}$ ergs s$^{-1}$. Adopting the H$\alpha$\ bolometric correction given in
Greene \& Ho (2007b) and a radiative efficiency of 0.1, this corresponds
to a mass accretion rate of merely $\sim 2\times 10^{-2}$ $M_\odot$\ yr$^{-1}$.
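The accretion rate quoted above follows from a short chain of unit conversions,
sketched below in Python. The power-law bolometric correction in the sketch is
an illustrative approximation of this form; treat its coefficients as
assumptions rather than the exact Greene \& Ho (2007b) calibration.
\begin{verbatim}
M_SUN_G    = 1.989e33   # solar mass [g]
C_CGS      = 2.998e10   # speed of light [cm/s]
SEC_PER_YR = 3.156e7

def mdot_msun_yr(L_halpha, eta=0.1):
    # Accretion rate [M_sun/yr] from an H-alpha luminosity [ergs/s],
    # for radiative efficiency eta: Mdot = L_bol / (eta c^2).
    # The bolometric correction below is an illustrative power law,
    # not an exact published calibration.
    L_bol = 2.34e44 * (L_halpha / 1e42) ** 0.86
    return L_bol / (eta * C_CGS**2) * SEC_PER_YR / M_SUN_G

print(mdot_msun_yr(4e41))   # ~2e-2 M_sun/yr, as quoted in the text
\end{verbatim}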
Many current models of galaxy formation envision AGN feedback to play a
pivotal role in controlling the joint evolution of central BHs and their host
galaxies. During the major merger of two gas-rich galaxies, each initially
seeded with its own BH, gravitational torques drive a large fraction of the
cold gas toward the central region of the resulting merger remnant. Most of
the gas forms stars with high efficiency in a nuclear starburst, at the
same time feeding the central BH at an Eddington-limited rate. This process
consumes a large amount of the original cold gas reservoir. The combined
energy generated from supernova explosions and the central engine wreaks havoc
on the rest of the interstellar medium in the galaxy, shocking it to high
temperatures and redistributing it to large scales, thereby shutting off
further star formation and accretion. While the specific formulation of the
problem and the methodology for solving it may vary from study to study (e.g.,
Granato et al. 2004; Springel et al. 2005; Croton et al. 2006; Hopkins et al.
2006; Hopkins \& Hernquist 2006; Sijacki et al. 2007; Di~Matteo et al. 2008),
one generic prediction remains constant: the onset of nuclear activity,
especially during the peak phase of accretion when the central object unveils
itself as an optically visible AGN, in concert with supernova feedback from
the accompanying starburst, liberates so much energy on such a short timescale
that the bulk of the cold gas gets expelled from the galaxy. None of the
existing models makes very precise predictions about the cold gas content
during the evolution of the system, but it seems clear, regardless of the
details, that during the optically visible (unobscured) phase of the AGN the
host galaxy should be {\it deficient}\ in cold gas. This expectation is
inescapable if AGN feedback is to have as dramatic an effect on the evolution
of the host galaxy as has been suggested.
Our observations present a challenge to the framework of AGN feedback just
outlined. Far from being gas-deficient, the host galaxies of optically
selected type~1 AGNs are, if anything, unusually gas-rich, and at the very
least we can state with confidence that their gas content is normal. To be
fair, the vast majority of our sample consists of Seyfert galaxies. They are
hosted in disk (S0 and spiral) galaxies, which probably never experienced many
major mergers, and their nuclei fall far short of the luminosity threshold of
quasars, which, although rare, dominate the BH mass density in the Universe
(e.g., Yu \& Tremaine 2002). These objects probably fall outside of the
purview of major merger-driven models (Hopkins \& Hernquist 2006). Our
sample, on the other hand, {\it does}\ contain seven objects that formally
satisfy the luminosity criterion of quasars, of which four (PG~0844+349,
PG~1426+015, PG~2130+099, and RX~J0608.0+3058) are detected in \ion{H}{1}. All four
are very gas-rich, with \ion{H}{1}\ masses ranging from $M_{{\rm H~I}}$\ = $5 \times10^9$ to
$2\times10^{10}$ $M_\odot$\ and $M_{{\rm H~I}}/L_{B,{\rm host}}$ values that
are not obviously low.
Beyond the present set of observations, two other lines of evidence contradict
the notion that optically selected quasars are gas-deficient. Scoville et al.
(2003) performed an unbiased search for CO emission from all $z < 0.1$ quasars
from the ultraviolet-selected sample of Palomar-Green (PG) sources (Schmidt \&
Green 1983), finding that the majority of them ($\sim$75\%) contain abundant
molecular gas, ranging from $M_{{\rm H_2}} \approx 10^{9}$ to $10^{10}$
$M_\odot$; those that were undetected have upper limits consistent with the
detections. Bertram et al. (2007) report very similar statistics from their
recent CO study of low-luminosity quasars selected from the Hamburg/ESO
survey. These molecular gas masses are comparable to, if not greater than,
those typically found in mid- to late-type spirals. Ho's (2005a) compilation
of CO observations of other PG quasars further reinforces this point, and even
more extreme molecular gas masses ($M_{{\rm H_2}} \approx 10^{10}-10^{11}$
$M_\odot$) have been detected in high-redshift quasars (Solomon \& Vanden~Bout
2005, and references therein). The Milky Way, for reference, has
$M_{{\rm H_2}} = 2\times10^9$ $M_\odot$\ (Scoville \& Good 1989). Of the three
PG quasars that overlap with our \ion{H}{1}\ sample, for instance, PG~1426+015 has
$M_{{\rm H_2}} = 6.5\times10^9$ $M_\odot$, PG~2130+099 has $M_{{\rm H_2}} =
3\times10^9$ $M_\odot$, and PG~0844+349 has an upper limit of $M_{{\rm H_2}} =
1\times10^9$ $M_\odot$. Combining this with the \ion{H}{1}\ masses, all three objects
have a total (\ion{H}{1}\ $+\,{\rm H}_2$) neutral gas mass in excess of $10^{10}$
$M_\odot$. Yet another way to recognize that PG quasars have plenty of cold
gas comes from noting that substantial amounts of dust have been detected in
them. Haas et al. (2003) appraised the dust content in a sample of 51 PG
quasars by combining far-infrared continuum observations along with millimeter
and submillimeter data. Their derived dust masses range from $M_{{\rm dust}}
=7\times10^5$ to $3\times10^8$ $M_\odot$, with a median value of $8.8\times10^6$
$M_\odot$. For a Galactic molecular gas-to-dust ratio of 150, this corresponds
to $M_{{\rm H_2}} = 1.3\times10^9$ $M_\odot$; for a higher gas-to-dust ratio of
600 (Young \& Scoville 1991), $M_{{\rm H_2}} = 5.3\times10^9$ $M_\odot$. These
indirect estimates of molecular gas mass are consistent with those derived
from CO observations.
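For concreteness, the dust-to-gas scalings used in this paragraph can be
written out as a minimal Python sketch:
\begin{verbatim}
def h2_from_dust(m_dust, gas_to_dust):
    # Molecular gas mass implied by a dust mass (both in M_sun)
    # for an assumed molecular gas-to-dust mass ratio.
    return gas_to_dust * m_dust

m_dust = 8.8e6   # median dust mass of the Haas et al. (2003) sample
for ratio in (150, 600):
    print(ratio, h2_from_dust(m_dust, ratio))
# ratio 150 -> 1.3e9 M_sun; ratio 600 -> 5.3e9 M_sun, as in the text
\end{verbatim}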
To graphically compare the gas masses of PG quasars side-by-side with those
of the lower luminosity AGNs in our sample, we converted the molecular and
dust masses into approximate \ion{H}{1}\ masses. We adopt, for the sake of
illustration, $M_{{\rm H_2}}$/$M_{{\rm H~I}}$\ = 3, a value characteristic of early-type
spirals (Young \& Scoville 1991), and $M_{{\rm H_2}}/M_{{\rm dust}} = 600$.
We calculated BH masses for the sources using the H$\beta$\ line widths from
Boroson \& Green (1992) and optical continuum luminosities from the
spectrophotometry of Neugebauer et al. (1987), and then inferred $B$-band host
galaxy luminosities using the correlation between BH mass and bulge luminosity
given in Kormendy \& Gebhardt (2001), for simplicity assigning all the
luminosity to the bulge. As summarized in the bottom panel of Figure~5, PG
quasars span almost 4 orders of magnitude in $M_{{\rm H~I}}/L_{B,{\rm host}}$,
from objects as gas-poor as the most extreme ellipticals to as gas-rich as the
most gas-dominated dwarf irregulars. Although this exercise becomes
progressively more uncertain for high-redshift objects, we can attempt a
similar order-of-magnitude calculation for the nine high-redshift quasars
listed in Ho (2007b), using BH masses listed therein and the molecular gas
masses given in Solomon \& Vanden~Bout (2005). Under the same assumptions
adopted for the PG quasars, we find an average value of
$\log \langle M_{{\rm H~I}}/L_{B,{\rm host}} \rangle \approx -0.9$, in the
middle of the range observed for PG quasars (Fig.~5). In fact, this estimate,
which assumes a local value for the ratio of BH mass to bulge luminosity, is
most likely a lower limit because high-redshift quasars
in general (Peng et al. 2006a, 2006b) and this subset of objects in particular
(Shields et al. 2006; Ho 2007b) seem to exhibit a higher ratio (by a factor of
a few) of BH mass to host galaxy mass.
Given the lack of specific theoretical predictions, we cannot attempt a
quantitative comparison of our results with AGN feedback models. Nonetheless,
we wish to stress that the typical unobscured quasar possesses a gas reservoir
normal for its stellar content and does {\it not}\ appear to be depleted in
cold gas. Whether this poses a difficulty or not for AGN feedback models
deserves further consideration.
\section{Future Directions}
The pilot study presented here can be improved and extended in several
ways. A number of the \ion{H}{1}\ observations would benefit from longer
integrations. Roughly $1/3$ of our sample was undetected in the current
experiment, and the upper limits on their \ion{H}{1}\ masses remain relatively high;
nearly half of the nondetections have limits above $M_{{\rm H~I}}$\ $=\,10^{10}$
$M_\odot$. It would be useful to improve these limits by a factor of a few,
down to $M_{{\rm H~I}}$\ $\approx 5\times 10^{9}$ $M_\odot$, the scale of the Milky Way.
Higher signal-to-noise ratio spectra of the detected sources would allow us
to better study the \ion{H}{1}\ line profiles, especially to quantify the apparent
excess in line asymmetry and its implications for the dynamical state of
the gas and the circumgalactic environment of the host. In terms of new
observations, it would be particularly worthwhile to observe a larger sample
of high-Eddington ratio systems in order to verify the trends with Eddington
ratio identified in this study. This can be easily accomplished using the
current SDSS database (Greene \& Ho 2007b, 2007c). To put more stringent
constraints on AGN feedback scenarios, it would be highly desirable to
increase the number of high-luminosity AGNs (quasars) in the survey. Given
the rarity of nearby quasars and the redshift limit of Arecibo, this will be a
difficult task, but some progress can still be made.
The current project represents a major initiative in characterizing the cold
gas content in type~1 AGNs. We have demonstrated that \ion{H}{1}\ observations can
yield many useful insights into the nature of AGNs, BHs, and their host
galaxies. This tool should be applied to other classes of AGNs. A useful
extension of this program would be to observe a well-matched sample of type~2
(narrow-line) AGNs in order to test their relationship to type~1 objects. Kim
et al. (2006; see also Ho 2005b and Lal \& Ho 2007), for example, have argued
that type~2 quasars are the evolutionary precursors, as opposed to simply
misoriented counterparts, of type~1 quasars. If true, type~2 sources are
likely to be more gas-rich than their type~1 counterparts, or perhaps their
interstellar medium might be less dynamically relaxed, differences that can
potentially be discerned through \ion{H}{1}\ observations.
The analysis presented here critically depends on a number of quantities
derived from optical images of the host galaxy, namely its inclination angle,
isophotal diameter, luminosity, morphological type, and, to a lesser extent,
local environment. All of these measurements can be greatly improved
with the help of better optical imaging. In many instances, the SDSS images
(Ho et al. 2008) simply lack the necessary depth or resolution to yield
clear-cut measurements of these parameters. High-resolution (e.g.,
{\it Hubble Space Telescope}) images would be particularly valuable, as they
would simultaneously yield a reliable decomposition of the AGN core from the
host galaxy, as well as a quantitative measurement of the bulge component.
All of the correlations presented in this study should be reexamined once
such data become available, to see if their scatter can be reduced.
We have stressed the utility of \ion{H}{1}\ observations, but, of course, a full
inventory of the cold interstellar medium should include the molecular
component as well, which, depending on the galaxy type, can dominate over the
atomic component. A systematic CO survey of the objects in our \ion{H}{1}\ program
would be highly desirable, not only to complete the gas inventory but also to
bootstrap the kinematical analysis to the CO line width. Unlike \ion{H}{1}\
observations, which are currently limited to low redshifts, the rotational transitions of CO can be
observed to almost arbitrarily high redshifts (Solomon \& Vanden~Bout 2005),
a necessary ingredient to access AGN host galaxies at early epochs. Ho
(2007b) discusses the use of the CO Tully-Fisher relation as a powerful
tool to investigate the host galaxies of high-redshift quasars.
\acknowledgements
The work of L.~C.~H. was supported by the Carnegie Institution of Washington
and by NASA grants HST-GO-10149.02 and HST-AR-10969 from the Space Telescope
Science Institute, which is operated by the Association of Universities for
Research in Astronomy, Inc., for NASA, under contract NAS5-26555. Support for
J.~D. and J.~E.~G. was provided by NASA through Hubble Fellowship grants
HF-01183.01-A and HF-01196, respectively, awarded by the Space Telescope
Science Institute. We thank Paul Martini for useful correspondence and
the anonymous referee for helpful suggestions.
\section{Introduction}
\label{sec:intro}
Type Ic supernovae (SNe) are hydrogen-stripped core-collapse explosions (CCSNe) of massive stars with M$_{ZAMS} \ga 8 M_{\odot}$ that show no evidence for hydrogen or helium in their spectra \citep{Filippenko97}.
Potential candidates for type Ic SN progenitors are massive Wolf Rayet (WR) stars and stars in close binary systems (\citealt{Ensman88,Galyam17}).
At the time of writing the exact nature of their progenitors is unclear
\citep{Podsiadlowsky92,Yoon10,Eldridge13,Smartt09,Smartt15,Dessart15a,Dessart15b,Dessart17}.
Notable in this respect is the recent detection of the progenitor system of the Ic SN\,2017ein \citep{Kilpatrick18,VanDyk18} which pointed to a massive stellar progenitor with $M \sim 60$~$M_{\odot}$ in a binary system.
Ic SNe typically show a bell-shaped radio spectrum powered by synchrotron emission and extending all the way to the X-ray band. The spectral peak frequency describes the transition between the optically thick part of the spectrum --below which synchrotron self-absorption (SSA) takes place-- and the optically thin portion of the spectrum \citep{RybickiLightman79,Chevalier98,ChevalierFransson06}.
The synchrotron emission is produced by electrons that are accelerated at the shock front between the SN ejecta and the circumstellar medium (CSM). As the shock wave expands, the optical depth to SSA decreases and hence the spectral peak frequency cascades down to lower frequencies with time.
In a SN explosion, the X-ray and radio emission resulting from the SN shock propagation in the medium track the fastest material ejected by the explosion, while the optical emission is of thermal origin and originates from the inner ejecta layers.
A small fraction ($\sim 4 \%$; \citealt{Shivvers17}) of Ic SNe, called broad-line Ic SNe (BL-Ic SNe), are characterized by broad lines in the optical spectrum implying large expansion velocities of the ejecta ($\ga 2\times 10^{4}$ km s$^{-1}$, e.g. \citealt{Mazzali02,Cano17rev}), $\sim 10^4$ km s$^{-1}$ faster than in ``ordinary'' Ic SNe \citep{Modjaz16}.
Some BL-Ic SNe are associated with ultra-relativistic jets that generate long duration ($\ga 2$ s) Gamma-Ray Bursts (L-GRBs, e.g. \citealt{Cano17rev}), which are observable at cosmological distances up to $z\sim10$ (e.g. \citealt{Cucchiara11b}).
In the local universe ($z\le0.1$) some BL-Ic SNe have also been found in association with mildly relativistic outflows in low-luminosity GRBs (ll-GRBs, which are too weak to be detected at larger distances, \citealt{Liang07}).
As opposed to L-GRBs, ll-GRBs show no evidence for collimation of their fastest ejecta, i.e. no jet \citep{Kulkarni98,Soderberg06d,Bromberg11c}.
A possible interpretation of the observational lack of evidence for L-GRB counterparts in the majority of BL-Ic SNe is the off-axis jet scenario \citep{Rhoads99,Eichler99,Yamazaki03,Piran04_rev,Soderberg06e,Bietenholz14VLBI,Corsi16}, where the explosion powers a GRB-like jet that is misaligned with respect to the observer line of sight.
In this scenario, as the jet velocity gradually decreases and relativistic beaming becomes less severe, the emission becomes observable from increasingly larger viewing angles.
Deep radio and X-ray observations extending to hundreds of days post explosion offer the opportunity to reveal the emission from off-axis jets as well as to recover weak GRBs that would not trigger current $\gamma$-ray observing facilities.
Here we present extensive ($\delta t\sim10-1000$ days) broad-band (radio to X-ray) observations of SN\,2014ad, a BL-Ic SN that exploded in the galaxy PGC\,37625 (Mrk 1309) at $d = 26.44$~Mpc \citep{Sahu18}. SN\,2014ad is among the closest BL-Ic SNe discovered to date, which enables very deep limits on its radio and X-ray emission (Figure~\ref{fig:radioworld} and Table~\ref{tab:tablevla1}).
We present constraints on the progenitor mass-loss rate $\dot{M}$ and the total energy of the fast ejecta $E$ in two scenarios: (i) mildly relativistic, nearly isotropic, synchrotron self-absorbed radio emission due to the SN ejecta ploughing through a wind-like CSM; (ii) synchrotron emission from a relativistic off-axis GRB-like jet.
The analysis of the optical emission from SN\,2014ad by \cite{Sahu18} and \cite{Stevance17} revealed that the bulk of its ejecta velocity is $\sim 3 \times 10^{4}$ km s$^{-1}$ at early times, with kinetic energy $E_k \sim (1.0 \pm 0.3) \times 10^{52}$~erg, larger than in type-Ic SNe, and similar to BL-Ic SNe and GRB-SNe.
The metallicity of the host galaxy of SN\,2014ad is $\sim 0.5$ Z$_{\odot}$.
The total explosion ejecta mass inferred by \cite{Sahu18} and \cite{Stevance17} is $M_{ej} \sim (3.3 \pm 0.8)$~M$_{\odot}$ suggesting a massive progenitor star with $M_{ZAMS} \ga 20$~M$_{\odot}$.
Spectropolarimetry by \citet{Stevance17} also suggests a mild deviation from a spherical geometry of the ejecta.
This paper is organized as follows.
In Section~\ref{sec:obs} we describe our radio and X--ray observations; in Section~\ref{sec:IC} we present the constraints on the environment derived from our X-ray limits, whereas in Section~\ref{sec:mod} we present environment constraints derived from the radio and X-ray broadband modeling in two different scenarios (i.e., an ``ordinary'' isotropic SN outflow, and a beamed relativistic jet).
Our results and analysis are discussed in Section~\ref{sec:disc} and conclusions are drawn in Section~\ref{sec:conc}.
\begin{figure}
\centering
\includegraphics[width=8.9cm]{RadioLC_GRBSN.eps}
\caption{Deep radio limits on the emission from SN\,2014ad (red stars) in the context of L-GRBs (circles; gray is for cosmological GRBs, while orange is for GRBs at $z\le 0.3$) and H-stripped CCSNe (squares; gray is for normal SNe, blue is for SNe with relativistic ejecta).
Our deep radio limits on the emission from the BL-Ic SN\,2014ad are consistent with a luminosity comparable to that of SN\,2002ap. The detected radio emission from SN\,2002ap points to a non-relativistic (shock velocity $\sim 0.3c$) uncollimated explosion with a small energy budget of the fast ejecta ($E \sim 1.5 \times 10^{45}$~erg; \citealt{Berger02}). \\
}
\label{fig:radioworld}
\end{figure}
\section{Observations}
\label{sec:obs}
SN\,2014ad was discovered by \citet{howerton14} on March $12.4$, 2014 (MJD $56728.4$) in public images from the Catalina Real-Time Transient Survey \citep{CATALINA} at $\alpha=11^{\rm h}57^{\rm m}44^{\rm s}.44$, $\delta=-10^{\circ}10'15.7''$.
Throughout this paper we assume a SN explosion date $t_0=56725\pm 3$~MJD \citep{Sahu18}; times given are in reference to this explosion date unless otherwise noted.
\subsection{Radio Observations with the VLA}
\label{subsec:radiofu}
VLA follow-up observations were carried out between March 22, 2014 (MJD 56738) and September 23, 2016 (MJD 57654), from $\sim 13$~d to $\sim 930$~d post explosion, under Proposal VLA/14A-531 (PI: Kamble).
Data were taken in eight spectral windows at L-band (with baseband central frequencies of $1.3$ and $1.7$~GHz, respectively), C-band ($5$ and $7$~GHz), X-band ($8.5$ and $11$~GHz), Ku-band ($13.5$ and $16$~GHz), with a nominal bandwidth of $\sim 1$~GHz ($\sim 0.4$~GHz for L-band).
3C286 and J1330-1449 were used as flux/bandpass and phase/amplitude calibrators, respectively.
The Common Astronomy Software Application ({\sc casa}, v. 4.7.2, \citealt{CASA})\footnote{\url{https://casa.nrao.edu/}} was used to calibrate, flag and image the data.
Images were formed from the visibility data using the CLEAN algorithm \citep{Hogbom74}.
The image size was set to ($1024 \times 1024$) pixels, the pixel size was determined as $1/5$ of the nominal beam width and the images were cleaned using natural weighting.
The upper limits on the flux densities were calculated at a $3 \sigma$ confidence level (Table~\ref{tab:tablevla1}).
\begin{deluxetable*}{cc|cccccccc}
\tablecolumns{9}
\tablecaption{Log of VLA observations of SN\,2014ad: observation central time $t_{mid}$, epoch $t_{\rm e} = t_{\rm mid}-t_0$ since the estimated explosion date $t_0$, VLA array configuration, beam size $\theta_{\rm FWHM}$, central frequency $\nu_c$ and its bandwidth $\Delta \nu$, the flux density uncertainty $\sigma_S$, the 3--$\sigma$ upper limit on the flux density $S$ and the corresponding luminosity $L_{25}$ (in units of $10^{25}$ erg s$^{-1}$ Hz$^{-1}$) of the source. In no case was the source detected with $\ge 3$--$\sigma$ confidence.\label{tab:tablevla1}}
\tablehead{
\colhead{$t_{mid}$} & \colhead{$t_{\rm e}$} & \colhead{VLA} & \colhead{$\theta_{\rm FWHM}$} & \colhead{$\nu_c$} & \colhead{$\Delta \nu$} & \colhead{$\sigma_S$} & \colhead{S(3--$\sigma$)} & \colhead{$L_{25}$} \\
\colhead{[MJD]} & \colhead{[days]} & \colhead{Configuration} & \colhead{[arcsec]} & \colhead{[GHz]} & \colhead{[GHz]} & \colhead{[$\mu$Jy]} & \colhead{[$\mu$Jy]} & \colhead{[erg s$^{-1}$ Hz$^{-1}$]}
}
\startdata
56738.19 & 13.19 & A & 1.42 & 1.26 & 0.384 & 28.8 & 86.4 & 12.1 \\
& & A & 0.93 & 1.80 & 0.384 & 30.8 & 92.4 & 12.9 \\
& & A & 0.34 & 5.0 & 0.896 & 9.0 & 27.0 & 3.8 \\
& & A & 0.24 & 7.1 & 0.896 & 8.1 & 24.3 & 3.4 \\
& & A & 0.19 & 8.6 & 0.896 & 7.9 & 20.7 & 3.3 \\
& & A & 0.15 & 11.0 & 0.896 & 7.8 & 23.4 & 3.3 \\
& & A & 0.13 & 13.5 & 0.896 & 7.7 & 23.1 & 3.2 \\
& & A & 0.11 & 16.0 & 0.896 & 9.1 & 27.3 & 3.8 \\
56763.21 & 38.21 & A & 1.42 & 1.26 & 0.384 & 31.6 & 94.8 & 13.2 \\
& & A & 0.93 & 1.80 & 0.384 & 31.4 & 94.2 & 13.1 \\
& & A & 0.34 & 5.0 & 0.896 & 10.5 & 31.5 & 4.4 \\
& & A & 0.24 & 7.1 & 0.896 & 7.2 & 21.6 & 3.0 \\
& & A & 0.19 & 8.6 & 0.896 & 8.0 & 24.0 & 3.4 \\
& & A & 0.15 & 11.0 & 0.896 & 10.3 & 30.9 & 4.3 \\
& & A & 0.13 & 13.5 & 0.896 & 7.3 & 21.9 & 3.1 \\
& & A & 0.11 & 16.0 & 0.896 & 7.6 & 22.8 & 3.2 \\
56828.96 & 103.96 & AnD &12.02 & 5.0 & 0.896 & 6.9 & 20.7 & 2.9 \\
& & AnD & 7.93 & 7.1 & 0.896 & 5.2 & 15.6 & 2.2 \\
& & AnD & 0.20 & 8.6 & 0.896 & 6.0 & 18.0 & 2.5 \\
& & AnD & 0.15 & 11.0 & 0.896 & 6.3 & 18.9 & 2.6 \\
56906.76 & 181.76 & D &12.02 & 5.0 & 0.896 & 13.9 & 41.7 & 5.8 \\
& & D & 7.93 & 7.1 & 0.896 & 10.8 & 32.4 & 4.5 \\
& & D & 6.98 & 8.6 & 0.896 & 15.7 & 47.1 & 6.6 \\
& & D & 5.46 & 11.0 & 0.896 & 16.8 & 50.4 & 7.0 \\
57227.81 & 502.81 & A & 0.34 & 5.0 & 0.896 & 9.9 & 26.7 & 4.1 \\
& & A & 0.24 & 7.1 & 0.896 & 9.4 & 27.7 & 3.9 \\
& & A & 0.19 & 8.6 & 0.896 & 6.9 & 20.7 & 2.9 \\
& & A & 0.15 & 11.0 & 0.896 & 10.1 & 30.3 & 4.2 \\
57654.66 & 929.66 & B & 1.15 & 5.0 & 0.896 & 6.6 & 19.8 & 2.8 \\
& & B & 0.79 & 7.1 & 0.896 & 6.3 & 18.9 & 2.6 \\
& & B & 0.65 & 8.6 & 0.896 & 7.0 & 21.0 & 2.9 \\
& & B & 0.51 & 11.0 & 0.896 & 6.8 & 20.4 & 2.8
\enddata
\end{deluxetable*}
\subsection{X--ray Observations with Swift-XRT}
\label{subsec:XRT}
The X-Ray Telescope (XRT; \citealt{Burrows05}) onboard the {\it Swift}{} Gehrels spacecraft \citep{Gehrels04} observed the region of SN\,2014ad in Photon Counting (PC) mode several times from March 19, 2014 to March 11, 2017.
We find no evidence for statistically significant X-ray emission at the location of SN\,2014ad.
We extracted the $0.3$--$10$~keV light curve, consisting of $3\sigma$ upper limits, using the web interface provided by Leicester University\footnote{\url{http://www.swift.ac.uk/user\_objects/}}, which used {\sc heasoft} (v. 6.22).
We performed flux calibration by assuming an absorbed simple power-law spectral model ({\sc wabs*powerlaw} within {\sc xspec}) with column density frozen to the Galactic value along the SN line of sight, $N_{H,{\rm Gal}}=3.1\times10^{20}$~cm$^{-2}$ \citep{Kalberla05}.
We assumed a conservative value for the photon index, $\Gamma=2$, and derived the upper limit to the flux density at $1$~keV.
Finally, we calculated three light curves with different integration times: $10^5$, $2\times10^5$, and $5\times10^5$~s, respectively. Table~\ref{tab:tablexrt} reports the values for the longest timescale, which yields the deepest limits.
We also calculated the corresponding $3\sigma$ upper limits on the $0.3$--$10$~keV luminosity.
\begin{deluxetable}{lrccr}
\tablecolumns{5}
\tablewidth{0pc}
\tablecaption{{\it Swift}-XRT 3--$\sigma$ upper limits on the flux density at $1$~keV ($F_{\nu,\;{\rm 1~keV}}$) and $0.3$--$10$~keV luminosity ($L_{0.3-10}$). $t_{\rm e} = t_{\rm mid}-t_0$ is the epoch since the estimated SN explosion date $t_0$, $\Delta t$ is the bin time.
\label{tab:tablexrt}}
\tablehead{
\colhead{$t_{\rm mid}$} & \colhead{$t_{\rm e}$} & \colhead{$\Delta t$} & \colhead{$F_{\nu,\;{\rm 1~keV}}$} & \colhead{$L_{0.3-10}$}\\
\colhead{[MJD]} & \colhead{[days]} & \colhead{[days]} & \colhead{[$\mu$Jy]} & \colhead{[erg\,s$^{-1}$]}
}
\startdata
56738.1 & $13.1$ & $5.8$ & $< 1.3\times10^{-2}$ & $< 1.0\times10^{42}$\\
56743.9 & $18.9$ & $5.8$ & $< 1.2\times10^{-2}$ & $< 9.0\times10^{41}$\\
56749.6 & $24.6$ & $5.8$ & $< 1.7\times10^{-2}$ & $< 1.3\times10^{42}$\\
56755.4 & $30.4$ & $5.8$ & $< 4.1\times10^{-2}$ & $< 3.2\times10^{42}$\\
57774.0 & $1049.0$ & $5.8$ & $< 0.11$ & $< 8.5\times10^{42}$\\
57808.7 & $1083.7$ & $5.8$ & $< 1.1$ & $< 8.5\times10^{43}$\\
57820.2 & $1095.2$ & $5.8$ & $< 6.7\times10^{-2}$ & $< 5.2\times10^{42}$\\
\enddata
\end{deluxetable}
\section{Constraints on the environment density from inverse Compton emission}
\label{sec:IC}
Inverse Compton (IC) emission from the upscattering of optical photospheric photons into the X-ray band by relativistic electrons at the shock front has been demonstrated to dominate the X-ray emission from H-stripped CCSNe that explode in low-density environments ($\dot{M} \la 10^{-5}$~M$_{\odot}$ yr$^{-1}$) at $\delta t\la 30$~d (e.g. \citealt{Bjornsson04}; \citealt{ChevalierFransson06}).
We adopt the IC formalism by \cite{Margutti12b} modified to account for the outer density structure of progenitors of BL-Ic SNe (which are likely to be compact) as in \cite{Margutti14b}.
The IC emission depends on: (i) the density structure of the SN ejecta and of the CSM; (ii) the electron distribution responsible for the up-scattering; (iii) explosion parameters (ejecta mass $M_{\rm{ej}}$ and kinetic energy\footnote{This is the kinetic energy carried by the slowly moving material powering the optical emission.} $E_{\rm{k}}$); and (iv) the bolometric luminosity of the SN: $L_{\rm{IC}}\propto L_{\rm{bol}}$.
For compact progenitors that are relevant here, the density scales as $\rho_{\rm{SN}}\propto r^{-n}$ with $n\sim10$ (see e.g. \citealt{Matzner99}; \citealt{ChevalierFransson06}).
We further assume a power-law electron distribution $n_{e}(\gamma)\propto \gamma^{-p}$ with $p\sim3$ as found in radio observations of type H-stripped CCSNe \citep{ChevalierFransson06} and a fraction of energy into relativistic electrons $\epsilon_e=0.1$.
We use the explosion parameters $E_{\rm{k}}=(1\pm0.3)\times 10^{52}$~erg and $M_{\rm{ej}}=(3.3\pm 0.8)$~M$_{\sun}$.
For a wind-like CSM structure $\rho_{\rm{CSM}}\propto r^{-2}$ with a typical wind velocity $v_w=1000$~km s$^{-1}$ as appropriate for massive stars (and hence BL-Ic SN progenitors, e.g. \citealt{Smith14}), the {\it Swift}{}-XRT non-detections at $\delta t<30$~d yield $\dot M<5\times10^{-5}$~M$_{\sun}$ yr$^{-1}$.
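For reference, a minimal sketch (Python) of the wind-density normalization
underlying this limit; the particle density quoted at the end assumes a pure
hydrogen composition for simplicity:
\begin{verbatim}
import math

M_SUN_G, SEC_PER_YR, M_P = 1.989e33, 3.156e7, 1.673e-24

def rho_wind(r_cm, mdot_msun_yr, v_w_kms=1000.0):
    # Wind-like CSM mass density rho = Mdot / (4 pi r^2 v_w) [g/cm^3].
    mdot = mdot_msun_yr * M_SUN_G / SEC_PER_YR   # [g/s]
    return mdot / (4.0 * math.pi * r_cm**2 * (v_w_kms * 1e5))

rho = rho_wind(1e16, 5e-5)   # at r = 1e16 cm, for the limit above
print(rho, rho / M_P)        # ~2.5e-20 g/cm^3 and ~1.5e4 cm^-3
\end{verbatim}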
\section{Broadband modeling}
\label{sec:mod}
We interpret our deep radio and X-ray limits in the context of synchrotron self-absorbed (SSA) emission from either (i) uncollimated (i.e. spherical) non-relativistic ejecta (Sect.~\ref{subsec:ssamod}), or (ii) a relativistic GRB-like jet (Sect.~\ref{sec:constraints}).
\subsection{SSA emission from non-relativistic uncollimated ejecta}
\label{subsec:ssamod}
We follow \cite{Soderberg05} and adopt their formalism in the context of the radio emission from non-relativistic SN ejecta interacting with a wind-like CSM. The brightness temperature of a source is:
\begin{equation}
T_B\ =\ \frac{c^2}{2\pi k}\ \frac{f_\nu\,d^2}{(v_{\rm ph} t)^2\,\nu^2}\ \;,
\label{eq:t_b}
\end{equation}
where $c$ is the speed of light, $k$ is the Boltzmann constant, $f_\nu$ is the flux density at observed frequency $\nu$, $d$ is the source distance, $v_{\rm ph}$ is the photospheric velocity and $t$ is the observational epoch. For SN\,2014ad we find $T_B \la 2.8 \times 10^{11}$~K at $t \sim 13.2$~d, where $v_{\rm ph} \sim 3.2 \times 10^4$~km s$^{-1}$ and $f_\nu < 86.4$~$\mu$Jy at $\nu=1.26$~GHz (Table \ref{tab:tablevla1}). Our inferred $T_B$ does not violate the $10^{12}$\,K limit of the Inverse Compton Catastrophe (ICC; \citealt{Kellermann81}), consistent with the expectations for a non-relativistic, spherical SSA source.
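A minimal numerical check of Eq.~(\ref{eq:t_b}) for the values just quoted
(Python):
\begin{verbatim}
import math

def t_brightness(f_uJy, d_mpc, v_ph_kms, t_days, nu_ghz):
    # Brightness temperature [K] from the equation above, with the
    # source radius taken as r = v_ph * t.
    c, k_B = 2.998e10, 1.381e-16              # cgs constants
    f_nu = f_uJy * 1e-29                      # [erg/s/cm^2/Hz]
    d = d_mpc * 3.086e24                      # [cm]
    r = v_ph_kms * 1e5 * t_days * 86400.0     # [cm]
    nu = nu_ghz * 1e9                         # [Hz]
    return c**2 / (2 * math.pi * k_B) * f_nu * d**2 / (r**2 * nu**2)

print(t_brightness(86.4, 26.44, 3.2e4, 13.2, 1.26))
# ~2.8e11 K, below the ~1e12 K ICC limit
\end{verbatim}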
In the SSA model radiation originates from an expanding spherical shell of shock-accelerated electrons with radius $r$ and thickness $r/\eta$ (here we assume the standard scenario of a thin shell with $\eta = 10$; e.g. \citealt{LiChevalier99,Soderberg05}).
As the shock wave propagates through the CSM, it accelerates relativistic electrons into a power-law distribution $N(\gamma) \propto \gamma^{-p}$ for $\gamma\ge\gamma_m$, where $\gamma_m$ is the minimum Lorentz factor of the electrons \citep{Chevalier82,Chevalier98}. In this analysis we assume $p \sim 3$ as typically found in H-stripped core-collapse SNe (e.g. \citealt{ChevalierFransson06}).
The post-shock energy fraction in the electrons and magnetic field is given by $\epsilon_e$ and $\epsilon_B$, respectively; we further adopt equipartition of the post-shock energy density of the radio-emitting material between relativistic electrons and magnetic fields ($\epsilon_e = \epsilon_B = 1/3$).
The synchrotron emission from SNe typically peaks at radio frequencies on timescales of a few days to weeks after the SN explosion (e.g. \citealt{Corsi14}); this emission is suppressed at low frequencies by absorption processes.
\cite{Chevalier98} showed that the dominant absorption process is internal SSA for H-stripped SNe, and external free-free absorption (FFA) in H-rich SNe, as H-rich SNe tend to explode in higher density media.
Following \cite{Soderberg05}, the temporal evolution of the magnetic field $B(t)$, minimum Lorentz factor $\gamma_m(t)$, shock radius $r(t)$ and the ratio $\Im = \epsilon_e / \epsilon_B$ can be parametrized as:
\begin{equation}
B\ =\ B_0 \left(\frac{t-t_e}{t_0-t_e}\right)^{\alpha_B} \quad \gamma_m = \gamma_{m,0} \left(\frac{t-t_e}{t_0-t_e}\right)^{\alpha_{\gamma}}
\label{eq:rb}
\end{equation}
\begin{equation}
r\ =\ r_0 \left(\frac{t-t_e}{t_0-t_e}\right)^{\alpha_r} \quad \Im = \Im_0 \left(\frac{t-t_e}{t_0-t_e}\right)^{\alpha_{\Im}}
\label{eq:gf}
\end{equation}
where $r_0$, $B_0$, $\Im_0$ and $\gamma_{m,0}$ are measured at an arbitrary reference epoch $t_0$, and $t_e$ is the explosion time.
In this paper we adopt $t_0 = 13.2$~d (for which $r_0 \sim v_{ph} \times t_0 = 4 \times 10^{15}$~cm) and $t_e = 0$~d.
The temporal indices $\alpha_{r}$, $\alpha_{B}$, $\alpha_{\Im}$, $\alpha_{\gamma}$ are determined by the hydrodynamic evolution of the ejecta, as described in \citet{Soderberg05}.
In particular, $\alpha_{r}$ and $\alpha_{\Im}$ can be expressed as:
\begin{equation}
\alpha_r\ =\ \frac{n-3}{n-s},
\label{eq:alphar}
\end{equation}
\begin{equation}
\alpha_{\Im}\ =\ -s \alpha_r + \alpha_{\gamma} - 2\alpha_B,
\label{eq:alphaf}
\end{equation}
where $n$ and $s$ describe the density profile of the outer SN ejecta ($\rho_{ej} \propto r^{-n}$), and of the CSM ($\rho_{CSM} \propto r^{-s}$)\footnote{$s=0$ corresponds to the case of ISM-like CSM and $s=2$ corresponds to the case of wind-like CSM.}, respectively.
The self-similar conditions $s < 3$ and $n > 5$ result in $\sim 0.5 < \alpha_r < 1$ \citep{Chevalier82}.
In this work we consider a wind-like CSM case (i.e. $s = 2$), and $n = 10$ as appropriate for massive compact stars that are thought to be progenitors of H-stripped CCSNe.
In the standard scenario \citep{Chevalier96}, $\epsilon_e$ and $\epsilon_B$ do not vary with time, from which we derive through Eq.~\ref{eq:gf} that $\alpha_{\Im}=0$, implying that:
\begin{equation}
\alpha_B\ =\ \left(\frac{2-s}{2}\right) \alpha_r - 1,
\label{eq:alphab_sm}
\end{equation}
\begin{equation}
\alpha_{\gamma}\ =\ 2\, (\alpha_r-1).
\label{eq:alphagamma_sm}
\end{equation}
Since $\alpha_{\Im} = 0$ and under the equipartition hypothesis ($\Im = 1$; Eq.~\ref{eq:gf}), it follows that $\alpha_r = 0.875$ (Eq.~\ref{eq:alphar}), $\alpha_B = -1$ (Eq.~\ref{eq:alphab_sm}) and $\alpha_{\gamma} = -0.25$ (Eq.~\ref{eq:alphagamma_sm}).
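These values follow directly from Eqs.~(\ref{eq:alphar}), (\ref{eq:alphab_sm})
and (\ref{eq:alphagamma_sm}); a short numerical check (Python) also gives the
resulting decline rate of $\nu_m$ (Eq.~\ref{eq:num2}):
\begin{verbatim}
def temporal_indices(n=10.0, s=2.0):
    # Self-similar temporal indices in the standard scenario
    # (constant epsilon_e and epsilon_B, i.e. alpha_Im = 0).
    alpha_r = (n - 3.0) / (n - s)
    alpha_B = (2.0 - s) / 2.0 * alpha_r - 1.0
    alpha_gamma = 2.0 * (alpha_r - 1.0)
    return alpha_r, alpha_B, alpha_gamma

a_r, a_B, a_g = temporal_indices()
print(a_r, a_B, a_g)       # 0.875 -1.0 -0.25
print(2.0 * a_g + a_B)     # -1.5: nu_m declines as t^(-1.5)
\end{verbatim}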
Under these assumptions and through Eq.~(\ref{eq:rb}), the characteristic synchrotron frequency is:
\begin{equation}
\begin{split}
\nu_m(t)\ =\ & \gamma_m^2 \frac{q B}{2 \pi m_e c}\ =\ \gamma_{m,0}^2 \frac{q B_0}{2 \pi m_e c}
\left(\frac{t}{t_0}\right)^{2\alpha_{\gamma}+\alpha_B} \\
& = \nu_{m,0} \left(\frac{t}{t_0}\right)^{2\alpha_{\gamma}+\alpha_B}\;,
\label{eq:num2}
\end{split}
\end{equation}
where $q$ is the electron charge and $m_e$ is the electron mass.
The frequency $\nu_{m,0}\equiv \nu_{m}(t_0)$ depends on $\gamma_{m,0}$ and $B_0$ as follows:
\begin{equation}
\nu_{m,0}\ =\ \gamma_{m,0}^2 \frac{q B_0}{2 \pi m_e c} \;.
\label{eq:num3}
\end{equation}
The radio flux density at a given observing frequency $\nu$ and epoch $t$ is thus given by:
\begin{equation}
\begin{split}
F(t,\nu)\ =\ & 10^{26}\ C_f \left(\frac{t}{t_0}\right)^{(4\alpha_r - \alpha_B)/2} \left(1 - e^{-\tau_{\nu}^{\xi}}\right)^{1/\xi} \\
& \times {\nu}^{5/2}\ F_3(x)\ F_2^{-1}(x)\ \ \ {\rm{mJy}}
\end{split}
\label{eq:denflux}
\end{equation}
with the optical depth $\tau_{\nu}$:
\begin{equation}
\tau_{\nu}(t)\ =\ C_{\tau} \left(\frac{t}{t_0}\right)^{\alpha_r + (3+p/2)\alpha_B + (p-2)\alpha_{\gamma} + \alpha_{\Im}} \nu^{-(p+4)/2}\ F_2(x)\;.
\label{eq:optdepth}
\end{equation}
$C_f$ and $C_{\tau}$ are normalization constants (see Appendix A2 of \citealt{Soderberg05}), $F_2(x)$ and $F_3(x)$ are Bessel functions with $x = 2/3\,(\nu/\nu_m)$, and $\xi \in [0, 1]$ describes the sharpness of the spectral break between the optically thin and thick regimes.
We adopt $\xi = 1$.
As we can see from Eqs~(\ref{eq:denflux}, \ref{eq:optdepth}, \ref{eq:alphar} and \ref{eq:num2}), $F(t,\nu)$ depends on $C_f$, $C_{\tau}$, $p$, $n$, $s$, $\nu_{m,0}$, and $\xi$.
From Eqs~6--8 of \citet{Soderberg05} $C_f$ and $C_{\tau}$ can be expressed in terms of $r_0$, $B_0$ and $\eta$; thus, also using (\ref{eq:num3}), $F(t,\nu)$ can be expressed as a function of $r_0$, $B_0$, $p$, $n$, $s$, $\gamma_{m,0}$, $\eta$, and $\xi$, which are all fixed apart from $B_0$ and $\gamma_{m,0}$.
These two free parameters can be further expressed as a function of physically more useful quantities\footnote{These parameters are shown in Eqs.~13 and 14 of \citet{Soderberg05}, respectively.}, the SN progenitor mass-loss rate ($\dot{M}$) and the total kinetic energy of the radio-bright (fast) ejecta ($E$):
\begin{equation}
B_0 = \left(\frac{2 \eta \epsilon_e}{\Im_0 r_0^3}\right)^{1/2} \ E^{1/2} \;
\label{eq:b_0}
\end{equation}
\begin{equation}
\gamma_{m,0} = \Big(\frac{p-2}{p-1}\Big) \ \frac{2 m_p \epsilon_e v_w}{m_e c^2 r_0} \ \left(\frac{E}{\dot{M}}\right)
\label{eq:gamma_0}
\end{equation}
where $m_p$ is the proton mass and $v_w$ is the wind velocity.
Consequently, we express $\nu_{m,0}$ as a function of $\dot{M}$ and $E$ from (\ref{eq:num3}):
\begin{equation}
\nu_{m,0} = \left(\frac{p-2}{p-1}\right)^2 \frac{2 q}{\pi m_e c} \left(\frac{m_p v_w}{m_e c^2}\right)^2 \left(\frac{2 \eta \epsilon_e^5}{r_0^7 \Im_0}\right)^{1/2} \left(\frac{E^{5/2}}{\dot{M}^2}\right) \;.
\label{eq:nu_0b}
\end{equation}
As a result, $F(t,\nu)$ just depends on $\dot{M}$ and $E$.
We use a grid of $\dot{M}$ and $E$ values to compare our VLA upper limits (Table \ref{tab:tablevla1}) with the flux densities derived from (\ref{eq:denflux}).
In Figure~\ref{fig:grid_nr_sn} we explore the kinetic energy vs. mass-loss rate parameter space considering (i) the radio upper limits alone (hatched) and (ii) the radio plus X-ray limits (red), which result in more stringent constraints: $E \la 10^{45}$~erg for $\dot{M}\la 10^{-6}~M_\odot$\,yr$^{-1}$ and $E \la 10^{46}$~erg for $\dot{M}\la 10^{-4}M_\odot$\,yr$^{-1}$. We end by noting that at these low mass-loss rates the effects of FFA are negligible (e.g. \citealt{Weiler86,FranssonBjornsson98}).
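For transparency, the mapping from $(\dot{M}, E)$ to the model normalizations
of Eqs.~(\ref{eq:b_0})--(\ref{eq:nu_0b}) can be sketched as follows (Python),
using the fixed parameters adopted above ($r_0 = 4\times10^{15}$~cm, $p = 3$,
$\eta = 10$, $\epsilon_e = 1/3$, $\Im_0 = 1$, $v_w = 1000$~km s$^{-1}$). The
printed values are illustrative only; in practice, values of $\gamma_{m,0}$
below unity would be floored at $\gamma_m \approx 1$.
\begin{verbatim}
import math

M_P, M_E = 1.673e-24, 9.109e-28     # proton, electron mass [g]
C, Q_E = 2.998e10, 4.803e-10        # c [cm/s], electron charge [esu]
M_SUN_G, SEC_PER_YR = 1.989e33, 3.156e7

def ssa_normalizations(E_erg, mdot_msun_yr, r0=4e15, p=3.0,
                       eps_e=1.0/3.0, Im0=1.0, eta=10.0, v_w=1e8):
    # B_0 [G], gamma_m0 and nu_m0 [Hz] at the reference epoch t_0,
    # as functions of the fast-ejecta energy E and mass-loss rate.
    mdot = mdot_msun_yr * M_SUN_G / SEC_PER_YR        # [g/s]
    B0 = math.sqrt(2.0 * eta * eps_e / (Im0 * r0**3) * E_erg)
    gm0 = ((p - 2.0) / (p - 1.0)) * 2.0 * M_P * eps_e * v_w \
          / (M_E * C**2 * r0) * (E_erg / mdot)
    nu_m0 = gm0**2 * Q_E * B0 / (2.0 * math.pi * M_E * C)
    return B0, gm0, nu_m0

# A point near the boundary of the excluded region:
print(ssa_normalizations(1e45, 1e-6))
\end{verbatim}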
\begin{figure}
\centering
\includegraphics[width=8.8cm]{grid_nr_sne.eps}
\caption{
Regions of the total kinetic energy of the fast ejecta--mass-loss rate space excluded by VLA (hatched area) and VLA + XRT (red area) upper limits (see Table~\ref{tab:tablevla1}), as derived assuming the SSA model for a mildly relativistic, nearly isotropic explosion (Sect.~\ref{subsec:ssamod}).
In addition we show: some peculiar BL-Ic SNe (in green) [1] SN\,2002ap \citep{Berger02}, [2] SN\,2010ay \citep{Sanders12b}, [3] SN\,2007bg \citep{salas13}, [4-5-6] PTF\,11cmh - PTF\,11qcj - PTF\,14dby \citep{Corsi16}; the relativistic SNe (in blue) [7] SN\,2009bb \citep{Soderberg10a} and [8] SN\,2012ap \citep{Chakraborti15}; [9] SN\,2016coi (brown; \citealt{Terreran19}); the ll-GRBs (in red) [10] SN\,1998bw/GRB\,980425 \citep{LiChevalier99}, [11] SN\,2006aj/GRB\,060218 \citep{Soderberg06d} and [12] SN\,2010bh/GRB\,100316D \citep{Margutti13b}. \\
\label{fig:grid_nr_sn}
}
\end{figure}
\subsection{SSA emission from a relativistic GRB-like jet}
\label{sec:constraints}
We generated a grid of radio light-curves powered by synchrotron emission from off-axis relativistic jets using the {\sc boxfit} code (v2; \citealt{Vaneerten12b}), which is based on high-resolution, two-dimensional relativistic hydrodynamical simulations of relativistic jets. All the synthetic light curves were compared to our VLA upper limits (Table \ref{tab:tablevla1}) to determine the allowed region in the parameter space, using the same procedure as in \citet{Coppejans18}.
The radio emission from an off-axis jet depends on the following physical parameters: (1) the isotropic-equivalent total kinetic energy $E_{k,{\rm iso}}$; (2) the CSM density, either ISM-like ($n$ constant) or wind-like ($\rho_{CSM} = \dot{M}/(4\pi R^2 v_w)$, produced by a constant $\dot{M}$); (3) the microphysical shock parameters $\epsilon_e$ and $\epsilon_B$; (4) the jet opening angle $\theta_j$; (5) the observer angle with respect to the jet axis $\theta_{\rm obs}$.
We fix the power-law index of the shocked electron energy distribution to a typical value in the range $p=2$--$3$, as derived from GRB afterglow modeling (e.g., \citealt{Curran10,Wang15}), and we generate models over a range of $\dot{M}$ for an assumed wind velocity of $v_w = 1000$~km s$^{-1}$.
We explored a grid of parameters, specifically: $10^{-3}$ cm$^{-3} \leq n \leq 10^2$ cm$^{-3}$; $10^{-8}$ $M_\odot$ yr$^{-1} \leq \dot{M} \leq 10^{-3}$ $M_\odot$ yr$^{-1}$.
Two different jet opening angles were used, which encompass representative measured values for other GRBs: $\theta_j = 5^{\circ}$ and $30^{\circ}$.
We considered three observer angles ($\theta_{obs} = 30^{\circ}$, $60^{\circ}$, and $90^{\circ}$) and isotropic-equivalent kinetic energies in the range $10^{50}$~erg $\leq$ $E_{k,{\rm iso}} \leq 10^{55}$~erg. These ranges describe the typical parameters derived from accurate broadband modeling of GRB afterglows (e.g., \citealt{Schulze11,Laskar13,Perley14,Laskar16}).
Moreover, in this analysis we discuss the results for $\epsilon_e = 0.1$ and $\epsilon_B = 0.01$, but for completeness we show the results for other typical values in Figures~\ref{fig:grid_boxfit_ism} and \ref{fig:grid_boxfit_wind}. We find that our radio limits are consistent with the expected emission from off-axis ($\theta_{\rm obs} \geq 60^{\circ}$) narrow ($\theta_j = 5^{\circ}$) jets expanding in a low-density CSM environment with $\dot{M} \la 10^{-5}$~M$_{\odot}$ yr$^{-1}$ that are typical of BL-Ic SNe and GRBs. The allowed beaming-corrected kinetic energy values are $E_k \le 4 \times 10^{49}$~erg.
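The comparison step itself is straightforward; the following schematic Python
sketch shows the exclusion logic, assuming the model light curves have already
been generated (e.g., with {\sc boxfit}; the data structures below are
hypothetical and not part of the {\sc boxfit} interface):
\begin{verbatim}
def ruled_out(model_uJy, limits_uJy):
    # A parameter combination is excluded when the predicted flux
    # density exceeds the 3-sigma upper limit at any observed
    # (epoch [d], frequency [GHz]) pair.
    return any(model_uJy[key] > limits_uJy[key] for key in limits_uJy)

# Two of the VLA limits from Table 1, and a hypothetical
# late-peaking off-axis jet model:
limits = {(13.19, 5.0): 27.0, (929.66, 5.0): 19.8}
model  = {(13.19, 5.0):  4.0, (929.66, 5.0): 55.0}
print(ruled_out(model, limits))   # True: this jet is excluded
\end{verbatim}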
\begin{figure*}
\begin{center}
\scalebox{1.}
{\includegraphics[width=0.72\textwidth]{boxfit_ism.eps}}
\caption{
Constraints on jetted outflows in an ISM-like density profile in the CSM, based on the VLA upper limits of SN\,2014ad and hydrodynamic simulations with {\sc boxfit}(v2) code (Sect.~\ref{sec:constraints}).
Black circles represent a jet opening angle of $\theta_j = 5^{\circ}$, whereas gray circles represent $\theta_j = 30^{\circ}$.
The symbol size indicates the observer angle ($\theta_{obs}$) out to which we can rule out the corresponding jet, with larger symbols corresponding to larger $\theta_{obs}$.
Red crosses indicate that we cannot rule out an off-axis relativistic jet with the given parameters in SN\,2014ad.
The top (bottom) panels are $\epsilon_e = 0.1$ ($\epsilon_e = 0.01$), and the left (right) panels are $\epsilon_B = 0.0001$ ($\epsilon_B = 0.01$).
\label{fig:grid_boxfit_ism}
}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\scalebox{1.}
{\includegraphics[width=0.72\textwidth]{boxfit_wind.eps}}
\caption{
Constraints on jetted outflows in a wind-like density profile in the CSM ($\rho \propto r^{-2}$), based on the VLA upper limits of SN\,2014ad and hydrodynamic simulations with {\sc boxfit}(v2) code (Sect.~\ref{sec:constraints}).
See the caption of Figure~\ref{fig:grid_boxfit_ism} for a full description of the symbols.
\label{fig:grid_boxfit_wind}
}
\end{center}
\end{figure*}
\section{Discussion}
\label{sec:disc}
Here we put our results on the environment and on the energetics of SN\,2014ad into the broader context of nearby ($z\le 0.2$) BL-Ic SNe with or without an associated GRB.
\subsection{Constraints on uncollimated outflows in SN\,2014ad}
\label{subsec:disc_ordinary}
In the case of sub-relativistic and nearly isotropic ejecta (Sect.~\ref{subsec:ssamod}) expanding in a wind-like CSM, assuming equipartition ($\epsilon_e = \epsilon_B = 1/3$), Figure~\ref{fig:grid_nr_sn} shows that the combination of VLA + XRT data constrains the fast-ejecta kinetic energy to $E \la 10^{45}$~erg for $\dot{M}\la 10^{-6}~M_\odot$\,yr$^{-1}$ and to $E \la 10^{46}$~erg for $\dot{M}\la 10^{-4}M_\odot$\,yr$^{-1}$.
These very deep constraints rule out outflows with properties similar to (i) relativistic SNe, such as SN\,2009bb \citep{Soderberg10a} and SN\,2012ap \citep{Chakraborti15}, for which no GRB counterpart was detected, and (ii) SN\,1998bw, a prototypical GRB-SN associated with a low-luminosity GRB, propagating into a similar environment (Figure~\ref{fig:grid_nr_sn}). Our limits also point to very low density environments, consistent with previous findings that BL-Ic SNe favor low-density media (e.g., see Fig.~5 from \citealt{Margutti18b}), as was also the case for SN\,2002ap \citep{Berger02} and SN\,2010ay \citep{Sanders12b}.
\subsection{Is SN\,2014ad associated with an off-axis GRB-like jet?}
\label{subsec:disc_offaxis}
Our VLA radio observations place stringent constraints on off-axis relativistic jets expanding into an ISM-like (Figure~\ref{fig:grid_boxfit_ism}) and a wind-like CSM (Figure~\ref{fig:grid_boxfit_wind}), respectively (Sect.~\ref{sec:constraints}).
First, we consider the case of a wind-like CSM and a highly collimated jet with $\theta_j = 5^{\circ}$ (as is typical for cosmological GRBs) viewed off-axis, for $\epsilon_e = 0.1$ and $\epsilon_B = 0.01$ (top right panel, Figure \ref{fig:grid_boxfit_wind}).
These off-axis narrow jets are ruled out regardless of the observer angle for $\dot{M} \ga 10^{-5}$~M$_{\odot}$ yr$^{-1}$ and $E_{k,iso} \ga 10^{52}$~erg (typical value for a GRB).
Hence, GRB-like jets expanding either in a low-density CSM typical of BL-Ic SNe ($\dot{M} \la 10^{-5}$ -- $10^{-6}$~M$_{\odot}$ yr$^{-1}$ in Table~1 of \citealt{Smith14}; see also \citealt{LiChevalier99} and \citealt{Soderberg06d}) or in typical GRB environments ($10^{-7} \la \dot{M} \la 10^{-5}$~M$_{\odot}$ yr$^{-1}$; \citealt{Laskar14,Laskar15}) cannot be ruled out.
In the case of off-axis jets with larger opening angle $\theta_j = 30^{\circ}$, for $\epsilon_e = 0.1$ and $\epsilon_B = 0.01$ (top right panel, Figure \ref{fig:grid_boxfit_wind}), we obtain stronger constraints, due to their larger jet energy.
Specifically, regardless of the observer angle, we can rule out scenarios where $\dot{M} \ga 10^{-6}$~M$_{\odot}$ yr$^{-1}$ and $E_{k,{\rm iso}} \ga 10^{52}$~erg.
Mass-loss rates typically found in the winds of WR stars ($\dot{M} \la 10^{-5}$ -- $10^{-6}$~M$_{\odot}$ yr$^{-1}$; \citealt{Smith14}) are mostly ruled out.
In the case of wide ($\theta_j = 30^{\circ}$), slightly off-axis ($\theta_{obs}\le30^{\circ}$) jets, for $\epsilon_e = 0.1$ and $\epsilon_B = 0.01$ (top right panel, Figure \ref{fig:grid_boxfit_wind}), we can rule out the combination of $\dot{M} \ga 10^{-8}$~M$_{\odot}$ yr$^{-1}$ and $E_{k,{\rm iso}} \ga 10^{51}$~erg.
Assuming a progenitor wind velocity of $1000$~km s$^{-1}$, the CSM density profiles of all the SNe Ibc and of most of the GRBs detected to date are rejected (see Figure 5 in \citealt{Coppejans18}).
We also report the results for a jet propagating into an ISM-like CSM, as the modeling of GRB afterglows often indicates an ISM environment as opposed to a wind-like density profile (e.g., \citealt{Laskar14,laskar18}).
For $\epsilon_e = 0.1$ and $\epsilon_B = 0.01$ (top right panel, Figure \ref{fig:grid_boxfit_ism}), highly collimated jets with $\theta_j = 5^{\circ}$ are ruled out regardless of the observer angle for $n \ga 10$~cm$^{-3}$ and $E_{k,{\rm iso}} \ga 10^{50}$~erg, or for $n \ga 10^{-1}$~cm$^{-3}$ and $E_{k,{\rm iso}} \ga 10^{52}$~erg. A jet with $\theta_j = 30^{\circ}$ is ruled out for $n \ga 10^{-1}$~cm$^{-3}$ and $E_{k,{\rm iso}} \ga 10^{50}$~erg.
We obtain deeper constraints for jets with $\theta_{obs} < 60^{\circ}$: for $\theta_j = 5^{\circ}$ and $\theta_{obs} = 60^{\circ}$ a jet is ruled out for $n \ga 10^{-3}$~cm$^{-3}$ and $E_{k,{\rm iso}} \ga 10^{52}$~erg.
Hence, GRB-like jets expanding in a ISM-like medium with $n \la 10^{-2}$~cm$^{-3}$ and $E_{k,{\rm iso}} \la 10^{50}$~erg cannot be ruled out: these densities are compatible with those of some GRBs ($10^{-5} \la n \la 10^3$~cm$^{-3}$; e.g., \citealt{Laskar14,Laskar15}).
We conclude that we cannot rule out the case of an off-axis ($\theta_{obs} \ga 30^{\circ}$), narrow ($\theta_j = 5^{\circ}$) GRB-like jet ploughing through low-density CSM typical of BL-Ic SNe and GRBs; this scenario allows for isotropic-equivalent kinetic energies $E_{k,{\rm iso}} \la 10^{52}$~erg in environments sculpted by $\dot{M} \la 10^{-6}$~M$_{\odot}$ yr$^{-1}$.
\subsection{Constraining the $E_k(\Gamma\beta)$ distribution of the ejecta of SN\,2014ad}
\label{subsec:disc_others}
Compared with GRB-less BL-Ic SNe, GRB-SNe seemed to show (i) a higher mass of $^{56}$Ni synthesized in the SN explosion, (ii) a higher degree of asphericity in the SN explosion, and (iii) a lower metallicity of the SN environment (e.g., \citealt{Cano13}).
However, \citet{Taddia19} recently showed that the distributions of these observables for the two classes of BL-Ic SNe are still compatible within uncertainties.
Another way to investigate the differences between the two classes is offered by the slope $x$ of the kinetic energy profile ($E_k$) as a function of the ejecta four-velocity ($\Gamma \beta$), parametrized as $E_k\propto (\Gamma\beta)^x$. What is more, this may help to reveal the nature of the explosion (see Fig.~2, \citealt{Margutti14b}).
Steep profiles ($x \la -2.4$) indicate a short-lived central engine, and hence an ordinary Ibc SN \citep{Lazzati12}; flat profiles ($x \ga -2.4$) indicate the presence of a longer-lived central engine, and hence a possible GRB-SN \citep{Margutti13b}; very flat profiles ($x = -0.4$) are typical of ordinary GRBs in the decelerating Blandford-McKee phase \citep{BM76}, whereas very steep profiles ($x = -5.2$) are characteristic of a pure hydrodynamical spherical explosion \citep{Tan01}.
For SN\,2014ad we explored a grid of parameters in the $E_k$ -- $\Gamma\beta$ space.
$\Gamma$ is calculated at $t = 1$~d by applying the standard formulation of fireball dynamics for expansion in a wind-like CSM (e.g., \citealt{ChevalierLi00}):
\begin{equation}
\Gamma_{(t = 1\,d)} \sim 18.7 \left(\frac{E_{k,{\rm iso}}}{10^{54}erg}\right)^{1/4} \left(\frac{A_*}{0.1}\right)^{-1/4}\;,
\label{eq:gamma_wind}
\end{equation}
where $A_*$ is the circumstellar density, defined with respect to progenitor mass-loss rate $\dot{M}$ and wind velocity $v_w$ as:
\begin{equation}
A_*\ =\ \left(\frac{\dot{M}}{10^{-5}M_\odot \; {\rm yr}^{-1}}\right) \left(\frac{v_w}{1000 \; {\rm km\ s}^{-1}}\right)^{-1}\;.
\label{eq:a_wind}
\end{equation}
The allowed regions are derived through the conditions described in Sect.~\ref{subsec:disc_offaxis} for the case of a highly collimated jet with $\theta_j = 5^{\circ}$ (as typical for cosmological GRBs) viewed off-axis in a wind-like CSM (Figure~\ref{fig:grid_boxfit_wind}; top right panel).
Figure~\ref{fig:grid_ekgammabeta} shows the allowed region of the beaming-corrected energy $E_k$ -- ejecta velocity $\Gamma \beta$ space (in the relativistic regime).
Relativistic jets for SN\,2014ad are possible for progenitors with very low densities ($\dot{M} \la 10^{-7}$~M$_{\odot}$ yr$^{-1}$); for example, faster-moving ejecta (with a beaming-corrected energy $E_k \sim 10^{51}$~erg) ploughing through a wind-like CSM with a very low density $\dot{M} \sim 10^{-7}$~M$_{\odot}$ yr$^{-1}$ have $\Gamma \beta \sim 24$ (at $t = 1$~d), compatible with the very flat profile ($x = -0.4$) of ordinary GRBs.
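This number can be reproduced directly from Eqs.~(\ref{eq:gamma_wind}) and
(\ref{eq:a_wind}), as in the following Python sketch:
\begin{verbatim}
import math

def gamma_wind(E_k_iso, A_star):
    # Bulk Lorentz factor at t = 1 d for deceleration in a
    # wind-like CSM.
    return 18.7 * (E_k_iso / 1e54)**0.25 * (A_star / 0.1)**(-0.25)

theta_j = math.radians(5.0)
f_b = 1.0 - math.cos(theta_j)    # beaming fraction of a 5-deg jet
E_k = 1e51                       # beaming-corrected energy [erg]
A_star = 0.01                    # Mdot = 1e-7 M_sun/yr, v_w = 1000 km/s
gamma = gamma_wind(E_k / f_b, A_star)
beta = math.sqrt(1.0 - 1.0 / gamma**2)
print(gamma * beta)              # ~24, as quoted above
\end{verbatim}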
\begin{figure}
\centering
\includegraphics[width=8.8cm]{grid_ek_vs_gammabeta.eps}
\caption{
Region of the beaming-corrected energy $E_k$ -- ejecta velocity $\Gamma \beta$ (with $\Gamma$ estimated at $t = 1$~d) space allowed by our upper limits on SN\,2014ad (for a wind-like CSM in the relativistic regime).
The color scale shows the allowed progenitor mass-loss rate $\dot{M}$.
The dash-dotted lines indicate the slope $x$ of the kinetic energy profile.
The orange hatched area indicates the region of relativistic SNe, where the cocoon emission might be observable \citep{DeColle18a,DeColle18b}. \\
\label{fig:grid_ekgammabeta}
}
\end{figure}
The lack of any detected GRB counterpart is consistent with an off-axis GRB jet propagating in a wind-like CSM with a very low density ($\dot{M} \la 10^{-7}$~M$_{\odot}$ yr$^{-1}$).
\subsection{Constraints on Cocoon emission in SN\,2014ad}
\label{subsec:cocoon}
The interaction between the jet and the outer layers of the progenitor star inflates a hot envelope around the jet, called the cocoon.
The recent broadband spectroscopic analysis of \citet{Izzo19} of a BL-Ic GRB-SN (SN\,2017iuk/GRB\,171205A) shows the first direct evidence for the cocoon emission.
This cocoon is characterized by a very high expansion velocity ($\sim 0.3\,c$) and probably originates from the energy injection of a mildly-relativistic GRB jet.
This discovery could explain the lack of GRBs observed in association with some BL-Ic SNe: because the jet transfers a significant fraction of its total energy to the cocoon, it produces the typical GRB emission only if it manages to completely pierce the stellar envelope.
This conclusion is in agreement with the analysis of \citet{DeColle18a,DeColle18b}: they show that the radio emission observed in relativistic SNe can be explained as synchrotron emission from the cocoon created by an off-axis GRB jet (either failed or successful) that propagates through the progenitor star.
Figure~\ref{fig:grid_ekgammabeta} shows the region (orange hatched area) of relativistic SNe, where the cocoon emission might in principle be observable: even though the radio emission from SN\,2014ad is much fainter than that of SN\,2009bb and SN\,2012ap (Figure~\ref{fig:radioworld}), this region is compatible with the $E_k$ of the fast ejecta for a SN\,2014ad progenitor with moderately low densities ($\dot{M} \sim 10^{-5}$~M$_{\odot}$ yr$^{-1}$).
\citet{DeColle18a} suggest that, in the off-axis GRB scenario, the cocoon synchrotron emission at radio frequencies dominates (i) at all times for a failed GRB/cocoon or a weak GRB observed off-axis, or (ii) only at early times for energetic off-axis jets, whose emission peaks at late times (on a timescale of years).
A more quantitative discussion of the cocoon emission for SN\,2014ad is beyond the scope of the present investigation.
\section{Conclusions}
\label{sec:conc}
We present deep X-ray and radio limits for the BL-Ic SN\,2014ad.
Radio and X-ray observations are crucial for probing the fastest moving ejecta in the explosion, as the optical emission is produced by the slow-moving ejecta.
Previous studies of this source showed that it has a number of properties that, taken together, suggest a possible GRB counterpart: a large bulk kinetic energy $E_k$ of the slow ejecta, asphericity in the explosion and ejecta velocity, a large inferred nickel mass, and a low progenitor mass-loss rate $\dot{M}$.
Consequently, we investigated two different physical scenarios for SN\,2014ad:
(i) a sub-relativistic, nearly isotropic explosion of an ordinary BL-Ic SN in a wind-like CSM (Sect.~\ref{subsec:ssamod});
(ii) an off-axis relativistic jet (Sect.~\ref{sec:constraints}).
These models place strong constraints on the total energy of the fast ejecta ($E$), the progenitor mass-loss rate ($\dot{M}$), the jet opening angle ($\theta_j$), and the observer angle ($\theta_{\rm obs}$).
We obtained the following results:
\begin{itemize}
\item Assuming that the dominant source of X-ray emission at early times is IC emission from the upscattering of optical photospheric photons into the X-ray band by relativistic electrons at the shock front (Sect.~\ref{sec:IC}), we infer $\dot{M} < 5\times10^{-5}$~M$_{\sun}$ yr$^{-1}$ for a spherical outflow, assuming a wind velocity $v_w = 1000$~km s$^{-1}$.
\item If SN\,2014ad launched a sub-relativistic and isotropic outflow (Sect.~\ref{subsec:ssamod}), assuming equipartition ($\epsilon_e = \epsilon_B = 0.33$) we derive limits of $E \la 10^{45}$~erg for $\dot{M}\la 10^{-6}~M_\odot$\,yr$^{-1}$ and $E \la 10^{46}$~erg for $\dot{M}\la 10^{-4}M_\odot$\,yr$^{-1}$.
These deep constraints rule out outflows with properties similar to those of (i) the relativistic SN\,2009bb and SN\,2012ap, for which no associated GRB was reported, and (ii) SN\,1998bw, a prototypical GRB-SN, propagating into a similar environment.
$E$ and $\dot{M}$ of the kind seen in the GRB-less SN\,2002ap and SN\,2010ay, which are characterized by a modest energy budget in the fast ejecta, are not ruled out.
\item If SN\,2014ad launched a relativistic jet, we (i) rule out collimated on-axis jets of the kind detected in GRBs, and (ii) put strong constraints on the energies and CSM densities for an off-axis jet (Figures~\ref{fig:grid_boxfit_ism} and \ref{fig:grid_boxfit_wind}).
We cannot rule out an off-axis GRB in very low-density CSM environments (e.g., $\theta_{\rm obs} \ga 30^{\circ}$, $\theta_j = 5^\circ$, in a CSM sculpted by $\dot{M} \la 10^{-6}$~M$_{\odot}$ yr$^{-1}$, typical of BL-Ic SNe and GRBs).
Moreover, we cannot reject the possibility that the radio synchrotron emission is dominated by the cocoon created by an off-axis GRB jet propagating through the progenitor star, as expected for relativistic SNe.
\end{itemize}
With our analysis of the off-axis jet scenario we have demonstrated that it is not possible to rule out off-axis jets expanding into low-density environments (as previously found by \citealt{Bietenholz14VLBI} for other SNe). For SN\,2014ad we find $\dot{M} \la 10^{-6}$~M$_{\odot}$ yr$^{-1}$ (Figure \ref{fig:grid_ekgammabeta}).
\emph{If} SN\,2014ad was indeed powered by an off-axis relativistic jet, our X-ray and radio observations imply extremely low environment densities and energies coupled to the jet (unless the jet was far off-axis).
Deep radio and X-ray observations at early \emph{and} at late times of a large sample of nearby BL-Ic SNe will clarify if relativistic jets are ubiquitous in BL-Ic SNe.
\acknowledgments
We thank D.~K.~Sahu for kindly sharing their bolometric light curves.
M.M. thanks M.~Orienti and E.~Egron for their valuable suggestions about VLA data reduction and Bath University for the hospitality during the final stages of this work.
We acknowledge the University of Ferrara for the use of the local HPC facility, co-funded by the ``Large-Scale Facilities 2010'' project (grant 7746/2011).
We thank the University of Ferrara and INFN--Ferrara for access to the COKA GPU cluster.
This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.
We gratefully acknowledge Piero Rosati for granting us the use of a proprietary HPC facility.
Development of the {\sc boxfit} code was supported in part by NASA through grant NNX10AF62G issued through the Astrophysics Theory Program and by the NSF through grant AST-1009863.
Simulations for {\sc boxfit}v2 have been carried out in part on the computing facilities of the Computational Center for Particle and Astrophysics of the research cooperation ``Excellence Cluster Universe'' in Garching, Germany. Support for this work was provided by Universit\`a di Ferrara through grant FIR~2018 ``A Broad-band study of Cosmic Gamma-Ray Burst Prompt and Afterglow Emission" (PI Guidorzi).
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\bibliographystyle{apj}
\section{Introduction}
Discontinuous Galerkin (DG) methods have gained increasing popularity in the last decades for solving the compressible and incompressible Navier-Stokes equations \cite{Ferrer2017,Manzanero2018b,Wang2013High,Fraysse2016,Browne2014a,Cockburn1998}.
The lack of a continuity constraint on element interfaces makes DG methods robust for describing advection-dominated problems when an appropriate Riemann solver is selected, and allows them to handle non-conforming meshes with hanging nodes and/or different polynomial orders efficiently \cite{Riviere2008,Kopriva2002,Ferrer2012}.
This is advantageous for accelerating the computations through adaptation strategies that adjust the element size (h) or the polynomial order (p) locally. Multigrid solvers have also been used to accelerate high-order DG time marching computations for a fixed number of degrees of freedom \cite{Fidkowski2005,Luo2006,Luo2006a,Wang2007,Bassi2009,Nastase2006,Shahbazi2009,Wang2009,Mitchell2010,Botti2017}.
The Discontinuous Galerkin Spectral Element Method (DGSEM) \cite{kopriva2009implementing,Black1999} is a high-order nodal variant of the DG technique on hexahedral meshes that is especially suited for mesh adaptation strategies because, in addition to the mentioned properties, it handles p-anisotropic representations efficiently \cite{Kopriva2002,Kompenhans2016,RuedaRamirez2018}.\\
To fully exploit this feature, we can adapt the mesh locally and anisotropically (both in element size and approximation order), so that the solution captures the flow features of interest at a minimum cost. Local adaptation can be performed by subdividing or merging elements (h-adaptation) or by enriching or reducing the polynomial order in certain elements (p-adaptation). To that end, it is paramount to identify the flow regions that require refinement or coarsening. This has been done historically using three different approaches: the \textit{feature-based adaptation}, the \textit{adjoint-based adaptation}, and the \textit{local error-based adaptation}. A comparison of these three approaches was performed by Fraysse et al. \cite{Fraysse2012} in the context of finite volume approximations and by Kompenhans et al. \cite{Kompenhans2016a} and Naddei et al. \cite{Naddei2018} for high-order DG methods. The key ideas behind the adaptation approaches are:\\
\begin{itemize}
\item The \textit{feature-based adaptation} is the classical approach and consists in refining where high velocity, density or pressure gradients are identified. The main disadvantage of these methods is that there is no direct relation between the adaptation criterion and the numerical errors and thus the accuracy is not easily predictable.\\
\item A second and more sophisticated approach is known as \textit{adjoint-based adaptation}. In this approach, a functional target is defined (e.g. drag or lift) and the adjoint problem is solved in order to obtain a spatial distribution of the functional error, which is then used for adapting the mesh. This technique was originally developed for variational formulations \cite{Estep1995,Hartmann2002,Hartmann2006}, and it has been also implemented successfully for Finite Volume schemes \cite{Venditti2002,Pierce2004}. More recently, Wang and Mavriplis \cite{Wang2009} implemented a non-variational formulation for the error estimates and used it to adapt a DG method. The main drawback of this approach is the high computational cost involved in solving the adjoint problem and the storage requirements needed for saving the error estimators.\\
\item A computationally more efficient alternative is the \textit{local error-based adaptation}, which can be based on any local error estimate. On the one hand, estimations of the local discretization error have been used by Mavriplis \cite{Mavriplis1989,Mavriplis1994} to develop hp-adaptation techniques for the spectral element method. Later, Casoni et al. \cite{EvaCasoniyAntonioHuerta2011} extended her approach to adapt the artificial viscosity in shock capturing discontinuous Galerkin discretizations. On the other hand, the $\tau$-estimation method proposed by Brandt \cite{Brandt1984}, which estimates the local truncation error by injecting a fine grid solution into coarser meshes, has been used for adaptation purposes in low-order schemes \cite{berger1987adaptive,Fraysse2012,Fraysse2013,Fraysse2014,Syrakos2012,Syrakos2006}. Rubio et al. \cite{Rubio2013} extended the $\tau$-estimation approach to high-order methods using a continuous Chebyshev collocation method. Later, Rubio et al. applied it to DGSEM discretizations \cite{Rubio2015}, and studied the quasi-\textit{a priori} truncation error estimation, which allows estimating the truncation error without having fully converged fine solutions. Kompenhans et al. \cite{Kompenhans2016} applied the $\tau$-estimation approach to perform p-adaptation using the Euler and Navier-Stokes equations, and showed that a reduction of the truncation error increases the numerical accuracy of all functionals. Furthermore, Kompenhans et al. \cite{Kompenhans2016a} also compared $\tau$-based to featured based adaption, showing better performance for the former. The adaptation strategy consisted in converging a high order representation (reference mesh) to a specified global residual and then performing a single error estimation followed by a corresponding p-adaptation process. More recently, Rueda-Ramírez et al. \cite{RuedaRamirez2018} developed a new method for estimating the truncation error that is cheaper to evaluate than previous implementations, and showed that it produces very accurate extrapolations of the truncation error, which enables using coarser reference meshes.\\
\end{itemize}
The second methodology used in this work are multigrid algorithms. Multigrid methods were first proposed by Brandt \cite{Brandt1977}, who discovered that the classic iterative methods (also referred to as \textit{smoothers}) eliminate the high-frequency components of the error quickly, but fail to eliminate the low-frequency components efficiently. Therefore, he proposed to use coarser meshes to eliminate low-frequency modes. His approach is known as h-multigrid and has been used extensively in low order methods such as traditional Finite Difference and Finite Volume schemes \cite{Hortmann1990,Leister1992,versteeg2007introduction}. Craig and Zienkiewicz \cite{craig1985multigrid}, and Rønquist and Patera \cite{Ronquist1987} were the first authors working on high-order methods that proposed the use of the polynomial order, $p$, to define the levels of a multigrid scheme, the former for p-finite elements and the latter for nodal spectral elements. After these initial works, the use of multilevel methods spread in the high-order community; initially as p-multigrid methods \cite{Fidkowski2005,Luo2006,Luo2006a,Wang2007,Bassi2009} and more recently as hp-multigrid methods \cite{Nastase2006,Shahbazi2009,Wang2009,Mitchell2010,Botti2017}. Most of these implementations use modal hierarchical shape functions \cite{Nastase2006,Wang2007,Fidkowski2005,Shahbazi2009}, and only a small number of publications focus on nodal-based shape functions \cite{Bassi2009,Fidkowski2005}.\\
Two types of multilevel methods can be found in the literature: linear and nonlinear multigrid methods. The former is \textit{de facto} a linear solver and is usually employed to solve the system of equations obtained from an implicit time integration scheme after linearizing with a Newton or Picard iteration. In this case, the smoother is an iterative method for sparse linear systems \cite{saad2003iterative}. The latter, also known as Full Approximation Scheme (FAS), consists in applying the multigrid directly to the set of nonlinear equations. In such a case, the smoother can be either a time-marching scheme (implicit or explicit), or an iterative method applied to the linearized problem. A comparison of linear and nonlinear multigrid methods for DG discretizations can be found in \cite{Nastase2006}. In our work, we make use of the nonlinear multigrid scheme since, as will be shown, it enables the estimation of the truncation error of coarse representations. \\
In this paper, we build on the work on p-adaptation by Kompenhans et al. \cite{Kompenhans2016} and on $\tau$-estimators by Rueda-Ramírez et al. \cite{RuedaRamirez2018}, and combine them with multigrid solution techniques in order to accelerate the convergence of steady-state solutions of the Navier-Stokes equations using the DGSEM. We use the multigrid scheme both as a solver and as an estimator of the truncation error of anisotropic polynomial representations. To do so, we show that the recently developed truncation error estimator by Rueda-Ramírez et al. \cite{RuedaRamirez2018} is well suited to be evaluated inside an anisotropic p-multigrid cycle with a negligible extra cost. The proposed method results in measured speed-ups of up to 816 in a proposed 2D boundary layer case and of 151 in a 3D study of the flow around a sphere, as compared to a traditional explicit solution method. The coupling of multigrid and p-adaptation also enables us to propose a multi-stage adaptation process with increasing order representations, which reduces the computational cost when very accurate results are required, resulting in speed-ups of 2.6 with respect to the single-stage adaptation process. To the best of our knowledge, this is the first work on DG that couples an anisotropic p-adaptation technique with multigrid.\\
The rest of the paper is organized as follows: in section \ref{sec:DGSEM}, the discontinuous Galerkin spectral element method is briefly explained. Section \ref{sec:AccelTech} details the acceleration methods the current work builds on; namely, multigrid, p-adaptation based on $\tau$-estimations, and their coupling. We finish section \ref{sec:AccelTech} describing how the coupling of multigrid and p-adaptation enables to introduce new features that speed-up the solution procedure. In section \ref{sec:Results}, we study the performance of the proposed p-adaptation algorithm by means of solving 2D and 3D boundary layer test cases. Finally, the main conclusions are gathered in section \ref{sec:Conclusions}.
\section{The Discontinuous Galerkin Spectral Element Method}\label{sec:DGSEM}
We consider the approximation of systems of conservation laws,
\begin{equation}\label{eq:NScons}
\mathbf{q}_t + \nabla \cdot \mathscr{F} = \mathbf{s},
\end{equation}
where $\mathbf{q}$ is the vector of conserved variables, $\mathscr{F}$ is the flux dyadic tensor which depends on $\mathbf{q}$ and $\nabla \mathbf{q}$, and $\mathbf{s}$ is a source term. As detailed in \ref{sec:NS}, the compressible Navier-Stokes equations can be represented using equation \eqref{eq:NScons}. Multiplying equation \eqref{eq:NScons} by a test function $\mathbf{v}$ and integrating by parts over the domain $\Omega$ yields the weak formulation:
\begin{equation} \label{eq:weak}
\int _{\Omega} \mathbf{q}_t \mathbf{v} \textrm{d} \Omega - \int _{\Omega} \mathscr{F} \cdot \nabla \mathbf{v} \textrm{d} \Omega + \int_{\partial \Omega} \mathscr{F} \cdot \mathbf{n} \mathbf{v} \textrm{d} \sigma = \int _{\Omega} \mathbf{s} \mathbf{v} \textrm{d} \Omega,
\end{equation}
where $\mathbf{n}$ is the normal unit vector on the boundary $\partial \Omega$. Let the domain $\Omega$ be approximated by a tessellation $\mathscr{T} = \lbrace e \rbrace$, a combination of $K$ finite elements $e$ of domain $\Omega^e$ and boundary $\partial \Omega^e$. Moreover, let $\mathbf{q}$, $\mathbf{s}$, $\mathscr{F}$ and $\mathbf{v}$ be approximated by piece-wise polynomial functions (that are continuous in each element) defined in the space of $L^2$ functions:
\begin{equation}
\mathscr{V}^N = \lbrace \mathbf{v}^N \in L^2(\Omega) : \mathbf{v}^N\vert_{\Omega^e} \in \mathscr{P}^N(\Omega^e) \ \ \forall \ \Omega^e \in \mathscr{T} \rbrace,
\end{equation}
where $\mathscr{P}^N(\Omega^e)$ is the space of polynomials of degree at most $N$ defined in the domain of the element $e$. We remark that the functions in $\mathscr{V}^N$ may be discontinuous at element interfaces and that the polynomial order $N$ may be different in each element and direction. Equation \eqref{eq:weak} can then be rewritten for each element as:
\begin{equation} \label{eq:weak2}
\int _{\Omega^e} {\mathbf{q}^e_t}^N {\mathbf{v}^e}^N \textrm{d} \Omega^e - \int_{\Omega^e} {\mathscr{F}^e}^N \cdot \nabla {\mathbf{v}^e}^N \textrm{d} \Omega^e + \int_{\partial \Omega^e} {\mathscr{F}^*} \left( {\mathbf{q}^e}^N, {\mathbf{q}^-}^N, \mathbf{n} \right) {\mathbf{v}^e}^N \textrm{d} \sigma^e = \int _{\Omega^e} {\mathbf{s}^e}^N {\mathbf{v}^e}^N \textrm{d} \Omega^e,
\end{equation}
where the superindex ``$e$" refers to the functions as evaluated inside the element $e$, i.e. ${\mathbf{q}^e}^N = {\mathbf{q}^N}\rvert_{\Omega^e}$; whereas the superindex ``$-$" refers to the value of the functions on the external side of the interface $\partial \Omega^e$. The numerical flux function, ${\mathscr{F}^*}$, allows one to define the flux uniquely at the element interfaces and to prescribe the boundary data weakly as a function of the conserved variable on both sides of the boundary/interface (${\mathbf{q}^{e}}^N$ and ${\mathbf{q}^{-}}^N$) and the normal vector ($\mathbf{n}$). Multiple choices for the numerical flux functions can be found in the literature \cite{toro2013riemann,Manzanero2018a}. In the present work, we use Roe \cite{Roe1981} as the advective Riemann solver and Bassi-Rebay 1 \cite{Bassi1997} as the diffusive Riemann solver. We remark that the numerical flux must be computed in a specific manner when the representation is non-conforming \cite{Kopriva2002}. \\
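To make the role of ${\mathscr{F}^*}$ concrete, the following minimal Python sketch implements the simple local Lax-Friedrichs (Rusanov) flux; we stress that this is only an illustration, and that the results of this paper use the Roe and Bassi-Rebay 1 solvers mentioned above. The callables \texttt{flux} and \texttt{max\_speed} are placeholders for the normal physical flux and the largest characteristic speed:
\begin{verbatim}
def rusanov_flux(qL, qR, n, flux, max_speed):
    # Local Lax-Friedrichs (Rusanov) numerical flux: the average of the
    # two one-sided fluxes plus a dissipation term scaled by the
    # fastest characteristic speed across the interface.
    lam = max(max_speed(qL, n), max_speed(qR, n))
    return 0.5 * (flux(qL, n) + flux(qR, n)) - 0.5 * lam * (qR - qL)
\end{verbatim}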
Since $\mathbf{q}^N$, $\mathbf{s}^N$, $\mathbf{v}^N$ and $\mathscr{F}^N$ belong to the polynomial space $\mathscr{V}^N$, it is possible to express them inside every element as a linear combination of basis functions $\phi_n \in \mathscr{P}^N(\Omega^e)$:
\begin{eqnarray}\label{eq:PolExp}
\mathbf{q}\rvert_{\Omega^e} \approx {\mathbf{q}^e}^N = \sum_n \mathbf{Q}^e_n \phi^e_n (\mathbf{x}), & & \ \ \ \
\mathbf{s}\rvert_{\Omega^e} \approx {\mathbf{s}^e}^N = \sum_n \mathbf{S}^e_n \phi^e_n (\mathbf{x}), \nonumber \\
\mathbf{v}\rvert_{\Omega^e} \approx {\mathbf{v}^e}^N = \sum_n \mathbf{V}^e_n \phi^e_n (\mathbf{x}), & &
\mathscr{F}\rvert_{\Omega^e} \approx {\mathscr{F}^e}^N = \sum_n \pmb{\mathscr{F}}^e_n \phi^e_n (\mathbf{x}).
\end{eqnarray}
Therefore, equation \eqref{eq:weak2} can be expressed in a discrete form as
\begin{equation} \label{eq:DiscretNSElem}
[\mathbf{M}]^e \frac{\partial \mathbf{Q}^e}{\partial t} + \mathbf{F}^e(\mathbf{Q}) = [\mathbf{M}]^e \mathbf{S}^e,
\end{equation}
where $\mathbf{Q}^e=(\mathbf{Q}^e_1, \mathbf{Q}^e_2, \cdots, \mathbf{Q}^e_n, \cdots)^T$ is the local solution that contains the coefficients of the linear combination for the element $e$; $\mathbf{Q}=(\mathbf{Q}^1,\mathbf{Q}^2, \cdots, \mathbf{Q}^K)^T$ is the global solution that contains the information of all elements; $[\mathbf{M}]^e$ is known as the elemental mass matrix, and $\mathbf{F}^e(\cdot)$ is a nonlinear spatial discrete operator on the element level:
\begin{align}
[\mathbf{M}]^e_{i,j} &= \int_{\Omega^e} \phi^e_i \phi^e_j \textrm{d} \Omega^e, \\
\mathbf{F}^e(\mathbf{Q})_j &= \sum_i \left[ - \int_{\Omega^e} \pmb{\mathscr{F}}_i^e \cdot \phi^e_i \nabla \phi^e_j \textrm{d} \Omega^e \right] + \int_{\partial \Omega^e} {\mathscr{F}^*}^N \left( \mathbf{Q}^e, \mathbf{Q}^{-}, \mathbf{n} \right) \phi^e_j \textrm{d} \sigma^e.
\end{align}
Note that the operator $\mathbf{F}^e$ is applied to the global solution, since it is responsible for (weakly) connecting the elements of the mesh. Assembling the contributions of all elements into the global system we obtain
\begin{equation} \label{eq:DiscretNS}
[\mathbf{M}] \frac{\partial \mathbf{Q}}{\partial t} + \mathbf{F}(\mathbf{Q}) = [\mathbf{M}] \mathbf{S}.
\end{equation}
In the DGSEM \cite{kopriva2009implementing}, the tessellation is performed with non-overlapping hexahedral elements of order $N=(N_1,N_2,N_3)$ (independent in every direction) and the integrals are evaluated numerically by means of a Gaussian quadrature that is also of order $N=(N_1,N_2,N_3)$. For complex geometries, it is most convenient to perform the numerical integration in a reference element and transform the results to the physical space by means of a high-order mapping of order $M = (M_1,M_2,M_3)$:
\begin{eqnarray} \label{eq:mapping}
\mathbf{x}^e = \mathbf{x}^e \left( \pmb{\xi} \right) \in \mathscr{P}^M, & & \pmb{\xi} = \left( \xi, \eta, \zeta \right) \in \left[ -1, 1 \right]^3.
\end{eqnarray}
The differential operators can be expressed in the reference element in terms of the covariant ($\mathbf{a}_i$) and contravariant ($\mathbf{a}^i$) metric tensors \cite{kopriva2009implementing}:
\begin{equation}
\mathbf{a}_i = \frac{\partial \mathbf{x}^e}{\partial \xi_i}, \ \ \mathbf{a}^i = \nabla \xi_i, \ \ i = 1, 2, 3.
\end{equation}
Using these mappings, the gradient and divergence operators become
\begin{equation}
\nabla q = \frac{1}{J} \sum_{i=1}^d \frac{\partial}{\partial \xi_i} \left( J\mathbf{a}^i q \right), \ \ \nabla \cdot \mathbf{f} = \frac{1}{J} \sum_{i=1}^d \frac{\partial}{\partial \xi_i} \left( J\mathbf{a}^i \cdot \mathbf{f} \right),
\end{equation}
where the Jacobian of the transformation can be expressed in terms of the covariant metric tensor:
\begin{equation}
J = \mathbf{a}_i \cdot \left( \mathbf{a}_j \times \mathbf{a}_k \right), \ \ \left( i,j,k \right) \ \textrm{cyclic}.
\end{equation}
The covariant vectors can be readily obtained from the mapping (equation \eqref{eq:mapping}). For 2D problems, the contravariant vectors can be obtained with the well-known ``cross product form" \cite{Kopriva2006}. However, for fully 3D problems, the contravariant vectors must be obtained using either the ``conservative curl form" or the ``invariant curl form" \cite{Kopriva2006}. Since in this work we deal with 3D curved meshes, the ``invariant curl form" is selected:
\begin{equation}
Ja_n^i = - \frac{1}{2} \hat x_i \cdot \nabla_{\xi} \times \left[ \mathbf{I}^N \left( X_l \nabla_{\xi} X_m - X_m \nabla_{\xi} X_l \right) \right] \ \ \ i=1,2,3, \ n=1,2,3, \ \ (n,m,l) \ \text{cyclic},
\end{equation}
where $\mathbf{I}^N$ is an interpolating operator that converts an arbitrary continuous function into a polynomial expansion (as in equation \eqref{eq:PolExp}).\\
Similarly, in the DGSEM the order of the mapping ($M$ in equation \eqref{eq:mapping}) must be $M_i \le N_i$ for 2D, 2D-extruded and 3D p-conforming representations (subparametric or at most isoparametric mapping) to retain free-stream-preservation \cite{Kopriva2006}, whereas it is limited to $M_i \le N_i/2$ for general 3D p-nonconforming representations (David Kopriva, private communication, April 2018). \\
Furthermore, in the DGSEM the polynomial basis functions ($\phi_n$ in equation \eqref{eq:PolExp}) are tensor product reconstructions of Lagrange interpolating polynomials on quadrature points in each of the Cartesian directions of the reference element:
\begin{equation}
\mathbf{q}^N = \sum_n \mathbf{Q}_n \phi_n (\mathbf{x}) = \sum_{i=0}^{N_1} \sum_{j=0}^{N_2} \sum_{k=0}^{N_3} \mathbf{Q}_{i,j,k} l_i (\xi) l_j (\eta) l_k (\zeta).
\end{equation}
Therefore, $\mathbf{Q}_n=\mathbf{Q}_{i,j,k}$ are simply the nodal values of the solution, and $[\mathbf{M}]$ is a diagonal matrix containing the quadrature weights and the mapping terms. In the present work, we make use of the Legendre-Gauss quadrature points \cite{kopriva2009implementing}.\\
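The diagonal structure of $[\mathbf{M}]$ follows from the collocation of the interpolation and quadrature points. The following minimal Python sketch verifies this discrete orthogonality in 1D on the reference element (no mapping terms):
\begin{verbatim}
import numpy as np

N = 4
xi, w = np.polynomial.legendre.leggauss(N + 1)  # Legendre-Gauss nodes/weights

def lagrange(j, x):
    # j-th Lagrange interpolating polynomial on the nodes xi
    l = 1.0
    for m in range(N + 1):
        if m != j:
            l *= (x - xi[m]) / (xi[j] - xi[m])
    return l

# 1D mass matrix M_ij = sum_k w_k l_i(xi_k) l_j(xi_k)
M = np.array([[sum(w[k] * lagrange(i, xi[k]) * lagrange(j, xi[k])
                   for k in range(N + 1))
               for j in range(N + 1)] for i in range(N + 1)])
print(np.allclose(M, np.diag(w)))               # True: M = diag(w) in 1D
\end{verbatim}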
A final remark should be made regarding how the time step is chosen. Since in this paper we make use of explicit time integration schemes, the Courant-Friedrichs-Lewy (CFL) condition dictates a time step limit \cite{karniadakis2013spectral,hesthaven2007nodal}:\\
\begin{equation}
\Delta t = \min(\Delta t^a, \Delta t^{\nu}),
\end{equation}
where the advective time-step restriction is
\begin{equation}
\Delta t^a \le C^a \left( \norm{\pmb{\mathcal{S}}} \frac{N^2}{h} \right)^{-1},
\end{equation}
and the diffusive time-step restriction is
\begin{equation}
\Delta t^{\nu} \le C^{\nu} \left( \mu \frac{N^4}{h^2} \right)^{-1},
\end{equation}
where $C^a$ and $C^{\nu}$ are constants that depend on the time integration method, $\pmb{\mathcal{S}} = \mathbf{v} + c$ is the characteristic velocity (with $\mathbf{v}$ the flow velocity and $c$ the speed of sound), $\mu$ is the fluid viscosity, and $h$ is the local mesh size. The time-step limit is evaluated in every time step on the Gauss points of the domain, taking into account the possibility of having anisotropic polynomial orders. The most restrictive $\Delta t$ is always chosen.
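As an illustration, a minimal Python sketch of this computation at a single Gauss point is given below; the constants $C^a$ and $C^{\nu}$ are set to one for simplicity (in practice they depend on the RK scheme), and the global minimum over all points and directions is taken:
\begin{verbatim}
import numpy as np

def local_time_step(v, c, mu, h, N, Ca=1.0, Cnu=1.0):
    # Advective and diffusive CFL-type restrictions at one Gauss point
    S     = np.abs(v) + c                 # characteristic speed |v| + c
    dt_a  = Ca  / (S  * N**2 / h)         # advective restriction
    dt_nu = Cnu / (mu * N**4 / h**2)      # diffusive restriction
    return min(dt_a, dt_nu)

print(local_time_step(v=1.0, c=1.4, mu=1e-3, h=0.1, N=4))
\end{verbatim}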
\section{Acceleration techniques to converge to steady-state} \label{sec:AccelTech}
In this section, we describe the two methods that will be used in the present work to obtain steady-state solutions of the Navier-Stokes equations; namely, nonlinear multigrid schemes and p-adaptation methods based on truncation error estimators.\\
A common way of obtaining a steady-state solution for an unsteady PDE is to start from an arbitrary initial condition and integrate in time until the system converges to a steady solution. The time-stepping scheme can be either explicit or implicit. Explicit high-order Runge-Kutta methods have been traditionally preferred in DGSEM approximations because they do not require solving large complex nonlinear systems that result from implicit implementations. Furthermore, explicit techniques facilitate the parallelization in multi-core systems \cite{Hindenlang2012}. \\
Note that multigrid methods can be (and have been) adapted to unsteady cases in a straightforward manner \cite{Wang2007,Arnone1995,Birken2012}. On the contrary, p-adaptation methods based on $\tau$-estimators have only been applied to steady-state solutions in the context of high-order methods \cite{Kompenhans2016a,Kompenhans2016}. This issue will be addressed in future studies.\\
\subsection{Nonlinear p-multigrid} \label{sec:Multigrid}
Multigrid methods are techniques used to accelerate the convergence to the solution of large linear or nonlinear problems. They constitute a workaround to the fact that standard iterative solvers tend to reduce the high-frequency content of the error quickly, but fail to reduce the low frequencies efficiently. For this reason, the iterative procedures are commonly referred to as \textit{smoothers} in the multigrid parlance. \\
In h-multigrid, a sequence of progressively coarsening meshes is used where the iterative solver is employed. Every time the mesh is coarsened, some of the smooth components of the error become oscillatory relative to the mesh sampling. Therefore, further coarse-grid smoothing enhances the convergence rate. The p-multigrid scheme relies on the same notion, but low-order polynomial representations are used as coarse levels, which makes it very appropriate for high order methods. p-Multigrid methods typically use a fixed h-mesh.\\
For compactness, we shortly describe the Full Approximation Storage (FAS) nonlinear multigrid algorithm. Further details can be found in \cite{Brandt1984,Mavriplis2002,Nastase2006,Wang2007}. Let us consider the steady-state form ($\partial \mathbf{q} / \partial t = 0$) of our nonlinear problem (equation \eqref{eq:DiscretNS}):
\begin{equation}
[\mathbf{M}]^{-1} \mathbf{F}(\mathbf{Q}) = \mathbf{S},
\end{equation}
and define $\mathbf{A}(\mathbf{Q})=[\mathbf{M}]^{-1}\mathbf{F}(\mathbf{Q})$, to obtain
\begin{equation} \label{eq:NonLinFAS}
\mathbf{A}(\mathbf{Q}) = \mathbf{S},
\end{equation}
and use the temporal discretization as the smoothing technique.\\
We select a third-order Williamson low-storage Runge-Kutta scheme (RK3) \cite{williamson1980low} as the smoothing time-marching scheme, so that the developed multigrid schemes can be compared with the purely explicit RK3 (see section \ref{sec:FlatPlateMG}). After some smoothing sweeps in a mesh with polynomial order $P$, the nonlinear residual equation holds
\begin{equation}\label{eq:nonlinRes}
\mathbf{S}^P-\mathbf{A}^P(\mathbf{\tilde Q}^P) =\mathbf{r}^P,
\end{equation}
where $\mathbf{\tilde Q}^P$ is the approximated solution and $\mathbf{r}^P$ is known as the nonlinear residual. Remember that $P = (P_1,P_2,P_3)$ can be different in each element and direction. Using equation \eqref{eq:NonLinFAS}, equation \eqref{eq:nonlinRes} can be rewritten as
\begin{align}
\mathbf{A}^P(\mathbf{Q}^P)-\mathbf{A}^P(\mathbf{\tilde Q}^P) &=\mathbf{r}^P, \label{eq:FASresidual}\\
\mathbf{A}^P(\mathbf{\tilde Q}^P+\pmb{\epsilon}_{it}^P)-\mathbf{A}^P (\mathbf{\tilde Q}^P) &=\mathbf{r}^P,
\end{align}
where $\pmb{\epsilon}_{it}^P$ is the iteration error on the mesh $P$. The standard two-level p-multigrid FAS scheme consists in transferring equation \eqref{eq:FASresidual} to a lower polynomial representation of order $N = P - \Delta N$ (coarser grid), and using additional smoothing sweeps there. In the lower-order grid, the smoother now targets lower frequencies than the ones removed on the finer grid. Therefore, solving the residual equation on the coarse grid,
\begin{equation}\label{eq:FASCoarseResEq}
\mathbf{A}^N (\mathbf{Q}^N) - \mathbf{A}^N (\mathbf{\tilde Q}_0^N) = \mathbf{r}^N,
\end{equation}
for $\mathbf{Q}^N$, leads to an improved low frequency approximation of the fine grid problem. This holds if $\mathbf{\tilde Q}_0^N$ and $\mathbf{r}^N$ are transferred (interpolated or projected) from the fine grid:
\begin{align}
\mathbf{\tilde Q}_0^N &= \mathbf{I}_P^N \mathbf{\tilde Q}^P \label{eq:FASCoarseSol}\\
\mathbf{r}^{N} &= \mathbf{I}_P^N \mathbf{r}^P. \label{eq:FASCoarseRes}
\end{align}
Here, $\mathbf{I}_P^N$ is the restriction operator, an $L_2$ projection to the lower polynomial order. Note that no distinction is made between the solution and residual transfer operators since in this work both the solution and the residual are spanned in the same polynomial space. This is an advantage of our implementation since less storage is needed. It is also important to remark that the $L_2$ projection preserves the energy of the transferred quantities. This is an important difference with the transfer operators that are commonly employed in modal DG \cite{Nastase2006,Wang2007,Shahbazi2009,Fidkowski2005}, which do not conserve energy since only the low-order coefficients are transferred for coarse-grid smoothing and the correction is then injected to the lower modes of the high order representation (the transfer matrices are simply identity matrices with rows or columns appended).\\
The coarse-grid nonlinear problem holds
\begin{equation} \label{eq:FAScoarseProblem}
\mathbf{A}^N (\mathbf{Q}^N) = \mathbf{S}^N,
\end{equation}
where $\mathbf{S}^N$ is an artificial source term that can be obtained combining equations \eqref{eq:FASCoarseResEq}, \eqref{eq:FASCoarseSol} and \eqref{eq:FASCoarseRes}:
\begin{equation}\label{eq:FAScoarseSource}
\mathbf{S}^N = \mathbf{A}^N(\mathbf{I}_P^N \mathbf{\tilde Q}^{P}) + \mathbf{I}_P^N \mathbf{r}^P,
\end{equation}
which, according to equation \eqref{eq:NonLinFAS}, is the same as
\begin{equation}\label{eq:FAScoarseSource2}
\mathbf{S}^N = [\mathbf{M}]^{-1} \mathbf{F}^N(\mathbf{I}_P^N \mathbf{\tilde Q}^{P}) + \mathbf{I}_P^N \mathbf{r}^P.
\end{equation}
After solving equation \eqref{eq:FAScoarseProblem} for $\mathbf{Q}^N$ using a smoothing procedure, we obtain a low frequency approximation of the iteration error:
\begin{equation}\label{eq:FASIterError}
\pmb{\epsilon}^N_{it} = \mathbf{\tilde Q}^N - \mathbf{\tilde Q}^N_0,
\end{equation}
which is then used to update the solution on the fine grid:
\begin{equation}\label{eq:FASCorrectSol}
\mathbf{Q}^P_{i+1}=\mathbf{Q}^P_{i}+ \mathbf{I}_N^P \pmb{\epsilon}^N_{it}.
\end{equation}
The two-level process described above can be generalized to a multilevel FAS V-Cycle and coded efficiently as a recursive procedure as depicted in algorithm \ref{alg:FASVCycle}. Note that the superindex $c$ now denotes the next coarser multigrid level and that the fine superindexes have been dropped for readability. The multigrid cycle has $N_{MG}$ levels, where $level = 1$ is the coarsest (lowest polynomial order) and $level = N_{MG}$ is the finest (highest polynomial order).
\begin{algorithm}[H]
\caption{FAS: V-Cycle}
\label{alg:FASVCycle}
\begin{algorithmic}
\State \textbf{Recursive Procedure:} FASVCycle( $\mathbf{\tilde Q}$,$\mathbf{r}$,$level$)
\State \textbf{if} $level < N_{MG}$ \textbf{then} $\mathbf{S}=\mathbf{A}(\mathbf{\tilde Q}) + \mathbf{r}$ \Comment{Find coarse-grid source term (eq \eqref{eq:FAScoarseSource})}
\State $\mathbf{\tilde Q}_0 \gets \mathbf{\tilde Q}$ \Comment{Store the solution at entry to this level}
\State $\mathbf{\tilde Q} \gets$ \textit{Smooth}($\mathbf{\tilde Q}$,$\beta_1$) \Comment{Pre-smooth $\beta_1$ times (RK3)}
\If{$level > 1$}
\Comment{If not on the coarsest level, correct the solution using multigrid}
\State $\mathbf{\tilde Q}^c \gets \mathbf{I}_f^c \mathbf{\tilde Q}$ \Comment{Restrict solution to coarse grid (eq \eqref{eq:FASCoarseSol})}
\State $\mathbf{r}^c \gets \mathbf{I}_f^c (\mathbf{S}-\mathbf{A}(\mathbf{\tilde Q}))$ \Comment{Restrict residual to coarse grid (eq \eqref{eq:FASCoarseRes})}
\State CALL \textit{FASVCycle}( $\mathbf{\tilde Q}^c$, $\mathbf{r}^c$,$level-1$) \Comment{Recursive calling}
\State $\mathbf{\tilde Q}=\mathbf{\tilde Q} + \mathbf{I}_c^f \pmb{\epsilon}^c_{it}$ \Comment{Correct solution using coarse-grid approximation (eq \eqref{eq:FASCorrectSol})}
\EndIf
\State $\mathbf{\tilde Q} \gets$ \textit{Smooth}($\mathbf{\tilde Q}$,$\beta_2$) \Comment{Post-smooth $\beta_2$ times (RK3)}
\State \textbf{if} $level < N_{MG}$ \textbf{then} $\pmb{\epsilon}_{it} \gets \mathbf{\tilde Q} - \mathbf{\tilde Q}_0$ \Comment{Compute iteration error (eq \eqref{eq:FASIterError})}
\end{algorithmic}
\end{algorithm}
In this work, we use $\Delta N = P - N = 1$ as the polynomial order reduction in every coarsening. Therefore, the number of multigrid levels corresponds to the maximum polynomial order of the mesh.
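For clarity, algorithm \ref{alg:FASVCycle} can be condensed into the following minimal Python sketch. The callables \texttt{A}, \texttt{restrict}, \texttt{prolong} and \texttt{smooth} are placeholders for the nonlinear DGSEM operator, the $L_2$ transfer operators and the RK3 smoother, and the fixed sweep counts stand in for the residual-based tuning described below:
\begin{verbatim}
def fas_vcycle(Q, S, level, A, restrict, prolong, smooth,
               beta1=5, beta2=5):
    # One recursive FAS V-cycle; S is the physical source term on the
    # finest level and the FAS source term on coarser levels.
    Q = smooth(Q, S, level, beta1)               # pre-smoothing (RK3)
    if level > 1:
        Qc = restrict(Q, level)                  # restrict solution
        rc = restrict(S - A(Q, level), level)    # restrict residual
        Sc = A(Qc, level - 1) + rc               # coarse FAS source term
        Qs = fas_vcycle(Qc, Sc, level - 1, A, restrict,
                        prolong, smooth, beta1, beta2)
        Q  = Q + prolong(Qs - Qc, level - 1)     # coarse-grid correction
    return smooth(Q, S, level, beta2)            # post-smoothing
\end{verbatim}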
\subsubsection{Multigrid cycling strategy} \label{sec:MGcycling}
The typical cycling strategy used for h- and hp-multigrid implementations is to perform repeated V-cycles \cite{Shahbazi2009,Fidkowski2005,Luo2006,Botti2017,Ronquist1987,Nastase2006,Luo2006a,Mitchell2010,Wang2009} (Figure \ref{fig:Vcycle}). Some authors \cite{Shahbazi2009,Mavriplis2002} make use of V or W saw-tooth cycles (without post-smoothing). This technique is well-suited for modal discretizations since the solution correction (equation \eqref{eq:FASIterError}) is injected in the low-order coefficients of the fine-grid representation after coarse-grid smoothing. However, in the nodal discretizations of DGSEM (used in this paper), the coarse-grid smoothed solution can excite high-frequency modes of the fine-grid representation after the interpolation to the fine grid. In consequence, we find that post-smoothing is required.\\
The V-cycling strategy can be very sensitive to the initial condition. To get an appropriate initial condition in the high-order representation, the mainstream alternative is to employ a Full Multigrid (FMG) cycle (see Figure \ref{fig:FMGcycle}) at the beginning of the simulation. Such a cycling strategy can be easily implemented using a recursive algorithm that calls algorithm \ref{alg:FASVCycle} in every ascending level. The number of iterations in every level can be fixed \cite{Shahbazi2009,Nastase2006} or can be tuned using a residual-based approach \cite{Fidkowski2005,Bassi2009}. In this work we use a residual-based approach, where multiple V-cycle repetitions are taken at each level, $p$, until a fixed residual is reached, before raising the approximation level to $p+1$.\\
\begin{figure}[h]
\subfigure[V-cycle.]{\label{fig:Vcycle} \includegraphics[width=0.4\textwidth]{Figs/Vcycle.eps}}\qquad
\subfigure[FMG-cycle for getting an appropriate initial condition.]{\label{fig:FMGcycle} \includegraphics[width=0.4\textwidth]{Figs/FMGcycle.eps}}
\caption{FMG- and V-cycling strategies. Equal signs represent the continuation of the V-cycling process until reaching the desired residual.}\label{fig:MGcycles}
\end{figure}
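The residual-based FMG driver can then be sketched in a few lines of Python; \texttt{vcycle} wraps the V-cycle above with finest level $p$, and \texttt{prolong\_to} interpolates the solution to the next level (both are placeholder callables):
\begin{verbatim}
def fmg_solve(Q, sources, levels, vcycle, prolong_to,
              residual_norm, tol):
    # Converge each level p with repeated V-cycles down to tol[p]
    # before raising the approximation order to p + 1.
    for p in range(1, levels + 1):
        while residual_norm(Q, sources[p], p) > tol[p]:
            Q = vcycle(Q, sources[p], p)
        if p < levels:
            Q = prolong_to(Q, p)
    return Q
\end{verbatim}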
\subsubsection{Designing the smoothing}\label{sec:TuninSmoothing}
In general, the number of pre-smoothing sweeps ($\beta_1$ in algorithm \ref{alg:FASVCycle}) must be high enough to ensure that the high-frequency modes of the error have been smoothed out, so the $L_2$ restriction does not introduce noise into lower multigrid levels. Likewise, the number of post-smoothing sweeps ($\beta_2$) must be high enough to guarantee that mid-frequency modes of the error do not develop into higher-order representations. These mid-frequency modes of the error can be excited by the $L_2$ prolongation of the solution that was smoothed in a lower multigrid level.\\
A common practice is to set a fixed number of pre- and post-smoothing sweeps \cite{Shahbazi2009,Haupt2013,Bassi2009,Nastase2006,Fidkowski2005}. Nonetheless, when very high polynomial orders and anisotropic non-conforming representations are used, some stages of the simulation can be very sensitive to insufficient smoothing (e.g. at the beginning of the simulation or after an adaptation stage). With that in mind, we propose two residual-based strategies for tuning the number of smoothing sweeps, both illustrated in the sketch after the following list:
\begin{enumerate}
\item \textbf{Pre-smoothing:} After every $\beta_1^0$ sweeps (fixed number), the residual in the next (coarser) representation is checked. If $\norm{\mathbf{r}^{P}}_{\infty} < \eta \norm{\mathbf{r}^{N}}_{\infty}$, the pre-smoothing is stopped; otherwise, $\beta_1^0$ additional sweeps are performed. This strategy is a modification of the residual-based approach that some authors employ in FMG cycles for checking whether the coarse-level smoothing is sufficient \cite{Fidkowski2005,Bassi2009}. For the simulations of this paper, $\eta \le 1.1$ was found to work well for meshes with both uniform polynomial orders and p-anisotropic non-conforming representations. For this reason, all the simulations that are shown henceforth employ $\eta=1.1$.
\item \textbf{Post-smoothing:} The norm of the residual after the post-smoothing must be at least as low as it was after the pre-smoothing, $\norm{\mathbf{r}^{N}_{post}}_{\infty} \le \norm{\mathbf{r}^{N}_{pre}}_{\infty}$. This condition is checked every $\beta_2^0$ sweeps and the post-smoothing loop is exited when fulfilled. This way, we guarantee that most of the high-frequency errors that could be excited during coarse smoothing are eliminated.
\end{enumerate}
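A minimal Python sketch of both stopping criteria follows; \texttt{residual} and \texttt{coarse\_residual} are placeholder callables returning the residual on the current level and on the next coarser representation, and the sweep caps are a safeguard of our own:
\begin{verbatim}
import numpy as np

def pre_smooth(Q, S, level, smooth, residual, coarse_residual,
               beta1_0=5, eta=1.1, max_sweeps=500):
    # Smooth in blocks of beta1_0 sweeps until the fine residual is
    # within a factor eta of the next coarser level's residual.
    for _ in range(max_sweeps // beta1_0):
        Q = smooth(Q, S, level, beta1_0)
        if (np.max(np.abs(residual(Q, S, level)))
                < eta * np.max(np.abs(coarse_residual(Q, level)))):
            break
    return Q

def post_smooth(Q, S, level, smooth, residual, r_pre_norm,
                beta2_0=5, max_sweeps=500):
    # Smooth in blocks of beta2_0 sweeps until the residual norm drops
    # back below its value right after pre-smoothing.
    for _ in range(max_sweeps // beta2_0):
        Q = smooth(Q, S, level, beta2_0)
        if np.max(np.abs(residual(Q, S, level))) <= r_pre_norm:
            break
    return Q
\end{verbatim}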
\subsection{p-Adaptation process} \label{sec:AdaptProcess}
Mesh adaptation procedures aim to reduce the number of degrees of freedom of a problem retaining a comparable accuracy. Within those, p-adaptation methods work by increasing the polynomial order of the elements in regions of interest and decreasing it where low order representations are accurate enough. In the present work, we perform anisotropic p-adaptation based on estimations of the truncation error. To do so, we need a methodology for estimating the error of anisotropic polynomial order combinations, which is summarized in next section and detailed in \cite{RuedaRamirez2018}.
\subsubsection{Truncation error estimation} \label{sec:TruncError}
The \textit{non-isolated} truncation error of a discretization of order $N$ ($\tau^N$) is defined as the difference between the discrete partial differential operator ($\mathcal{R}^N$) and the exact partial differential operator ($\mathcal{R}$) applied to the exact solution, $\bar{\mathbf{q}}$:
\begin{equation}\label{eq:TruncError}
\tau^N = \mathcal{R}^N(\mathbf{I}^N \bar{\mathbf{q}})-\mathcal{R}(\bar{\mathbf{q}}),
\end{equation}
where $\mathbf{I}^N$ is a discretizing operator. For steady-state ($\partial \mathbf{q} / \partial t = 0$), the exact partial differential operator can be derived from equation \eqref{eq:NScons} as
\begin{equation}
\mathcal{R}(\bar{\mathbf{q}}) = \mathbf{s} - \nabla \cdot \mathscr{F} = \bar{\mathbf{q}}_t = 0,
\end{equation}
and the discrete partial differential operator can be derived point-wise from equation \eqref{eq:DiscretNS} as
\begin{equation} \label{eq:Rdiscrete}
\pmb{\mathcal{R}}^N (\pmb{I}^N \bar{\mathbf{q}}) = [\mathbf{M}] \mathbf{S} - \mathbf{F}(\pmb{I}^N \bar{\mathbf{q}}),
\end{equation}
where $\pmb{\mathcal{R}}^N$ contains the sampled values of $\mathcal{R}^N$ in all the nodes of the domain and $\pmb{I}^N$ is a sampling operator. The \textit{non-isolated} truncation error can then be simplified to
\begin{equation}\label{eq:TruncErrorSteady}
\pmb{\tau}^N=\pmb{\mathcal{R}}^N(\pmb{I}^N \bar{\mathbf{q}})=[\mathbf{M}] \mathbf{S} - \mathbf{F}(\pmb{I}^N \bar{\mathbf{q}}).
\end{equation}
In addition to the \textit{non-isolated} truncation error, Rubio et al. \cite{Rubio2015} defined the \textit{isolated} truncation error as
\begin{equation}\label{eq:IsolTruncErrorSteady}
\hat{\pmb{\tau}}^N=\hat{\pmb{\mathcal{R}}}^N(\pmb{I}^N\bar{\mathbf{q}})=[\mathbf{M}] \mathbf{S} - \mathbf{\hat F}(\pmb{I}^N\bar{\mathbf{q}}),
\end{equation}
where $\mathcal{\hat R}^N(\cdot)$ is the \textit{isolated} discrete partial differential operator, which is derived without substituting the flux, $\mathscr{F}$, by the numerical flux, $\mathscr{F}^*$, in equation \eqref{eq:weak2}, thus eliminating the influence of neighboring elements and boundaries on the truncation error of each element (see \cite{Rubio2015,RuedaRamirez2018}). Rubio et al. \cite{Rubio2015} showed that the \textit{isolated} truncation error in an element depends on the polynomial order of the element, whereas the \textit{non-isolated} truncation error in the same element depends on the polynomial order of the element \textit{and} its neighbors. In consequence, it was suggested that the \textit{isolated} truncation error may be a better sensor for local p-adaptation than the \textit{non-isolated} truncation error since it is not contaminated by neighbors' errors. Moreover, Rueda-Ramírez et al. \cite{RuedaRamirez2018} showed that the accurate estimation of the \textit{isolated} truncation error imposes fewer conditions, and can therefore be computationally cheaper, than that of its \textit{non-isolated} counterpart. In this work, we retain the isolated truncation error as the driver of the proposed p-adaptation procedure.\\
The aim of this work is the development of a method for solving the Navier-Stokes equations in complex geometries, where the exact solution, $\bar{\mathbf{q}}$, is usually not at hand. Therefore, we utilize the $\tau$-estimation method, which approximates the truncation error using an approximate solution on a high order grid, $P$, instead of the exact one. Furthermore, we are interested in a low cost approximation which suits the multigrid procedure. In consequence, we use the \textit{quasi a-priori} approach without correction \cite{Kompenhans2016}, which makes use of a non-converged solution, $\mathbf{\tilde Q}^P$:
\begin{equation}\label{eq:TruncError!}
\pmb{\tau}_P^N = [\mathbf{M}^N]\mathbf{S}^N-\mathbf{F}^N(\mathbf{I}_P^N \mathbf{\tilde Q}^P).
\end{equation}
Here, the estimate of the \textit{isolated} truncation error can be obtained by simply replacing $\mathbf{F}^N$ by $\hat{\mathbf{F}}^N$ in equation \eqref{eq:TruncError!}. In the rest of this work, the expressions containing the symbol $\tau$ are valid for both the \textit{non-isolated} and the \textit{isolated} truncation error unless the contrary is explicitly stated. Kompenhans et al. \cite{Kompenhans2016} showed that $\mathbf{\tilde Q}^P$ must be converged down to a residual $\tau_{max}/10$ in order for equation \eqref{eq:TruncError!} to yield accurate estimations of $\tau^N$ in regions where $\tau^N > \tau_{max}$. In addition, Rueda-Ramírez et al. \cite{RuedaRamirez2018} showed that for p-anisotropic representations, the truncation error of a polynomial order combination, $N=(N_1,N_2,N_3)$, can be obtained as the sum of individual directional components:
\begin{equation} \label{eq:AnisTruncError}
\tau^{N_1N_2N_3} \approx \tau_1^{N_1N_2N_3} + \tau_2^{N_1N_2N_3} + \tau_3^{N_1N_2N_3} \approx \tau_{P_1P_2P_3}^{N_1P_2P_3} + \tau_{P_1P_2P_3}^{P_1N_2P_3} + \tau_{P_1P_2P_3}^{P_1P_2N_3}.
\end{equation}
Each of the directional components, $\tau_i$, has spectral convergence with respect to the polynomial order in the corresponding direction, $N_i$ \cite{RuedaRamirez2018}. This allows obtaining accurate extrapolations of $\tau$ by extrapolating the values of $\tau_i$ with a \textit{linear-log} regression and summing the individual contributions \cite{RuedaRamirez2018}.
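A minimal Python sketch of this extrapolation for one directional component is given below (the numerical values are synthetic and for illustration only):
\begin{verbatim}
import numpy as np

def extrapolate_tau(N_vals, tau_vals, N_target):
    # Linear-log (spectral) fit: log10(tau_i) = a + b * N_i
    b, a = np.polyfit(N_vals, np.log10(tau_vals), 1)
    return 10.0 ** (a + b * np.asarray(N_target, dtype=float))

N1   = np.array([2, 3, 4])               # estimated orders (inner map)
tau1 = np.array([1e-2, 1e-3, 1e-4])      # directional estimates tau_1
print(extrapolate_tau(N1, tau1, [5, 6])) # ~[1e-5, 1e-6]
# The full map is then recovered as the sum tau_1 + tau_2 + tau_3
# of the extrapolated directional components.
\end{verbatim}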
\subsection{Coupling anisotropic $\tau$-estimation-based adaptation with multigrid}\label{sec:Coupling}
In this section, we present a new technique for obtaining steady-state solutions based on coupling anisotropic p-adaptation methods and multigrid. As first pointed out by Brandt \cite{Brandt1984}, and recently exploited by Syrakos et al. \cite{Syrakos2012} in the context of h-refinement techniques, the concept of truncation error arises naturally in FAS multigrid methods. In fact, the second term of our \textit{non-isolated} truncation error estimator (equation \eqref{eq:TruncError!}) is contained in the coarse-grid source term of the multigrid scheme (equation \eqref{eq:FAScoarseSource2}). Consequently, computing $\tau_P^N$ inside the multigrid cycle only involves a few additional operations. In the case of the \textit{isolated} truncation error, an additional inexpensive step is required to evaluate the operator $\hat{\mathbf{F}}^N$.\\
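In practice, the estimation reduces to evaluating the steady residual of the restricted solution; a minimal sketch (with \texttt{restrict}, \texttt{F}, \texttt{M} and \texttt{S} as placeholder callables for the transfer operator, the discrete operator, the mass matrix and the source term at order $N$) reads:
\begin{verbatim}
def tau_estimate(Q_fine, N, restrict, F, M, S):
    # Quasi a-priori estimate: tau_P^N = M S - F(I_P^N Q_P).
    # For the isolated estimate, replace F by the isolated operator.
    QN = restrict(Q_fine, N)
    return M(N) @ S(N) - F(QN, N)
\end{verbatim}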
Two main differences with previous $\tau$-estimators can be identified. First, instead of interpolating the finest solution directly to every coarser representation, the solution is interpolated level by level in multigrid methods; and second, the smoothing procedure modifies the finest solution before it is transferred to lower orders. In that regard, preliminary tests showed no significant difference between the multigrid $\tau$-estimations and the conventional ones. \\
Since in p-multigrid techniques the coarsening is usually performed in all directions simultaneously, i.e. the polynomial order of every direction is decreased (isotropic multigrid), only certain combinations of low-order polynomial orders are evaluated. This makes it impossible to generate the full anisotropic truncation error map \cite{RuedaRamirez2018} (that is needed for performing anisotropic p-adaptation) with the conventional $\tau$-estimation procedure. For this reason, in section \ref{sec:AnisFAS} we propose a p-anisotropic multigrid procedure and explain how the anisotropic decoupled truncation error estimator by Rueda-Ramírez et al. \cite{RuedaRamirez2018} can be evaluated using such a multigrid scheme for generating the full truncation error map. In section \ref{sec:NewPolorders}, we describe how the new polynomial orders are computed based on the proposed estimations; and finally, we present a multi-stage p-adaptation process in section \ref{sec:MultiStage}. \\
\subsubsection{Anisotropic multigrid} \label{sec:AnisFAS}
The classical approach to implement a p-multigrid method is to perform coarsening in all directions simultaneously. This strategy will be referred to as isotropic multigrid in following sections. In this paper, we propose the use of an anisotropic multigrid method in which the coarsening is done in each direction at a time, in order to estimate the truncation error. For instance, in a 3D p-anisotropic multigrid case, when coarsening in $\xi$, the coarse grid problem is derived from equation \eqref{eq:FAScoarseProblem} as
\begin{equation}
\mathbf{A}^{N_1P_2P_3} (\mathbf{Q}^{N_1P_2P_3}) = \mathbf{S}^{N_1P_2P_3},
\end{equation}
where the source term is obtained from equation \eqref{eq:FAScoarseSource2}:
\begin{equation}
\mathbf{S}^{N_1P_2P_3} = [\mathbf{M}]^{-1} \mathbf{F}^{N_1P_2P_3}(\mathbf{I}_{P_1P_2P_3}^{N_1P_2P_3} \mathbf{\tilde Q}^{P_1P_2P_3}) + \mathbf{I}_{P_1P_2P_3}^{N_1P_2P_3} \mathbf{r}^{P_1P_2P_3}.
\end{equation}
If the anisotropic p-multigrid method is used to solve a p-anisotropic representation, the number of multigrid levels can be different in every direction, $N_{MG,i}$.\\
Note that this method is perfectly suited to generate the truncation error map using the decoupled truncation error estimator proposed by Rueda-Ramírez et al. \cite{RuedaRamirez2018} (equation \eqref{eq:AnisTruncError}). Figure \ref{fig:AnisFAS} depicts the so-called anisotropic 3V FAS cycle. In every V-cycle, the coarsening is performed in one of the coordinate directions of the reference element and the directional component of the truncation error is estimated. Afterwards, the p-adaptation process that is detailed in section \ref{sec:NewPolorders} takes place. It is noteworthy that the reference coordinate frame of an element inside a general 3D mesh is commonly not aligned with its neighbors'. This can pose a problem for the \textit{non-isolated} truncation error estimation, but not for the \textit{isolated} truncation error that neglects the contribution of the neighboring elements (a thorough analysis can be found in \cite{RuedaRamirez2018}). \\
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\textwidth]{Figs/AnisFAS.eps}
\caption{Adaptation process: Anisotropic 3V FAS cycle and subsequent adaptation.}\label{fig:AnisFAS}
\end{center}
\end{figure}
Notice that instead of evaluating every possible combination of $N=(N_1,N_2,N_3)$ for $N_{i}<P_i$ (which can be a large number for 3D cases), as in \cite{Rubio2015,Kompenhans2016}, the full truncation error map is constructed from a completely decoupled approach. One important advantage of doing so is that all the storage needed for the decoupled error estimators is already allocated in the anisotropic multigrid routines and only a few inexpensive additional computations are required. Hence, the multigrid process works indeed as both solver and $\tau$-estimator.\\
\subsubsection{Computing the new polynomial orders} \label{sec:NewPolorders}
Given a truncation error threshold, $\tau_{max}$, that needs to be achieved in a specific case, and a maximum polynomial order allowed, $N_{max}$, the proposed adaptation process can be summarized in six steps:
\begin{enumerate}
\item A high-order representation, $P= (P_1,P_2,P_3)$, is converged down to a residual $\tau_{max}/10$ using the multigrid method described in section \ref{sec:Multigrid}.
\item An anisotropic multigrid procedure (section \ref{sec:AnisFAS}) is used to estimate the decoupled truncation error contribution of every direction. For instance, when coarsening in the direction $(1)$, the contribution is:
\begin{equation}
\pmb{\tau}_{1}^{N_1N_2N_3} \approx \pmb{\tau}_{P_1P_2P_3}^{N_1P_2P_3} = [\mathbf{M}^{N_1P_2P_3}] \mathbf{S}^{N_1P_2P_3}-\mathbf{F}^{N_1P_2P_3}(\mathbf{I}_{P_1P_2P_3}^{N_1P_2P_3}\tilde{\mathbf{Q}}^{P_1P_2P_3}).
\end{equation}
\item The \textit{inner} truncation error map (for $N_i<P_i$) is generated using equation \eqref{eq:AnisTruncError}.
\item If $\tau_{max}$ can be achieved using one of these combinations, it is selected and the simulation continues.
\item If $\tau_{max}$ cannot be achieved in the \textit{inner} truncation error map, an extrapolation procedure based on linear-log regression is conducted in each of the three directions of the decoupled truncation error ($\tau_i$), and then the full truncation error map is generated for $P_i \le N_i \le N_{max,i}$.
\item If $\tau_{max}$ can be achieved using one of these combinations, it is selected. If not, $N_1 = N_2 = N_3 = {N}_{max}$ is selected.
\end{enumerate}
In steps 4 and 6, there can be multiple combinations ($N_1$,$N_2$,$N_3$) that achieve $\tau < \tau_{max}$. In that case, the combination with the lowest number of degrees of freedom is selected. Notice that the two main differences with the method of Kompenhans et al. \cite{Kompenhans2016} are: ($i$) the way in which the truncation error is estimated for $N_i<P_i$ (step 3) and later for $N_i \ge P_i$ (step 5); and ($ii$) that if the truncation error is not achieved, the element is fully enriched in all directions, instead of in only one.
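The selection of the cheapest admissible combination in steps 4 and 6 can be sketched as follows; here \texttt{tau1}, \texttt{tau2} and \texttt{tau3} are dictionaries mapping candidate orders to the directional error components of one element (synthetic values for illustration):
\begin{verbatim}
from itertools import product

def select_orders(tau1, tau2, tau3, tau_max):
    # Pick (N1, N2, N3) with the fewest DOFs such that the mapped
    # truncation error tau1[N1] + tau2[N2] + tau3[N3] <= tau_max.
    best, best_dofs = None, float('inf')
    for N1, N2, N3 in product(tau1, tau2, tau3):
        if tau1[N1] + tau2[N2] + tau3[N3] <= tau_max:
            dofs = (N1 + 1) * (N2 + 1) * (N3 + 1)
            if dofs < best_dofs:
                best, best_dofs = (N1, N2, N3), dofs
    return best      # None if unreachable -> enrich to N_max

t1 = {2: 5e-3, 3: 5e-4, 4: 5e-5}
t2 = {2: 1e-3, 3: 1e-4, 4: 1e-5}
t3 = {2: 2e-4, 3: 2e-5, 4: 2e-6}
print(select_orders(t1, t2, t3, tau_max=1e-3))   # -> (3, 3, 2)
\end{verbatim}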
\subsubsection{Uniform coarsening versus high-order coarsening} \label{sec:HOcoarsening}
We propose two ways of obtaining the polynomial orders of the coarser representations.
Let us now define the dyadic tensor $\mathcal{N}$, which contains the polynomial orders of all the elements in a mesh:
\begin{equation}
\mathcal{N} = (N^1, N^2, \cdots, N^{e}, \cdots, N^K),
\end{equation}
where $K$ is the number of elements of the mesh, $e$ is the element index, and $N^{e}=(N^{e}_1,N^{e}_2,N^{e}_3)$.\\
After the p-adaptation procedure is done, the mesh consists of elements with non-uniform anisotropic polynomial orders. Taking into account that $\Delta N = 1$ (section \ref{sec:Multigrid}), the number of multigrid levels is $N_{MG} = \max(\mathcal{N}) - N_{\textit{coarse}} + 1$ for the isotropic multigrid and $N_{MG,i} = \max(\mathcal{N}_i) - N_{\textit{coarse}} + 1$ for the anisotropic multigrid (the latter is a function of the maximum polynomial order per direction). Let us define two ways of performing the coarsening inside a multigrid cycle:
\begin{itemize}
\item \textit{Uniform coarsening}: The coarsening is performed in all elements simultaneously:
\begin{equation} \label{eq:coarsening_ini}
\left( N^{e}_i \right)_{level} = \left( N^{e}_i \right)_{level+1} - \Delta N,
\end{equation}
except in the elements where the minimum polynomial order allowed has been reached:
\begin{equation}
\mathbf{if} \ \left( \left( N^{e}_i \right)_{level}<N_{coarse} \right) \ \mathbf{then} \ \left( N^{e}_i \right)_{level}=N_{coarse}.
\end{equation}
\item \textit{High-order coarsening}: Since the maximum polynomial order in every multigrid level can be known beforehand:
\begin{equation} \label{eq:highorderCoarse}
\left( N_i \right)_{level} ^{max} = \max(\mathcal{N}_i) - \Delta N (N_{MG,i} - level),
\end{equation}
we can coarsen only the elements that do not fulfill this condition:
\begin{equation} \label{eq:coarsening_fin}
\mathbf{if} \ \left( \left( N_i^{e} \right)_{level+1} > \left( N_i \right)_{level}^{max} \right) \ \mathbf{then} \ \left( N_i^{e} \right)_{level} = \left( N_i \right)_{level}^{max}.
\end{equation}
In this way only the high-order elements are coarsened. In this paper, we use $N_{\textit{coarse}}= \Delta N = 1$. Therefore, equation \eqref{eq:highorderCoarse} reduces to
\begin{equation}
\left( N_i \right)_{level}^{max} = level.
\end{equation}
\end{itemize}
Notice that equations \eqref{eq:coarsening_ini} to \eqref{eq:coarsening_fin} are valid for isotropic and anisotropic multigrid procedures. In the former, $N_{MG,i}$ must be simply replaced by $N_{MG}$, $\max(\mathcal{N}_i)$ by $\max(\mathcal{N})$, and the operations are performed in all directions. In the latter, the operations are only performed in the direction in which the coarsening is done. Furthermore, both coarsening methods are equivalent for isotropic polynomial representations.\\
One could argue that the \textit{uniform coarsening} involves less computational cost than the \textit{high-order coarsening} since coarse representations have fewer degrees of freedom. Nevertheless, the latter has two main advantages:
\begin{itemize}
\item Several preliminary tests showed that \textit{uniform coarsening} could be unstable for highly anisotropic meshes in 2D and 3D.
\item In 3D meshes (that are not 2D extrusions), p-nonconforming representations require the mapping order to be $M \le N/2$ (see section \ref{sec:DGSEM}). This means that the minimum polynomial order of the mesh must be $\min(\mathcal{N}) \ge 2$. If the \textit{uniform coarsening} is used, the mapping restriction forces the coarsest multigrid level to have a polynomial order $N_{\textit{coarse}} \ge 2$. However, if the \textit{high-order coarsening} is used, the coarsest polynomial order can be as low as $N_{\textit{coarse}} \ge 1$ since the two coarsest levels are always p-conforming. The additional coarse multigrid level helps to eliminate the low frequency components of the error.
\end{itemize}
For these reasons, in this paper we use only \textit{high-order coarsening}.
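The two coarsening maps can be summarized in the following minimal Python sketch, written for isotropic per-element orders and $N_{\textit{coarse}}=\Delta N = 1$; in the anisotropic case the same maps are applied per direction.
\begin{verbatim}
def uniform_coarsening(N_elem, level, N_MG, dN=1, N_coarse=1):
    # Uniform coarsening: every element drops dN per level below the
    # finest (level = N_MG), clipped at N_coarse.
    drop = dN * (N_MG - level)
    return [max(n - drop, N_coarse) for n in N_elem]

def high_order_coarsening(N_elem, level, dN=1, N_coarse=1):
    # High-order coarsening: only elements above the level's admissible
    # maximum are coarsened; with N_coarse = dN = 1 that maximum is
    # simply the level index.
    return [min(n, level) for n in N_elem]

# Example with orders (1, 3, 5), so N_MG = 5.  At level 3 the two
# strategies differ: uniform coarsening touches every element, while
# high-order coarsening only caps the high-order one.
print(uniform_coarsening([1, 3, 5], level=3, N_MG=5))   # [1, 1, 3]
print(high_order_coarsening([1, 3, 5], level=3))        # [1, 3, 3]
\end{verbatim}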
\subsubsection{Multi-stage adaptation process} \label{sec:MultiStage}
The proposed \textit{multi-stage} adaptation strategy takes advantage of an FMG-cycle and performs multiple adaptation processes at different polynomial orders ($\mathcal{P}_i$), as depicted in Figure \ref{fig:FMGAdapt}. In an adaptation stage at level $\mathcal{P}_i$ (red circular markers), a $\tau$-estimation procedure is performed using a $3V$ anisotropic multigrid cycle (Figure \ref{fig:AnisFAS}), and subsequently, the polynomial orders are adjusted accordingly, but never to a polynomial order that is higher than $\mathcal{P}_{i+1}$. In such a case, $\mathcal{P}_{i+1}$ is selected. This differs from the previous adaptation strategies based on $\tau$-estimation in that, traditionally, the whole domain had to be solved on a considerably high-order mesh before performing the single-stage adaptation process. The main advantage of using this methodology is that the zones of the domain that only require a low-order representation are identified early in the simulation and are not enriched. This reduces the overall computational costs.\\
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\textwidth]{Figs/FMGAdapt.eps}
\caption{Proposed FMG cycle with multiple adaptation stages. Equal signs represent the continuation of the V-cycling process until reaching the desired residual (see section \ref{sec:MGcycling}).}\label{fig:FMGAdapt}
\end{center}
\end{figure}
After a truncation error estimation at the level $\mathcal{P}_i$, the algorithm checks if the maximum \textit{required} polynomial order is lower than or equal to the polynomial order of the next stage, $\mathcal{P}_{i+1}$. If so, the performed adaptation step is marked as the last one and the simulation continues without any further adaptation processes.
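The cap-and-early-exit logic of a single adaptation stage can be sketched as follows; the nested-list representation of per-element directional orders is an illustrative assumption.
\begin{verbatim}
def cap_stage_orders(N_required, next_stage_order):
    # Cap the per-element orders demanded by the tau-estimate at the
    # next stage's order, and report whether this adaptation step can
    # be marked as the last one (the early-exit test).
    capped = [[min(n, next_stage_order) for n in elem]
              for elem in N_required]
    last_stage = max(max(elem) for elem in N_required) <= next_stage_order
    return capped, last_stage

# Example: one element demands N = 9 > P_{i+1} = 8, so its orders are
# capped and a further adaptation stage is still required.
capped, done = cap_stage_orders([[3, 4, 2], [9, 6, 5]], next_stage_order=8)
assert capped == [[3, 4, 2], [8, 6, 5]] and done is False
\end{verbatim}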
\section{Numerical results} \label{sec:Results}
In this section, we test the accuracy and performance of the proposed p-adaptation algorithms in 2D (section \ref{sec:FlatPlate}) and 3D (section \ref{sec:Sphere}) cases. For the reasons given in section \ref{sec:TruncError}, we will use the \textit{isolated} truncation error for the p-adaptation algorithms. All the results presented in this section were obtained using an 8-core 2.6 GHz Intel Xeon E5-2670 and 32 GB RAM, and shared memory parallelization (OpenMP) for computing the spatial terms, as explained in \cite{Hindenlang2012}. Note that our parallel implementation has not been optimized for a specific cache architecture and further speed-ups may be achieved.\\
\subsection{2D Flow over a flat plate} \label{sec:FlatPlate}
For this boundary layer test case, the mesh is constructed using 458 quadrilateral elements, and the simulations are computed with a Reynolds number of $\rm{Re}_{\infty}=6000$ (based on the reference length $L=12$) and a Mach number of M$_{\infty}=0.2$. Figure \ref{fig:BLcontour} shows the mesh and the distribution of the momentum in the $x$ direction, $\rho u$.\\
\begin{figure}[h]
\begin{center}
\includegraphics[width=\textwidth]{Figs/FlatPlate/BLcontour.eps}
\caption{Flat plate test case.}\label{fig:BLcontour}
\end{center}
\end{figure}
On the boundary at $x=0$, a uniform inflow boundary condition was imposed. On the boundary $y=0$, $x<10$, a free-slip boundary condition was prescribed, whereas for $y=0$, $x \ge 10$, a no-slip adiabatic wall boundary condition emulates the effect of the flat plate. On the remaining boundaries we use a subsonic outflow boundary condition where only the far-field pressure is specified.\\
\subsubsection{Multigrid method}\label{sec:FlatPlateMG}
First, a uniform mesh of order $N_1=N_2=10$ was simulated using different solution procedures. Figure \ref{fig:FASSpeedUp} shows the infinity norm of the residual (equation \eqref{eq:nonlinRes}) as a function of the iterations and of the simulation time for the classic third-order Runge--Kutta scheme (RK3, in blue), an isotropic p-multigrid FAS procedure (in red), and an anisotropic p-multigrid FAS procedure (in black), the latter two using RK3 as smoother. All the results were obtained using $\beta_1^0=100$ pre-smoothing sweeps, $\beta_2^0=100$ post-smoothing sweeps, 400 smoothing sweeps on the coarsest multigrid level (a common strategy for obtaining a good low-frequency representation), the smoothing tuning explained in section \ref{sec:TuninSmoothing}, and an FMG cycling strategy for obtaining an appropriate initial condition. As stated in section \ref{sec:MGcycling}, a residual-based strategy is used to control when the polynomial order is increased in the FMG cycle. A fixed residual of $10^{-1}$ must be reached before the polynomial order is raised to the next FMG level. This threshold was selected because it showed good performance in preliminary tests.\\
\begin{figure}[h]
\begin{center}
\subfigure[Residual norm vs. iterations.]{\label{fig:PerformanceFASiter} \includegraphics[width=0.45\textwidth]{Figs/FlatPlate/PerformanceFASiter.eps}} \qquad
\subfigure[Residual norm vs. CPU-time.]{\label{fig:PerformanceFAS} \includegraphics[width=0.45\textwidth]{Figs/FlatPlate/PerformanceFAS.eps}}
\caption{Comparison of performance of a classic RK3 method and a p-multigrid FAS (with a RK3 smoother) method for solving a subsonic boundary layer test case.}\label{fig:FASSpeedUp}
\end{center}
\end{figure}
It can be seen that the convergence rate of the multigrid strategies is much higher than that of the fully explicit RK3 time integration, both in number of iterations and in CPU-time, even though the multigrid methods use the same RK3 as smoother. Additionally, the isotropic and anisotropic multigrid methods have a similar convergence rate with respect to the number of iterations, with the latter being slightly better. However, when comparing the simulation times, the isotropic FAS multigrid procedure is notably more efficient than the anisotropic one. The reason is that in the isotropic multigrid the lower multigrid levels have fewer degrees of freedom than in its anisotropic counterpart because the coarsening is done in all directions. For this reason, in the following sections the anisotropic FAS will only be used as the $\hat \tau$-estimator (although during the estimation it is also used as a smoother) and the isotropic FAS will mainly be used as the solver.
\subsubsection{Single-stage adaptation}
In this section, we study the computational cost involved in solving the boundary layer test case for different accuracy levels. To do that, we compare the results obtained using uniform adaptation with the ones obtained using the single-stage adaptation algorithm of section \ref{sec:NewPolorders}. In every case, the two main solvers considered in this paper were analyzed (the RK3 and the FAS solver). The single-stage adaptation process was performed for $N_{max}=10$ and $N_{max}=5$, where a reference mesh of $P_1=P_2=4$ was used. Notice that the use of such a coarse mesh as reference mesh is now possible because of the extrapolation capabilities of the new estimation algorithm \cite{RuedaRamirez2018}. After adapting the mesh, the polynomial order jump across faces is limited to
\begin{equation} \label{eq:poljump2D}
|N^{+}_i -N^{-}_i | \le 1,
\end{equation}
where the symbols $+$ and $-$ indicate the polynomial order in the direction $i$ of an element and its neighbor, respectively (the relative rotation between neighboring elements is taken into account). This condition provides robustness to the adapted mesh and is comparable to the \textit{two-to-one rule} that is usually employed in h-adaptation methods \cite{Liszka1997,Burgess2011}. Since the anisotropic truncation error estimator (equation \eqref{eq:AnisTruncError}) has been shown to generate more accurate extrapolations of the truncation error map than conventional $\hat \tau$-estimators \cite{RuedaRamirez2018}, the single-stage p-adaptation method (used in all cases) employs a 3V anisotropic V-cycle for estimating the \textit{isolated} truncation error, even when the time-marching solver is RK3.\\
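Enforcing equation \eqref{eq:poljump2D} amounts to an iterative smoothing of the order distribution. The following minimal Python sketch illustrates the idea in one dimension; the choice of always raising the lower neighbor (so that adapted elements are never coarsened) is an assumption consistent with the robustness goal stated above.
\begin{verbatim}
def limit_order_jumps(N, max_jump=1):
    # Raise polynomial orders until neighboring elements differ by at
    # most max_jump (1D sketch; in practice the rule is applied per
    # direction across every face, accounting for relative rotations).
    N = list(N)
    changed = True
    while changed:
        changed = False
        for e in range(len(N) - 1):
            if abs(N[e] - N[e + 1]) > max_jump:
                if N[e] < N[e + 1]:
                    N[e] = N[e + 1] - max_jump
                else:
                    N[e + 1] = N[e] - max_jump
                changed = True
    return N

print(limit_order_jumps([1, 1, 6, 1]))   # -> [4, 5, 6, 5]
\end{verbatim}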
A higher-order solution of order $N_1=N_2=15$ was used to estimate the relative error in the drag coefficient:
\begin{equation}
e_{\textit{drag}}^{N=15}= \frac{|C_d-C_d^{N=15}|}{C_d^{N=15}},
\end{equation}
where $C_d^{N=15}=0.211$, a value that is comparable to results in the literature \cite{Lynn1995} for a flat plate at Re$_{\infty}=6000$. \\
The results obtained with the different methods are illustrated in Figure \ref{fig:DragError}. As shown in Figure \ref{fig:DragErrorDOFs}, the p-adapted meshes require far fewer degrees of freedom to achieve a specific error than the uniformly adapted meshes. Note that the minimum relative error, which is achieved for low values of $\hat \tau_{max}$, tends to the relative error that corresponds to a mesh with uniform $N_{\textit{max}}$, in the same way as the minimum $\norm{\hat \tau}_{\infty}$ is a function of $N_{max}$ (see \cite{RuedaRamirez2018}). After reaching this plateau, no further improvement in the functional error is expected. This plateau is not necessarily attained with all elements at $N_{max}$, as can be seen in Figure \ref{fig:AdaptedNxNy}. \\
\begin{figure}[htbp]
\begin{center}
\subfigure[Drag error vs. number of DOFs.]{\label{fig:DragErrorDOFs} \includegraphics[width=0.46\textwidth]{Figs/FlatPlate/DragErrorDOFs.eps}}\qquad
\subfigure[Drag error vs. CPU-Time.]{\label{fig:DragErrorTime} \includegraphics[width=0.46\textwidth]{Figs/FlatPlate/DragErrorTime.eps}}
\caption{Relative error in the drag coefficient calculation for different methods. The reference drag $C_d^{N=15}$ was calculated on a uniformly refined mesh with $N = 15$. The blue lines represent uniform refinement, the red lines represent the $\hat \tau$-based p-adaptation procedure with $N_{max}=10$, and the black lines with $N_{max}=5$. Overlapping curves in (a).} \label{fig:DragError}
\end{center}
\end{figure}
As can be seen in Figure \ref{fig:DragErrorTime}, the p-adaptation procedures are especially efficient when high accuracy is needed. Using a low $N_{max}$ can lead to faster simulations, but the stagnation point is reached sooner. It can also be observed that the most efficient procedure is the one that uses both FAS multigrid and p-adaptation. In fact, for the analyzed test case, this method achieves better accuracy after two hours of simulation than the classical approach (uniform order + RK3) after several days of computations. Table \ref{tab:SpeedUpOneStage} shows the CPU-time comparison of different solution procedures for reaching a drag error of at most $1.8 \times 10^{-4}$. The speed-up is as high as 815.76.\\
\begin{table}[h!]
\caption{Computation times and speed-up of the different methods for achieving a relative drag error of at most $1.8 \times 10^{-4}$ after converging until $\norm{\mathbf{r}}_{\infty} < 10^{-9}$.}
\begin{center}
\begin{tabular}{l|rrr}
Method & CPU-time[s] & Time [\%] & Speed-up \\ \hline \hline
RK3 & $4.78\times 10^{6}$ & $100.00\%$ & $1.00$ \\
RK3 + p-adaptation & $1.02\times 10^{5}$ & $2.14\%$ & $46.72$ \\
FAS & $6.69\times 10^{4}$ & $1.40\%$ & $71.51$ \\
FAS + p-adaptation & $5.86\times 10^{3}$ & $0.12\%$ &$ 815.76$ \\
\end{tabular}
\label{tab:SpeedUpOneStage}
\end{center}
\end{table}
Figure \ref{fig:AdaptedNxNy} shows the final polynomial orders as computed by the proposed method for $\hat \tau_{max}=10^{-3}$ (equivalent to a drag error of $e_{\textit{drag}}^{N=15}=1.49 \times 10^{-4}$). It can be seen that intensive polynomial enrichment is performed on the leading edge of the flat plate around the singularity and on the regions where the boundary layer grows, as expected. Further polynomial enrichment can be observed in regions where the mesh size changes.\\
\begin{figure}[h!]
\begin{center}
\subfigure[Average polynomial order ($N_{av}$).]{\label{fig:AdaptedNav} \includegraphics[width=0.8\textwidth]{Figs/FlatPlate/AdaptedNav.eps}}\qquad
\subfigure[Detail of the Gauss-points.]{\label{fig:AdaptedNy} \includegraphics[width=0.7\textwidth]{Figs/FlatPlate/AdaptedDetail.eps}}
\caption{Contour indicating the final average polynomial orders after the adaptation procedure (a) and a detail of the Gauss-Points that shows the anisotropic nature of the p-adaptation method (b) for a threshold of $\hat \tau_{max}=10^{-3}$, which produces a relative drag error of $ e_{\textit{drag}}^{N=15} =1.49 \times 10^{-4}$. White boxes represent $N_1=N_2=1$. $N_{av}=(N_1+N_2)/2$.
} \label{fig:AdaptedNxNy}
\end{center}
\end{figure}
\subsubsection{Multi-stage adaptation} \label{sec:MultiStageRes2D}
In section \ref{sec:MultiStage}, we proposed a multi-stage p-adaptation procedure based on a full multigrid scheme with increasing polynomial orders and explored some of its theoretical advantages. Now, we apply this scheme to the boundary layer test case and analyze when it may be advantageous.\\
Moreover, in the previous section we showed how the accuracy of the solution can be increased by choosing a higher $N_{max}$. Nonetheless, a higher $N_{max}$ implies a larger truncation error map. The calculations needed for generating the larger map are not computationally intensive \cite{RuedaRamirez2018}. However, as we increase the area of the map where the values have to be extrapolated, the uncertainty of the estimations also increases.\\
Figure \ref{fig:DOFsVsTauMax} shows the number of degrees of freedom of the mesh after a single-stage adaptation procedure ($9\times 10^{-4} \le \hat \tau_{max} < 10^{-1}$ and $N_{max}=20$) for reference meshes of different order. As can be seen, the number of DOFs increases drastically when the specified truncation error is reduced below a certain value. This behavior occurs sooner for low-order reference meshes, where some elements are over-enriched to $N_{max}$. In fact, the polynomial order of the reference mesh is related to the maximum polynomial order to which it can accurately extrapolate the truncation error. This relation is highly dependent on the PDE being approximated and on the h-size of the mesh. Preliminary tests showed that, for the cases presented in this paper, a reference mesh of order $P$ can extrapolate accurately up to order $2P$.\\
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{Figs/FlatPlate/DOFsTauMax.eps}
\caption{Number of degrees of freedom obtained after adapting the mesh with different thresholds ($\hat \tau_{max}$) and different reference meshes ($P$) for $N_{max}=20$.}\label{fig:DOFsVsTauMax}
\end{center}
\end{figure}
For high values of $N_{max}$, a multi-stage p-adaptation procedure becomes very useful. As was explained in section \ref{sec:MultiStage}, instead of starting with a high-order reference mesh (which can be very expensive), a coarse reference mesh of order $P=\mathcal{P}_1$ is chosen to estimate the truncation error. With the estimation, the regions where a low-order approximation is enough are identified. Afterwards, the p-adaptation algorithm sets the polynomial orders of the mesh according to the $\hat \tau$-estimation and limits the over-enrichment in more complex flow regions to $\mathcal{P}_2$. In the second adaptation process at $P=\mathcal{P}_2$, and in subsequent adaptation stages, the polynomial orders of the mesh are corrected with a more accurate error estimation at hand. \\
In order to illustrate how this method can reduce the computational cost of highly accurate simulations, we present a comparison of the convergence of the single-stage and the multi-stage p-adaptation procedures for $\hat \tau_{max}=4 \times 10^{-3}$, and $N_{max}=20$ (Figure \ref{fig:PerfNmax20}) and $N_{max}=30$ (Figure \ref{fig:PerfNmax30}). The reference meshes of the multi-stage algorithm were selected at $\mathcal{P}_1=4$, $\mathcal{P}_2=8$ and $\mathcal{P}_3=16$. The measured speed-up is 1.69 for $N_{max}=20$ and 1.72 for $N_{max}=30$ with respect to the single-stage adaptation.
\begin{figure}[htbp]
\begin{center}
\subfigure[$N_{max}=20$.]{\label{fig:PerfNmax20} \includegraphics[width=0.46\textwidth]{Figs/FlatPlate/PerformanceNmax20.eps}}\qquad
\subfigure[$N_{max}=30$.]{\label{fig:PerfNmax30} \includegraphics[width=0.46\textwidth]{Figs/FlatPlate/PerformanceNmax30.eps}}
\caption{Comparison of a single-stage and a multi-stage adaptation process for solving the boundary layer test case with a truncation error threshold of $\hat \tau_{max}=4 \times 10 ^{-3}$: $N_{max}=20$ (a), and $N_{max}=30$ (b).} \label{fig:OneStageVsMultiStage}
\end{center}
\end{figure}
\subsection{3D Flow around a Sphere} \label{sec:Sphere}
For this test case, the mesh is constructed with 1904 hexahedral elements, and the simulations are computed with a Reynolds number of Re$_{\infty}=200$ and a Mach number of M$_{\infty}=0.2$. The curvilinear hexahedral mesh has a mapping order $M=3$ and was created using the HOPR package \cite{hindenlang2015mesh}. Figure \ref{fig:Spherecontour} shows the mesh and the distribution of the conserved variable $\rho u$ around the sphere.\\
In order to assess the properties of the representations obtained after performing $\hat \tau$-based adaptation, we use a relative drag error that is computed against a high-order solution of order $N=12$ in the same mesh:
\begin{equation}
e_{\textit{drag}}^{N=12}= \frac{|C_d-C_d^{N=12}|}{C_d^{N=12}}.
\end{equation}
Table \ref{tab:DragSphere} shows a comparison between the reference drag coefficient obtained in this work and in other studies.
\begin{table}[h!]
\caption{Drag coefficient for sphere at Re$_{\infty}=200$.}
\begin{center}
\begin{tabular}{lr}
Author & Value \\ \hline\hline
Campregher et al. \cite{Campregher2009} & 0.815 \\
Fornberg \cite{Fornberg1988} & 0.7683 \\
Fadlun et al. \cite{Fadlun2000} & 0.7567 \\
This work & 0.7771 \\
\end{tabular}
\label{tab:DragSphere}
\end{center}
\end{table}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=.8\textwidth]{Figs/Sphere/Sphere.eps}
\caption{Sphere at Re$_{\infty}=200$.}\label{fig:Spherecontour}
\end{center}
\end{figure}
\subsubsection{3D Considerations}
Since in the general 3D p-nonconforming DGSEM the mapping order in every direction is limited by the solution order as $M_i \le N_i/2$ (as indicated in section \ref{sec:DGSEM}), and considering that the p-adapted meshes are in general p-nonconforming, the minimum polynomial order after p-adaptation is set to $N_{min}=2$. Additionally, taking into account the observations made in section \ref{sec:HOcoarsening}, we use high-order coarsening and $N_{\textit{coarse}}=1$ for the p-multigrid method before and after p-adaptation. \\
Moreover, in order to represent the curved boundary of the sphere as exactly as possible, after the p-adaptation, a conforming algorithm changes the polynomial orders of all elements on that surface, so that there is no polynomial order jump across their faces. This allows using a mapping of order $M_i \le N_i$ there. \\
Finally, let us remark that in 3D, the condition of equation \eqref{eq:poljump2D} (polynomial order jump across faces of at most 1) can cause a steep increase in the number of degrees of freedom because the polynomial enrichment propagates in three directions. Therefore, for this test case the polynomial order jump across faces after p-adaptation is relaxed to
\begin{equation}
N^{+}_i \ge \bigg\lfloor \frac{2}{3} N^{-}_i \bigg\rfloor,
\end{equation}
where $\lfloor \cdot \rfloor$ is the floor function.\\
This condition proved to provide sufficient robustness to the p-adapted representations while lowering the number of degrees of freedom of the adapted meshes. The conforming algorithm that is used on the sphere boundary and the algorithm that controls the polynomial order jump everywhere must be executed iteratively, until no further changes are needed, to ensure that the final mesh has all the desired properties.
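A minimal Python sketch of the relaxed face test reads as follows; in practice it is applied per direction and per face, interleaved with the conforming pass described above until a fixed point is reached.
\begin{verbatim}
import math

def relaxed_jump_ok(N_a, N_b):
    # 3D rule: neither element may fall below floor(2/3) of its
    # neighbor's order in the same direction.
    return (N_a >= math.floor(2 * N_b / 3)
            and N_b >= math.floor(2 * N_a / 3))

# (4, 6) violates the strict 2D rule |N+ - N-| <= 1 but passes the
# relaxed 3D rule, since 4 >= floor(12/3) = 4; (3, 6) does not.
assert relaxed_jump_ok(4, 6) and not relaxed_jump_ok(3, 6)
\end{verbatim}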
\subsubsection{Single-stage adaptation}
The single-stage adaptation process is performed for $N_{max}=7$, where a reference mesh of order $P_1=P_2=P_3=5$ is used. Different values of the specified truncation error threshold were tested in the range $10^{-1} \le \hat \tau_{max} \le 10^{-4}$.\\
The isotropic and conforming reference mesh is iterated down to a residual of $\norm{\mathbf{r}}_{\infty} \le \hat \tau_{max}/10$ using a p-multigrid algorithm with $\beta_1^0=\beta_2^0=100$ pre- and post-smoothing sweeps, and 400 smoothing sweeps on the coarsest multigrid level. After the p-adaptation, the pre- and post-smoothing sweeps are $\beta_1^0=\beta_2^0=50$, and the number of smoothing sweeps on the coarsest multigrid level is 200. This combination exhibited the best performance. The smoothing tuning detailed in section \ref{sec:TuninSmoothing} is used and an FMG cycling strategy is employed for obtaining an appropriate initial condition with a residual of $\norm{\mathbf{r}}_{\infty} \le 1.0$. \\
The relative drag error and the absolute lift of the adapted meshes are assessed. Figure \ref{fig:SphereErrors} shows a comparison between the errors obtained using the $\hat \tau$-based adaptation procedure and the ones using uniform p-refinement. As can be observed in Figures \ref{fig:SphereDragDOFs} and \ref{fig:SphereLiftDOFs}, the number of degrees of freedom is greatly reduced for the same accuracy when using the $\hat \tau$-based p-adaptation. The maximum error in both coefficients is comparable to the one obtained with a uniform mesh of $N_1=N_2=N_3=N_{min}=2$, as expected. Similarly, the minimum error tends to stagnate at a value that is comparable to the one obtained for $N_1=N_2=N_3=N_{max}=7$. \\
It is interesting to notice that, for high truncation error thresholds, the $\hat \tau$-based adaptation does not provide an advantage in CPU-time (Figures \ref{fig:SphereDragTime} and \ref{fig:SphereLiftTime}). This is due to the cost of obtaining a semi-converged solution on the reference mesh of $P=5$, for cases where the final polynomial order post-adaptation is $N<5$. Additionally, let us remark that the rate of convergence in CPU-time deteriorates after the p-adaptation. This is because the p-anisotropic nonconforming representations are \textit{more difficult} to solve. Further investigation of multigrid, or other solution methods, could improve the speed-ups observed here.
\begin{figure}[htbp]
\begin{center}
\subfigure[Drag error vs. number of DOFs.]{\label{fig:SphereDragDOFs} \includegraphics[width=0.45\textwidth]{Figs/Sphere/DragErrorDOFs.eps}}\qquad
\subfigure[Absolute lift vs. number of DOFs.]{\label{fig:SphereLiftDOFs} \includegraphics[width=0.45\textwidth]{Figs/Sphere/LiftErrorDOFs.eps}}
\subfigure[Drag error vs. CPU-Time.]{\label{fig:SphereDragTime} \includegraphics[width=0.45\textwidth]{Figs/Sphere/DragErrorTime.eps}}
\subfigure[Absolute lift vs. CPU-Time.]{\label{fig:SphereLiftTime} \includegraphics[width=0.45\textwidth]{Figs/Sphere/LiftErrorTime.eps}}
\caption{Relative error in the drag and lift coefficients for different methods on the sphere. The blue lines represent uniform refinement, and the red lines represent the $\hat \tau$-based p-adaptation procedure with $N_{max}=7$.} \label{fig:SphereErrors}
\end{center}
\end{figure}
Using the data provided by Figures \ref{fig:SphereDragTime} and \ref{fig:SphereLiftTime}, it is possible to compute the speed-up as a function of the drag or lift errors. Table \ref{tab:SpeedUpSphere} shows the computation times and speed-ups achieved for the lowest error obtained in lift and drag ($e_{\textit{drag}}^{N=12}$ and $|C_l|$). The maximum speed-up is 151.94 for this challenging 3D case.\\
Figure \ref{fig:AdaptedSphere} illustrates the polynomial order distribution after p-adaptation for $\hat \tau_{max}=4\times 10^{-4}$, which corresponds to a drag error of $e_{\textit{drag}}=8.08\times 10^{-4}$ and an absolute lift of $|C_l|=1.33\times 10^{-4}$. It can be seen that intensive polynomial enrichment is performed on the recirculation bubble, the wake, and on the boundary layer, as expected. Further polynomial enrichment can be observed in regions where the mesh size changes drastically. In particular, we observe that the polynomial enrichment is higher on the wake than on the recirculation bubble because of the element sizes of the available mesh. \\
\begin{table}[h!]
\caption{Computation times and speed-up for the different methods after converging until $\norm{\mathbf{r}}_{\infty} < 10^{-9}$}
\begin{center}
\begin{tabular}{l|rrr|rrr|}
& \multicolumn{3}{c|}{Drag coefficient ($e_{\textit{drag}} \le 5.31 \times 10^{-4}$)} & \multicolumn{3}{c|}{Lift coefficient ($|C_l| \le 3.34 \times 10^{-4}$)} \\
Method & CPU-time[s] & Time [\%] & Speed-up & CPU-time[s] & Time [\%] & Speed-up \\ \hline \hline
RK3 & $7.46\times 10^{6}$ & $100.00\%$ & $1.00$
& $7.46\times 10^{6}$ & $100.00\%$ & $1.00$ \\
FAS & $2.72\times 10^{5}$ & $3.65\%$ & $27.41$
& $2.72\times 10^{5}$ & $3.65\%$ & $27.41$ \\
FAS + p-adaptation & $4.91\times 10^{4}$ & $0.68\%$ & $151.94$
& $5.80\times 10^{4}$ & $1.06\%$ & $128.55$ \\
\end{tabular}
\label{tab:SpeedUpSphere}
\end{center}
\end{table}
\begin{figure}[h!]
\begin{center}
\subfigure[Average polynomial order ($N_{av}$).]{\label{fig:SphereNav} \includegraphics[width=0.7\textwidth]{Figs/Sphere/SphereNav.eps}}\qquad
\subfigure[Detail of the Gauss-points.]{\label{fig:SphereDetail} \includegraphics[width=0.7\textwidth]{Figs/Sphere/SphereDetail.eps}}
\caption{Contours indicating the final polynomial orders after p-adaptation for the sphere test case: Average polynomial orders ($N_{av}$) (a) and a detail of the Gauss-Points that shows the anisotropic nature of the p-adaptation method (b) for a threshold of $\hat \tau_{max}=4 \times 10^{-4}$. White boxes represent $N_1=N_2=N_3=2$. $N_{av}=(N_1+N_2+N_3)/3$.
} \label{fig:AdaptedSphere}
\end{center}
\end{figure}
\subsubsection{Multi-stage adaptation}
The multi-stage adaptation procedure introduced in section \ref{sec:MultiStage} becomes useful when the maximum allowable polynomial order after adaptation ($N_{\textit{max}}$) is increased and the specified \textit{isolated} truncation threshold ($\hat \tau_{\textit{max}}$) is low. In this section, we use the multi-stage p-adaptation procedure on the sphere test case and set the maximum polynomial order after adaptation to $N_{\textit{max}}=11$, the truncation error threshold to $\hat \tau_{\textit{max}}=10^{-4}$, and the adaptation stages at $\mathcal{P}_1=4$ and $\mathcal{P}_2=8$. Figure \ref{fig:SphereMultiStage} shows a comparison of performance (in CPU-Time) between the multi-stage p-adaptation procedure and two single-stage procedures with $P=4$ and $P=5$. The maximum polynomial order after adaptation in the single stage cases is also $N_{\textit{max}}=11$.\\
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{Figs/Sphere/MultiStagePerformance.eps}
\caption{Comparison of single-stage ($P=4$ and $P=5$) and multi-stage adaptation ($\mathcal{P}_1=4$, $\mathcal{P}_2=8$) processes for the sphere. $N_{\textit{max}}=11$, $\hat \tau_{\textit{max}}=10^{-4}$.}\label{fig:SphereMultiStage}
\end{center}
\end{figure}
As can be observed, the convergence rate (with respect to CPU-time) of the multi-stage p-adapted mesh is higher than for the single-stage p-adapted meshes. The reason is that the former has fewer degrees of freedom. Table \ref{tab:MultiStageSphere} shows a summary of results for the simulations of this section.
\begin{table}[h!]
\caption{Summary of performance for single- and multi-stage simulations with $\hat \tau_{\textit{max}}=10^{-4}$}
\begin{center}
\begin{tabular}{l|rrrrrr}
Adaptation strategy & DOFs(1) & DOFs(2) & CPU-Time(s) & Speed-up & $e_{\textit{drag}}$ & $|C_l|$ \\ \hline\hline
Single-Stage: $P=4$ & $1.07 \times 10^6$ & --- & $4.53 \times 10^5$ & 1.00 & $2.27 \times 10^{-5}$ & $1.39 \times 10^{-5}$ \\
Single-Stage: $P=5$ & $7.90 \times 10^5$ & --- & $2.68 \times 10^5$ & 1.69 & $ 3.57 \times 10^{-5}$ & $1.90 \times 10^{-4}$\\
Multi-Stage: $\mathcal{P}_1=4$, $\mathcal{P}_2=8$
& $6.20 \times 10^5$ & $3.85 \times 10^5$
& $1.75 \times 10^5$ & 2.59 & $4.50 \times 10^{-5}$
& $2.12 \times 10^{-5}$\\
\end{tabular}
\label{tab:MultiStageSphere}
\end{center}
\end{table}
The number of degrees of freedom for the single-stage $P=4$ is the highest, since in that case many elements are enriched to the maximum $N_1=N_2=N_3=11$ due to problems in the error estimation (as explained in section \ref{sec:MultiStageRes2D}). In the single-stage $P=5$ this behavior is also observed, but to a lesser extent. On the other hand, in the multi-stage case the number of degrees of freedom in the first stage is limited by the condition $N_{\textit{max},1}=8$, and the distribution of polynomial orders is then corrected in the second stage, where the number of degrees of freedom decreases, even though the maximum polynomial order is $N_{\textit{max},2}=11$. It is remarkable that the multi-stage adapted mesh can achieve comparable drag and lift errors with about one third of the degrees of freedom and a speed-up of 2.59 with respect to the single-stage $P=4$.
\section{Conclusions} \label{sec:Conclusions}
In this paper, we have developed a coupled solver using truncation error estimators, anisotropic p-adaptation and multigrid. The most important conclusions of this work are:
\begin{enumerate}
\item A novel anisotropic p-adaptation multigrid algorithm is presented which uses the multigrid method both as a solver and as a truncation error estimator.
\item The coupling of single-stage p-adaptation strategies and multigrid methods resulted in a speed-up of 816 for a 2D boundary layer case and of 152 for the 3D sphere case.
\item The technique for evaluating the truncation error by Rueda-Ramírez et al. \cite{RuedaRamirez2018} can be applied directly inside an anisotropic multigrid procedure, requiring only a few additional operations.
\item Isotropic multigrid methods show better performance than anisotropic multigrid methods. The reason is that the successive coarse grids are cheaper to compute when the polynomial order is reduced in all directions.
\item A multi-stage p-adaptation technique based on coupling $\tau$-estimations and multigrid was developed. Experiments show that the multi-stage approach is advantageous for highly accurate simulations compared with single-stage adaptation procedures. The multi-stage procedure proved to be a promising alternative for 3D simulations, since coarser reference meshes can be used: the elements that do not need to be enriched are identified early and their polynomial orders are frozen at a low value. The speed-ups achieved with this method were as high as 2.59 with respect to the single-stage adaptation.
\end{enumerate}
\section*{Acknowledgments}
The authors would like to thank David Kopriva for his friendly advice and cooperation. This project has received funding from the European Union’s Horizon 2020 Research and Innovation Program under the Marie Skłodowska-Curie grant agreement No 675008 for the SSeMID project.\\
The authors acknowledge the computer resources and technical assistance provided by the \textit{Centro de Supercomputación y Visualización de Madrid} (CeSViMa).
\section*{References}
\section{Introduction} \label{S:introduction}
In an effort to understand the canonical model of $\bar{M}_g$, Hassett and Keel initiated a program to give modular interpretations to the log canonical models
\[
\bar{M}_g(\alpha) := \operatorname{Proj} \bigoplus_{d \ge 0} \HH^0(\bar{\cM}_g, \lfloor d( K_{\bar{\cM}_g} + \alpha \delta) \rfloor),
\]
for all $\alpha \in [0,1] \cap \QQ$ such that $K_{\bar{\cM}_g} + \alpha \delta$ is effective; see \cite{Hgenus2, hassett-hyeon_contraction,hassett-hyeon_flip, hyeon-lee_genus3}. The assertion that these log canonical models should admit modular interpretations, which is implicit in the work of Hassett, Hyeon, and Keel, can be formulated precisely as follows:
\vspace{1pc}
\noindent
\textbf{Modularity principle for the log MMP for $\M_g$. } {\it Let $\cU_{g}$ be the stack
of all complete connected Gorenstein curves $C$ of arithmetic genus $g$ with $\omega_C$ ample.
For $\alpha \in [0,1] \cap \QQ$ such that $K_{\bar{\cM}_g} + \alpha \delta$ is effective, there exists an open
substack $\bar{\cM}_g(\alpha) \subseteq \cU_g$, and a map
$\phi\co\bar{\cM}_g(\alpha) \rightarrow \bar{M}_g(\alpha)$ such that
$\phi$ is cohomologically-affine and $\phi_*\cO_{ \bar{\cM}_g(\alpha)}=\cO_{\bar{M}_g(\alpha)}$.
Equivalently, the log canonical model $\M_{g}(\alpha)$ is a good moduli space for $\bar{\cM}_{g}(\alpha)$.
}
\vspace{1pc}
\par
\noindent
We refer to \cite{alper_good_arxiv} for
a discussion of the essential properties of good moduli spaces, which may be thought of as best-possible approximations to a coarse moduli space in cases where the existence of a coarse moduli space is precluded by non-separatedness of the moduli stack. Hassett, Hyeon, and Lee have verified the modularity principle for the log MMP for $\bar{M}_{g}$ for all $\alpha$ when $g=2,3$ \cite{Hgenus2, hyeon-lee_genus3}, and for $\alpha > \frac{7}{10}-\epsilon$ in arbitrary genus \cite{hassett-hyeon_contraction,hassett-hyeon_flip}. In exploring possible extensions of their work, it is natural to consider the following questions: Assuming the modularity principle holds, what curves should appear in the stacks $\bar{\cM}_{g}(\alpha)$? How can we tell at which $\alpha$-value a given singular curve should appear? In this paper, we develop two methods for answering these questions, at least for curves with a $\mathbb{G}_m$-action, and show that the two methods give identical predictions.
To explain the first method, consider a complete
curve $C$ with a $\GG_m$-action
$\eta\co \GG_m \to \operatorname{Aut}(C)$ and an isolated singularity at a point $p \in C$. If $\cL$ is a line bundle on
$\bar{\cM}_g$ that
extends to a neighborhood of $[C]$ in the stack of all curves, then there is an induced action of $\GG_m$ on
the fiber of $\cL$ over $[C]$ given by a character $\chi_{\cL}(C,\eta)$. The key observation connecting
characters with the modularity principle is:
If a curve $C$ is to appear in $\bar{\cM}_g(\alpha)$ for some $\alpha$, then the character of
$K_{\bar{\cM}_{g}(\alpha)}+ \alpha \delta = 13 \lambda - (2-\alpha) \delta$
is necessarily trivial since the line bundle descends to $\bar{M}_g(\alpha)$; this is the essence of
Proposition \ref{C:CriticalAlpha}.
We compute the characters of generators of $\text{Pic}(\bar{\cM}_g)$ for a large class of singular curves with $\GG_m$-action. In particular, we calculate the characters for curves with ADE singularities, planar toric singularities, and unibranch Gorenstein singularities, as well as for ribbons;
we collect our results in Tables \ref{T:table-characters} and \ref{T:table-dangling}. As a consequence, we predict the precise $\alpha$-values at which curves with these singularities arise in the modular interpretations of $\bar{\cM}_g(\alpha)$; see Table \ref{table-predictions}. This is our first main result.
Our second method for predicting $\alpha$-values is based on the following observation: If a locus $\cT\subseteq \overline{\cM}_g$ is covered by $(K_{\Mg{g}}+\alpha\delta)$-negative curves, i.e. curves on
which $K_{\Mg{g}}+\alpha\delta$ has negative degree, then $\cT $ falls in the stable base
locus of $K_{\Mg{g}}+\alpha\delta$ and thus is flipped by the rational map $\M_g \dashrightarrow \bar{M}_g(\alpha)$. If $\cT = \cT_C$ is the variety of stable curves arising from stable reduction of a singular curve $C$, then
$C$ appears in a modular interpretation of $\bar{\cM}_g(\alpha)$ for those $\alpha$ such that
$\cT_C$ is covered by $(K_{\Mg{g}} + \alpha \delta)$-negative curves. In Section \ref{S:intersection-theory}, we compute these anticipated $\alpha$-values for toric singularities using degeneration and intersection theory techniques.
Comparing with Table \ref{T:table-characters}, we observe that the $\alpha$-values obtained by character theory and intersection theory are the same. The fact that the two techniques yield the same $\alpha$-values is not merely coincidental: In Theorem \ref{theorem-character-intersection}, we prove a general theorem which provides a formal relationship between the characters and the intersection numbers.
Because these two heuristics give such a useful guide for defining the moduli functors $\Mg{g}(\alpha)$,
we expect them to play an important role in verifying the modularity principle for the log MMP for
$\M_{g}$ for $\alpha<7/10$.
\subsection{Notation} \label{section-notation}
We work over an algebraically closed field $k$ of characteristic $0$.
Let $\cU_g$ be the stack of all complete connected Gorenstein curves $C$ with $\omega_C$ ample. Let $\pi\co \cC_g \arr \cU_g$ be the universal curve. Denote by $\omega_{\cC_g / \cU_g}$ the relative dualizing sheaf on $\cC_g$.
The following line bundles are defined on all of
$\cU_g$:
\begin{align*}
\lambda = \lambda_1 & := \det \pi_* (\omega_{\cC_g / \cU_g}), \\
\lambda_m & := \det \pi_* (\omega_{\cC_g / \cU_g}^{m}).
\end{align*}
The following divisor classes are defined on any open
substack
of $\cU_g$ that satisfies Serre's condition $S_2$ and whose locus of
worse-than-nodal curves is of codimension at least $2$:
\begin{align*}
\kappa & = \pi_* (c_1^2(\omega_{\cC_g / \cU_g})), \\
K & = \text{the canonical divisor class}, \\
\delta_0 = \delta_{\text{irr}}&= \text{the divisor of irreducible singular curves}, \\
\delta = \delta_{0} + \delta_{\text{red}} & = \delta_0 + \delta_1 + \cdots + \delta_{\lfloor g/2 \rfloor}.
\end{align*}
We can define $K$ simply as $K=13\lambda - 2\delta$; see below for a more intrinsic definition. Furthermore, we have the following relations on this open substack:
\begin{equation} \label{relations}
\begin{aligned}
\lambda_2 &= 13 \lambda - \delta = K + \delta, \\
\kappa &= 12 \lambda - \delta = - \lambda + \lambda_2 = \frac{12}{13}\bigl(K + \frac{11}{12} \delta\bigr).
\end{aligned}
\end{equation}
We define the \emph{slope} of a divisor $s \lambda - \delta$ to be $s$ and the \emph{$\alpha$-value} of
a divisor $K + \alpha \delta$ to be $\alpha$.
In particular, the slope of $K+ \alpha \delta$ is $13/(2-\alpha)$ and the $\alpha$-value of $s\lambda - \delta$ is
$2- 13/s$.
If $B\rightarrow \Mg{g}$ is a complete curve, the {\em slope} of $B$ is defined to be
$(\delta\cdot B)/(\lambda\cdot B)$.
\subsection{Defining the canonical divisor $K$} \label{section-discussion-K}
Let $\cM$ be the smooth locus of $\cU_g$.
Consider the cotangent complex $L_{\cM}$ of $\cM$ which can be described explicitly as
follows. Choose a quotient stack presentation $\cM=[M / G]$; e.g., we
may take $M$ to be a Hilbert scheme and $G$ to be a group of
automorphisms of a projective space. Then the cotangent complex is given by:
$$L_{\cM}\co (\Omega_M \xrightarrow{\alpha} \mathfrak{g}^{\vee} \otimes \cO_M),$$
where $\Omega_M$ inherits its natural $G$-linearization and
$\mathfrak{g}$ is the adjoint representation; the morphism $\alpha$ is
the pullback of the natural map
$\text{pr}_2^{\,*} \Omega_M \to \Omega_{G \times M} = \text{pr}_2^{\,*} (\Omega_G)$ along the identity section.
Then the canonical line bundle is defined as
$$K_{\cM} := \det L_{\cM} = K_M \tensor \det(\mathfrak{g} \tensor \cO_M).$$
\begin{remark} One can check, using a Grothendieck--Riemann--Roch
  calculation, that
$$K_{\cM} =13 \lambda - 2 \delta$$
whenever $K_{\cM}$, $\lambda$ and $\delta$ are all defined.
\end{remark}
\subsection*{Acknowledgements}
We thank Anand Deopurkar, David Hyeon, and Fred van der Wyck for stimulating discussions.
We also thank David Hyeon for sharing an early version of his preprint \cite{hyeon_predictions}.
\section{$\alpha$-invariants of curve singularities}
\label{S:character-theory}
To begin, recall that if $[C] \in \cU_g$ is any point and $\cL$ is a line bundle defined in a neighborhood of $[C]$, then the natural action of $\operatorname{Aut}(C)$ on the fiber $\cL\vert_{[C]}$ induces a character $\operatorname{Aut}(C) \rightarrow \mathbb{G}_m$. If $\eta\co \mathbb{G}_m \rightarrow \operatorname{Aut}(C)$ is any one-parameter subgroup, then there is an induced character $\mathbb{G}_m \rightarrow \mathbb{G}_m$ which is necessarily of the form $z \mapsto z^{n}$ for some integer $n \in \mathbb{Z}$. Given a curve $[C]$, a one-parameter subgroup $\eta\co \mathbb{G}_m \rightarrow \operatorname{Aut}(C)$, and a line bundle $\cL$, we denote this integer by $\chi_{\cL}(C, \eta)$.
If $\cL=\lambda_{m}$, we write simply $\chi_{m}(C,\eta)$. Furthermore, if $\operatorname{Aut}(C) \simeq \mathbb{G}_m$, then we write $\chi_{\cL}(C)$ or $\chi_{m}(C)$, where the one-parameter subgroup $\eta\co \GG_m \rightarrow \operatorname{Aut}(C)$ is understood to be the identity.\footnote{Note that, in general, the integers $\chi_{m}(C)$ are only defined up to sign since the choice of isomorphism $\operatorname{Aut}(C) \simeq \mathbb{G}_m$ depends on a sign. The ratios $\chi_{l}(C)/\chi_{m}(C)$ however are well-defined, and this is all we will need.}
\begin{lem}\label{L:Observations}
Suppose the modularity principle for the log MMP for $\M_g$ holds and that
$\bar{\cM}_g(\alpha)$ is $S_2$ and the locus
of worse-than-nodal curves in $\bar{\cM}_g(\alpha)$ has codimension at least $2$. Then
for $c(\alpha):=\frac{13\alpha-13}{2-\alpha}$, some multiple of the $\mathbb{Q}$-line bundle $c(\alpha)\lambda_1+\lambda_2$ descends to a $\mathbb{Q}$-line bundle on
$\M_g(\alpha)$.
\end{lem}
\begin{proof}
Since $\lambda_2 = 13\lambda_1-\delta$ on $\bar{\cM}_{g}$, we have
$K_{\bar{\cM}_{g}}+\alpha\delta \sim c(\alpha)\lambda_1+\lambda_2.$
Now consider the commutative diagram
\[
\xymatrix{
\bar{\cM}_{g} \ar[d] \ar@{-->}[r] &\bar{\cM}_{g}(\alpha)\ar[d]^{\phi}\\
\bar{M}_{g} \ar@{-->}[r]^{f}&\bar{M}_g(\alpha)\\
}
\]
By the definition of $\M_{g}(\alpha)$, the divisor class $K_{\bar{\cM}_{g}}+\alpha\delta$ pushes forward to a $\QQ$-Cartier divisor class on $\M_{g}(\alpha)$. We claim that $\phi^* f_*(K_{\bar{\cM}_{g}}+\alpha\delta)=c(\alpha)\lambda_1+\lambda_2$ on $\bar{\cM}_g(\alpha)$. Evidently, this equality holds on $\bar{\cM}_g(\alpha) \cap \bar{\cM}_{g}$. Since the complement of $\bar{\cM}_g(\alpha) \cap \bar{\cM}_{g}$ has codimension $2$ in $\bar{\cM}_g(\alpha)$ and $\bar{\cM}_g(\alpha)$ is $S_2$, equality holds over all of $\bar{\cM}_g(\alpha)$.
\end{proof}
As a consequence, we obtain:
\begin{prop}\label{C:CriticalAlpha}
Suppose the modularity principle for the log MMP for $\M_g$ holds for $\alpha$
such that $\bar{\cM}_g(\alpha)$ is $S_2$ and the locus
of worse-than-nodal curves in $\bar{\cM}_g(\alpha)$ has codimension at least $2$.
Let $C$ be a curve in $\bar{\cM}_{g}(\alpha)$. Let $\eta\co \mathbb{G}_m \rightarrow \operatorname{Aut}(C)$ be any
one-parameter subgroup. Then either $\chi_m(C,\eta)=0$ for all $m$ or
$$
\alpha=\frac{13-2\left(\frac{\chi_2(C,\eta)}{\chi_1(C,\eta)} \right)}{13-\left(\frac{\chi_2(C,\eta)}{\chi_1(C,\eta)}\right)} \ .
$$
In other words, either $\operatorname{Aut}(C)^\circ$ acts trivially on all the vector spaces $\bigwedge H^0(C, \omega_C^m)$, or else $\alpha$ is uniquely determined by the characters $\chi_1(C,\eta)$ and $\chi_{2}(C,\eta)$.
\end{prop}
\begin{proof} Set $c(\alpha)=\frac{13\alpha-13}{2-\alpha}$.
By Lemma \ref{L:Observations}, the line bundle $c(\alpha)\lambda_1+\lambda_2$ must descend from $\bar{\cM}_{g}(\alpha)$ to $\bar{M}_g(\alpha)$. In particular, the one-parameter subgroup $\eta\co \mathbb{G}_m \rightarrow \operatorname{Aut}(C)$ must act trivially on the fiber $(c(\alpha)\lambda_1+\lambda_2)|_{[C]}$. Thus
the character for this action, given by $c(\alpha)\chi_1(C,\eta)+\chi_2(C,\eta)$, must be $0$.
We conclude that either $\chi_1(C,\eta)=\chi_2(C,\eta)=0$, or
$$
c(\alpha)=-\chi_2(C,\eta)/\chi_{1}(C,\eta),
$$
as desired.
\end{proof}
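As a quick symbolic sanity check of the identity $K_{\bar{\cM}_{g}}+\alpha\delta = (2-\alpha)\bigl(c(\alpha)\lambda_1+\lambda_2\bigr)$ underlying Lemma \ref{L:Observations} and Proposition \ref{C:CriticalAlpha}, one may run the following short sketch (Python with sympy):
\begin{verbatim}
import sympy as sp

alpha, lam1, delta = sp.symbols('alpha lambda_1 delta')
lam2 = 13 * lam1 - delta      # lambda_2 = 13*lambda_1 - delta
K = 13 * lam1 - 2 * delta     # K = 13*lambda - 2*delta
c = (13 * alpha - 13) / (2 - alpha)
assert sp.simplify((K + alpha * delta)
                   - (2 - alpha) * (c * lam1 + lam2)) == 0
\end{verbatim}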
Evidently, Proposition \ref{C:CriticalAlpha} says nothing whatsoever concerning curves with finite automorphisms.
At critical values of $\alpha$ however, the stacks $\bar\cM_{g}(\alpha)$ typically contain curves
admitting a $\GG_m$-action.
We define the \emph{$\alpha$-value} of a complete curve $C$ with a $\mathbb{G}_m$-action $\eta\co \mathbb{G}_m \rightarrow \operatorname{Aut}(C)$ as
$$
\alpha(C, \eta):= \frac{13-2\left(\frac{\chi_2(C,\eta)}{\chi_1(C,\eta)} \right)}{13-\left(\frac{\chi_2(C,\eta)}{\chi_1(C,\eta)}\right)} \ .
$$
We note that $\alpha(C, \eta) = 2 - 13 \chi_{\lambda}(C, \eta) / \chi_{\delta}(C, \eta)$ as long as the deformation space of $C$ is $S_2$ and the locus of worse-than-nodal curves has codimension at least 2.
Proposition \ref{C:CriticalAlpha} implies that the $\alpha$-value of any complete curve
with $\GG_m$-action
is the {\em only} $\alpha$ at which
the curve can show up in $\Mg{g}(\alpha)$.
Note also that whenever $\Mg{g}(\alpha)$ is constructed as a GIT quotient, the necessary condition for
$[C]$ to be semistable is that the character of $K_{\Mg{g}(\alpha)}+\alpha\delta$ is $0$,
as this character computes the
Hilbert-Mumford index of $[C]$ with respect to $\eta$. We discuss a connection of the character
theory with GIT in Section \ref{section-git}.
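The arithmetic relating characters to $\alpha$-values and slopes is simple enough to automate. The following minimal Python sketch (using exact rational arithmetic) reproduces several entries of Table \ref{T:table-characters}; the relation $\chi_{\delta} = 13\chi_1 - \chi_2$ holds on the open substack described in Section \ref{section-notation}.
\begin{verbatim}
from fractions import Fraction as F

def alpha_value(chi1, chi2):
    # alpha(C, eta) = (13 - 2*(chi2/chi1)) / (13 - chi2/chi1)
    r = F(chi2, chi1)
    return (13 - 2 * r) / (13 - r)

def slope(chi1, chi2):
    # chi_delta = 13*chi_1 - chi_2, so the slope is chi_delta / chi_1.
    return F(13 * chi1 - chi2, chi1)

# Spot checks against the character table: E6 has (chi1, chi2) = (8, 33),
# E8 has (14, 63), and A_6 = A_{2k} with k = 3 has (9, 34).
assert alpha_value(8, 33) == F(38, 71) and slope(8, 33) == F(71, 8)
assert alpha_value(14, 63) == F(8, 17) and slope(14, 63) == F(17, 2)
assert alpha_value(9, 34) == F(49, 83)
\end{verbatim}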
Next, we explain how to define and extract critical $\alpha$-values for an arbitrary curve singularity with a
$\GG_m$-action. Given a curve singularity $\hat{\cO}_{C,p}$ with $n$ branches and $\delta$-invariant $\delta(p)$, we
may consider a curve of the form
$$
C=E_1 \cup \ldots \cup E_n \cup C_0,
$$
where $C_0$ is any smooth curve of genus $g-\delta(p)-n+1$ and $E_1, \ldots, E_n$ are rational curves attached to $C_0$ nodally and meeting in a singularity analytically isomorphic to $\hat{\cO}_{C,p}$ (see Figure \ref{F:J10}).
\begin{figure}[hbt]
\begin{centering}
\begin{tikzpicture}[scale=0.55]
\node [style=black] (0) at (-12, 3.5) {};
\node [style=black] (e1) at (-12, 4) {$E_1$};
\node [style=black] (1) at (-11, 3.5) {};
\node [style=black] (e2) at (-11, 4) {$E_2$};
\node [style=black] (2) at (-10, 3.5) {};
\node [style=black] (e3) at (-10, 4) {$E_3$};
\node [style=black] (3) at (-8, 3.5) {};
\node [style=black] (4) at (-7, 3.5) {};
\node [style=black] (5) at (-6, 3.5) {};
\node [style=black] (6) at (-8.5, 3) {};
\node [style=black] (7) at (-6, 2.5) {};
\node [style=black] (8) at (-3.5, 2) {};
\node [style=black] (c0) at (-3, 2) {$C_0$};
\node [style=black] (a) at (-7.5, 1.5) [label=left:{$y^3=x^6$}]{};
\draw [very thick,bend right=60] (1.center) to (4.center);
\draw [very thick,bend right=90, looseness=1.75] (2.center) to (3.center);
\draw [very thick,bend right=15] (7.center) to (8.center);
\draw [very thick,bend right=35] (0.center) to (5.center);
\draw [very thick,bend left=15] (6.center) to (7.center);
\end{tikzpicture}
\end{centering}
\vspace{-0.5pc}
\caption{Rational curves $E_1, E_2$, and $E_3$ meet in the monomial $y^3=x^6$ singularity.}\label{F:J10}
\end{figure}
\vspace{-0.5pc}
If $\GG_m$ acts algebraically on $\hat{\cO}_{C,p}$ via $\eta$, then this
action extends canonically to $C$, which induces a one-parameter subgroup $\tilde \eta\co \GG_m \rightarrow \operatorname{Aut}(C)$.
The characters $\chi_1(C,\tilde \eta)$ and
$\chi_2(C,\tilde \eta)$ depend only on the singularity $\hat{\cO}_{C,p}$ and the $\GG_{m}$-action.
\begin{definition}
We define the \emph{$\alpha$-value of $\hat{\cO}_{C,p}$ with respect to $\eta$}, denoted by $\alpha(\hat{\cO}_{C,p}, \eta)$, as the corresponding $\alpha$-value, $\alpha(C,\tilde \eta)$, of the complete curve $C$.
\end{definition}
\begin{table}[tbh]
{\footnotesize
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{| c || c | c | c || c | c | }
\hline
\text{Singularity type} & $\lambda$ & $\lambda_2$ & $\delta$ & $\alpha$-value & slope \\
\hline
\hline
$A_{2k}: y^2 - x^{2k+1}$ & $k^2$ & $5k^2-4k+1$& $8k^2+4k-1$ & $\frac{3k^2+8k-2}{8k^2+4k-1}$ & $\frac{8k^2+4k-1}{k^2}$ \\
\hline
$A_{2k+1}: y^2 - x^{2k+2}$ & $\frac{k^2+k}{2}$&$\frac{5k^2+k}{2}$& $4k^2+6k$& $\frac{3k+11}{8k+12}$ & $\frac{8k+12}{k+1}$\\
\hline
$D_{2k+1}: x(y^2-x^{2k-1})$ &$k^2$ &$5k^2-2k$ &$8k^2+2k$ & $\frac{3k+4}{8k+2}$ & $\frac{8k+2}{k}$ \\
\hline
$D_{2k+2}: x(y^2-x^{2k})$ &$\frac{k^2+k}{2}$ &$\frac{5k^2+3k}{2}$ & $4k^2+5k$ & $\frac{3k+7}{8k+10}$ & $\frac{8k+10}{k+1}$\\
\hline
$E_6: y^3-x^4$ & $8$ & $33$ & $71$ & $38 / 71$ & $71/8$ \\
\hline
$E_7: y(y^2-x^3)$ & $7$ & $31$ & $60$ & $29/60$ & $60/7$ \\
\hline
$E_8: y^3 - x^5 $ & $14$ & $63$ & $119$ & $8/17$ & $17/2$ \\
\hline
$y^3-x^6$ & $7$ & 34 & $57$ & $23/57$ & $57/7$ \\
\hline
$y^3-x^7$ & $31$ & $152$ & $251$ & $99/251$ & $251/31 $ \\
\hline
$y^3 -x^8$ & $42$ & $211$ & $335$ & $124/335$ & $335/42 $ \\
\hline
$T_{p,q}: y^p-x^q $& \multicolumn{5}{|c|}{\SMALL{See Proposition \ref{P:toric-family} for character values}} \\
\hline
\SMALL{monomial unibranch}& \multirow{2}{*}{$\sum b_i$} & \multirow{2}{*}{$(2k-1)^2 + \sum b_i$}
& \multirow{2}{*}{\SMALL{$12 \sum b_i - (2k-1)^2$}} & \multirow{2}{*}{\tiny{$\dfrac{11 \sum b_i - 2 (2k-1)^2}{12 \sum b_i - (2k-1)^2}$}}
& \multirow{2}{*}{$12 - \frac{ (2k-1)^2}{\sum b_i}$} \\
\SMALL{with gaps $\{b_1, \dots, b_k\}$} & & & & & \\
\hline
\SMALL{Ribbon $C_{\ell}$} & $g\left(\ell-\frac{g-1}{2}\right)$ & \SMALL{$(5g-4)(\ell-\frac{g-1}{2})$} & \SMALL{$(8g+4)(\ell-\frac{g-1}{2})$} & $\frac{3g+8}{8g+4}$ & $8+\frac{4}{g}$ \\
\hline
\end{tabular} }
\hfill
\caption{Character values of Gorenstein singular curves from Section \ref{S:character-computations}}
\label{T:table-characters}
\end{table}
\vspace{-1pc}
We compute the character values of all ADE, toric, and monomial unibranch Gorenstein singularities, as well as of ribbons, in Section \ref{S:character-computations}. The results are displayed in Table \ref{T:table-characters}.
We expect that the $\alpha$-values displayed in the table are the $\alpha$-values at which the given
singularity type first appears on a curve in $\bar{\cM}_{g}(\alpha)$. There are two caveats. First, the $\alpha$-value
depends not only on the analytic isomorphism type of a singularity but also on the global geometry of the curve.
This dependence is described in Section \ref{S:dangling} below. Second,
there is no guarantee that at the prescribed $\alpha$-values exactly the predicted singularities appear.
Theorem \ref{theorem-character-intersection} below is the first step towards confirming these predictions.
In addition, we note that our predictions for when $A$ singularities arise agree with the computations of Hyeon, who uses different heuristics \cite{hyeon_predictions}.
\subsection{Dangling singularities}\label{S:dangling} \label{section-dangling}
We now explain a variant of the above ideas applied to curves with {\em dangling} singularities; such
singular curves have not yet appeared in the work of Hassett and Hyeon but we expect them to play an important role in the future stages of the program.
We define a collection of modified $\alpha$-values associated to a multi-branch singularity. If $\hat{\cO}_{C,p}$ is any curve singularity with $n \geq 2$ branches, we may enumerate the branches, and for any non-empty subset $S \subset \{1, \ldots, n\}$, we consider a curve of the form
$$
C^S=E_1 \cup \ldots \cup E_n \cup C_0,
$$
where each $E_i$ is $\PP^1$ with two distinguished points $0$ and $\infty$,
each $E_i$ with $i \in S$ meets $C_0$ nodally at $\infty$, and all the $E_i$'s are glued along a singularity of type
$\hat{\cO}_{C,p}$ at $0$ (see Figure \ref{F:dangling}).
\begin{figure}[hbt]
\begin{centering}
\begin{tikzpicture}[scale=0.38]
\node [style=black] (n1) at (-0.5, 3) [label=left:$E_1$]{};
\node [style= black] (1) at (-3, 2) [label=left:$E_3$]{};
\node [style= black] (2) at (2, 2) {};
\node [style= black] (3) at (4, 0) [label=right:$C_0$] {};
\node [style= black] (4) at (1, -1.25) {};
\node [style= black] (5) at (-3, -2) [label=left:$E_2$]{};
\node [style= black] (6) at (2, -2) {};
\node [style= black] (7) at (-2, -2.5) {};
\node [style= black] (n2) at (-0.5, -3) {};
\draw [very thick, bend right=75, looseness=1.41] (1.center) to (2.center);
\draw [very thick, bend left=75, looseness=1.41] (5.center) to (6.center);
\draw [very thick] (n2.center) to (n1.center);
\draw [very thick, bend right=45] (7.center) to (4.center);
\draw [very thick, bend left=45] (4.center) to (3.center);
\end{tikzpicture}
\end{centering}
\vspace{-1pc}
\caption{Dangling $D_{6}^{\{1,2\}}$-singularity.}\label{F:dangling}
\end{figure}
\vspace{-0.2pc}
As before, if $\GG_m$ acts algebraically on $\hat{\cO}_{C,p}$ via $\eta$, there is an induced one-parameter subgroup $\tilde \eta\co \GG_m \rightarrow \operatorname{Aut}(C)$. We define the \emph{$\alpha$-value of $\hat{\cO}_{C,p}$ with respect to $S$ and $\eta$}, denoted by $\alpha^{S}(\hat{\cO}_{C,p}, \eta)$, as the corresponding $\alpha$-value, $\alpha(C^S,\tilde \eta)$, of the complete curve $C^S$.
In this notation, $\alpha^{[n]}(\hat{\cO}_{C,p})$ is the standard $\alpha$-invariant defined above. In general, the invariants $\alpha^{S}(\hat{\cO}_{C,p})$ will depend on the subset $S$, which reflects the fact that curves $C^S$ may appear in the moduli stack $\bar{\cM}_{g}(\alpha)$ at different values of $\alpha$.
\begin{figure}[hbt]
\begin{centering}
\begin{tikzpicture}[scale=0.4]
\node [style=black] (0) at (-8, 4) {};
\node [style=black] (y) at (-7.6, 4.2) [label=left:{\SMALL $C_0$}] {};
\node [style=black] (1) at (-9, 2) {};
\node [style=black] (x) at (-8.7, 2) [label=left:{\small ${g=2}$}] {};
\node [style=black] (2) at (-6, 1.5) {};
\node [style=black] (z) at (-6.5, 1.7) [label=right:$\PP^1$] {};
\node [style=black] (3) at (-9, 1) {};
\node [style=black] (4) at (-8, 1) {};
\node [style=black] (5) at (-10, 0) {};
\node [style=black] (6) at (-8, 0) {};
\node [style=black] (7) at (-7, 0) {};
\node [style=black] (8) at (-6, 0) {};
\node [style=black] (9) at (-12.5, -1) {};
\node [style=black] (d) at (-12.1, -1) [label=left:{\SMALL $C_0$}] {};
\node [style=black] (10) at (-11, -1) {};
\node [style=black] (11) at (-5, -1) {};
\node [style=black] (12) at (-4, -1) {};
\node [style=black] (w) at (-2.1, -1) {};
\node [style=black] (r) at (-1.9, -1) [label=left:{\SMALL $C_0$}] {};
\node [style=black] (13) at (-13.5, -3) {};
\node [style=black] (q) at (-13.1, -3) [label=left:{\small ${g=2}$}] {};
\node [style=black] (14) at (-13.5, -4) {};
\node [style=black] (15) at (-12.5, -4) {};
\node [style=black] (16) at (-5, -4) {};
\node [style=black] (17) at (-11.5, -5) {};
\node [style=black] (18) at (-4, -5) {};
\node [style=black] (eq) at (-4, -3) [label=right:{\small ${y^2=x^{6}}$}] {};
\node [style=black] (19) at (-3, -5) {};
\node [style=black] (e) at (-3.3, -5) [label=right:$\PP^1$] {};
\draw [very thick,bend right=300] (12.center) to (16.center);
\draw [very thick, out=105, looseness=5.80, in=105] (18.center) to (19.center);
\draw [very thick,bend right=300] (0.center) to (3.center);
\draw [very thick,bend right=45, looseness=1.5] (15.center) to (17.center);
\draw [very thick, bend right=45, looseness=1.5] (4.center) to (7.center);
\draw [very thick, bend left=45, looseness=1.5] (1.center) to (4.center);
\draw [very thick, ->] (5.center) to (10.center);
\draw [very thick, bend left=45, looseness=1.5] (13.center) to (15.center);
\draw [very thick, bend right=300] (9.center) to (14.center);
\draw [very thick] (6.center) to (2.center);
\draw [very thick, ->] (8.center) to (11.center);
\end{tikzpicture}
\end{centering}
\caption{Given a smoothing of a curve with a genus $2$ tail attached at an arbitrary point $p$, after blowing up the conjugate point of $p$ and contracting the genus $2$ curve, one obtains a dangling $\PP^1$ attached at an oscnode.}\label{F:dangling-genus-2}
\end{figure}
{\footnotesize
\begin{table}[htb]
\renewcommand{\arraystretch}{1.6}
\begin{tabular}{|c || c | c | c || c | c |}
\hline
Dangling type & $\lambda$ & $\lambda_2$ & $\delta$ & $\alpha$-value & slope \\
\hline
\hline
$A_{2k}^{\{\}}: y^2 - x^{2k+1}$ & $k^2$ & $5k^2-4k$ & $8k^2+4k$ & $\frac{3k^2+8k}{8k^2+4k}$ & $\frac{8k+4}{k}$\\
\hline
$A_{2k+1}^{\{\}}: y^2 - x^{2k+2}$ & $\frac{k^2+k}{2}$ & $\frac{5k^2+k-4}{2}$ & $4k^2+6k+2$ & $\frac{3k^2+11k+8}{8k^2+12k+4}$ & $\frac{8k^2+12k+4}{k^2+k}$\\
\hline
$A_{2k+1}^{\{1\}}: y^2 - x^{2k+2}$ & $\frac{k^2+k}{2}$ & $\frac{5k^2+k-2}{2}$ & $4k^2+6k+1$ & $\frac{3k^2+11k+4}{8k^2+12k+2}$ & $\frac{8k^2+12k+2}{k^2+k}$\\
\hline
$D_{2k+1}^{\{1\}}: x(y^2-x^{2k-1})$ &$k^2$ & $5k^2-2k-1$ & $8k^2+2k+1$ & $\frac{3k^2+4k+2}{8k^2+2k+1}$& $\frac{8k^2+2k+1}{k^2}$\\
\hline
$D_{2k+2}^{\{1\}}: x(y^2-x^{2k})$ & $\frac{k^2+k}{2}$ & $\frac{5k^2+3k-4}{2}$ & $4k^2+5k+2$ & $\frac{3k^2+7k+8}{8k^2+10k+4}$ & $\frac{8k^2+10k+4}{k^2+k}$\\
\hline
$D_{2k+2}^{\{1,2\}}: x(y^2-x^{2k})$ & $\frac{k^2+k}{2}$ & $\frac{5k^2+3k-2}{2}$ & $4k^2+5k+1$ &$\frac{3k^2+7k+4}{8k^2+10k+2}$ & $\frac{8k^2+10k+2}{k^2+k}$ \\
\hline
$E_6^{\{\}}: y^3-x^4$ & $8$ & $32$ & $72$ & $5 / 9$ & $9$ \\
\hline
$E_7^{\{1\}}: y(y^2-x^3)$ & $7$ & $30$ & $61$ & $31/61$ & $61/7$ \\
\hline
$E_8^{\{\}}: y^3 - x^5 $ & $14$ & $62$ & $120$ & $29/60$ & $60/7$ \\
\hline
\hline
Dangling chains (see \ref{S:dangling-chains}) & \multicolumn{2}{|c|}{$\lambda$} & $\delta$ \\
\cline{1-4}
\cline{1-4}
$A_{2i+1/2j+1}$ & \multicolumn{2}{|c|}{$\frac{j^2+j-i^2-i}{2}$} & $4j^2+6j-4i^2-6i+1$ \\
\cline{1-4}
$A_{2i+1/2j}$ & \multicolumn{2}{|c|}{$j^2-(\frac{i^2+i}{2})$} & $8j^2+4j-4i^2-6i-1$ \\
\cline{1-4}
\end{tabular}
\smallskip
\caption{Character values for dangling ADE singularities from Section \ref{S:character-computations} }
\label{T:table-dangling}
\end{table}
}
The first example of this phenomenon should occur with the oscnode $(y^2=x^6)$. The $\alpha$-invariant of the oscnode is $17/28$, reflecting the fact that we expect oscnodes to replace genus $2$ bridges attached at conjugate points. By contrast, as seen in Table \ref{T:table-dangling}, the $\alpha$-invariant of the dangling oscnode $A_{5}^{\{1\}}$ is $19/29$. The key point is that this is precisely the threshold $\alpha$-value at which $\Delta_2$ is covered by $(K+\alpha\delta)$-negative curves, and indeed one can replace arbitrary genus $2$ tails by a dangling oscnode, using the blow-up/blow-down procedure pictured in Figure \ref{F:dangling-genus-2}.
While it would be too laborious to compute the associated $\alpha^{S}$-values even for toric planar singularities, we compute a sample in order to give an indication. In Table \ref{T:table-dangling}, we list $\alpha$-values for all dangling ADE singularities. Note that since the branches of any $A_k$ or toric singularity are isomorphic, the only relevant feature of the subset $S\subset \{1, \ldots, n\}$ is its size. For $D_{2k+1}$ singularities, we use the labeling ``1'' for the smooth branch and ``2'' for the singular branch, and for $D_{2k+2}$-singularities, we use ``1'' for the smooth branch with unique tangent direction and ``2,3'' for the tangent branches.
Similarly, for the $E_7$ singularity, we use the labeling ``1'' for the smooth branch and ``2'' for the singular branch.
\subsection{Chains of dangling singularities}\label{S:dangling-chains}
We also predict that
in future steps in the log MMP for $\bar{M}_g$ it will be necessary to parameterize curves admitting certain chains of dangling singularities. Rather than defining a general theory of chains of dangling singularities, we will introduce two particular sequences which we anticipate will arise before $\alpha = 5/9$.
We will say that a genus $g$ curve $C$ has an {\em $A_{2i+1/2j+1}$-singularity} (resp. {\it $A_{2i+1/2j}$-singularity}) if $C$ is of the form
$$C = C_0\cup E_1 \cup E_2 \cup E_3 \qquad (\text{resp. }\, C = C_0\cup E_1 \cup E_2 ),$$
where $C_0$ is a genus $g-i-j$ curve, each $E_k$ is a smooth rational curve, $E_1$ meets $C_0$ at a node, $E_2$ meets $E_1$ at an $A_{2i+1}$ singularity, and $E_3$ meets $E_2$ at an $A_{2j+1}$ singularity (resp. $E_2$ has a monomial $A_{2j}$-singularity); see Figure \ref{Fig:dangling-slash}.
\begin{figure}[hbt]
\begin{centering}
\begin{tikzpicture}[scale=0.7]
\node (0) at (-4, 2.5) {};
\node (1) at (-3, 2.5) {};
\node (2) at (-2, 2.5) {};
\node (3) at (-1, 2.5) {};
\node (4) at (0, 2.5) {};
\node (5) at (1, 2.5) {};
\node (6) at (-3.75, 1.75) {};
\node (7) at (-3, 1.75) {};
\node (8) at (-0.75, 1.75) {};
\node (9) at (0, 1.75) {};
\node (10) at (-3.75, 1) {};
\node (11) at (-3, 1) {};
\node (b) at (-3.25, 1.5) [label=right:{\footnotesize ${A_{2i+1}}$}] {};
\node (12) at (-0.75, 1) {};
\node (13) at (0, 1) {};
\node (a) at (-0.25, 1.5) [label=right:{\footnotesize ${A_{2i+1}}$}] {};
\node (c) at (-0.25, 0.2) [label=right:{\footnotesize ${A_{2j}}$}] {};
\node (d) at (-3.25, 0.2) [label=right:{\footnotesize ${A_{2j+1}}$}] {};
\node (14) at (-3.75, 0.5) {};
\node (15) at (-3, 0.5) {};
\node (16) at (-0.75, 0) {};
\node (17) at (0, 0) {};
\node (18) at (-3.75, -0.25) {};
\node (19) at (-3, -0.25) {};
\node (20) at (-0.75, -0.75) {};
\node (21) at (-3, -1) {};
\node (22) at (0, -1) {};
\node (23) at (0, -1.75) {};
\draw [very thick, out=-90, looseness=1.50, in=15] (13.center) to (16.center);
\draw [very thick, out=0, looseness=1.25, in=82] (16.center) to (22.center);
\draw [very thick, bend right=45] (3.center) to (5.center);
\draw [very thick, bend left=90, looseness=1.75] (18.center) to (19.center);
\draw [very thick, bend left=90, looseness=1.75] (12.center) to (13.center);
\draw [very thick, bend left=90, looseness=1.75] (10.center) to (11.center);
\draw [very thick] (1.center) to (7.center);
\draw [very thick, bend left=90, looseness=1.50] (7.center) to (6.center);
\draw [very thick, bend right=45] (0.center) to (2.center);
\draw [very thick] (11.center) to (15.center);
\draw [very thick, bend right=90, looseness=1.50] (14.center) to (15.center);
\draw [very thick] (4.center) to (9.center);
\draw [very thick] (19.center) to (21.center);
\draw [very thick, bend left=90, looseness=1.50] (9.center) to (8.center);
\end{tikzpicture}
\end{centering}
\vspace{-2pc}
\caption{ $A_{2i+1/2j+1}$ and $A_{2i+1/2j}$-singularities.}\label{Fig:dangling-slash}
\end{figure}
\section{Predictions for the log MMP for $\M_g$}
\label{S:predictions}
Using heuristics provided by both intersection theory and character computations, we offer predictions
in Table \ref{table-predictions} for modular interpretations of $\Mg{g}(\alpha)$ for $\alpha \ge 5/9$.
(For small $g$ these predictions have to be modified.
For example, in $\M_3$ Weierstrass genus $2$ tails are also elliptic tails and so
$\alpha=2/3$ is not a threshold value, as can be seen from \cite{hyeon-lee_genus3}.)
\addtocounter{footnote}{1}
\footnotetext[\value{footnote}]{Although it goes beyond the scope of this paper, the replacement of genus $2$
bridges requires both $D_6^{\{1,2\}}$ singularities {\em and} certain non-reduced double line bridges;
in the case of genus $5$ this is worked
out explicitly in \cite{fedorchuk-genus5}.}
{\footnotesize
\begin{table}[htb]
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{l || c | c}
\multirow{2}{*}{$\alpha$-value} & Singularity type & \multirow{2}{*}{Locus removed at
$\alpha - \epsilon$} \\
& added at $\alpha$\\
\hline
\hline
$9/11$ & $A_2$ & \small{elliptic tails attached nodally}\\
\hline
$7/10$ & $A_3$ & \small{elliptic bridges attached nodally } \\
\hline
$2/3$ & $A_4$ & \small{genus $2$ tails attached nodally at a Weierstrass point} \\
\hline
\multirow{2}{*}{$19/29$}
& $A_5^{\{1\}}$ & \small{genus $2$ tails attached nodally}\\
& $A_{3/4}$ & \small{genus $2$ tails attached tacnodally at a Weierstrass point}\\
\hline
$12/19$
& $A_{3/5}$ & \small{genus $2$ tails attached tacnodally}\\
\hline
$17/28$ & $A_5$ & \small{genus $2$ bridges attached nodally at
conjugate points} \\
\hline
$49/83$ & $A_6$ & \small{hyperelliptic genus $3$ tails attached nodally at
a Weierstrass point} \\
\hline
$32/55$ & $A_7^{\{1\}}$ & \small{hyperelliptic genus $3$ tails attached
nodally} \\
\hline
$42/73$ & $A_{3/6}$ & \small{hyperelliptic genus $3$ tails attached
tacnodally at a Weierstrass point} \\
\hline
\multirow{7}{*}{$5/9$} & $D_4$ & \small{elliptic triboroughs attached nodally} \\
&$D_5$ & \small{genus $2$ bridges attached nodally at a
Weierstrass and free point}\\
& $D_6^{\{1,2\}}$ & \small{genus $2$ bridges attached
nodally at two free points$^{\decimal{footnote}}$ } \\
& $A_{3/7}$ & \small{hyperelliptic genus $3$ tails attached tacnodally}\\
& $A_{5/7}$ & \small{hyperelliptic genus $3$ tails attached oscnodally}\\
& $A_{5/6}$ & \small{hyperelliptic genus $3$ tails attached oscnodally at a Weierstrass point}\\
& $A_7$ & \small{hyperelliptic genus $3$ bridges attached
nodally at conjugate points} \\
\end{tabular}
\medskip
\caption{Predictions for the log MMP for $\alpha \ge 5/9$}
\label{table-predictions}
\end{table}
}
\begin{remark}\label{R:chamber-remark}
There are in fact other singular curves with $\GG_m$-action whose characters allow them to appear
in $\Mg{g}(\alpha)$ for $\alpha\in [5/9, 1]$. However, they can be excluded by geometric considerations.
Heuristically, a necessary condition for a curve to appear in $\Mg{g}(\alpha)$ is that it is an isotrivial specialization
of curves in $\Mg{g}(\alpha+\epsilon)$. For example, an $A_{5/4}$-curve has $\alpha$-value $9/11$ but obviously
is not an isotrivial specialization of any stable curve.
\end{remark}
Giving a complete description of $\bar{\cM}_g(\alpha)$ is much more subtle than simply describing the singularities added and the loci removed. For instance, after removing elliptic tails (connected genus $1$ subcurves attached nodally) at $\alpha = 9/11 - \epsilon$, in each subsequent moduli stack parameterizing curves with additional singularities one needs to redefine what is meant by an elliptic tail by specifying the allowed attaching singularities.
\section{Character theory computations}
\label{S:character-computations}
\subsection{Computing the characters of $\lambda$ and $\lambda_2$}
Suppose we are given a curve $C$ and a one-parameter subgroup $\eta\co \GG_m \to \operatorname{Aut}(C)$. Then there is an induced $\GG_m$-action on the sequence of one-dimensional vector spaces
$$
\lambda_{m}|_{[C]}:=\bigwedge \HH^0(C, \omega_{C}^m).
$$
For many classes of singularities, the induced character $\chi_{m}(C, \eta) \in \mathbb{Z}$ of this action can be explicitly computed. In this section, we will calculate these characters for $A_{2k}$ and $D_{2k+2}$ singularities, elliptic $m$-fold points, monomial unibranch Gorenstein singularities, and ribbons. The same procedure can be used to compute these characters for all ADE and toric singularities, and the results are listed in Table \ref{T:table-characters}.
Throughout this section, we will use the following basic result about the dualizing sheaf $\omega_C$
of a reduced singular curve $C$.
Let $\nu\co \tilde{C} \rightarrow C$ be the normalization of $C$ and consider the sheaf $\Omega_{\tilde{C}} \otimes
K(\tilde{C})$ of rational differentials on $\tilde{C}$.
Then $\omega_C\subset \nu_*\bigl(\Omega_{\tilde{C}} \otimes K(\tilde{C})\bigr)$ is the subsheaf of {\em Rosenlicht differentials}
defined as follows: A differential $\omega\in \nu_* \bigl(\Omega_{\tilde{C}} \otimes K(\tilde{C}) \bigr)$ is Rosenlicht at
$p\in C$
if for every function $f \in \cO_{C,p}$
$$
\sum_{p_i \in \nu^{-1}(p)}\text{Res}_{\, p_i} \, (f \,\omega)=0.
$$
See \cite[Prop.6.2]{barth} for the proof of this fact in the
analytic setting or \cite[Ch.IV]{serre-corps} for a general discussion of duality on singular curves.
\begin{example}[$A_{2k}: y^2=x^{2k+1}$]
Let $C=C_{0} \cup E$, where $E$ is a rational curve with a higher cusp $y^2=x^{2k+1}$ at zero,
attached nodally at infinity to $p\in C_{0}$.
If $t$ is a uniformizer at zero, then $dt/t^{2k}$ is a generator for
$\omega_{C}$ at the cusp, and we may write down a basis for $\HH^0(C, \omega_{C})$ as follows:
$$
\left(0, \frac{dt}{t^{2k}}\right),\left(0, \frac{dt}{t^{2k-2}}\right), \ldots, \left(0, \frac{dt}{t^2}\right), (\omega_{1}, 0), \ldots, \left(\omega_{g-k},0 \right),
$$
where $\omega_{1}, \ldots, \omega_{g-k}$ is a basis for $\HH^0(C_0, \omega_{C_0})$.
This basis diagonalizes the $\GG_m$-action
$\eta\co t \mapsto \lambda^{-1} t$ with weights $(2k-1), (2k-3), \ldots, 1$.
Thus, the character $\chi_{1}(C, \eta)$ is
$$
\chi_{1}=\sum_{i=1}^{k}(2i-1)=k^2.
$$
Similarly, we may write down a basis for $\HH^0(C, \omega_{C}^2)$ as
\begin{multline*}
\left(0, \frac{(dt)^2}{t^{4k}}\right),\left(0, \frac{(dt)^2}{t^{4k-2}}\right), \ldots, \left(0, \frac{(dt)^2}{t^{2k}}
\right), \left(0, \frac{(dt)^2}{t^{2k-1}}\right), \ldots, \left(w_{0}, \frac{(dt)^2}{t^{2}}\right),
\\ (w_{1}, 0), \ldots, \left(w_{3g-3k-2},0 \right),
\end{multline*}
where $w_{1}, \ldots, w_{3g-3k-2}$ is a basis for $\HH^0(C_0,\omega_{C_0}^2(p))$, and
$w_{0}$ is an appropriately chosen element of $\HH^0(C_0,\omega_{C_0}^2(2p)) \setminus \HH^0(C_0,\omega_{C_0}^2(p))$.
Thus, the character $\chi_{2}(C,\eta)$ is given by
$$\chi_{2}=\sum_{i=0}^{k-1}(2k+2i)+\sum_{i=0}^{2k-2}i =5k^2-4k+1.$$
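As a reality check, for $k=1$ (the ordinary cusp) these formulas give $\chi_1=1$ and $\chi_2=2$, so that $\chi_{\delta}=13\chi_1-\chi_2=11$ and the corresponding $\alpha$-value is $2-13\chi_1/\chi_{\delta}=9/11$, the threshold at which elliptic tails are removed (cf. Table \ref{table-predictions}).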
\end{example}
\begin{example}[$D_{2k+2}: x(y^2-x^{2k})=0$]
Let $C=C_0\cup E$, where $C_0$ is a genus $g-k-2$ curve and $E=E_1\cup E_2\cup E_3$ is the union
of three rational curves at the monomial $D_{2k+2}$ singularity.
The normalization map is given by:
\begin{align*}
\left(
\begin{matrix}
x \\ y
\end{matrix}
\right)&\rightarrow
\left(
\begin{matrix}
0 & \ \ t_2 & \ \ t_3\\
t_1& -t_2^{k} & \ \ t_3^{k}
\end{matrix}
\right).
\end{align*}
A local generator for $\omega_C$ is $\omega_0=\left(\dfrac{2dt_1}{t_1^2}, \dfrac{dt_2}{t_2^{k+1}}, -\dfrac{dt_3}{t_3^{k+1}}\right)$.
If $\omega_1,\dots,\omega_{g-k-2}$ is a basis of $\HH^0(C_0,\omega_{C_0})$ and
$v_1\in \HH^0(C_0,\omega_{C_0}(p_1+p_2))$, $v_2\in \HH^0(C_0,\omega_{C_0}(p_1+p_2+p_3))$
are appropriately chosen differentials, then the basis
\begin{align*}
(\omega_1,0), \dots, (\omega_{g-k-2},0), (0,\omega_0), (0, x\omega_0),
\dots, (0, x^{k-1}\omega_0), (v_1, x^k\omega_0), (v_2, y\omega_0),
\end{align*}
of $\HH^0(C,\omega_C)$ diagonalizes the action $(t_1,t_2,t_3) \mapsto (\lambda^{-k}t_1,\lambda^{-1} t_2, \lambda^{-1} t_3)$. Thus,
$$
\chi_{1}=1+2+\cdots+k=\frac{k(k+1)}{2}.
$$
A generator for $\omega_C^2$ is $\omega_0^2=\left(\dfrac{4(dt_1)^2}{t_1^4}, \dfrac{(dt_2)^2}{t_2^{2k+2}}, \dfrac{(dt_3)^2}{t_3^{2k+2}} \right)$, so we write out an array of $(3k+3)$ quadratic differentials ($2k+1$ in
the first column, $k+1$ in the second column, $1$ in the third), whose weights account for $\chi_2$:
{\large
\begin{align*}
\begin{matrix}
(\frac{4(dt_1)^2}{t_1^4}, \frac{(dt_2)^2}{t_2^{2k+2}}, \frac{(dt_3)^2}{t_3^{2k+2}})&(\frac{4(dt_1)^2}{t_1^3}, \frac{(dt_2)^2}{t_2^{k+2}}, \frac{-(dt_3)^2}{t_3^{k+2}})&(\frac{4(dt_1)^2}{t_1^2}, \frac{(dt_2)^2}{t_2^{2}}, \frac{(dt_3)^2}{t_3^{2}})\\
& & \\
(0, \frac{(dt_2)^2}{t_2^{2k+1}}, \frac{(dt_3)^2}{t_3^{2k+1}})&(0, \frac{(dt_2)^2}{t_2^{k+1}}, \frac{-(dt_3)^2}{t_3^{k+1}})&\\
\vdots&\vdots&\\
\end{matrix}
\end{align*}
}
By summing the weights, we find:
$$
\chi_2 = \sum_{i=0}^{2k}i+\sum_{i=0}^{k}i=(2k+1)(2k)/2+(k+1)k/2=\frac{5k^2+3k}{2}.
$$
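As a reality check, for $k=1$ (the $D_4$-singularity) we obtain $\chi_1=1$ and $\chi_2=4$, so that $\chi_{\delta}=13\chi_1-\chi_2=9$ and the corresponding $\alpha$-value is $2-13/9=5/9$, the threshold at which elliptic triboroughs appear in Table \ref{table-predictions}.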
\end{example}
\begin{example}[Elliptic $m$-fold points] Let $m\geq 3$.
An elliptic $m$-fold point $E$ is a Gorenstein union of $m$ general
lines through a point in $\PP^{m-1}$ \cite{smyth-compositio}. Every such singularity is isomorphic to
the cone over points
$p_1=(1,0,\dots, 0)$, $p_2=(0,1,\dots,0)$, $\dots$,
$p_{m-1}=(0, 0, \dots, 1)$, and $p_m=(1,\dots, 1)$, with the vertex at $0\in \AA^{m-1}$.
If $(x_1,\dots, x_{m-1})$ are coordinates centered at the vertex then the normalization map from $m$ copies
of $\PP^1$ to $E$
is given by
\begin{align*}
\left(
\begin{matrix}
x_1\\
\vdots\\
\vdots\\
x_{m-1}
\end{matrix}
\right)&\rightarrow
\left(
\begin{matrix}
t_1& 0& \hdots & 0 & t_{m}\\
0&t_2& \ddots & \vdots & t_{m} \\
\vdots& \ddots& \ddots& 0& \vdots \\
0 & \hdots &0 & t_{m-1} & t_{m}
\end{matrix}
\right).
\end{align*}
We let $C$ be the singular curve obtained by attaching $E$ to a smooth curve $C_0$
nodally at points $p_1,\dots, p_m$, and take $\eta$ to be the one-parameter subgroup acting by $t_i \mapsto \lambda^{-1} t_i$ on each branch.
A generator for $\omega_C$ in the neighborhood of the $m$-fold point is
$$
\omega_0
=\left(\frac{dt_1}{t_1^2}, \ldots, \frac{dt_{m-1}}{t_{m-1}^2}, -\frac{dt_m}{t_{m}^{2}}\right).
$$
In fact, $\omega_0$
spans the only weight space of $\HH^0(C,\omega_C)$ with a non-zero weight.
\begin{comment}
\footnote{The remaining eigenspaces $\HH^0(C,\omega_C)$ are spanned by
$(x_1\omega_0, \omega'_0), \dots, (x_{m-1}\omega_0, \omega'_{m-1}), (0, \omega''_{1}),
\dots, (0,\omega''_{g-m}),
$
where $\omega'_i\in \HH^0(C_0,\omega_{C_0}(p_i))$ satisfies $\textrm{Res}_{p_i}\omega'_i=1$,
and where $\omega''_{j}\in \HH(C_0,\omega_{C_0})$.}
\end{comment}
Thus, $\chi_{1}(C)=1$.
A generator for $\omega_{C}^2$ in the neighborhood of the $m$-fold point is
$$
\omega_0^2=\left(\frac{(dt_1)^2}{t_1^4}, \ldots, \frac{(dt_{m-1})^2}{t_{m-1}^4}, \frac{(dt_m)^2}{t_{m}^{4}}\right),
$$
and the only weight spaces of $\HH^0(C,\omega_C^2)$ with non-zero weights are
spanned by
$$(\omega_0^2, 0), (x_1\omega_0^2, 0), \ldots, (x_{m-1}\omega_0^2, 0).$$
It follows that $\chi_{2}(C)=2+(m-1)=m+1$.
Thus, the $\alpha$-invariant of the elliptic $m$-fold point is
$$\alpha=\frac{11-2m}{12-m}.$$
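Note that for $m=3$ the elliptic $3$-fold point is the planar $D_4$-singularity, and the formula specializes to $\alpha=5/9$, in agreement with the $D_{2k+2}$ computation above at $k=1$.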
\end{example}
\begin{example}[Monomial unibranch singularities] \label{E:monomial-unibranch}
Let $C$ be the projective closure of the curve
$\spec k[t^{n}: n \geq 0, n\notin \{b_1,\dots,b_k\}]$. Clearly, $p_a(C)=k$,
the normalization of $C$ is $\PP^1$, and $C$ has an isolated monomial singularity at $t=0$.
From now on we assume that $C$ is Gorenstein.
The condition for $C$ to be Gorenstein is that the gap sequence $\{b_1=1, \dots, b_k\}$
is symmetric:
$$
n\in \{b_1,\dots,b_k\} \Leftrightarrow 2k-1-n\notin \{b_1,\dots,b_k\}.
$$
In particular, $b_k= 2k-1$. Evidently, a generator for $\omega_C$ in a neighborhood of zero is given by
$dt/t^{b_k+1}$. Therefore, we can write down bases
\begin{align*}
\HH^0(C, \omega_C) &= \left\langle \frac{dt}{t^{b_1+1}}, \frac{dt}{t^{b_2+1}}, \ldots, \frac{dt}{t^{b_k+1}} \right\rangle \\
\HH^0(C, \omega_C^{2}) &= \left\langle \frac{(dt)^2}{t^{2b_k+2-j}} \quad : \quad j \in \{ 0, \ldots, 2b_k-2 \}\smallsetminus \{b_1,\dots,b_k\}\right\rangle
\end{align*}
and we compute
\begin{align}
\chi_1 & = \sum_{i=1}^k b_i \ , \label{unibranch-1} \\
\chi_2 & = \sum_{j=0}^{2b_k-2} (2b_k-j) - \sum_{i=1}^k (2b_k-b_i)
= (2k-1)^2 + \sum_{i=1}^k b_i - 1. \label{unibranch-2}
\end{align}
In the case when $C' = C \cup E$ is the nodal union of $C$ and a genus $g-k$ curve attached at infinity,
the corresponding characters of $C'$ are $\chi_1 = \sum_{i=1}^k b_i$ and $\chi_2 = (2k-1)^2 + \sum_{i=1}^k b_i$.
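For example, for the ramphoid cusp $y^2=x^5$ the semigroup is generated by $2$ and $5$, the gap sequence is $\{1,3\}$, and Equations \eqref{unibranch-1}--\eqref{unibranch-2} give $\chi_1=4$ and $\chi_2=12$; for the nodal union $C'$ we get $\chi_2=13$, in agreement with the $A_{2k}$ computation above at $k=2$.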
For the toric singularity $x^p = y^q$ with $p$ and $q$ coprime, the local ring of the singularity is
$k[t^{pi+qj}: i,j\in \ZZ_{\geq 0}]$. The gap sequence $\{b_1,\dots, b_k\}$
is the set of positive integers that cannot be expressed
as $pi+qj$ with $i,j\geq 0$. The study of this sequence, e.g., finding its cardinality and its largest element,
is classically known in elementary number theory as the {\em Frobenius problem} \cite{frobenius-problem}.
It is well-known that the largest gap is $b_k=pq-p-q$. It is also
easy to see that the gap sequence is symmetric: $n$ is a gap if and only if $pq-p-q-n$ is not a gap.
It follows that the genus of the singularity $x^p=y^q$ is $g=(p-1)(q-1)/2$. By \cite{frobenius-1} (see also \cite{frobenius-2}), the sum of gaps is
\begin{equation}\label{E:sum-of-gaps}
\sum_{i=1}^{g} b_i=(p-1)(q-1)(2pq-p-q-1)/12.
\end{equation}
It follows from Equations \eqref{unibranch-1}-\eqref{unibranch-2} that
\begin{align*}
\chi_1&=\frac{1}{12}(p-1)(q-1)(2pq-p-q-1), \\
\chi_2 &= (pq-p-q)^2+\frac{1}{12}(p-1)(q-1)(2pq-p-q-1)-1.
\end{align*}
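For instance, for the $E_6$-singularity $y^3=x^4$ (so that $p=3$, $q=4$, and $g=3$) these formulas give $\chi_1=8$ and $\chi_2=32$, matching the values listed for $E_6^{\{\}}$ in Table \ref{T:table-dangling}.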
Remarkably, intersection theory calculations of Proposition \ref{P:toric-family} give an independent
algebro-geometric proof of the highly nontrivial combinatorial Formula \eqref{E:sum-of-gaps};
see Section \ref{S:intersection-theory} below.
\end{example}
\begin{example}[Non-reduced curves: A case study of ribbons]
\label{E:ribbons}
The character theory is particularly suited to the study of
non-reduced Gorenstein
schemes. Here, we treat the case of {\em ribbons}. Ribbons occur as certain flat limits (in the Hilbert scheme) of canonically embedded smooth curves degenerating to hyperelliptic curves \cite{fong}. Our exposition is self-contained but we refer the reader to \cite{BE} for a more systematic study of ribbons.
A ribbon
is a scheme obtained by gluing together two copies of
$\AA^1[\varepsilon]:=\spec k[x,\varepsilon]/(\varepsilon^2)$. Precisely, let $U_1=\spec k[x,\varepsilon]/(\varepsilon^2)$
and $U_2=\spec k[y,\eta]/(\eta^2)$, and let $(U_1)_x$ and $(U_2)_y$ be the corresponding open affine
subschemes.
Then by \cite[p. 733]{BE} a ribbon of genus $g$
is given by a gluing isomorphism
$\varphi\co (U_1)_x \ra (U_2)_y$ defined by
\begin{align*}
x &\mapsto y^{-1}-y^{-2}f(y)\eta, \\
\varepsilon &\mapsto y^{-g-1}\eta,
\end{align*}
where $f(y)=f_1y^{-1}+\cdots+f_{g-2}y^{-(g-2)}\in \dfrac{k[y,y^{-1}]}{k[y]+y^{-g+1}k[y^{-1}]}$.
We focus here on non-split ribbons that admit a $\GG_m$-action. There are $g-2$ such ribbons,
each given by $f(y)=y^{-\ell}$, for $\ell\in\{1,\dots, g-2\}$. Denote the ribbon
corresponding to $f(y)=y^{-\ell}$ by $C_{\ell}$. Then
the $\GG_m$-action on $C_\ell$ is given by $t\cdot (x,y,\varepsilon, \eta)=(tx, t^{-1}y, t^{g-\ell}\varepsilon, t^{-\ell-1}\eta)$.
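One checks that this action is compatible with the gluing: for instance, $\varepsilon\mapsto y^{-g-1}\eta$ is homogeneous of weight $g-\ell$ on both sides, and $x\mapsto y^{-1}-y^{-\ell-2}\eta$ is homogeneous of weight $1$.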
By adjunction, the sections of $\omega_{C_\ell}$ over $U_1$
are identified with restrictions to $U_1$ of $2$-forms
$f(x,\varepsilon)\frac{dx\wedge d\varepsilon}{\varepsilon^2}$ on $\spec k[x,\varepsilon]$, and the
sections of $\omega_{C_\ell}$ over $U_2$ are identified with restrictions to $U_2$ of $2$-forms
$f(y,\eta)\frac{dy\wedge d\eta}{\eta^2}$ on $\spec k[y,\eta]$. With this in mind, we can write
down $g$ linearly independent global sections of $\omega_{C_\ell}$:
\begin{align*}
\intertext{For $k=0,\dots, g-\ell-2$, take}
\omega_k &=x^{k}\frac{dx\wedge d\varepsilon}{\varepsilon^2}=-(y^{g-1-k}+(g-\ell-k-1)y^{g-\ell-k-2}\eta)\frac{dy\wedge d\eta}{\eta^2}, \\
\intertext{for $k=g-\ell-1, \dots, g-1$, take}
\omega_k &=(x^{k}+(\ell+k+1-g)x^{\ell+k-1}\varepsilon)\frac{dx\wedge d\varepsilon}{\varepsilon^2}=-y^{g-1-k}\frac{dy\wedge d\eta}{\eta^2}.
\end{align*}
It follows that $\{\omega_i\}_{i=0}^{g-1}$ form the basis of $\HH^0(C_\ell,\omega_{C_\ell})$.
Note that we recover the second part of \cite[Theorem 5.1]{BE}, namely
the identification of the sections of $\HH^0(C_\ell,\omega_{C_\ell})$
with functions
\[
1, y, y^2, \dots, y^{\ell}, y^{\ell+1}+\eta, y^{\ell+2}+2y\eta, \dots, y^{g-1}+(g-\ell-1)y^{g-\ell-2}\eta,
\]
under a trivialization of $\omega_{C_\ell}$ on $U_2$.
We now proceed with character computations. Under the $\GG_m$-action above, we have that
\[
t\cdot \omega_k=t^{k-g+\ell+1}\omega_k.
\]
Summing up the weights of the $\GG_m$-action on the basis $\{\omega_i\}_{i=0}^{g-1}$, we
obtain the character of $\lambda$:
\[
\chi_{1}(C_\ell)=\sum_{k=0}^{g-1}(k-g+\ell+1)=g(\ell+1-g)+g(g-1)/2=g\left(\ell-\frac{g-1}{2}\right).
\]
It remains to compute the weights of the $\GG_m$-action on a basis of $\HH^0(C_{\ell},\omega_{C_{\ell}}^2)$ and
the corresponding character $\chi_2(C_{\ell})$.
Since $h^0(C_{\ell},\omega_{C_{\ell}}^2)=3g-3$, it suffices to exhibit $3g-3$ linearly independent
sections. One such choice is given by
\begin{align*} 1, y, y^2,\dots, y^{2\ell},
&y^{2\ell+1}+y^\ell\eta, \dots, y^{\ell+g-1}+(g-\ell-1)y^{g-2}\eta, \\
& y^{\ell+g}+(g-\ell)y^{g-1}\eta, \dots, y^{2g-2}+(2g-2\ell-2)y^{2g-\ell-3}\eta, \ \eta, y\eta, \dots, y^{g-3}\eta.
\end{align*}
In particular, taking into account that the weight of $\left(\frac{dy\wedge d\eta}{\eta^2}\right)^{2}$ is $2\ell$,
we see that the weights of the $\GG_m$-action on $\HH^0(C_{\ell},\omega_{C_{\ell}}^2)$ are
\begin{align*}
2\ell,2\ell-1, \dots, 2\ell-(2g-2),
& \ \ell-1, \dots, \ell-g+2.
\end{align*}
Summing up these weights, we obtain the character of $\lambda_2$:
\begin{align*}
\chi_2(C_\ell) &=2(2g-1)\ell-(g-1)(2g-1)+(g-2)\ell-(g-2)(g-1)/2 \\
&=(5g-4)\left(\ell-\frac{g-1}{2}\right)=\frac{5g-4}{g}\,\chi_1(C_\ell).
\end{align*}
By Equation \eqref{relations}, the character of $\delta$ is $\chi_{\delta}(C_{\ell})=(8g+4)(\ell-\frac{g-1}{2})$. In particular, if $\ell\neq (g-1)/2$, then all three characters $\chi_1(C_\ell), \chi_2(C_\ell), \chi_\delta(C_\ell)$ are non-zero, and we have
\[
\frac{\chi_{\delta}(C_{\ell})}{\chi_{\lambda}(C_{\ell})}=\frac{8g+4}{g}.
\]
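Note that $(8g+4)/g$ is the slope of one-parameter families of hyperelliptic curves (compare the family constructed in Example \ref{E:hyperelliptic-bridges} below, whose slope $\delta_0/\lambda$ is exactly $(8k+4)/k$ in genus $k$), as one expects from the description of ribbons as flat limits of canonical curves degenerating to hyperelliptic curves.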
\end{example}
\begin{remark}\label{R:anand} Generalizing the computations of Example \ref{E:ribbons} above,
Anand Deopurkar recently computed characters of Gorenstein
$n$-ribbons\footnote{An $n$-ribbon is a non-reduced scheme supported on $\PP^1$ and
locally isomorphic to $U\times \spec k[\varepsilon]/(\varepsilon^n)$, where $U\subset \PP^1$ is affine.}
with
$\GG_m$-action and verified that always
$$
\chi_{\delta}=\frac{12(2g+n-1)n}{n^2+(4g-3)n+2-2g}\chi_{\lambda}.
$$
This recovers the ratio $\frac{8g+4}{g}$ for $2$-ribbons, gives the ratio
$\frac{36(g+1)}{5g+1}$ for $3$-ribbons (see Corollary \ref{C:stankova} and the subsequent discussion
for the significance of this slope),
and more generally gives the same ratio $\chi_{\delta}/\chi_{\lambda}$ as that of
the toric singularity $y^{n}=x^{2g/(n-1)+1}$ computed in Corollary \ref{C:toric-family}
(note that the arithmetic genus of an $n$-ribbon
always satisfies $n-1\mid 2g$).
\end{remark}
\subsection{Computing the characters of $\delta_i$} \label{subsection-delta}
In this section, we illustrate how the characters of line bundles $\delta_i$ can be computed.
If $C$ is a curve with a $\GG_m$-action such that the discriminant locus inside
$\operatorname{Def}(C)$ is Cartier, then line bundles $\delta_i$ can be
defined in a neighborhood of $[C]$ in $\cU_g$. The following proposition shows that the character of
$\delta_i$ is precisely minus the weighted degree of the discriminant.
\begin{prop} \label{proposition-character-delta}
Let $C$ be a complete curve with miniversal deformation space $\operatorname{Spf} A$ and a $\GG_m$-action
$\eta\co \GG_m \to \operatorname{Aut}(C)$.
Let $D$ be a Cartier divisor defined on a neighborhood $\cV$ of $[C]$ in the stack $\cU_g$ of all complete
genus $g$ curves.
Suppose that there is a cartesian diagram
$$\xymatrix{
V(f) \ar[r] \ar[d] & \operatorname{Spf} A \ar[d] \\
D \ar[r] & \cV
}$$
such that $f \mapsto \lambda^d f$ under the induced action of
$\GG_m = \operatorname{Spec} k[\lambda, \lambda^{-1}]$ on $\operatorname{Spf} A$. Then $$\chi_{\cL(D)}(C, \eta) = -d.$$
\end{prop}
\begin{proof}
Denote by $\sigma\co A \to k[\lambda, \lambda^{-1}] \hat{\tensor} A$ the dual action of
$\GG_m$ on $\operatorname{Spf} A$. The exact sequence $0 \to \cL(-D) \to
\cO_{\cU_g} \to \cO_D \to 0$ restricted to $\operatorname{Spf} A$ corresponds to the
exact sequence
$$ 0 \to A_{\eta} \xrightarrow{f} A \to A/f \to 0$$
where $A_{\eta}$ is the $\GG_m$-$A$-module corresponding to the
character $\GG_m \xrightarrow{d} \GG_m$; that is, $A_{\eta}$ is $A$ as
an $A$-module with coaction $a \mapsto \lambda ^d \sigma(a)$. Therefore
$\cL(-D)|_{B \GG_m}$ corresponds to the character $\GG_m
\xrightarrow{d} \GG_m$ and $\chi_{\cL(-D)}(C, \eta) = d$.
\end{proof}
We include the computation of the character of $\delta_i$ only for certain curves with $A_{2k}$ and $D_{2k+2}$
singularities but the same approach can be applied to a large class of curves; sampling of our computations
is given in column four of Table \ref{T:table-characters}.
The
characters of $\delta_i$ will depend on the global geometry of the curve. For instance, if an $A_{2k+1}$-singularity lies
on a connected
component intersecting the rest of the curve at two nodes, the character of $\delta_0$ will depend on whether the
component is separating or not. Furthermore, if an $A_{2k+1}$-singularity lies on a rational curve attached to the rest
of the curve at one point, which we refer to as a ``dangling'' singularity (see Section \ref{section-dangling}), the value of
$\delta_0$ will be different from the non-dangling case.
\begin{example}[$A_{2k+1}$-singularity: non-separating case]
Let $C = C_0 \cup E$, where $C_0$ is a smooth curve of genus $g-k$, $E = E_1 \cup E_2$ is the union of two rational curves at the monomial\footnote{This simply means that $E_1\cup E_2$ is the projective closure of the affine curve
$y^2=x^{2k+2}$.} $A_{2k+1}$ singularity at $p$, and $E_i$ intersects $C_0$ at infinity in the node $q_i$.
The versal deformation space of $C$ can be written as
$$\operatorname{Def}(C) = \operatorname{Def}(C_0, q_1, q_2) \times \operatorname{Cr}(\hat{\cO}_{C,p}) \times \operatorname{Def}(\hat{\cO}_{C,p}) \times
\operatorname{Def}(\hat{\cO}_{C,q_1}) \times \operatorname{Def}(\hat{\cO}_{C,q_2}),$$
where $\operatorname{Cr}(\hat{\cO}_{C,p})$ denotes the ``crimping'' deformations (see \cite{asw_crimping} for more details); the
$\GG_m$-action on $\operatorname{Cr}(\hat{\cO}_{C,p})$ can be explicitly determined but doesn't affect this calculation.
We choose coordinates $a_{0},\dots, a_{2k}$ on $\operatorname{Def}(\hat{\cO}_{C,p})$ so that the miniversal deformation of $\hat{\cO}_{C,p}$ is
$$y^2 =x^{2k+2} + a_{2k}x^{2k} + \cdots + a_1 x +a_0,$$
and $n_i$ on $\operatorname{Def}(\hat{\cO}_{C,q_i})$ so that the miniversal deformation of $\hat{\cO}_{C,q_i}$
is $zw+ n_i=0$, where $z=1/x$ in the neighborhood of $\infty$ on $E_i$.
We have a one-parameter subgroup $\eta\co \GG_m \to \operatorname{Aut}(C)$ such that $\GG_m$ acts by
$\lambda\cdot (x,y)=(\lambda^{-1} x, \lambda ^{-k-1}y)$,
$a_i \mapsto \lambda^{i-2k-2} a_i$, and $n_i \mapsto \lambda n_i$. The
discriminant $\Delta\subset \operatorname{Def}(\hat{\cO}_{C,p})$ is given by the vanishing locus of the discriminant of the polynomial
$x^{2k+2} + a_{2k}x^{2k}+\cdots+ a_0$. Thus it has weighted degree
$-(2k+1)(2k+2)$. The discriminant inside $\operatorname{Def}(\hat{\cO}_{C,q_i})$ is $\{n_i=0\}$ and has weighted degree
$1$.
Since $\delta_0 = V(\Delta) \cup V(n_1) \cup V(n_2)$, we conclude:
$$\chi_{\delta_0} = (2k+1)(2k+2)-2, \quad \text{and} \quad \chi_{\delta_i}
= 0 \text { for } i > 0.$$
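For $k=1$ (the tacnode) this gives $\chi_{\delta_0}=10$; since $\chi_{\lambda}=(k^2+k)/2=1$ in this case (see Table \ref{T:table-characters}), the corresponding $\alpha$-value is $2-13/10=7/10$, the threshold for elliptic bridges in Table \ref{table-predictions}.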
\end{example}
\begin{example}[$A_{2k+1}$-singularity: separating case]
Let $C= C_0 \cup E$ be a curve as in the previous example with the exception that
$C_0$ is now a {\em disconnected} curve with two connected components $C_1$ and $C_2$ of genera
$h_1$ and $h_2$, respectively. (Clearly, $g=h_1+h_2+k$.)
Using the calculation of the previous example, we conclude from $\delta_0 = V(\Delta)$
and $\delta_{h_i} = V(n_i)$ that
$$\chi_{\delta_0} = (2k+1)(2k+2), \quad
\chi_{\delta_{h_1}} = \chi_{\delta_{h_2}} = -1, \quad
\chi_{\delta_{i}} = 0 \text { for } i \neq 0, h_1, h_2.$$
\end{example}
\begin{example}[$A_{2k+1}^{\{1\}}$-singularity: dangling case]
Let $C= C_0 \cup E$, where $C_0$ is a smooth curve of genus $g-k$, $E = E_1 \cup E_2$ is the union of two rational curves at the monomial $A_{2k+1}$ singularity, and $E_1$ intersects $C_0$ at a node. Then
$$\chi_{\delta_0} = (2k+1)(2k+2), \quad
\chi_{\delta_{k}} = -1, \quad
\chi_{\delta_{i}} = 0 \text { for } i \neq 0, k.
$$
\end{example}
\begin{example}[$A_{2k+1}^{\{\}}$-singularity: isolated case]
Let $C= E_1 \cup E_2$ be the union of two rational curves at the monomial $A_{2k+1}$ singularity. Then
$$\chi_{\delta_0} = (2k+1)(2k+2), \quad
\chi_{\delta_{i}} = 0 \text { for } i > 0.
$$
\end{example}
\begin{example} [$D_{2k+2}$-singularity: non-separating case]
Let $C = C_0 \cup E$, where $C_0$ is a genus $g-k$ curve, $E = E_1 \cup E_2 \cup E_3$ is the union of three rational curves at the monomial $D_{2k+2}$ singularity at $p \in E$, with $E_2$ and $E_3$ tangent, and each $E_i$ intersects $C_0$ at a node $q_i$. We write
$$ \operatorname{Def}(C) = \operatorname{Def}(C_0, q_1, q_2, q_3) \times \operatorname{Cr}(\hat{\cO}_{C,p}) \times \operatorname{Def}(\hat{\cO}_{C,p}) \times \prod_{i=1}^3\operatorname{Def}(\hat{\cO}_{C,q_i}).$$
We can choose coordinates so that
$$\begin{aligned}
\operatorname{Def}(\hat{\cO}_{C,p}) &= \{xy^2=x^{2k+1} + a_{2k-1}x^{2k-1} + \cdots + a_1 x + a_0 + b y \}, \\
\operatorname{Def}(\hat{\cO}_{C,q_i}) &= \{z_iw_i+ n_i=0\}, \\
\end{aligned}$$
where $z_1=1/y$ near $\infty$ on $E_1$, and $z_2=z_3=1/x$ near $\infty$ on $E_2$ and $E_3$, respectively.
We have a one-parameter subgroup $\eta\co \GG_m \to \operatorname{Aut}(C)$ such that $\GG_m$ acts via
$\lambda\cdot(x,y)=(\lambda ^{-1}x, \lambda ^{-k}y)$, and
$$a_i \mapsto \lambda ^{i-2k-1}a_i, \qquad b \mapsto \lambda ^{-k-1} b, \qquad n_1 \mapsto \lambda ^{k} n_1, \qquad n_2 \mapsto \lambda n_2, \qquad n_3 \mapsto \lambda n_3.$$
The discriminant $\Delta\subset \operatorname{Def}(\hat{\cO}_{C,p})$ has weighted degree $-(2k+1)(2k+2)$, so we conclude that
$$\chi_{\delta_0} = (2k+1)(2k+2)-(k+2), \quad \chi_{\delta_i}= 0 \text { for } i > 0.$$
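For $k=1$ this gives $\chi_{\delta_0}=12-3=9$, and together with $\chi_{\lambda}=1$ one recovers the $\alpha$-value $2-13/9=5/9$ of the $D_4$-singularity computed above.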
\end{example}
\begin{example}[$D_{2k+2}^{\{1,2\}}$-singularity]
Let $C = C_0 \cup E$, where $C_0$ is a genus $g-k$ curve, $E = E_1 \cup E_2 \cup E_3$ is the union of three rational curves at the monomial $D_{2k+2}$ singularity, with $E_2$ and $E_3$ tangent, and $E_1$ and $E_2$ meet
$C_0$ in nodes. Using the calculation above, we conclude that
$\chi_{\delta_0} = (2k+1)(2k+2)-(k+1)$ and $\chi_{\delta_i}
= 0$ for $i \neq 0$.
\end{example}
\begin{example}[$D_{2k+2}^{\{1\}}$-singularity]
Let $C$ be a curve as in the previous example except that only the branch $E_1$ intersects $C_0$. Then
$\chi_{\delta_0} = (2k+1)(2k+2)-k$ and $\chi_{\delta_i}
= 0$ for $i \neq 0$.
\end{example}
\subsection{Computing the characters of $K$}
\begin{lem} Let $C$ be a curve with a smooth deformation space and $\operatorname{Aut}(C)^\circ$ abelian. Let $\eta \co \GG_m \to \operatorname{Aut}(C)$ be a one-parameter subgroup. The character
$\chi_{K}(C, \eta)$ is the character of the determinant of $T^1(C)$ given by the natural $\GG_m$-action.
\end{lem}
\begin{proof} With the notation from Section \ref{section-discussion-K},
choose a presentation $\cM = [\operatorname{Hilb} / \operatorname{PGL}_{N+1}]$. Fix a closed immersion $C
\hookarr \PP^N$; this determines an element $x=[C \hookarr \PP^N]$ of $\operatorname{Hilb}$. By considering the dual of the
cotangent complex $L_{\cM}$, we arrive at an exact sequence
$$0 \to H^0(L_{\cM}^{\vee}) \to \mathfrak{g} \tensor \cO_{\operatorname{Hilb}} \to
T_{\operatorname{Hilb}} \to H^1(L_{\cM}^{\vee}) \to 0$$
of sheaves on $\operatorname{Hilb}$. By restricting this sequence to $x$, we obtain an exact sequence
$$ 0 \to \mathfrak{g}_x \to \mathfrak{g} \to H^0(C, N_{C/\PP^N}) \to
T^1(C) \to 0$$
of $G_x$-representations. The morphism $\mathfrak{g} \to H^0(C, N_{C/\PP^N})$ is obtained by
differentiating the map $G \to \operatorname{Hilb}, g \mapsto g \cdot x$. Since the adjoint action on $\mathfrak{g}_x$ is trivial, we obtain the result.
\end{proof}
We will use the above lemma to compute the character of $K$ in one particular example. Other examples can be computed similarly.
\begin{example}[$A_{2k}$-singularity]
Let $C=C_{0} \cup E$, where $E$ is a rational curve with a higher cusp $y^2=x^{2k+1}$ at $p=0$, and a nodal attachment to $C_{0}$ at $q=\infty$. The first order deformation space can be written as
$$T^1(C) = T^1(C_0, q) \times \operatorname{Cr}(\hat{\cO}_{C,p}) \times T^1(\hat{\cO}_{C,p}) \times
T^1(\hat{\cO}_{C,q}),$$
where $\operatorname{Cr}(\hat{\cO}_{C,p})$ denotes the ``crimping'' deformations (see \cite{asw_crimping} for more details). We can choose coordinates
$$\begin{aligned}
T^1(\hat{\cO}_{C,p}) &= \{ y^2 - x^{2k+1} + a_{2k-1}x^{2k-1} + \cdots + a_1 x +
a_0 = 0 \}, \\
T^1(\hat{\cO}_{C,q}) &= \{zw+ n=0\},
\end{aligned}$$
and a one-parameter subgroup $\eta\co \GG_m \to \operatorname{Aut}(C)$ acting
via $\lambda\cdot (x,y)= (\lambda ^{-2}x, \lambda ^{-(2k+1)}y)$.
Then $a_i \mapsto \lambda ^{2i-4k-2} a_i$ and $n \mapsto \lambda n$.
Therefore, the character of $T^1(\hat{\cO}_{C,p})$ is
$$-(4+6+\cdots + (4k+2)) = -(4k^2+6k).$$ The character of $T^1(\hat{\cO}_{C,q})$ is $1$. The character of $ T^1(C_0, q)$ is trivial. For $k \ge 2$, by \cite[Proposition 3.4]{asw_crimping}, the weights of the action on $\operatorname{Cr}(\hat{\cO}_{C,p})$ are $1,3, \ldots, 2k-3$. Therefore, the character of $\operatorname{Cr}(\hat{\cO}_{C,p})$ is $(k-1)^2$. It follows that $\chi_K = -3k^2-8k+2$. As a reality check, one sees that indeed $\chi_K = 13 \chi_{\lambda} - 2 \chi_{\delta}$ (consult Table
\ref{T:table-characters} for values of $\chi_{\lambda}$ and $\chi_{\delta}$).
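For instance, for $k=1$ (the ordinary cusp) we obtain $\chi_K=-9$, which indeed equals $13\cdot 1-2\cdot 11=13\chi_{\lambda}-2\chi_{\delta}$.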
\end{example}
\section{Character theory vs intersection theory}
Let $C$ be a complete curve of arithmetic genus $g$
with a $\GG_{m}$-action $\eta\co \GG_{m}\ra \operatorname{Aut}(C)$.
A \emph{versal deformation space with $\GG_m$-action} is a pointed affine scheme $0\in \operatorname{Def}(C)$ with a
$\GG_m$-action together with a smooth morphism $[\operatorname{Def}(C) / \GG_m] \to \cV_g$ to the
{\em stack of all complete curves of arithmetic genus $g$} sending $0$ to $[C]$.
If $\cL$ is a line bundle defined on $\cV_g$ in a neighborhood of $C$ and $\operatorname{Def}(C)$ is a versal deformation space with $\GG_m$-action, then after shrinking $\operatorname{Def}(C)$, we may assume that $\cL$ is defined on $[\operatorname{Def}(C)/\GG_m]$.
\begin{remark} If $\omega_C$ is ample, the versal deformation space of $C$ is normal, and $\operatorname{Aut}(C)$ is linearly
reductive, then it follows from \cite[Theorem 3]{alper_quotient} that there exists a versal deformation space with
$\GG_m$-action (see also \cite[Proposition 2.3]{pinkham} for the formal case).
\end{remark}
We specialize to the case when $C$
has an isolated singularity $p\in C$
and $\hat{\cO}_{C,p}$ is positively
graded by the $\GG_m$-action.
Then by Pinkham's theory of deformations of varieties with $\GG_m$-action \cite[Proposition 2.2]{pinkham},
the space of infinitesimal deformations of $C$ has a decomposition
into the weight spaces:
$$
T^1_C=\bigoplus_{\nu=-\infty}^{\infty} T^1_C(\nu).
$$
Following Pinkham \cite[Section (3.1)]{pinkham}, we define $\operatorname{Def}^{\, -}(C)$ to be the closed subscheme of
$\operatorname{Def}(C)$ corresponding to {\em negative deformations}: The tangent space to $\operatorname{Def}^{\, -}(C)$ is
$\bigoplus_{\nu<0} T^1_C(\nu)$ and the coordinate ring of $\operatorname{Def}^{-}(C)$ is positively graded.
The relationship between intersection theory on $[\operatorname{Def}^{\, -}(C) /\GG_m]$
and characters is given by the following observation.
\begin{thm} \label{theorem-character-intersection}
Let $C$ be a complete curve of arithmetic genus $g$
and $0 \in \operatorname{Def}(C)$ be its versal deformation space with $\GG_m$-action. Let $B$ be any complete curve with
a non-constant map $B \to [\operatorname{Def}^{-} (C)/\GG_m]$ and let $\cL$ be a line bundle on $\cV_g$. Then
$$\chi_{\cL}(C, \eta) = - \frac{ \cL\cdot B } { \deg(B)},$$
where $\deg(B)$ is the degree of $B$ with respect to the natural $\cO(1)$ on $[\operatorname{Def}^{\, -}(C) / \GG_m]$. In particular, if $C$ is Gorenstein (resp. the discriminant locus in $\operatorname{Def}(C)$ is Cartier), then
$$\chi_{\lambda_i}(C, \eta) = - \frac{ \lambda_i \cdot B} { \deg(B)} \qquad
\left(\text{resp. } \chi_{\delta_i}(C, \eta) = - \frac{\delta_i \cdot B} { \deg(B)} \right).$$
\end{thm}
\begin{proof}
We can write $\operatorname{Def}^{\, -}(C) = \operatorname{Spec} A$ with $A$ a positively graded $k$-algebra. The line bundle $\cL$
corresponds to a graded projective
$A$-module which is free of rank $1$ by \cite[Theorem 19.2]{eisenbud}. It follows
that $\cL = \widetilde{A(d)} = \cO(d)$ for some $d$.
Therefore $\chi_{\cL}(C, \eta) = -d$
and $\cL \cdot B = \deg_B(\cL) = d\deg(B)$.
\end{proof}
Theorem \ref{theorem-character-intersection} allows us to compute characters via intersection theory on
one-parameter families of {\em stable} curves so long as the locus of stable curves
inside $[\operatorname{Def}^{-}(C) / \GG_m]$ contains complete one-parameter families. This is not an uncommon
occurrence since, as Pinkham shows (in the case of unibranch singularities),
$[\operatorname{Def}^{\, -}(C) / \GG_m]$ contains an open set parameterizing {\em smooth} curves of genus $g$
\cite[Theorem 1.15]{pinkham}.
For special classes of planar singularities an even stronger statement holds. For example, if $\hat{\cO}_{C,p}$
is an ADE singularity, then $\operatorname{Def}^{\, -}(C)=\operatorname{Def}(C)$ and the locus of worse-than-nodal curves in
$[\operatorname{Def}^{\, -}(C) /\GG_m]$ is of codimension two. It follows that the
characters of ADE singularities can all be computed by writing down a complete one-parameter family
of stable curves
in $[\operatorname{Def}^{\, -}(C)/\GG_m]$ and computing degrees of line bundles $\lambda$ and $\delta$
using standard intersection theory. We do so in a number of cases in Section \ref{S:intersection-theory}.
\begin{comment}
\begin{align*}
\operatorname{Def}^{\, -}(C) &:=\{ t\in \operatorname{Def}(C) : \lim_{\lambda\to \infty}\, \lambda\cdot t = 0\},\\
\operatorname{Def}^{\, 0}(C) &:=\{ t\in \operatorname{Def}(C) : \forall \lambda\in \GG_m,\ \lambda\cdot x = x\},\\
\operatorname{Def}^{\, +}(C) &:=\{ t\in \operatorname{Def}(C) : \lim_{t\to 0}\, t\cdot x = 0\}.
\end{align*}
\end{comment}
In the other direction, Theorem \ref{theorem-character-intersection} suggests a possibility of computing
slopes of special loci inside $\Mg{g}$:
\begin{example}[Toric singularities] Consider the planar toric singularity $C: x^p-y^q=0$. Its miniversal
deformation is
\begin{equation*}
x^p=y^q+\sum a_{ij} x^iy^j, \quad 0\leq i\leq p-2, \ 0\leq j\leq q-2.
\end{equation*}
We have that $\operatorname{Def}^{-}(C)=\spec k[a_{ij}: qi+pj<pq]$. The resulting weighted projective stack
$[(\operatorname{Def}^{-}(C)\setminus 0)/ \GG_m]$
is a moduli space of curves on $\PP(q,p,1)$ defined by the weighted homogeneous equation
\begin{equation}\label{E:toric-homogeneous}
x^p=y^q+\sum a_{ij} x^iy^jz^{pq-qi-pj}, \quad 0\leq i\leq p-2, \ 0 \leq j\leq q-2, \ qi+pj<pq.
\end{equation}
Theorem \ref{theorem-character-intersection} implies that for any complete family of stable curves
$B\ra [(\operatorname{Def}^{-}(C)\setminus 0)/ \GG_m]$, the slope of $B$ is
$(\delta\cdot B)/(\lambda\cdot B)=\chi_{\delta}(C)/\chi_{\lambda}(C)$.
By considering the monomial unibranch singularities $y^3=x^{3k+1}$ and $y^3=x^{3k+2}$, we recover
the following result of Stankova \cite{stankova} (see also Remark \ref{R:anand}).
\begin{cor}\label{C:stankova} For $g\equiv 0,1 \mod 3$, there is a complete family $B$ of generically smooth
genus $g$ stable trigonal curves such that $(\delta\cdot B)/(\lambda\cdot B)=36(g+1)/(5g+1)$.
\end{cor}
\begin{proof}
Let $C$ be the monomial unibranch singularity $y^3=x^{3k+1}$.
From Equation \eqref{E:toric-homogeneous}
the restriction of its miniversal deformation to $\operatorname{Def}^{\, -}(C)$ is
\begin{align}\label{E:trigonal}
y^3=x^{3k+1}+y(a_{2k}x^{2k}+\cdots+a_0)+(b_{3k-1}x^{3k-1}+\cdots+b_0).
\end{align}
It follows that
$[(\operatorname{Def}^{\, -}(C) \setminus 0 ) / \GG_m]\simeq \mathcal{P}(2,5,\dots,6k+2,6,9,\dots,9k+3)$ is a
moduli space of trigonal curves of genus $g=3k$ defined by Equation \eqref{E:trigonal} on
$\PP(3,3k+1,1)$.
Evidently, there is a complete family
$B\ra [(\operatorname{Def}^{\, -}(C) \setminus 0 ) / \GG_m]$ of at-worst nodal irreducible curves.
Applying Theorem \ref{theorem-character-intersection} to this family and using
computations of Example \ref{E:monomial-unibranch},
we find that
$\lambda\cdot B=\chi_{1}(C)=2g(5g+1)/12$ and $\delta\cdot B=13\chi_{1}(C)-\chi_2(C)=6g(g+1)$.
This gives slope $36(g+1)/(5g+1)$.
Considering $y^3=x^{3k+2}$, we obtain in an analogous fashion a complete family of trigonal
curves of genus $g=3k+1$ with slope $36(g+1)/(5g+1)$.
\end{proof}
We note that, in contrast to the simple construction above, the extremal family achieving
slope $36(g+1)/(5g+1)$ in \cite{stankova} is obtained by a laborious construction. However, our methods do not establish the stronger result,
also due to Stankova,
that $36(g+1)/(5g+1)$ is the {\em maximal possible} slope among trigonal families
of genus $g$.
\end{example}
\section{Intersection theory computations}
\label{S:intersection-theory}
In this section, we use intersection theory to find slopes of one-parameter families of curves arising from
stable reduction of certain planar singularities. Namely, we treat
the cases of $A_{2k+1}$, $A_{2k+1}^{\{1\}}$, $D_{2k+2}$, $D_{2k+2}^{\{1,2\}}$,
and toric singularities $x^p=y^q$.
Our computations agree with the results obtained in Section \ref{S:character-computations}.
That this should be the case follows from Theorem \ref{theorem-character-intersection} after
verification that every family we write down comes from $[\operatorname{Def}^{-}(C)/\GG_m]$ of an appropriate
singular curve $C$.
The same techniques can be applied to other singularities. However, as singularities become more complicated,
the task of writing down a complete family of stable limits becomes substantially more challenging.
\subsection{Hyperelliptic tails, bridges and triboroughs}
\label{hyperelliptic}
\begin{example}[Hyperelliptic bridges]
\label{E:hyperelliptic-bridges}
We construct a complete one-parameter family $B_k$ of $2$\nobreak-pointed stable hyperelliptic curves of genus $k$,
with marked points conjugate under the hyperelliptic involution, that arises from
stable reduction of $A_{2k+1}$ singularity. It follows from our construction and \cite[Theorem 6.5]{hassett-stable}
that $B_k$ comes from $[\operatorname{Def}^{\, -}(C)/\GG_m]$ where $C$ is the projective closure of $y^2=x^{2k+2}$.
We show that $B_k$ intersects divisors on $\Mg{k,2}$ as follows:
\begin{equation*}
\begin{aligned}
\lambda\cdot B_k &=(k^2+k)/2, \\
\delta_{0}\cdot B_k &=(2k+1)(2k+2), \\
\psi_1\cdot B_k &=\psi_2\cdot B_k=1, \\
\delta_1\cdot B_k &=\cdots =\delta_{\lfloor k/2\rfloor} \cdot B_k=0.
\end{aligned}
\end{equation*}
If $B_k \ra \Mg{g}$ is the family of hyperelliptic genus $k$ bridges
obtained by attaching a constant genus $(g-k-1)$ curve to the marked sections, then
$(K_{\Mg{g}}+\alpha\delta)\cdot B_k\leq 0$ exactly for $\alpha\leq (3k+11)/(8k+12)$. This of course
agrees with the character theory
computation of the $\alpha$-value of $A_{2k+1}$-singularity (see Table \ref{T:table-characters}) due to Theorem
\ref{theorem-character-intersection}.
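In more detail: each attaching node contributes $-\psi_i\cdot B_k$, so that $\delta\cdot B_k=\delta_0\cdot B_k-\psi_1\cdot B_k-\psi_2\cdot B_k=4k^2+6k$ on $\Mg{g}$, and, using $K_{\Mg{g}}=13\lambda-2\delta$,
$$(K_{\Mg{g}}+\alpha\delta)\cdot B_k=\frac{13(k^2+k)}{2}-(2-\alpha)(4k^2+6k)\leq 0
\iff \alpha\leq\frac{3k+11}{8k+12}.$$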
To construct $B_k$,
take a Hirzebruch surface
$\mathbb{F}_1 \ra B$ over $B\simeq \PP^1$. Denote by
$E$ the unique $(-1)$\nobreak-section and by $F$ the fiber. Next,
choose $2k+2$ general divisors $S_1,\dots, S_{2k+2}$ in the linear system
$\vert E+F\vert$ (these are sections of $\mathbb{F}_1\ra B$ of self-intersection $1$).
The divisor $\sum_{i=1}^{2k+2} S_i$ is divisible by $2$ in $\operatorname{Pic}(\mathbb{F}_1)$
and so there is a cyclic degree $2$
cover $\pi\co X\ra \mathbb{F}_1$ branched over $\sum_{i=1}^{2k+2} S_i$.
We have that $\pi^{-1}(E)=\Sigma_1+\Sigma_2$ is a disjoint union of two sections.
Thus, $\pi\co X \ra B$ is a family of at-worst nodal hyperelliptic curves of genus $k$ with two conjugate sections
$\Sigma_1$ and $\Sigma_2$.
From the construction, there are $\binom{2k+2}{2}$ nodes in the fibers of $\pi$. Because $X$ has
an $A_1$ singularity at each of these nodes, we have
$$\delta_{X/B}=2 \binom{2k+2}{2}=(2k+1)(2k+2).$$
From $K_{X/B}=\pi^*(K_{\mathbb{F}_1/B}+\frac{1}{2}\sum_{i=1}^{2k+2} S_i)$ we deduce that
$$12\lambda_{X/B}-\delta_{X/B}=K_{X/B}^2=2(K_{\mathbb{F}_1}+2F+(k+1)(E+F))^2=2(k^2-1),$$
so that $\lambda_{X/B}=(k^2+k)/2$, as claimed.
Finally, the self-intersection of each $\Sigma_i$
is $(-1)$. It follows that $\pi\co X\ra B$ is the requisite family.
\end{example}
\begin{example}[Hyperelliptic tails attached at arbitrary points]
We now consider a family of tails arising from stable reduction of a dangling
$A_{2k+1}^{\{1\}}$ singularity (see Section \ref{S:dangling}).
Taking the family $B_k$ constructed in Example \ref{E:hyperelliptic-bridges} above and forgetting one
marked section, we arrive at a family $H_k\subset \Mg{k,1}$ of hyperelliptic curves with a single marked section. Furthermore,
\begin{equation*}
\begin{aligned}
\lambda\cdot H_k &=(k^2+k)/2, \\
\delta_{0}\cdot H_k &=(2k+1)(2k+2), \\
\psi\cdot H_k &=1, \\
\delta_1\cdot H_k &=\cdots =\delta_{\lfloor k/2\rfloor} \cdot H_k=0.
\end{aligned}
\end{equation*}
In particular, the locus of curves with a hyperelliptic genus $k$ tail
falls in the
base locus of $K_{\Mg{g}}+\alpha\delta$ for $\alpha<(3k^2+11k+4)/(8k^2+12k+2)$. For example,
when $k=2$, this shows that $\Delta_2\subset \Mg{g}$ is covered by curves on which
$K_{\Mg{g}}+(19/29)\delta$ has degree $0$.
\end{example}
\begin{example}[Hyperelliptic triboroughs]
\label{E:hyperelliptic-triboroughs}
Next, we construct a complete one-parameter family $Tri_k$ of $3$\nobreak-pointed stable
hyperelliptic curves of genus $k$, with two marked points conjugate, that arises
from stable reduction of $D_{2k+2}$ singularity. It is easy to verify that this family comes
from $[\operatorname{Def}^{\, -}(C)/\GG_m]$ where $C$ is the projective closure of $x(y^2-x^{2k})=0$.
We show that $Tri_k$ intersects divisor classes on $\Mg{k,3}$ as follows:
\begin{equation*}
\begin{aligned}
\lambda\cdot Tri_k &=k^2+k, \\
\delta_{0}\cdot Tri_k &=2(2k+1)(2k+2), \\
\psi_1\cdot Tri_k &=\psi_2\cdot Tri_k=2, \\
\psi_3\cdot Tri_k &=2k, \\
\delta_1\cdot Tri_k &=\cdots =\delta_{\lfloor k/2\rfloor} \cdot Tri_k=0.
\end{aligned}
\end{equation*}
\noindent
The construction of $Tri_k$ parallels that of the
family $B_k$ above.
Namely, keeping the notation of Example \ref{E:hyperelliptic-bridges}, consider an additional section $S_0$ of $\mathbb{F}_1$
of self-intersection $1$ such that $S_0$ is transverse to $\sum_{i=1}^{2k+2} S_i$.
Set $C:=\pi^{-1}(S_0)$. Then $C$ is a
degree $2$ cover of $B$ (of genus $k$). Note that $C^2=2$ on $X$.
Consider the base extension $\pi'\co Y:= X\times_{B} C \ra C$. The preimage of $C$ on $Y$ is the union of two sections $C_1$ and $C_2$, intersecting transversally in
$2k+2$ points. By construction,
$(C_1+C_2)^2=2C^2=4$, and so $C_1^2=C_2^2=-2k$. Setting $\Sigma_3:=C_1$, we obtain
the requisite family $\pi'\co Y\ra C$ of hyperelliptic genus $k$ curves with two conjugate sections
(the preimages of $\Sigma_1$ and $\Sigma_2$)
of self-intersection $(-2)$ and the third section $\Sigma_3$
of self-intersection $(-2k)$.
\end{example}
\begin{example}[Hyperelliptic bridges attached at arbitrary points]
We now consider a family of bridges arising from stable reduction of a dangling
$D_{2k+2}^{\{1,2\}}$ singularity (see Section \ref{S:dangling}).
Taking the family $Tri_k$ constructed in Example \ref{E:hyperelliptic-triboroughs} above and forgetting one
conjugate section, we arrive at a family $BH_k$ of hyperelliptic curves with two marked points.
The intersection numbers of $BH_k$ are
\begin{equation*}
\begin{aligned}
\lambda\cdot BH_k &=k^2+k, \\
\delta_{0}\cdot BH_k &=2(2k+1)(2k+2), \\
\psi_1\cdot BH_k &=2, \quad \psi_2\cdot BH_k =2k, \\
\delta_1\cdot BH_k &=\cdots =\delta_{\lfloor k/2\rfloor} \cdot BH_k=0.
\end{aligned}
\end{equation*}
In particular, the locus of curves with a hyperelliptic bridge of genus $k$
attached at arbitrary points falls in the
base locus of $K_{\Mg{g}}+\alpha\delta$ for $\alpha<(3k^2+7k+4)/(8k^2+10k+2)$. For example,
when $k=2$, this shows that the locus of curves with genus $2$ bridges in $\Mg{g}$
is covered by curves on which $K_{\Mg{g}}+(5/9)\delta$ has degree $0$.
\end{example}
\subsection{Toric singularities}
\label{S:toric}
How can we write down a complete one-parameter family of stable
limits of a singularity in such a way that its
intersection numbers
with divisors on $\Mg{g}$ can be computed? We give a complete
answer only in the case of a planar toric singularity $x^{p}=y^{q}$,
even though our method applies more generally to any planar singularity.
Our approach is via degenerations: Begin with a complete family $F_1$
of at-worst nodal curves -- a general pencil of plane curves of degree
$d\gg 0$ will do.
Now vary $F_1$ in a one-parameter family $F_s$ in such a way that
among curves in $F_0$
exactly one curve $C$ has singularity $f(x,y)=0$ while the rest
are at-worst nodal.
Since the generic points of $F_0$ and $F_1$ are smooth curves of genus
$g=\binom{d-1}{2}$, we obtain two $1$-cycles $F_0, F_1\in \mathrm{N}_1(\Mg{g})$.
For a line bundle
$\cL\in \operatorname{Pic}(\Mg{g})$ the numbers $\cL\cdot F_0$ and $\cL \cdot F_1$
will differ. If we denote
by $\mathcal{F}$ the total space of $\{F_s\}$, then
the discrepancy between $\cL\cdot F_0$ and $\cL\cdot F_1$ is accounted
for by the indeterminacy
of the rational map $\mathcal{F} \dashrightarrow \Mg{g}$ at the point $[C]$. In fact, if
\[
\xymatrix{
&W \ar[rd]^{h} \ar[ld]_{f}&\\
\mathcal{F} \ar@{-->}[rr]&&\Mg{g}
}
\]
is the graph of this rational map, then in $\mathrm{N}_1(\Mg{g})$ we have
$F_1=F_0+h(f^{-1}([C]))$.
By construction, $Z:=h(f^{-1}([C]))$ is a $1$\nobreak-cycle inside the variety of stable limits of $f(x,y)=0$.
The slope of $Z$ is then given by
$$(\delta\cdot F_1 -\delta\cdot F_0)/(\lambda\cdot F_1 -\lambda\cdot F_0).$$
We now perform the necessary computations for toric planar singularities.
To begin, let $C$ be a plane curve of degree
$d\gg 0$, with an isolated singularity $x^{pb}=y^{qb}$, where $p$ and $q$ are coprime.
The possible stable limits of $C$ have the following description due to Hassett:
\begin{prop}[{\cite[Theorem 6.5]{hassett-stable}}]
\label{P:tails-description}
The stable limits of $C$ are
of the form $\tilde{C} \cup T$,
where the tail $(T,p_1,\dots, p_b)$ is a $b$-pointed curve of arithmetic genus
$g=(pqb^2-pb-qb-b+2)/2$. Moreover,
\begin{enumerate}
\item $K_T=(pqb-p-q-1)(p_1+\dots+p_b)$.
\item $T$ is $qb$-gonal with $g^1_{qb}$ given by $|q(p_1+\dots+p_b)|$.
\end{enumerate}
\end{prop}
Given a two-parameter deformation
of the curve $C$, one obtains a one-dimensional family of stable limits. The next proposition constructs
one such family and computes its intersections with the divisor classes $\lambda, \delta$, and $\psi$ on
$\Mg{g,b}$.
\begin{prop}\label{P:toric-family} Let $(p,q)=1$.
Suppose $F_0$ is a pencil of plane curves of degree $d\gg 0$ containing
a curve $C$ with a unique singularity $x^{pb}=y^{qb}$ and such that the total family over
$F_0$ is smooth. Consider
a deformation $\mathcal{F}=\{F_s\}$ of $F_0$ such that $F_1$ is a general pencil.
If $\mathcal{F} \stackrel{f}{\longleftarrow} W \stackrel{h}{\longrightarrow} \Mg{\binom{d-1}{2}}$ is the graph of the rational
map $\mathcal{F}\dashrightarrow \Mg{\binom{d-1}{2}}$,
then
the $1$-cycle
$Z:=h(f^{-1}([C]))$ inside $\Mg{g, b}$ (here, $g=(pqb^2-pb-qb-b+2)/2$) is irreducible and
satisfies:
\begin{multline*}
\begin{aligned}
\lambda\cdot Z &=\frac{b}{12}\left( (pqb-p-q)^2+pq(pqb^2-pb-qb+1)-1\right),\\
\delta_{0}\cdot Z &=pqb(pqb^2-pb-qb+1), \\
\psi\cdot Z &=b.
\end{aligned}
\end{multline*}
\end{prop}
\begin{proof}
Without loss of generality, we can assume that the total space of the family of plane curves over $\mathcal{F}=\{F_s\}$
has local equation
$x^p=y^q+sxy+t$. Then the simultaneous stable reduction of this family is obtained by
the weighted blow-up of $\AA^2_{s,t}$ with weights $w(s,t)=(pq-p-q,pq)$. It follows that
$Z\simeq \PP(pq-p-q,pq)$ is irreducible. For $[s:t]\in Z$, the stable
curve over $[s:t]$ is a curve on $\PP(q,p,1)$ defined by the weighted homogeneous equation
$$x^p=y^q+sxyz^{pq-p-q}+tz^{pq}.$$ It is easy to verify that every member of this
family is in fact an irreducible stable curve.
We proceed to compute the intersection numbers of $Z$ with divisor classes $\lambda, \delta_0$,
and $\psi$.
Let $\X_i$ be the total families of pencils $F_i$ ($i=0,1$).
Our first goal is to compare the degrees of $\delta$ and $\kappa$
on $\X_0$ and $\X_1$:
Since $F_1$ is a general pencil, we have
$\delta(\X_1)=\delta_0(\X_1)=3(d-1)^2$.
To find the number of singular fibers in $\X_0\smallsetminus C$, we observe
that the topological Euler characteristic of $C$ is
$
2-2g(C)-(b-1),
$
where $g(C)=\binom{d-1}{2}-g-b+1$ is the geometric genus of $C$. Since the topological Euler characteristics
of $\X_0$ and $\X_1$ are the same, we have that
\[
\delta_{0}(\X_0 \smallsetminus C)=\delta_0(\X_1)-(2g+b-1)=\delta_0(\X_1)-(pqb^2-pb-qb+1).
\]
Since $\X_0$ and $\X_1$ are two families of plane curves of degree $d$,
we have
\[
\kappa(\X_0)=\kappa(\X_1).
\]
Next, to compare intersection numbers of $F_0$ and $F_1$ with $\lambda$ and
$\delta_0$, we
need to write down a family of stable curves over each $F_i$. There is
nothing to do in the
case of $F_1$, since it is already a general pencil of plane curves of degree $d$. In particular, we have
$\lambda\cdot F_1=(\kappa(\X_1)+\delta_0(\X_1))/12$ by Mumford's formula.
To write down a stable family over $F_0$, we perform stable reduction of $\X_0\ra F_0$ in two steps:
\noindent
{\em Base change:} To begin, make a base change of order $bpq$
to obtain the family $\cY$ with local equation $x^{pb}-y^{qb}=t^{pqb}$.
The numerical invariants of $\cY$ are
\begin{align*}
\kappa(\cY) &=pqb\kappa(\X_0)=pqb\kappa(\X_1), \quad \text{and}\\
\delta_0(\cY\smallsetminus C)
&=pqb\delta_0(\X_0\smallsetminus C)=pqb(\delta_{0}(\X_1)-(pqb^2-pb-qb+1)).
\end{align*}
\noindent
{\em Weighted blow-up:} Let $\cZ$ be the
weighted blow-up of $\cY$, centered at $x=y=t=0$, with weights $w(x,y,t)=(q,p,1)$. The central fiber of $\cZ$
becomes
$\tilde C \cup T$ of the form described in Proposition \ref{P:tails-description}, with smooth $T$.
The self-intersection of the tail $T$ inside $\cZ$ is $(-b)$. By intersecting both sides of
$K_{\cZ}=\pi^*K_{\cY}+aT$ with $T$, we find that $a=p+q-pqb$. It follows that
\[
\kappa(\cZ)=\kappa(\cY)-b(pqb-p-q)^2=pqb\kappa(\X_1)-
b(pqb-p-q)^2.
\]
The number $\delta_{0}(\cZ\smallsetminus (\tilde C \cup T))$ of singular fibers in $\cZ\smallsetminus (\tilde C \cup T)$ is the same as in $\cY \smallsetminus C$
and equals
\[
pqb\delta_{0}(\X_1)-pqb(pqb^2-pb-qb+1).
\]
Remembering that the central fiber of $\cZ$ has exactly $b$ nodes, we
compute
\begin{align*}
\delta_0\cdot Z=pqb\delta_0(\X_1)-\delta_0(\cZ) &=pqb(pqb^2-pb-qb+1)-b, \\
\kappa \cdot Z=pqb\kappa(\X_1)-\kappa(\cZ) &=b(pqb-p-q)^2.
\end{align*}
Using Mumford's formula $\lambda=(\kappa+\delta)/12$, we obtain
\begin{align*}
\lambda\cdot Z=\frac{b}{12}\left(
(pqb-p-q)^2+pq(pqb^2-pb-qb+1)-1\right).
\end{align*}
We leave it as an exercise for the reader to verify that $\psi\cdot Z=b$.
\end{proof}
\begin{cor}\label{C:toric-family} Suppose $p$ and $q$ are coprime.
Then for the one-parameter family $Z$ of irreducible
one-pointed tails of stable limits of $x^p=y^q$, constructed in Proposition \ref{P:toric-family}, we have
\[
\frac{(\delta-\psi)\cdot Z}{\lambda\cdot
Z}=12\frac{pq(p-1)(q-1)-1}{(p-1)(q-1)(2pq-p-q-1)}.
\]
Considered as a family of unpointed curves of genus $g=(p-1)(q-1)/2$, $Z$ has slope
\[
\frac{\delta \cdot Z}{\lambda\cdot Z}=\frac{12pq}{2pq-p-q-1}.
\]
\end{cor}
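As a quick consistency check on these formulae, take $(p,q,b)=(2,3,1)$, so that $x^2=y^3$ is an ordinary cusp and the tail has genus $g=(p-1)(q-1)/2=1$. Evaluating the formulas of Proposition \ref{P:toric-family} gives
$$
\lambda\cdot Z=\frac{1}{12}\left((6-2-3)^2+6(6-2-3+1)-1\right)=1,\quad \delta_0\cdot Z=6(6-2-3+1)=12,\quad \psi\cdot Z=1,
$$
whence $(\delta-\psi)\cdot Z/\lambda\cdot Z=11$ and $\delta\cdot Z/\lambda\cdot Z=12$, matching both formulas of Corollary \ref{C:toric-family}; the latter ratio is the familiar slope of a family of elliptic curves.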
\section{Connections to GIT}
\label{section-git}
The $\alpha$-invariant can be reinterpreted in terms of the Hilbert-Mumford index in geometric invariant theory.
Recall that for every $g$, $n$, and $m$, we have the Hilbert and Chow GIT quotients
$$\overline{\operatorname{Hilb}}_{g,n}^{\, m, \ss} /\hspace{-0.25pc}/ \operatorname{SL}_{N} \quad \text{and} \quad \overline{\textrm{Chow}}_{g,n}^{\, \ss} /\hspace{-0.25pc}/ \operatorname{SL}_{N}$$
parameterizing, respectively, semistable $m^{th}$ Hilbert points of $n$-canonically
embedded curves of genus $g$, and semistable Chow points of $n$-canonically
embedded curves of genus $g$, up to projectivities.
Here, $N = g$ if $n=1$, and $N = (2n-1)(g-1)$ if $n > 1$.
\begin{prop} \label{proposition-git}
Let $C$ be a Gorenstein $n$-canonically embedded genus $g$ curve which admits a $\GG_m$-action
$\eta\co \GG_m \to \operatorname{Aut}(C)$. Consider the induced one-parameter subgroup
$\tilde{\eta}\co \GG_m \to \operatorname{SL}_{N}$. Then the Hilbert-Mumford indices of the $m^{th}$ Hilbert
point of $C$, respectively the Chow point of $C$, with respect to $\tilde{\eta}$ are
$$
\mu^{\overline{\operatorname{Hilb}}_{g,n}^{\, m}} ([C], \tilde{\eta})=
\begin{cases}
\chi_{\lambda}+(m-1)\left[ ((4g+2)m-g+1) \chi_{\lambda}-\frac{gm}{2} \chi_{\delta} \right], & \ \text{if $n=1$}, \\
(m-1)(g-1)\left[ (6mn^2-2mn-2n+1) \chi_{\lambda}-\frac{mn^2}{2} \chi_{\delta} \right], & \ \text{if $n>1$,}\\
\end{cases}
$$
and
$$\mu^{\overline{\text{{\em Chow}}}_{g,n}} ([C], \tilde{\eta})=
\begin{cases}
(4g+2) \chi_{\lambda} - \frac{g}{2} \chi_{\delta}\ , & \text{ if } n = 1,\\
(g-1)n[(6n-2) \chi_{\lambda} - \frac{n}{2} \chi_{\delta}], & \text{ if } n > 1.
\end{cases}
$$
\end{prop}
\begin{proof} This result follows directly by computing the divisor classes of the GIT linearizations as in \cite[Theorem 5.15]{mumford-stability} or \cite[Section 5]{hassett-hyeon_flip}.
\end{proof}
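Two quick consistency checks on these formulae are worth recording. First, the leading $m^{2}$-coefficient of each Hilbert-Mumford index above recovers the corresponding Chow index, as one expects from the relation between Hilbert and Chow stability. Second, for $n=1$ the Chow index vanishes precisely when
$$
\frac{\chi_{\delta}}{\chi_{\lambda}}=\frac{8g+4}{g}=8+\frac{4}{g},
$$
the critical slope familiar from the Cornalba-Harris inequality.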
This proposition implies that if one can compute the characters of $\lambda$ and $\delta$ (or equivalently $\lambda$
and $\lambda_2$) with respect to one-parameter subgroups of the automorphism group, then one immediately knows
the Hilbert-Mumford indices for all Hilbert and Chow quotients for such one-parameter subgroups. Moreover,
if the Hilbert-Mumford index with respect to a one-parameter subgroup of the automorphism group is non-zero, then
the curve is unstable. In particular, we recover results of \cite[Propositions 2 and 3]{hyeon_predictions}.
\subsection*{GIT stability of ribbons}
Applying Proposition \ref{proposition-git}, we obtain the following result as a corollary of computations
made in Example \ref{E:ribbons}, whose notation we keep.
\begin{thm}[Hilbert stability of ribbons] \label{theorem-ribbons}
Let $C_\ell$ be a ribbon defined by $f(y)=y^{-\ell}$ for some $\ell\in \{1,\dots, g-2\}$.
Then $C_\ell$ admits a $\GG_m$-action $\rho\co \GG_m\ra \operatorname{Aut}(C_\ell)$ and
\begin{enumerate}
\item If $\ell\neq (g-1)/2$, then the
$m^{th}$ Hilbert point of the $n$-canonical embedding of
$C_{\ell}$ is unstable
for all $m\geq 2$ and $n\geq 1$.
\item If $\ell= (g-1)/2$, then the $m^{th}$ Hilbert point of the $n$-canonical embedding of
$C_ {\ell}$ is unstable for all $m\geq 2$ and $n\geq 2$.
\item If $\ell= (g-1)/2$, then the $m^{th}$ Hilbert point of the {\em canonical embedding} of $C_ {\ell}$ is strictly semistable with respect to $\rho$ for all $m\geq 2$.
\end{enumerate}
\end{thm}
\begin{proof}
The ribbon $C_{\ell}$ is obtained by gluing
$\spec k[x,\varepsilon]/(\varepsilon^2)$ and
$\spec k[y,\eta]/(\eta^2)$
along open affines
$\spec k[x,x^{-1},\varepsilon]/(\varepsilon^2)$ and $\spec k[y,y^{-1},\eta]/(\eta^2)$ by
\begin{align*}
x &\mapsto y^{-1}-y^{-\ell-2}\eta, \\
\varepsilon &\mapsto y^{-g-1}\eta.
\end{align*}
We consider the $\GG_m$-action on $C_\ell$ given by
$t\cdot (x,y,\varepsilon, \eta)=(tx, t^{-1}y, t^{g-\ell}\varepsilon, t^{-\ell-1}\eta)$.
It induces a one-parameter subgroup $\rho\co
\GG_m \ra \operatorname{Aut}(C_{\ell})$. By Example \ref{E:ribbons}, the characters of $C_\ell$ are
\begin{align*}
\chi_\lambda(C_\ell, \rho)=g\bigl(\ell-\frac{g-1}{2}\bigr), \qquad
\chi_\delta(C_\ell, \rho)=(5g-4)\bigl(\ell-\frac{g-1}{2}\bigr).
\end{align*}
It follows by Proposition \ref{proposition-git} that the Hilbert-Mumford index with respect to $\rho$
of the $m^{th}$ Hilbert point of the canonical embedding of $C_{\ell}$ is
\[
\mu^{\overline{\operatorname{Hilb}}_{g,1}^{\, m}} ([C], \rho)=
g(g+m-gm) \bigl(\ell-\frac{g-1}{2} \bigr).
\]
In particular, it is $0$ if and only if $\ell=(g-1)/2$. Similarly, we verify that the Hilbert-Mumford index
$\mu^{\overline{\operatorname{Hilb}}_{g,n}^{\, m}} ([C], \rho)$ of the $m^{th}$ Hilbert point of the
$n$-canonical
embedding of $C_{\ell}$ is non-zero for every $n, m\geq 2$. This finishes the proof.
\end{proof}
\begin{cor}
If $g$ is even, then every canonically embedded genus $g$
ribbon with a $\GG_m$-action is Hilbert unstable.
\end{cor}
|
2,869,038,155,841 | arxiv |
\section{Introduction and main results}
In his classic 1948 paper \cite{Shannon}, Shannon introduced the entropy power,
\begin{equation*}
N(X)=\exp\left(\frac2nH(X)\right),
\end{equation*}
and gave a proof of the entropy power inequality (EPI) \eqref{EPI} for independent random variables $X$ and $Y$,
\begin{equation}\label{EPI}
N(X+Y)\ge N(X)+N(Y),
\end{equation}
where $H(X)$ is the entropy of a random variable $X$ with probability density function $u$,
\begin{equation*}
H(X)=-\int_{\mathbb{R}^{n}}u\log u\,dx.
\end{equation*}
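For example, equality holds in \eqref{EPI} for independent Gaussian vectors with proportional covariance matrices: if $X\sim\mathcal{N}(0,\sigma_1^{2}I_n)$ and $Y\sim\mathcal{N}(0,\sigma_2^{2}I_n)$, then
\begin{equation*}
N(X)=2\pi e\sigma_1^{2},\quad N(Y)=2\pi e\sigma_2^{2},\quad N(X+Y)=2\pi e(\sigma_1^{2}+\sigma_2^{2})=N(X)+N(Y).
\end{equation*}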
The entropy power inequality plays an important role in the fields of information
theory, probability theory and convex geometry. The entropy power inequality has many interesting consequences; one of them was noticed by Costa. More precisely, in 1985, Costa \cite{Costa} proved that if $u(t),\,t > 0$, are probability densities solving the heat equation $\partial_tu=\Delta u$ in the whole space $\mathbb{R}^n$, then
\begin{equation}\label{CEP}
\frac{d^2}{dt^2}{N}(u(t))\le0.
\end{equation}
Inequality \eqref{CEP} is referred to as the \textbf{concavity of entropy
power}. In \cite{Villani}, Villani gave a direct proof of \eqref{CEP} in a strengthened version with an exact error term, which connects the concavity of entropy power with some identities of Bakry-\'Emery.
Recently, G.~Savar\'e and G.~Toscani \cite{ST} showed
that the concavity of entropy power
is a property which is not restricted to Shannon entropy power
in connection with the heat equation, but also holds for
the $\gamma$-th R\'enyi entropy power \eqref{RenPow},
\begin{equation}\label{RenPow}
{N}_{\gamma}(u)\doteqdot\exp\left(\frac{\lambda}{n}{R}_{\gamma}(u)\right),\quad \lambda=2+n(\gamma-1)>0,
\end{equation}
which connects
with the solution to the nonlinear diffusion equation
\begin{equation}\label{NDE}
\partial_tu=\Delta u^{\gamma},
\end{equation}
where ${R}_{\gamma}$ is the R\'enyi entropy
\begin{equation*}
{R}_{\gamma}(u)\doteqdot\frac1{1-\gamma}\log\int_{\mathbb{R}^n}u^{\gamma}(x)\,dx,\quad\gamma\in(0,\infty), \;\gamma\neq1.
\end{equation*}
When $\gamma > 1-\frac2n$, they have proved the concavity of R\'enyi
entropy power defined in \eqref{RenPow},
\begin{equation*}
\frac{d^2}{dt^2}{N}_{\gamma}(u(t))\le0,
\end{equation*}
where $u(t),\,t>0$ are probability densities solving \eqref{NDE} in $\mathbb{R}^n$.
In this paper, motivated by the above works, we study entropy with respect to nonlinear diffusion on $\mathbb{R}^n$ and on Riemannian manifolds. Let $u$ be a positive solution to the $p$-heat equation
\begin{equation}\label{pheat}
\frac{\partial u^{p-1}}{\partial t}=(p-1)^{p-1}\Delta_{p}u,
\end{equation}
where $\Delta_{p}u={\rm div}(|\nabla u|^{p-2}\nabla u)$ is the $p$-Laplacian of $u$,
define the $p$-entropy
\begin{equation}\label{pentropy}
H_p(u)\doteqdot-\int_Mu^{p-1}\log u^{p-1}\,dV,\quad p>1
\end{equation}
and $p$-entropy power
\begin{equation}\label{pSEP}
N_p(u)\doteqdot\exp\left(\frac pn{H}_p(u)\right)
\end{equation}
on a Riemannian manifold $M$, so that Shannon's entropy and entropy power are identified with the $p$-entropy and $p$-entropy power of index $p=2$, respectively.
Kotschwar and Ni \cite{Ni2009} introduced the $p$-entropy \eqref{pentropy} on Riemannian manifolds and proved a Perelman-type $W$-entropy monotonicity formula under nonnegative Ricci curvature. The first author generalized this result to weighted Riemannian manifolds with nonnegative $m$-Bakry-\'Emery Ricci curvature \cite{WYC} and with $m$-Bakry-\'Emery Ricci curvature bounded below \cite{WYZ}, respectively.
The first result in this paper is the concavity of the $p$-entropy power with respect to the $p$-heat equation on a closed Riemannian manifold with nonnegative Ricci curvature. Due to this property, and motivated by Villani \cite{Villani}, we establish a deep link between entropy, information and entropy power by means of a nonnegative quantity. We introduce the $p$-Fisher information
\begin{equation}\label{pfisher}
\frac{d}{dt}H_{p}(u)=I_{p}(u)=(p-1)^{p}\int_{M}\frac{|\nabla u|^{p}}{u}\,dV
\end{equation}
where the first equality is the DeBruijn-type identity, and the $p$-Fisher information enters the second order derivative of the $p$-entropy power.
The precise statement is the following.
\begin{theorem}\label{Error}
Let $p>1$ and let $u(x,t),\,t>0$, be a positive solution to the $p$-heat equation \eqref{pheat} on a closed Riemannian manifold $(M,g)$. Set $v=-(p-1)\log u$ and $\omega=|\nabla v|^{2}$. Then we have
\begin{equation}
\frac{d^{2}}{dt^{2}}N_{p}(u)=-\frac{p^2}{n}N_{p}(u)\int_M\left(\left|\omega^{\frac{p}{2}-1}\nabla_{i}\nabla_{j}v
-\frac{I_{p}(u)}{n}a_{ij}\right|_{A}^{2}+\omega^{p-2}{\rm Ric}(\nabla v,\nabla v)\right) e^{-v}\,dV,
\end{equation}
where $I_p(u)$ is the $p$-Fisher information defined in \eqref{pfisher} with respect to $p$-entropy and for any $2$-tensor $T$, $|T|^2_A=a^{ij}a^{kl}T_{ik}T_{jl}$, $a^{ij}=g^{ij}+(p-2)\frac{v^{i}v^{j}}{\omega}$ is the inverse of $(a_{ij})$. When the Ricci curvature is nonnegative, the $p$-entropy power defined in \eqref{pSEP} is concave.
\end{theorem}
\begin{remark} When $M$ is the Euclidean space $\mathbb{R}^n$ and $u(x,t)$ is a smooth and rapidly decaying positive solution to the $p$-heat equation, then
\begin{equation}\label{concave}
\frac{d^{2}}{dt^{2}}N_{p}(u)=-\frac{p^2}{n}N_{p}(u)\int_{\mathbb{R}^n}\left|\omega^{\frac{p}{2}-1}\nabla_{i}\nabla_{j}v
-\frac{I_{p}(u)}{n}a_{ij}\right|_{A}^{2}e^{-v}\,dx\le0,
\end{equation}
that is, the $p$-entropy power defined in \eqref{pSEP} is concave.
\end{remark}
In a recent preprint \cite{LL}, S.-Z. Li and X.-D. Li studied entropy power on weighted Riemannian manifolds; they proved that the Shannon entropy power and the R\'enyi entropy power are concave under the nonnegative Bakry-\'Emery Ricci curvature condition \cite{Bakry}. Motivated by their work, we prove the concavity of the $p$-entropy power under nonnegative Bakry-\'Emery Ricci curvature in another paper \cite{WZ2}.
There is an explicit connection between the $p$-entropy power and solutions to the $p$-heat equation, which is best seen through its fundamental solution. In the preceding result, we assume that $u(x,t)$ is a solution to the $p$-heat equation \eqref{pheat} and conclude that the second order derivative of the $p$-entropy power is non-positive. Indeed, equality to zero is achieved by the fundamental solution to \eqref{pheat}, which takes the form
\begin{equation}\label{Gpt}
G_{p,t}(x)=t^{-\frac{n}{p(p-1)}}\widetilde{G}_{p}\left(\frac{x}{t^{\frac{1}{p}}}\right)
\end{equation}
from the time-independent function
\begin{equation}\label{Gp}
\widetilde{G}_{p}(x)=\left(C_{p,n}e^{-\varphi_{0}(x,1)}\right)^{\frac{1}{p-1}}
\end{equation}
with
$$C_{p,n}\doteqdot(p^{\frac{1}{p}}q^{\frac{1}{q}})^{-n}\pi^{-\frac{n}{2}}
\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)},
\;\varphi_{0}(x,t)\doteqdot\frac{p-1}{p^{q}}\left(\frac{| x|}{t^{\frac{1}{p}}}\right)^{q},\;\frac{1}{p}+\frac{1}{q}=1,\;x\in \mathbb{R}^{n},\;t>0.
$$
For $p>1$, a direct calculation shows that the $p$-entropy power of the fundamental solution \eqref{Gpt} is linear in time $t$, that is,
\begin{equation}\label{tGpt}
N_{p}(G_{p,t}(x))=t\cdot N_{p}(\widetilde{G}_{p}(x))
\end{equation}
so that $\frac{d^{2}}{dt^{2}}N_{p}(G_{p,t}(x))=0$.
Formula \eqref{Gpt}, which for $p=2$ reduces to a pair of Gaussian densities of variance $2$ and $2t$ respectively, is analogous to the self-similar solution of the nonlinear diffusion equation \eqref{NDE} (\cite{ST}), which enjoys the same property as equation \eqref{tGpt}; this highlights the point that the $p$-entropy power is strictly concave except along the fundamental solution \eqref{Gpt} of equation \eqref{pheat}.
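For the reader's convenience we record the case $p=2$ explicitly: then $q=2$, $C_{2,n}=(4\pi)^{-n/2}$ and $\varphi_{0}(x,1)=|x|^{2}/4$, so \eqref{Gpt} reduces to the Gaussian heat kernel
\begin{equation*}
G_{2,t}(x)=(4\pi t)^{-\frac{n}{2}}e^{-\frac{|x|^{2}}{4t}},
\end{equation*}
whose entropy power is $N_{2}(G_{2,t})=4\pi e t$, in accordance with \eqref{tGpt}.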
The concavity ensures that a functional built from the first derivative of the $p$-entropy power (see \eqref{FI}) is monotonically non-increasing. Thus it reaches its lower bound along solutions to the $p$-heat equation \eqref{pheat} as time tends to infinity. We now state the second result.
\begin{theorem}\label{isoper}
Let $p>1$. Every smooth and rapidly decaying positive function $u(x)$ with $\| u^{p-1}\|_{L^{1}}=1$ satisfies
\begin{equation}\label{isoperi}
N_{p}(u)I_{p}(u)\geq
N_{p}\left(\widetilde{G}_{p}(x)\right)I_{p}\left(\widetilde{G}_{p}(x)\right)=\gamma_{n,p},
\end{equation}
where the value of the strictly positive constant $\gamma_{n,p}$ is given by
\begin{equation}\label{gammacon}
\gamma_{n,p}=n(qe)^{p-1}\pi^{\frac{p}{2}}\left[\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}\right]^{-\frac{p}{n}},\quad q=\frac p{p-1}.
\end{equation}
\end{theorem}
Theorem \ref{isoper} expresses, in the sharp form \eqref{isoperi}, a well-known property which goes back to the concavity of the $p$-entropy power. When $p=2$, inequality \eqref{isoperi} becomes
\begin{equation}\label{IEPI}
N_{2}(u)I_{2}(u)\geq2\pi ne,
\end{equation}
which is known as the {\bf isoperimetric entropy power inequality}. In \cite{Toscani}, G.~Toscani reproved the logarithmic Sobolev inequality and Nash's inequality
with sharp constants by using the isoperimetric entropy power inequality \eqref{IEPI}.
In \cite{ST}, G.~Savar\'e and G.~Toscani obtained an improvement of the Sobolev inequality by rewriting the isoperimetric inequality in terms of entropy. The concavity of entropy power and the isoperimetric entropy power inequality can thus be viewed as powerful tools for proving various sharp functional inequalities. Motivated by these works, we derive further inequalities as applications of the concavity result \eqref{concave} and the isoperimetric inequality \eqref{isoperi}. The first application is the $L^p$-Nash inequality.
\begin{theorem}\label{OPNash} Let $p>1$ and $q=\frac p{p-1}$, and suppose $g(x)$ is a positive function in $L^p(\mathbb{R}^n)$. Then
\begin{equation}\label{PNashineq}
\left(\int_{\mathbb{R}^{n}}g^{p}dx\right)^{1+\frac{q}{n}}\leq
p^{p}\gamma^{-1}_{n,p}\left(\int_{\mathbb{R}^{n}}gdx\right)^{\frac{pq}{n}}\int_{\mathbb{R}^{n}}|\nabla
g|^{p}\,dx,
\end{equation}
where $\gamma_{n,p}$ is the constant in \eqref{gammacon}.
\end{theorem}
\begin{remark}
When $p=2$, inequality \eqref{PNashineq} reduces to the classical Nash inequality in sharp form
\begin{equation}
\left(\int_{\mathbb{R}^{n}}g^{2}\,dx\right)^{1+\frac{2}{n}}\le\frac{2}{\pi en}\left(\int_{\mathbb{R}^{n}}g\,dx\right)^{\frac{4}{n}}\int_{\mathbb{R}^n}|\nabla g|^2\,dx.
\end{equation}
\end{remark}
The second application is a new proof of $L^p$-Logarithmic-Sobolev inequality by the concavity of $p$-entropy power. Furthermore, we give an improvement of $L^p$-Logarithmic-Sobolev inequality based on a generalized Csisz\'ar-Kullback inequality.
\begin{theorem}\label{improve}
Let $p>1$, $q=\frac{p}{p-1}$, $\theta\in(0,2]$, and let $g(x)$ be a positive function with $\int_{\mathbb{R}^{n}}g^{p}\,dx=1$; for inequality \eqref{GLogSobolev} assume in addition that $\frac{1}{n}\int_{\mathbb{R}^{n}}|x|^{q}g^{p}\,dx\leq p^{\frac{1}{p-1}}$. Then we have
\begin{equation}\label{LpEu}
\int_{\mathbb{R}^{n}}g^{p}\log g^{p}\,dx\leq\frac{n}{p}\log\left(p^{p}\gamma^{-1}_{n,p}\int_{\mathbb{R}^{n}}|\nabla g|^{p}\,dx\right)
\end{equation}
and
\begin{equation}\label{GLogSobolev}
p^{p}\int_{\mathbb{R}^{n}}|\nabla g|^{p}\,dx-\left(\int_{\mathbb{R}^{n}}g^{p}\log g^{p}\,dx
+\frac{n}{p}\log\left(\frac{pe}{n}\gamma_{n,p}\right)\right)
\geq
\frac{p}{8n}\left\{\int_{\mathbb{R}^{n}}\left(g^{p}/\widetilde{G}_{p}^{p-1}-1\right)^{\theta}\widetilde{G}_{p}^{p-1}\,dx\right\}^{\frac{4}{\theta}},
\end{equation}
where $\gamma_{n,p}$ is a constant in \eqref{gammacon} and $\widetilde{G}_{p}$ is defined in \eqref{Gp}.
\end{theorem}
\vspace{3mm}
An outline of this paper is as follows. Based on the $p$-heat equation \eqref{pheat}, we show the concavity of the $p$-entropy power (Section 2) and the corresponding isoperimetric entropy power inequality (Section 3). In Sections 4 and 5, we obtain applications of the $p$-entropy power, namely generalizations of the $L^p$-Euclidean Nash inequality and of the (improved) $L^p$-Euclidean logarithmic Sobolev inequality.
\section{The concavity of $p$-entropy power}
Before starting our proof, we need the following two dissipation formulae.
\begin{lemma}\label{lemma}
Let $p>1$ and let $u(x,t),\,t>0$, be a positive solution to the
$p$-heat equation \eqref{pheat} on a closed Riemannian manifold $(M,g)$. Set $v=-(p-1)\log u$ and $\omega=|\nabla v|^{2}$. Then the $p$-entropy defined in \eqref{pentropy}
can be rewritten in the simplified form
\begin{equation}\label{pentropy1}
H_{p}(u)=\int_{M}ve^{-v}\,dV.
\end{equation}
Hence,
\begin{equation}
\frac{d}{dt}H_p(u)=\int_{M}\omega^{\frac{p}{2}}e^{-v}\,dV
\end{equation}
and
\begin{equation}\label{2ndpentropy}
\frac{d^{2}}{dt^{2}}H_{p}(u)=-p\int_{M}\omega^{p-2}\left(|\nabla\nabla v|_{A}^{2}+{\rm Ric}(\nabla v,\nabla v)\right)e^{-v}\,dV.
\end{equation}
\end{lemma}
\proof
Set $u^{p-1}=e^{-v}$. Recalling the $p$-entropy in \eqref{pentropy},
we have
\begin{equation*}
H_{p}(u)=\int_{M}ve^{-v}\,dV.
\end{equation*}
Using the fact $\nabla u=-\frac{u}{p-1}\nabla v$ in $p$-heat equation \eqref{pheat}, we get
\begin{align*}
-e^{-v}\partial_{t}v&=(p-1)^{p-1}{\rm div}\left[\left|-\frac{u}{p-1}\nabla v\right|^{p-2}\left(-\frac{u}{p-1}\right)\nabla v\right]\\
&=-{\rm div}(e^{-v}|\nabla v|^{p-2}\nabla v)\\
&=-e^{-v}(\Delta_{p}v-|\nabla v|^{p}).
\end{align*}
Let $\omega=|\nabla v|^{2}$, we obtain
\begin{align}\label{pheat11}
\partial_{t}v&=\Delta_{p}v-\omega^{\frac{p}{2}}\notag\\
&=\left(\frac{p}{2}-1\right)\omega^{\frac{p}{2}-2}\langle\nabla \omega,\nabla v\rangle
+\omega^{\frac{p}{2}-1}\Delta v-\omega^{\frac{p}{2}}.
\end{align}
To evaluate the derivatives of $p$-entropy, integration by parts implies
\begin{align*}
\frac{d}{dt}H_{p} &=\frac{d}{dt}\int_{M}ve^{-v}\,dV
=\int_{M}(1-v)e^{-v}\partial_{t}v\,dV\\
&=\int_{M}(1-v)e^{-v}(\Delta_{p}v-\omega^{\frac{p}{2}})\,dV\\
&=\int_{M}\omega^{\frac{p}{2}}e^{-v}\,dV,
\end{align*}
and
\begin{align}\label{2ndent}
\frac{d^{2}}{dt^{2}}H_{p}&=\frac{d}{dt}\int_{M}\omega^{\frac{p}{2}}e^{-v}\,dV
\notag\\
&=\frac{p}{2}\int_{M}\omega^{\frac{p}{2}-1}e^{-v}\partial_{t}\omega
\,dV-\int_{M}\omega^{\frac{p}{2}}e^{-v}\partial_{t}v\,dV
\notag\\
&=p\int_{M}\omega^{\frac{p}{2}-1}e^{-v}\nabla v\cdot\nabla
\partial_{t}v\,dV-\int_{M}\omega^{\frac{p}{2}}e^{-v}\partial_{t}v\,dV.
\end{align}
$1)$ Compute the first part of formula \eqref{2ndent}. Since
\begin{align*}
&\nabla v\cdot\nabla\partial_{t}v
=\nabla v\cdot\nabla\left[\left(\frac{p}{2}-1\right)\omega^{\frac{p}{2}-2}\langle\nabla \omega,\nabla v\rangle
+\omega^{\frac{p}{2}-1}\Delta v-\omega^{\frac{p}{2}}\right]
\\
=&\left(\frac{p}{2}-1\right)\left(\frac{p}{2}-2\right)\omega^{\frac{p}{2}-3}\langle\nabla \omega,\nabla
v\rangle^{2}
+\left(\frac{p}{2}-1\right)\omega^{\frac{p}{2}-2}\langle\nabla \omega,\nabla v\rangle\Delta v
\\
+&\omega^{\frac{p}{2}-1}\langle\nabla\Delta v,\nabla
v\rangle-\frac{p}{2}\omega^{\frac{p}{2}-1}\langle\nabla \omega,\nabla v\rangle
+\left(\frac{p}{2}-1\right)\omega^{\frac{p}{2}-2}\bigg(\langle\nabla\nabla v,\nabla
\omega\rangle+\langle\nabla v,\nabla\nabla \omega\rangle\bigg)\cdot\nabla v,
\end{align*}
we rewrite the first part of formula \eqref{2ndent} as the formula \eqref{thefirstpart}
\begin{align}\label{thefirstpart}
&p\left(\frac{p}{2}-1\right)\left(\frac{p}{2}-2\right)\int_{M}\omega^{p-4}e^{-v}\langle\nabla
\omega,\nabla v\rangle^{2}\,dV
\notag
+p\left(\frac{p}{2}-1\right)\int_{M}\omega^{p-3}e^{-v}\langle\nabla \omega,\nabla
v\rangle\Delta v\,dV
\notag\\
& +p\int_{M}\omega^{p-2}e^{-v}\langle\nabla\Delta v,\nabla v\rangle\,dV
-\frac{p^{2}}{2}\int_{M}\omega^{p-2}e^{-v}\langle\nabla \omega,\nabla
v\rangle\,dV
\notag\\
& +p\left(\frac{p}{2}-1\right)\int_{M}\omega^{p-3}e^{-v}\bigg(\langle\nabla\nabla v,\nabla
\omega\rangle+\langle\nabla v,\nabla\nabla \omega\rangle\bigg)\cdot\nabla v\,dV.
\end{align}
For the second term of the formula \eqref{thefirstpart}, integration by parts yields
\begin{align*}
& p\left(\frac{p}{2}-1\right)\int_{M}\omega^{p-3}e^{-v}\langle\nabla\omega,\nabla
v\rangle\Delta vdV
\\
=&-p(p-3)\left(\frac{p}{2}-1\right)\int_{M}\omega^{p-4}e^{-v}\langle\nabla\omega,\nabla
v\rangle^{2}\,dV
+ p\left(\frac{p}{2}-1\right)\int_{M}\omega^{p-2}e^{-v}\langle\nabla\omega,\nabla v\rangle\,dV
\\
- &p\left(\frac{p}{2}-1\right)\int_{M}\omega^{p-3}e^{-v}\bigg(\langle\nabla\nabla\omega,\nabla
v\rangle+\langle\nabla\omega,\nabla\nabla v\rangle\bigg)\cdot\nabla v\,dV.
\end{align*}
For the third term of the formula \eqref{thefirstpart}, one can get
\begin{align*}
& p\int_{M}\omega^{p-2}e^{-v}\langle\nabla\Delta v,\nabla v\rangle\,dV
\\
=&-\frac{p}{2}\int_{M}\nabla(\omega^{p-2}e^{-v})\cdot\nabla\omega\,dV
-p\int_{M}\omega^{p-2}e^{-v}\left(|\nabla\nabla v|^{2}+{\rm Ric}(\nabla v,\nabla v)\right)\,dV
\\
=&-\frac{p(p-2)}{2}\int_{M}\omega^{p-3}e^{-v}|\nabla\omega|^{2}\,dV
+\frac{p}{2}\int_{M}\omega^{p-2}e^{-v}\langle\nabla v,\nabla\omega\rangle\,dV
\\
-&p\int_{M}\omega^{p-2}e^{-v}\left(|\nabla\nabla v|^{2}+{\rm Ric}(\nabla v,\nabla v)\right)\,dV.
\end{align*}
Here we used integration by parts and the Bochner formula
\begin{equation*}
\Delta|\nabla v|^{2}-2\nabla v\cdot\nabla\Delta v=2|\nabla\nabla v|^{2}+2{\rm Ric}(\nabla v,\nabla v).
\end{equation*}
Collecting these terms back into formula \eqref{thefirstpart}, we get
\begin{align}\label{part1}
&-\frac{p(p-2)^{2}}{4}
\int_{M}\omega^{p-2}e^{-v}\frac{\langle\nabla\omega,\nabla v\rangle^{2}}{\omega^{2}}\,dV
-\frac{p}{2}\int_{M}\omega^{p-2}e^{-v}\langle\nabla\omega,\nabla v\rangle\,dV
\notag\\
&-\frac{p(p-2)}{2}\int_{M}\omega^{p-2}e^{-v}\frac{|\nabla\omega|^{2}}{\omega}\,dV
-p\int_{M}\omega^{p-2}e^{-v}\left(|\nabla\nabla
v|^{2}+{\rm Ric}(\nabla v,\nabla v)\right)\,dV.
\end{align}
$2)$ Compute the second part of formula \eqref{2ndent}. Recalling the detail expression of $p$-heat equation \eqref{pheat11} and integrating by parts, we have
\begin{align}\label{part2}
&-\int_{M}\omega^{\frac{p}{2}}e^{-v}\partial_{t}v\,dV
\notag\\
=&-\left(\frac{p}{2}-1\right)\int_{M}\omega^{p-2}e^{-v}\langle\nabla \omega,\nabla
v\rangle\,dV
-\int_{M}\omega^{p-1}e^{-v}\Delta v\,dV+\int_{M}\omega^{p}e^{-v}\,dV
\notag\\
=&-\left(\frac{p}{2}-1\right)\int_{M}\omega^{p-2}e^{-v}\langle\nabla \omega,\nabla
v\rangle\,dV+\int_{M}\nabla(\omega^{p-1}e^{-v})\cdot\nabla
v\,dV+\int_{M}\omega^{p}e^{-v}\,dV
\notag\\
=&\frac{p}{2}\int_{M}\omega^{p-2}e^{-v}\langle\nabla \omega,\nabla v\rangle\,dV.
\end{align}
Adding the two parts \eqref{part1} and \eqref{part2} together establishes
\begin{align*}
\frac{d^{2}}{dt^{2}}H_{p}
=&-p\int_{M}\omega^{p-2}\left(\frac{(p-2)^{2}}{4}
\frac{\langle\nabla\omega,\nabla v\rangle^{2}}{\omega^{2}}
+\frac{p-2}{2}\frac{|\nabla\omega|^{2}}{\omega}
+|\nabla\nabla v|^{2}\right)e^{-v}\,dV
\\
&-p\int_{M}\omega^{p-2}e^{-v}{\rm Ric}(\nabla v,\nabla v)\,dV
\\
=&-p\int_{M}\omega^{p-2}\left(|\nabla\nabla
v|_{A}^{2}+{\rm Ric}(\nabla v,\nabla v)\right)e^{-v}\,dV,
\end{align*}
where
\begin{align*}
|\nabla\nabla v|_{A}^{2}
&=a^{ij}a^{kl}v_{ik}v_{jl}
=\left(g^{ij}+(p-2)\frac{v^{i}v^{j}}{\omega}\right)\left(g^{kl}+(p-2)\frac{v^{k}v^{l}}{\omega}\right)v_{ik}v_{jl}
\\
&=(p-2)^{2}\frac{(v^{i}v^{k}v_{ik})(v^{j}v^{l}v_{jl})}{\omega^{2}}
+(p-2)\frac{g^{ij}v^{k}v^{l}v_{ik}v_{jl}+g^{kl}v^{i}v^{j}v_{ik}v_{jl}}{\omega}+g^{ij}g^{kl}v_{ik}v_{jl}
\\
&=\frac{(p-2)^{2}}{4}\frac{|\nabla
v\cdot\nabla\omega|^{2}}{\omega^{2}}+\frac{p-2}{2}\frac{|\nabla\omega|^{2}}{\omega}+|\nabla\nabla
v|^{2}.
\end{align*}
\endproof
We can now prove Theorem \ref{Error}.
\begin{proof}[\bf{Proof of Theorem \ref{Error}}]
The proof requires evaluating the first and second order derivatives of the $p$-entropy power $N_{p}(u)$.
The first order derivative of the $p$-entropy power is
\begin{equation*}
\frac{d}{dt}N_{p}(u)=\frac{p}{n}N_{p}(u)I_{p}(u),
\end{equation*}
which is a consequence of the DeBruijn-type identity \eqref{pfisher}. Defining $J_{p}(u)$ by
\begin{equation}\label{pdeb2}
J_{p}(u)\doteqdot-\frac1p\frac{d}{dt}I_{p}(u)
\end{equation}
so that $\frac{d^{2}}{dt^{2}}H_{p}(u)=-pJ_{p}(u)$, we get
\begin{align*}
\frac{d^{2}}{dt^{2}}N_{p}(u)&=\frac{p}{n}N_{p}(u)\left(\frac{p}{n}
\left(I_{p}(u)\right)^{2}+\frac{d^{2}}{dt^{2}}H_{p}(u)\right)
\\
&=\frac{p^{2}}{n}N_{p}(u)\left(\frac{1}{n}\left(I_{p}(u)\right)^{2}+\frac{1}{p}\frac{d^{2}}{dt^{2}}H_{p}(u)\right)
\\
&=-\frac{p^{2}}{n}N_{p}(u)\left(J_{p}(u)-\frac{1}{n}\left(I_{p}(u)\right)^{2}\right).
\end{align*}
Hence, the concavity condition $\frac{d^{2}}{dt^{2}}N_{p}(u)\leq0$ is
equivalent to
\begin{equation}\label{equiconcavity}
J_{p}(u)\geq\frac{1}{n}\left(I_{p}(u)\right)^{2}.
\end{equation}
Motivated by Villani \cite{Villani}, consider the function $A(\lambda)$
\begin{align}\label{alambda}
A(\lambda)&\doteqdot\int_{M}\left(\left|\omega^{\frac{p}{2}-1}\nabla_{i}\nabla_{j}v+\lambda
a_{ij}\right|_{A}^{2}+\omega^{p-2}{\rm Ric}(\nabla v,\nabla v)\right)e^{-v}\,dV
\notag\\
&=\int_{M}\omega^{p-2}\left(|\nabla\nabla
v|_{A}^{2}+{\rm Ric}(\nabla v,\nabla v)\right)e^{-v}\,dV+n\lambda^{2}+2\lambda\int_{M}e^{-v}\Delta_{p}v\,dV
\notag\\
&=J_{p}(u)+n\lambda^{2}+2\lambda I_{p}(u).
\end{align}
Setting $\lambda=-\frac{I_{p}(u)}{n}$ in \eqref{alambda} yields the equality
\begin{equation*}
J_p(u)-\frac1n\left(I_p(u)\right)^2=\int_M\left(\left|\omega^{\frac{p}{2}-1}\nabla_{i}\nabla_{j}v
-\frac{I_{p}(u)}{n}a_{ij}\right|_{A}^{2}+\omega^{p-2}{\rm Ric}(\nabla v,\nabla v)\right) e^{-v}\,dV.
\end{equation*}
If the Ricci curvature is nonnegative, we get the inequality \eqref{equiconcavity}.
\end{proof}
\begin{remark}
The concavity inequality \eqref{equiconcavity} can also be proved by the Cauchy-Schwarz
inequality. In fact, when the Ricci curvature is nonnegative, the $p$-type trace inequality
\begin{equation*}
\omega^{p-2}|\nabla\nabla v|_{A}^{2}\geq\frac{1}{n}\left({\rm tr}_A(\omega^{\frac{p}{2}-1}\nabla_i\nabla_jv)\right)^{2}
=\frac{1}{n}(\Delta_{p}v)^{2}
\end{equation*}
yields
\begin{align*}
J_{p}(u)\ge\frac{1}{n}\int_{M}(\Delta_{p}v)^{2}e^{-v}\,dV
\geq\frac{1}{n}\left(\int_{M}e^{-v}\Delta_{p}v\,dV\right)^{2}
=\frac{1}{n}\left(I_{p}(u)\right)^{2}.
\end{align*}
\end{remark}
\section{Isoperimetric entropy power inequality}
This section is devoted to the proof of Theorem \ref{isoper}. Looking at the first order derivative of the $p$-entropy power, we notice that the concavity property can be interpreted as the decreasing-in-time property of $t\mapsto\Psi_{p}(u)$ for a suitable functional $\Psi_{p}(u)$ (defined in \eqref{FI}). In Shannon's case, it is a classical idea to exploit the monotonicity of a dilation-invariant functional along solutions to the heat equation in order to obtain inequalities in sharp form. Accordingly, we establish inequality \eqref{isoperi} by this method.
The proof is split into three
parts. In the first part, we verify that $\Psi_{p}(u)$ (defined in \eqref{FI}) is a dilation-invariant
functional. In the second part, we address the monotonicity of this functional and
the establishment of the inequality. In the last part, we calculate the value of the
limit.
\begin{proof}[\bf{Proof of Theorem \ref{isoper}}]
Let us first introduce a functional
\begin{equation}\label{FI}
\Psi_{p}(u)\doteqdot N_{p}(u)I_{p}(u).
\end{equation}
Then inequality \eqref{isoperi} in Theorem \ref{isoper} is equivalent to
\begin{equation}\label{Psip}
\Psi_{p}(u)\geq\Psi_{p}\left(\widetilde{G}_{p}(x)\right),
\end{equation}
where $\widetilde{G}_{p}(x)$ defined in formula \eqref{Gp} is the fundamental solution of $p$-heat equation \eqref{pheat}. Therefore, our aim is to obtain the inequality \eqref{Psip}.
For $\lambda>0$, define the mass-preserving dilation
\begin{equation}\label{dilainvar}
D_{\lambda}u(x)\doteqdot\lambda^{-\frac{n}{p-1}}u(x/\lambda),
\end{equation}
we observe that the functional $\Psi_{p}(u)$ is invariant
\begin{equation}\label{invar}
\Psi_{p}(D_{\lambda}u(x))=\Psi_{p}(u(x))
\end{equation}
by using the following two identities:
\begin{equation}\label{NPDF}
N_{p}(D_{\lambda}u(x))=\lambda^{p}N_{p}(u(x)),
\end{equation}
\begin{equation}\label{IPDF}
I_{p}(D_{\lambda}u(x))=\lambda^{-p}I_{p}(u(x)).
\end{equation}
We give the detailed calculations of these identities below. Recalling the definition of the $p$-entropy \eqref{pentropy}, integration by substitution implies
\begin{align*}
H_{p}(D_{\lambda}u(x))&=-\int_{\mathbb{R}^n}(D_{\lambda}u(x))^{p-1}\log (D_{\lambda}u(x))^{p-1}\,dx\\
&=n\log\lambda\int_{\mathbb{R}^{n}}u^{p-1}(x/\lambda)\,d(x/\lambda)
-\int_{\mathbb{R}^{n}}u^{p-1}(x/\lambda)\log
u^{p-1}(x/\lambda)\,d(x/\lambda)
\\
&=n\log\lambda+H_{p}(u(x)),
\end{align*}
i.e.
\begin{equation}\label{HPDF}
H_{p}(D_{\lambda}u(x))=n\log\lambda+H_{p}(u(x)).
\end{equation}
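For $p=2$ this is the familiar behaviour of Shannon entropy under dilation: if $X$ has density $u$, then $\lambda X$ has density $D_{\lambda}u$ (indeed, for $p=2$, $D_{\lambda}u(x)=\lambda^{-n}u(x/\lambda)$), and \eqref{HPDF} becomes $H(\lambda X)=H(X)+n\log\lambda$.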
Hence, by the definition of $p$-entropy power \eqref{pSEP}, we have
\begin{equation*}
N_{p}(D_{\lambda}u(x))=\exp\left\{\frac{p}{n}H_{p}(D_{\lambda}u(x))\right\}=\lambda^{p}N_{p}(u(x)).
\end{equation*}
Applying integration by substitution to the $p$-Fisher information \eqref{pfisher} of the dilation of $u$, we get
\begin{align*}
I_{p}(D_{\lambda}u(x))&=(p-1)^{p}\int_{\mathbb{R}^{n}}\frac{|\nabla
[\lambda^{-\frac{n}{p-1}}u(x/\lambda)]|^{p}}{\lambda^{-\frac{n}{p-1}}u(x/\lambda)}\,dx
\notag\\
&=(p-1)^{p}\lambda^{-p}\int_{\mathbb{R}^{n}}\frac{|\nabla
u(x/\lambda)|^{p}}{u(x/\lambda)}\,d(x/\lambda)
\notag\\
&=\lambda^{-p}I_{p}(u(x)).
\end{align*}
Note that if $\lambda=t^{\frac{1}{p}}$, a direct application of \eqref{NPDF} to \eqref{Gpt} yields \eqref{tGpt}.
The concavity of the $p$-entropy power shows that the functional $\Psi_{p}(u)$ is non-increasing in time along solutions to the $p$-heat equation, and it reaches its greatest lower bound as time tends to infinity. Using the property \eqref{dilainvar}, we rescale $u(x,t)$ by the formula
\begin{equation*}
U(x,t)=t^{-\frac{n}{p(p-1)}}u(t^{-\frac{1}{p}}x)
=D_{t^{\frac{1}{p}}}u.
\end{equation*}
Then $\Psi_{p}(u)=\Psi_{p}(U)$ by \eqref{invar}.
On the other hand, we have $u_{\infty}\doteqdot\lim\limits_{t\rightarrow\infty}U(x,t)=\widetilde{G}_{p}(x).$
Thus, the decreasing in time property of $\Psi_{p}(u)$ implies
\begin{equation}\label{mono}
\Psi_{p}(u)\geq\Psi_{p}(u_{\infty})=\Psi_{p}(\widetilde{G}_{p}(x))=\gamma_{n,p}.
\end{equation}
The last point is the computation of $\gamma_{n,p}$. Let $q=\frac p{p-1}$. Using the identities (see \cite{Agueh})
\begin{equation*}
\int_{\mathbb{R}^{n}}e^{-| x|^{q}}\,dx
=\frac{2\pi^{\frac{n}{2}}}{q}\frac{\Gamma(\frac{n}{q})}{\Gamma(\frac{n}{2})}
=\pi^{\frac{n}{2}}\frac{\Gamma(\frac{n}{q}+1)}{\Gamma(\frac{n}{2}+1)},\quad
\int_{\mathbb{R}^{n}}| x|^{q}e^{-| x|^{q}}\,dx =\frac{n}{q}\int_{\mathbb{R}^{n}}e^{-| x|^{q}}\,dx,
\end{equation*}
we get
\begin{equation*}
\int_{\mathbb{R}^{n}}e^{-\frac{p-1}{p^{q}}| x|^{q}}\,dx
=(p^{\frac{1}{p}}q^{\frac{1}{q}})^{n}\pi^{\frac{n}{2}}\frac{\Gamma(\frac{n}{q}+1)}{\Gamma(\frac{n}{2}+1)},
\quad
\frac{p-1}{p^{q}}\int_{\mathbb{R}^{n}}| x|^{q}e^{-\frac{p-1}{p^{q}}| x|^{q}}\,dx
=\frac{n}{q}(p^{\frac{1}{p}}q^{\frac{1}{q}})^{n}
\pi^{\frac{n}{2}}\frac{\Gamma(\frac{n}{q}+1)}{\Gamma(\frac{n}{2}+1)}.
\end{equation*}
Using the above two formulae and substituting \eqref{Gp} into \eqref{pentropy}, we compute the $p$-entropy:
\begin{align}\label{HpGp}
H_{p}(\widetilde{G}_{p}(x))&=-\int_{\mathbb{R}^{n}}\widetilde{G}_{p}^{p-1}(x)
\log\widetilde{G}_{p}^{p-1}(x)\,dx
\notag\\
&=-C_{p,n}\log C_{p,n}\int_{\mathbb{R}^{n}}e^{-\frac{p-1}{p^{q}}| x|^{q}}\,dx
+\frac{p-1}{p^{q}}C_{p,n}\int_{\mathbb{R}^{n}}| x|^{q}e^{-\frac{p-1}{p^{q}}| x|^{q}}\,dx
\notag\\
&=-\log\left\{(p^{\frac{1}{p}}q^{\frac{1}{q}})^{-n}
\pi^{-\frac{n}{2}}\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}\right\}
+\frac{n}{q}
\notag\\
&=\log\frac{(p^{\frac{1}{p}}q^{\frac{1}{q}}
e^{\frac{1}{q}})^{n}}{\pi^{-\frac{n}{2}}\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}}.
\end{align}
Then, the $p$-entropy power of the time-independent function $\widetilde{G}_{p}$ is
\begin{equation}\label{NpGp}
N_{p}(\widetilde{G}_{p}(x))=\exp\left\{\frac{p}{n}H_{p}(\widetilde{G}_{p}(x))\right\}
=p(qe)^{p-1}\pi^{\frac{p}{2}}
\left[\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}\right]^{-\frac{p}{n}}.
\end{equation}
Note that $\nabla\widetilde{G}_{p}(x)=-\frac{q}{p^{q}}C_{p,n}^{\frac{1}{p-1}}e^{-\frac{| x|^{q}}{p^{q}}}| x|^{q-2}x$, so a direct calculation shows
\begin{equation}\label{IpGp}
I_{p}(\widetilde{G}_{p}(x))=p^{-q}C_{p,n}\int_{\mathbb{R}^{n}}|x|^{q}e^{-\frac{p-1}{p^{q}}|x|^{q}}\,dx
=\frac{n}{p}.
\end{equation}
Therefore, the value of $\gamma_{n,p}=\Psi_{p}(\widetilde{G}_{p}(x))$ is as given in \eqref{gammacon}.
\end{proof}
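Although not needed for the proof, the bound \eqref{isoperi} is easy to test numerically. The following Python sketch (assuming NumPy and SciPy are available) checks the Shannon case $p=2$, $n=1$, where $\gamma_{1,2}=2\pi e\approx 17.08$, for the non-Gaussian logistic density; the exact values are $N_{2}(u)=e^{4}$ and $I_{2}(u)=1/3$, so the product is about $18.20$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Logistic probability density on the real line and its derivative.
def u(x):
    return np.exp(-x) / (1.0 + np.exp(-x))**2

def du(x):
    e = np.exp(-x)
    return e * (e - 1.0) / (1.0 + e)**3

# Shannon entropy H = -int u log u dx and Fisher information
# I = int (u')^2 / u dx  (case p = 2, n = 1).
H = quad(lambda x: -u(x) * np.log(u(x)), -40, 40)[0]
I = quad(lambda x: du(x)**2 / u(x), -40, 40)[0]

N = np.exp(2.0 * H)           # entropy power for n = 1
bound = 2.0 * np.pi * np.e    # gamma_{1,2}
print(N * I, ">=", bound)     # about 18.20 >= 17.08
\end{verbatim}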
\section{$L^p$-Euclidean Nash Inequality}
One of the applications of the isoperimetric entropy power inequality \eqref{isoperi} in Theorem \ref{isoper} is the $L^p$-Nash inequality.
\begin{proof}[\bf{Proof of Theorem \ref{OPNash}}]
Let $u$ be a smooth and rapidly decaying positive function on $\mathbb{R}^n$. Set $\xi=\| u^{p-1}\|_{L^{1}}$ and $\zeta^{p-1}(x)=u^{p-1}(x)/\xi$; then $\zeta^{p-1}(x)$ is a probability density. If we rewrite the formula $\Psi_{p}(u)\geq\Psi_{p}(\widetilde{G}_{p}(x))$ of \eqref{mono} as
\begin{equation}\label{inequali}
\frac{I_{p}(u)}{I_{p}(\widetilde{G}_{p})}\geq
\exp\left\{-\frac{p}{n}[H_{p}(u)-H_{p}(\widetilde{G}_{p})]\right\},
\end{equation}
then we get
\begin{align*}
I_p(u)&=I_{p}(\xi^{\frac{1}{p-1}}\zeta)=\xi I_{p}(\zeta)
\notag\\
&\geq \xi
I_{p}(\widetilde{G}_{p})\exp\left\{\frac{p}{n}H_{p}(\widetilde{G}_{p})\right\}
\exp\left\{-\frac{p}{n}H_{p}(\zeta)\right\}
\notag\\
&=\xi
I_{p}(\widetilde{G}_{p})\exp\left\{\frac{p}{n}[H_{p}(\widetilde{G}_{p})-\log\xi]\right\}
\exp\left\{-\frac{p}{n}[H_{p}(\zeta)-\log\xi]\right\}
\notag\\
&=\xi
I_{p}(\widetilde{G}_{p})
\exp\left\{\frac{p}{n}\frac{1}{\xi}H_{p}(\xi^{\frac{1}{p-1}}\widetilde{G}_{p})\right\}
\exp\left\{-\frac{p}{n}\frac{1}{\xi}H_{p}(\xi^{\frac{1}{p-1}}\zeta)\right\},
\end{align*}
where we use the identity $ H_{p}(\xi^{\frac{1}{p-1}}\zeta)=\xi H_{p}(\zeta)-\xi\log\xi.$
Applying the formulae \eqref{IpGp} and \eqref{HpGp}, we obtain
\begin{align}\label{infoineq}
I_{p}(u)&\geq\xi\frac{n}{p}\exp\left\{-\frac{p}{n}\frac{1}{\xi}H_{p}(u)\right\}
\exp\left\{\frac{p}{n}H_{p}(\widetilde{G_{p}})\right\}\exp\left\{-\frac{p}{n}\log\xi\right\}
\notag\\
&=\xi^{1-\frac{p}{n}}n(qe)^{p-1}\pi^{\frac{p}{2}}
\left[\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}\right]^{-\frac{p}{n}}
\exp\left\{-\frac{p}{n}\frac{1}{\xi}H_{p}(u)\right\}.
\end{align}
Let $g(x)$ be a probability density function and set $u(x)=g^{q}(x)$, $q=\frac{p}{p-1}$. Then
\begin{align*}
H_{p}(u)&=H_{p}(g^{q})
=-\int_{\mathbb{R}^{n}}g^{(p-1)q}(x)\log
g^{(p-1)q}(x)\,dx\\
&=-q\int_{\mathbb{R}^{n}}\left(g^{p-1}(x)\log g^{p-1}(x)\right)g(x)\,dx.
\end{align*}
Hence, we have
\begin{equation}\label{Jensen}
-H_{p}(g^{q})\geq q\left(\int_{\mathbb{R}^{n}}g^{p}(x)\,dx\right)
\log\left(\int_{\mathbb{R}^{n}}g^{p}(x)\,dx\right)=q\xi\log\xi
\end{equation}
by using Jensen's inequality for the convex function $u\mapsto u\log u$ on $\mathbb{R}_+$.
Combining inequalities \eqref{infoineq} and \eqref{Jensen} with the identity
\begin{equation}
I_{p}(u)=I_{p}(g^{q})=p^{p}\int_{\mathbb{R}^{n}}|\nabla g(x)|^{p}\,dx,
\end{equation}
we obtain
\begin{align}\label{PNash}
\left(\int_{\mathbb{R}^{n}}g^{p}\,dx\right)^{1+\frac{q}{n}}
&\leq
\frac{p}{n}\left(\frac{p-1}{e}\right)^{p-1}\pi^{-\frac{p}{2}}
\left[\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}\right]^{\frac{p}{n}}
\int_{\mathbb{R}^{n}}|\nabla g(x)|^{p}\,dx
\notag\\
&=p^{p}\gamma_{n,p}^{-1}\int_{\mathbb{R}^{n}}|\nabla g(x)|^{p}\,dx,
\end{align}
where $\gamma_{n,p}$ is defined in \eqref{gammacon}.
If $\|g\|_{L^1}\neq1$, we can obtain the general $L^p$-Euclidean Nash inequality \eqref{PNashineq} by replacing $g$ with $g/\|g\|_{L^1}$ in \eqref{PNash}.
\end{proof}
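As an independent numerical check of \eqref{PNashineq} (again only a sketch, assuming NumPy and SciPy), one can evaluate both sides for a simple test function such as $g(x)=e^{-x^{2}}$ with $n=1$ and several values of $p$; the inequality must hold in each case.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def nash_check(p, n=1):
    q = p / (p - 1.0)
    # gamma_{n,p} from (gammacon).
    gam = n * (q * np.e)**(p - 1) * np.pi**(p / 2.0) \
        * (Gamma(n / 2.0 + 1) / Gamma(n / q + 1))**(-p / n)
    g  = lambda x: np.exp(-x**2)           # test function
    dg = lambda x: -2.0 * x * np.exp(-x**2)
    int_gp  = quad(lambda x: g(x)**p, -10, 10)[0]        # int g^p
    int_g   = quad(g, -10, 10)[0]                        # int g
    int_dgp = quad(lambda x: abs(dg(x))**p, -10, 10)[0]  # int |g'|^p
    lhs = int_gp**(1 + q / n)
    rhs = p**p / gam * int_g**(p * q / n) * int_dgp
    return lhs, rhs

for p in (1.5, 2.0, 3.0):
    lhs, rhs = nash_check(p)
    print(p, lhs <= rhs)   # True for each p
\end{verbatim}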
\section{$L^p$-Euclidean logarithmic Sobolev inequality }
In \cite{Toscani}, G.~Toscani obtained the sharp logarithmic Sobolev inequality
as a direct consequence of the concavity of entropy power; moreover, he gave an improvement of the logarithmic Sobolev inequality. In this section, we give a new proof of the sharp $L^p$-version of the logarithmic Sobolev inequality (see \cite{Del Pino} and \cite{Gentil}) by applying the concavity of the $p$-entropy power, and we also obtain an improvement of the $L^p$-logarithmic Sobolev inequality.
Let us first recall the inequality \eqref{inequali}
$$
\frac{I_{p}(u)}{I_{p}(\widetilde{G}_{p})}\geq
\exp\left\{-\frac{p}{n}[H_{p}(u)-H_{p}(\widetilde{G}_{p})]\right\}.
$$
Using the identities \eqref{HpGp} and \eqref{IpGp} and the fact that $e^{-x}\geq1-x$, we have
\begin{equation}\label{loga}
(p-1)^{p}\int_{\mathbb{R}^{n}}\frac{|\nabla u|^{p}}{u}\,dx
\geq \int_{\mathbb{R}^{n}}u^{p-1}\log u^{p-1}\,dx
+\log\frac{(p^{\frac{1}{p}}q^{\frac{1}{q}}e)^{n}}{\pi^{-\frac{n}{2}}
\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}}.
\end{equation}
Inequality \eqref{loga} is exactly the $L^p$-Euclidean logarithmic Sobolev inequality of \cite{Del Pino} and \cite{Gentil}. The equivalence is shown below.
In fact, setting $u=g^{q}$, $q=\frac{p}{p-1}$, we rewrite inequality \eqref{loga} as
\begin{equation}\label{logari}
\int_{\mathbb{R}^{n}}g^{p}\log g^{p}\,dx
\leq p^{p}\int_{\mathbb{R}^{n}}|\nabla g|^{p}\,dx
-\log\frac{(p^{\frac{1}{p}}q^{\frac{1}{q}}e)^{n}}{\pi^{-\frac{n}{2}}
\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}}.
\end{equation}
Replacing $g(x)$ by $g_{h}(x)=h^{\frac{n}{p}}g(hx)$, $h>0$, $x\in \mathbb{R}^{n}$, and noting that $g_{h}(x)$ satisfies
$\int_{\mathbb{R}^{n}}g_{h}^{p}\,dx=\int_{\mathbb{R}^{n}}g^{p}\,dx=1$, we have the following two formulae:
\begin{align*}
\int_{\mathbb{R}^{n}}|\nabla g_{h}|^{p}\,dx&=h^{p}\int_{\mathbb{R}^{n}}|\nabla g|^{p}\,dx,\\ \int_{\mathbb{R}^{n}}g_{h}^{p}\log g_{h}^{p}\,dx&=\int_{\mathbb{R}^{n}}g^{p}\log g^{p}\,dx+n\log h.
\end{align*}
Then applying inequality \eqref{logari} to $g_{h}(x)$ yields
\begin{equation*}
\int_{\mathbb{R}^{n}}g^{p}\log g^{p}\,dx
\leq p^{p}h^{p}\int_{\mathbb{R}^{n}}|\nabla g|^{p}\,dx-n\log h -\log\frac{(p^{\frac{1}{p}}q^{\frac{1}{q}}e)^{n}}{\pi^{-\frac{n}{2}}\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}}.
\end{equation*}
Setting $h^{p}=\frac{np^{-(p+1)}}{\int_{\mathbb{R}^{n}}|\nabla g|^{p}\,dx}$, which optimizes the right-hand side over $h$, we conclude with inequality \eqref{LpEu}:
\begin{equation*}
\int_{\mathbb{R}^{n}}g^{p}\log g^{p}\,dx
\leq\frac{n}{p}\log\left(p^{p}\gamma^{-1}_{n,p}\int_{\mathbb{R}^{n}}|\nabla g|^{p}\,dx\right).
\end{equation*}
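When $p=2$, since $2^{2}\gamma_{n,2}^{-1}=\frac{2}{\pi en}$, inequality \eqref{LpEu} reduces to the classical sharp Euclidean logarithmic Sobolev inequality
\begin{equation*}
\int_{\mathbb{R}^{n}}g^{2}\log g^{2}\,dx\leq\frac{n}{2}\log\left(\frac{2}{\pi en}\int_{\mathbb{R}^{n}}|\nabla g|^{2}\,dx\right),\qquad \int_{\mathbb{R}^{n}}g^{2}\,dx=1.
\end{equation*}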
\begin{proof}[\bf Proof of inequality \eqref{GLogSobolev} in Theorem \ref{improve}]
In the first part of this section, we have shown that the $L^p$-logarithmic Sobolev inequality \eqref{LpEu} is equivalent to inequality \eqref{loga}.
Let us consider the decomposition
\begin{align}\label{HpuHpG}
-H_{p}(u)+H_{p}(\widetilde{G}_{p})
=\int_{\mathbb{R}^{n}}u^{p-1}\log\left(u^{p-1}/\widetilde{G}_{p}^{p-1}\right)\,dx
+\int_{\mathbb{R}^{n}}\left(u^{p-1}-\widetilde{G}_{p}^{p-1}\right)\log\widetilde{G}_{p}^{p-1}\,dx.
\end{align}
Under the moment assumption $\frac{1}{n}\int_{\mathbb{R}^{n}}|x|^{q}u^{p-1}\,dx\leq p^{\frac{1}{p-1}}$, the second term in \eqref{HpuHpG} satisfies
\begin{equation*}
\int_{\mathbb{R}^{n}}\left(u^{p-1}-\widetilde{G}_{p}^{p-1}\right)\log\widetilde{G}_{p}^{p-1}\,dx
=\frac{p-1}{p^{q}}\int_{\mathbb{R}^{n}}|x|^{q}(\widetilde{G}_{p}^{p-1}-u^{p-1})\,dx\geq0,
\end{equation*}
where we used the facts that
$\int_{\mathbb{R}^{n}}\widetilde{G}_{p}^{p-1}\,dx=1=\int_{\mathbb{R}^{n}}u^{p-1}\,dx$ and
$\int_{\mathbb{R}^{n}}| x|^{q}\widetilde{G}_{p}^{p-1}\,dx=np^{\frac{1}{p-1}}$.\\
A Taylor expansion of the first term in \eqref{HpuHpG} yields
\begin{align*}
&\int_{\mathbb{R}^{n}}u^{p-1}\log\left(u^{p-1}/\widetilde{G}_{p}^{p-1}\right)\,dx\\
=&\int_{\mathbb{R}^{n}}\left[\psi\left(u^{p-1}/\widetilde{G}_{p}^{p-1}\right)
-\psi(1)\right]\widetilde{G}_{p}^{p-1}\,dx
\\
\geq&\psi'(1)\int_{\mathbb{R}^{n}}\left(u^{p-1}/\widetilde{G}_{p}^{p-1}-1
\right)\widetilde{G}_{p}^{p-1}\,dx
+\frac{\psi''(1)}{2}\int_{\mathbb{R}^{n}}\left(u^{p-1}/\widetilde{G}_{p}^{p-1}-1
\right)^{2}\widetilde{G}_{p}^{p-1}\,dx
\\
=&\frac{1}{2}\int_{\mathbb{R}^{n}}\left(u^{p-1}/\widetilde{G}_{p}^{p-1}-1
\right)^{2}\widetilde{G}_{p}^{p-1}\,dx,
\end{align*}
where $\psi(x)=x\log x$ is a convex function. For $\theta\in(0,2]$, we obtain
\begin{equation*}
\int_{\mathbb{R}^{n}}\left(u^{p-1}/\widetilde{G}_{p}^{p-1}-1
\right)^{\theta}\widetilde{G}_{p}^{p-1}\,dx
\leq\left(\int_{\mathbb{R}^{n}}\left(u^{p-1}/\widetilde{G}_{p}^{p-1}-1
\right)^{2}\widetilde{G}_{p}^{p-1}\,dx\right)^{\frac{\theta}{2}}
\left(\int_{\mathbb{R}^{n}}\widetilde{G}_{p}^{p-1}\,dx\right)^{1-\frac{\theta}{2}}
\end{equation*}
by H\"older's inequality.
From this, we get the Csisz\'ar-Kullback type inequality
\begin{equation}\label{C-Ktype}
\int_{\mathbb{R}^{n}}u^{p-1}\log\left(u^{p-1}/\widetilde{G}_{p}^{p-1}\right)\,dx
\geq\frac{1}{2}
\left(\int_{\mathbb{R}^{n}}\left(u^{p-1}/\widetilde{G}_{p}^{p-1}-1
\right)^{\theta}\widetilde{G}_{p}^{p-1}\,dx
\right)^{\frac{2}{\theta}}.
\end{equation}
When $\theta=1$, the right-hand side of inequality \eqref{C-Ktype} becomes a squared $L^{1}$-norm:
\begin{equation*}
\int_{\mathbb{R}^{n}}u^{p-1}\log\left(u^{p-1}/\widetilde{G}_{p}^{p-1}\right)\,dx
\geq\frac{1}{2}\| u^{p-1}-\widetilde{G}_{p}^{p-1}\|_{L^{1}}^{2}.
\end{equation*}
When $\theta=1$ and $p=2$, \eqref{C-Ktype} is exactly the classical Csisz\'ar-Kullback inequality
used in Shannon's case to refine the logarithmic Sobolev inequality (see \cite{Toscani}).
Proceeding towards the improvement, equation \eqref{HpuHpG} now gives
\begin{equation*}
-H_{p}(u)+H_{p}(\widetilde{G}_{p})
\geq\frac{1}{2}\left(\int_{\mathbb{R}^{n}}\left(u^{p-1}/\widetilde{G}_{p}^{p-1}-1\right)
^{\theta}\widetilde{G}_{p}^{p-1}\,dx
\right)^{\frac{2}{\theta}}.
\end{equation*}
Applying the inequality $e^{-x}\geq1-x+\frac{1}{2}x^{2}$, valid for $x\leq0$, to \eqref{inequali} with $x=\frac{p}{n}[H_{p}(u)-H_{p}(\widetilde{G}_{p})]\leq0$,
we can strengthen \eqref{loga} to
\begin{align*}
&(p-1)^{p}\int_{\mathbb{R}^{n}}\frac{|\nabla u|^{p}}{u}\,dx
-\left(\int_{\mathbb{R}^{n}}u^{p-1}\log u^{p-1}\,dx
+\log\frac{(p^{\frac{1}{p}}q^{\frac{1}{q}}e)^{n}}{\pi^{-\frac{n}{2}}
\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}}\right)\\
\geq &\frac{p}{8n}\left(\int_{\mathbb{R}^{n}}\left(u^{p-1}/\widetilde{G}_{p}^{p-1}-1\right)
^{\theta}\widetilde{G}_{p}^{p-1}\,dx
\right)^{\frac{4}{\theta}}.
\end{align*}
Recalling the expression of $\gamma_{n,p}$, we see that
\begin{equation*}
\frac{n}{p}\log\left(\frac{pe}{n}\gamma_{n,p}\right)
=\log\frac{(p^{\frac{1}{p}}q^{\frac{1}{q}}e)^{n}}{\pi^{-\frac{n}{2}}
\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{q}+1)}}.
\end{equation*}
Setting $u=g^{q}$, $q=\frac{p}{p-1}$, this implies \eqref{GLogSobolev}, an improvement of the $L^p$-logarithmic Sobolev inequality.
\end{proof}
\section*{Acknowledgements}
This work has been partially supported by the National Science Foundation of China, NSFC (No.11701347). The first author would like to thank Professor Xiang-Dong Li for his interest and illuminating discussion.
|
2,869,038,155,842 | arxiv |
\section{Introduction}
Inferring the parameters of sources of gravitational waves (GW) such as compact binaries detected by LIGO~\cite{TheLIGOScientific:2014jea} and Virgo~\cite{TheVirgo:2014hva} is based on the evaluation of the Bayesian posterior probability of fifteen or more parameters that govern the shape of the signals~\cite{LIGOScientific:2019hgc}.
Because the mapping between parameters and signals requires millions of waveform model evaluations, the analysis is computationally very expensive.
Each likelihood evaluation requires generating the GW signal corresponding to a set of source parameters and computing its noise-weighted correlation with detector data~\cite{LIGOScientific:2019hgc}.
At present, accurate component-mass estimates only become available hours or even days after the detection of a binary black-hole coalescence.
A full Markov-Chain Monte Carlo (MCMC) parameter estimation takes days to weeks to complete \emph{for each single event}. The improvements in sensitivity of the GW detector network for the next observing run, scheduled to start in late 2021, will lead to unprecedentedly high detection rates of binary merger events of $\sim 1$ per week to $\sim 1/$ per day~\cite{Aasi:2013wya}.
There is thus an urgent need to develop fast and efficient methods for parameter estimation.
Furthermore, low-latency detection and characterization of GWs is essential to trigger multi-messenger time-sensitive searches in order to find electromagnetic and/or astroparticle counterparts to the GW signal~\cite{Shawhan:2019kyc}.
Latest advances in deep learning offer an exciting direction of research towards fast detection and parameter estimation of GWs.
For \emph{signal detection}, several previous works~\citep{George:2016hay,Li:2017chi,George:2017pmj,Gabbard:2017lja,Gebhard:2019ldz} have framed the problem as an instance of supervised classification and showcased how to make use of convolutional neural networks (CNNs) to treat GW signals.
Results presented across these works have shown fast and accurate detection both for synthetic and real data, hence confirming that CNNs are a useful and promising tool to produce real-time triggers.
For \emph{parameter estimation}, deep learning-based alternatives to sampling algorithms have also been investigated. A large panel of neural architectures has been used, e.g. Bayesian Neural Networks~\citep{Shen:2019vep}, Conditional Variational-Autoencoders~\citep{Gabbard:2019rde}, Mixture Density Networks~\citep{Chua:2019wwt} and Normalizing Flows~\citep{green2020complete, Green:2020hst} (for a recent review see Ref.~\cite{Cuoco2020-ue}).
These studies all point towards the fact that fast and accurate parameter estimation is within reach.
Here, we build upon recent developments~\cite{Cranmer:2015bka,2019arXiv190304057H,Brehmer:2019jyt} for simulation-based inference~\cite{Cranmer:2019eaq}, and demonstrate that they can drastically accelerate Bayesian posterior parameter estimation for \ac{GW} signals in realistic multi-detector scenarios. Our approach makes use of supervised classification models, which unlocks battle-tested neural network architectures for high-dimensional data such as CNNs, in a way complementary to previous works. We find that, \textit{e.g.}, the localization of binary black-hole mergers can be accelerated by at least three orders of magnitude without compromising precision or accuracy.
\medskip
\section{GW signal inference}
The \ac{GW} emission by the quasi-circular inspiral and merger of a binary \ac{BH} system may be characterized by eight parameters intrinsic to the system: the two \ac{BH} masses $m_{1,2}$ and the six components of the spin vectors $\boldsymbol{S}_{1,2}$.
Additionally, there are seven extrinsic parameters that relate the source position and orientation to the observer frame: the luminosity distance $d_L$, the right ascension $\alpha$ and the declination $\delta$ defining the position on the sky, the coalescence time $t_c$, the binary inclination angle $\theta_\text{JN}$, the coalescence phase angle $\Phi$ and the polarization angle $\Psi$~\cite{LIGOScientific:2019hgc}.
Here we use the waveform model
\texttt{IMRPhenomPv2}~\cite{Hannam:2013oca},
which includes a phenomenological frequency domain description of the inspiral, merger and final \ac{BH} ringdown phases.
As a proof of concept, we choose this model because it is very fast to evaluate while still incorporating precession effects. However, model complexity is not really a limitation here, because our method only requires running the simulator once, to generate the training dataset.
The simulation outputs the strain observed at each detector $\mathbf{h}(t, \boldsymbol \stattheta)$, a vector function of the time and of the 15 waveform parameters, $\boldsymbol \stattheta \equiv \{ m_1, m_2, \boldsymbol{S}_1, \boldsymbol{S}_2, d_L, \alpha, \delta, t_c, \theta_\text{JN}, \Phi, \Psi \}$. We denote synthetic and real time-series data by
$\boldsymbol x = \mathbf{h} + \mathbf{n}$, where $\mathbf{n}$ refers to heteroscedastic Gaussian noise that depends on the detectors.
The task of gravitational wave parameter estimation is to determine the waveform model parameters $\boldsymbol \stattheta$ that are the most compatible with the strain data time series $\boldsymbol x$.
In gravitational wave analysis, this inference problem is usually framed as the computation of the Bayesian posterior distribution
\begin{equation}
p(\boldsymbol \stattheta|\boldsymbol x) = \frac{p(\boldsymbol x|\boldsymbol \stattheta)p(\boldsymbol \stattheta)}{p(\boldsymbol x)},
\end{equation}
where $p(\boldsymbol x|\boldsymbol \stattheta)$ is the likelihood of the model parameters $\boldsymbol \stattheta$ given the observed data time series $\boldsymbol x$, $p(\boldsymbol \stattheta)$ is a prior distribution over the model parameters and $p(\boldsymbol x)$ is the model evidence.
Since both the likelihood function and the prior can be evaluated, sampling algorithms such as MCMC or Nested sampling are commonly used to approximate the posterior \cite{Veitch:2014wba}. Obtaining a sufficient number of posterior samples may however be computationally expensive, taking days for binary black-hole mergers, and weeks for binary neutron-star mergers.
\bigskip
\section{Inference amortization}
We build upon previous simulation-based inference algorithms~\cite{Cranmer:2015bka,2019arXiv190304057H,Brehmer:2019jyt} to approximate the likelihood-to-evidence ratio
\begin{equation}
r(\boldsymbol x|\boldsymbol \stattheta) \equiv \frac{p(\boldsymbol x|\boldsymbol \stattheta)}{p(\boldsymbol x)}
\end{equation}
with a neural network.
As demonstrated in \cite{2019arXiv190304057H}, this can be achieved by considering as positive examples (labeled $y=1$) those strain-parameter pairs $\boldsymbol x,\boldsymbol \stattheta \sim p(\boldsymbol x,\boldsymbol \stattheta) = p(\boldsymbol x|\boldsymbol \stattheta) p(\boldsymbol \stattheta)$ which are distributed jointly,
and as negative examples (labeled $y=0$) those strain-parameter pairs $\boldsymbol x,\boldsymbol \stattheta \sim p(\boldsymbol x)p(\boldsymbol \stattheta)$ which are sampled marginally (independently from their respective marginal distributions).
Under mild assumptions, the decision function modeled by the Bayes optimal classifier for this binary classification problem is
\begin{equation}
s^*(\boldsymbol x,\boldsymbol \stattheta) = \frac{p(\boldsymbol x,\boldsymbol \stattheta)}{p(\boldsymbol x,\boldsymbol \stattheta) + p(\boldsymbol x)p(\boldsymbol \stattheta)},
\end{equation}
which recovers the likelihood-to-evidence ratio as
\begin{equation}
\frac{s^*(\boldsymbol x,\boldsymbol \stattheta)}{1-s^*(\boldsymbol x,\boldsymbol \stattheta)} = \frac{p(\boldsymbol x,\boldsymbol \stattheta)}{p(\boldsymbol x)p(\boldsymbol \stattheta)} = \frac{p(\boldsymbol x|\boldsymbol \stattheta)}{p(\boldsymbol x)} = r(\boldsymbol x|\boldsymbol \stattheta).
\end{equation}
After an upfront simulation and training phase, Bayesian inference for any observed strain $\boldsymbol x$ is \emph{amortized}: the fast evaluation of the posterior reduces to
\begin{equation}
\hat{p}(\boldsymbol \stattheta|\boldsymbol x) = \hat{r}(\boldsymbol x|\boldsymbol \stattheta)p(\boldsymbol \stattheta),
\end{equation}
where the evaluation of the prior $p(\boldsymbol \stattheta)$ is immediate while the evaluation of likelihood-to-evidence ratio $\hat{r}(\boldsymbol x|\boldsymbol \stattheta)$ only requires a single forward pass through the neural network -- \emph{an operation which is fast and easily parallelizable in comparison to sampling algorithms.}
This amortization procedure is generic and can be applied to any Bayesian posterior inference task. In particular, it also applies for the fast evaluation of marginal posterior distributions defined over a subset of parameters of interest, such as the posterior over the component masses $m_1-m_2$, the posterior over the mass ratio and the effective spin $q-\chi_\text{eff}$, or the posterior over right ascension and declination $\alpha-\delta$. In this case, the corresponding marginal likelihood functions are typically not available analytically and the posterior sampling procedure must be run over the entire parameter space for the only purpose of marginalizing out the other parameters.
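For concreteness, the following PyTorch-style sketch shows the training loop implied by this construction; the ingredients (\texttt{classifier}, \texttt{sample\_prior}, \texttt{simulate}, batch size) are placeholders rather than the exact components used in this work.
\begin{verbatim}
import torch
import torch.nn as nn

# classifier(x, theta) returns the logit log r(x|theta), so that
# sigmoid(logit) models the class probability p(y=1 | x, theta).

def train_ratio_estimator(classifier, sample_prior, simulate,
                          n_steps=10000, batch=256, lr=1e-4):
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(n_steps):
        theta = sample_prior(batch)                # theta ~ p(theta)
        x = simulate(theta)                        # (x, theta) ~ p(x, theta)
        theta_marg = theta[torch.randperm(batch)]  # (x, theta') ~ p(x)p(theta)
        logits = torch.cat([classifier(x, theta),        # label y = 1
                            classifier(x, theta_marg)])  # label y = 0
        labels = torch.cat([torch.ones(batch), torch.zeros(batch)])
        loss = bce(logits.squeeze(), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return classifier

# Amortized inference: for an observed strain x_obs, the unnormalized
# posterior at theta is exp(classifier(x_obs, theta)) * prior(theta).
\end{verbatim}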
\medskip
\section{Network architecture and training}
The neural network used to model the likelihood-to-evidence ratio and to demonstrate our inference method on GW signals is illustrated in Figure~\ref{fig:nn}. The Hanford and Livingston detectors input strains $\boldsymbol x$ are processed by a convolutional trunk made of 13 layers of dilated convolutions. The output log-ratio is computed by a 3-layer fully connected network ending with no activation function and taking as input both the convolutional feature map and the parameters $\boldsymbol \stattheta$. At training time, the class $y=1$ probability $p(y=1|\boldsymbol x, \boldsymbol \stattheta)$ is computed as the sigmoid activation $\sigma(\log r(\boldsymbol x|\boldsymbol \stattheta))$ of the network's output.
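A minimal PyTorch sketch of such an architecture is given below; the channel counts, kernel sizes, pooling of the feature map and hidden-layer widths are illustrative assumptions, not the precise configuration of Figure~\ref{fig:nn}.
\begin{verbatim}
import torch
import torch.nn as nn

class RatioEstimator(nn.Module):
    """Dilated-convolution trunk over the two-detector strain followed
    by a fully connected head that also receives the parameters theta
    and outputs the scalar log likelihood-to-evidence ratio."""
    def __init__(self, n_params=15, channels=32, n_conv=13, hidden=256):
        super().__init__()
        layers, in_ch = [], 2            # Hanford + Livingston channels
        for i in range(n_conv):
            layers += [nn.Conv1d(in_ch, channels, kernel_size=2,
                                 dilation=2**i),
                       nn.ReLU()]
            in_ch = channels
        self.trunk = nn.Sequential(*layers)
        self.head = nn.Sequential(
            nn.Linear(channels + n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))        # no final activation: log r

    def forward(self, x, theta):
        feat = self.trunk(x).mean(dim=-1)   # pool the feature map
        return self.head(torch.cat([feat, theta], dim=-1))
\end{verbatim}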
\begin{figure*}[t]
\hspace{-0.75cm}
\begin{tikzpicture}[scale=0.85, every node/.style={transform shape}]
\tikzstyle{box} = [draw=black, fill=white, thick, rectangle, anchor=west, rounded corners, minimum height=0.7cm]
\tikzstyle{circ} = [circle, minimum height=2.85mm, inner sep=0mm]
\tikzstyle{cfill} = [draw=color0, very thick, fill=white]
\tikzstyle{cdot} = [circle, fill=color0, minimum size=0.75mm, inner sep=0mm]
\tikzstyle{dashy} = [thick, dashed, line cap=round, gray!50]
\tikzstyle{dilation} = [anchor=center, align=center, font=\scriptsize, inner sep=0.25mm, fill=white]
\tikzstyle{arr} = [very thick, line cap=round, postaction={decorate}, decoration={markings, mark=at position 0.55 with {\arrow{>}}}]
\foreach \y in {3} { \foreach \x in {3.5,4.0,...,8.5} { \draw [dashy] (\x, \y-0.65) -- (\x, \y); } \foreach \x in {9.5,10.0,...,14.5} { \draw [dashy] (\x, \y-0.65) -- (\x, \y); } }
\foreach \y in {4} { \foreach \x in {4.0,4.5,...,8.5} { \draw [dashy] (\x, \y) -- (\x, \y-1); \draw [dashy] (\x, \y) -- (\x-0.5, \y-1); } \foreach \x in {10.0,10.5,...,14.5} { \draw [dashy] (\x, \y) -- (\x, \y-1); \draw [dashy] (\x, \y) -- (\x-0.5, \y-1); } }
\foreach \y in {5} { \foreach \x in {4.5,5.0,...,8.0} { \draw [dashy] (\x, \y) -- (\x-0.5, \y-1); \draw [dashy] (\x, \y) -- (\x+0.5, \y-1); } \foreach \x in {10.0,10.5,...,14.0} { \draw [dashy] (\x, \y) -- (\x-0.5, \y-1); \draw [dashy] (\x, \y) -- (\x+0.5, \y-1); } }
\foreach \y in {6} { \foreach \x in {5.5,6.0,...,7.5} { \draw [dashy] (\x, \y) -- (\x-1.0, \y-1); \draw [dashy] (\x, \y) -- (\x+1.0, \y-1); } \foreach \x in {10.5,11.0,...,13.0} { \draw [dashy] (\x, \y) -- (\x-1.0, \y-1); \draw [dashy] (\x, \y) -- (\x+1.0, \y-1); } }
\node [circ, fill] (1-6) at (6.5, 6) {};
\node [circ] (1-5) at (5.5, 5) {};
\node [circ] (2-5) at (7.5, 5) {};
\node [circ] (1-4) at (5.0, 4) {};
\node [circ] (2-4) at (6.0, 4) {};
\node [circ] (3-4) at (7.0, 4) {};
\node [circ] (4-4) at (8.0, 4) {};
\node [circ] (1-3) at (4.5, 3) {};
\node [circ] (2-3) at (5.0, 3) {};
\node [circ] (3-3) at (5.5, 3) {};
\node [circ] (4-3) at (6.0, 3) {};
\node [circ] (5-3) at (6.5, 3) {};
\node [circ] (6-3) at (7.0, 3) {};
\node [circ] (7-3) at (7.5, 3) {};
\node [circ] (8-3) at (8.0, 3) {};
\draw [arr] (1-5) -- (1-6);
\draw [arr] (2-5) -- (1-6);
\draw [arr] (1-4) -- (1-5);
\draw [arr] (2-4) -- (1-5);
\draw [arr] (3-4) -- (2-5);
\draw [arr] (4-4) -- (2-5);
\draw [arr] (1-3) -- (1-4);
\draw [arr] (2-3) -- (1-4);
\draw [arr] (3-3) -- (2-4);
\draw [arr] (4-3) -- (2-4);
\draw [arr] (5-3) -- (3-4);
\draw [arr] (6-3) -- (3-4);
\draw [arr] (7-3) -- (4-4);
\draw [arr] (8-3) -- (4-4);
\draw [arr] (1-3) ++(0, -0.65) -- (1-3);
\draw [arr] (2-3) ++(0, -0.65) -- (2-3);
\draw [arr] (3-3) ++(0, -0.65) -- (3-3);
\draw [arr] (4-3) ++(0, -0.65) -- (4-3);
\draw [arr] (5-3) ++(0, -0.65) -- (5-3);
\draw [arr] (6-3) ++(0, -0.65) -- (6-3);
\draw [arr] (7-3) ++(0, -0.65) -- (7-3);
\draw [arr] (8-3) ++(0, -0.65) -- (8-3);
\foreach \y in {3} {
\foreach \x in {3.5,4.0,...,8.5} {
\node [circ, cfill] at (\x, \y) {};
}
\foreach \x in {9.5,10.0,...,14.5} {
\node [circ, cfill] at (\x, \y) {};
}
}
\foreach \y in {4} {
\foreach \x in {4.0,4.5,...,8.5} {
\node [circ, cfill] at (\x, \y) {};
}
\foreach \x in {9.5,10.0,...,14.5} {
\node [circ, cfill] at (\x, \y) {};
}
}
\foreach \y in {5} {
\foreach \x in {4.5,5.0,...,8.5} {
\node [circ, cfill] at (\x, \y) {};
}
\foreach \x in {9.5,10.0,...,14.0} {
\node [circ, cfill] at (\x, \y) {};
}
}
\foreach \y in {6} {
\foreach \x in {5.5,6.0,...,8.5} {
\node [circ, cfill] at (\x, \y) {};
}
\foreach \x in {9.5,10.0,...,13.0} {
\node [circ, cfill] at (\x, \y) {};
}
}
\node [circ, fill=color0] at (6.5, 6) {};
\foreach \y in {3, 4, 5, 6} { \foreach \x in {9.0} { \node [cdot, xshift=-1.5mm] at (\x, \y) {}; \node [cdot, xshift= 0.0mm] at (\x, \y) {}; \node [cdot, xshift= 1.5mm] at (\x, \y) {}; } }
\foreach \y in {6.5} { \foreach \x in {5.5,6.0,...,8.5} { \node [cdot, yshift=-1.5mm] at (\x, \y) {}; \node [cdot, yshift= 0.0mm] at (\x, \y) {}; \node [cdot, yshift= 1.5mm] at (\x, \y) {}; } \foreach \x in {9.5,10.0,...,13.0} { \node [cdot, yshift=-1.5mm] at (\x, \y) {}; \node [cdot, yshift= 0.0mm] at (\x, \y) {}; \node [cdot, yshift= 1.5mm] at (\x, \y) {}; } }
\foreach \y in {1.30} { \foreach \x in {3.5,4.0,...,14.5} { \draw [very thick, gray!50, ->] (\x, \y-0.2) -- (\x, \y+0.2); } }
\foreach \y in {7.95} { \foreach \x in {5.5,6.0,...,13.0} { \draw [gray!50, very thick] (\x, \y-0.4) -- (\x, \y); } }
\draw [very thick, gray!50, -] (16.0, 0.75) -- (16.0, 7.25);
\draw [very thick, gray!50, -] (16.0, 7.25) -- (14.5, 7.25);
\draw [very thick, gray!50, -] (14.5, 7.25) -- (14.5, 8.25);
\foreach \y in {10.25} {
\foreach \x in {5.5,6.0,...,9.5} { \draw [dashy] (\x, \y) -- (\x, \y-1); }
\foreach \x in {10.5, 11.0,...,14.5} { \draw [dashy] (\x, \y) -- (\x, \y-1);}
\foreach \x in {6.0,6.5,...,9.5} {\draw [dashy] (\x, \y) -- (\x-0.5, \y-1);}
\foreach \x in {11.0, 11.5,...,14.5} {\draw [dashy] (\x, \y) -- (\x-0.5, \y-1);}
\foreach \x in {6.5,7.0,...,9.5} {\draw [dashy] (\x, \y) -- (\x-1.0, \y-1);}
\foreach \x in {11.5, 12.0,...,14.5} {\draw [dashy] (\x, \y) -- (\x-1.0, \y-1);}
\foreach \x in {5.5,6.0,...,9.0} { \draw [dashy] (\x, \y) -- (\x+0.5, \y-1); }
\foreach \x in {10.5, 11.0,...,14.0} { \draw [dashy] (\x, \y) -- (\x+0.5, \y-1);}
\foreach \x in {5.5,6.0,...,8.0} { \draw [dashy] (\x, \y) -- (\x+1.0, \y-1); }
\foreach \x in {10.5, 11.0,...,13.5} { \draw [dashy] (\x, \y) -- (\x+1.0, \y-1);}
}
\draw [dashy] (9.5, 9.25) -- (10.5, 10.25);
\draw [dashy] (9.5, 9.25) -- (11.0, 10.25);
\draw [dashy] (9.0, 9.25) -- (10.5, 10.25);
\draw [dashy] (9.5, 10.25) -- (10.5, 9.25);
\draw [dashy] (9.5, 10.25) -- (11.0, 9.25);
\draw [dashy] (9.0, 10.25) -- (10.5, 9.25);
\foreach \y in {9.25} { \foreach \x in {5.5,6.0,...,9.5} { \node [circ, cfill] (\x, \y) at (\x, \y) {}; } \foreach \x in {10.5, 11.0,...,14.5} { \node [circ, cfill] at (\x, \y) {}; } }
\foreach \y in {10.25} { \foreach \x in {5.5,6.0,...,9.5} { \node [circ, cfill] at (\x, \y) {}; } \foreach \x in {10.5, 11.0,...,14.5} { \node [circ, cfill] at (\x, \y) {}; } }
\foreach \y in {9.25, 10.25} { \foreach \x in {10.0} { \node [cdot, xshift=-1.5mm] at (\x, \y) {}; \node [cdot, xshift= 0.0mm] at (\x, \y) {}; \node [cdot, xshift= 1.5mm] at (\x, \y) {}; } }
\foreach \y in {10.75} { \foreach \x in {5.5,6.0,...,9.5} { \node [cdot, yshift=-1.5mm] at (\x, \y) {}; \node [cdot, yshift= 0.0mm] at (\x, \y) {}; \node [cdot, yshift= 1.5mm] at (\x, \y) {}; } \foreach \x in {10.5, 11.0,...,14.5} { \node [cdot, yshift=-1.5mm] at (\x, \y) {}; \node [cdot, yshift= 0.0mm] at (\x, \y) {}; \node [cdot, yshift= 1.5mm] at (\x, \y) {}; } }
\foreach \y in {9.25} { \foreach \x in {5.5,6.0,...,9.5} { \draw [dashy] (\x, \y-0.65) -- (\x, \y); } \foreach \x in {10.5, 11.0,...,14.5} { \draw [dashy] (\x, \y-0.65) -- (\x, \y); } }
\node [box, minimum width=12cm] at (3.0, 2.0)
{$128 \times 8192 $};
\node [box, minimum width=8.5cm] at (5.0, 7.25)
{$128 \times 1$};
\node [box, minimum width=1cm] at (15.5, 0.75) (label_7)
{$\boldsymbol{\ensuremath{\vartheta}\xspace}$};
\node [box, minimum width=10cm] at (5.0, 8.25) (label_8)
{$128 + 2$};
\node [box, minimum width=10cm] at (5.0, 11.5)
{$\log r(\mathbf{x}|\mathbf{\ensuremath{\vartheta}\xspace})\in\mathbb{R}$};
\node [anchor=center, align=center, inner sep=0.75mm] at (1.5, 0.5) (label_1)
{H1/L1 strains\\ ($2 \times 8192$)};
\node [anchor=center, align=center, inner sep=0.75mm] at (1.5, 2.00) (label_2)
{Conv. layer};
\node [anchor=center, align=center, inner sep=0.75mm] at (1.5, 4.75) (label_3)
{Stack of\\ 13 blocks\\ with dilated\\ convolutional\\ layers};
\node [anchor=center, align=center, inner sep=0.75mm] at (1.5, 8.25) (label_5)
{Concatenation of $\boldsymbol{\ensuremath{\vartheta}\xspace}$};
\node [anchor=center, align=center, inner sep=0.75mm] at (1.5, 9.75) (label_6)
{Multilayer perceptron\\ 3 layers of 200 units};
\node [anchor=center, align=center, inner sep=0.75mm] at (1.5, 11.5) (label_9)
{Log likelihood-to-evidence ratio};
\node [dilation] at (9.0, 3.50)
{Dilation 1};
\node [dilation] at (9.0, 4.50)
{Dilation 2};
\node [dilation] at (9.0, 5.50)
{Dilation 4};
\draw [very thick, ->] (label_1.north) -- (label_2.south);
\draw [very thick, ->] (label_2.north) -- (label_3.south);
\draw [very thick, ->] (label_3.north) -- (label_5.south);
\draw [very thick, ->] (label_5.north) -- (label_6.south);
\draw [very thick, ->] (label_6.north) -- (label_9.south);
\draw [color3, line width=0.4mm] plot [smooth] coordinates {(3.000, 0.623) (3.020, 0.429) (3.040, 0.604) (3.060, 0.664) (3.080, 0.505) (3.100, 0.750) (3.120, 0.641) (3.140, 0.481) (3.160, 0.685) (3.180, 0.556) (3.200, 0.570) (3.220, 0.816) (3.240, 0.596) (3.260, 0.624) (3.280, 0.621) (3.300, 0.521) (3.320, 0.710) (3.340, 0.441) (3.360, 0.538) (3.380, 0.731) (3.400, 0.742) (3.420, 0.715) (3.440, 0.549) (3.460, 0.602) (3.480, 0.733) (3.500, 0.572) (3.520, 0.542) (3.540, 0.604) (3.560, 0.692) (3.580, 0.615) (3.600, 0.720) (3.620, 0.563) (3.640, 0.645) (3.660, 0.660) (3.680, 0.666) (3.700, 0.575) (3.720, 0.695) (3.740, 0.577) (3.760, 0.637) (3.780, 0.493) (3.800, 0.617) (3.820, 0.635) (3.840, 0.455) (3.860, 0.585) (3.880, 0.543) (3.900, 0.549) (3.920, 0.554) (3.940, 0.608) (3.960, 0.664) (3.980, 0.706) (4.000, 0.595) (4.020, 0.614) (4.040, 0.750) (4.060, 0.514) (4.080, 0.610) (4.100, 0.665) (4.120, 0.668) (4.140, 0.586) (4.160, 0.487) (4.180, 0.574) (4.200, 0.474) (4.220, 0.693) (4.240, 0.672) (4.260, 0.698) (4.280, 0.656) (4.300, 0.642) (4.320, 0.582) (4.340, 0.690) (4.360, 0.601) (4.380, 0.699) (4.400, 0.470) (4.420, 0.489) (4.440, 0.701) (4.460, 0.576) (4.480, 0.607) (4.500, 0.586) (4.520, 0.642) (4.540, 0.546) (4.560, 0.396) (4.580, 0.510) (4.600, 0.756) (4.620, 0.741) (4.640, 0.489) (4.660, 0.518) (4.680, 0.577) (4.700, 0.664) (4.720, 0.589) (4.740, 0.502) (4.760, 0.671) (4.780, 0.585) (4.800, 0.468) (4.820, 0.403) (4.840, 0.622) (4.860, 0.508) (4.880, 0.631) (4.900, 0.734) (4.920, 0.629) (4.940, 0.680) (4.960, 0.490) (4.980, 0.605) (5.000, 0.690) (5.020, 0.757) (5.040, 0.650) (5.060, 0.648) (5.080, 0.410) (5.100, 0.575) (5.120, 0.621) (5.140, 0.662) (5.160, 0.362) (5.180, 0.550) (5.200, 0.447) (5.220, 0.372) (5.240, 0.418) (5.260, 0.676) (5.280, 0.707) (5.300, 0.476) (5.320, 0.443) (5.340, 0.392) (5.360, 0.614) (5.380, 0.656) (5.400, 0.533) (5.420, 0.654) (5.440, 0.666) (5.460, 0.623) (5.480, 0.568) (5.500, 0.596) (5.520, 0.616) (5.540, 0.712) (5.560, 0.891) (5.580, 0.680) (5.600, 0.668) (5.620, 0.544) (5.640, 0.639) (5.660, 0.794) (5.680, 0.632) (5.700, 0.493) (5.720, 0.722) (5.740, 0.630) (5.760, 0.701) (5.780, 0.561) (5.800, 0.649) (5.820, 0.446) (5.840, 0.600) (5.860, 0.638) (5.880, 0.639) (5.900, 0.675) (5.920, 0.551) (5.940, 0.586) (5.960, 0.660) (5.980, 0.510) (6.000, 0.506) (6.020, 0.710) (6.040, 0.620) (6.060, 0.472) (6.080, 0.521) (6.100, 0.572) (6.120, 0.451) (6.140, 0.667) (6.160, 0.703) (6.180, 0.643) (6.200, 0.679) (6.220, 0.674) (6.240, 0.639) (6.260, 0.549) (6.280, 0.517) (6.300, 0.668) (6.320, 0.523) (6.340, 0.690) (6.360, 0.773) (6.380, 0.559) (6.400, 0.723) (6.420, 0.705) (6.440, 0.581) (6.460, 0.647) (6.480, 0.611) (6.500, 0.460) (6.520, 0.693) (6.540, 0.585) (6.560, 0.624) (6.580, 0.501) (6.600, 0.611) (6.620, 0.657) (6.640, 0.593) (6.660, 0.675) (6.680, 0.603) (6.700, 0.632) (6.720, 0.558) (6.740, 0.590) (6.760, 0.560) (6.780, 0.557) (6.800, 0.550) (6.820, 0.457) (6.840, 0.642) (6.860, 0.532) (6.880, 0.625) (6.900, 0.660) (6.920, 0.595) (6.940, 0.595) (6.960, 0.535) (6.980, 0.632) (7.000, 0.648) (7.020, 0.605) (7.040, 0.719) (7.060, 0.631) (7.080, 0.626) (7.100, 0.588) (7.120, 0.581) (7.140, 0.653) (7.160, 0.669) (7.180, 0.528) (7.200, 0.721) (7.220, 0.707) (7.240, 0.624) (7.260, 0.591) (7.280, 0.606) (7.300, 0.689) (7.320, 0.445) (7.340, 0.535) (7.360, 0.472) (7.380, 0.573) (7.400, 0.742) (7.420, 0.648) (7.440, 0.571) (7.460, 0.432) (7.480, 0.642) (7.500, 0.557) (7.520, 0.595) (7.540, 0.570) (7.560, 0.389) (7.580, 0.399) (7.600, 0.607) (7.620, 0.670) (7.640, 0.593) 
(7.660, 0.619) (7.680, 0.543) (7.700, 0.558) (7.720, 0.546) (7.740, 0.610) (7.760, 0.770) (7.780, 0.595) (7.800, 0.539) (7.820, 0.638) (7.840, 0.505) (7.860, 0.661) (7.880, 0.622) (7.900, 0.686) (7.920, 0.586) (7.940, 0.544) (7.960, 0.548) (7.980, 0.495) (8.000, 0.497) (8.020, 0.673) (8.040, 0.525) (8.060, 0.392) (8.080, 0.464) (8.100, 0.517) (8.120, 0.470) (8.140, 0.568) (8.160, 0.645) (8.180, 0.476) (8.200, 0.574) (8.220, 0.572) (8.240, 0.613) (8.260, 0.657) (8.280, 0.657) (8.300, 0.507) (8.320, 0.589) (8.340, 0.629) (8.360, 0.469) (8.380, 0.579) (8.400, 0.554) (8.420, 0.698) (8.440, 0.616) (8.460, 0.565) (8.480, 0.516) (8.500, 0.599) (8.520, 0.749) (8.540, 0.639) (8.560, 0.637) (8.580, 0.451) (8.600, 0.550) (8.620, 0.606) (8.640, 0.675) (8.660, 0.477) (8.680, 0.697) (8.700, 0.594) (8.720, 0.795) (8.740, 0.720) (8.760, 0.578) (8.780, 0.658) (8.800, 0.625) (8.820, 0.760) (8.840, 0.536) (8.860, 0.760) (8.880, 0.626) (8.900, 0.540) (8.920, 0.584) (8.940, 0.576) (8.960, 0.557) (8.980, 0.677) (9.000, 0.598) (9.020, 0.484) (9.040, 0.482) (9.060, 0.518) (9.080, 0.704) (9.100, 0.736) (9.120, 0.609) (9.140, 0.479) (9.160, 0.706) (9.180, 0.684) (9.200, 0.507) (9.220, 0.616) (9.240, 0.594) (9.260, 0.734) (9.280, 0.733) (9.300, 0.589) (9.320, 0.564) (9.340, 0.596) (9.360, 0.577) (9.380, 0.633) (9.400, 0.428) (9.420, 0.774) (9.440, 0.593) (9.460, 0.653) (9.480, 0.624) (9.500, 0.582) (9.520, 0.527) (9.540, 0.525) (9.560, 0.604) (9.580, 0.681) (9.600, 0.533) (9.620, 0.526) (9.640, 0.581) (9.660, 0.786) (9.680, 0.659) (9.700, 0.559) (9.720, 0.676) (9.740, 0.568) (9.760, 0.594) (9.780, 0.611) (9.800, 0.586) (9.820, 0.654) (9.840, 0.606) (9.860, 0.579) (9.880, 0.629) (9.900, 0.631) (9.920, 0.693) (9.940, 0.607) (9.960, 0.602) (9.980, 0.574) (10.000, 0.639) (10.020, 0.640) (10.040, 0.610) (10.060, 0.692) (10.080, 0.525) (10.100, 0.550) (10.120, 0.621) (10.140, 0.633) (10.160, 0.490) (10.180, 0.569) (10.200, 0.546) (10.220, 0.495) (10.240, 0.541) (10.260, 0.637) (10.280, 0.634) (10.300, 0.632) (10.320, 0.682) (10.340, 0.696) (10.360, 0.516) (10.380, 0.692) (10.400, 0.581) (10.420, 0.682) (10.440, 0.558) (10.460, 0.693) (10.480, 0.576) (10.500, 0.501) (10.520, 0.589) (10.540, 0.690) (10.560, 0.684) (10.580, 0.627) (10.600, 0.581) (10.620, 0.551) (10.640, 0.681) (10.660, 0.748) (10.680, 0.596) (10.700, 0.684) (10.720, 0.659) (10.740, 0.492) (10.760, 0.596) (10.780, 0.526) (10.800, 0.697) (10.820, 0.490) (10.840, 0.663) (10.860, 0.627) (10.880, 0.661) (10.900, 0.429) (10.920, 0.639) (10.940, 0.608) (10.960, 0.591) (10.980, 0.774) (11.000, 0.434) (11.020, 0.603) (11.040, 0.491) (11.060, 0.571) (11.080, 0.577) (11.100, 0.683) (11.120, 0.685) (11.140, 0.565) (11.160, 0.597) (11.180, 0.532) (11.200, 0.433) (11.220, 0.538) (11.240, 0.507) (11.260, 0.665) (11.280, 0.547) (11.300, 0.751) (11.320, 0.663) (11.340, 0.714) (11.360, 0.609) (11.380, 0.536) (11.400, 0.499) (11.420, 0.551) (11.440, 0.550) (11.460, 0.550) (11.480, 0.761) (11.500, 0.703) (11.520, 0.788) (11.540, 0.652) (11.560, 0.773) (11.580, 0.480) (11.600, 0.389) (11.620, 0.438) (11.640, 0.304) (11.660, 0.394) (11.680, 0.506) (11.700, 0.455) (11.720, 0.948) (11.740, 0.762) (11.760, 1.000) (11.780, 0.872) (11.800, 0.742) (11.820, 0.719) (11.840, 0.559) (11.860, 0.325) (11.880, 0.301) (11.900, 0.258) (11.920, 0.383) (11.940, 0.625) (11.960, 0.790) (11.980, 0.906) (12.000, 0.573) (12.020, 0.563) (12.040, 0.557) (12.060, 0.648) (12.080, 0.633) (12.100, 0.594) (12.120, 0.759) (12.140, 0.608) (12.160, 0.712) (12.180, 0.587) (12.200, 0.610) (12.220, 0.690) 
(12.240, 0.667) (12.260, 0.622) (12.280, 0.621) (12.300, 0.644) (12.320, 0.555) (12.340, 0.578) (12.360, 0.423) (12.380, 0.613) (12.400, 0.704) (12.420, 0.684) (12.440, 0.603) (12.460, 0.646) (12.480, 0.701) (12.500, 0.590) (12.520, 0.572) (12.540, 0.757) (12.560, 0.639) (12.580, 0.632) (12.600, 0.551) (12.620, 0.599) (12.640, 0.601) (12.660, 0.343) (12.680, 0.530) (12.700, 0.541) (12.720, 0.599) (12.740, 0.626) (12.760, 0.518) (12.780, 0.599) (12.800, 0.620) (12.820, 0.670) (12.840, 0.669) (12.860, 0.520) (12.880, 0.533) (12.900, 0.512) (12.920, 0.517) (12.940, 0.633) (12.960, 0.628) (12.980, 0.588) (13.000, 0.472) (13.020, 0.558) (13.040, 0.613) (13.060, 0.608) (13.080, 0.636) (13.100, 0.421) (13.120, 0.571) (13.140, 0.623) (13.160, 0.396) (13.180, 0.606) (13.200, 0.717) (13.220, 0.813) (13.240, 0.687) (13.260, 0.653) (13.280, 0.622) (13.300, 0.508) (13.320, 0.527) (13.340, 0.536) (13.360, 0.560) (13.380, 0.488) (13.400, 0.522) (13.420, 0.493) (13.440, 0.528) (13.460, 0.576) (13.480, 0.638) (13.500, 0.616) (13.520, 0.563) (13.540, 0.586) (13.560, 0.558) (13.580, 0.599) (13.600, 0.663) (13.620, 0.692) (13.640, 0.543) (13.660, 0.510) (13.680, 0.500) (13.700, 0.511) (13.720, 0.548) (13.740, 0.499) (13.760, 0.657) (13.780, 0.617) (13.800, 0.613) (13.820, 0.656) (13.840, 0.546) (13.860, 0.592) (13.880, 0.669) (13.900, 0.552) (13.920, 0.494) (13.940, 0.449) (13.960, 0.653) (13.980, 0.667) (14.000, 0.576) (14.020, 0.445) (14.040, 0.509) (14.060, 0.816) (14.080, 0.608) (14.100, 0.570) (14.120, 0.615) (14.140, 0.654) (14.160, 0.673) (14.180, 0.361) (14.200, 0.643) (14.220, 0.648) (14.240, 0.611) (14.260, 0.595) (14.280, 0.565) (14.300, 0.572) (14.320, 0.607) (14.340, 0.625) (14.360, 0.559) (14.380, 0.717) (14.400, 0.676) (14.420, 0.623) (14.440, 0.733) (14.460, 0.621) (14.480, 0.593) (14.500, 0.540) (14.520, 0.699) (14.540, 0.491) (14.560, 0.614) (14.580, 0.591) (14.600, 0.644) (14.620, 0.633) (14.640, 0.565) (14.660, 0.676) (14.680, 0.582) (14.700, 0.726) (14.720, 0.628) (14.740, 0.704) (14.760, 0.545) (14.780, 0.558) (14.800, 0.528) (14.820, 0.495) (14.840, 0.709) (14.860, 0.581) (14.880, 0.854) (14.900, 0.569) (14.920, 0.617) (14.940, 0.596) (14.960, 0.572) (14.980, 0.660) };
\draw [color1, line width=0.4mm] plot [smooth] coordinates{ (3.000, +0.773) (3.020, +0.579) (3.040, +0.754) (3.060, +0.814) (3.080, +0.655) (3.100, +0.900) (3.120, +0.791) (3.140, +0.631) (3.160, +0.835) (3.180, +0.706) (3.200, +0.720) (3.220, +0.966) (3.240, +0.746) (3.260, +0.774) (3.280, +0.771) (3.300, +0.671) (3.320, +0.860) (3.340, +0.591) (3.360, +0.688) (3.380, +0.881) (3.400, +0.892) (3.420, +0.865) (3.440, +0.699) (3.460, +0.752) (3.480, +0.883) (3.500, +0.722) (3.520, +0.692) (3.540, +0.754) (3.560, +0.842) (3.580, +0.765) (3.600, +0.870) (3.620, +0.713) (3.640, +0.795) (3.660, +0.810) (3.680, +0.816) (3.700, +0.725) (3.720, +0.845) (3.740, +0.727) (3.760, +0.787) (3.780, +0.643) (3.800, +0.767) (3.820, +0.785) (3.840, +0.605) (3.860, +0.735) (3.880, +0.693) (3.900, +0.699) (3.920, +0.704) (3.940, +0.758) (3.960, +0.814) (3.980, +0.856) (4.000, +0.745) (4.020, +0.764) (4.040, +0.900) (4.060, +0.664) (4.080, +0.760) (4.100, +0.815) (4.120, +0.818) (4.140, +0.736) (4.160, +0.637) (4.180, +0.724) (4.200, +0.624) (4.220, +0.843) (4.240, +0.822) (4.260, +0.848) (4.280, +0.806) (4.300, +0.792) (4.320, +0.732) (4.340, +0.840) (4.360, +0.751) (4.380, +0.849) (4.400, +0.620) (4.420, +0.639) (4.440, +0.851) (4.460, +0.726) (4.480, +0.757) (4.500, +0.736) (4.520, +0.792) (4.540, +0.696) (4.560, +0.546) (4.580, +0.660) (4.600, +0.906) (4.620, +0.891) (4.640, +0.639) (4.660, +0.668) (4.680, +0.727) (4.700, +0.814) (4.720, +0.739) (4.740, +0.652) (4.760, +0.821) (4.780, +0.735) (4.800, +0.618) (4.820, +0.553) (4.840, +0.772) (4.860, +0.658) (4.880, +0.781) (4.900, +0.884) (4.920, +0.779) (4.940, +0.830) (4.960, +0.640) (4.980, +0.755) (5.000, +0.840) (5.020, +0.907) (5.040, +0.800) (5.060, +0.798) (5.080, +0.560) (5.100, +0.725) (5.120, +0.771) (5.140, +0.812) (5.160, +0.512) (5.180, +0.700) (5.200, +0.597) (5.220, +0.522) (5.240, +0.568) (5.260, +0.826) (5.280, +0.857) (5.300, +0.626) (5.320, +0.593) (5.340, +0.542) (5.360, +0.764) (5.380, +0.806) (5.400, +0.683) (5.420, +0.804) (5.440, +0.816) (5.460, +0.773) (5.480, +0.718) (5.500, +0.746) (5.520, +0.766) (5.540, +0.862) (5.560, +1.041) (5.580, +0.830) (5.600, +0.818) (5.620, +0.694) (5.640, +0.789) (5.660, +0.944) (5.680, +0.782) (5.700, +0.643) (5.720, +0.872) (5.740, +0.780) (5.760, +0.851) (5.780, +0.711) (5.800, +0.799) (5.820, +0.596) (5.840, +0.750) (5.860, +0.788) (5.880, +0.789) (5.900, +0.825) (5.920, +0.701) (5.940, +0.736) (5.960, +0.810) (5.980, +0.660) (6.000, +0.656) (6.020, +0.860) (6.040, +0.770) (6.060, +0.622) (6.080, +0.671) (6.100, +0.722) (6.120, +0.601) (6.140, +0.817) (6.160, +0.853) (6.180, +0.793) (6.200, +0.829) (6.220, +0.824) (6.240, +0.789) (6.260, +0.699) (6.280, +0.667) (6.300, +0.818) (6.320, +0.673) (6.340, +0.840) (6.360, +0.923) (6.380, +0.709) (6.400, +0.873) (6.420, +0.855) (6.440, +0.731) (6.460, +0.797) (6.480, +0.761) (6.500, +0.610) (6.520, +0.843) (6.540, +0.735) (6.560, +0.774) (6.580, +0.651) (6.600, +0.761) (6.620, +0.807) (6.640, +0.743) (6.660, +0.825) (6.680, +0.753) (6.700, +0.782) (6.720, +0.708) (6.740, +0.740) (6.760, +0.710) (6.780, +0.707) (6.800, +0.700) (6.820, +0.607) (6.840, +0.792) (6.860, +0.682) (6.880, +0.775) (6.900, +0.810) (6.920, +0.745) (6.940, +0.745) (6.960, +0.685) (6.980, +0.782) (7.000, +0.798) (7.020, +0.755) (7.040, +0.869) (7.060, +0.781) (7.080, +0.776) (7.100, +0.738) (7.120, +0.731) (7.140, +0.803) (7.160, +0.819) (7.180, +0.678) (7.200, +0.871) (7.220, +0.857) (7.240, +0.774) (7.260, +0.741) (7.280, +0.756) (7.300, +0.839) (7.320, +0.595) (7.340, +0.685) 
(7.360, +0.622) (7.380, +0.723) (7.400, +0.892) (7.420, +0.798) (7.440, +0.721) (7.460, +0.582) (7.480, +0.792) (7.500, +0.707) (7.520, +0.745) (7.540, +0.720) (7.560, +0.539) (7.580, +0.549) (7.600, +0.757) (7.620, +0.820) (7.640, +0.743) (7.660, +0.769) (7.680, +0.693) (7.700, +0.708) (7.720, +0.696) (7.740, +0.760) (7.760, +0.920) (7.780, +0.745) (7.800, +0.689) (7.820, +0.788) (7.840, +0.655) (7.860, +0.811) (7.880, +0.772) (7.900, +0.836) (7.920, +0.736) (7.940, +0.694) (7.960, +0.698) (7.980, +0.645) (8.000, +0.647) (8.020, +0.823) (8.040, +0.675) (8.060, +0.542) (8.080, +0.614) (8.100, +0.667) (8.120, +0.620) (8.140, +0.718) (8.160, +0.795) (8.180, +0.626) (8.200, +0.724) (8.220, +0.722) (8.240, +0.763) (8.260, +0.807) (8.280, +0.807) (8.300, +0.657) (8.320, +0.739) (8.340, +0.779) (8.360, +0.619) (8.380, +0.729) (8.400, +0.704) (8.420, +0.848) (8.440, +0.766) (8.460, +0.715) (8.480, +0.666) (8.500, +0.749) (8.520, +0.899) (8.540, +0.789) (8.560, +0.787) (8.580, +0.601) (8.600, +0.700) (8.620, +0.756) (8.640, +0.825) (8.660, +0.627) (8.680, +0.847) (8.700, +0.744) (8.720, +0.945) (8.740, +0.870) (8.760, +0.728) (8.780, +0.808) (8.800, +0.775) (8.820, +0.910) (8.840, +0.686) (8.860, +0.910) (8.880, +0.776) (8.900, +0.690) (8.920, +0.734) (8.940, +0.726) (8.960, +0.707) (8.980, +0.827) (9.000, +0.748) (9.020, +0.634) (9.040, +0.632) (9.060, +0.668) (9.080, +0.854) (9.100, +0.886) (9.120, +0.759) (9.140, +0.629) (9.160, +0.856) (9.180, +0.834) (9.200, +0.657) (9.220, +0.766) (9.240, +0.744) (9.260, +0.884) (9.280, +0.883) (9.300, +0.739) (9.320, +0.714) (9.340, +0.746) (9.360, +0.727) (9.380, +0.783) (9.400, +0.578) (9.420, +0.924) (9.440, +0.743) (9.460, +0.803) (9.480, +0.774) (9.500, +0.732) (9.520, +0.677) (9.540, +0.675) (9.560, +0.754) (9.580, +0.831) (9.600, +0.683) (9.620, +0.676) (9.640, +0.731) (9.660, +0.936) (9.680, +0.809) (9.700, +0.709) (9.720, +0.826) (9.740, +0.718) (9.760, +0.744) (9.780, +0.761) (9.800, +0.736) (9.820, +0.804) (9.840, +0.756) (9.860, +0.729) (9.880, +0.779) (9.900, +0.781) (9.920, +0.843) (9.940, +0.757) (9.960, +0.752) (9.980, +0.724) (10.000, +0.789) (10.020, +0.790) (10.040, +0.760) (10.060, +0.842) (10.080, +0.675) (10.100, +0.700) (10.120, +0.771) (10.140, +0.783) (10.160, +0.640) (10.180, +0.719) (10.200, +0.696) (10.220, +0.645) (10.240, +0.691) (10.260, +0.787) (10.280, +0.784) (10.300, +0.782) (10.320, +0.832) (10.340, +0.846) (10.360, +0.666) (10.380, +0.842) (10.400, +0.731) (10.420, +0.832) (10.440, +0.708) (10.460, +0.843) (10.480, +0.726) (10.500, +0.651) (10.520, +0.739) (10.540, +0.840) (10.560, +0.834) (10.580, +0.777) (10.600, +0.731) (10.620, +0.701) (10.640, +0.831) (10.660, +0.898) (10.680, +0.746) (10.700, +0.834) (10.720, +0.809) (10.740, +0.642) (10.760, +0.746) (10.780, +0.676) (10.800, +0.847) (10.820, +0.640) (10.840, +0.813) (10.860, +0.777) (10.880, +0.811) (10.900, +0.579) (10.920, +0.789) (10.940, +0.758) (10.960, +0.741) (10.980, +0.924) (11.000, +0.584) (11.020, +0.753) (11.040, +0.641) (11.060, +0.721) (11.080, +0.727) (11.100, +0.833) (11.120, +0.835) (11.140, +0.715) (11.160, +0.747) (11.180, +0.682) (11.200, +0.583) (11.220, +0.688) (11.240, +0.657) (11.260, +0.815) (11.280, +0.697) (11.300, +0.901) (11.320, +0.813) (11.340, +0.864) (11.360, +0.759) (11.380, +0.686) (11.400, +0.649) (11.420, +0.701) (11.440, +0.700) (11.460, +0.700) (11.480, +0.911) (11.500, +0.853) (11.520, +0.938) (11.540, +0.802) (11.560, +0.923) (11.580, +0.630) (11.600, +0.539) (11.620, +0.588) (11.640, +0.454) (11.660, +0.544) (11.680, 
+0.656) (11.700, +0.605) (11.720, +1.098) (11.740, +0.912) (11.760, +1.150) (11.780, +1.022) (11.800, +0.892) (11.820, +0.869) (11.840, +0.709) (11.860, +0.475) (11.880, +0.451) (11.900, +0.408) (11.920, +0.533) (11.940, +0.775) (11.960, +0.940) (11.980, +1.056) (12.000, +0.723) (12.020, +0.713) (12.040, +0.707) (12.060, +0.798) (12.080, +0.783) (12.100, +0.744) (12.120, +0.909) (12.140, +0.758) (12.160, +0.862) (12.180, +0.737) (12.200, +0.760) (12.220, +0.840) (12.240, +0.817) (12.260, +0.772) (12.280, +0.771) (12.300, +0.794) (12.320, +0.705) (12.340, +0.728) (12.360, +0.573) (12.380, +0.763) (12.400, +0.854) (12.420, +0.834) (12.440, +0.753) (12.460, +0.796) (12.480, +0.851) (12.500, +0.740) (12.520, +0.722) (12.540, +0.907) (12.560, +0.789) (12.580, +0.782) (12.600, +0.701) (12.620, +0.749) (12.640, +0.751) (12.660, +0.493) (12.680, +0.680) (12.700, +0.691) (12.720, +0.749) (12.740, +0.776) (12.760, +0.668) (12.780, +0.749) (12.800, +0.770) (12.820, +0.820) (12.840, +0.819) (12.860, +0.670) (12.880, +0.683) (12.900, +0.662) (12.920, +0.667) (12.940, +0.783) (12.960, +0.778) (12.980, +0.738) (13.000, +0.622) (13.020, +0.708) (13.040, +0.763) (13.060, +0.758) (13.080, +0.786) (13.100, +0.571) (13.120, +0.721) (13.140, +0.773) (13.160, +0.546) (13.180, +0.756) (13.200, +0.867) (13.220, +0.963) (13.240, +0.837) (13.260, +0.803) (13.280, +0.772) (13.300, +0.658) (13.320, +0.677) (13.340, +0.686) (13.360, +0.710) (13.380, +0.638) (13.400, +0.672) (13.420, +0.643) (13.440, +0.678) (13.460, +0.726) (13.480, +0.788) (13.500, +0.766) (13.520, +0.713) (13.540, +0.736) (13.560, +0.708) (13.580, +0.749) (13.600, +0.813) (13.620, +0.842) (13.640, +0.693) (13.660, +0.660) (13.680, +0.650) (13.700, +0.661) (13.720, +0.698) (13.740, +0.649) (13.760, +0.807) (13.780, +0.767) (13.800, +0.763) (13.820, +0.806) (13.840, +0.696) (13.860, +0.742) (13.880, +0.819) (13.900, +0.702) (13.920, +0.644) (13.940, +0.599) (13.960, +0.803) (13.980, +0.817) (14.000, +0.726) (14.020, +0.595) (14.040, +0.659) (14.060, +0.966) (14.080, +0.758) (14.100, +0.720) (14.120, +0.765) (14.140, +0.804) (14.160, +0.823) (14.180, +0.511) (14.200, +0.793) (14.220, +0.798) (14.240, +0.761) (14.260, +0.745) (14.280, +0.715) (14.300, +0.722) (14.320, +0.757) (14.340, +0.775) (14.360, +0.709) (14.380, +0.867) (14.400, +0.826) (14.420, +0.773) (14.440, +0.883) (14.460, +0.771) (14.480, +0.743) (14.500, +0.690) (14.520, +0.849) (14.540, +0.641) (14.560, +0.764) (14.580, +0.741) (14.600, +0.794) (14.620, +0.783) (14.640, +0.715) (14.660, +0.826) (14.680, +0.732) (14.700, +0.876) (14.720, +0.778) (14.740, +0.854) (14.760, +0.695) (14.780, +0.708) (14.800, +0.678) (14.820, +0.645) (14.840, +0.859) (14.860, +0.731) (14.880, +1.004) (14.900, +0.719) (14.920, +0.767) (14.940, +0.746) (14.960, +0.722) (14.980, +0.810) };
\end{tikzpicture}
\caption{The architecture of the neural ratio estimator, adapted from \cite{Gebhard:2019ldz}. A feature map of both the Hanford and Livingston strains is produced by a CNN composed of a 1D convolutional layer ($128$ kernels of size $1$) followed by $13$ dilated convolutional layers ($128$ kernels of size $2$). Finally, the convolutional feature map ($128$ channels of size $1$) is concatenated with the parameters $\boldsymbol \stattheta$ and fed to a 3-layer fully connected network that approximates the log likelihood-to-evidence ratio.}
\label{fig:nn}
\end{figure*}
To train and evaluate our neural network model, we consider simulated signals of binary black hole mergers using \texttt{IMRPhenomPv2} as the approximant (including the effects of spin precession), with parameters drawn from the prior distribution $p(\boldsymbol \stattheta)$ summarized in Table~\ref{tab:prior}. To ease comparison with standard analysis results, the prior is chosen to be identical to that used in the analysis of GW150914~\cite{TheLIGOScientific:2016wfe} and in the companion $\textsc{PyCBC}$ analysis~\cite{Biwer:2018osg}.
Our data generation process follows the procedure of \cite{Gebhard:2019ldz} in order to produce realistic synthetic training data. In short, background noise is emulated using a Gaussian noise model with a PSD identical to the one used in \cite{abbott2019gwtc}. Simulated signals are injected into the noise and the result is then whitened and high-passed at 20 Hz to remove simulation artefacts. Finally, the generated strains are cropped to a length of $4\,$s in such a way that the maximum of the signal always ends up at a random location within a fixed interval of $0.2\,$s inside the sample. Using a sampling rate of 2048 Hz, the resulting observable $\boldsymbol x$ for each realization is an array of size $8192 \times 2$, where the channels correspond to the strains from each of the two interferometers. Using this procedure, we generated $10^6$ observables for training, $2\times 10^5$ for validation and $2\times 10^5$ for testing.
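A possible NumPy implementation of the whitening and cropping steps is sketched below. The PSD interpolation, boundary handling, and the exact placement of the $0.2\,$s interval (here assumed to start $3.5\,$s into the sample) are simplifying assumptions.
\begin{verbatim}
import numpy as np

FS = 2048  # sampling rate in Hz

def whiten(strain, psd):
    # psd: callable returning the (strictly positive) one-sided noise PSD
    freqs = np.fft.rfftfreq(strain.size, d=1.0 / FS)
    white_f = np.fft.rfft(strain) / np.sqrt(psd(freqs))
    return np.fft.irfft(white_f, n=strain.size)

def crop(strain, peak_idx, length=4.0, window=0.2, interval_start=3.5):
    # the signal maximum lands uniformly inside a fixed 0.2 s interval
    t_peak = interval_start + np.random.uniform(0.0, window)
    start = peak_idx - int(t_peak * FS)
    return strain[start:start + int(length * FS)]
\end{verbatim}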
\section{Results}
To evaluate the performance of our method, we perform inference on simulated gravitational waves. Figure \ref{fig:posteriors} showcases test GWs and the resulting approximate posteriors. The first row compares the credible intervals obtained with our method and with MCMC on a signal similar to GW150914. Both MCMC and our method produce consistent results, but \textit{our method takes less than a minute to run while MCMC ran for more than a day!} Since performing an MCMC run is computationally expensive, the following rows only compare against the nominal parameter values that were used to generate the signals. We observe that our method is often able to infer the parameters with high precision and to produce narrow credible intervals; in some cases, however, it produces wider intervals.
We look at the coverage on 1000 simulated signals to evaluate the reliability of the estimated credible intervals.
If the model estimates the credible intervals correctly, then the empirical coverage probability should be close to the nominal coverage probability.
For instance, the estimated 50\%-credible intervals for pairs $\boldsymbol x, \boldsymbol \stattheta \sim p(\boldsymbol x,\boldsymbol \stattheta)$ should contain the nominal value $\boldsymbol \stattheta$ approximately 50\% of the time.
As reported in Table~\ref{tab:coverage}, the empirical coverage probability is usually slightly higher than expected. This shows that the model is slightly under-confident in its predictions and hence produces credible intervals that are slightly larger than expected. This under-confidence is, however, moderate and does not prevent the contours from being useful in practice, since the predictions are conservative.
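The coverage check itself is simple to implement; the sketch below computes the empirical coverage of highest-posterior-density regions from grid-evaluated posteriors (variable names are illustrative).
\begin{verbatim}
import numpy as np

def empirical_coverage(log_posts, idx_truth, level=0.5):
    # log_posts: (N, G) grid log-posteriors of N simulated signals
    # idx_truth: (N,) grid index of the parameters used for simulation
    hits = 0
    for lp, j in zip(log_posts, idx_truth):
        p = np.exp(lp - lp.max())
        p /= p.sum()
        order = np.argsort(p)[::-1]       # decreasing posterior density
        cum = np.cumsum(p[order])
        rank = int(np.where(order == j)[0][0])
        # truth lies inside the HPD set iff the mass of strictly
        # higher-density points is below the nominal level
        hits += (cum[rank] - p[order][rank]) < level
    return hits / len(log_posts)
\end{verbatim}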
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& $m_\text{max}^\text{det}$, $m_\text{min}^\text{det}$ & $d_L$, $\theta_{JN}$ & $\chi_{\text{eff}}$, $q$ & $\alpha, \delta$ \\
\hline
50\% credible interval coverage & 51.8\% & 50.7\% & 57.5\% & 54.2\%\\
\hline
90\% credible interval coverage & 90.2\% & 89.8\% & 93.3\% & 92.8\%\\
\hline
\end{tabular}
\vspace{1em}
\caption{Empirical coverage probability of the estimated credible intervals.}
\label{tab:coverage}
\end{table}
\section{Conclusions}
In this paper, we provide a proof of concept demonstrating that neural simulation-based inference, and more specifically likelihood-to-evidence ratio estimation, can be used to speed up the analysis of GWs by up to three orders of magnitude while producing posterior parameter distributions in good agreement with MCMC results. Although these preliminary results are promising, we note that there are still obstacles to overcome before deploying neural simulation-based inference in official GW analysis pipelines.
Possible improvements include accounting for the measured noise PSD, as well as further exploring the preprocessing of the data.
Further assessments of the statistical validity of the estimated posteriors would also be needed before making any reliable scientific claims.
Recently, complementary deep learning approaches such as normalizing flows and conditional variational auto-encoders have also been applied to the same problem. A detailed comparison among all these methods is required to further advance the field.
\section*{Broader impact}
Neural simulation-based inference has the potential to speed up scientific discoveries by allowing the fast analysis of gravitational waves. It also unlocks multi-messenger astronomy, as the fast inference of a binary black hole merger's sky position is required to measure electromagnetic and/or astroparticle counterparts to the GW signal. Malicious usage of our research is hard to imagine. However, failure modes of our inference engine could lead to erroneous scientific claims, and it should therefore be carefully validated.
\section*{Acknowledgments}
AW is a research fellow of the F.R.S.-FNRS (Belgium) and acknowledges its financial support. GL is a recipient of the ULiege - NRB Chair on Big data and is thankful for the support of NRB. TH acknowledges support from NWO Projectruimte grant GWEM-NS and the DeltaITP. ARW is supported by STFC grant ST/S000550/1. SN is grateful for support from NWO VIDI and Projectruimte Grants of the Innovational Research Incentives Scheme (Vernieuwingsimpuls) financed by the Netherlands Organization for Scientific Research (NWO). We are grateful for computational resources provided by Cardiff University, and funded by an STFC grant supporting UK Involvement in the Operation of Advanced LIGO.
\section*{Abstract}
We report on x-ray absorption spectroscopy (XAS) and x-ray magnetic circular dichroism (XMCD) studies of the
paramagnetic (Mn,Co)-co-doped ZnO and ferromagnetic (Fe,Co)-co-doped ZnO nano-particles.
Both the surface-sensitive total-electron-yield mode and the bulk-sensitive total-fluorescence-yield mode have been employed to extract
the valence and spin states of the surface and inner core regions of the nano-particles.
XAS spectra reveal that a significant part of the doped Mn and Co atoms are in the trivalent and tetravalent states, in particular in the surface region,
while the majority of the Fe atoms are in the trivalent state both in the inner core region and in the surface region.
The XMCD spectra show that the Fe$^{3+}$ ions in the surface region give rise to the ferromagnetism
while both the Co and Mn ions in the surface region show only paramagnetic behaviors.
The transition-metal atoms in the inner core region do not show magnetic signals, meaning that they are antiferromagnetically coupled.
The present results, combined with previous results on transition-metal-doped ZnO nano-particles and nano-wires, suggest
that doped holes, probably arising from Zn vacancy formation at the surfaces of the nano-particles and nano-wires,
rather than doped electrons are involved in the occurrence of ferromagnetism in these systems.
\newpage
\section{Introduction}
Various semiconducting oxides such as ZnO \cite{Sharma}, TiO$_2$ \cite{Yamasaki}, and SnO$_2$\cite{Wang} in thin film and
nano-particle forms are known to exhibit ferromagnetism at room temperature when they are doped with transition-metal atoms.
Current interest in such magnetic nano-particle systems is motivated by unique electronic structures and
magnetism at the surfaces of the nano-particles which are different from the inner core region.
In the nano-particle form, the structural and electronic properties are modified by
surface defects such as Zn and O vacancies with broken chemical bonds and charge imbalance,
which may mediate or modify exchange coupling between the doped atoms \cite{NGanguli}.
For example, in the case of (Mn,Co)-co-doped ZnO [ZnO:(Mn,Co)] nano-particles \cite{KataokaMnCo},
high-valence (3+ and 4+) Mn and Co ions are found to be present, probably due to the formation of Zn vacancies (V$\rm_{Zn}$) in the surface region.
The doped Fe atoms in the ferromagnetic ZnO nano-particles are converted from 2+ to 3+
due to hole doping in the surface regions \cite{NGanguli, KataokaFe, KarmakarFe}, resulting in the ferromagnetic interaction between the doped Fe atoms.
In the case of Co-doped ZnO systems such as (Co,Ga)-co-doped ZnO \cite{YHe} and Co-doped ZnO nano-particles \cite{HGu},
on the other hand, oxygen vacancies (V$\rm_{O}$), which induce electron doping, are reported to be necessary for ferromagnetism.
Recently, room-temperature ferromagnetism was reported for (Fe,Co)-co-doped ZnO [ZnO:(Fe,Co)] in thin film \cite{YMCho}
and nano-particle forms \cite{KarmakarFeCo}.
From the first-principle calculations, Karmakar ${et}$ ${al}$. \cite{KarmakarFeCo} have indicated
that V$\rm_{Zn}$-mediated double exchange interaction plays an important role in the ferromagnetism of ZnO:(Fe,Co) nano-particles.
Indeed, enhancement of ferromagnetic interaction between transition-metal atoms has been demonstrated in
previous first-principles calculations by Gopal and Spaldin \cite{Gopal}.
First-principle calculations by Park and Min \cite{MSPark}, on the other hand, have suggested the importance of
RKKY-type exchange interaction mediated by conduction carriers induced by V$\rm_O$ as the origin of ferromagnetism of ZnO:(Fe,Co).
Also, calculations by Ghosh ${et}$ ${al}$. \cite{SGhosh} have indicated direct exchange interaction
mediated by the doped electron carriers at the Fe-V$\rm_O$-Co defect configuration in the surface region of ZnO:(Fe,Co) nano-wires.
Thus, it has been controversial whether the enhancement of exchange interaction comes from electron doping or hole doping.
In this paper, we report on x-ray absorption spectroscopy (XAS) and x-ray magnetic circular dichroism (XMCD) studies of
paramagnetic ZnO:(Mn,Co) and ferromagnetic ZnO:(Fe,Co) nano-particles.
The valence and spin states of the doped ions and their magnetic interactions have been revealed by XAS and
XMCD measurements of the transition-metal core levels.
Also, both the surface-sensitive total-electron-yield mode and the bulk-sensitive total-fluorescence-yield mode have been employed to extract
the valence and spin states of the surface and inner core regions of the nano-particles separately.
The experimental results indicate that doped holes rather than doped electrons are involved in the occurrence of ferromagnetism in these systems.
\section{Experimental Methods}
Transition-metal-co-doped ZnO nano-particles were synthesized by a low temperature chemical pyrophoric reaction process.
We have prepared paramagnetic ZnO:(Mn,Co) nano-particles (Mn=15 \%, Co=15\%), and
ferromagnetic ZnO:(Fe,Co) nano-particles (Fe=5 \%, Co=5\%) with $T_C$ $>$ 300 K.
Details of the sample preparation were described in refs.\cite{KarmakarFeCo, KarmakarFe, SKMandal}.
Structure characterization was carried out by x-ray diffraction (XRD), selected area electron diffraction (SAED) and
transmission electron microscopy (TEM).
We have made pellets from calcined powders and then sintered them at a temperature of $\sim$ 570 K for 30 min.
The average size of the nano-particles was 7-10 nm \cite{KarmakarFeCo, KarmakarFe}.
XAS and XMCD measurements of ZnO:(Fe,Co) samples and XAS measurements of ZnO:(Mn,Co) samples
were performed at the Dragon Beamline BL-11A of National
Synchrotron Radiation Research Center (NSRRC), Taiwan.
The spectra were taken both in the total-electron-yield (TEY: probing depth $\sim$ 5 nm) and the
total-fluorescence-yield (TFY: probing depth $\sim$ 100 nm) modes, i.e.,
the TEY and TFY modes are relatively surface- and bulk-sensitive, respectively.
The degree of circular polarization of x-rays was $\sim$ 60\%.
XAS and XMCD measurements of ZnO:(Mn,Co) samples were also made at BL-16A of Photon Factory (KEK-PF).
The degree of circular polarization of x-rays was more than $\sim$ 90\%.
All the measurements were performed at room temperature.
Absorption spectra were analyzed using configuration-interaction (CI) cluster-model calculations.
The cluster consisted of a transition-metal ion octahedrally and/or tetrahedarally coordinated by O$^{2-}$ ions.
The ground-state wave function was expanded as
$\psi = \alpha|d^n\rangle + \beta|d^{n+1}\underline{L}\rangle + \gamma|d^{n+2}\underline{L^2}\rangle$,
where $\underline{L}$ denotes a ligand O 2$p$ hole.
The adjustable parameters of the calculation were the charge-transfer energy $\Delta$,
the $d$-$d$ Coulomb energy $U$, the $p$-$d$ transfer integral $T$, and the crystal-field splitting parameter $10Dq$.
We assumed high-spin states for the calculations, and $10Dq$ was assumed to be less than 1.0 eV.
\section{Results and Discussion}
Figures 1(a) and 1(b) show the Mn and Co 2$p$$\rightarrow$3$d$ XAS spectra of the paramagnetic ZnO:(Mn,Co) nano-particles,
respectively, taken both in the TEY and TFY modes.
In the figures, we compare the experimental spectra (circles) taken both in the TEY and TFY modes with the cluster-model calculations for the Mn and Co ions
with various valence states, tetrahedrally co-ordinated by oxygen atoms \cite{ATanaka}.
From the line-shape analysis shown in Figs. 1(a) and 1(b),
the relative concentrations of Mn$^{2+}$ and Co$^{2+}$ ions estimated from the TFY spectra are higher than those estimated from the TEY spectra,
because the features due to the Mn$^{2+}$ and Co$^{2+}$ states are weaker in the TEY mode than in the TFY mode.
This indicates that the relative concentrations of Mn$^{2+}$ and Co$^{2+}$ ions are relatively high in the inner core region of the nano-particles
and those of the higher valence states of Mn$^{3+}$, Mn$^{4+}$, Co$^{3+}$, and Co$^{4+}$ are relatively high in the surface region.
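Such a line-shape analysis amounts to decomposing each measured spectrum into a non-negative combination of calculated reference spectra. A minimal sketch, assuming the measured spectrum and the cluster-model spectra are available as arrays on a common energy grid:
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def valence_fractions(measured, references):
    # measured: (E,) XAS spectrum; references: (E, K) cluster-model
    # spectra, e.g. for the 2+, 3+ and 4+ states on the same grid
    weights, _ = nnls(references, measured)  # non-negative least squares
    return weights / weights.sum()           # relative concentrations
\end{verbatim}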
Figures 2(a) and 2(b) show the Mn and Co 2$p$$\rightarrow$3$d$ XAS and XMCD spectra of the paramagnetic ZnO:(Mn,Co) nano-particles, respectively, taken in the TEY mode.
We compare the Mn 2$p$$\rightarrow$3$d$ XMCD spectra of the ZnO:(Mn,Co) nano-particles
with those of Ca$_{1-x}$Mn$_x$RuO (CMRO) \cite{Terai} and Zn$_{1-x}$Mn$_x$Se$_2$ \cite{Hofmann}, and
compare the Co 2$p$$\rightarrow$3$d$ XMCD spectra of the ZnO:(Mn,Co) nano-particles with that of Ti$_{1-x}$Co$_x$O$_2$\cite{Mamiya}.
It is likely that the Mn 2$p$$\rightarrow$3$d$ XMCD spectrum comes from the Mn$^{3+}$ and Mn$^{4+}$ ions
because the line shape of the XMCD spectrum of ZnO:(Mn,Co) is similar to that of CMRO, where Mn$^{3+}$ and Mn$^{4+}$ ions coexist.
The Co 2$p$$\rightarrow$3$d$ XMCD spectral line shape of the ZnO:(Mn,Co) nano-particles is similar to that of Ti$_{1-x}$Co$_x$O$_2$.
From the experimental results, we suggest that paramagnetic component of the XMCD signals
consists of the Mn$^{3+}$, Mn$^{4+}$ and Co$^{2+}$ states.
Figures~3(a) and~3(b) show the Fe and Co 2$p$$\rightarrow$3$d$ XAS spectra of the ferromagnetic ZnO:(Fe,Co) nano-particles, respectively.
In the figures, we compare the experimental spectra (circles) taken both in the TEY and TFY modes with the cluster-model calculations for the Fe and Co ions
with various valence states, tetrahedrally or octahedrally co-ordinated by oxygen atoms \cite{ATanaka}.
In the transition-metal-doped ZnO nano-particles, the valence and the co-ordination of
the doped atoms will be 2+($T_d$) if no vacancies are created, or often become 3+($T_d$) or 3+($O_h$) due to the vacancy formation in the surfaces \cite{NGanguli, KataokaFe}.
We therefore calculated spectra for the 2+($T_d$), 3+($T_d$), and 3+($O_h$) states of the Fe and Co ions.
Here, $O_h$ is an interstitial site of the Wurzite-type ZnO lattice.
From the line-shape analysis shown in Fig. 3(a), one notices that the Fe ions in the surface region are mostly Fe$^{3+}$($O_h$) with a small amount of Fe$^{2+}$($T_d$).
In the experimental XAS spectra taken in the TFY mode, the dip structure at 710 eV is shallower, that is,
the Fe$^{2+}$($T_d$) component increases in the inner core region, suggesting that Fe$^{3+}$($O_h$) ions mainly come from the surfaces.
From the Co 2$p$$\rightarrow$3$d$ XAS spectra, it is likely that the doped Co atoms in the surface region are Co$^{2+}$($T_d$), Co$^{3+}$($T_d$) and Co$^{3+}$($O_h$).
On the other hand, the Co atoms in the inner core region appear to be largely in the Co$^{2+}$($T_d$) state.
Figures 4(a) and 4(b) show the Fe 2$p$$\rightarrow$3$d$ XAS and XMCD spectra of the ferromagnetic ZnO:(Fe,Co) nano-particles, respectively, taken at $H$$=$1 T.
The Fe 2$p$$\rightarrow$3$d$ XMCD intensity taken in the TEY mode was finite,
while the XMCD signal taken in the TFY mode was weak and not clearly observed.
This indicates that the Fe ions in the surface region but not in the inner core region are magnetically active.
Also, one notices that the XMCD signals at the Fe $L_2$ absorption edge are very weak,
suggesting a large orbital magnetic moment ($M_{orb}$) of the Fe ion, probably due to the admixture of the Fe$^{2+}$ component.
In the nano-particle form, which has a relatively large surface area, the spin-orbit coupling and
magnetic anisotropy may be enhanced due to surface effects.
Indeed, this large $M_{orb}$ has been observed for ZnO:Fe nano-particles \cite{KataokaFe}.
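The connection between a weak $L_2$ dichroism and a large orbital moment follows from the XMCD sum rules, which give $M_{orb}/M_{spin}^{\rm eff} = 2q/(9p-6q)$, where $p$ is the integral of the XMCD signal over the $L_3$ edge and $q$ the integral over both edges [Chen ${et}$ ${al}$., Phys. Rev. Lett. {\bf 75}, 152 (1995)]. A minimal sketch of this estimate:
\begin{verbatim}
def orbital_to_spin_ratio(p, q):
    # p: XMCD integral over L3; q: XMCD integral over L3 + L2.
    # A vanishing L2 dichroism gives q = p and a ratio of 2/3,
    # i.e. a large orbital moment, consistent with the weak L2 signal.
    return 2.0 * q / (9.0 * p - 6.0 * q)
\end{verbatim}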
Figure 4(c) shows the Fe 2$p$$\rightarrow$3$d$ XMCD spectra taken in the TEY mode at various magnetic fields.
In Fig. 4(d), the XMCD intensities due to Fe$^{3+}$($O_h$) and Fe$^{2+}$($T_d$) are plotted as a function of magnetic field.
The intensity due to Fe$^{3+}$($O_h$) increases with magnetic field but persists at low fields down to $H$$=$0.2 T,
while the XMCD intensity due to Fe$^{2+}$($T_d$) remains unchanged with magnetic field.
These results indicate that Fe$^{3+}$($O_h$) contributes to both the ferromagnetism and paramagnetism and
that Fe$^{2+}$($T_d$) contributes only to the ferromagnetism.
Figures 5(a) and 5(b) show the Co 2$p$$\rightarrow$3$d$ XAS and XMCD spectra of the ferromagnetic ZnO:(Fe,Co) nano-particles, respectively, taken at $H$$=$1 T.
The Co 2$p$$\rightarrow$3$d$ XMCD intensity taken in the TEY mode was finite,
while the XMCD intensity taken in the TFY mode did not show finite intensity.
This suggests that the Co ions in the surface region are magnetically active as in the case of Fe.
One can see that the Co 2$p$$\rightarrow$3$d$ XMCD spectrum, taken in the TEY mode, comes from the Co$^{2+}$($T_d$) and Co$^{3+}$($T_d$) ions.
Figure 5(c) shows the Co 2$p$$\rightarrow$3$d$ XMCD spectra taken at various magnetic fields,
and Fig. 5(d) shows the Co 2$p$$\rightarrow$3$d$ XMCD intensity as a function of magnetic field.
The intensity increases with magnetic field, indicating that the ionic Co atoms in the surface region are paramagnetic
and that the ferromagnetic component of the Co ions is negligibly small.
The negligibly weak XMCD signals in the spectra recorded in the TFY mode indicate that the Co ions in the inner core region are antiferromagnetically coupled.
We thus conclude that the ferromagnetism of the ZnO:(Fe,Co) nano-particles comes only from the Fe ions in the surface region.
It should be noted that the Fe 2$p$$\rightarrow$3$d$ XMCD spectra of ZnO:(Fe,Co) indicate the spins of
Fe$^{3+}$($O_h$) and Fe$^{2+}$($T_d$) signals to be in the same directions.
Therefore the segregation of ferromagnetic or ferrimagnetic Fe oxides such as ZnFe$_2$O$_4$ \cite{SNakashima, MHofmann},
$\gamma$-Fe$_2$O$_3$ \cite{SBProfeta}, and Fe$_3$O$_4$ \cite{RCornell} can be excluded because
in these materials Fe$^{3+}$($T_d$) and Fe$^{3+}$($O_h$) are antiferromagnetically coupled \cite{MAGilleo}.
Considering this and from the XRD, SAED and TEM results,
we conclude that the ferromagnetism in these nano-particles are intrinsic.
A schematic picture of hole-mediated exchange interaction between Fe$^{3+}$($O_h$) and Fe$^{2+}$($T_d$) ions is shown in Fig. 6.
\section{Conclusion}
In summary, we have investigated the electronic structure and magnetism of the
paramagnetic (Mn,Co)-co-doped ZnO and ferromagnetic (Fe,Co)-co-doped ZnO nano-particles using 2$p$$\rightarrow$3$d$ XAS and XMCD.
In the case of ZnO:(Mn,Co) nano-particles, the doped Mn and Co atoms are in a mixed-valence (2+, 3+, and 4+)
state and the relative concentrations of the high-valence (3+ and 4+) Mn and Co ions are higher in the surface region than in the inner core region.
Mn and Co 2$p$$\rightarrow$3$d$ XMCD results suggest that the paramagnetism comes from the Co$^{2+}$, Mn$^{3+}$ and Mn$^{4+}$ states.
In the case of the ZnO:(Fe,Co) nano-particles, too, the doped Fe and Co atoms
are found to be in a mixed-valence (2+ and 3+) state and
the relative concentrations of the Fe$^{3+}$ and Co$^{3+}$ ions are higher in the surface region than in the inner core region.
Fe and Co 2$p$$\rightarrow$3$d$ XMCD signals due to the ferromagnetic Fe ions and paramagnetic Fe and Co ions were observed in the surface region
while no appreciable XMCD signals were observed in the inner core region.
From these results, we suggest that the surface region is magnetically active and
Fe$^{3+}$ contributes to both the ferromagnetism and paramagnetism,
and that Fe$^{2+}$ contributes only to the ferromagnetism.
On the other hand, the ionic Co atoms in the surface region are paramagnetic, and
the ferromagnetic component of the Co ions is negligibly small.
Considering that the Fe$^{3+}$ ions are created due to Zn vacancies,
we conclude that the ferromagnetism of ZnO:(Fe,Co) nano-particles comes
from the hole-mediated exchange interaction between Fe$^{3+}$($O_h$) and Fe$^{2+}$($T_d$) in the surface region.
\section{Acknowledgments}
The experiment at PF was approved by the Photon Factory Program Advisory Committee (Proposal No. 2008G010, 2010G187, and 2010S2-001).
The work was supported by a Grant-in-Aid for Scientific Research (S22224005) from JSPS, Japan,
a Global COE Program “the Physical Sciences Frontier”, from MEXT, Japan,
an Indo-Japan Joint Research Project “Novel Magnetic Oxide Nano-Materials Investigated by Spectroscopy and ab-initio Theories" from JSPS, Japan, and
the Quantum Beam Technology Development Program Search and Development of Functional Materials Using Fast Polarization-Controlled Soft X-Rays from JST, Japan.
\section{Introduction}
Lung Ultrasound (LUS) imaging has presented itself to be an effective bedside tool for monitoring COVID-19 patients \cite{Mento2020On2019, Raheja2019ApplicationReview, Amatya2018DiagnosticSetting}. Several AI based applications have emerged that help with diagnosis and identification of COVID-19 lung biomarkers \cite{Born2021AcceleratingAnalysis, Born2020POCOVID-net:POCUS, Roy2020DeepUltrasound, VanSloun2020LocalizingResults, Xue2021ModalityInformation, Gare2021DenseDetectionb}. Most of these methods rely on expert annotated data for learning, demanding scarce and expensive time from expert physicians and radiologists involved in the mitigation of the COVID-19 pandemic. This raises a need for label efficient learning techniques.
Monitoring patient severity and making prognostic predictions play a critical role in the allocation of limited medical resources. For this, several AI based patient severity scoring techniques have recently been proposed \cite{Roy2020DeepUltrasound, Xue2021ModalityInformation} which rely on video- and frame-based annotations. Labeling all of the individual frames in an ultrasound video clip is, however, time-consuming and expensive. It would be preferable to label only the video clip as a whole and treat the clip-level severity label as a pseudo severity label for each of its frames. But doing so introduces label noise, as not all the frames in a clip actually display the same severity signs. For instance, the B-line artifact, which is indicative of an unhealthy lung, is not seen consistently in all the frames of an unhealthy lung ultrasound clip, so not all the frames show the same level of disease state. We propose a contrastive learning strategy to mitigate the label noise introduced by the use of such weak frame severity labels obtained directly from the corresponding video severity label.
Contrastive learning has been used previously in the literature in semi- and self-supervised learning techniques \cite{Chen2020ARepresentations}, and quite a few applications of it have already been presented in the medical domain \cite{ZhangCONTRASTIVETEXT, Wang2020ContrastiveClassification, Xue2021ModalityInformation}. Contrastive learning acts as a way to regularize feature embeddings to learn discriminative features, enforcing intra-class features to have a greater overlap (or similarity) than inter-class features by using objective functions that operate on the cosine similarity of the feature embeddings. Many techniques apply contrastive learning to differentiate COVID-19, healthy, and other pneumonic conditions \cite{ZhangCONTRASTIVETEXT, Chen2020MomentumImages}. \citet{Chen2020MomentumImages} applied contrastive learning on CT scans as a few-shot COVID-19 diagnosis technique by bringing together the feature embeddings of the same classes and pulling apart the feature embeddings of different classes. Similarly, \citet{ZhangCONTRASTIVETEXT} applied contrastive learning on CT scans and paired text to enhance the network's domain invariance without using any expert annotation. \citet{Xue2021ModalityInformation} applied contrastive learning on patient-level feature embeddings in an attempt to align features from two different modalities, LUS and clinical information, to predict patient severity. The LUS feature embeddings are high-level embeddings aggregated from frame-level features to ultrasound zone-level features. In addition to making the feature embeddings of the two modalities align, they preserve the patient-severity-discriminative features through novel additional loss components in the contrastive loss. Taking a cue from them, we also augment the contrastive loss with additional terms to retain ultrasound-severity-discriminative features.
We propose a weakly supervised training methodology that applies contrastive learning to the prediction of the ultrasound video clip severity score, making use of the noisy frame severity scores directly obtained from the corresponding video severity score. We show that the proposed contrastive learning setup is more robust to the noise in these weak frame severity labels and thus generalizes better than training based on the cross-entropy loss alone.
\section{Methodology}
\subsubsection{Problem Statement}
Given an ultrasound B-mode grey image $I_g$, the task is to find a function $F \colon I_g \to L$ that maps the image $I_g$ to an ultrasound severity score label $L \in \{0, 1, 2, 3\}$. Because the pleural line produces distinct artifacts (A-lines, B-lines) when scattering ultrasound based on the lung condition, the classification model should learn the underlying mappings between the pleural line, the artifacts, and the pixel values in order to make its predictions.
\vspace{-2em}
\subsection{Data}
We compiled a lung ultrasound dataset with linear and curvilinear videos sourced from the publicly usable subset of the POCOVID-Net dataset \cite{Born2020POCOVID-net:POCUS, Born2021AcceleratingAnalysis} (128 videos), as well as our own private dataset (160 videos). Our dataset consists of multiple ultrasound B-scans of left and right lung regions at depths ranging from 4cm to 6cm under different scan settings, obtained using a Sonosite X-Porte ultrasound machine. The combined dataset consists of ultrasound scans of healthy and COVID-19 patients, totaling 288 videos (113 Healthy and 175 COVID-19) resulting in about 50K images. \figureref{fig:Lung-dataset} shows the data distribution into the various ultrasound severity scores and probes.
We use the same 4-level ultrasound severity scoring scheme as defined in \cite{SimpleClinicalTrials.gov} and similarly used in \cite{Roy2020DeepUltrasound}. Score-0 indicates a normal lung with the presence of a continuous pleural line and horizontal A-line artifacts. Scores 1 to 3 signify an abnormal lung, wherein score-1 indicates the presence of alterations in the pleural line with $\leq 5$ vertical B-line artifacts, score-2 indicates the presence of $> 5$ B-lines, and score-3 signifies confounding B-lines with large consolidations. All the manual labeling was performed by individuals with at least a month of training from a pulmonary ultrasound specialist. Refer to \figureref{fig:gradcam_results} for sample images corresponding to the severity scores.
\begin{figure}[!tbp]
\centering
\resizebox{0.8\columnwidth}{!}{
\begin{minipage}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{Lung_Dataset_dist_plot.pdf}
\caption{\small The distribution of ultrasound video clips into various severity scores and probes.}
\label{fig:Lung-dataset}
\end{minipage}
\hfill
\begin{minipage}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{clr_roc_plot.pdf}
\caption{\small RoC plots of the contrastive learning trained model for the video-based severity scoring.}
\label{fig:clr_roc_plot}
\end{minipage}
}
\end{figure}
\subsubsection{Data Preprocessing}
We perform dataset upsampling to address the class imbalance in the training data, wherein we upsample all the minority-class data to obtain a balanced training dataset \cite{Rahman2013AddressingDatasets}. All the images are resized to 312x232 pixels using bilinear interpolation. Data augmentation is not applied.
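A simple random-oversampling sketch of this upsampling step is shown below; it is illustrative only, and the actual implementation details may differ.
\begin{verbatim}
import random
from collections import Counter

def upsample(frames, labels):
    # replicate minority-class samples until every severity score
    # is represented as often as the majority class
    counts = Counter(labels)
    target = max(counts.values())
    out_frames, out_labels = list(frames), list(labels)
    for cls, n in counts.items():
        idx = [i for i, l in enumerate(labels) if l == cls]
        extra = random.choices(idx, k=target - n)
        out_frames += [frames[i] for i in extra]
        out_labels += [cls] * len(extra)
    return out_frames, out_labels
\end{verbatim}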
\subsection{Training Strategy}
To assess the ultrasound severity score of the video clips, we make use of the video labels as noisy weak labels for the corresponding video frames. We augment the cross-entropy training objective for the classification task with the contrastive learning objective in order to learn features that are robust to the frame-level label noise.
\subsubsection{Contrastive Learning Objective}
The proposed contrastive learning objective is inspired by \cite{Xue2021ModalityInformation}, wherein discriminative representations are learned using a contrastive loss consisting of three parts that respectively handle intra-class alignment $\mathcal{L}^{IA}$, inter-class contrastive learning $\mathcal{L}^{CL}$, and contrastive continuity $\mathcal{L}^{CC}$. The intra-class alignment objective $\mathcal{L}^{IA}$ pulls feature embeddings with the same severity score closer together, the inter-class contrastive learning objective $\mathcal{L}^{CL}$ pushes apart feature embeddings with different severity scores, and the contrastive continuity objective $\mathcal{L}^{CC}$ ensures that the ordering among the severity scores is preserved. The proposed contrastive learning approach is implemented by optimizing the following objective:
\begin{equation}
\label{eq:Lcon}
\mathcal{L}_{con} = \frac{1}{N} \sum^N_{i=1} [ \mathcal{L}_i^{IA} + \mathcal{L}_i^{CL} + \mathcal{L}_i^{CC}]
\end{equation}
where,
\begin{equation}
\mathcal{L}_i^{IA} = 1 - \mathrm{sim}(\mathbf{u}_i, \mathbf{u}_j) \quad \forall i, \exists j, |s_i - s_j| = 0
\end{equation}
\begin{equation}
\mathcal{L}_i^{CL} = \sum_{s} \mathrm{sim}(\mathbf{u}_k, \mathbf{u}_i) \quad \forall i, \exists k, |s_i - s_k| > 0
\end{equation}
\begin{equation}
\mathcal{L}_i^{CC} = \sum_{s} \max(\mathrm{sim}(\mathbf{u}_m, \mathbf{u}_i) - \mathrm{sim}(\mathbf{u}_n, \mathbf{u}_i),\, 0)
\end{equation}
$$\forall i, \exists m,n, |s_i - s_m| > 0, |s_i - s_n| > 0, |s_i - s_m| > |s_i - s_n| $$
where $N$ is the total number of frames and $\mathrm{sim}(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a}^T \mathbf{b}}{\|\mathbf{a}\| \|\mathbf{b}\|}$ is the cosine similarity between vectors $\mathbf{a}$ and $\mathbf{b}$. $\mathbf{u}$ is the feature embedding extracted after the global average pooling layer of the network, which is a 2048-dimensional vector, and $s$ is the ultrasound severity score of the corresponding frame feature $\mathbf{u}$.
Unlike \cite{Xue2021ModalityInformation}, which only relates adjacent severity levels, we explicitly relate all severity levels, enforcing a linear relationship that preserves the sequential nature of the possible output choices (e.g., severity-1 is closer to severity-2 than it is to severity-3) while simultaneously achieving the desired contrast in the loss. Our approach thereby avoids the possibility of the model learning multi-dimensional distances among the outputs, which could, for example, make severity-0 seem very close to severity-3 if the model incorrectly learned a cyclical order among the severity levels. Prior systems do not take this ordinal relationship into account, which can give rise to unnatural orderings, as can be observed in the confusion matrix shown in \figureref{fig:confusion_matrix}.
During training, for the input frame $i$ under consideration, we randomly sample frames $k, m, n$ from video clips with severity scores different from that of $i$, and randomly select frame $j$ from the same video clip as $i$ within a 10-frame window.
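For concreteness, a minimal PyTorch-style sketch of the per-anchor loss terms is given below; the helper name \texttt{contrastive\_terms} and the way the sampled frames are passed in are our own illustrative assumptions, following the sampling strategy described above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def contrastive_terms(u_i, u_j, u_k, u_m, u_n):
    # u_i: 2048-d anchor embedding from the global average pooling layer
    # u_j: same severity score as u_i           (|s_i - s_j| = 0)
    # u_k: different severity score from u_i    (|s_i - s_k| > 0)
    # u_m, u_n: different scores, with |s_i - s_m| > |s_i - s_n|
    sim = lambda a, b: F.cosine_similarity(a, b, dim=0)
    l_ia = 1.0 - sim(u_i, u_j)                  # intra-class alignment
    l_cl = sim(u_k, u_i)                        # inter-class contrast
    l_cc = torch.clamp(sim(u_m, u_i) - sim(u_n, u_i), min=0.0)  # continuity
    return l_ia + l_cl + l_cc                   # one summand of L_con
\end{verbatim}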
\subsubsection{Overall Training Objective}
The overall training objective $\mathcal{L}_{overall}$ consists of the weighted combination of cross-entropy loss $\mathcal{L}_{ce}$ for classification error and contrastive learning loss $\mathcal{L}_{con}$ for feature regularization:
\begin{equation}
\mathcal{L}_{overall} = \alpha \mathcal{L}_{ce} + (1 - \alpha) \mathcal{L}_{con}
\end{equation}
where the cross-entropy loss is $\mathcal{L}_{ce} = -\frac{1}{N} \sum_i \mathbf{g}_i \log \mathbf{p}_i$, in which $N$ is the total number of frames, $\mathbf{g}_i$ is the ground-truth one-hot severity score, and $\mathbf{p}_i$ is the vector of predicted probability scores from the last softmax layer of the network; the contrastive learning loss $\mathcal{L}_{con}$ is as defined in \equationref{eq:Lcon}. For all our experiments we set $\alpha = 0.5$.
Using the frame-level predicted probability scores $\mathbf{p}_i$, we calculate the video's predicted probability scores $\mathbf{p}^v$ by taking, for each severity category, the maximum score over all of the video's frames:
\begin{equation}
\label{eq:video_prediction}
\mathbf{p}^v = \mathrm{softmax}(\max_{i \in v} \mathbf{p}_i[0], \max_{i \in v} \mathbf{p}_i[1], \max_{i \in v} \mathbf{p}_i[2], \max_{i \in v} \mathbf{p}_i[3])
\end{equation}
where $\mathbf{p}_i[0], \mathbf{p}_i[1], \mathbf{p}_i[2], \mathbf{p}_i[3]$ are the probability scores for severity categories 0 to 3, respectively, of frame $i$ belonging to video $v$. Using these video-level predicted probability scores $\mathbf{p}^v$ we evaluate the video-based severity scoring metrics of the model.
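A minimal sketch of this aggregation step, assuming the per-frame softmax scores of one video are stacked into a single tensor, is:
\begin{verbatim}
import torch

def video_probability(frame_probs):
    # frame_probs: (num_frames, 4) softmax scores for one video's frames.
    # Take the per-category maximum over all frames, then renormalize
    # with a softmax to obtain the video-level score p^v.
    per_category_max = frame_probs.max(dim=0).values   # shape (4,)
    return torch.softmax(per_category_max, dim=0)
\end{verbatim}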
\subsubsection{Implementation}
The network is implemented in PyTorch and trained using stochastic gradient-based optimization \cite{Bottou2010Large-scaleDescent} with the Adam optimizer \cite{Kingma2015Adam:Optimization} and an initial learning rate of $0.001$. The model is trained on an Nvidia Titan RTX GPU with a batch size of 8 for 30 epochs for the classification task. We use the ReduceLROnPlateau learning-rate scheduler, which reduces the learning rate by a factor of 0.5 when the performance metric (accuracy) plateaus on the validation set. For the final evaluation, we pick the model with the highest validation set accuracy to test on the held-out test set.
\subsubsection{Metrics}
For the severity classification, we report accuracy, precision, recall, and F1 score \cite{Born2020POCOVID-net:POCUS, Roy2020DeepUltrasound}. We also report the receiver operating characteristic (ROC) curve along with its area under the curve (AUC) metric \cite{Kim2020ChangesStudy}; for this metric we take the weighted average, with weights corresponding to the support of each class, and handle the multi-class setting with the one-vs-all approach \cite{Fawcett2006AnAnalysis}.
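As an illustrative sketch (not our exact evaluation code), these metrics can be computed with scikit-learn, where \texttt{y\_true} holds the integer severity labels and \texttt{y\_prob} the $N \times 4$ predicted probability scores:
\begin{verbatim}
from sklearn.metrics import (accuracy_score,
                             precision_recall_fscore_support,
                             roc_auc_score)

def report_metrics(y_true, y_prob):
    y_pred = y_prob.argmax(axis=1)
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted")
    # One-vs-rest ROC AUC, weighted by the support of each class.
    auc = roc_auc_score(y_true, y_prob,
                        multi_class="ovr", average="weighted")
    return {"accuracy": acc, "precision": prec,
            "recall": rec, "f1": f1, "auc": auc}
\end{verbatim}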
\section{Experiment}
We train a ResNet-50 (RN50) model \cite{He2016DeepRecognition}, commonly used for classification and benchmarking, using the proposed contrastive learning setup and compare its performance with the same model trained only with the cross-entropy loss, in order to assess the robustness that the contrastive learning objective provides against the noisy weak frame severity score labels. We also compare against the model trained using the original contrastive learning loss of \citet{Xue2021ModalityInformation} and a TSM-based \cite{Lin2018TSM:Understanding} video classification network similar to \cite{GareTheAI} (training details in Appendix-\ref{apn:tsm_training}). \emph{We conduct five independent runs; in each run we randomly split the videos into train, validation, and test sets with a 70\%, 10\%, and 20\% split ratio respectively, maintaining the same split ratio for all the individual severity-scored clips and ensuring that all frames corresponding to a video remain in the same split.} The training set is upsampled to address the class imbalance \cite{Rahman2013AddressingDatasets}. We report the resulting metrics in the form of mean and standard deviation over the five independent runs.
\vspace{-2em}
\begin{table*}[!ht]
\centering
\caption{Frame-based lung severity classification AUC of ROC, Accuracy, Precision, Recall, and F1 scores on the lung dataset. Highest scores are shown in bold.}
\label{tab:frame_based_classification}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Method & AUC of ROC & accuracy & severity & precision & recall & F1-score\\
\hline
\hline
\multirow{4}{*}{CE RN50} & \multirow{4}{*}{0.898 $\pm$ 0.016} & \multirow{4}{*}{0.693 $\pm$ 0.030} & score-0 & 0.872 $\pm$ 0.071 & 0.809 $\pm$ 0.037 & 0.836 $\pm$ 0.021 \\
& & & score-1 & 0.529 $\pm$ 0.053 & 0.536 $\pm$ 0.195 & 0.517 $\pm$ 0.116 \\
& & & score-2 & 0.763 $\pm$ 0.068 & 0.705 $\pm$ 0.089 & 0.727 $\pm$ 0.047 \\
& & & score-3 & 0.167 $\pm$ 0.048 & 0.296 $\pm$ 0.067 & 0.212 $\pm$ 0.056 \\
\cline{4-7}
& & & avg & 0.730 $\pm$ 0.038 & 0.693 $\pm$ 0.030 & 0.703 $\pm$ 0.035 \\
\hline
\multirow{4}{*}{proposed CL RN50} & \multirow{4}{*}{\bfseries{0.903 $\pm$ 0.022}} & \multirow{4}{*}{0.758 $\pm$ 0.042} & score-0 & 0.851 $\pm$ 0.039 & 0.886 $\pm$ 0.056 & 0.866 $\pm$ 0.016 \\
& & & score-1 & 0.610 $\pm$ 0.131 & 0.612 $\pm$ 0.212 & 0.599 $\pm$ 0.156 \\
& & & score-2 & 0.775 $\pm$ 0.070 & 0.771 $\pm$ 0.040 & 0.771 $\pm$ 0.041 \\
& & & score-3 & 0.373 $\pm$ 0.168 & 0.223 $\pm$ 0.099 & 0.264 $\pm$ 0.100 \\
\cline{4-7}
& & & avg & 0.752 $\pm$ 0.048 & 0.758 $\pm$ 0.042 & 0.748 $\pm$ 0.044 \\
\hline
\multirow{4}{*}{original CL RN50} & \multirow{4}{*}{0.899 $\pm$ 0.020} & \multirow{4}{*}{\bfseries{0.759 $\pm$ 0.041}} & score-0 & 0.855 $\pm$ 0.056 & 0.915 $\pm$ 0.024 & 0.883 $\pm$ 0.033 \\
& & & score-1 & 0.620 $\pm$ 0.060 & 0.555 $\pm$ 0.081 & 0.583 $\pm$ 0.065 \\
& & & score-2 & 0.764 $\pm$ 0.021 & 0.761 $\pm$ 0.076 & 0.760 $\pm$ 0.038 \\
& & & score-3 & 0.429 $\pm$ 0.294 & 0.295 $\pm$ 0.142 & 0.318 $\pm$ 0.171 \\
\cline{4-7}
& & & avg & \bfseries{0.754 $\pm$ 0.046} & \bfseries{0.759 $\pm$ 0.041} & \bfseries{0.752 $\pm$ 0.041} \\
\hline
\end{tabular}
}
\end{table*}
\begin{table}[!ht]
\centering
\caption{Video-based lung severity classification AUC of ROC, Accuracy, Precision, Recall, and F1 scores on the lung dataset. Highest scores are shown in bold.}
\label{tab:video_based_classification}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Method & AUC of ROC & accuracy & severity & precision & recall & F1-score\\
\hline
\multirow{4}{*}{CE RN50} & \multirow{4}{*}{0.842 $\pm$ 0.027} & \multirow{4}{*}{0.655 $\pm$ 0.055} & score-0 & 0.851 $\pm$ 0.083 & 0.739 $\pm$ 0.027 & 0.788 $\pm$ 0.036 \\
& & & score-1 & 0.523 $\pm$ 0.058 & 0.527 $\pm$ 0.156 & 0.516 $\pm$ 0.098 \\
& & & score-2 & 0.751 $\pm$ 0.088 & 0.684 $\pm$ 0.120 & 0.708 $\pm$ 0.077 \\
& & & score-3 & 0.243 $\pm$ 0.095 & 0.440 $\pm$ 0.150 & 0.312 $\pm$ 0.116 \\
\cline{4-7}
& & & avg & 0.704 $\pm$ 0.053 & 0.655 $\pm$ 0.055 & 0.669 $\pm$ 0.055 \\ \hline
\multirow{4}{*}{proposed CL RN50} & \multirow{4}{*}{0.867 $\pm$ 0.020} & \multirow{4}{*}{\bfseries{0.734 $\pm$ 0.065}} & score-0 & 0.832 $\pm$ 0.051 & 0.843 $\pm$ 0.071 & 0.835 $\pm$ 0.044 \\
& & & score-1 & 0.630 $\pm$ 0.162 & 0.636 $\pm$ 0.199 & 0.621 $\pm$ 0.154 \\
& & & score-2 & 0.761 $\pm$ 0.095 & 0.768 $\pm$ 0.071 & 0.761 $\pm$ 0.060 \\
& & & score-3 & 0.457 $\pm$ 0.290 & 0.320 $\pm$ 0.160 & 0.364 $\pm$ 0.201 \\
\cline{4-7}
& & & avg & 0.738 $\pm$ 0.068 & \bfseries{0.734 $\pm$ 0.065} & \bfseries{0.730 $\pm$ 0.064} \\ \hline
\multirow{4}{*}{original CL RN50} & \multirow{4}{*}{0.879 $\pm$ 0.026} & \multirow{4}{*}{0.731 $\pm$ 0.036} & score-0 & 0.819 $\pm$ 0.077 & 0.861 $\pm$ 0.017 & 0.837 $\pm$ 0.040 \\
& & & score-1 & 0.639 $\pm$ 0.026 & 0.582 $\pm$ 0.093 & 0.606 $\pm$ 0.058 \\
& & & score-2 & 0.763 $\pm$ 0.048 & 0.747 $\pm$ 0.117 & 0.747 $\pm$ 0.051 \\
& & & score-3 & 0.503 $\pm$ 0.261 & 0.400 $\pm$ 0.219 & 0.396 $\pm$ 0.130 \\
\cline{4-7}
& & & avg & 0.739 $\pm$ 0.045 & 0.731 $\pm$ 0.036 & 0.726 $\pm$ 0.036 \\ \hline
\multirow{4}{*}{CE TSM} & \multirow{4}{*}{\bfseries{0.897 $\pm$ 0.025}} & \multirow{4}{*}{0.710 $\pm$ 0.060} & score-0 & 0.911 $\pm$ 0.059 & 0.730 $\pm$ 0.139 & 0.801 $\pm$ 0.082 \\
& & & score-1 & 0.604 $\pm$ 0.081 & 0.764 $\pm$ 0.109 & 0.672 $\pm$ 0.079 \\
& & & score-2 & 0.745 $\pm$ 0.085 & 0.768 $\pm$ 0.026 & 0.755 $\pm$ 0.056 \\
& & & score-3 & 0.276 $\pm$ 0.097 & 0.280 $\pm$ 0.098 & 0.270 $\pm$ 0.089 \\
\cline{4-7}
& & & avg & \bfseries{0.744 $\pm$ 0.036} & 0.710 $\pm$ 0.060 & 0.716 $\pm$ 0.054 \\
\hline
\end{tabular}
}
\end{table}
\vspace{-2em}
\section{Results and Discussions}
\tableref{tab:frame_based_classification} shows the mean and standard deviation of the frame-based severity scoring metrics, obtained by evaluating on the held-out test set using the models from the five independent runs. We observe that the contrastive learning (CL) trained models perform better than the cross-entropy (CE) trained model, with the original and the proposed contrastive learning losses obtaining similar scores and the original loss performing slightly better.
We calculate the video-based severity scoring metrics of the models from the video-level predicted probability scores $\mathbf{p}^v$, obtained by taking the per-category maximum over all the corresponding video frames' predicted probability scores, as defined in \equationref{eq:video_prediction}. \tableref{tab:video_based_classification} shows the mean and standard deviation of the video-based severity scoring metrics, obtained by evaluating on the held-out test set using the models from the five independent runs. We again observe that the contrastive learning (CL) trained models perform better than the cross-entropy (CE) trained model and have performance comparable to the video-based TSM model, with our proposed loss function achieving the highest accuracy, recall, and F1-score. The macro-average and per-severity-score ROC plots of the CL trained model using the proposed loss for video-based prediction are shown in \figureref{fig:clr_roc_plot}. The lower performance on severity score-3 compared to the other scores could be due to the limited amount of training data for score-3. \figureref{fig:confusion_matrix} shows the confusion matrices of both contrastive-loss trained models over the combined five runs.
Comparing the models' scoring metrics on the held-out test set with those on the validation (val) set used for hyperparameter optimization (see \tableref{tab:test_vs_val}), we see that although the CE trained model achieved higher accuracy and F1-score (avg) on the validation set than our CL trained model, it was outperformed by the CL trained model on the held-out test set. This suggests that the CL trained model generalized better to unseen data, which is indicative of the robust features learned using the contrastive loss.
We visualize the models' layer-2 Grad-CAM \cite{Selvaraju2016Grad-CAM:Localization} and show the mean Grad-CAM image corresponding to the four severity scores, taken over the entire test set ($\sim10$K images) for the best run, in \figureref{fig:gradcam_results}. We also show Grad-CAM on four randomly selected images for which our CL trained model appeared to be looking at the correct locations (pleural line and A-line \& B-line artifacts), whereas the CE trained model was basing its predictions on non-lung tissue. For these four test images, the CL model correctly predicted the severity scores, whereas the CE model got all predictions wrong. This suggests that the contrastive learning objective leads to learning better discriminative features.
\begin{figure}[!tbp]
\centering
\begin{minipage}[b]{0.47\textwidth}
\includegraphics[width=0.8\textwidth]{test_confusion_matrix_simLrn.png}
\end{minipage}
\hfill
\begin{minipage}[b]{0.47\textwidth}
\includegraphics[width=0.8\textwidth]{test_confusion_matrix_cntLrn.png}
\end{minipage}
\caption{\small Confusion matrices of the original (left) and proposed (right) contrastive learning losses. Our proposed loss is confused mainly between adjacent severity scores, which is reasonable, and is less confused between non-adjacent severity scores than the original loss.}
\label{fig:confusion_matrix}
\end{figure}
\vspace{-2em}
\begin{table*}[!ht]
\centering
\caption{Performance comparison of frame-based score prediction on Test and Val dataset.}
\label{tab:test_vs_val}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset & Method & AUC of ROC & accuracy & F1-score\\
\hline
\multirow{2}{*}{Test set} & CE RN50 & 0.898 $\pm$ 0.016 & 0.693 $\pm$ 0.030 & 0.703 $\pm$ 0.035 \\
& CL RN50 & \bfseries{0.903 $\pm$ 0.022} & \bfseries{0.758 $\pm$ 0.042} & \bfseries{0.748 $\pm$ 0.043} \\
\hline
\multirow{2}{*}{Val set} & CE RN50 & 0.837 $\pm$ 0.074 & \bfseries{0.689 $\pm$ 0.094} & \bfseries{0.685 $\pm$ 0.093} \\
& CL RN50 & \bfseries{0.839 $\pm$ 0.048} & 0.652 $\pm$ 0.069 & 0.633 $\pm$ 0.091 \\
\hline
\end{tabular}
}
\end{table*}
\newlength{\width}
\setlength{\width}{0.55 in}
\newlength{\height}
\setlength{\height}{0.45 in}
\begin{figure}[!ht]
\centering
\setlength{\tabcolsep}{1pt}
\def\arraystretch{0.5}
\newcolumntype{C}{>{\centering\arraybackslash}m{\width}<{}}
\newcolumntype{F}{>{\centering\arraybackslash}m{0.3\width}<{}}
\resizebox*{\columnwidth}{0.35\textheight}{
\begin{tabular}{FF CC CC}
&
&
\tiny score-0 &
\tiny score-1 &
\tiny score-2 &
\tiny score-3 \\
\rotatebox[origin=c]{90}{\centering \tiny grey} & &
\includegraphics[height = \height, width = \width]{score0_Reg_clinicalreview_mov1_frame197.png} &
\includegraphics[height = \height, width = \width]{score1_LCPE4_frame69.png} &
\includegraphics[height = \height, width = \width]{score2_Cov-grep-7510_frame167.png} &
\includegraphics[height = \height, width = \width]{score3_Cov-grep-7507_frame9.png} \\
\multirow{2}{*}{\raisebox{0.4cm}{\rotatebox[origin=c]{90}{\parbox{1.5\height}{\centering \tiny random sample}}}} &
\raisebox{0.4cm}{\rotatebox[origin=c]{90}{\parbox{\height}{\centering \tiny CE RN50}}} &
\includegraphics[height = \height, width = \width]{nrm_R6_score-0_Label-0,Pred-1_cam.png} &
\includegraphics[height = \height, width = \width]{nrm_R1_score-1_Label-1,Pred-2_cam.png} &
\includegraphics[height = \height, width = \width]{nrm_R6_score-2_Label-2,Pred-1_cam.png} &
\includegraphics[height = \height, width = \width]{nrm_R1_score-3_Label-3,Pred-2_cam.png} \\ [-0ex]
&
\raisebox{0.4cm}{\rotatebox[origin=c]{90}{\parbox{\height}{\centering \tiny CL RN50}}} &
\includegraphics[height = \height, width = \width]{clr_R6_score-0_Label-0,Pred-0_cam.png} &
\includegraphics[height = \height, width = \width]{clr_R1_score-1_Label-1,Pred-1_cam.png} &
\includegraphics[height = \height, width = \width]{clr-R6_score-2_Label-2,Pred-2_cam.png} &
\includegraphics[height = \height, width = \width]{clr_R1_score-3_Label-3,Pred-3_cam.png} \\
\multirow{2}{*}{\raisebox{0.4cm}{\rotatebox[origin=c]{90}{\parbox{1.5\height}{\centering \tiny mean over testset}}}} &
\raisebox{0.4cm}{\rotatebox[origin=c]{90}{\parbox{\height}{\centering \tiny CE RN50}}} &
\includegraphics[height = \height, width = \width]{nrm_score-0_mean_cam.png} &
\includegraphics[height = \height, width = \width]{nrm_score-1_mean_cam.png} &
\includegraphics[height = \height, width = \width]{nrm_score-2_mean_cam.png} &
\includegraphics[height = \height, width = \width]{nrm_score-3_mean_cam.png} \\ [-0ex]
&
\raisebox{0.4cm}{\rotatebox[origin=c]{90}{\parbox{\height}{\centering \tiny CL RN50}}} &
\includegraphics[height = \height, width = \width]{cl_score-0_mean_cam.png} &
\includegraphics[height = \height, width = \width]{cl_score-1_mean_cam.png} &
\includegraphics[height = \height, width = \width]{cl_score-2_mean_cam.png} &
\includegraphics[height = \height, width = \width]{cl_score-3_mean_cam.png} \\
\end{tabular}
}
\caption{
\small Grad-CAM \cite{Selvaraju2016Grad-CAM:Localization} visualization of layer-2 of the cross-entropy (CE) and contrastive learning (CL) trained models on test images (B-mode grey) of the four severity scores. We observe that the CL trained model bases its predictions predominantly on the pleural line and the A-line \& B-line artifacts, whereas the CE trained model predominantly bases its predictions on the subcutaneous tissue above the pleural line.
\label{fig:gradcam_results}
\end{figure}
\vspace{-2em}
\section{Conclusion}
We demonstrated a weakly supervised method for scoring COVID-19 lung ultrasound clips using our proposed contrastive learning objective, which treats video-based severity labels as frame-based severity labels and thus reduces labeling cost. While these frame labels are noisy, we demonstrated that the contrastive learning objective is more robust to such label noise than the cross-entropy learning objective. We also showed that the frame-based model trained using the proposed contrastive learning loss achieves performance comparable to a video-based TSM model.
\midlacknowledgments{
This work was sponsored in part by US Army Medical contracts W81XWH-19-C0083 and W81XWH-19-C0101. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). We would also like to thank our collaborators at Carnegie Mellon University (CMU), Louisiana State University (LSU), and the University of Pittsburgh (Pitt).
We are pursuing intellectual-property protection. Galeotti serves on the advisory board of Activ Surgical, Inc. He and Rodriguez are involved in the startup Elio AI, Inc.
}
\end{document} |
\section{Adapter Synthesis} \label{sec:adapter_synthesis}
\subsection{An Algorithm for Adapter Synthesis}
The idea of counterexample-guided synthesis is to alternate between synthesizing candidate adapter expressions, and checking whether those expressions meet the desired specification.
When a candidate adapter expression fails to meet the specification, a counterexample is produced to guide the next synthesis attempt.
We are interested in synthesizing adapters that map the arguments of the target function to the arguments of the inner function, and map the return value of the inner function to that of the target function, in such a way that the behavior of the two functions match.
Our specification for synthesis is provided by the behavior of the target function, and we define counterexamples to be inputs on which the behavior of the target and inner functions differ for a given adapter.
Our adapter synthesis algorithm is presented in Algorithm \ref{alg:adapter_search} and illustrated in Figure~\ref{fig:adapter_synthesis}.
\begin{figure}[]
\centering
\includegraphics[scale=0.34,trim={0 5mm 9cm 0},clip]{figures/adapter_synthesis_diagram_v2}
\caption{Counterexample-guided adapter synthesis}
\label{fig:adapter_synthesis}
\end{figure}
Algorithm \ref{alg:adapter_search} terminates with either an adapter that makes the target and inner functions equivalent in their return values and side-effects, or an indication that the functions cannot be made equivalent using the adapter family we specify.
\begin{algorithm}[ht]
\LinesNumbered
\small
\SetNlSty{texttt}{[}{]}
\SetKwData{CurrentFadapter}{A}
\SetKwData{CurrentRadapter}{R}
\SetKwData{CEList}{test-list}
\SetKwData{CE}{counterexample}
\SetKwFunction{SynthesizeAdapter}{SynthesizeAdapter}
\SetKwFunction{CheckAdapter}{CheckAdapter}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{Pointers to the target function T and inner function I}
\Output{(argument adapter A, return value adapter R) or \textit{null}}
\CurrentFadapter $\leftarrow$ default-function-args-adapter\;
\CurrentRadapter $\leftarrow$ default-return-value-adapter\;
\CEList$\leftarrow$empty-list\;
\While{true}{
\CE $\leftarrow$ \CheckAdapter(\CurrentFadapter, \CurrentRadapter, T, I);\\
\eIf{\CE == null}{
\Return{(\CurrentFadapter, \CurrentRadapter)\;}
}{
\CEList.append(\CE)\;
}
(\CurrentFadapter, \CurrentRadapter) $\leftarrow$ \SynthesizeAdapter(\CEList, T, I)\;
\If{\CurrentFadapter == null}{
\Return{\textit{null}\;}
}
}
\caption{Counterexample-guided adapter synthesis}
\label{alg:adapter_search}
\end{algorithm}
\begin{algorithm}[ht]
\small
\LinesNumbered
\SetNlSty{texttt}{[}{]}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Concrete adapter A for function arguments and R for return value, target function pointer T, inner function pointer I}
\Output{Counterexample to the given adapter or \textit{null}}
\SetKwData{fadapter}{A}
\SetKwData{radapter}{R}
\SetKwData{args}{args}
\SetKwData{tRet}{T-return-value}
\SetKwData{iRet}{I-return-value}
\SetKwData{tSE}{T-side-effects}
\SetKwData{iSE}{I-side-effects}
\args $\leftarrow$ symbolic\;
\While{execution path available} {
\tRet, \tSE $\leftarrow$ T(\args)\;
\iRet, \iSE $\leftarrow$ I(\textit{adapt(\fadapter, \args))}\;
\If{ ! (equivalent(\tSE, \iSE) and equivalent(\tRet, adapt(\radapter, \iRet))) }{
\Return{concretize(\args)\;}
}
}
\Return{null\;}
\caption{CheckAdapter used by Algorithm \ref{alg:adapter_search}}
\label{alg:check_adapter}
\end{algorithm}
Algorithm \ref{alg:adapter_search} first initializes the current adapter to a default adapter.
In our implementation, we often use an `identity adapter' which sets every argument of the inner function to be its corresponding argument to the target function.
Next, the list of current tests is set to be the empty list.
At every iteration (until a correct adapter is found), a new counterexample is added to this list, and any subsequently generated candidate adapter must satisfy all tests in the list.
This provides the intuition for why the adapter search process makes progress: with every iteration, the candidate adapters become more `correct' in the sense that they produce the desired behavior on more known tests than any previous candidate adapter.
Algorithm \ref{alg:adapter_search} terminates if the space of candidate adapters allowed by the adapter family is finite.
In practice, we found the number of iterations to be small.
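For readers who prefer code to pseudocode, the driver loop of Algorithm \ref{alg:adapter_search} can be sketched in a few lines of Python; here \textit{check\_adapter} and \textit{synthesize\_adapter} stand in for the symbolic-execution procedures of Algorithms \ref{alg:check_adapter} and \ref{alg:synthesize_adapter}, and are placeholders rather than actual interfaces of our tool.
\begin{lstlisting}[style=nonumbers]
def adapter_synthesis(target, inner, default_adapter,
                      check_adapter, synthesize_adapter):
    # Counterexample-guided search (sketch of Algorithm 1).
    adapter = default_adapter   # e.g. the identity adapter
    tests = []                  # counterexamples accumulated so far
    while True:
        cex = check_adapter(adapter, target, inner)
        if cex is None:         # no distinguishing input on any path:
            return adapter      # the current adapter is correct
        tests.append(cex)
        adapter = synthesize_adapter(tests, target, inner)
        if adapter is None:     # no adapter in the family satisfies
            return None         # every recorded test
\end{lstlisting}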
\begin{algorithm}[]
\LinesNumbered
\small
\SetNlSty{texttt}{[}{]}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\SetKwData{CEList}{test-list}
\SetKwData{CE}{test}
\SetKwData{tRet}{T-return-value}
\SetKwData{iRet}{I-return-value}
\SetKwData{tSE}{T-side-effects}
\SetKwData{iSE}{I-side-effects}
\SetKwData{fadapter}{A}
\SetKwData{radapter}{R}
\SetKwData{eqCounter}{eq-counter}
\Input{List of previously generated counterexamples \CEList, target function pointer T, inner function pointer I}
\Output{(argument adapter \fadapter, return value adapter \radapter) or \textit{null}}
\fadapter $\leftarrow$ symbolic function args adapter\;
\radapter $\leftarrow$ symbolic return value adapter\;
\While{execution path available} {
\eqCounter $\leftarrow$ 0\;
\While{\eqCounter $<$ length(\CEList)} {
\tRet, \tSE $\leftarrow$ T(\CE)\;
\iRet, \iSE $\leftarrow$ I(adapt(\fadapter, \CE))\;
\eIf{ equivalent(\tSE, \iSE) \textit{and} equivalent(\tRet, adapt(\radapter, \iRet))}{
\eqCounter $\leftarrow$ \eqCounter + 1\;
} {break\;}
}
\If{ \eqCounter == length(\CEList) }{
\Return{(concretize(\fadapter), concretize(\radapter))\;}
}
}
\Return{null\;}
\caption{SynthesizeAdapter used by Algorithm \ref{alg:adapter_search}}
\label{alg:synthesize_adapter}
\end{algorithm}
The \textit{CheckAdapter} procedure (described in Algorithm \ref{alg:check_adapter}) first executes the target function with symbolic arguments and saves its return value and side-effects.
It then plugs the symbolic function arguments into the concrete adapter given as input, by calling the \textit{adapt} method, to produce adapted symbolic arguments for the inner function.
Algorithm \ref{alg:check_adapter} then executes the adapted inner function and saves its return value and a list of its side-effects.
Algorithm \ref{alg:check_adapter} is successful if it finds an inequivalence between (1) the side-effects of the target and inner functions, \textbf{or} (2) the target function\rq s return value and the adapted return value of the inner function, created by calling the \textit{adapt} method with the input return value adapter.
On success, it selects concrete function arguments that produce this inequivalence and returns them as a counterexample to the given input adapter.
On failure, it concludes no such inequivalence can be found on any execution path, and returns \textit{null}.
The \textit{SynthesizeAdapter} procedure, described in Algorithm \ref{alg:synthesize_adapter}, first concretely executes the target function with a test case from the input test list and saves the return value and side-effects.
It then plugs the concrete test case into the symbolic argument adapter, by calling the \textit{adapt} method, to create symbolic arguments for the inner function.
It then executes the inner function, saving its return value and side-effects.
If Algorithm \ref{alg:synthesize_adapter} finds equivalence between (1) side-effects of the target and inner functions, \textbf{and} (2) the target function\rq s return value and the inner function\rq s adapted return value, it considers this test case satisfied.
Finally, on line 15 of Algorithm \ref{alg:synthesize_adapter}, if it finds all tests to be satisfied, it concretizes the function argument and return value adapters and returns them.
The overall time it takes for Algorithm \ref{alg:synthesize_adapter} to find an adapter strongly depends on the space of operations permitted by the adapter family it operates on.
We describe the design of the adapter families we found useful in our evaluation in the next subsection.
\subsection{Adapter Families}
\subsubsection{Argument Substitution:}
This family of adapters allows replacement of any inner function argument by one of the target function arguments or a constant.
This simple family is useful, for instance, when synthesizing adapters between the cluster of C library functions that wrap the \textit{wait} system call, as shown in Section \ref{sec:evaluation}.
\subsubsection{Argument Substitution with Type Conversion:}
This family extends the argument substitution adapter family by allowing inner function arguments to be the result of a type conversion applied to a target function argument.
Since type information is not available at the binary level, this adapter tries all possible combinations of type conversion on function arguments.
Applying a type conversion at the 64-bit binary level means that each target function argument itself may have been a \textit{char}, \textit{short}, or \textit{int}, thereby using only the low 8, 16, or 32 bits of the argument register.
The correct corresponding inner function argument could be produced by either a sign extension or zero extension on the low 8, 16, or 32 bits of the argument register.
This adapter family also allows for converting target function arguments to boolean values by comparing those arguments to zero.
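The following sketch enumerates the candidate conversions explored for a single 64-bit argument register; the helper is illustrative and not part of our tool's interface.
\begin{lstlisting}[style=nonumbers]
def conversion_candidates(reg):
    # reg: 64-bit register value holding one target function argument.
    # Candidates: the unchanged value, zero- or sign-extension of the
    # low 8/16/32 bits, and conversion to a boolean.
    mask64 = (1 << 64) - 1
    out = [reg & mask64]
    for bits in (8, 16, 32):
        low = reg & ((1 << bits) - 1)                  # zero extension
        sign_bit = 1 << (bits - 1)
        sext = ((low ^ sign_bit) - sign_bit) & mask64  # sign extension
        out.extend([low, sext])
    out.append(int(reg != 0))                          # boolean conversion
    return out
\end{lstlisting}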
\subsubsection{Arithmetic Adapter:}
This family allows inner function arguments to be arithmetic combinations of target function arguments.
To ensure that the space of adapters is finite, our implementation only allows for arithmetic expressions of a specified bounded depth.
Arithmetic adapters allow our tool to reproduce other kinds of synthesis.
In the description of the capabilities of the synthesis tool SKETCH, Solar-Lezama et al.~\cite{Solar-LezamaTBSS2006} present the synthesis of an efficient bit expression that creates a mask isolating the rightmost 0-bit of an integer.
We can synthesize the same bit expression by synthesizing an arithmetic adapter that adapts the identity function to a less-efficient implementation of the operation.
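Concretely, the naive operation and the efficient bit expression that the adapter search recovers can be written as follows (a sketch; the function names are ours):
\begin{lstlisting}[style=nonumbers]
def rightmost_zero_mask_naive(x):
    # Scan a 32-bit word for its lowest clear bit.
    for i in range(32):
        if not (x >> i) & 1:
            return 1 << i
    return 0                    # no zero bit in the low 32 bits

def rightmost_zero_mask_fast(x):
    # The efficient expression: ~x & (x + 1).
    return ~x & (x + 1) & 0xFFFFFFFF
\end{lstlisting}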
\subsubsection{Memory Substitution:}
This family of adapters allows a field of an inner function structure argument to be adapted to a field of a target function structure argument.
Each field is treated as an array with $n \geq 1$ entries, each entry being 1, 2, 4, or 8 bytes in size.
Corresponding array entries used by the target and inner functions need not be at the same address and may also have different sizes, in which case both sign-extension and zero-extension are valid options to explore for synthesizing the correct adapter as shown in Figure~\ref{fig:memsub}.
This makes our adapter synthesis more powerful because it can be used in combination with other rules that allow argument substitution.
This adapter family synthesizes correct adapters between RC4 implementations in the mbedTLS and OpenSSL libraries in Section~\ref{sec:RC4experiment}.
\begin{figure}[h]
\caption{Memory substitution adapter to make \textit{struct i} adaptably equivalent to \textit{struct t}}
\label{fig:memsub}
\includegraphics[width=\columnwidth]{figures/memsub_pdf6}
\end{figure}
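A sketch of the underlying re-encoding step, which views a field as an array of fixed-size little-endian entries and rewrites it at a different entry width, is shown below; the helper name is illustrative.
\begin{lstlisting}[style=nonumbers]
def remap_field(src, src_size, dst_size, sign_extend):
    # src: raw bytes of one struct field, viewed as an array of
    # src_size-byte little-endian entries; re-encode each entry
    # with dst_size bytes.  Widening uses zero or sign extension;
    # narrowing keeps the low bytes.
    out = bytearray()
    for off in range(0, len(src), src_size):
        v = int.from_bytes(src[off:off + src_size], "little",
                           signed=sign_extend)
        v &= (1 << (8 * dst_size)) - 1   # truncate to dst width
        out += v.to_bytes(dst_size, "little")
    return bytes(out)
\end{lstlisting}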
\subsubsection{Return Value Substitution:}
The argument substitution families described above can be applied on the return values as well.
An example of different return values having the same semantic meaning is the return value of the C library function \textit{isalpha} as shown in Listing \ref{lst:isalpha}.
\subsection{Example}
\begin{figure}
\lstinputlisting[caption={Rockbox iPod Nano implementation of the clamp function followed by a standard C++ implementation of the clamp function.}, label={lst:clamp_source}, style=nonumbers]{code_samples/clamp_example.c}
\end{figure}
To illustrate our approach, we walk through a representative run of our adapter synthesis algorithm using a target function that represents binary code from a Rockbox firmware image built for the iPod Nano 2g device, with the clamp function in the Boost library as the reference function.
Both the target code region~(represented as a function) and reference function are shown in Listing~\ref{lst:clamp_source}.
Although our adapter synthesis implementation can use any binary code region as the target region, in this example we define the target code region to be a C function, and let the inputs correspond to the function arguments and the output correspond to the function return value.
Here we will focus only on synthesis of the input adapter, although the general algorithm also produces an adapter that acts on the output of the reference function. A correct input adapter should set the first argument of \texttt{clamp\_reference} to the integer argument $x$ of \texttt{clamp\_target} and set the second and third arguments of \texttt{clamp\_reference} to 0 and 255 respectively. We write this adapter as $\mathcal{A}(x) = (x, 0, 255)$.
\textbf{Step 0:}
Adapter synthesis begins with an empty counterexample list and a default adapter that maps every argument to the constant 0 (i.e. $\mathcal{A}(x) = (0,0,0)$). During counterexample generation (\texttt{CheckAdapter} in Figure~\ref{fig:adapter_synthesis}), we use symbolic execution to search for an input $x$ such that the output of \texttt{clamp\_target(x)} is not equivalent to the output of \texttt{clamp\_reference(}$\mathcal{A}(x)$\texttt{)} = \texttt{clamp\_reference(0,0,0)}. From \texttt{CheckAdapter}, we learn that $x = 1$ is one such counterexample.
\textbf{Step 1:} Next, during adapter synthesis (\texttt{SynthesizeAdapter} in Figure~\ref{fig:adapter_synthesis}), we use symbolic execution to search for a new adapter $\mathcal{A}$ that will make \texttt{clamp\_target(x)} equivalent to \texttt{clamp\_reference(}$\mathcal{A}(x)$\texttt{)} for every input $x$ in the list [1]. From \texttt{SynthesizeAdapter}, we learn that $\mathcal{A}(x) = (0,x,x)$ is a suitable adapter, and this becomes our new candidate.
\textbf{Step 2:} At the beginning of this step, the candidate adapter is $\mathcal{A}(x) = (0,x,x)$ and the counterexample list is [1]. First, we use \texttt{CheckAdapter} to search for a counterexample to the current candidate adapter. We find that $x = 509$ is a counterexample.
\textbf{Step 3:} Next, we use \texttt{SynthesizeAdapter} to search for an adapter $\mathcal{A}$ for which the output of \texttt{clamp\_target(x)} will be equivalent to the output of \texttt{clamp\_reference(}$\mathcal{A}(x)$\texttt{)} for both $x = 1$ and $x = 509$. \texttt{SynthesizeAdapter} identifies $\mathcal{A}(x) = (x,x,255)$ as the new candidate.
\textbf{Step 4:}
At the beginning of this step, the candidate adapter is $\mathcal{A}(x) = (x,x,255)$ and the counterexample list is [1, 509]. As before, first we use \texttt{CheckAdapter} to search for a counterexample to the current candidate adapter. We find that $x = -2147483393$ is a counterexample.
\textbf{Step 5:} Next, we use \texttt{SynthesizeAdapter} to search for an adapter $\mathcal{A}$ for which the output of \texttt{clamp\_target(x)} will be equivalent to the output of \texttt{clamp\_reference(}$\mathcal{A}(x)$\texttt{)} for every $x \in$ [1, 509, -2147483393]. \texttt{SynthesizeAdapter} identifies $\mathcal{A}(x) = (x, 0, 255)$ as the new candidate.
\textbf{Step 6:}
In this step, counterexample generation fails to find a counterexample for the current adapter, indicating that the current adapter is correct for all explored paths. Therefore, adapter synthesis terminates with the final adapter $\mathcal{A}(x) = (x, 0, 255)$.
Alternatively, adapter synthesis could have terminated with the decision that the target function is not substitutable by the reference function with any allowed adapter.
In our evaluations, adapter synthesis may also terminate with a timeout, indicating the total runtime has exceeded a predefined threshold.
\subsection{Extensibility}
The adapter synthesis algorithm presented in this section is not tied to any particular source programming language or family of adapters.
In our implementation (Section~\ref{sec:implementation}) we target binary x86 and ARM code, and we use adapters that allow for common argument structure changes in C code.
In Section~\ref{sec:evaluation} we present two different interpretations of ``target code regions.''
The first is the function interpretation discussed earlier, where inputs correspond to function arguments and outputs correspond to function return values and side effects.
The second interpretation, enabled by our focus on binary code, defines code regions as ``code fragments.''
We define a code fragment to be a sequence of instructions consisting of at least one instruction.
The inputs to a code fragment are all the general-purpose registers available on the architecture of the code fragment, and the outputs are the registers written to within the code fragment.
We could also allow reference functions to be more general code regions, but we restricted ourselves to the function-level for now with the idea that a function is the most natural unit of code in which a reverse engineer can express a known behavior.
\section{Conclusion}\label{sec:conclusion}
We presented a new technique to search for semantically equivalent pieces of code that can be substituted for one another while adapting the differences in their interfaces.
This approach is implemented at the binary level, thereby enabling wider applications and consideration of exact run-time behavior.
We implemented adapter synthesis for x86-64 and ARM binary code.
We presented examples demonstrating applications towards security, deobfuscation, efficiency, and library replacement, and an evaluation using the C library.
Our adapter families can be combined to find sophisticated adapters as shown by adaptation of RC4 implementations.
While finding thousands of function pairs to be inequivalent, our tool reported many instances of semantic equivalence, including C library functions such as \textit{ffs} and \textit{ffsl}, which have assembly language implementations.
Our comparison of concrete enumeration-based adapter search with binary symbolic execution-based adapter search allows users of adapter synthesis to choose between the two approaches based on the size of the adapter search space.
We selected more than 61,000 target code fragments from a 3rd party firmware image for the iPod Nano 2g and 24 reference functions from the VLC media player.
Given an adapter search space of $1.353 \times 10^{127}$ adapters, we used binary symbolic execution-based adapter search to run more than a million adapter synthesis tasks.
Our tool finds dozens of instances of several reference functions in the firmware image, confirming that the process of understanding the semantics of binary code fragments can be automated using adapter synthesis.
Our results show that the CEGIS approach for adapter synthesis of binary code is feasible and sheds new light on potential applications such as searching for efficient clones, deobfuscation, program understanding, and security through diversity.
\section{Limitations and Future Work}\label{sec:discussion}
We represented our synthesized adapters by an assignment of concrete values to symbolic variables and manually checked them for correctness.
Adapters can be automatically translated into binary code that replaces the original function with the adapted function.
We plan to automate the generation of such adapter code in the future.
During every adapter search step, symbolic execution explores all feasible paths, including paths terminated in a previous adapter search step because they did not lead to a correct adapter.
Once an adapter is found, the next adapter search can be accelerated by saving the state of the adapter search and resuming symbolic execution from the last path that led to a correct adapter.
Our tool currently presumes that all behaviors of the target function must be matched, modulo failures such as null dereferences.
Using a tool like Daikon~\cite{ernst2007daikon} to infer the preconditions of a function from its uses could help our tool find adapters that are correct for correct uses of functions, such as {\em isupper} and {\em islower}.
Adapter synthesis requires us to determine whether {\em there exists} an adapter such that {\em for all} inputs to the target function, the output of the target function and the output of the adapted inner function are equal.
Thus the synthesis problem can be posed as a single query whose variables have this pattern of quantification (whereas CEGIS uses only quantifier-free queries).
We plan to explore using solvers for this $\exists\forall$ fragment of first-order bitvector logic, such as Yices~\cite{yices}.
Symbolic execution can only check equivalence over inputs of bounded
size, though improvements such as path
merging~\cite{KuznetsovKBC2012,AvgerinosRCB2014} can improve scaling.
Our approach could also integrate with any other equivalence checking
approach that produces counterexamples, including ones that
synthesize inductive invariants to cover unbounded
inputs~\cite{SrivastavaG2009}, though we are not aware of any
existing binary-level implementations that would be suitable.
\section{Evaluation}\label{sec:evaluation}
\subsection{Example: Security}
\lstinputlisting[caption={Two implementations for mapping ordered keys, negative or positive, to values using a C array},
label={lst:lookup}]{code_samples/lookup.c}
Consider a table implementing a function of a signed input.
For example, keys ranging from -127 to 127 can be mapped to a 255-element array.
Any key \textit{k} will then be mapped to the element at position \textit{k}+127 in this array.
We present two implementations of such lookup functions in Listing~\ref{lst:lookup}.
Both functions, \textit{l1} and \textit{l2}, assume keys ranging from -\textit{len}/2 to +\textit{len}/2 are mapped in the \textit{table} parameter.
However, \textit{l1} contains a bug caused by undefined behavior.
The return value of \textit{abs} for the most negative 32-bit integer~(-2147483648)~is not defined~\cite{gnu-abs}.
The eglibc-2.19 implementation of \textit{abs} returns this same negative 32-bit integer as the absolute value of the most negative 32-bit integer.
This causes the check on line 2 of Listing~\ref{lst:lookup} to not be satisfied, allowing execution to continue to line 4 and cause a segmentation fault.
Even worse, passing a carefully-chosen value for \textit{len} can allow a sensitive value to be read, making this bug exploitable by an attacker.
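The underlying behavior is easy to reproduce; the following snippet (a sketch, assuming a glibc-based Linux system) calls the C library's \textit{abs} through Python's ctypes:
\begin{lstlisting}[style=nonumbers]
import ctypes

libc = ctypes.CDLL("libc.so.6")    # library path is platform-specific
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

# INT_MIN has no positive 32-bit counterpart, so abs() returns the
# same negative value, defeating the range check in l1.
print(libc.abs(-2**31))            # prints -2147483648
\end{lstlisting}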
\textit{l2} in Listing~\ref{lst:lookup} performs a check, semantically-equivalent to the one on line 2, but does not contain this bug.
Our adapter synthesis implementations were able to synthesize correct argument substitution adapters in the \textit{l1} $\leftarrow$ \textit{l2} direction.
Adapter synthesis with concrete enumeration-based adapter search takes 5 seconds, and with FuzzBALL-based adapter search takes 41 seconds.
This adapter synthesis requires adaptation modulo the potential segmentation fault in \textit{l1}.
This example shows that adapter synthesis enables replacement of buggy functions with their bug-free siblings by adapting the interface of the bug-free function to that of the buggy one.
\subsection{Example: Deobfuscation}
A new family of banking malware named Shifu was reported in 2015~\cite{fireeye-shifu,ibm-shifu}.
Shifu was found to be targeting banking entities across the UK and Japan.
It continues to be updated~\cite{paloalto-shifu}.
Shifu is heavily obfuscated, and one computation used frequently in Shifu is the computation of CRC-32 checksums.
We did not have access to the real malware binary, but we were able to simulate its obfuscated checksum computation binary function using freely-available obfuscation tools.
Given a reference implementation of CRC-32 checksum computation, adapter synthesis can be used to check if an obfuscated implementation is adaptably equivalent to the reference implementation.
We used the implementation of CRC-32 checksum computation available on the adjunct website~\cite{hd-crc} of Hacker\rq s Delight~\cite{hd-book} (modified so that we could provide the length of the input string) as our reference function.
We obfuscated this function at the source code and Intermediate Representation~(IR) levels to create three obfuscated clones.
For the first clone, we used a tool named Tigress~\cite{tigress} to apply the following source-level obfuscating transformations:
\begin{enumerate}
\item Function virtualization: This transformation turns the reference function into an interpreter with its own bytecode language.
\item Just-in-time compilation/dynamic unpacking: This transformation translates each function \textit{f} into function \textit{f'} consisting of intermediate code so that, when \textit{f'} is executed, \textit{f} is dynamically compiled to machine code.
\item Reordering the function arguments randomly, inserting bogus arguments, adding bogus non-trivial functions and loops, and allowing superoperators~\cite{superoperators}.
\end{enumerate}
These transformations led to a 250\% increase in the number of source code lines.
For the second clone, we applied the following obfuscating transformations at the LLVM IR level using Obfuscator-LLVM~\cite{obfs-llvm}:
\begin{enumerate}
\item Instruction Substitution: This transformation replaces standard binary operators like addition, subtraction, and boolean operators with functionally equivalent, but more complicated, instruction sequences.
\item Bogus Control Flow: This transformation modifies the function call graph by injecting basic blocks for bogus control flow and modifying existing basic blocks by adding junk instructions chosen at random.
\item Control flow flattening: This transformation flattens the control flow graph of the clone in a way similar to L{\'a}szl{\'o} et al.~\cite{fla}.
\end{enumerate}
These transformations caused the number of instruction bytes to increase from 126 to 2944 bytes.
Finally, we compiled the obfuscated C code (obtained using Tigress) with the LLVM obfuscator tool to create a third clone.
We then ran our adapter synthesis tool with the reference function as the target function and all three clones as inner functions.
We used the CRC-32 checksum of a symbolic 1 byte input string as the return value of each clone.
Our adapter synthesis tool, using FuzzBALL-based adapter search, correctly concluded that all three clones were adaptably equivalent to the reference function in less than 3 minutes using argument substitution.
A correct adapter for one obfuscated clone is shown in Figure~\ref{fig:obfs}.
It maps the string and length arguments correctly, while ignoring the four bogus arguments (the mappings to bogus arguments are irrelevant).
While performing adapter synthesis on heavily-obfuscated binary code is challenging, adaptation in this example is complicated further by the increase in the number of inner function arguments, which causes the adapter search space to grow to 43.68 million adapters.
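For reference, the unobfuscated computation is tiny; the following is a minimal bitwise CRC-32 sketch in the common reflected form (polynomial 0xEDB88320), not the exact code from the adjunct website:
\begin{lstlisting}[style=nonumbers]
def crc32(data):
    # Bitwise CRC-32 over a byte string (reflected polynomial).
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

assert crc32(b"123456789") == 0xCBF43926   # standard check value
\end{lstlisting}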
\begin{figure}[]
\caption{Argument substitution adapter to make one obfuscated CRC-32 checksum function adaptably equivalent to the reference function}
\label{fig:obfs}
\includegraphics[width=\columnwidth]{figures/obfs}
\end{figure}
\subsection{Example: Efficiency}
\lstinputlisting[caption={Naive implementation of matrix multiplication},
label={lst:naive_mm}, style=nonumbers]{code_samples/naive_mm.c}
Adapter synthesis can also be applied to find more efficient versions of a function, even when those versions have a different interface.
Matrix multiplication is one of the most frequently-used operations in mathematics and computer science.
It can be used for other crucial matrix operations~(for example, Gaussian elimination and LU decomposition~\cite{algorithms}) and as a subroutine in other fast algorithms~(for example, for graph transitive closure).
Adapting faster binary implementations of matrix multiplication to the naive one\rq s interface improves the runtime of such other operations relying on matrix multiplication.
Hence, as our target function, we use the naive implementation of matrix multiplication shown in Listing~\ref{lst:naive_mm}.
As our inner function we use an implementation of Strassen\rq s algorithm~\cite{strassen} from Intel\rq s website~\cite{intel-strassen}, which takes the input matrices \textit{A} and \textit{B} as the 1st and 2nd arguments respectively and places the product matrix in its 3rd argument.
We modified their implementation so that it used Strassen's algorithm for all matrix sizes.
Our adapter synthesis tool, using FuzzBALL-based adapter search, finds the correct argument substitution adapter for making the implementation using Strassen's algorithm adaptably equivalent to the naive implementation in 17.7 minutes for matrices of size 4x4.
When using concrete enumeration-based adapter search, the adapter search finds the correct adapter in less than 4.5 minutes.
This example shows that adapter synthesis can be used for finding adaptably equivalent clones of a function that have different algorithmic space and time complexity.
Program analysis techniques for checking space and time usage of different implementations are being actively researched~\cite{darpa-stac}.
Symbolic execution can also be used for finding inputs that cause functions to exhibit worst-case computational complexity~\cite{wise}.
Adapter synthesis can be used as a pre-processing step before applying other techniques for detecting the algorithmic complexity of semantic clones.
\subsection{Example: RC4 encryption} \label{sec:RC4experiment}
To show that adapter synthesis can be applied to replace one library with another, we chose to adapt functions implementing RC4 functionality in mbedTLS and OpenSSL.
\noindent
\subsubsection{RC4 key structure initialization:} The RC4 algorithm uses a variable length input key to initialize a table with 256 entries within the key structure argument.
Both cryptography libraries in our example, mbedTLS and OpenSSL, have their own implementation of this initialization routine. Both function signatures are shown in Figure \ref{fig:rc4setup_adapter}.
\begin{figure}[]
\caption{Argument substitution adapter to make \textit{RC4\_set\_key} adaptably equivalent to \textit{mbedtls\_arc4\_setup}}
\label{fig:rc4setup_adapter}
\includegraphics[width=\columnwidth]{figures/rc4setup_adapter}
\end{figure}
Executing each of these initialization routines requires 256 rounds of mixing bytes from the key string into the key structure.
The two initialization routines require the key length argument at different positions, so making \textit{RC4\_set\_key} adaptably equivalent to \textit{mbedtls\_arc4\_setup} requires not only mapping the \textit{mbedtls\_arc4\_context} object to a \textit{RC4\_KEY} object, but also figuring out the correct mapping of the key length argument.
This combination of argument substitution and memory substitution adapter families creates a search space of 421.823 million adapters.
Our adapter synthesis tool correctly figures out both mappings and finds adaptable equivalence by creating equivalence between side-effects on the structure objects~(\textit{ctx} for \textit{mbedtls\_arc4\_setup}, \textit{RC4\_KEY} for \textit{RC4\_set\_key}).
To set up adapter synthesis between these two functions (we synthesized adapters in both directions), we used a symbolic key string of length 1, and hence the synthesis tool correctly sets the key length argument to 1.
Our tool, when using FuzzBALL-based adapter search, figures out the correct memory and argument substitution adapters in the mbedTLS $\leftarrow$ OpenSSL direction for initialization routines in 60 minutes and in the OpenSSL $\leftarrow$ mbedTLS direction in 49 minutes.
Thus, we combined the memory substitution adapter with the argument substitution adapter family to synthesize adaptable equivalence between the RC4 setup pair of functions.
\noindent
\subsubsection{RC4 encryption:} RC4 encryption functions in mbedTLS and OpenSSL take 4 arguments each, one of which is the RC4 key structure argument.
The RC4 key structures~(\textit{RC4\_KEY} in OpenSSL, \textit{mbedtls\_arc4\_context} in mbedTLS) contain three fields, as shown in Figure \ref{fig:rc4_struct_adapter}.
\begin{figure}[]
\caption{Memory substitution adapter to make \textit{RC4\_KEY} adaptably equivalent to \textit{mbedtls\_arc4\_context}}
\label{fig:rc4_struct_adapter}
\includegraphics[width=\columnwidth]{figures/rc4_struct_adapter}
\end{figure}
The first two 4-byte fields are used to index into the third field, which is an array with 256 entries.
Each entry is 4 bytes long in OpenSSL and 1 byte long in mbedTLS.
In order to present an example of a memory substitution adapter synthesized in isolation, we created wrappers for both RC4 encryption functions so that only the key structure argument was exposed and used a fixed value for the input string.
This allowed us to direct the adapter search to search for all possible mappings between the mbedTLS and OpenSSL RC4 key structure fields.
Allowing arbitrary numbers of 1, 2, 4, or 8 byte entries in each field of the 264~(2$\times$4+256$\times$1) byte mbedTLS key structure and 1032~(2$\times$4+256$\times$4) byte OpenSSL key structure made the search space of memory mappings very large, so we instead only explored adapters where the number of entries in each array was a power of 2.
While this reduction is useful in practice, it still gives us a search space of about 4.7 million adapters in both directions of adaptation.
The correct adapter that adapts the OpenSSL key structure to the mbedTLS key structure~(mbedTLS $\leftarrow$ OpenSSL) performs 2 mapping operations: (1) it maps the first 2 mbedTLS key structure fields directly to the first 2 OpenSSL key structure fields and (2) it zero extends each 1 byte entry in the 3rd field of the mbedTLS key structure to the corresponding 4 byte entry in the 3rd field of the OpenSSL key structure.
The correct adapter for adapting in the reverse direction~(OpenSSL $\leftarrow$ mbedTLS) changes the second mapping operation to map the least significant byte of each 4 byte entry in the 3rd field to the 1 byte entry in its corresponding position.
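Rendered as C, the mbedTLS $\leftarrow$ OpenSSL mapping looks roughly as follows (our sketch; the field names are assumptions based on the layouts described above):
\begin{lstlisting}[style=nonumbers]
#include <mbedtls/arc4.h>
#include <openssl/rc4.h>

/* Build the RC4_KEY the inner function expects from the
 * mbedtls_arc4_context the target interface receives. */
void map_arc4_context_to_rc4_key(RC4_KEY *out,
                                 const mbedtls_arc4_context *in)
{
    out->x = in->x;              /* first 4-byte index field  */
    out->y = in->y;              /* second 4-byte index field */
    for (int i = 0; i < 256; i++)
        out->data[i] = in->m[i]; /* zero-extend 1 byte to 4 bytes */
}
/* The OpenSSL <- mbedTLS direction instead keeps the least
 * significant byte of each 4-byte entry. */
\end{lstlisting}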
Our adapter synthesis tool, when using FuzzBALL-based adapter search, found the correct memory substitution adapter in the mbedTLS $\leftarrow$ OpenSSL direction in 2.4 hours and in the OpenSSL $\leftarrow$ mbedTLS direction in 2.6 hours.
When using concrete enumeration-based adapter search, we found the correct adapter in 1.8 hours in the mbedTLS $\leftarrow$ OpenSSL direction, of which only 6 minutes were spent on adapter search.
In the OpenSSL $\leftarrow$ mbedTLS direction, we found the correct adapter, with concrete enumeration-based adapter search, in 65 minutes, of which only 1.5 minutes were spent on adapter search.
The correct adapter for making \textit{RC4\_KEY} adaptably equivalent to \textit{mbedtls\_arc4\_context} is shown in Figure \ref{fig:rc4_struct_adapter}.
We verified the correctness of our adapted key structures by using self-tests present in mbedTLS and OpenSSL.
\noindent
\subsubsection{RC4 adapter verification using nmap:} We verified our RC4 memory substitution adapter using nmap, as shown in Figure \ref{fig:nmap_struct_adapter}.
\begin{figure}[]
\caption{nmap using RC4 encryption in mbedTLS instead of OpenSSL}
\label{fig:nmap_struct_adapter}
\includegraphics[width=\columnwidth]{figures/nmap_struct_adapter}
\end{figure}
We created adapted versions of the OpenSSL RC4 setup and encryption functions that internally use the mbedTLS key structure adapted to the OpenSSL key structure.
On a 64-bit virtual machine running Ubuntu 14.04, we compiled the adapted setup and encryption functions into a shared library and set up a local webserver on the virtual machine, which communicated over port 443 using the \textit{RC4+RSA} cipher.
We used the stock nmap binary to scan our localhost and injected our specially created shared library using the \textit{LD\_PRELOAD} environment variable.
The preloading caused the RC4 functions in our shared library to be executed instead of the ones inside OpenSSL.
The output of nmap, run with preloading our specially created shared library which used the OpenSSL $\leftarrow$ mbedTLS structure adapter, was the same as the output of nmap which used the system OpenSSL library.
\subsection{Evaluation with C library}
\subsubsection{Setup}
We evaluated our adapter synthesis tool on the system C library available on Ubuntu 14.04~(eglibc 2.19).
The C library uses a significant amount of inlined assembly, for instance in the \textit{ffs}, \textit{ffsl}, and \textit{ffsll} functions, which motivates automated adapter synthesis at the binary level.
We enumerated 1316 exported functions in the library in the order they
appear, which caused functions that are defined in the same source files
to appear close to each other.
Considering every function in this list as the target function, we chose the five functions that appear above it and the five that appear below it as 10 potential inner functions.
These steps gave us a list of 13130~(10$\times$1316 - 2$\times$ $\sum_{i=1}^5 i$) pairs of target and inner functions.
We used the argument substitution and type conversion adapter families combined with the return value adapter family because these families scale well and are widely applicable.
We ran our adapter synthesis with a 2 minute timeout on a machine running CentOS 6.8 with 64-bit Linux kernel version 2.6.32 using 64 GB RAM and an Intel Xeon E5-2680v3 processor.
To keep the running time of the entire adapter synthesis process within practical limits, we configured FuzzBALL to use a 5 second SMT solver timeout and to consider any queries that trigger this timeout as unsatisfiable.
We limited the maximum number of times any instruction can be executed to 4000 because this allowed execution of code which loaded library dependencies.
We limited regions to be symbolic up to a 936 byte offset limit (the size of the largest structure in the library interface) and any offset outside this range was considered to contain zero.
\subsubsection{Results}
Table~\ref{table:libc-evaluation} summarizes the results of searching for argument substitution and type conversion adapters with a return value adapter within the 13130 function pairs described above.
The similarity in the results for the type conversion adapter family and argument substitution adapter family arises from the similarity of these two families.
The most common causes of crashing during execution of the target function were missing system call support in FuzzBALL and incorrect null dereferences~(caused by the lack of proper initialization of pointer arguments).
The timeout column includes all function pairs for which we had a solver timeout~(5 seconds), hit the iteration limit~(4000), or reached a hard timeout~(2 minutes).
The search terminated without a timeout for 70\% of the function pairs, which reflects a complete exploration of the space of symbolic inputs to a function, or of adapters.
\input{adapter_results_1}
Since there is no ground truth, we manually corroborated the results of our evaluation by checking the C library documentation and source code.
Our adapter synthesis evaluation on the C library reported 30 interesting true positives shown in Table \ref{table:libc-adapters}.
(The remaining adapters found are correct but trivial.)
The first column shows the function pair between which an adapter was
found (with the number of arguments) and the second
column shows the adapters.
We use the following notation to describe adapters in a compact way.
$f_1$ $\leftrightarrow$ $f_2$ means $f_1$ $\leftarrow$ $f_2$ and $f_2$ $\leftarrow$ $f_1$.
\# followed by a number indicates inner argument substitution by a target argument, while other numbers indicate constants.
X-to-YS represents taking the low X bits and sign extending them to Y bits, X-to-YZ represents a similar operation using zero extension.
The last three rows of Table \ref{table:libc-adapters} show three arithmetic adapters found within the C library using partial automation.
We synthesized the correct adapters by writing wrappers containing preconditions around the \textit{isupper}, \textit{islower}, and \textit{kill} functions.
\subsection{Comparison with Concrete Enumeration-based Adapter Search}
The adapter search step in our CEGIS approach need not use binary symbolic execution.
We swapped out our FuzzBALL-based adapter search step with a concrete enumeration-based adapter search.
We ensured that our concrete enumeration generated adapters in a uniformly random order, so that every adapter had the same probability of being chosen.
We synthesized every adapter presented so far using both adapter search implementations and captured the total adapter search time.
We also counted the size of the adapter search space for every adaptation.
In some cases, the adapter search space was too large to be concretely enumerated.
For example, the adapter search space for the \textit{killpg} $\leftarrow$ \textit{kill} adapter consists of 98.1 million arithmetic adapters.
In such cases, we reduced the size of the search space by using smaller constant bounds.
Based on the size of adapter search space, we compared the total adapter search times for both adapter search strategies.
We present the results from this comparison in Figure~\ref{fig:conc_vs_se}.
\begin{figure}[]
\caption{Comparing concrete enumeration-based adapter search with binary symbolic execution-based adapter search}
\label{fig:conc_vs_se}
\includegraphics[width=\columnwidth]{figures/conc_vs_se}
\end{figure}
For concrete enumeration-based adapter search, Figure~\ref{fig:conc_vs_se} shows that the time required to find an adapter grows consistently and exponentially with the size of the adapter search space.
We cannot draw any such conclusion for binary symbolic execution-based adapter search, because symbolic execution is more sensitive than concrete enumeration to variations in the difficulty of the adapter search.
We further explored this comparison between concrete enumeration and binary symbolic execution-based adapter search using an example which would allow us to control adapter search difficulty.
\lstinputlisting[caption={naive and SKETCH-based implementations of \textit{popCnt}},
label={lst:popCnt}, style=nonumbers]{code_samples/popCnt.c}
The \textit{popCnt} function synthesized by SKETCH~\cite{Solar-LezamaTBSS2006} allows us to control the difficulty of adapter search.
The \textit{popCnt} function counts the number of bits set in a 16-bit value.
We present the target~(\textit{popCntNaive}) function and one variant of the inner function~(\textit{popCntSketch}) in Listing~\ref{lst:popCnt}.
The \textit{popCntSketch} function uses 8 constants~(1, 2, 4, 8, 0xf, 0x77, 0x3333, 0x5555), which can be passed as arguments instead of being hardcoded.
The argument substitution adapter family allows constant bounds to be specified to make the adapter search space finite.
By varying the constant bounds and the number of arguments (which were replaced by appropriate constants by the correct adapter) to \textit{popCntSketch}, we varied the size of the adapter search space while keeping the difficulty of adapter search uniform.
We created 24 variants of \textit{popCntSketch}.
Using each \textit{popCntSketch} variant as the inner function, and \textit{popCntNaive} as the target function, we synthesized adapters using concrete enumeration and binary symbolic execution-based adapter search.
Figure~\ref{fig:popCnt} shows the result of comparing total adapter search times across sizes of adapter search space when using concrete enumeration and binary symbolic execution-based adapter search.
\begin{figure}[]
\caption{Comparing concrete enumeration-based adapter search with binary symbolic execution-based adapter search using variants of \textit{popCnt}}
\label{fig:popCnt}
\includegraphics[width=\columnwidth]{figures/popcnt}
\end{figure}
Figure~\ref{fig:popCnt} shows that concrete enumeration-based adapter search is faster than binary symbolic execution-based adapter search up to search spaces of size $10^3$.
But this gain quickly drops off as the size of the search space approaches $10^7$.
We also created a variant of \textit{popCntSketch} that takes 6 arguments and uses them for its largest constants.
Synthesizing an adapter using this variant as the inner function creates a search space of size $3.517 \times 10^{18}$~(not including return value substitution adapters).
Using only binary symbolic execution-based adapter search, our tool synthesized the correct adapter in 168 seconds, with 154 seconds spent in adapter search.
Enumerating this search space concretely would take 11.15 million years, assuming the roughly $10^4$ adapters per second that concrete enumeration sustained on smaller search spaces.
\subsection{Reverse engineering using reference functions}
\label{sec:eval_general}
\subsubsection{Code fragment selection:}
Rockbox~\cite{rockbox} is a free third-party replacement firmware for digital music players.
We used a Rockbox image compiled for the iPod Nano (2g) device, based on the 32-bit ARM architecture, and disassembled it.
We dissected the firmware image into code fragments using the following rules:
(1) no code fragment could use memory, stack, floating-point, coprocessor, and supervisor call instructions,
(2) no code fragment could branch to an address outside itself,
(3) the first instruction of a code fragment could not be conditionally executed.
The first rule prevented code fragments from having any inputs from or outputs to memory, thereby allowing us to use the 13 general purpose registers on ARM as inputs.
The second rule prevented a branch to an invalid address.
ARM instructions can be executed based on a condition code specified in the instruction; if the condition is not satisfied, the instruction is turned into a {\tt noop}.
The third rule disallowed the possibility of having code fragments that begin with a {\tt noop} instruction, or whose behavior depended on a condition.
The outputs of every code fragment were the last (up to) three registers written to by the code fragment.
This caused each code fragment to be used as the target code region up to three times, once for each output register.
This procedure gave us a total of 183,653 code regions, with 61,680 of them consisting of between 3 and 20 ARM instructions.
To evaluate which code fragments can be synthesized just with our
adapter family without a contribution from a reference function, we
checked
which of these 61,680 code fragments can be adaptably substituted by a reference function that simply returns one of its arguments.
Intuitively, any code fragment that can be adaptably substituted by an
uninteresting reference function must be uninteresting itself, and so
need not be considered further.
We found 46,831 of the 61,680 code fragments could not be adaptably substituted by our simple reference function, and so we focused our further evaluation on these 46,831 code fragments that were between 3 and 20 ARM instructions long.\\
\subsubsection{Reference functions:}
Since our code fragments consisted of between 3 and 20 ARM instructions, we focused on using reference functions that can be expressed in a similar number of ARM instructions.
We used the source code of version 2.2.6 of the VLC media player~\cite{vlc} as the codebase for our reference functions.
We performed an automated search for functions that were up to 20 lines of source code.
This step gave us a total of 1647 functions.
In the spirit of the three rules for code fragment selection, we discarded functions that accessed memory, called other VLC-specific functions, or made system calls, leaving 24 reference functions.
Other than coming from a broadly similar problem domain (media
players), our selection of reference functions was independent of the
Rockbox codebase, so we would not expect that every reference function
would be found in Rockbox.
\subsubsection{Results}
We used the type conversion adapter family along with the return value
substitution family, disallowing return value substitution adapters
from setting the return value to be a type-converted argument of the
reference function (which would lead to uninteresting adapters).
But we allowed the reference function arguments to be replaced by
unrestricted 32-bit constants, and we assumed each code segment takes
up to 13 arguments.
The size of this adapter search space can be calculated using the following formula:\\
$8 \times \sum_{k=0}^{13} (2^{32})^{13-k} \times \comb{13}{k} \times \perm{13}{k} \times 8^k$ \\
The first multiplicative factor of 8 is due to the 8 possible return value substitution adapters.
The permutation and combination operators occur due to the choices of arguments for the target code fragment and reference functions~(we assumed both have 13 arguments).
The final $8^k$ represents the 8 possible type conversion operators that a type conversion adapter can apply.
The dominant factor for the size of the adapter search space comes from size of the set of possible constants.
Our adapter family used unrestricted 32-bit constants, leading to a constants set of size $2^{32}$.
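As a quick arithmetic check, the $k=0$ term of the formula dominates: $8 \times (2^{32})^{13} = 2^{419} \approx 1.354 \times 10^{126}$, matching the per-task search-space size quoted below.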
With this adapter family set up, we ran adapter synthesis trying to adaptably substitute each of the 46,831 code fragments by each reference function.
This gave us a total of 1,123,944~(46,831$\times$24) adapter synthesis tasks, with each adapter synthesis search space consisting of $1.353 \times 10^{126}$ adapters, too large for concrete enumeration.
We set a 5 minute hard time limit and a total memory limit of 2 GB per adapter synthesis task.
We split the adapter synthesis tasks with each reference function into 32 parallel jobs, creating a total of 768~(32$\times$24) parallel jobs.
We ran our jobs on a machine cluster running CentOS 6.8 with 64-bit Linux kernel version 2.6.32 and Intel Xeon E5-2680v3 processors.
We present our results in Table~\ref{table:general}.
The full set of results is presented in Section~\ref{sec:all_tables} of
the Appendix.
\input{revengg_general}
The first column shows the reference functions chosen from the VLC media player source code.
The \textit{\#(full)} column reports how many code fragments were found to be adaptably substitutable~(represented by the value for \textit{\#}), and how many of those exploited the full generality of the reference function~(represented by the value of \textit{full}).
We report the average number of steps, the average total running time (with the average solver time in parentheses), and the average total time spent in adapter search steps (with the average time of the last adapter search step in parentheses) in the columns \textit{steps}, \textit{total time (solver)}, and \textit{AS time (last)}, respectively.
In case of timeouts, only the average solver time is reported, since the total running time was always the 5 minute limit.\\
\subsubsection{Clustering using random tests:} For every reference function, each adapter synthesis task reaches one of three conclusions: it finds an adapter, it finds the fragment not to be adaptably substitutable, or it runs out of time.
Our adapter synthesis tool finds adaptable substitution using 18 out of the 24 reference functions.
For every reference function, we cluster its adapted versions using 100,000 random tests: all adapted versions of a reference function that report the same output for all inputs are placed in the same cluster.
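A minimal sketch of this clustering criterion is shown below (our illustration; \textit{run\_adapted} is a hypothetical driver that executes one adapted version on one test input vector):
\begin{lstlisting}[style=nonumbers]
#include <stdbool.h>
#include <stdint.h>

#define NUM_TESTS 100000
#define NUM_ARGS  13

/* Hypothetical driver: run adapted version v on one test input. */
extern int64_t run_adapted(int v, const int64_t args[NUM_ARGS]);

/* Two adapted versions belong to the same cluster iff their outputs
 * agree on every shared random test. */
bool same_cluster(int v1, int v2, int64_t tests[NUM_TESTS][NUM_ARGS])
{
    for (int t = 0; t < NUM_TESTS; t++)
        if (run_adapted(v1, tests[t]) != run_adapted(v2, tests[t]))
            return false;
    return true;
}
\end{lstlisting}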
The number of clusters is reported in the \textit{\#clusters} column.
For each reference function, we then manually examine these clusters to judge which adapted versions use the complete functionality of that reference function; these are the cases where describing the functionality of the target fragment in terms of the reference function is most likely to be concise and helpful.
This took us less than a minute of manual effort for each reference function because we understood the intended semantic functionality of every reference function~(we had its source code).
We found several instances of adapters using the full generality of the reference function for 11 reference functions.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/clamp_partial_order}
\caption{Subset of partial order relationship among adapted clamp instances}
\label{fig:clamp_partial_order}
\end{figure}
We found that a majority of our found adapters exploit specific functionality of the reference functions.
We explored this observation further by manually summarizing the semantics of the 683 adapters reported for {\tt clamp}.
We found that these 683 adapters have a partial order between them created by our adapter families of type conversion and return value substitution.
We present a subset of this partial order as a lattice-like diagram in Figure~\ref{fig:clamp_partial_order}.
To explain one unexpected example, the {\tt invert-low-bit} operation on a value {\tt v} can be implemented in terms of {\tt val < N} by setting {\tt val} to the low bit of {\tt v} zero-extended to 32 bits and {\tt N} to 1, and zero-extending the low 1 bit of the return value of {\tt val < N} to 32 bits.
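Rendered in C, this adapted instance looks as follows (our rendition of the construction just described, with {\tt val < N} written as a helper):
\begin{lstlisting}[style=nonumbers]
#include <stdint.h>

static uint32_t less_than(uint32_t val, uint32_t n) { return val < n; }

/* invert-low-bit via val < N: pass the low bit of v (zero-extended)
 * as val, fix N = 1, and keep the low bit of the return value. */
uint32_t invert_low_bit(uint32_t v)
{
    return less_than(v & 1u, 1u) & 1u;
}
\end{lstlisting}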
Some such functionalities owe more to the flexibility of the adapter
family than they do to the reference function.
These results suggest it would be worthwhile in the future to prune
them earlier by searching for instances of the simplest reference
functions first, and then excluding these from future searches.
Timeouts were the third possible conclusion of an adapter synthesis task, as reported in Table~\ref{table:general}.
We report a histogram of the total running time used to find adapters in Figure~\ref{fig:tilepos_hist} for the {\tt tile\_pos} reference function, which had the most timeouts.
Similar histograms for {\tt clamp} and {\tt median} reference function are reported in Figures~\ref{fig:clamp_hist},~\ref{fig:median_hist}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{figures/clamp_hist}
\caption{Running times for synthesized adapters using {\tt clamp} reference function}
\label{fig:clamp_hist}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{figures/median_hist}
\caption{Running times for synthesized adapters using {\tt median} reference function}
\label{fig:median_hist}
\end{figure}
The number of adapters found after 300 seconds decreases rapidly, consistent with the mean total running time~(subcolumn \textit{total time (solver)} under column \textit{adapter} in Table~\ref{table:general}) of 53.5 seconds for the {\tt tile\_pos} reference function.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/tilepos_hist}
\caption{Running times for synthesized adapters using {\tt tile\_pos} reference function}
\label{fig:tilepos_hist}
\end{figure}
Table~\ref{table:general} also shows that the total running time, when our tool concludes by finding an adapter, is significantly less than 300 seconds for all reference functions that reported adapters.
Though setting any finite timeout can cause some instances to be lost,
these results suggest that a 300-second timeout was appropriate for
this experiment, and that most timeouts would not have led to adapters.
\subsection{Comparing adapter families}
\label{sec:eval_compare}
We also explored the tradeoff between adapter search space size and effectiveness of the adapter family.
We ran all 46,831 target code fragments with {\tt clamp} as the reference function using two additional adapter families beyond the combination of type conversion family with return value substitution described above.
The first adapter family allowed only argument permutation and the second allowed argument permutation along with substitution with unrestricted 32-bit constants.
We ran the first adapter family setup (argument permutation + return value substitution) and the second (argument substitution + return value substitution) with a 2.5 minute hard time limit; the third setup (type conversion + return value substitution) was the same as in the previous subsection, with a 5 minute hard time limit.
We present our results in Table \ref{table:compare}.
\begin{table}
\centering
\caption{Comparing adapter families with 46,831 target code fragments and {\tt clamp} reference function}
\label{table:compare}
\begin{tabular}{|l|l|l|l|l|}
\hline
& size & \#-ad & \#-inequiv & \#-timeout \\ \hline
\begin{tabular}[c]{@{}l@{}}arg\_perm+\\ ret\_sub-2.5m\end{tabular} & 4.98E+10 & 9 & 46803 & 19 \\ \hline
\begin{tabular}[c]{@{}l@{}}arg\_sub+\\ ret\_sub-2.5m\end{tabular} & 1.3538427E+126 & 705 & 45782 & 344 \\ \hline
\begin{tabular}[c]{@{}l@{}}type\_conv+\\ ret\_sub-5m\end{tabular} & 1.3538430E+126 & 683 & 40553 & 5595 \\ \hline
\end{tabular}
\end{table}
As expected, the number of timeouts increases with an increase in the size of adapter search space.
Table \ref{table:compare} also shows that, for {\tt clamp}, a simpler adapter family is better at finding adapters than a more expressive family, because more searches can complete within the timeout.
But, this may not be true for all reference functions.
Table~\ref{table:compare} suggests that, when computationally feasible, adapter families should be tried in increasing order of expressiveness to have the fewest timeouts overall.
We plan to explore this tradeoff between expressiveness and effectiveness of adapter families in the future.
\section{Implementation}\label{sec:implementation}
We implement adapter synthesis for Linux/x86-64 binaries using
the symbolic execution tool FuzzBALL~\cite{fuzzball}, which is freely available~\cite{fuzzball-github}.
FuzzBALL allows us to explore execution paths through the target and adapted inner functions to (1) find counterexamples that invalidate previous candidate adapters and (2) find candidate adapters that create behavioral equivalence for the current set of tests.
As FuzzBALL symbolically executes a program, it constructs and maintains Vine IR expressions using the BitBlaze~\cite{bitblaze-url} Vine library~\cite{bitblaze-vine} and interfaces with the STP~\cite{stp} decision procedure to solve path conditions.
We can also replace the symbolic execution-based implementation of adapter search with a concrete implementation that searches the adapter space in a random order~(Section~\ref{sec:conc-search}).
\subsection{Test Harness}
To compare code for equivalence we use a test harness similar to the one used by Ramos et al.~\cite{Ramos:2011:PLE:2032305.2032360} to compare C functions for direct equivalence using symbolic execution.
The test harness exercises every execution path that passes first through the target code region, and then through the adapted reference function.
As FuzzBALL executes a path through the target code region, it maintains a path condition that reflects the branches that were taken.
As execution proceeds through the adapted reference function on an execution path, FuzzBALL will only take branches that do not contradict the path condition.
Thus, symbolic execution through the target and reference code consistently satisfies the same path condition over the input.
Listing \ref{lst:test_harness} provides a representative test harness.
If the target code region is a code fragment, its inputs $x_1$, ..., $x_n$ need to be written into the first {\tt n} general purpose registers available on the architecture.
Since the target code fragment may write into the stack pointer register ({\tt sp} on ARM), the value of the stack pointer also needs to be saved before executing the target code fragment and restored after the target code fragment has finished execution.
These operations are represented on lines 2, 3, and 5 of Listing \ref{lst:test_harness}.
On line 4 the test harness executes the target code region with inputs $x_1$, ..., $x_n$ and captures its output in {\tt r1}.
If the target code region is a code fragment, its output needs to be determined in a preprocessing phase.
One heuristic for choosing a code fragment\rq s output is to choose the last register that was written into by the code fragment.
On line 9, it calls the adapted reference function \texttt{REF} with inputs $y_1$, ..., $y_m$, which are derived from $x_1$, ..., $x_n$ using an adapter.
It adapts {\tt REF}\rq s return value using the return adapter {\tt R} and saves the adapted return value in {\tt r2}.
On line 10 the test harness branches on whether the results of the calls to the target and adapted reference code match.
\lstinputlisting[caption={Test harness}, label={lst:test_harness}]{code_samples/compare.c}
We use the same test harness for both counterexample search and adapter search.
During counterexample search, the inputs $x_1$, ..., $x_n$ are marked as symbolic and the adapter is concrete.
FuzzBALL first executes the target code region using the symbolic $x_1$, ..., $x_n$.
It then creates reference function arguments $y_1$, ..., $y_m$ using the concrete adapter and executes the reference function.
During adapter search, the adapter is marked as symbolic, and for each set of test inputs $x_1$, ..., $x_n$, FuzzBALL first executes the target code region concretely.
FuzzBALL then applies symbolic adapter formulas (described in Section~\ref{sec:adapter_formulae}) to the concrete test inputs and passes these symbolic formulas as the adapted reference function arguments $y_1$, ..., $y_m$, before finally executing the reference function.
During counterexample search we are interested in paths that execute the ``Mismatch'' side, and during adapter search we are interested in paths that execute the ``Match'' side of the branches on line 7 of Listing \ref{lst:test_harness}.
For simplicity, Listing \ref{lst:test_harness} shows only the return values $r_1$ and $r_2$ as being used for equivalence checking.
\subsection{Adapters as Symbolic Formulae}
\label{sec:adapter_formulae}
\lstinputlisting[caption={Argument Substitution adapter}, label={lst:simple_adapter_formula}, style=nonumbers]{code_samples/simple_adapter_formula.c}
\lstinputlisting[label={lst:typeconv_adapter_formula},caption={Vine IR formula for one type conversion operation and argument substitution}]{code_samples/typeconv_adapter_formula.c}
We represent adapters in FuzzBALL using Vine IR expressions involving symbolic variables.
As an example, an argument substitution adapter for the adapted inner function argument $y_i$ is represented by a Vine IR expression that indicates whether $y_i$ should be replaced by a constant value (and if so, what constant value) or an argument from the target function (and if so, which argument).
This symbolic expression uses two symbolic variables, \textit{y\_i\_type} and \textit{y\_i\_val}.
We show an example of an adapter from the argument substitution family represented as a symbolic formula in Vine IR in Listing \ref{lst:simple_adapter_formula}.
This listing assumes the target function takes three arguments, \textit{x1}, \textit{x2}, \textit{x3}.
This adapter substitutes the first adapted inner function argument with either a constant or with one of the three target function arguments.
A value of 1 in \textit{y\_1\_type} indicates the first adapted inner function argument is to be substituted by a constant value given by \textit{y\_1\_val}.
If \textit{y\_1\_type} is set to a value other than 1, the first adapted inner function argument is to be substituted by the target function argument at position present in \textit{y\_1\_val}.
We constrain the range of values \textit{y\_1\_val} can take by adding side conditions.
In the example shown in Listing \ref{lst:simple_adapter_formula}, when \textit{y\_1\_type} equals a value other than 1, \textit{y\_1\_val} can only equal 0, 1, or 2 since the target function takes 3 arguments.
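As a C rendition (ours, not the actual Vine IR), the formula of Listing~\ref{lst:simple_adapter_formula} behaves like the following function:
\begin{lstlisting}[style=nonumbers]
#include <stdint.h>

/* y_1_type == 1 selects the constant in y_1_val; any other value
 * selects the target argument at position y_1_val, which the side
 * conditions keep in {0, 1, 2} for a three-argument target. */
int64_t adapt_y1(int64_t y_1_type, int64_t y_1_val,
                 int64_t x1, int64_t x2, int64_t x3)
{
    if (y_1_type == 1)
        return y_1_val;
    return y_1_val == 0 ? x1 : (y_1_val == 1 ? x2 : x3);
}
\end{lstlisting}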
Symbolic formulae for argument substitution can be extended naturally to more complex adapter families by adding additional symbolic variables.
For example, consider the Vine IR formula shown in Listing~\ref{lst:typeconv_adapter_formula} which extends the formula in Listing~\ref{lst:simple_adapter_formula} to allow sign extension from the low 16 bits of a value.
Listing \ref{lst:typeconv_adapter_formula} begins in the same way as Listing \ref{lst:simple_adapter_formula} on line 1.
But, this time, if \textit{y\_1\_type} is 0, it performs argument substitution based on the value in \textit{y\_1\_val} on lines 3, 4.
If \textit{y\_1\_type} is any value other than 0, it performs sign extension of the low 16 bits in a value.
This value is chosen based on the position set in \textit{y\_1\_val} on lines 8, 9.
Notice that lines 8 and 9 are the same as lines 3 and 4, which means the value whose low 16 bits are sign-extended is chosen in exactly the same way as in argument substitution.
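In the same style, the extended formula behaves like the following C rendition (again ours, not the actual Vine IR):
\begin{lstlisting}[style=nonumbers]
#include <stdint.h>

/* y_1_type == 0 performs plain argument substitution; any other
 * value sign-extends the low 16 bits of the selected argument. */
int64_t adapt_y1_tc(int64_t y_1_type, int64_t y_1_val,
                    int64_t x1, int64_t x2, int64_t x3)
{
    int64_t v = y_1_val == 0 ? x1 : (y_1_val == 1 ? x2 : x3);
    return y_1_type == 0 ? v : (int64_t)(int16_t)v;
}
\end{lstlisting}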
During the adapter search step of our algorithm, Vine IR representations of adapted inner function arguments are placed into the argument registers of the adapted inner function before it begins execution; similarly, the return value adapter formula is applied to the return value register when the inner function returns to the test harness.
When doing adapter synthesis using memory substitution, Vine IR formulas allowing memory substitutions are written into memory pointed to by inner function arguments.
We use the registers \%rdi, \%rsi, \%rdx, \%rcx, \%r8, and \%r9 for function arguments and the register \%rax for function return value, as specified by the x86-64 ABI calling convention~\cite{x64-abi}.
We do not currently support synthesizing adapters between functions that use arguments passed on the stack, use variable number of arguments, or specify return values in registers other than \%rax.
\subsection{Equivalence checking of side-effects}
\label{sec:eqchk-syscall}
We record the side-effects of executing the target and adapted inner functions and compare them for equivalence on every execution path.
For equivalence checking of side-effects via system calls, we check the sequence of system calls and their arguments, made by both functions, for equality.
For equivalence checking of side-effects on concretely-addressed memory, we record write operations through both functions and compare the pairs of (address, value) for equivalence.
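A simplified sketch of the concrete-address check (our illustration; it assumes each log keeps only the final write per address, sorted by address):
\begin{lstlisting}[style=nonumbers]
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint64_t addr; uint64_t value; } write_rec;

/* The final memory side-effects agree iff both logs record the same
 * final value at the same set of addresses. */
bool side_effects_equal(const write_rec *a, size_t na,
                        const write_rec *b, size_t nb)
{
    if (na != nb)
        return false;  /* different sets of written addresses */
    for (size_t i = 0; i < na; i++)
        if (a[i].addr != b[i].addr || a[i].value != b[i].value)
            return false;
    return true;
}
\end{lstlisting}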
For equivalence checking of side-effects on memory addressed by symbolic values, we use a FuzzBALL feature called \textit{symbolic regions}.
Symbolic address expressions encountered during adapted inner function execution are checked for equivalence with those seen during target function execution and mapped to the same symbolic region, if equivalent.
\subsection{Concrete adapter search}
\label{sec:conc-search}
Given an adapter family, the space of possible adapters is finite.
Instead of using symbolic execution for adapter search, we can concretely check if an adapter produces equal side-effects and return values for all previously-found tests.
We implement concrete enumeration-based adapter search in C for all the adapter families described in Section~\ref{sec:adapter_synthesis}.
We use the Pin~\cite{pin} framework for checking side-effects on memory and system calls for equality.
To prevent adapter search time from depending on the order of enumeration, we randomize the sequence in which adapters are generated.
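A minimal sketch of this randomized enumeration (our illustration; \textit{adapter\_passes\_all\_tests} is a hypothetical oracle that replays the current test suite under one concrete adapter, and the shuffle is practical only when the index table fits in memory):
\begin{lstlisting}[style=nonumbers]
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical oracle: replay all previously found tests under the
 * adapter encoded by the given index. */
extern int adapter_passes_all_tests(uint64_t adapter_index);

/* Visit every adapter index exactly once, in uniformly random order
 * (Fisher-Yates shuffle), so search time does not depend on the
 * enumeration order. Returns -1 if no adapter fits the tests. */
int64_t find_adapter_concrete(uint64_t space_size)
{
    uint64_t *order = malloc(space_size * sizeof *order);
    for (uint64_t i = 0; i < space_size; i++)
        order[i] = i;
    for (uint64_t i = space_size - 1; i > 0; i--) {
        uint64_t j = (uint64_t)rand() % (i + 1);
        uint64_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }
    for (uint64_t i = 0; i < space_size; i++)
        if (adapter_passes_all_tests(order[i])) {
            int64_t found = (int64_t)order[i];
            free(order);
            return found;
        }
    free(order);
    return -1;
}
\end{lstlisting}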
\section{Introduction}
When required to write an implementation for matrix multiplication, the average programmer will come up with a naive implementation in a matter of minutes.
However, much research effort has been invested into creating more efficient matrix multiplication algorithms~\cite{strassen,coppersmith-winograd,gall}.
On attempting to replace the naive implementation with an implementation of a more efficient matrix multiplication algorithm, the programmer is likely to encounter interface differences, such as taking its arguments in a different order.
In this paper we present a technique that automates the process of finding functions that match the behavior specified by an existing function, while also discovering the necessary wrapper needed to handle interface differences between the original and discovered functions.
Other use cases for our technique include replacing insecure dependencies of off-the-shelf libraries with bug-free variants, deobfuscating binary-level functions by comparing their behavior to known implementations, locating multiple versions of a function to be run in parallel to provide security through diversity~\cite{BorckBDHHJSS2016}, and reverse engineering a fragment of code to its intended semantic functionality.
Our technique works by searching for a wrapper that can be added around one function's interface to make it equivalent to another function.
We consider wrappers that transform function arguments and return values.
Listing~\ref{lst:isalpha} shows implementations in two commonly-used libraries of the \textit{isalpha} predicate, which checks if a character is a letter.
Both implementations follow the description of the \textit{isalpha} function as specified in the ISO C standard, but the glibc implementation signifies the input is a letter by returning 1024, while the musl implementation returns 1 in that case.
\lstinputlisting[caption={musl and glibc implementations of the \textit{isalpha} predicate and a wrapper around the glibc implementation that is equivalent to the musl implementation},
label={lst:isalpha}, style=nonumbers]{code_samples/musl_glibc.c}
The glibc implementation can be adapted to make it equivalent to the musl implementation by replacing its return value, if non-zero, by 1 as shown by the \textit{adapted\_isalpha} function.
This illustrates the driving idea of our approach: to check whether two functions $f_1$ and $f_2$ are different interfaces to the same functionality, we can search for a wrapper that allows $f_1$ to be replaced by $f_2$.
We refer to the function being wrapped as the {\em inner} function and the function being emulated as the {\em target} function.
In the example above, the inner function is \textit{glibc\_isalpha} and the target function is \textit{musl\_isalpha}.
We refer to the wrapper code automatically synthesized by our tool as an {\em adapter}.
Our adapter synthesis tool searches in the space of all possible adapters allowed by a specified adapter family for an adapter that makes the behavior of the inner function $f_2$ equivalent to that of the target function $f_1$.
We represent that such an adapter exists by the notation $f_1 \leftarrow f_2$.
Note that this adaptability relationship may not be symmetric: $a \leftarrow b$ does not imply $b \leftarrow a$.
To efficiently search for an adapter, we use counterexample guided inductive synthesis~(CEGIS)~\cite{Solar-LezamaTBSS2006}.
An adapter family is represented as a formula for transforming values controlled by parameters: each setting of these parameters represents a possible adapter.
Each step of CEGIS allows us to conclude that either a counterexample exists for the previously hypothesized adapter, or that an adapter exists that will work for all previously found tests.
We use binary symbolic execution both to generate counterexamples and to find new candidate adapters; the symbolic execution engine internally uses a satisfiability modulo theories~(SMT) solver.
We contrast the performance of binary symbolic execution for adapter search with an alternate approach that uses a randomly-ordered enumeration of all possible adapters.
We always restrict our search to a specified finite family of adapters, and also bound the size of function inputs.
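The following skeleton summarizes this loop (a schematic sketch with hypothetical search primitives; the actual implementation is described in Section~\ref{sec:implementation}):
\begin{lstlisting}[style=nonumbers]
#include <stdbool.h>

typedef struct { long inputs[13]; } test;    /* bounded-size inputs */
typedef struct { long params[16]; } adapter; /* family parameters   */

/* Hypothetical search primitives, both realized with symbolic
 * execution backed by an SMT solver. */
extern bool find_counterexample(const adapter *a, test *out);
extern bool find_adapter(const test *tests, int n, adapter *out);

/* Returns true iff some adapter in the family makes the inner
 * function behave like the target on all explored inputs.
 * (The test bound of 1024 is arbitrary for this sketch.) */
bool cegis(adapter *result)
{
    test tests[1024];
    int n = 0;
    if (!find_adapter(tests, n, result))  /* any initial candidate */
        return false;
    while (find_counterexample(result, &tests[n])) {
        n++;                              /* candidate refuted     */
        if (!find_adapter(tests, n, result))
            return false;                 /* no adapter fits tests */
    }
    return true;                          /* candidate verified    */
}
\end{lstlisting}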
We show that adapter synthesis is useful for a variety of software engineering tasks.
One of our automatically synthesized adapters creates adaptable equivalence between a naive implementation of matrix multiplication and an implementation of Strassen\rq s matrix multiplication algorithm.
We also demonstrate the application of adapter synthesis to deobfuscation by deobfuscating a heavily obfuscated implementation of CRC-32 checksum computation.
We find adaptable equivalence modulo a security bug caused by undefined behavior.
Two other pairs of our automatically synthesized adapters create adaptable equivalence between RC4 setup and encryption functions in mbedTLS~(formerly PolarSSL) and OpenSSL.
Our notion of adapter correctness only considers code's behavior, so we can detect substitutability between functions that have no syntactic similarity.
We explore the trade-off between using concrete enumeration and binary symbolic execution for adapter search.
Guided by this experiment, we show that binary symbolic execution-based adapter synthesis can be used for reverse engineering at scale.
We use the Rockbox project~\cite{rockbox} to create an ARM-based third-party firmware image for the iPod Nano 2g device and identify more than 61,000 target code fragments from this image.
We extract reference functions from the VLC media player~\cite{vlc}.
Using these target code fragments and reference functions, our evaluation completes more than 1.17 million synthesis tasks.
Each synthesis task navigates an adapter search space of more than $1.353 \times 10^{126}$ adapters; enumerating these concretely would take an infeasible amount of time~($10^{14}$ years).
Our adapter synthesis implementation finds several instances of the reference functions in the firmware image.
Using the most interesting reference functions from this evaluation, we then compare adapter families to explore different parameter settings for adapter synthesis.
To test adapter synthesis within the C library, we evaluate two of our adapter families on more than 13,000 pairs of functions from the C library and present synthesized adapters for some of them.
The rest of this paper is organized as follows.
Section~\ref{sec:adapter_synthesis} presents our algorithm for adapter synthesis and describes our adapter families.
Section~\ref{sec:implementation} describes our implementation, and
Section~\ref{sec:evaluation} presents examples of application of adapter synthesis, large-scale evaluations, and a comparison of two adapter search implementations.
Section~\ref{sec:discussion} discusses limitations and future work,
Section~\ref{sec:related-work} describes related work, and
Section~\ref{sec:conclusion} concludes.
\bibliographystyle{abbrv}
\section{Appendix}
\label{sec:appendix}
\subsection{Reverse engineering expanded tables}
\label{sec:all_tables}
For the results reported in Section~\ref{sec:eval_general}, we report detailed metrics for the three possible conclusions~(adapter found, not substitutable, timed out) in Tables~\ref{table:adapters_full},~\ref{table:inequiv_full}, and~\ref{table:timeouts_full}, respectively.
The \textit{AS-stops/CE-stops} column in Table~\ref{table:timeouts_full} reports the number of times a timeout resulted in an adapter search step or counter-example search step to be halted.
In the first column, after each reference function\rq s name, the {\tt \#N} within parentheses reports the number of arguments taken by the reference function.
\input{adapters_full_table}
\input{inequiv_full_table}
\input{timeouts_full_table}
\section{Overview} \label{sec:overview}
This section provides an overview of our adapter synthesis technique.
We begin with a motivating example. Consider the two implementations of the \textit{isPalindrome} predicate given in Listing \ref{lst:isPalindrome}.
The first implementation, \textit{isP1}, takes as input a pointer \textit{s} that points to the beginning of a string, and then computes another pointer \textit{p} that points to the end of that string using the C function \textit{strlen}.
\textit{isP1} then checks that the characters pointed to
by \textit{s} and \textit{p} are equal as the two pointers are moved closer to each other.
The second implementation of the \textit{isPalindrome} predicate, \textit{isP2}, takes as input a pointer to the beginning of a string and that string's length.
It then checks that the relevant corresponding characters are equal by indexing into the input string.
\lstinputlisting[caption={Two implementations of the
\textit{isPalindrome} predicate}, label={lst:isPalindrome}, style=nonumbers]{code_samples/simple-len.c}
Although \textit{isP1} and \textit{isP2} take a different number of arguments, they implement the same core functionality, and so we would like to consider them semantically-equivalent.
We justify their equivalence by the existence of an adapter that allows us to replace calls to \textit{isP1} with equivalent calls to \textit{isP2}.
An implementation of this adapter is given by the function \textit{adapted\_isP2}.
\textit{adapted\_isP2} has the same interface as \textit{isP1} and is implemented using a single call to \textit{isP2} with modified arguments.
Notice that it is not possible to adapt the interface of \textit{isP1} to \textit{isP2} because \textit{isP2} checks for the existence of a palindrome \textit{len} bytes long which need not always be equal to the length of the string beginning at \textit{start}.
This leads to the idea that the problem of comparing two functions for semantic equivalence is the same as the problem of finding an adapter between those two functions that allows one to be replaced by the other.
This is the driving idea behind our approach to detecting semantically-equivalent code.
To search for adapters between functions, we use a counterexample-guided search technique.
Our algorithm, along with more examples of semantically-equivalent function pairs, is presented in Section~\ref{sec:adapter_synthesis}.
But before moving on to the algorithm, we discuss some additional challenges of comparing the output of arbitrary functions for equivalence.
One aspect of output equivalence is return value equivalence.
In the example above this was straightforward because both \textit{isP1} and \textit{isP2} only return either 0 or 1.
But it may be the case that two functions with semantically-equivalent behaviors return values that have the same meaning, without having the same value.
Consider the semantically similar functions $f_1$ and $f_2$ shown in Listing \ref{lst:strcmp}.
While $f_1$ returns one if its argument strings are equal and zero otherwise, $f_2$ returns zero if its argument strings are equal and a non-zero value otherwise.
$f_1$ can be adapted to $f_2$ only if its return value is changed to accommodate this difference in representation of string equality.
\lstinputlisting[caption={Two C++ implementations for comparing strings}, label={lst:strcmp}, style=nonumbers]{code_samples/return-cmp-adapter.c}
This idea of equivalent meaning (without equality) between return values motivates our use of adapters that apply to the inner function's return value as well as its input.
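For concreteness, such a return value adapter has roughly the following shape (our sketch; the argument types are assumed, and the actual implementations appear in Listing~\ref{lst:strcmp}):
\begin{lstlisting}[style=nonumbers]
/* f2 returns 0 on equal strings; remap its return value so the
 * wrapper follows f1's convention of returning 1 on equal strings. */
extern int f2(const char *a, const char *b);  /* signature assumed */

int adapted_f2(const char *a, const char *b)
{
    return f2(a, b) == 0 ? 1 : 0;
}
\end{lstlisting}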
However, the return value alone does not capture a function's output behavior.
We also need to consider side-effects such as system calls and modifications to memory.
One way to handle the system call component of side-effects is to only consider functions equivalent if they make the same sequence of system calls with equivalent arguments.
Checking for equivalence of memory side-effects presents a greater challenge.
A shallow way to check for memory-based equivalence is to capture all writes made to memory by the target and inner functions and determine whether the same data is present at the same locations after both target and inner functions have finished execution.
But for a complete handling of memory side-effects, we also have to consider equivalence between writes to memory that write semantically equivalent values to the same location.
Further complications arise when we try to perform memory-based equivalence checking for functions that operate on pointer-rich data structures such as linked lists.
\section{Related Work}\label{sec:related-work}
\subsection{Detecting Equivalent Code}
The majority of previous work in this area has focused on detecting \emph{syntactically} equivalent code, or `clones,' which are, for instance, the result of copy-and-paste \cite{Kamiya:2002:CMT:636188.636191,Li:2004:CTF:1251254.1251274,Jiang:2007:DSA:1248820.1248843}.
Jiang et al.~\cite{Jiang:2009:AMF:1572272.1572283} propose an algorithm for automatically detecting functionally equivalent code fragments using random testing and allow for limited types of adapter functions over code inputs --- specifically permutations and combinations of multiple inputs into a single struct.
Ramos et al.~\cite{Ramos:2011:PLE:2032305.2032360} present a tool that checks for equivalence between arbitrary C functions using symbolic execution.
While our definition of functional equivalence is similar to that used by Jiang et al. and Ramos et al., our adapter families capture a larger set of allowed transformations during adapter synthesis than both.
Amidon et al.~\cite{program_fracture} describe a technique for fracturing a program into pieces which can be replaced by more optimized code from multiple applications.
They mention the need for automatic generation of adapters which enable replacement of pieces of code which are not immediately compatible.
While Amidon et al. describe a parameter reordering adapter, they do not mention how automation of synthesis of such adapters can be achieved.
David et al.~\cite{statistical_similarity} decompose binary code into smaller pieces, find semantic similarity between pieces, and use statistical reasoning to compose similarity between procedures.
Since this approach relies on pieces of binary code, it cannot examine binary code pieces that make function calls, nor check for semantic similarity across wrappers around function calls.
Goffi et al.~\cite{goffi} synthesize a sequence of functions that is equivalent to another function w.r.t.\ a set of execution scenarios.
Their implementation is similar to our concrete enumeration-based adapter search which produces equivalence w.r.t. a set of tests.
In the hardware domain, adapter synthesis has been applied to low-level combinatorial circuits by Gasc\'{o}n et al.~\cite{gascon}.
They apply equivalence checking to functional descriptions of a low-level combinatorial circuit and reference implementations while synthesizing a correct mapping of the input and output signals and setting of control signals.
They convert this mapping problem into an exists/forall problem which is solved using the Yices SMT solver~\cite{yices}.
\subsection{Component Retrieval}
Type-based component retrieval was an active area of research in the past.
Many papers in this area~\cite{rittri,runciman1989,runciman1991} focused on the problem of finding a function, whose polymorphic type is known to the user, within a library of software components.
Type-based hot swapping~\cite{duggan} and signature matching~\cite{zaremski} were also active areas of related research in the past.
These techniques relied on adapter-like operations such as currying or uncurrying functions, reordering tuples, and type conversion.
Reordering, insertion, deletion, and type conversion are only some of the many operations supported by our adapters.
These techniques can only be applied at the source level, whereas our adapter synthesis technique can be applied at both the source and binary levels.
\subsection{Component Adaptation}
Component adaptation was another related active area of research in the past.
This includes techniques for adapter specification~\cite{nimble} and for component adaptation using formal specifications of components~\cite{spartacus,penix,penix1995,yellin,bracciali}.
Component adaptation has also been performed at the Java bytecode level~\cite{keller}, as well as the C bitcode level~\cite{nita}.
Behavior sampling~\cite{podgurski} is a similar area of research for finding equivalence over a small set of input samples.
However, these techniques either relied on having a formal specification of the behavior of all components in the library to be searched, or provided techniques for translating a formally specified adapter~\cite{nimble}.
\subsection{Program Synthesis}
Program synthesis is an active area of research that has many applications including generating optimal instruction sequences \cite{Massalin:1987:SLS:36206.36194,Joshi:2002:DGS:512529.512566}, automating repetitive programming, filling in low-level program details after programmer intent has been expressed \cite{Solar-LezamaTBSS2006}, and even binary diversification \cite{Jacob2008}.
Programs can be synthesized from formal specifications \cite{Manna:1980:DAP:357084.357090}, simpler (likely less efficient) programs that have the desired behavior \cite{Massalin:1987:SLS:36206.36194,Solar-LezamaTBSS2006,Joshi:2002:DGS:512529.512566}, or input/output oracles \cite{Jha:2010:OCP:1806799.1806833}.
We take the second approach to specification, treating existing functions as specifications when synthesizing adapter functions.
\newcommand{\MLHarness}{MLHarness}
\section{Introduction}
\input{src/1-introduction}
\section{Background}
\input{src/2-background}
\section{\MLHarness{} Implementation}
\input{src/3-harness}
\section{Experimental Results}
\input{src/4-result}
\section{Conclusion}
\input{src/5-conclusion}
\section*{Acknowledgements}
This work is supported in part by IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM AI Horizons Network.
\printcredits
\bibliographystyle{cas-model2-names}
\subsection{ML/DL Benchmark Challenges}
ML and DL innovations, such as applications, datasets, frameworks, models, and software and hardware systems, are being developed at a rapid pace. However, the current practice for sharing ML/DL innovations is to build ad-hoc scripts and write manuals describing the workflow. This makes it hard to reproduce the reported metrics and to port the innovations to different environments and solutions. Therefore, having a standard benchmarking platform with an exchange specification and well-defined metrics to fairly compare and benchmark the innovations is a crucial step toward the success of the ML/DL community.
Previous work includes (1) ML/DL model zoos curated by framework developers \cite{GluonCV,ModelHub,ModelZoo,ONNXModelZoo,TFHub,TorchVision}, but they only aim at sharing ML/DL models as a library; (2) package managers for a specific software environment such as Spack \cite{Spack}, but they only target maintaining packages across different software and hardware stacks; (3) benchmarking platforms such as MLCommons \cite{mattson2019mlperf,reddi2019mlperf} and MLModelScope \cite{DBLP:journals/corr/abs-2002-08295}, but the former only covers a few specific models and the latter only covers models for computer vision tasks; (4) collections of reproducible MLOps components and architectures \cite{CloudOps,DBLP:journals/corr/abs-2011-01149,MLOps}, but their main focus is on deployment and automation; (5) plug-and-play shareable containers such as MLCube \cite{MLCube}, but its generality makes it hard to identify and locate the components responsible for abnormal behaviors in ML/DL models; (6) simulators of ML/DL inference servers such as iBench \cite{iBench}, but its focus on capturing data-transfer capabilities between clients and servers provides no insight into profiling models. As the above systems either only support a specific software and hardware stack, use ad-hoc approaches to handle specific ML/DL tasks, or lack a consistent benchmarking method, it is hard to use any of them individually to obtain a well-rounded experience when developing ML/DL innovations.
To address these ML/DL benchmark challenges, we propose MLHarness{}, a new scalable benchmarking system that builds on two open-source projects: MLModelScope \cite{DBLP:journals/corr/abs-2002-08295}, for its exchange specification for software and hardware stacks, and MLCommons Inference \cite{reddi2019mlperf}, for its community-adopted benchmarking scenarios and metrics. With MLHarness{}, we are able to benchmark and compare the quality and performance of models on common ground, through a set of well-defined metrics and an exchange specification.
\subsection{Overview of MLCommons}
MLCommons \cite{mattson2019mlperf,reddi2019mlperf}, previously known as MLPerf, is a platform that aims to answer the needs of the nascent machine learning industry. MLCommons Training \cite{mattson2019mlperf} measures how fast systems can train models to a target quality metric, while MLCommons Inference \cite{reddi2019mlperf} measures how fast systems can process inputs and produce results using a trained model. Both benchmark suites provide benchmarking results for computing services at different scales, ranging from tiny mobile devices to high performance computing data centers. As the main focus of this paper is benchmarking ML/DL inference, we only consider MLCommons Inference in the rest of this paper.
\subsubsection{Characteristics of MLCommons Inference}
MLCommons Inference is a standard ML/DL inference benchmark suite with a set of properly defined metrics and benchmarking methodologies to fairly measure the inference performance of ML/DL hardware, software, and services. MLCommons Inference focuses on the following perspectives when designing its benchmarking metrics:
\begin{itemize}
\item \textbf{Selection of Representative Models.} MLCommons Inference selects representative models that are mature, open source, and have earned community support. This ensures accessibility and reproducible measurements, which helps MLCommons Inference serve as a standardized benchmarking suite.
\item \textbf{Scenarios.} MLCommons Inference consists of four evaluation scenarios: single-stream, multistream, server, and offline. These four scenarios simulate the realistic behavior of inference systems in many critical applications.
\item \textbf{Metrics.} Apart from commonly used model metrics such as accuracy, MLCommons Inference also includes a set of system-related metrics, such as percentile latency and throughput. These make MLCommons Inference appealing for the demands of different use cases, such as the 99th-percentile latency a data center must meet when responding to user queries (a small computation sketch follows this list).
\end{itemize}
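To make the percentile-based metrics concrete, the following minimal sketch (our own illustration, not code from MLCommons Inference) computes throughput and a tail latency from a list of hypothetical per-query latencies:
\begin{minted}{python}
# Illustrative only: throughput and 99th-percentile latency from
# hypothetical per-query latency measurements (in seconds).
import numpy as np

latencies = np.array([0.008, 0.009, 0.012, 0.010, 0.044])  # made-up samples
throughput = len(latencies) / latencies.sum()              # samples per second
p99 = np.percentile(latencies, 99)                         # tail latency

print(f"{throughput:.1f} sample/s, 99th percentile: {1000 * p99:.1f} ms")
\end{minted}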
\subsubsection{Workflows of MLCommons Inference}
Figure \ref{fig:mlcommonsworkflows} shows the critical components defined in MLCommons Inference, where the numbers and arrows denote the sequence and the directions of the data flows, respectively. The components are described below; a sketch of how they fit together follows the list:
\begin{itemize}
\item \textbf{Load Generator (LoadGen).} The LoadGen produces query traffics as defined by the four scenarios above, collects logging information, and summarizes benchmarking results. It is a stand-alone module that stays the same across different models.
\item \textbf{System Under Test (SUT).} The SUT consists of the inference system under benchmarking, including ML/DL frameworks, ML/DL models, software libraries and the target hardware system. Once the SUT receives a query from the LoadGen, it completes an inference run and reports the result to the LoadGen.
\item \textbf{Data Set.} Before issuing queries to the SUT, the LoadGen needs to let the SUT fetch the data needed for the queries from the dataset and pre-process the data. This is not included in the latency measurement.
\item \textbf{Accuracy Script.} After all queries are issued and the results are received, the accuracy script will be invoked to validate the accuracy of the model from the logging information.
\end{itemize}
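The following simplified sketch shows how an SUT and a data set might be registered with the LoadGen Python bindings. The function names reflect our reading of the LoadGen API; exact names, arities, and argument types may differ across LoadGen versions, and the inference call is a stub:
\begin{minted}{python}
# Hedged sketch of wiring an SUT to the MLPerf LoadGen Python bindings.
# API details may vary across LoadGen versions; bodies are stubs.
import mlperf_loadgen as lg

def load_samples(indices): pass        # fetch and pre-process data
def unload_samples(indices): pass      # free pre-loaded data
def run_inference(index): return None  # stand-in for the actual SUT

def issue_queries(query_samples):
    responses = []
    for qs in query_samples:
        run_inference(qs.index)
        responses.append(lg.QuerySampleResponse(qs.id, 0, 0))
    lg.QuerySamplesComplete(responses)  # report results to the LoadGen

def flush_queries(): pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 1024, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)
\end{minted}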
\subsubsection{Limitations of MLCommons Inference}
As we can observe from the characteristics and the workflows of MLCommons Inference above, MLCommons Inference covers benchmarking under different scenarios with various metrics, which provides a community-acknowledged ML/DL benchmark standard. However, its focus on the seven representative models limits this advantage: MLCommons Inference only provides ad-hoc scripts for these representative models, and it is hard to extend them to the many other models beyond MLCommons Inference.
In fact, the only critical component in MLCommons Inference is the LoadGen, while the other components can be replaced by any inference system. In this paper, we present how to replace the components other than the LoadGen with MLModelScope \cite{DBLP:journals/corr/abs-2002-08295}, an inference platform with a clearly defined exchange specification and an across-stack profiling and analysis tool, and we extend MLModelScope so that it becomes a scalable benchmarking harness for MLCommons Inference. This greatly extends the applicability of MLCommons Inference to models well beyond it.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.25]{img/components.png}
\end{center}
\caption{Workflow of MLCommons Inference \cite{reddi2019mlperf}}
\label{fig:mlcommonsworkflows}
\end{figure}
\subsection{Overview of MLModelScope}
MLModelScope \cite{DBLP:journals/corr/abs-2002-08295} is a hardware and software agnostic distributed platform for benchmarking and profiling ML/DL models across datasets, frameworks and systems.
\subsubsection{Characteristics of MLModelScope}
MLModelScope consists of a specification and a runtime that enable repeatable and fair evaluation. The design aspects follow:
\begin{itemize}
\item \textbf{Specification.} MLModelScope utilizes the software and model manifests as proposed in DLSpec \cite{DBLP:journals/corr/abs-2002-11262}, which capture different aspects of an ML/DL task and ensure usability and reproducibility. The software manifest defines the software requirements, such as ML/DL frameworks to run an ML/DL task. The model manifest defines the logic to run the model for the ML/DL task, such as pre-processing and post-processing methods, and the required artifact sources. An example is shown in Listing \ref{listing:mlmodelscopemanifest}.
\item \textbf{Runtime.} The runtime of MLModelScope follows the manifests to set up the required environment for inference. Moreover, MLModelScope includes the across-stack profiling and analysis tool XSP \cite{DBLP:journals/corr/abs-1908-06869}, which introduces a leveled and iterative measurement approach to overcome the impact of profiling overhead. As shown in Figure \ref{fig:level}, MLModelScope captures profiling data at different levels, which enables users to correlate the information and analyze the performance data across levels.
\end{itemize}
\begin{listing}[t]
\begin{minted}
[
frame=lines,
framesep=2mm,
baselinestretch=1.0,
fontsize=\footnotesize,
linenos
]
{yaml}
name: Inception-v3 # model name
version: 1.0.0 # semantic version of the model
task: classification
framework: # framework information
name: TensorFlow
version: ^1.x # framework version constraint
model: # model sources
graph_path: https://.../inception_v3.pb
graph_checksum: 328f68...3a813e
steps: # pre-processing steps
decode:
element_type: int8
data_layout: NHWC
color_layout: RGB
crop:
method: center
percentage: 87.5
resize:
dimensions: [3, 299, 299]
method: bilinear
keep_aspect_ratio: true
mean: [127.5, 127.5, 127.5]
rescale: 127.5
...
\end{minted}
\caption{An excerpt of manifest from MLModelScope \cite{DBLP:journals/corr/abs-2002-08295}}
\label{listing:mlmodelscopemanifest}
\end{listing}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.20]{img/level.png}
\end{center}
\caption{Profiling levels in MLModelScope \cite{DBLP:journals/corr/abs-2002-08295}}
\label{fig:level}
\end{figure}
\subsubsection{Limitations of MLModelScope}
Although MLModelScope involves a clearly defined specification and is able to run several hundred models in different ML/DL frameworks, it currently only supports models for computer vision tasks. While MLModelScope discussed the possibility of using user-defined pre-processing and post-processing inline Python scripts as a universal handler for all kinds of models, it did not implement those interfaces and only introduced built-in image manipulations to support computer vision tasks. In this paper, we implement the user-defined pre-processing and post-processing interfaces and demonstrate their usage on models with different modalities and different pre-processing and post-processing requirements, such as question answering and medical 3D image segmentation.
\subsection{Pre-processing and Post-processing Interfaces}
As described in DLSpec \cite{DBLP:journals/corr/abs-2002-11262} and MLModelScope \cite{DBLP:journals/corr/abs-2002-08295}, to make the user-defined pre-processing and post-processing interfaces universal, inline Python scripts are chosen for their flexibility and productivity, as Python functions can download and run Bash scripts and even C++ code. On the other hand, MLModelScope is implemented in Go; it is therefore necessary to build a bridge between the MLModelScope runtime and the Python scripts embedded in the model manifest so that, within the MLModelScope runtime, we can invoke the Python runtime to execute the user-defined pre-processing and post-processing functions.
A naive solution is to save the input data and the functions as files and execute the pre-processing and post-processing functions in a separate process outside MLModelScope. However, this approach is impractical, since it introduces high serialization and process-initialization overhead, and it also makes MLModelScope incapable of supporting streaming data \cite{DBLP:journals/corr/abs-2002-11262}.
In order to avoid intermediate files, we instead use the Python/C APIs \cite{CAPI} to embed a Python interpreter into MLModelScope and execute the Python functions there, as suggested by DLSpec. To use these APIs, we need Go wrappers around them. Instead of building these wrappers from scratch, we use the open-source Go-Python3 bindings \cite{DataDog}. In this fashion, the Python functions can be executed within MLHarness{} directly, avoiding the problems of the naive solution.
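Conceptually, the embedded interpreter executes the manifest's script source so that the processing functions become attributes of the \texttt{\_\_main\_\_} module, and later looks them up by name. The following pure-Python sketch is our own analogy for what the corresponding Python/C API calls achieve:
\begin{minted}{python}
# Pure-Python analogy of the embedding: execute the manifest's script
# in __main__, then look up a processing function by name and call it.
import __main__

manifest_script = """
def preprocess(ctx, data):
    return [float(x) * 2.0 for x in data]  # trivial stand-in
"""

exec(compile(manifest_script, "<manifest>", "exec"), vars(__main__))

fn = getattr(__main__, "preprocess")  # lookup by function name
print(fn({}, [1, 2, 3]))              # -> [2.0, 4.0, 6.0]
\end{minted}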
\subsubsection{Implementation Details}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.083]{img/Interfaces.png}
\end{center}
\caption{Workflow of MLHarness{}}
\label{fig:interfaces}
\end{figure}
Figure \ref{fig:interfaces} shows the invocation sequence of the user-defined pre-processing and post-processing functions in MLHarness{}. The \texttt{before\_preprocess} and \texttt{before\_postprocess} functions are invoked only once, at the startup stage; the \texttt{after\_preprocess} and \texttt{after\_postprocess} functions are invoked only once, after all inferences are done. These four functions exist for loading datasets, writing logging information to files, and specifying runtime configurations when necessary. The \texttt{preprocess} and \texttt{postprocess} functions are invoked right before and after every model inference, respectively, to pre-process the inputs and post-process the outputs of the model.
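For concreteness, a minimal manifest script implementing all six interface functions might look as follows; this is a schematic skeleton of our own, whereas real manifests, such as Listing \ref{listing:bertmanifest}, contain model-specific logic:
\begin{minted}{python}
# Schematic skeleton of the six user-defined processing functions.
# Bodies are trivial placeholders; data is None except for
# preprocess and postprocess.
dataset = []

def before_preprocess(ctx, data):   # once, at startup
    global dataset
    dataset = list(range(10))       # stand-in for loading a dataset

def preprocess(ctx, data):          # right before every inference
    return [dataset[int(data)]]

def postprocess(ctx, data):         # right after every inference
    return [str(data)]

def before_postprocess(ctx, data):  # once, at startup
    pass

def after_preprocess(ctx, data):    # once, after all inferences
    pass

def after_postprocess(ctx, data):   # once, after all inferences
    pass
\end{minted}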
\begin{listing}[t]
\begin{minted}
[
frame=lines,
framesep=2mm,
baselinestretch=1.0,
fontsize=\footnotesize,
linenos
]
{go}
func Processing(tensor interface{}, functionName string) interface{} {
pyData := MoveDataToPythonInterpreter(tensor)
pyFunc := FindTheProcessingFunctionByItsName(functionName)
pyResult := ExecuteProcessingFunction(pyFunc, pyData)
result := GetResultFromPythonInterpreter(pyResult)
return result
}
\end{minted}
\caption{Pre-processing and post-processing implementation in Go}
\label{listing:prepostimplementation}
\end{listing}
To embed a Python interpreter, we need to initialize it through Python/C APIs at the beginning of MLHarness{}. Then, as Listing \ref{listing:prepostimplementation} shows, the function handling the embedded Python pre-processing and post-processing scripts consists of four parts, utilizing the Go-Python3 bindings:
\begin{itemize}
\item \texttt{MoveDataToPythonInterpreter.} Moving data from Go to Python is costly because the data being processed can be large, for example, a tensor representing an image. One solution is to serialize the data at one end, transfer the data as a string, and deserialize at the other end. However, this introduces high overhead due to the cost of encoding and decoding. To overcome this problem and make the data transfer efficient, we copy the data in-memory, i.e., we only send the shape of the tensor and the address of its underlying flattened array, and reconstruct the tensor by copying data from that address according to the shape (see the sketch after this list). Note that to guarantee the validity of the transfer, we need to make sure that the underlying flattened array represents the tensor contiguously, particularly when lazy operations, such as transposition, were applied to the tensor.
\item \texttt{FindTheProcessingFunctionByItsName.} The processing functions in the model manifest are registered in the \texttt{\_\_main\_\_} module of the Python interpreter during its initialization. To get the corresponding \texttt{PyObject} of these functions, we query the \texttt{\_\_main\_\_} module by the names of the functions, i.e., the six processing functions listed in Figure \ref{fig:interfaces}.
\item \texttt{ExecuteProcessingFunction.} The signatures of the processing functions in the model manifest have the form \texttt{process(ctx, data)}, where \texttt{ctx} is a dictionary capturing additional information in the manifest, and \texttt{data} is the tensor obtained from \texttt{MoveDataToPythonInterpreter}. Therefore, in order to invoke a processing function, we call the Go-Python3 binding with the \texttt{PyObject} of the processing function, the \texttt{ctx} dictionary, and the \texttt{data} to be processed. Note that \texttt{data} is only effective for \texttt{preprocess} and \texttt{postprocess}; it is just a \texttt{PyObject} of \texttt{None} for the other four processing functions.
\item \texttt{GetResultFromPythonInterpreter.} This is similar to the first part except that it moves data from Python to Go instead of the other way around. Note that we still copy the data in-memory to avoid unnecessary overhead.
\end{itemize}
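To illustrate the in-memory transfer on the Python side, a tensor can be reconstructed from a raw address and a shape without any string serialization. The following sketch is our own illustration using numpy and ctypes, with a contiguous numpy array standing in for the Go-side buffer:
\begin{minted}{python}
# Sketch of reconstructing a float32 tensor from (address, shape),
# mirroring the in-memory transfer described above.
import ctypes
import numpy as np

src = np.ascontiguousarray(                 # lazy ops must be materialized
    np.arange(6, dtype=np.float32).reshape(2, 3))
addr, shape = src.ctypes.data, src.shape    # what crosses the boundary

ptr = ctypes.cast(addr, ctypes.POINTER(ctypes.c_float))
tensor = np.ctypeslib.as_array(ptr, shape=shape).copy()  # deep copy

print(np.array_equal(tensor, src))          # True
\end{minted}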
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.083]{img/MLModelScope4MLCommons.png}
\end{center}
\caption{Structure of MLHarness{}}
\label{fig:struct}
\end{figure}
\subsection{Structure of MLHarness{}}
Figure \ref{fig:struct} shows the encapsulation of MLModelScope for MLCommons Inference. In order to utilize the MLCommons Inference defined scenarios and performance metrics, we keep the LoadGen and the accuracy script the same as they are in MLCommons Inference. On the other hand, we replace the built-in SUT and data set in MLCommons Inference with the MLModelScope runtime to run the models. In this way, MLModelScope acts as an easy-to-use black box that responds to the queries issued by the LoadGen in MLCommons Inference, and it also provides across-stack profiling information for further analysis, which is not available when merely using MLCommons Inference.
\subsubsection{Implementation Details}
MLModelScope is developed in Go, but the LoadGen in MLCommons Inference is developed in C++ and used in Python through Python bindings. In order to make the communication between MLModelScope and MLCommons Inference feasible, we build MLModelScope as a C shared library \cite{SharedLibrary}, use the ctypes module \cite{Ctypes} in Python to load the shared library, and call the functions in the shared library. Three notable implementation aspects are described below, followed by a sketch of how the shared library might be driven from Python:
\begin{listing}[t]
\begin{minted}
[
frame=lines,
framesep=2mm,
baselinestretch=1.0,
fontsize=\footnotesize,
linenos
]
{yaml}
name: MLPerf_BERT # model name
version: 1.0.0 # semantic version of the model
framework: # framework information
name: PyTorch
version: '>=1.5.0' # framework version constraint
inputs: # model inputs
- type: text # input modality
element_type: string
outputs: # model outputs
- type: text # output modality
element_type: string
model: # model sources
graph_path: https://.../bert.pt
graph_checksum: c3bb5a...aa1ccd
preprocess: |
from transformers import BertTokenizer
import numpy as np
...
class SquadExample(object):
...
class InputFeatures(object):
...
def read_squad_examples(...):
...
def convert_examples_to_features(...):
...
features = []
tokenizer = BertTokenizer(...)
examples = read_squad_examples(...)
convert_examples_to_features(features, examples, tokenizer, ...)
def preprocess(ctx, data):
cur = features[int(data)]
return cur.input_ids, cur.input_mask, cur.segment_ids
postprocess: |
import numpy as np
import json
def postprocess(ctx, data):
res = np.stack([data[0], data[1]], axis = -1).squeeze(0).tolist()
return [json.dumps(res)]
...
\end{minted}
\caption{An excerpt of model manifest for BERT}
\label{listing:bertmanifest}
\end{listing}
\begin{itemize}
\item \textbf{Function wrappers.} To simplify the process of building the C shared library and leaving MLModelScope as a black box, we create function wrappers for critical applications in MLModelScope and only export them in the shared library. This includes the \texttt{Initialize} and \texttt{Finalize} wrappers to initialize and finalize the profiling tools in MLModelScope. It also includes the \texttt{LoadQuerySamples}, \texttt{IssueQuery}, and \texttt{UnloadQuerySamples} wrappers to pre-load and pre-process the data from the data set, handle queries from the LoadGen, and free the memory occupied by the pre-loaded data, respectively.
\item \textbf{Data transmissions.} It is hard to exchange data between Go and Python directly, since there is no one-to-one correspondence between the data types of these two languages. To solve this problem, we utilize the built-in primitive C-compatible data types in ctypes \cite{Ctypes} for Python and in CGO \cite{CGO} for Go, since they define how to transform data when there is no clear correspondence between data types in C and the respective languages. Using this method, the data conversion can be done in-memory instead of through serialization.
\item \textbf{Blocking statements.} When we exchange data between Go and Python, the garbage collector at one end does not automatically know that it must keep the data alive until the data are actually copied or used at the other end, which might result in undefined behavior. To solve this problem, we manually create blocking statements that delay garbage collection until a deep copy of the data has been made at the other end. This can be done with the \texttt{KeepAlive} function \cite{KeepAlive} in Go and by managing reference counts \cite{RefCnt} in Python, which prevent garbage collection from being invoked until \texttt{KeepAlive} is executed or the reference count drops to zero, respectively.
\end{itemize}
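Putting the pieces together, the Python side can load the Go-built shared library via ctypes and drive the exported wrappers. The following is a hedged sketch: the library name and the argument types are illustrative assumptions, not the exact MLHarness{} interface:
\begin{minted}{python}
# Hedged sketch of driving the exported wrappers from Python.
# Library name, argument types, and call order are illustrative.
import ctypes

lib = ctypes.CDLL("./mlharness.so")              # hypothetical library name

lib.Initialize()                                 # set up tracers and profilers
indices = (ctypes.c_long * 2)(0, 1)
lib.LoadQuerySamples(indices, ctypes.c_long(2))  # pre-load and pre-process data
lib.IssueQuery(indices, ctypes.c_long(2))        # run inference for a query
lib.UnloadQuerySamples(indices, ctypes.c_long(2))# free the pre-loaded data
lib.Finalize()                                   # flush profiling information
\end{minted}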
\subsection{Example of MLHarness{}}
With the help of the user-defined pre-processing and post-processing interfaces, MLHarness{} is able to handle various model input and output modalities that are not supported in MLModelScope. It is also easy to use the model manifest to add models for which MLHarness{} reports the MLCommons Inference defined metrics, which is hard when merely using MLCommons Inference. Listing \ref{listing:bertmanifest} shows the model manifest, using the pre-processing and post-processing interfaces, for BERT \cite{DBLP:journals/corr/abs-1810-04805}, a language representation model, handling the question-answering modality that was not supported in MLModelScope. The pre-processing step uses the tokenizer from the transformers third-party Python library \cite{wolf-etal-2020-transformers} to parse the data and prepare input features. The post-processing step reshapes the outputs into the format defined by the accuracy script. The tedious implementation of the tokenizer is one of the reasons why MLModelScope cannot support the question-answering modality, since it is hard to create an equivalent built-in alternative inside MLModelScope in Go. In contrast, through the user-defined pre-processing and post-processing interfaces, MLHarness{} can utilize community-developed third-party Python libraries to overcome this obstacle.
\subsection{Experiment Setup}
\begin{table*}[h]
\centering
\caption{Systems used for experiments}
\scriptsize
\begin{tabular}{|c|c|c|c|}
\hline
System Annotations& Framework & Processor & Accelerator \\
\hline
9800-ORT-RTX{} & ONNX Runtime & 1x Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz & 1x GeForce RTX 3090 \\
\hline
9800-PT-RTX{} & PyTorch & 1x Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz & 1x GeForce RTX 3090 \\
\hline
9800-ORT{} & ONNX Runtime & 1x Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz & None \\
\hline
9800-PT{} & PyTorch & 1x Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz & None \\
\hline
7820-ORT-TITAN{} & ONNX Runtime & 1x Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz & 1x TITAN V \\
\hline
7820-PT-TITAN{} & PyTorch & 1x Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz & 1x TITAN V \\
\hline
7820-TF{} & TensorFlow & 1x Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz & None \\
\hline
AMD-ORT-A100{} & ONNX Runtime & 1x AMD EPYC 7702 64-Core Processor & 1x A100 \\
\hline
AMD-ORT-V100{} & ONNX Runtime & 1x AMD EPYC 7702 64-Core Processor & 1x V100 \\
\hline
9800-MX-RTX{} & MXNet & 1x Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz & 1x GeForce RTX 3090 \\
\hline
\end{tabular}
\label{tab:sys}
\end{table*}
\begin{table*}[t]
\centering
\caption{MLHarness{} and MLCommons reported results for all four scenarios on 9800-ORT-RTX{}}
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Benchmark Suite} & Offline & \multicolumn{2}{c|}{Single-Stream} & \multicolumn{2}{c|}{Server} & \multicolumn{2}{c|}{Multi-Stream}\\
\cline{2-8} & \multirow{2}{*}{(sample/s)} & \multirow{2}{*}{(sample/s)} & 90th percentile & \multirow{2}{*}{(sample/s)} & 99th percentile & \multirow{2}{*}{(sample/query)} & 99th percentile\\
& & & latency (ms) & & latency (ms) & & latency (ms) \\
\hline
MLHarness & 133 & 118 & 9.2 & 69 & 44 & 5 & 42 \\ \hline
MLCommons Inference & 315 & 308 & 3.2 & 121 & 14 & 12 & 44 \\
\hline
\end{tabular}
\label{tab:scenario}
\end{table*}
Table \ref{tab:sys} shows the systems used for the experiments. The system naming convention is the identifier of the CPU type, followed by the acronym of the ML/DL framework, and then the identifier of the GPU type if a GPU is used. There are three system categories in total. The first category is an Intel desktop-grade CPU system, comprising systems 9800-ORT-RTX{}, 9800-PT-RTX{}, 9800-ORT{}, 9800-PT{}, and 9800-MX-RTX{}. The second category is another Intel desktop-grade CPU system with a different processor, comprising systems 7820-ORT-TITAN{}, 7820-PT-TITAN{}, and 7820-TF{}. The last category is a server-based system using AMD CPUs, comprising systems AMD-ORT-A100{} and AMD-ORT-V100{}. We choose different combinations of frameworks, software systems, and hardware systems in order to demonstrate the flexibility and scalability of MLHarness{} as a benchmarking harness.
For both sets of experiments, we report the accuracy, the throughput in the offline scenario, and the throughput and 90th-percentile latency in the single-stream scenario. The accuracy is defined differently for each modality: top-1 accuracy for image classification, mAP scores for object detection, F1 scores for question answering, and mean DICE scores for medical 3D image segmentation. As defined in MLCommons Inference \cite{reddi2019mlperf}, the offline scenario represents applications where all data are immediately available and latency is unconstrained, such as photo categorization; by contrast, the single-stream scenario represents a query stream with a sample size of 1, reflecting applications requiring swift responses, such as real-time augmented reality. In order to facilitate the comparison between these two scenarios, we fix the batch size of the inferences to 1 in all experiments. This helps to demonstrate how the scenarios affect the throughput of models.
MLHarness{} is also capable of reporting results for the other two scenarios defined by MLCommons Inference \cite{reddi2019mlperf}, namely the server and multistream scenarios. The server scenario represents applications where query arrival is random and latency is important, such as online translation. The multistream scenario represents applications with a stream of queries, where each query consists of multiple inferences, such as multi-camera driver assistance. We demonstrate that MLHarness{} is able to report the results of these two scenarios by running \texttt{ResNet50} on 9800-ORT-RTX{}.
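For intuition, the random arrivals of the server scenario can be modeled by exponentially distributed inter-arrival times, i.e., a Poisson process. The sketch below is our own illustration of generating such a query schedule, with a hypothetical target rate, and is not LoadGen code:
\begin{minted}{python}
# Illustrative Poisson query-arrival schedule for a server-like load:
# exponential inter-arrival times at a hypothetical target rate.
import numpy as np

rng = np.random.default_rng(0)
qps = 100                                  # hypothetical queries per second
inter_arrivals = rng.exponential(1.0 / qps, size=1000)
arrival_times = np.cumsum(inter_arrivals)  # times at which queries are issued

print(f"simulated rate: {len(arrival_times) / arrival_times[-1]:.1f} qps")
\end{minted}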
Note that, because of limited access to data-center-scale systems, we are not able to develop and conduct experiments for the remaining two models provided by MLCommons Inference, namely \texttt{DLRM} for recommendation systems and \texttt{RNNT} for speech recognition. We believe, however, that the methodologies discussed here can be easily extended to these two models.
\begin{table*}[t]
\centering
\caption{MLHarness{} reported results for models in MLCommons Inference}
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Model} & \multirow{2}{*}{System} & \multirow{2}{*}{Accuracy} & \multirow{2}{*}{Offline (sample/s)} & \multicolumn{2}{c}{Single-Stream} \vline \\
\cline{5-6} & & & & (sample/s) & 90th percentile latency (ms) \\
\hline
\multirow{6}{*}{MLPerf ResNet50} & 9800-ORT-RTX & Top1: 76.452\% & 133 & 118 & 9.2 \\
\cline{2-6} & 9800-ORT & Top1: 76.456\% & 63 & 62 & 16 \\
\cline{2-6} & 7820-ORT-TITAN & Top1: 76.456\% & 118 & 116 & 9.2 \\
\cline{2-6} & 7820-TF & Top1: 76.456\% & 20 & 20 & 57 \\
\cline{2-6} & AMD-ORT-A100 & Top1: 76.456\% & 159 & 146 & 6.7 \\
\cline{2-6} & AMD-ORT-V100 & Top1: 76.456\% & 202 & 154 & 6.4 \\
\hline
\multirow{6}{*}{MLPerf MobileNet} & 9800-ORT-RTX & Top1: 71.676\% & 196 & 160 & 6.5 \\
\cline{2-6} & 9800-ORT & Top1: 71.676\% & 61 & 58 & 23 \\
\cline{2-6} & 7820-ORT-TITAN & Top1: 71.676\% & 188 & 159 & 6.6 \\
\cline{2-6} & 7820-TF & Top1: 71.676\% & 24 & 24 & 44 \\
\cline{2-6} & AMD-ORT-A100 & Top1: 71.666\% & 358 & 270 & 3.7 \\
\cline{2-6} & AMD-ORT-V100 & Top1: 71.676\% & 382 & 319 & 3.2 \\
\hline
\multirow{6}{*}{MLPerf SSD MobileNet 300x300} & 9800-ORT-RTX & mAP: 23.172\% & 35 & 32 & 37 \\
\cline{2-6} & 9800-ORT & mAP: 23.173\% & 28 & 28 & 37 \\
\cline{2-6} & 7820-ORT-TITAN & mAP: 23.173\% & 30 & 28 & 41 \\
\cline{2-6} & 7820-TF & mAP: 23.173\% & 13 & 13 & 78 \\
\cline{2-6} & AMD-ORT-A100 & mAP: 23.170\% & 20 & 20 & 52 \\
\cline{2-6} & AMD-ORT-V100 & mAP: 23.173\% & 18 & 18 & 57 \\
\hline
\multirow{6}{*}{MLPerf SSD ResNet34 1200x1200} & 9800-ORT-RTX & mAP: 19.961\% & 20 & 19 & 54 \\
\cline{2-6} & 9800-ORT & mAP: 19.955\% & 1.4 & 1.7 & 816 \\
\cline{2-6} & 7820-ORT-TITAN & mAP: 19.955\% & 16 & 15 & 66 \\
\cline{2-6} & 7820-TF & mAP: 20.215\% & 1.4 & 1.4 & 704 \\
\cline{2-6} & AMD-ORT-A100 & mAP: 19.957\% & 23 & 21 & 54 \\
\cline{2-6} & AMD-ORT-V100 & mAP: 19.955\% & 14 & 13 & 95 \\
\hline
\multirow{8}{*}{MLPerf BERT} & 9800-ORT-RTX & F1: 90.874\% & 41 & 38 & 27 \\
\cline{2-6} & 9800-PT-RTX & F1: 90.881\% & 21 & 18 & 67 \\
\cline{2-6} & 9800-ORT & F1: 90.874\% & 2.2 & 2.5 & 487 \\
\cline{2-6} & 9800-PT & F1: 90.874\% & 0.86 & 0.85 & 1305 \\
\cline{2-6} & 7820-ORT-TITAN & F1: 90.874\% & 30 & 29 & 35 \\
\cline{2-6} & 7820-PT-TITAN & F1: 90.874\% & 27 & 26 & 39 \\
\cline{2-6} & AMD-ORT-A100 & F1: 90.879\% & 92 & 78 & 15 \\
\cline{2-6} & AMD-ORT-V100 & F1: 90.874\% & 29 & 29 & 37 \\
\hline
\multirow{2}{*}{MLPerf 3D-UNet} & AMD-ORT-A100 & mean: 0.85300 & 0.043 & 0.045 & 22655 \\
\cline{2-6} & AMD-ORT-V100 & mean: 0.85300 & 0.045 & 0.045 & 22194 \\
\hline
\end{tabular}
\label{tab:inML}
\end{table*}
\subsection{Results of Models in MLCommons Inference}
In this set of experiments, we demonstrate the capability of MLHarness{} to report the MLCommons Inference \cite{reddi2019mlperf} defined metrics by benchmarking representative MLCommons Inference models on a variety of systems.
Table \ref{tab:scenario} shows the MLCommons Inference defined experimental results of \texttt{ResNet50} produced by MLHarness{} on system 9800-ORT-RTX{}. From the results, we observe that, using MLHarness{}, such a target system running \texttt{ResNet50} can classify 133 images per second; respond to a query of one image within 9.2 ms 90\% of the time if the queries are received contiguously; respond to a query of one image within 44 ms 99\% of the time if the queries are received following a Poisson distribution with an average of 69 queries per second; and respond to a query of five images within 42 ms 99\% of the time if the queries are received contiguously. Note that the number of queries per second in the server scenario and the number of samples per query in the multistream scenario are tunable parameters that allow the system to meet the latency requirements.
We also run the same set of experiments on 9800-ORT-RTX{} using the original MLCommons Inference flows, as shown in Table \ref{tab:scenario}. The results show that MLCommons Inference performs two to three times better than MLHarness{}. In order to investigate this discrepancy, we take the offline scenario as an example and break down the execution time into two parts: (1) the model-inference time, i.e., the interval between the model receiving pre-processed input tensors and returning output tensors, and (2) the post-processing time, which involves generating the MLCommons Inference defined format. As Figure \ref{fig:pie} shows, while both MLHarness{} and MLCommons Inference spend nearly the same amount of time on model inference, the much higher latency of post-processing in MLHarness{} makes it hard to achieve the performance reported by MLCommons Inference. The underlying reason for this high latency in MLHarness{} is the aggregated data-transfer time between different languages, as data need to be moved several times among ML/DL frameworks, post-processing interfaces, and wrappers; this is not the case for MLCommons Inference, since once inference is done, the data always reside in Python. One way to mitigate this latency is to further optimize MLHarness{} for MLCommons Inference by responding to the LoadGen directly from the post-processing function, instead of transferring data back to MLHarness{} and then reporting to the MLCommons Inference suite through wrappers between different languages.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{img/pie}
\end{center}
\caption{Breakdown of execution time into model-inference time and post-processing time for MLHarness{} and MLCommons Inference, running the offline scenario with ResNet50 and a single input on 9800-ORT-RTX{}.}
\label{fig:pie}
\end{figure}
Table \ref{tab:inML} shows the experimental results of \texttt{ResNet50} and \texttt{MobileNet} for image classification, \texttt{SSD MobileNet 300x300} and \texttt{SSD ResNet34 1200x1200} for object detection, \texttt{BERT} for question answering, and \texttt{3D-UNet} for medical image 3D segmentation. All of these models are provided by MLCommons Inference and can be found at its GitHub page \cite{MLCommonsInferenceGithub}.
An interesting observation from Table \ref{tab:inML} is that the throughput of \texttt{ResNet50} on system AMD-ORT-V100{}, at 202 samples per second, is higher than that on system AMD-ORT-A100{}, at 159 samples per second. This seems counter-intuitive, as the A100 GPU is two generations newer than the V100 GPU and is therefore expected to perform better. With the MLCommons Inference methodology alone, we are not able to find the reason for this ``seemingly abnormal'' behavior. This is where MLHarness{} shines with its extended MLModelScope capabilities. Leveraging the across-stack profiling and analysis capabilities from MLModelScope, we align the framework-level spans from the ONNX Runtime profiler with the library-level spans from the CUDA Profiling Tools Interface, and obtain a detailed view of this strange behavior by delving deeper into the results. Figure \ref{fig:xspanal} shows the performance of \texttt{ResNet50} with batch size one on systems AMD-ORT-V100{} and AMD-ORT-A100{} at both the layer and the kernel (sub-layer) granularity levels. At the layer granularity, we observe that the end-to-end inference time on system AMD-ORT-V100{} is indeed shorter than that on AMD-ORT-A100{}, and that the reduced runtime mainly comes from the shortened runtime of many Conv2+ReLu layers (in orange). For example, focusing on the second-to-last Conv2+ReLu layer, we see that its duration on system AMD-ORT-A100{} is almost twice that on system AMD-ORT-V100{}. Zooming into that particular layer at the kernel-level granularity, we quickly realize that the two systems have executed different GPU kernels. On system AMD-ORT-V100{}, there are two major kernels, \texttt{cudnn::winograd::generateWinogradTilesKernel} and \texttt{volta\_scudnn\_winograd\_128x128}. In contrast, on system AMD-ORT-A100{}, there is only one major kernel, \texttt{implicit\_convolve\_sgemm}. We suspect that this discrepancy in performance is mainly due to the kernel-selection algorithm of the cuDNN library (v8.1) being less optimized for A100 GPUs than for V100 GPUs. This further validates the importance of full-stack optimization for system performance.
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=0.5]{img/xsp}
\end{center}
\caption{Performance of ResNet50 with batch size one across systems AMD-ORT-V100 and AMD-ORT-A100 at both layer and kernel (sub-layer) granularity levels, respectively. The axis on the top is the duration to execute each layer in the model, while the axis at the bottom is the duration to execute kernels of the second to the last Conv2 + Relu layer.}
\label{fig:xspanal}
\end{figure*}
In summary, we show that MLHarness{} is capable of reporting the MLCommons Inference defined metrics by encapsulating MLModelScope \cite{DBLP:journals/corr/abs-2002-08295} as an easy-to-use black box inside MLCommons Inference, and that our harness system is able to benchmark models that are not supported in MLModelScope, including \texttt{BERT} for question answering and \texttt{3D-UNet} for medical 3D image segmentation, with the help of the new interfaces for user-defined pre-processing and post-processing functions. Moreover, as MLHarness{} is built on top of MLModelScope, we are able to utilize its across-stack profiling and analysis capabilities to align information across the ML/DL framework level and the accelerator-library level, and to pinpoint critical distinctions between models, frameworks, and systems.
\begin{table*}[ht]
\centering
\caption{MLHarness{} reported results for models beyond MLCommons Inference using PyTorch and ONNX Runtime as ML/DL frameworks}
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Model} & \multirow{2}{*}{System} & \multirow{2}{*}{Accuracy} & \multirow{2}{*}{Offline (sample/s)} & \multicolumn{2}{c}{Single-Stream} \vline \\
\cline{5-6} & & & & (sample/s) & 90th percentile latency (ms) \\
\hline
\multirow{6}{*}{TorchVision AlexNet} & 9800-ORT-RTX & Top1: 56.520\% & 218 & 171 & 6.1 \\
\cline{2-6} & 9800-PT-RTX & Top1: 56.516\% & 191 & 154 & 6.8 \\
\cline{2-6} & 9800-ORT & Top1: 56.522\% & 86 & 81 & 12 \\
\cline{2-6} & 9800-PT & Top1: 56.522\% & 12 & 12 & 90 \\
\cline{2-6} & 7820-ORT-TITAN & Top1: 56.522\% & 219 & 168 & 6.1 \\
\cline{2-6} & 7820-PT-TITAN & Top1: 56.522\% & 186 & 152 & 6.9 \\
\hline
\multirow{6}{*}{TorchVision ResNet18} & 9800-ORT-RTX & Top1: 69.758\% & 179 & 144 & 7.3 \\
\cline{2-6} & 9800-PT-RTX & Top1: 69.756\% & 122 & 113 & 9.6 \\
\cline{2-6} & 9800-ORT & Top1: 69.758\% & 128 & 118 & 8.8 \\
\cline{2-6} & 9800-PT & Top1: 69.758\% & 28 & 32 & 42 \\
\cline{2-6} & 7820-ORT-TITAN & Top1: 69.758\% & 175 & 145 & 7.2 \\
\cline{2-6} & 7820-PT-TITAN & Top1: 69.758\% & 132 & 119 & 9.2 \\
\hline
\multirow{6}{*}{TorchVision ResNet34} & 9800-ORT-RTX & Top1: 73.314\% & 142 & 124 & 8.7 \\
\cline{2-6} & 9800-PT-RTX & Top1: 73.306\% & 90 & 89 & 12 \\
\cline{2-6} & 9800-ORT & Top1: 73.314\% & 72 & 70 & 14 \\
\cline{2-6} & 9800-PT & Top1: 73.314\% & 20 & 19 & 65 \\
\cline{2-6} & 7820-ORT-TITAN & Top1: 73.314\% & 144 & 125 & 8.5 \\
\cline{2-6} & 7820-PT-TITAN & Top1: 73.314\% & 98 & 93 & 12 \\
\hline
\multirow{6}{*}{TorchVision ResNet50} & 9800-ORT-RTX & Top1: 76.130\% & 129 & 115 & 9.4 \\
\cline{2-6} & 9800-PT-RTX & Top1: 76.132\% & 76 & 76 & 15 \\
\cline{2-6} & 9800-ORT & Top1: 76.130\% & 63 & 61 & 16 \\
\cline{2-6} & 9800-PT & Top1: 76.130\% & 7.9 & 7.7 & 149 \\
\cline{2-6} & 7820-ORT-TITAN & Top1: 76.130\% & 128 & 112 & 9.6 \\
\cline{2-6} & 7820-PT-TITAN & Top1: 76.130\% & 79 & 78 & 15 \\
\hline
\multirow{6}{*}{TorchVision ResNet101} & 9800-ORT-RTX & Top1: 77.374\% & 100 & 89 & 12 \\
\cline{2-6} & 9800-PT-RTX & Top1: 77.376\% & 59 & 68 & 22 \\
\cline{2-6} & 9800-ORT & Top1: 77.374\% & 36 & 36 & 29 \\
\cline{2-6} & 9800-PT & Top1: 77.374\% & 4.7 & 4.8 & 239 \\
\cline{2-6} & 7820-ORT-TITAN & Top1: 77.374\% & 94 & 87 & 12 \\
\cline{2-6} & 7820-PT-TITAN & Top1: 77.374\% & 60 & 67 & 22 \\
\hline
\multirow{6}{*}{TorchVision ResNet152} & 9800-ORT-RTX & Top1: 78.310\% & 84 & 76 & 15 \\
\cline{2-6} & 9800-PT-RTX & Top1: 78.312\% & 46 & 64 & 26 \\
\cline{2-6} & 9800-ORT & Top1: 78.312\% & 26 & 26 & 41 \\
\cline{2-6} & 9800-PT & Top1: 78.312\% & 3.5 & 3.5 & 324 \\
\cline{2-6} & 7820-ORT-TITAN & Top1: 78.312\% & 74 & 72 & 15 \\
\cline{2-6} & 7820-PT-TITAN & Top1: 78.312\% & 52 & 60 & 26 \\
\hline
\end{tabular}
\label{tab:beyondML}
\end{table*}
\begin{table*}[t]
\centering
\caption{MLHarness{} reported results for models beyond MLCommons Inference using TensorFlow and MXNet as ML/DL frameworks}
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Model} & \multirow{2}{*}{System} & \multirow{2}{*}{Accuracy} & \multirow{2}{*}{Offline (sample/s)} & \multicolumn{2}{c}{Single-Stream} \vline \\
\cline{5-6} & & & & (sample/s) & 90th percentile latency (ms) \\
\hline
\multirow{2}{*}{VGG16} & 7820-TF & Top1: 70.962\% & 9.3 & 9.2 & 115 \\
\cline{2-6} & 9800-MX-RTX & Top1: 72.852\% & 100 & 88 & 11 \\
\hline
\multirow{2}{*}{VGG19} & 7820-TF & Top1: 71.056\% & 8.7 & 8.6 & 123 \\
\cline{2-6} & 9800-MX-RTX & Top1: 73.814\% & 91 & 82 & 12 \\
\hline
\end{tabular}
\label{tab:beyondMLContd}
\end{table*}
\subsection{Results of Models beyond MLCommons Inference}
Unlike the first set of experiments, which focuses on showcasing the success of MLHarness{} in orchestrating the MLCommons Inference \cite{reddi2019mlperf} way of benchmarking MLCommons Inference models, the second set of experiments illustrates how to make use of the exchange specification and the across-stack profiling and analysis tool in MLModelScope \cite{DBLP:journals/corr/abs-2002-08295} to facilitate the development and comparison of ML/DL innovations in the context of MLCommons Inference methodologies.
Table \ref{tab:beyondML} shows a sample of six models to demonstrate how easy it is to use MLHarness{} to scale the MLCommons Inference way of benchmarking to various models beyond MLCommons Inference, on a variety of system configurations. In this particular example, the results further show the relationships among the depth of a convolutional neural network, its accuracy, and its throughput. The six models are \texttt{AlexNet} along with five models from the \texttt{ResNet} family. All of these models are from TorchVision \cite{TorchVision}, where the implementation details and the reference accuracy can be found at its GitHub page \cite{TorchVisionGithub}. Again, the success of importing these models into MLHarness{} using the exchange specification is validated by the accuracy results, all of which reach at least 99\% of the reference accuracy stated by TorchVision. In addition, the pre-processing and post-processing functions in the exchange specification can be regarded as a reusable component, because these models share the same pre-processing and post-processing steps.
Figure \ref{fig:accuracy} compares the accuracy of the six convolutional neural networks on systems 9800-ORT-RTX{} to 7820-PT-TITAN{} as listed in Table \ref{tab:sys}. The models are placed in increasing order of depth from left to right, with \texttt{ResNet152} being the deepest. As expected, there is no large variance in accuracy across systems, and the deeper the convolutional neural network, the more accurate the model.
Figure \ref{fig:throughput} compares the throughput of the six convolutional neural networks on systems 9800-ORT-RTX{} to 7820-PT-TITAN{} as listed in Table \ref{tab:sys}. From Figure \ref{fig:throughput}, we observe the trend that the deeper the convolutional neural network, the lower its throughput. However, the two systems 9800-ORT{} and 9800-PT{}, the two configurations without GPUs, do not follow this trend when comparing \texttt{AlexNet} and \texttt{ResNet18}. As MLHarness{} is built on top of MLModelScope, we use the across-stack profiling and analysis tool XSP \cite{DBLP:journals/corr/abs-1908-06869} to identify the bottleneck. Table \ref{tab:XSP} shows the top three most time-consuming layers of \texttt{AlexNet} and \texttt{ResNet18} identified by XSP on system 9800-PT{}, which has no GPU support and uses PyTorch as the ML/DL framework. It clearly shows that the bottleneck of \texttt{AlexNet} comes from the matrix multiplications of its fully connected layers. Although there is also a fully connected layer in \texttt{ResNet18}, as recorded by XSP, its size is 512 by 1000, which is much smaller than the largest one in \texttt{AlexNet}, whose size is 4096 by 4096.
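A back-of-the-envelope count supports this explanation. The sketch below compares the multiply-accumulate work of the two fully connected layers mentioned above; the layer sizes are taken from the text, and the rest is our own illustration:
\begin{minted}{python}
# Rough per-sample MAC count of the largest fully connected layers:
# AlexNet's 4096x4096 layer vs. ResNet18's 512x1000 layer.
def fc_macs(n_in, n_out):
    return n_in * n_out  # one multiply-accumulate per weight

alexnet_fc = fc_macs(4096, 4096)   # 16,777,216 MACs
resnet18_fc = fc_macs(512, 1000)   # 512,000 MACs
print(alexnet_fc / resnet18_fc)    # ~32.8x more work in AlexNet's layer
\end{minted}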
Although in Table \ref{tab:beyondML} we use the same model implementations, converting PyTorch models to the ONNX format for use in ONNX Runtime, it is also valuable to compare the same model structure across different implementations and training processes. Table \ref{tab:beyondMLContd} shows experiments on models from the \texttt{VGG} family using TensorFlow and MXNet as ML/DL frameworks, where the TensorFlow models can be found in the TensorFlow Model Garden \cite{tensorflowmodelgarden2020} and the MXNet models in GluonCV \cite{GluonCV}. From Table \ref{tab:beyondMLContd}, we observe that the accuracy differs between implementations of the same model, which further illustrates the difficulty of model reproducibility. This also shows how flexible MLHarness{} is in running scalable benchmarking across different combinations of models and frameworks, by utilizing the extended exchange specification discussed in this work, and how scalable experimentation helps to identify common issues convincingly.
In summary, the exemplar experiments discussed in this section show not only that it is easy to add models to MLHarness{} by utilizing the extended exchange specification and to report the MLCommons Inference defined metrics for models beyond MLCommons Inference, but also that, with the help of MLModelScope, MLHarness{} can easily and scalably compare models and extract critical, detailed information, which is impossible when merely using MLCommons Inference.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.25]{img/accuracy}
\end{center}
\caption{Accuracy of Models in Different Systems}
\label{fig:accuracy}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.25]{img/throughput}
\end{center}
\caption{Offline Throughput of Models in Different Systems}
\label{fig:throughput}
\end{figure}
\begin{table}[t]
\centering
\caption{The top-3 most time-consuming layers\\ of AlexNet and ResNet18 on system 9800-PT{}}
\scriptsize
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{AlexNet} & \multicolumn{2}{c|}{ResNet18} \\
\hline
Layer Name & Latency (ms) & Layer Name & Latency (ms) \\
\hline
aten::mm & 47.99 & aten::maxpool2d & 6.31 \\
\hline
aten::mm & 17.99 & aten::convolution & 2.46 \\
\hline
aten::mm & 4.13 & aten::convolution & 2.24 \\
\hline
\end{tabular}
\label{tab:XSP}
\end{table}
\subsection{Impact of MLHarness{}}
The experimental results above demonstrate the success of MLHarness{} in benchmarking ML/DL model inferences by providing an extended exchange specification for researchers to easily plug in their ML/DL innovations and collect a set of well-defined metrics.
One of our near-future goals is to further extend MLHarness{} to support MLCommons Training \cite{mattson2019mlperf}.
Nevertheless, the impact of MLHarness{} is not restricted to the ML/DL community.
Benchmarking, reproducibility, portability, and scalability are important aspects of any computing-related research, such as high performance computing and computational biology. The success of MLHarness{} is only a starting point, from which we aim to extend the same techniques to other research domains that utilize heterogeneous computational resources and to provide a scalable and flexible harness system to overcome a similar set of challenges.
\section{Causal quantum geometry in two dimensions}
\label{intro:sec}
The approach of Causal Dynamical Triangulations (CDT) \cite{physrep} provides concrete evidence that
one must include causal, Lorentzian properties in the nonperturbative gravitational path integral
in order for the associated quantum gravity theory to possess a classical limit.
This should be contrasted with purely Euclidean constructions where the path integral
is taken over geometric configurations which represent four-dimensional ``spacetimes", but
do not contain any information about time, light cones or causality. The problem with the latter appears to be
that -- quite independent of the elementary ``building blocks" used for constructing individual Euclidean
configurations -- their nonperturbative sum is completely dominated by highly degenerate
objects whose superposition never leads to a four-dimensional extended universe on macroscopic
scales, no matter how one looks at it.
The Euclidean precursor of CDT, so-called Dynamical Triangulations (DT), is a case in point.
In both DT and CDT quantum gravity one looks for scaling limits of regularized path integral
expressions, where curved geometries are represented by triangulated, piecewise flat manifolds.
However, the infinite-volume limit of the Euclidean theory is dominated by degenerate phases with no obvious
physical interpretation in terms of general relativity \cite{edt},
and phase transitions are of first order \cite{firstorder1,firstorder2}.
By contrast, one of the phases of CDT quantum gravity is characterized by a quantum geometry
which on large scales exhibits properties of a four-dimensional de Sitter space \cite{desitter}, and
the phase space of CDT contains a whole line of second-order critical points \cite{secondorder},
which are being investigated as natural candidates for defining the searched-for continuum theory \cite{cdtrg}.
At the inception of the CDT approach, the toy model version of the theory in two spacetime dimensions
played an important role: the nonperturbative CDT path integral over geometries can in this case be evaluated
analytically \cite{al}, and the resulting two-dimensional quantum gravity theory compared with
the much-studied theory of two-dimensional Euclidean quantum gravity, which likewise can be formulated
and solved exactly in terms of dynamical triangulations (see \cite{edtreviews} for reviews).
The two theories turn out to be inequivalent, and are characterized by different critical exponents (see \cite{alet}
for a comparison). In terms of continuum formulations, they lie in the universality class of Liouville quantum
gravity \cite{liouville} for DT and two-dimensional, projectable Ho\v rava-Lifshitz gravity \cite{hl2} for CDT.
These models provide
the first completely explicit example that ``signature matters" in the context of the nonperturbative
gravitational path integral, aka the ``sum over histories". As mentioned above, we now have good evidence
that the same is true for the physically relevant case of quantum gravity in four dimensions.
This paper will expand on the theme of two-dimensional quantum gravity as an interesting testing ground for
quantum gravity proper, where both analytical and numerical solution methods can be employed and compared.
We will study the two-dimensional implementation of a recently introduced version of CDT, which goes
by the name of ``Locally Causal Dynamical Triangulations (LCDT)" \cite{jordanthesis} or
``CDT without preferred foliation" \cite{jordanloll}. The path integral in these CDT models is performed over a
class of piecewise flat Lorentzian geometries that is enlarged compared to standard CDT quantum
gravity. The geometries are still causal, in the sense of having a well-defined light cone structure
everywhere, but are not required to have the preferred (discrete) proper-time slicing
characteristic of standard CDT configurations. An in-depth numerical investigation of locally causal DT in
three spacetime dimensions found nontrivial evidence that key results of CDT quantum gravity,
including the volume distribution of three-dimensional de Sitter space, are reproduced in this
generalized causal theory \cite{jordanloll}.
This is an important and concrete piece of evidence that for a judicious choice of the bare coupling constants of the theory,
LCDT and CDT quantum gravity lead to equivalent continuum theories. We would like to
investigate whether the same is true in two spacetime dimensions. Although this toy model is arguably even less
representative of full gravity than the three-dimensional model, the properties of ``quantum geometry" are
much simpler to analyze in dimension two and may give us a hint of why the two causal theories are
equivalent or not, as the case may be.
Since in terms of its configuration space the locally causal model
lies in between DT and CDT quantum gravity, solving it will give us a better understanding of the universality
classes of theories of quantum geometry in two dimensions. The CDT universality class has so far proven
to be quite robust: inclusion of a higher-curvature term \cite{fgk1}, a decoration by arches along spatial
links (tantamount to including a restricted class of ``baby universes") \cite{fgk2,ambips},
an explicit inclusion of a finite number of baby universes within a string field-theoretic setting
based on CDT \cite{sft-cdt} (see also \cite{ambjornbudd}), or starting from a
conceptually rather different Hamiltonian ``string bit model" \cite{durhuuslee} all lead to the same
scaling limit. In the absence, to date, of an analytic solution of locally causal DT in two dimensions, we
will present below the results of a numerical investigation. We have examined several observables and
measured two critical exponents,
the expectation values of the Hausdorff and the spectral dimension of quantum spacetime, to try to understand
whether they coincide with those of DT or CDT quantum gravity, or perhaps signal yet another, new universality
class of two-dimensional quantum geometry.
We begin our analysis by introducing the locally causal DT model and its geometry in Secs.\ \ref{lcdt:sec} and
\ref{invest:sec}, and outline the set-up for the Monte Carlo simulations in Sec.\ \ref{setup:sec}. Closed
timelike curves and their role in LCDT are described in Sec.\ \ref{ctc-sec}. Sec.\ \ref{obs:sec} deals with observables,
including the volume profile, a characterization of the quantum geometry in terms of minimal loops,
and our results for the Hausdorff and spectral dimensions. Our conclusions are presented in Sec.\ \ref{concl:sec}.
The Appendix contains some details on the geometric shape of typical LCDT configurations.
\section{Locally causal dynamical triangulations}
\label{lcdt:sec}
We will first derive an expression for the gravitational action in locally causal DT in two dimensions.
Our starting point is the two-dimensional Einstein-Hilbert action in the continuum,
\begin{equation}
\label{contact}
S=\kappa \int d^2x \sqrt{|g|} (R-2\Lambda),
\end{equation}
where $\kappa$ is the inverse of Newton's constant, $\Lambda$ the cosmological constant,
and $g$ denotes the determinant of the Lorentzian spacetime metric $g_{\mu\nu}$. The integral over
the scalar curvature $R$ is topological and will not play a role in the path integral construction,
since the spacetime topology will be held fixed. Dropping the $R$-term leaves us with just the volume term.
Absorbing the (dimensionless) gravitational constant into the cosmological constant $\Lambda$, the path
integral and its regularized counterpart in terms of dynamical triangulations read
\begin{equation}
\label{pathints}
\int{\cal D}[g]\, \mbox{e}^{-i\Lambda\int\! d^2x \sqrt{|g|}}\; \longrightarrow\; \sum_T \frac{1}{C(T)}\,\mbox{e}^{-i\lambda V_2(T)},
\end{equation}
where the (formal) integration over diffeomorphism equivalence classes $[g]$ of metrics on the left-hand
side has been replaced by a sum over inequivalent triangulations $T$ with the usual DT measure
involving the order $C(T)$ of the automorphism group of $T$. The constant $\lambda$ on the right-hand
side is the bare cosmological coupling of the regularized theory, and $V_2(T)$ denotes the spacetime volume
of the triangulation $T$.
As described in \cite{jordanloll}, LCDT in 1+1 dimensions uses two types of triangular Minkowskian
building blocks, to allow for the construction of geometries without a preferred time foliation (see Fig.\ \ref{twotri}).
\begin{figure}[ht]
\centering
\scalebox{0.5}{\includegraphics{twotriangles.pdf}}
\caption[phase]{The two elementary building blocks of (1+1)-dimensional LCDT, with light cones
indicated: $stt$-triangle $\Delta_{stt}$ with one space- and two timelike edges (left) and $sst$-triangle $\Delta_{sst}$
with one time- and two spacelike edges (right). The colour-coding for spacelike edges is blue, and for timelike ones red.
}
\label{twotri}
\end{figure}
The usual CDT simplex $\Delta_{stt}$ with one spacelike and two timelike edges is supplemented by another
two-simplex $\Delta_{sst}$ with one timelike and two spacelike edges. The squared edge length of all
spacelike links is $l_s^2\! =\! a^2$ and that of timelike links $l_t^2\! =\! -\alpha a^2$, in terms
of the lattice cutoff $a$ and the ratio $\alpha >0$ of the two quantities. To determine the
spacetime volume $V_2(T)$
of a triangulation $T$ assembled from these building blocks, we simply need to count their numbers
$N_{stt}(T)$ and $N_{sst}(T)$ and compute the volumes of both $\Delta_{stt}$ and $\Delta_{sst}$.
The latter are determined in a straightforward way
from the values of the edge lengths of the two triangles, and are given by
\begin{equation}
\text{vol}(\Delta_{stt})= \frac{\sqrt{4\alpha +1}}{4}\, a^2 ,\;\;\;\; \text{vol}(\Delta_{sst}) =\frac{\sqrt{\alpha (\alpha+4)}}{4}\, a^2.
\end{equation}
To be able to perform Monte Carlo simulations of the system, we analytically continue the parameter $\alpha$
to $-\alpha$ in the lower-half complex plane according to the usual CDT prescription \cite{3d4d}. The resulting
real expression for the Wick-rotated regularized path integral in two dimensions is
\begin{equation}
\label{pilcdt}
Z(\lambda)= \sum_T \frac{1}{C(T)}\,\mbox{e}^{-\lambda a^2(N_{stt} \frac{\sqrt{4\alpha -1}}{4}+N_{sst}
\frac{\sqrt{\alpha (4-\alpha)}}{4})} .
\end{equation}
Note that for both triangle volumes to be positive, $\alpha$ must satisfy the inequality
$1/4 \! <\! \alpha\! < \! 4$. The limiting value $\alpha\! =\! 1/4$ corresponds geometrically to a collapse
of the $stt$-triangles to zero volume, whereas for $\alpha\! =\! 4$ the $sst$-triangles are collapsed.
In the isotropic case, $\alpha\! =\! 1$, both triangles after Wick rotation are equilateral and identical.
All LCDT simulations presented below were performed for $\alpha\! =\! 1$.
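As a quick numerical cross-check of these volume formulas, both the Lorentzian areas above and the
Euclidean weights appearing in the Wick-rotated action (\ref{pilcdt}) follow from Heron's formula
expressed in squared edge lengths $q_i$, namely $16\, A^2 = 2(q_1 q_2 + q_2 q_3 + q_3 q_1) - q_1^2 - q_2^2 - q_3^2$.
The following minimal Python sketch is our own illustration, not part of the simulation code:
\begin{verbatim}
import math

def area_sq(q1, q2, q3):
    # Heron's formula in squared edge lengths:
    # 16 A^2 = 2(q1*q2 + q2*q3 + q3*q1) - q1^2 - q2^2 - q3^2
    return (2*(q1*q2 + q2*q3 + q3*q1) - q1**2 - q2**2 - q3**2) / 16.0

alpha = 1.0  # isotropic case, lattice units a = 1

# Lorentzian assignments: timelike edges have squared length -alpha;
# A^2 is negative, and |A| reproduces vol(stt) and vol(sst) above.
print(math.sqrt(-area_sq(1.0, -alpha, -alpha)))  # sqrt(4*alpha + 1)/4
print(math.sqrt(-area_sq(1.0, 1.0, -alpha)))     # sqrt(alpha*(alpha + 4))/4

# Euclidean assignments (timelike edges rotated to squared length +alpha)
# reproduce the weights in the Wick-rotated action.
print(math.sqrt(area_sq(1.0, alpha, alpha)))     # sqrt(4*alpha - 1)/4
print(math.sqrt(area_sq(1.0, 1.0, alpha)))       # sqrt(alpha*(4 - alpha))/4
\end{verbatim}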
\begin{figure}[t]
\centering
\scalebox{0.5}{\includegraphics{triorient.pdf}}
\caption[phase]{Two time orientations are possible for each triangle type, as indicated by the future-pointing
arrow assignments.
}
\label{triorient}
\end{figure}
\begin{figure}[t]
\centering
\scalebox{0.5}{\includegraphics{vertexconf.pdf}}
\caption[phase]{Local causality implies the existence of one future and one past light cone at each point.
The edges meeting at a vertex are always arranged in four groups
of alternating type, time- or spacelike (with at least one edge in each group),
as depicted. In the figure, ``time" is pointing upwards, and
the thin lines indicate the light cones located at the central vertex.
}
\label{vertexconf}
\end{figure}
Building blocks of the two triangle types are assembled into simplicial manifolds $T$ by ``gluing" them together
pairwise along boundary edges, where a timelike edge can only be glued to another timelike edge,
and a spacelike edge only to another spacelike edge. Note that with respect to some overall flow of time, each of the
two building blocks can appear with two different time orientations, which can be indicated by
arrow assignments as illustrated in Fig.\ \ref{triorient}. By definition, all arrows are future-pointing.
Note that once a single triangle in a triangulation has been given a specific time orientation, the orientation of its
neighbours, and of its neighbours' neighbours, etc. is also fixed, because the arrows on shared edges have to match.
Local causality is incorporated in the gluing rules by stipulating
that before the analytic continuation there should be exactly one future and one past light cone at each
interior point of the triangulation \cite{jordanloll}. This condition is always satisfied at an interior
point of a triangle, because up to diffeomorphisms the metric by construction is given by the Minkowski metric.
It is also satisfied at points along edges where two triangles meet, unless the point happens to be a vertex, as
can be seen by inspecting the geometry of the building blocks in Fig.\ \ref{triorient}. The
requirement is only nontrivial at the vertices of the triangulation. When expressed in terms of the
edges meeting at a vertex, it implies that they should come in four groups of alternating type (time- or spacelike)
when going around the vertex once (Fig.\ \ref{vertexconf}), which imposes corresponding restrictions
on the triangles meeting at the vertex.
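In practice the condition is easy to test: going once around a vertex, the cyclic sequence of edge types
must change exactly four times. A minimal Python sketch (the function name and input format are our own,
hypothetical choices):
\begin{verbatim}
def vertex_causal(edge_types):
    # edge_types: types ('s' or 't') of the edges meeting at a vertex,
    # listed in the cyclic order encountered when going around it once.
    # Exactly four type changes <=> four alternating groups, i.e. one
    # past and one future light cone at the vertex.
    n = len(edge_types)
    changes = sum(edge_types[i] != edge_types[(i + 1) % n]
                  for i in range(n))
    return changes == 4

# e.g. vertex_causal(['t', 't', 's', 't', 't', 's', 's']) -> True
\end{verbatim}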
The generic vertex structure in ordinary CDT -- which uses only building blocks of type
$\Delta_{stt}$ -- is also of this type, but by construction there is only
a single spacelike edge each on the left and the right of the light cone, and the pair of these
spacelike links
forms part of a preferred slice of constant integer time. It is precisely the generalized vertex structure
in locally causal DT that allows for configurations without this preferred time slicing. Fig.\ \ref{DTstrips}
illustrates the difference between a piece of causal DT and one of locally causal DT.
\begin{figure}[t]
\centering
\scalebox{0.4}{\includegraphics{DTstrips.pdf}}
\caption[phase]{Two strips of dynamically triangulated spacetimes with an initial (bottom) and a final
spatial boundary (top). (a) In standard CDT only $stt$-triangles are used. There are two spatial
edges meeting at each internal vertex, giving rise to the characteristic preferred proper-time slicing
in terms of consecutive lines of spatial links. (b) LCDT works with both $stt$- and $sst$-triangles,
allowing for a more general vertex structure, without preferred slicing.
}
\label{DTstrips}
\end{figure}
\section{Properties of LCDT in two dimensions}
\label{invest:sec}
Our task will be to investigate the properties of the path integral (\ref{pilcdt}) in the continuum limit, where
the sum is taken over an ensemble of triangulations of fixed topology, obeying local simplicial manifold
conditions\footnote{in dimension 2:
each internal edge is shared by exactly two triangles, and any two triangles share at most one edge}
and with the vertex structure of locally causal DT described above.
At this stage, there is no known exact solution of the continuum dynamics of LCDT quantum gravity in two
dimensions; one difficulty is precisely the absence of a distinguished notion of time in terms of the lattice structure itself,
which prevents the straightforward introduction of a transfer matrix used
previously in solving CDT \cite{al,fgk1,fgk2,ambips}.
Since the configurations have two
different kinds of edges, they can be thought of as a particular kind of two-coloured graphs, whose
properties one may try to understand in a systematic way in the sense of enumerative combinatorics.
The subgraph consisting of spacelike links only has the form of a stack of ``bubbles", which on their inside are
decorated with timelike links (see \cite{hoekzema,jordanloll} for definitions and discussions of this substructure).
The LCDT model may be supplemented by additional conditions, which restrict the
type of (self-)overlaps among these bubbles that are allowed.
Since the bubbles are extended structures, these additional rules have a nonlocal character.
They are motivated by the finding that for spatially compact boundary conditions in two
dimensions the local condition of vertex causality described above does not
imply global causality in the sense of the absence of (a specific class of) closed timelike curves
(see Sec.\ \ref{ctc-sec} for an explicit example and further discussion).
In turn, the presence of such curves is related to the presence of overlapping bubbles.
In our simulations both space and time will
be compact. The topology of space will be a circle and time will be cyclically identified, which means that
spacetime is topologically a torus $T^2$. Since we are primarily interested in ``bulk" properties of the
geometry, this choice is technically convenient:
all vertices are interior vertices, on which vertex causality will
be imposed, and the action is the one appearing in the path integral expression (\ref{pilcdt}),
without the need for adding any boundary terms.
The functional form of the Euclidean action, schematically
given by $S\! =\! c_1 N_{stt} +c_2 N_{sst}$, for two positive constants $c_1$, $c_2$, is the most
general one linear in ``counting variables". These are the variables counting simplices of a
particular dimension and type in a given triangulation $T$: the numbers $N_0(T)$ of vertices,
$N_s(T)$ of spacelike edges and $N_t(T)$ of timelike edges, as well as the numbers
$N_{sst}(T)$ and $N_{stt}(T)$ already introduced earlier. Our statement follows from the fact
that these five variables are subject to three constraints,
\begin{equation}
N_0-N_s-N_t+N_{sst}+N_{stt}\! =0,\;\;
N_{sst}+2 N_{stt}-2 N_t\! =0,\;\;
N_{stt}+2 N_{sst}-2N_s\! =0,
\end{equation}
which must be satisfied on each configuration $T$ with torus topology. The first constraint expresses the
vanishing of the Euler characteristic of the torus, while the other two count incidences of timelike and
spacelike edges respectively, each interior edge being shared by exactly two triangles.
We finally note that, at least in the absence of additional
constraints on the bubble configurations, toroidal boundary conditions
introduce a duality into the two-dimensional LCDT system. The duality transformation
consists in swapping
simultaneously the assignments ``timelike" and ``spacelike" of all edges in a given triangulation,
which will convert all $stt$-triangles into $sst$-triangles and vice versa.
An admissible triangulation (one that satisfies the local conditions of a simplicial manifold and
vertex causality) will under this transformation be mapped to another admissible triangulation with the same
topology, with the roles of time and
space interchanged. Of course, for $\alpha\! \not=\! 1$ the Boltzmann weights of a triangulation and its
dual will in general be different.
\section{Numerical set-up}
\label{setup:sec}
We have used Monte Carlo techniques to sample the partition function or Euclideanized path integral
(\ref{pilcdt}) of locally causal dynamical triangulations, and compute expectation values of selected observables.
An important ingredient is a set of Monte Carlo moves, which take the form of local changes in the simplicial
geometry, and are designed to get us around the configuration space of the model by way of a Markov process.
The four types of move we have worked with will be described below.
Further technical details on their
implementation may be found in \cite{ruijl}. Additional references on Monte Carlo moves in the context of
causal dynamical triangulations are \cite{jordanthesis,physrep}.
The first type of move is a generalization of the (0,2)-move used in CDT, in which two adjacent $stt$-triangles that
share a spacelike link are
created simultaneously. The local starting configuration consists of a pair of timelike links, belonging to
opposite sectors of a vertex (one link from inside the past and the other from inside the future light cone), see
Fig.\ \ref{collapseflip}(a). This move is compatible with the time slicing of CDT geometries. Since this compatibility is no
longer a requirement in locally causal DT, we will also use the colour-reversed counterpart of this move, where
two adjacent $sst$-triangles that share a timelike link are created from a pair of spacelike links,
this time from opposite spacelike sectors of the light cone at the central vertex.
\begin{figure}[t]
\centering
\scalebox{0.43}{\includegraphics{collapseflip.pdf}}
\caption[phase]{Two types of Monte Carlo moves used in locally causal DT: (a) example of a (0,2)-move and its inverse, also
called a ``link collapse"; (b) example of a (2,2)- or flip move and its inverse.
}
\label{collapseflip}
\end{figure}
The second type of move, the (2,2)- or ``flip" move, also generalizes a local move already employed in CDT. It consists in
flipping the diagonal inside a rhombus made of a pair of adjacent triangles. The version depicted in
Fig.\ \ref{collapseflip}(b) is the one also permitted in CDT. In our simulations of locally causal DT, we will in addition use
the move with the opposite assignments of time- and spacelike edges. These are the only two flip moves compatible
with vertex causality, if the character of the flipped edge remains unchanged. Two more flip moves are possible
if the flipped diagonal link is allowed to change from time- to spacelike or vice versa.
Another type of move we have used in the simulations is a (2,4)-move, where the starting point is again given
by a pair of adjacent triangles. A new configuration with identical boundary is obtained by ``subdividing" the rhombus
with another diagonal, thereby creating a four-valent vertex at the centre, see Fig.\ \ref{pinchother}(a) for an example.
In order for vertex causality to be satisfied at the new vertex, the added diagonal has to be of opposite (time-/spacelike)
type to the one already present.
Eight variations of the (2,4)-move (and its inverse) are possible, depending on the type and orientation of the
initial triangle pair, but six of them are equivalent to performing a (0,2)-move, which was already discussed above.
\begin{figure}[t]
\centering
\scalebox{0.43}{\includegraphics{pinchother.pdf}}
\caption[phase]{The two remaining types of Monte Carlo moves used in locally causal DT: (a) example of a (2,4)-move and its inverse; (b) the pinching move and its inverse.
}
\label{pinchother}
\end{figure}
Lastly, we employ a ``pinching" move, which has no counterpart in standard CDT and was previously
described in \cite{jordanthesis}. The initial local
configuration for this move looks like a piece of regular CDT configuration, consisting of four $stt$-triangles
forming a strip bounded above and below by line segments made out of spacelike links. The move pinches
those two segments together in a single point, resulting in a pair of $sst$-triangles, as illustrated by
Fig.\ \ref{pinchother}(b). In the simulations we also use the colour-reversed version of this move.
In all cases, it is understood that whenever one of these moves is proposed by the computer algorithm,
it will always be rejected if it violates either vertex causality or the simplicial manifold condition.
The (overcomplete) set of
these moves is likely to be ergodic, but we do not have a formal proof at this stage.
The explicit proof may depend in subtle ways on the details of the
ensemble, in particular, on excluding classes of bubble configurations associated with
specific closed timelike curves that lead to unwanted global acausal behaviour.
We have run several kinds of cross check on the Monte Carlo simulations: firstly, that the acceptance
rates of moves and their inverses are approximately the same, and secondly, that the frequency of occurrence
for configurations with very small volume is compatible with the frequency predicted by the Boltzmann
distribution. Lastly, our set-up has an easy way to implement a CDT limit, which we can use to
cross-check measured CDT results for the dynamical dimensions with the theoretical results available.
Note that apart from the flip move, all Monte Carlo moves described above alter the number of triangles in
the triangulation, and therefore its two-volume. Since it is convenient from the numerical point of view to keep
the total volume fixed, at least approximately, we use the standard DT prescription where the volume $V_2$
is allowed to vary in a narrow interval around a fixed target volume $V_2^{(0)}$. This is achieved by adding
a quadratic term $\delta\, (V_2^{(0)}\! -\! V_2)^2$ to the action, with a parameter $\delta >0$ determining
the width of the interval.\footnote{Note that for our standard choice $\alpha\! =\! 1$, $V_2$ is proportional to the
number $N_2$ of triangles.} We tune the cosmological
constant $\lambda$ such that the measured volumes are distributed symmetrically around $V_2^{(0)}$,
and only collect data from triangulations that have precisely this target volume. The simulations are run
such that there is about one sweep of length $L\approx 10^6$ between successive measurements.
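Schematically, the accept/reject step for a volume-changing move then takes the following form (a minimal
Metropolis sketch under the action described above; the function names and bookkeeping are our own
assumptions, not the implementation of \cite{ruijl}):
\begin{verbatim}
import math, random

def total_action(n_stt, n_sst, lam, delta, v_target, alpha=1.0):
    # Euclidean bulk action (the Wick-rotated weights) plus the
    # quadratic volume-fixing term delta*(V0 - V2)^2, lattice units a = 1
    v2 = (n_stt * math.sqrt(4*alpha - 1)
          + n_sst * math.sqrt(alpha*(4 - alpha))) / 4.0
    return lam * v2 + delta * (v_target - v2)**2

def accept_move(s_old, s_new):
    # Standard Metropolis test on the change of the total action;
    # moves violating vertex causality or the manifold conditions
    # are rejected before this test is ever reached.
    ds = s_new - s_old
    return ds <= 0.0 or random.random() < math.exp(-ds)
\end{verbatim}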
As mentioned before, there is a straightforward way to obtain ordinary CDT simulations in our set-up. It
consists in setting $\alpha\! =\! 1/4$ in the action (\ref{pilcdt}), while maintaining the constraint
$N_2\!\equiv\! N_{sst}+N_{stt}\! =\! constant$. Since the term proportional to $N_{stt}$ in the action now
vanishes, $stt$-triangles can be created ``at no cost" during the Monte Carlo simulation, while the number
of $sst$-triangles will diminish accordingly, thereby lowering the value of the Euclidean action. As a result,
the triangulations quickly become pure CDT configurations, consisting only of building blocks $\Delta_{stt}$.
Below in Sec.\ \ref{obs:sec} we measure the Hausdorff and spectral dimensions
of CDT quantum gravity. In addition to setting $\alpha\! =\! 1/4$, we will disable the $sst$-triangles completely,
to make sure that any fluctuations with nonvanishing $N_{sst}$ are eliminated.
\section{Closed timelike curves}
\label{ctc-sec}
For most of our measurements, the ensemble of locally causal triangulated geometries on which the dynamics takes
place will consist of spacetimes of torus topology satisfying simplicial manifold conditions
and vertex causality. However, for some purposes,
when considering the time evolution of observables, it is convenient to restrict this ensemble further,
because of the appearance of a particular class of closed timelike curves.
\begin{figure}[t]
\centering
\scalebox{0.55}{\includegraphics{tcurve.pdf}}
\caption[phase]{A piece of LCDT, with an initial and a final spatial boundary (blue edges, bottom and top).
When the two spatial boundaries are identified as indicated by the letters, the future-oriented timelike curve
running from bottom to top (timelike edges with arrows) becomes a timelike cycle, as defined in the text.
}
\label{tcurve}
\end{figure}
To explain this issue further, we introduce the notion of a {\it timelike cycle} in a locally causal
triangulation. By this we shall mean a contiguous set of timelike links which together form a {\it non-contractible}
loop of topology $S^1$, without self-crossings or self-overlaps (Fig.\ \ref{tcurve}). In addition, whenever the loop
passes through a vertex, the two timelike links of the loop meeting at the vertex must lie in opposite light cones,
never inside the same (half of the) light cone. A {\it spacelike cycle} is defined analogously in terms of
spacelike edges, and it is also required to be non-contractible.
In the spacelike case, the cycle has to cross at each vertex from one spacelike sector outside the light
cone to the opposite one.
In standard CDT in 1+1 dimensions with its preferred time slicing (c.f.\ Fig.\ \ref{DTstrips}a),
spacelike cycles only exist when
space is compactified to a circle. In this case they simply coincide with
the one-dimensional spatial slices at integer proper time. Likewise, timelike cycles only exist in CDT when
time is compactified, in which case they can be thought of as a particular\footnote{They are particular
in the sense that one could also consider paths that run not only along the
edges of a triangulation, but also through the interiors of triangles.} lattice realization of
closed timelike curves. As a consequence of
how they traverse the light cones at vertices, they are also time-oriented, either in positive or negative time direction.
As usual, the reason for compactifying time in CDT simulations is merely one of convenience, and the appearance of
closed timelike curves is an inevitable side effect, which is not expected to have much influence
on the measurement of ``bulk" observables like the dynamical dimensions considered below.\footnote{Obviously,
in any concrete implementation the
dependence of observables on boundary conditions and other finite-size effects should always be monitored.
Although CTCs are unphysical classically, their inclusion in the regularized path integral does not a priori imply
unphysical behaviour of the final continuum theory.}
\begin{figure}[t]
\centering
\scalebox{0.5}{\includegraphics{ctc.pdf}}
\caption[phase]{A piece of locally causal DT, with an initial and a final spatial boundary (blue edges, bottom and top).
Compactifying space as indicated by the letters results in a closed, future-oriented timelike curve (timelike edges
with arrows), which intersects neither the initial nor the final boundary.
}
\label{ctc}
\end{figure}
The closed timelike curves we are primarily concerned about in LCDT are not the ones winding around
the compactified time direction, but around the spatial direction, and which would still be
present if spacetime was a cylinder $I\times S^1$ (with compact spatial slices) instead of a torus, see Fig.\ \ref{ctc}
for an example. In the remainder of this work, when we talk about closed timelike curves (CTCs), we will mean
only this class of timelike cycles.
It turns out that local vertex causality does not preclude the presence of these curves, although it does
prevent the occurrence of {\it contractible} timelike loops \cite{hoekzema}. One way of finding
them is by running an algorithm that assigns time labels to vertices of a given locally causal DT.
Of course, an explicit choice of time has to be made in the LCDT model, because -- unlike in
usual CDT -- there is no preferred lattice substructure one can refer to as a natural time label.
The prescription for assigning time labels to vertices in a given geometry
we have used in the present work is to pick a spacelike cycle in the geometry and define it to be ``space
at time $t\! =\! 0$". Vertices lying in the future of this slice are then successively assigned time labels, which are
computed as the average distance of the vertex $v$ to the initial slice along any oriented timelike path from
the slice to $v$ (see \cite{ruijl} for more details on the algorithm).
The distance along any given path is simply given by the number of timelike links it contains, the
so-called {\it link distance}.
Note that the time label of a vertex will in general not be an integer. Once the vertices are labelled, one
can in a straightforward way also associate time labels with edges or more extended regions like spacelike cycles
by averaging over the time labels of the vertices contained in them. In standard CDT, this prescription
reproduces the usual integer proper-time slicing.
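One way to realize this prescription is a topological-ordering pass over the future-oriented timelike links,
computing the average path length to the initial slice by dynamic programming. The following Python sketch
is our own reconstruction, valid under the assumption that these links form a directed acyclic graph, i.e.
that no CTC lies in the future of the slice; \texttt{future\_edges} is a hypothetical adjacency structure:
\begin{verbatim}
from collections import defaultdict, deque

def time_labels(slice_vertices, future_edges):
    # future_edges[u]: vertices reached from u along one
    # future-pointing timelike link (assumed acyclic, and every
    # listed vertex assumed to lie in the future of the slice).
    indeg = defaultdict(int)
    for u in future_edges:
        for v in future_edges[u]:
            indeg[v] += 1
    npaths = defaultdict(int)  # number of oriented timelike paths
    total = defaultdict(int)   # summed link lengths of those paths
    label = {}
    queue = deque()
    for s in slice_vertices:
        npaths[s] = 1
        queue.append(s)
    while queue:
        u = queue.popleft()
        label[u] = total[u] / npaths[u]  # average link distance
        for v in future_edges.get(u, ()):
            npaths[v] += npaths[u]
            total[v] += total[u] + npaths[u]  # each path grows by 1
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return label
\end{verbatim}
If a CTC is present, some vertices never reach indegree zero and remain unlabelled, which is the discrete
counterpart of the breakdown described next.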
The algorithm implementing the vertex labelling breaks down when it encounters a CTC, like
that depicted in Fig.\ \ref{ctc}, because for any vertex lying on the curve or in its future there will be infinitely
many timelike paths connecting it to the initial slice.
We conclude that in LCDT, at least when space is compactified, local causality does not imply global causality in the sense of
an absence of CTCs.
Whether or not the presence of closed timelike curves has any consequences for the
continuum limit of the model is a priori unclear. We have performed a number of measurements
to get a better quantitative idea of how many CTCs there are, depending on the size of the
triangulation.
Fig.\ \ref{ctc_freq} shows two histograms of the frequencies of disjoint CTCs (CTCs without mutual
overlap\footnote{These curves form a subclass of CTCs; the number of all CTCs can be considerably larger,
especially when there are many disjoint CTCs.}).
We observe that the typical number of disjoint closed timelike curves present in a given configuration goes down
significantly when the volume is increased tenfold from $N_2\! =\! 10.000$ to $N_2\! =\! 100.000$ triangles.
On the other hand, the fraction
of triangulations that contain any CTCs at all increases from about 21\% to 30\%.
We therefore have no indication that CTCs disappear completely as the volume grows.
One contributing factor is presumably that it takes much longer to break up a CTC by a local Monte Carlo move
when the CTCs become highly diluted in a triangulation.
\begin{figure}[t]
\centering
\scalebox{0.55}{\includegraphics{ctc_freq_10k_100k.pdf}}
\caption{The frequency of disjoint timelike loops in 48.000 sweeps.
The upper histogram is for $N_2\! =\! 10.000$ and the lower one for $N_2\! =\! 100.000$.
}
\label{ctc_freq}
\end{figure}
\section{Observables}
\label{obs:sec}
Having introduced both the theoretical and numerical set-up of locally causal dynamical triangulations
in 1+1 dimensions, we will now discuss the measurements of several observables in this model of
quantum geometry, concentrating on the isotropic case $\alpha\! =\! 1$.
Our main aim is to understand whether the model's continuum limit coincides
with that of either Euclidean or causal dynamical triangulations in two dimensions.
It would be exciting if LCDT constituted a {\it new} universality class, but this seems a priori less
likely because of the apparent scarcity of universality classes among two-dimensional models of pure
geometry without matter coupling.
If LCDT quantum gravity were to lie in one of the two known universality classes of DT models in two dimensions,
our best guess at this stage would be that it is equivalent to CDT, for two
reasons: first, there is good evidence that this is true in three spacetime dimensions \cite{jordanloll},
in the sense that there one finds in both models a phase whose ground state has the scaling
properties of a Euclidean de Sitter universe. (Of course, this by no means constitutes a proof that the same happens
in two dimensions, where both the geometric degrees of freedom and the phase structure are different.)
Second, the difference between DT and CDT in two dimensions has so far been explained
in terms of the absence of baby universes in the latter \cite{alet,ackl}.\footnote{More precisely, it is the absence
of the possibility for baby universes to proliferate without limit; a limited, controlled presence of baby universes
{\it is} compatible with two-dimensional CDT, as demonstrated by the model of generalized CDT \cite{sft-cdt,ambjornbudd}.}
Since the condition of vertex causality in
locally causal DT suppresses the light cone degeneracies characteristic of topology change and therefore of the
creation of baby universes, the LCDT model seems closer to CDT than to DT, where baby universes dominate.
\subsection{Volume profile}
\label{vol:sec}
We begin by examining the so-called volume profile of a typical geometric configuration generated by
the Monte Carlo simulation of LCDT. The volume profile is simply given by the size of the
spatial volume as a function of time. Since by construction the model does not have a distinguished
notion of time in terms of some lattice substructure, we will make use of the notion of time
introduced in Sec.\ \ref{ctc-sec} above. For this purpose, a spatial slice at fixed time is any spacelike cycle -- as
defined at the beginning of Sec.\ \ref{ctc-sec} -- and its time label is obtained by averaging over the
time labels of all of its vertices. The volume of a spatial slice is the number of links contained in it.
To determine the complete volume profile of a spacetime configuration, one has to identify all of its
spatial cycles, which is a nontrivial task. Recall that unlike in CDT, in LCDT different spatial cycles can
cross and overlap along some subset of spatial edges.
\begin{figure}[htb]
\centering
\scalebox{0.65}{\includegraphics{volprof.pdf}}
\caption{Sampled LCDT volume profile for $N_2\! =\!10.000$, where $t$ denotes the time label of a spacelike cycle and the
discrete spatial extension is the number of spacelike links in the cycle at a given $t$.
}
\label{volprof}
\end{figure}
For simplicity, we have considered sampled volume profiles instead of complete ones; for a geometry of
volume $N_2\! =\! 10.000$ we have randomly sampled 100 spatial slices, and for each slice
determined its volume and time label. Fig.\ \ref{volprof} shows one such sample, to illustrate the situation. We
note that some time labels are very close to each other, which indicates that they probably share one or more
spacelike links. The sample shows large volume fluctuations within small
time intervals, without any discernible overall shape.
This finding is only qualitative, but it is compatible with the typical, strongly oscillating volume profiles
encountered in simulations of two-dimensional CDT \cite{cdtmatter1}.
\subsection{Behaviour of minimal loops}
\label{loop:sec}
A phenomenon that will potentially affect the measurement of dimensions discussed below is the overall shape of
the toroidal configurations. We saw in the previous subsection that their volume profiles seem to be strongly
fluctuating, and are comparable to what one finds in two-dimensional CDT quantum gravity.
Large fluctuations are commonplace in two dimensions, because there is only a single coupling constant
(the cosmological constant) which
sets the scale of both the spatial ``universe" and its quantum fluctuations \cite{al}.\footnote{The unique length scale
of the quantum theory is
$\Lambda^{-1/2}$, where $\Lambda$ is the dimensionful, renormalized cosmological constant.} This is in line with
the fact that general relativity in two dimensions is trivial.
A new feature in LCDT is the variable length of
the configurations in the time direction, since by construction the fixed time slicing is absent.
As a result, both the spatial extension of the universe and its time extension -- determined by the prescription
used for measuring the volume profile, say -- will evolve dynamically.
For example, the torus may become very thin in one of its directions, an effect which may
be quantified by monitoring the length of closed non-contractible curves. While in CDT individual slices of constant
time can become very short (the minimal length of a spatial $S^1$ compatible with manifold conditions is attained by
cycles of three links), the probability for this to happen can be made very small by choosing the total
time extent $t_{TOT}$ and the total volume $N_2$ suitably. By contrast,
in LCDT it can in principle happen that the torus becomes {\it uniformly} thin in one of its directions, even for
large $N_2$, if this is dynamically preferred.
The relevance of this for the measurement of dynamical dimensions is the possible appearance of finite-size
effects, even when the total volume is large. For example, this happens when the paths of random walkers -- used
to determine the spectral dimension -- start winding around the torus more than once. To obtain an estimate of
the size of this effect we have set up an algorithm which searches the tori for minimal non-contractible loops.
It does not distinguish between time- and spacelike links, which implies that the minimal loops found
can be made up of any link types.
\begin{figure}
\centering
\scalebox{0.55}{\includegraphics{loop_bounds.pdf}}
\caption{Evolution of the shortest and longest loop lengths
from a sample of minimal non-contractible loops
through randomly chosen vertices on a given triangulation, as a function of
Monte Carlo time $t$, and at discrete volume $N_2\! =\! 100.000$.
The largest $t$-values lie just before the system is fully thermalized.
}
\label{loop_bounds}
\end{figure}
The algorithm consists of the following steps:
on a given LCDT configuration $T$, pick a vertex $v$. Starting at $v$, perform a breadth-first search until
the area searched starts to self-overlap at some other vertex $v'$. Next, determine whether the
minimal closed curve $c$ through $v$ and $v'$ obtained in this way is contractible or not, by starting
breadth-first searches on either side of $c$. If those searches start overlapping, $c$ is a non-contractible closed
curve through $v$ of minimal length and we record its length.
Because this procedure is rather costly in computational terms, we do not repeat it for every vertex of
$T$, but only for a sample of 1.000 randomly chosen initial vertices on $T$. For a given triangulation $T$
we therefore end up with a sample of locally minimal loops (``locally" because they pass
through prescribed vertices on the torus).
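The first stage of this search, finding a minimal closed curve through a given vertex $v$, can be sketched
as a breadth-first search that records from which neighbour-branch of $v$ each vertex was reached; the
search fronts self-overlap whenever an edge connects two different branches. The contractibility test of
the second stage is omitted here (a hedged illustration, not the implementation of \cite{ruijl}):
\begin{verbatim}
from collections import deque

def shortest_cycle_through(adj, v):
    # adj[u]: neighbours of u; doubled edges are excluded by the
    # simplicial manifold conditions. Returns the minimal length of
    # a closed curve through v (contractible or not).
    dist, branch = {v: 0}, {v: None}
    queue = deque()
    for b, u in enumerate(adj[v]):
        dist[u], branch[u] = 1, b
        queue.append(u)
    best = None
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w == v:
                continue
            if w in dist:
                if branch[w] != branch[u]:  # fronts self-overlap
                    length = dist[u] + dist[w] + 1
                    best = length if best is None else min(best, length)
            else:
                dist[w], branch[w] = dist[u] + 1, branch[u]
                queue.append(w)
    return best
\end{verbatim}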
To obtain Fig.\ \ref{loop_bounds}, we have performed the
sampling of minimal loops at each step during the thermalization of a Monte Carlo simulation of 280 time steps,
corresponding to 280 sweeps, for a triangulation size $N_2\! =\! 100.000$. At each step, we plot only the
shortest and longest minimal loop length of the sample; all other minimal loop lengths lie in between these
two values.
Since our samples are quite large, the lower curve is probably a good indicator of
the global minimum of the length of non-contractible loops on $T$, which roughly speaking lies in the range
20--35. This is far away from the kinematically allowed minimum of 3
and shows that the torus does not become
very thin in some places.\footnote{Note that our algorithm does not determine which of the torus directions
any particular minimal loop winds around. Also in this
respect the information is distinct from that contained in volume profiles of the kind shown in Fig.\ \ref{volprof}.}
In addition, the fact that there is a significant distance between the upper and lower curves shows that the
torus does not degenerate by becoming very long in one direction and uniformly short in the other.
On the other hand, a triangle count of $N_2\! =\! 100.000$ according to (\ref{pilcdt}) at $\alpha\! =\! 1$
corresponds to a volume $V\! =\! \sqrt{3} N_2/4\! \approx\! 43.300$, where we have set the lattice constant
to unity, $a\! =\! 1$. Assuming the two torus directions are approximately of equal length, this
corresponds to an average linear extension of the torus in the range of 150-200, of which the shortest
loop length therefore is only a small fraction. It implies that fluctuations in the linear extension of the LCDT configurations
are {\it large}, even if the total volume is also large. We will comment further on this characteristic feature
of two-dimensional quantum gravity in Sec.\ \ref{haus:sec} below.
The overall conclusion is that at system size $N_2\! =\! 100.000$ finite-size effects for observables involving shortest
(geodesic) distances should not
play a role at least up to link distances of about 30, and for observables involving closed random walks
at least up to about 500 steps.\footnote{Note that we are not making a distinction between link distance
on the triangulated lattice and link distance on the dual lattice. We have not determined the relative scale between
these two notions of geodesic distance on typical LCDT configurations. The numbers given in the text should
therefore be treated only as rough estimates.}
Looking ahead to the measurement of dimensions presented below, this still
leaves plenty of room for finite-size effects on larger scales, but they are not quantified easily
above the thresholds just mentioned.
\subsection{Spectral dimension}
\label{spec:sec}
Dynamical dimensions, like the spectral and Hausdorff dimension, are important and popular examples of observables
in models of nonperturbative quantum gravity because of their computational accessibility in many different
contexts.
A key insight is that the values of these dimensions do not have to coincide with the dimensionality of the
triangular building blocks used to construct the regularized model if one takes a nontrivial, infinite
continuum limit, as we are doing. Furthermore, a familiar feature from studying graphs and fractals, namely, the fact
that there exist ``spaces" of non-integer dimension, is also encountered in systems of dynamical triangulations.
This is not necessarily inconsistent from a physical point of view as long as the anomalous values of the
dimensions are confined to a highly quantum-fluctuating, non-semiclassical regime,
typically at the Planck scale. Since there is
no nontrivial classical theory of two-dimensional general relativity whose solutions might be recovered from a
corresponding quantum theory in the limit as $\hbar\rightarrow 0$,
there are no a priori physicality constraints on the Hausdorff and spectral dimension of
an ensemble\footnote{When talking about ``the Hausdorff dimension", say, of (C)DT, we mean of course
the {\it expectation value} of this quantity, measured in the ground state of the relevant ensemble.}
of DT configurations in two dimensions, causal or otherwise.
For Euclidean DT in two dimensions, from theoretical scaling arguments
the spectral dimension is 2 and the Hausdorff dimension 4 \cite{dim2d}, which has also been
corroborated numerically \cite{dim2dnum}. Invoking an equivalence
between CDT configurations and tree graphs, CDT in 1+1 dimensions can be shown to
have a spectral dimension of at most 2 and a Hausdorff dimension of almost surely 2 \cite{djw},
the latter in agreement with earlier theoretical \cite{al,alet} and numerical \cite{cdtmatter1} results.
In the context of LCDT,
we will first investigate the spectral dimension. In the section following this one, we will examine
the Hausdorff dimension, which appears to
be the quantity best suited to discriminating between the different universality classes.
The first step in measuring the spectral dimension is to define a discrete diffusion process
on a two-dimensional Euclideanized locally causal triangulation $T$. This takes the form of a random
walk moving in steps of unit distance between the centres of neighbouring triangles as function
of a discrete external diffusion time $\sigma$, analogous to what was done
in CDT in four dimensions \cite{reconstruct,spectral}. In other words, the diffusion takes place along the edges of
the trivalent lattice dual to $T$. Calling $K_T (i,i_0;\sigma)$ the probability to go from triangle $i_0$
to triangle $i$ in $\sigma$ steps, satisfying $\sum_i K_T (i,i_0;\sigma)=1$,
the discrete diffusion equation on the triangulation $T$ reads
\begin{equation}
\label{diffu}
K_T(i,i_0;\sigma+1) = (1-\chi) K_T (i,i_0;\sigma) + \, \frac{\chi}{3} \sum_{j\, {\rm n.n. \, of}\, i} K_T (j,i_0; \sigma),
\end{equation}
subject to the initial condition $K_T(i, i_0;\sigma\! =\! 0) = \delta_{i,i_0}$.
The sum on the right-hand side of (\ref{diffu}) is over the three nearest neighbours $j$ of triangle $i$,
and $\chi\in [0,1]$ is a diffusion constant which allows for a non-vanishing probability $(1-\chi)$ that the random
walker remains at the same triangle during a diffusion step.
It is included merely for convenience, to smooth out somewhat the discretization artefacts for short diffusion paths.
In particular, there is an asymmetry between paths of even and odd numbers of steps (c.f. the discussion
in \cite{reconstruct}), with a corresponding oscillatory behaviour in the curve for $d_s$ that is also
present in our Figs.\ \ref{spec_cdt} and \ref{spec_iso} when one zooms into the region below
$\sigma \approx 50$. A diffusion constant $\chi\! <\! 1$
has been used previously when studying the spectral dimension in three-dimensional CDT \cite{benehenson}.
In our simulations, we have worked with $\chi\! =\! 0.8$ throughout.
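For reference, one exact step of the discrete diffusion equation (\ref{diffu}) on the trivalent dual lattice
reads as follows, in a minimal Python sketch (in practice, at volumes like $N_2\! =\! 100.000$ one samples
individual random walks rather than evolving the full probability vector):
\begin{verbatim}
def diffusion_step(K, neighbours, chi=0.8):
    # K[i]: probability of finding the walker at triangle i;
    # neighbours[i]: the three nearest neighbours of i on the
    # dual lattice. Total probability is conserved step by step.
    K_new = [0.0] * len(K)
    for i, p in enumerate(K):
        K_new[i] += (1.0 - chi) * p      # walker stays put
        for j in neighbours[i]:
            K_new[j] += (chi / 3.0) * p  # walker hops to a neighbour
    return K_new
\end{verbatim}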
To extract the spectral dimension, we consider closed random walks, beginning and ending at a
specified triangle $i_0$. They enter into the calculation of the average return probability
\begin{equation}
P_T(\sigma)= \frac{1}{N_2(T)}\sum_{i_0 \in T} K_T(i_0,i_0;\sigma)
\label{arp}
\end{equation}
for a given triangulation $T$. The spectral dimension $d_s$ is obtained from the expectation value
$\langle P(\sigma )\rangle_{N_2}$ of the observable (\ref{arp}) in the ensemble of triangulations of fixed
volume $N_2$ according to
\begin{equation}
\label{diffdim}
d_s(\sigma) := -2 \,\frac{\operatorname{d} \ln \langle P(\sigma)\rangle}{\operatorname{d} \ln \sigma }.
\end{equation}
In practice, we perform a ``double" random sampling: for each randomly chosen triangulation
we pick 10 random triangles as starting points $i_0$ for random walks, and
then repeat the process for at least 400 triangulations.
For a diffusion process on classical, flat ${\rm I\!R}^d$, the formula analogous to (\ref{diffdim})
simply reproduces the topological dimension $d$, independent
of $\sigma$, but in the quantum context the behaviour of $d_s$ can be
more complicated and, generally speaking, $\sigma$-dependent.
By leaving the $N_2$-dependence in (\ref{diffdim}) implicit we mean to indicate that $d_s$
is determined in the limit of large volumes, where this dependence gradually disappears.
When the total spacetime is compact, the spectral
dimension will always go to zero for sufficiently large $\sigma$. This is a finite-size effect which occurs when
generic random walks become sufficiently long to wrap around space one or more times.
We are primarily interested in the $\sigma$-regime below this range.
If the system develops a stable plateau below the scale where significant
finite-size effects kick in, we will refer to this constant value of
$d_s$ as the spectral dimension of the underlying ``quantum spacetime". Note that if the system size is too small,
a plateau will never form due to a dominance of finite-size effects.
Since there are no published numerical results on the spectral dimension of two-dimensional CDT, and since it will be
useful to have a point of reference for the measurements in LCDT, we will first present our results for
the spectral dimension of regular CDT quantum gravity on a two-torus. As explained earlier, the reduction to
pure CDT configurations is achieved by setting $\alpha\! =\! 1/4$ in the action.
To understand better the effects
of the discretization, we have used two different discrete versions of the defining formula (\ref{diffdim}) for
the spectral dimension. Having determined the expectation value $\langle P(\sigma)\rangle$ for integer $\sigma$
from the data, we have employed two different implementations in terms of finite differences.
The standard choice is
\begin{equation}
d_s^{(1)} = -2\, \frac{\ln \langle P(\sigma + 1)\rangle - \ln \langle P(\sigma)\rangle }{\ln(\sigma + 1) - \ln \sigma }.
\label{canon_disc}
\end{equation}
In addition, we have used the alternative form
\begin{equation}
d_s^{(2)} = -2 \sigma \left(\frac{\langle P(\sigma + 1)\rangle }{\langle P(\sigma)\rangle } - 1 \right),
\label{disc}
\end{equation}
which has the same continuum limit and the advantage that no computationally expensive logarithms are required.
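In code, the two discretizations are one-liners (a minimal sketch; \texttt{P} is the measured array with
\texttt{P[sigma]} holding $\langle P(\sigma)\rangle$, and $d_s^{(1)}$ requires $\sigma\geq 1$):
\begin{verbatim}
import math

def d_s1(P, sigma):
    # eq. (canon_disc): finite-difference log-derivative
    return -2.0 * (math.log(P[sigma + 1]) - math.log(P[sigma])) \
           / (math.log(sigma + 1) - math.log(sigma))

def d_s2(P, sigma):
    # eq. (disc): same continuum limit, no logarithms required
    return -2.0 * sigma * (P[sigma + 1] / P[sigma] - 1.0)
\end{verbatim}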
\begin{figure}
\centering
\scalebox{0.55}{\includegraphics{spec_cdt.pdf}}
\caption{The spectral dimension of CDT as a function of the diffusion time $\sigma$, measured at volume
$N_2\! =\! 100.000$, and time extension $t_{TOT}\! =\! 80$.
The upper, green line is the dimension $d_s^{(1)}$ of eq.\ (\ref{canon_disc}) and the lower, blue line is
the dimension $d_s^{(2)}$ of eq.\ (\ref{disc}).
Statistical error bars are too small to be displayed.}
\label{spec_cdt}
\end{figure}
The CDT results for the spectral dimension $d_s$ are displayed in Fig.\ \ref{spec_cdt}, for data taken at volume
$N_2\! =\! 100.000$ and time extension $t_{TOT}\! =\! 80$.
For $\sigma \lesssim 700$ there is a small discrepancy between the curves corresponding to
the two different discretizations, giving us an estimate of the systematic error of determining $d_s$ for small
values of $\sigma$. For larger $\sigma$, both curves merge into what is essentially a single plateau.
The spectral dimension of CDT extracted from data on the plateau is $d_s = 2.02 \pm 0.02$,
in good agreement with the expected value of 2.
The curves for the spectral dimension for LCDT are shown in Fig.\ \ref{spec_iso}.
Qualitatively the plot is similar to that of
CDT, but the plateau is reached only for somewhat larger diffusion times $\sigma \gtrsim 1.000$.
The numerical result for the spectral dimension is $d_s=1.99 \pm 0.02$, which we regard as a
convincing confirmation that the spectral dimension of locally causal DT is 2, like that for DT and CDT
quantum gravity.
\begin{figure}
\centering
\scalebox{0.55}{\includegraphics{spec_iso.pdf}}
\caption{The spectral dimension of locally causal DT (with $\alpha\! =\! 1$) as a function of the diffusion time $\sigma$,
measured at volume $N_2=100.000$.
The upper, green line is the dimension $d_s^{(1)}$ of eq.\ (\ref{canon_disc}) and the lower, blue line is
the dimension $d_s^{(2)}$ of eq.\ (\ref{disc}).
Statistical error bars are too small to be displayed.}
\label{spec_iso}
\end{figure}
\subsection{Hausdorff dimension}
\label{haus:sec}
The Hausdorff dimension $d_h$ is a key quantity to discriminate between distinct universality classes
of two-dimensional DT quantum gravity. The general idea is to relate the volume $V$ of compact, connected regions
in space -- typically discrete analogues of geodesic balls around a chosen point --
to their linear size, e.g. the radius $r$ of the region, and to extract the leading scaling behaviour from
$\langle V(r)\rangle\! \sim\! r^{d_h}$. For our purposes, we will use a ``differential" version of this relation, where one monitors
the one-dimensional volumes of spherical shells around a given triangle $i_0$ or, equivalently, the number of (dual) vertices
at radial distance $r$ from a vertex $i_0$
of the lattice dual to a given LCDT configuration $T$. We define $n(r,i_0)$ as the number of triangles found at geodesic
distance $r$ from $i_0$, where geodesic distance is defined as the (integer) length of the shortest path along edges of
the dual lattice. We have $n(0,i_0)\! =\! 1$ and $n(1,i_0)\! =\! 3$ for all $i_0$, because each triangle has exactly three
neighbours and therefore the dual lattice is trivalent. Every triangle of $T$ will appear in exactly one of the shells,
implying that $\sum_r n(r,i_0)\! =\! N_2(T)$.
The identification of the shells can be implemented as a modified breadth-first search, which keeps track of
when a change of shells occurs.
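A minimal sketch of this shell count on the dual lattice (our own illustration; \texttt{neighbours} is the
hypothetical adjacency list of the trivalent dual lattice):
\begin{verbatim}
from collections import deque

def shell_volumes(neighbours, i0):
    # n(r, i0): number of triangles at dual geodesic distance r
    # from triangle i0, obtained by breadth-first search.
    dist = {i0: 0}
    counts = [1]              # n(0, i0) = 1
    queue = deque([i0])
    while queue:
        i = queue.popleft()
        for j in neighbours[i]:
            if j not in dist:
                dist[j] = dist[i] + 1
                if dist[j] == len(counts):
                    counts.append(0)  # a new shell is reached
                counts[dist[j]] += 1
                queue.append(j)
    return counts             # sum(counts) == N_2
\end{verbatim}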
Averaging over all initial triangles $i_0\in T$, we obtain the average shell volumes at radius $r$,
\begin{equation}
\label{avshell}
n(r) = \frac{1}{N_2} \sum_{i_0=1}^{N_2} n(r, i_0).
\end{equation}
In what follows, we will refer to the function $n(r)$ as the {\it shape} of a triangulation.
To extract the
Hausdorff dimension, we have applied finite-size scaling to the expectation value $\langle n(r)\rangle$ of
the shape function (\ref{avshell}).\footnote{We also tried to extract the Hausdorff dimension from the
scaling relation ${\bar r}(N_2)\sim N_2^{1/d_h}$ for the average linear extension $\bar r$ defined in
eq.\ (\ref{avlin}), but this did not yield meaningful results because of the convergence issues to be described
in more detail below.}
The simulations consisted of 48.000 sweeps each at volumes $N_2=100.000$, $200.000$,
$300.000$ and $400.000$, and were done for LCDT (at $\alpha\! =\! 1$) and, for reference and
comparison, also for CDT (corresponding
to $\alpha\! =\! 1/4$). The scaling ansatz for the radius and shell volume is
$r \rightarrow x=N_2^{-1/d_h}r$ and $\langle n(r)\rangle \rightarrow N_2^{-1 + 1/d_h} \langle n(r)\rangle $ respectively.
Individual data points $\langle n(r)\rangle$ were transformed into curves via spline interpolation and a
Levenberg-Marquardt least-square fit was used to align the shapes \cite{ruijl}.
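The idea of the fit can be conveyed by a simplified one-parameter collapse (a sketch using spline
interpolation and a bounded scalar minimization; the actual analysis of \cite{ruijl} uses a
Levenberg-Marquardt fit and restricts the fitting range as described below):
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def collapse_mismatch(d_h, shapes):
    # shapes: list of (N2, r, n) with measured shape functions <n(r)>
    splines, lo, hi = [], -np.inf, np.inf
    for N2, r, n in shapes:
        x = np.asarray(r, float) / N2**(1.0 / d_h)
        nx = np.asarray(n, float) / N2**(1.0 - 1.0 / d_h)
        splines.append(CubicSpline(x, nx))
        lo, hi = max(lo, x[0]), min(hi, x[-1])
    grid = np.linspace(lo, hi, 200)   # common rescaled x-range
    curves = np.array([s(grid) for s in splines])
    return np.sum((curves - curves.mean(axis=0)) ** 2)

# d_h giving the best collapse of the rescaled shapes:
# res = minimize_scalar(lambda d: collapse_mismatch(d, shapes),
#                       bounds=(1.5, 5.0), method="bounded")
\end{verbatim}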
\begin{figure}[htb]
\centering
\scalebox{0.65}{\includegraphics{haus_curve_fit_cdt.pdf}}
\caption{CDT quantum gravity: Fit for best overlap of the rescaled shapes $\langle n(r)\rangle /N_2^{-1+1/d_h}$
as function of the rescaled
distance $x\! =\! r/N_2^{1/d_h}$, for Hausdorff dimension $d_h\! =\! 2.2$.
The extension in time direction was set to $t_{TOT}\! =\! 80$.}
\label{haus_curve_fit_cdt}
\end{figure}
In maximizing the overlap we have taken into account all $x$-values where $\langle n(r)\rangle$
has at least half of its maximal value.
We have also measured the ``short-distance'' Hausdorff dimension for small $x$ by optimizing the overlap of
the initial rising slopes of the curves $\langle n(r)\rangle $. In principle this dimension need not coincide with the global
Hausdorff dimension $d_h$ we have been considering \cite{cdtmatter2}, but in our case there was little difference.
Our results for the best overlap of the shape functions for the CDT case are shown in Fig.\ \ref{haus_curve_fit_cdt};
they correspond to a Hausdorff dimension $d_h=2.2 \pm 0.2$, which is compatible with the known
value of 2.\footnote{For CDT in two dimensions, the Hausdorff dimension has been measured
previously \cite{cdtmatter1},
with good results, from finite-size scaling of the distribution of spatial volumes. Because of the absence
of a pre-defined time function, this method is not computationally feasible for LCDT.} Our
error bars are rather large, because the best fit depends quite sensitively on the $x$-range for which the
overlap is optimized.
\begin{figure}[htb]
\centering
\scalebox{0.65}{\includegraphics{haus_curve_fit.pdf}}
\caption{LCDT quantum gravity: Fit for best overlap of the rescaled shapes $\langle n(r)\rangle /N_2^{-1+1/d_h}$
as function of the rescaled
distance $x\! =\! r/N_2^{1/d_h}$, for Hausdorff dimension $d_h\! =\! 2.7$.}
\label{haus_curve_fit}
\end{figure}
The analogous data for LCDT are displayed in Fig.\ \ref{haus_curve_fit}. Maximal overlap is achieved for
a Hausdorff dimension $d_h\! =\! 2.71 \pm 0.2$, which is far away from our conjectured CDT value of 2,
and even further away from the DT value of 4. We conclude that LCDT very likely does not lie in the same
universality class as DT. Contrary to our expectation, equivalence of LCDT and CDT appears to be excluded too.
Instead, our measurements point towards LCDT lying in
a {\it new} universality class, not hitherto seen in quantum models of two-dimensional pure gravity.
This would be a truly interesting result, and it warrants another critical look at the strength of our evidence.
As is apparent from Figs.\ \ref{haus_curve_fit_cdt} and \ref{haus_curve_fit}, the quality of the overlaps
is not very good. Could there be systematic sources
of error that affect our results to the extent that they ultimately are {\it not} in contradiction with $d_h\! =\! 2$ for
LCDT? In other words, may we be underestimating our error bars significantly?
It may be worth recalling that it took some time to nail down the
Hausdorff dimension of two-dimensional DT quantum gravity numerically.
In the words of the authors of \cite{catt_thor}, early simulation results were
``remarkably inconclusive" (see \cite{catt_thor} for
further references). The same work also used finite-size scaling
of the shape function to determine $d_h$, with a fit quality somewhat similar to ours. Especially when using the
dual lattice -- as we are also doing in the present work -- the Hausdorff dimension extracted this way was significantly
off the mark ($d_h\! =\! 3.15$ instead of the known, correct value 4). Of course, one should keep in mind
that these simulations were
performed for a geometric ensemble different from LCDT and for moderate
lattice sizes $N_2\leq 32.000$ only. On the other hand, the causal gluing rules of LCDT introduce a local
``stiffness" in the configurations compared to DT, which is likely to require larger volumes to achieve
numerical results of comparable quality.
In the case of DT simulations, significant progress with respect to
the convergence of fits was obtained by introducing
a ``phenomenologically fudged" scaling relation for the geodesic distance \cite{dim2dnum}, namely,
\begin{equation}
\label{fudge}
x=\frac{r+a}{N_2^{1/d_h} + b},
\end{equation}
where $a$ and $b$ are two parameters meant to compensate lattice artifacts at short distances.
We have also tested relation (\ref{fudge}),
but found that nonvanishing values for $a$ and $b$ steer the CDT results even further away from 2 and
also increase the sensitivity to the choice of fitting region.
A generic feature of two-dimensional quantum gravity illustrated by the numerical difficulties
already mentioned is the fact that in two dimensions quantum fluctuations are always large, even for large
lattice volumes. This is different from CDT in higher dimensions, say, where the dynamics is
governed by two scales:
one macroscopic, related to the overall size of the universe, and another one microscopic, setting the
scale of quantum fluctuations. In two dimensions, there are no nontrivial classical solutions, and
there is only a single scale, that of the quantum fluctuations.
In this situation, it is therefore natural for finite-size effects to generically be large,
especially when there are non-contractible
directions along which space can become ``small", as can happen for the torus topology used for
LCDT.\footnote{Note that the DT simulations mentioned above use
the topology of a two-sphere. This also indicates the need to go to larger volumes in the LCDT case.}
This is certainly relevant when measuring the Hausdorff dimension; when the geodesic balls
centred at some triangle $i_0$ start wrapping around one of the torus directions, the interpretation of
the scaling relation by which we extract $d_h$ will be affected, in the sense that only triangles not visited
previously will be counted as belonging to a given radial shell. Of course, one can keep extracting
the Hausdorff dimension regardless, but should be aware that it contains also global, topological
information.
\begin{figure}
\centering
\scalebox{0.7}{\includegraphics{haus_ext_spikes_50k.pdf}}
\caption{Development in Monte Carlo time $t$ of the average linear extension $\bar{r}$
of a LCDT configuration $T$ of volume $N_2\! =\! 50.000$. Isolated peaks in $\bar{r}$ keep
occurring, even as the number of sweeps becomes very large. (Note that the y-axis has an offset of 70.)}
\label{finsize_haus}
\end{figure}
To get further insights into the origin of the relatively poor quality of our fits,
we have measured yet another observable, the average linear extension \cite{reconstruct}
\begin{equation}
\label{avlin}
\bar{r} = \frac{1}{N_2} \sum_r r \cdot n(r)
\end{equation}
of a given triangulation $T$ of discrete volume $N_2$, which is just the weighted average of
the geodesic distance $r$. As is described in more detail in the Appendix, the observable $\bar{r}$
has convergence issues, which appear to persist even on large lattices and after a large number
of sweeps. What seems to happen to the geometrical configurations is that most
of the time they are approximately ``square-shaped",
with comparable linear extensions for either torus direction, but ever so
often make an excursion to an overall shape that is elongated, where one torus direction becomes
longer than the other one, with $\bar{r}$ increasing as a result.
After a relative maximum of the two lengths has been reached, the system gradually
reverts back to being square-shaped and stays there for a while before another excursion takes
place (see Fig.\ \ref{finsize_haus} for illustration).
Of course, square and elongated configurations (for identical volume $N_2$) not only have different
average extensions $\bar{r}$, but also different shape functions $n(r)$ (see Appendix)
and therefore in general different Hausdorff dimensions. A likely explanation for our inaccurate
determination of $d_h$ is therefore the failure of the shape to stabilize during the course of the
simulation, and the finite-size effects associated specifically with elongated shapes, in addition to the
already mentioned large magnitude of the quantum fluctuations overall. This is supported by a numerical experiment we
have performed in pure CDT quantum gravity at a volume $N_2\!=\! 9.000$. For $t_{TOT}\! =\! 80$
time slices, the behaviour was ``square-like", in the sense that the
Monte Carlo history of the average extension $\bar{r}$ did not have any peaks. However, when
we shortened the time extension to $t_{TOT}\! = \! 20$, peaks similar to those depicted in Fig.\ \ref{finsize_haus}
appeared.
\begin{figure}[htb]
\vspace{0.5cm}
\centering
\begin{tikzpicture}[scale=0.75]
\pgfmathsetmacro{\s}{sqrt(3)}
\draw[blue] (0,0) -- (\s,1);
\draw[blue] (0,0) -- (\s,-1);
\draw[red] (\s,1) -- (\s,-1);
\draw[blue] (\s,1) -- (4*\s ,1);
\draw[blue] (\s,-1) -- (4*\s,-1);
\draw[red] (\s,1) -- (2*\s,-1);
\draw[red] (2*\s,-1) -- (2*\s,1);
\draw[red] (2*\s,-1) -- (3*\s,1);
\draw[red] (3*\s,-1) -- (3*\s,1);
\draw[red] (3*\s,1) -- (4*\s,-1);
\draw[red] (4*\s,1) -- (4*\s,-1);
\draw[blue] (4*\s,1) -- (5*\s,0);
\draw[blue] (4*\s,-1) -- (5*\s,0);
\end{tikzpicture}
\vspace{0.3cm}
\caption{A bubble has a $sst$-triangle at either end and arbitrarily many $stt$-triangles in between.}
\label{bubble}
\end{figure}
Lastly, in our search for ways to improve the convergence behaviour of LCDT quantum gravity, we
investigated what happens when self-overlapping bubbles are not allowed to occur. We mentioned these
structures briefly in Sec.\ \ref{invest:sec} above. A bubble is a contractible loop of spacelike links, which
in its interior is decorated by timelike links only (Fig.\ \ref{bubble}). It always has two $sst$-triangles at its end points and
consists of $stt$-triangles otherwise. When a bubble winds around a compact torus direction, it can touch
itself again (``self-overlap") along one or more spacelike edges of its boundary (see Fig.\ \ref{sob} for a simple
example). Self-overlapping bubbles are geometrically significant, because -- depending on their interior
geometry -- they can give rise to timelike cycles as defined at the beginning of Sec.\ \ref{ctc-sec}. Their
appearance is not forbidden by local vertex causality.
\begin{figure}[htb]
\vspace{0.5cm}
\centering
\begin{tikzpicture}[scale=0.75]
\pgfmathsetmacro{\s}{sqrt(3)}
\draw[red] (\s,1) -- node[black,left] {$a$} (\s,-1);
\draw[blue] (\s,1) -- (4*\s ,1);
\draw[blue] (\s,-1) -- (4*\s,-1);
\draw[red] (\s,1) -- (2*\s,-1);
\draw[red] (2*\s,-1) -- (2*\s,1);
\draw[blue] (2*\s,-1) -- (3*\s,1);
\draw[red] (3*\s,-1) -- (3*\s,1);
\draw[red] (3*\s,1) -- (4*\s,-1);
\draw[red] (4*\s,1) -- node[black,right] {$a$} (4*\s,-1);
\end{tikzpicture}
\vspace{0.3cm}
\caption{A self-overlapping bubble; the links with label $a$ are to be identified.}
\label{sob}
\end{figure}
Relevant to our present discussion is the fact that
globally self-overlapping bubbles cause severe thermalization issues in $2+1$ dimensions, and
therefore were removed from the ensemble \cite{jordanthesis,jordanloll}. This motivated us to
remove self-overlapping bubbles from the LCDT ensemble in $1+1$ dimensions too, and to check
whether it makes a difference to the measurement of the Hausdorff dimension.
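Conceptually, the check amounts to a single walk around the bubble's boundary: the bubble
self-overlaps precisely when some spacelike link is visited twice. A schematic Python sketch follows,
where the representation of the boundary as a list of integer link labels is hypothetical (see
\cite{ruijl} for the actual implementation):
\begin{verbatim}
def is_self_overlapping(boundary_links):
    """A bubble self-overlaps if the walk around its boundary
    visits some spacelike link more than once."""
    seen = set()
    for link in boundary_links:   # link: an integer edge label
        if link in seen:
            return True
        seen.add(link)
    return False
\end{verbatim}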
\begin{figure}[htb]
\centering
\scalebox{0.6}{\includegraphics{haus_curve_fit_iso_bubblefilter.pdf}}
\caption{LCDT quantum gravity without self-overlapping bubbles: Fit for best overlap of the rescaled shapes
$\langle n(r)\rangle /N_2^{-1+1/d_h}$
as function of the rescaled
distance $x\! =\! r/N_2^{1/d_h}$, for Hausdorff dimension $d_h\! =\! 3.1$.}
\label{haus_curve_fit_iso_bubblefilter}
\end{figure}
Detecting whether a self-overlapping bubble is created during the Monte Carlo simulation (and
discarding the corresponding move) is nontrivial, since the property is nonlocal and requires a computationally
expensive walk around the lattice (see \cite{ruijl} for details on implementation). For this reason we
performed the numerical analysis on slightly smaller lattices of volume $N_2\!\leq\! 60.000$.
Measurement of the average linear extension $\bar{r}$ in this setting,
at $N_2\! =\! 50.000$, still revealed a peak structure similar to that of standard LCDT {\it with} self-overlapping bubbles,
providing evidence that this structure is not responsible for the observed instability.
Proceeding as before to determine the Hausdorff dimension for this system, via finite-size scaling
to maximize the overlap in shape (see Fig.\ \ref{haus_curve_fit_iso_bubblefilter}),
yielded a Hausdorff dimension of $d_h\! =\! 3.1 \pm 0.2$.
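Schematically, the best-overlap fit underlying Fig.\ \ref{haus_curve_fit_iso_bubblefilter} can be
organized as in the simplified Python sketch below; the interpolation grid and the scan range for
$d_h$ are placeholder choices:
\begin{verbatim}
import numpy as np

def collapse_mismatch(d_h, shapes):
    """shapes: dict volume N2 -> (r, n) measured shape data.
    Rescale each curve by x = r / N2**(1/d_h) and
    y = n / N2**(-1 + 1/d_h), then measure how well the
    rescaled curves overlap on a common x grid."""
    xs = np.linspace(0.05, 2.0, 100)
    curves = []
    for N2, (r, n) in shapes.items():
        x = r / N2 ** (1.0 / d_h)
        y = n / N2 ** (-1.0 + 1.0 / d_h)
        curves.append(np.interp(xs, x, y))
    return np.array(curves).std(axis=0).mean()  # smaller = better

# Scan d_h and keep the value with the best collapse:
# best = min(np.arange(2.0, 4.0, 0.01),
#            key=lambda d: collapse_mismatch(d, shapes))
\end{verbatim}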
To summarize, we have pinpointed
an instability of the system with regard to its global behaviour, due to occasional excursions to a globally
elongated state, which can be observed by monitoring the geometry's average linear extension $\bar{r}$.
This is the likely source of the suboptimal data quality for the measurement of the Hausdorff dimension $d_h$.
By considering a more general fitting function for extracting $d_h$ and by using a modified ensemble without
self-overlapping bubbles we have found no hints of additional sources of error or a shift
of the Hausdorff dimension toward the CDT value of 2.
\section{Conclusions and outlook}
\label{concl:sec}
The aim of our work was to measure observables in locally causal dynamical triangulations in two dimensions,
most importantly, the spectral and the Hausdorff dimensions, and thereby understand the relation
of LCDT to other models of two-dimensional quantum gravity based on dynamical triangulations.
Our initial hypothesis was that LCDT lies in the same universality class as CDT, where both
spectral and Hausdorff dimension are equal to 2. While our measurement of LCDT's spectral dimension
did yield a value compatible with 2, with only a small error margin, this was not true for the
Hausdorff dimension. Although the error bars were significantly larger -- due to an instability
in the system that persisted even at the largest volumes -- our measurements found
a Hausdorff dimension of $d_h\! =\! 2.7\pm 0.2$ and $d_h\! =\! 3.1\pm 0.2$ for two slightly different variants of LCDT.
On the basis of our simulations, it appears that LCDT is not equivalent to either DT or CDT in the continuum
limit.
This would be an interesting result, because it implies the existence of a new universality class of
two-dimensional quantum gravity in between Euclidean DT and Lorentzian CDT in two dimensions.
The ``in between" could be true quite literally, since within our measuring accuracy the Hausdorff
dimension of LCDT is compatible with 3. A Hausdorff dimension
$d_h\! =\! 3$ has been observed previously, in simulations of CDT quantum gravity in 1+1 dimensions
coupled to eight copies of Ising spins \cite{cdtmatter2} and coupled to several massless scalar fields \cite{cdtscalar},
adding some plausibility to the possibility that
a universality class with this property may actually exist.
Further confirmation of the appearance of this new phase would come from locating
a phase transition between CDT (corresponding to $\alpha\! =\! 1/4$, at least for fixed volume)
and LCDT as a function of the parameter $\alpha$ in our model. Having already invested
considerable computing resources into the isotropic case $\alpha\! =\! 1$ in the present work,
we leave this investigation to a future publication. --
Needless to say, it would be extremely interesting to
find an analytic solution of the LCDT model, to put our findings on a more definite footing.
\subsection*{Acknowledgments}
We thank J.\ Ambj\o rn for discussion. -- The contribution of RL is part of the
research programme of the Foundation for Fundamental Research
on Matter (FOM), financially supported by the Netherlands
Organisation for Scientific Research (NWO).
\vspace{0.3cm}
\section*{Appendix}
In this appendix, we give some more details about the instability we have observed in the LCDT system,
which affects at least one observable, the average linear extension $\bar{r}$ of the universe defined
in (\ref{avlin}), and contributes to the rather poor overlaps we have found in our finite-size scaling analysis
to determine the Hausdorff dimension. Unlike other observables, which typically converge after about
300 sweeps, $\bar{r}$ does not, even for the very large system size $N_2\! =\! 400.000$ and after several
thousand sweeps. To understand better what happens geometrically, we have plotted the shape
$n(r)$ of a typical configuration along the meta-stable ``bottom" of the Monte Carlo history of $\bar{r}$
shown in Fig.\ \ref{finsize_haus}, and of a configuration at one of the peaks.
\begin{figure}[h]
\centering
\scalebox{0.7}{\includegraphics{haus_avg_max.pdf}}
\caption{Average shape $n(r)$ (top) and shape at peaks in $\bar{r}$ (bottom) at $N_2\!=\! 9.000$, the latter exhibiting a long plateau.}
\label{finsize_haus_3}
\end{figure}
As illustrated by
Fig.\ \ref{finsize_haus_3}, the two are very different. Outside the peaks in $\bar{r}$, the shape of a configuration starts out
with an almost linear increase until it reaches a single maximum, and then quickly drops to zero.
A configuration from a peak in $\bar{r}$ also increases linearly until a first maximum, but then
enters a long plateau before also going to zero.
These two different shape functions are characteristic of a torus which is approximately
``square-shaped" (i.e. of a similar extension
in either of the torus directions) and one which is elongated. This fact is illustrated by comparing the
measured shapes with those of regular, flat tori\footnote{For simplicity, we are considering only
tori which are obtained from gluing flat rectangles without any ``twists".} in the continuum (Fig.\ \ref{toruspeaks}).
Despite the totally
different set-up (single, classical torus without local curvature and without quantum fluctuations),
there is a clear qualitative resemblance with the shapes extracted from the full quantum simulation.
Note that we have not attempted a proper translation between discrete and continuum units of length
and volume, which would be necessary for a quantitative comparison.
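The continuum curves of Fig.\ \ref{toruspeaks} are easy to reproduce by elementary sampling: on a
flat $L_1\times L_2$ torus, the distance to a marked point is obtained from the shortest way around
each direction, and a histogram of these distances is proportional to the shape $n(r)$. A short
Python sketch, with arbitrary sample size and binning:
\begin{verbatim}
import numpy as np

def torus_shape(L1, L2, n_samples=10**6, bins=200):
    """Histogram of geodesic distances from a marked point on a
    flat L1 x L2 torus, proportional to the shape function n(r)."""
    rng = np.random.default_rng(1)
    dx = rng.uniform(0.0, L1, n_samples)
    dy = rng.uniform(0.0, L2, n_samples)
    dx = np.minimum(dx, L1 - dx)   # shortest way around
    dy = np.minimum(dy, L2 - dy)
    hist, edges = np.histogram(np.hypot(dx, dy), bins=bins,
                               density=True)
    return 0.5 * (edges[1:] + edges[:-1]), hist

# Square vs. elongated torus, as in the figure:
# r_sq, n_sq = torus_shape(50.0, 50.0)
# r_el, n_el = torus_shape(30.0, 180.0)
\end{verbatim}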
\begin{figure}[htb]
\centering
\scalebox{0.7}{\includegraphics{toruspeaks.pdf}}
\caption{Shape of a flat, classical torus in the continuum: square-shaped of length 50 in either direction (top) and elongated
with extension 30 and 180 in the two directions (bottom).}
\label{toruspeaks}
\end{figure}
\section{Appendix}
We show the details of our evaluation metrics in Section \ref{setting} and of the comparison methods in Section \ref{compare}.
More examples of phrase annotations are shown in Table \ref{examples}.
\begin{table*}[h!] \label{examples}
\centering
\renewcommand{\arraystretch}{1.}
\renewcommand{\tabcolsep}{2.pt}
\caption{More examples of phrase descriptions of affordances. \textbf{PA}, \textbf{F}, \textbf{AF}, \textbf{E} denote \textbf{P}otential \textbf{A}ctions, \textbf{F}unction, \textbf{A}ppearance \textbf{F}eature and \textbf{E}nvironment, respectively. Note that we only show the original form of the corresponding verbs.}
\label{Affordance Description table}
\begin{tabular}{c||c|c}
\Xhline{2.\arrayrulewidth}
\hline
\textbf{Affordance Class} & \textbf{Object Class} & \textbf{Phrase Description Examples} \\
\hline
\Xhline{2.\arrayrulewidth}
\rowcolor{mygray}
\textbf{\normalsize{Kick}} & \multicolumn{1}{m{3.7cm}|}{\small{soccer ball, punching bag}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: move fast, thrash out or strike, make a motion with feet or fist toward an object, strike out with feet, punt, physical strike, ...
\qquad
\textbf{E}: outdoor activities (soccer ball) } \\
\textbf{\normalsize{Sit}} & \multicolumn{1}{m{3.7cm}|}{\small{bench, sofa, stool, wheelchair}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: sit, sit down, seat, lounge, recline, be seated, sit in, lean back, lean over, lean against, ...
\quad
\textbf{F}: rest, take a rest, sleep, nap, take a break, have a rest, give feet a rest...
} \\
\rowcolor{mygray}
\textbf{\normalsize{Throw}} & \multicolumn{1}{m{3.7cm}|}{\small{frisbee, rugby ball}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}:
throw, deliver, pass, toss, toss using hands, throw away, throw forcefully, cast, ...
\quad
\textbf{E}: outdoor, out-of-doors,
} \\
\textbf{\normalsize{Shelter}} & \multicolumn{1}{m{3.7cm}|}{\small{umbrella}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: shelter, raise, lift, move up, carry, take, grip handle, take, ... \quad
\textbf{F}: cover for, protect, shade, shield \quad
\textbf{E}: in the sun, in the rain, outdoor, \quad
\textbf{AF}: circular cover
} \\
\rowcolor{mygray}
\textbf{\normalsize{Beat}} & \multicolumn{1}{m{3.7cm}|}{\small{drum}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: beat, strike, hit, strike rapidly, hit in rhythm, pulse, beat in rhythm, clout, punch, pound, ... \quad
\textbf{F}: play, sound, create sound, make sound, produce sound
}\\
\textbf{\normalsize{Hit}} & \multicolumn{1}{m{3.7cm}|}{\small{axe, hammer}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: hit, deliver an impulsive force by striking, strike, can be lifted \quad
\textbf{F}: hit, chop, split, cut, cleave \quad
\textbf{AF}: sharp blade, knife-edged \quad
\textbf{E}: usually appears along with wood
} \\
\rowcolor{mygray}
\textbf{\normalsize{Cut}} & \multicolumn{1}{m{3.7cm}|}{\small{knife, scissors}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: cut, hold, use, sharpen, grasp, raise, slash, pull into, hold the handle, ... \quad
\textbf{F}: separate, slice, chop, divide, part, trim, ... \quad
\textbf{AF}: sharp edge, usually made of metal \quad
} \\
\textbf{\normalsize{Lie}} & \multicolumn{1}{m{3.7cm}|}{\small{baby bed, bench, sofa}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: lie, lie down, sit down, recline or lay down, lean back, lean over, be recumbent, sit back, lie on the side, prostrate, lean, ... \quad
\textbf{F}: take a break, sleep, rest, repose, ... \quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Lift}} & \multicolumn{1}{m{3.7cm}|}{\small{dumbbell}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: lift, lift up, raise, grab, put down, pick up, take down, push, hold up, uplift, cause to raise, hold high, \quad
\textbf{F}: exercise, used for exercise of muscle-building \quad
\textbf{E}: indoor exercise
} \\
\textbf{\normalsize{Pick up}} & \multicolumn{1}{m{3.7cm}|}{\small{chopsticks}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: take and lift upward, hold, grasp, move up and down, hold and lift \quad
\textbf{F}: pass food, kitchen utensil \quad
\textbf{E}: usually appears in kitchen or dining table\quad
\textbf{AF}: usually are made of wood\quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Rolling}} & \multicolumn{1}{m{3.7cm}|}{\small{baseball, croquet ball, golf ball, table tennis ball, tennis ball}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: rolling, move, can roll, move by rotating, roll over, rotate rapidly, turn round and round, rotate, move fast, spin, whirl, move around an axis or a center, cycle, revolve, change orientation or direction, twirl revolve \quad
\textbf{AF}: spherical
} \\
\textbf{\normalsize{Mix}} & \multicolumn{1}{m{3.7cm}|}{\small{chopsticks, spoon, whisk}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: mix, blend, mix together, fuse, grasp, hold, merge, move circularly, move around, agitate, ... \quad
\textbf{F}: kitchen tools \quad
\textbf{E}: usually appears in kitchen or dining table,
} \\
\rowcolor{mygray}
\textbf{\normalsize{Jump}} & \multicolumn{1}{m{3.7cm}|}{\small{skateboard, skis, snowboard, surfboard}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: jump, turn at high speed, move forward, move fast, travel fast, perform a leap, accelerate, make a turn, speed, turn left, turn right, make a turn, speed up, ... \quad
\textbf{E}: outdoor activities.
} \\
\textbf{\normalsize{Fork}} & \multicolumn{1}{m{3.7cm}|}{\small{fork}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: fork, fork up, move up and down, hold handle \quad
\textbf{F}: pass food, pick up food, used for cook, lift food\quad
\textbf{E}: appears in kitchen or dining table, used with knife\quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Scoop}} & \multicolumn{1}{m{3.7cm}|}{\small{spatula, spoon}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: scoop, scoop out, scoop up, take up, ladle out, hold the handle, grasp the handle, lade, take out or up \quad
\textbf{E}: appears in the kitchen or the dining table \quad
\textbf{AF}: concave shape
} \\
\textbf{\normalsize{Swing}} & \multicolumn{1}{m{3.7cm}|}{\small{baseball bat, table tennis bat, tennis racket}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: swing, change location by moving back and forth, change direction, cause to move around, swing back, swing forward, swing back and forth, try to hit something \quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Take photo}} & \multicolumn{1}{m{3.7cm}|}{\small{camera}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: shoot, take a shot of, target to, adjust, put in front of eyes, aim at, raise up to eyes, bring up to eyes, snap, keep \quad
\textbf{F}: take a photo of, get pictures of, capture in a photo
} \\
\textbf{\normalsize{Bounce}} & \multicolumn{1}{m{3.7cm}|}{\small{basketball}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: bounce, spring back, move up and down, rebound, bounce back, move quickly back and forth, pass, bounce against, ... \quad
\textbf{AF}: bouncy, spherical, rubber or synthetic material, ... \quad
\textbf{E}: usually in door, team sport
} \\
\rowcolor{mygray}
\textbf{\normalsize{Contain-1}} & \multicolumn{1}{m{3.7cm}|}{\small{backpack, gift box, handbag, purse, suitcase}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: contain, take, hold, have within, pack, pack into, place within, hold in, fill up, load up, make full \quad
\textbf{F}: hold household items, hold inside, store, be capable of holding
} \\
\textbf{\normalsize{Contain-2}} & \multicolumn{1}{m{3.7cm}|}{\small{beaker, beer bottle, bowl, cup or mug, milk can, pitcher, soap dispenser, vase, watering can}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: contain, pour, hold, pour in, pour out, decant, flow, store, keep, hold in, carry, bear, have within, include, take, pour off, hold in hands, dribble, spill, ... \quad
\textbf{AF}: depression in the middle, open-top container, contain liquid, liquid container, ...
} \\
\rowcolor{mygray}
\textbf{\normalsize{Contain-3}} & \multicolumn{1}{m{3.7cm}|}{\small{bowl, frying pan}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: contain, store, hold in both hands up\quad
\textbf{F}: prepare for food, hold and store food \quad
\textbf{AF}: the center is depressed, depression in the middle \quad
\textbf{E}: usually appears in kitchen or dining table
} \\
\textbf{\normalsize{Play-1}} & \multicolumn{1}{m{3.7cm}|}{\small{cello, erhu fiddle, viola, violin}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: play, bow, fiddle, chord, press strings, squeeze the bow, move bow across strings, grip the bow, ... \quad
\textbf{F}: make sound, make music, produce sound, stringed instruments, ...
}\\
\rowcolor{mygray}
\textbf{\normalsize{Play-2}} & \multicolumn{1}{m{3.7cm}|}{\small{banjo, guitar, harp, pipa}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: play, carry, move fingers up and down, pluck fingers, press the string, perform, pull slightly but sharply, ... \quad
\textbf{F}: make sound, make music, produce sound, stringed musical instrument, ...
} \\
\textbf{\normalsize{Play-3}} & \multicolumn{1}{m{3.7cm}|}{\small{accordion, piano}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: play, tune, press keys, move fingers, touch, manipulate, squeeze, ... \quad
\textbf{F}: make sound, produce music, make music, ...
} \\
\rowcolor{mygray}
\textbf{\normalsize{Play-4}} & \multicolumn{1}{m{3.7cm}|}{\small{flute, frenchhorn, harmonica, trumpet}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: play, tune, hold, blow air into the instrument, raise to lip, perform, push aside mouth, lift to lip, carry, blow through mouth, carry, wind, ... \quad
\textbf{F}: make sound, make music, produce sound
} \\
\textbf{\normalsize{Ride}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{bicycle, motorbike}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: ride, push down with foot, pedal, turn left, move rapidly, pull, control motion, slow down, stop, ... \quad
\textbf{F}: travel, change location, travel fast, ... \quad
\textbf{E}: outdoor
} \\
\rowcolor{mygray}
\textbf{\normalsize{Brush}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{toothbrush}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: brush, grasp the handle, hold handle, touch lightly and briefly, ... \quad
\textbf{F}: clean, sweep, rub, sweep across or over, wash, clean tooth, ... \quad
\textbf{AF}: head attached to a handle, a head of tightly clustered bristles, ... \quad
\textbf{E}: often appears beside a sink within the kitchen or bathroom, ...
} \\
\textbf{\normalsize{Roll dough}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{rolling pin}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: roll, press, roll the rod across the dough, grasp the handle, shape, shape by rolling, squeeze, exert a force with a heavy weight, ... \quad
\textbf{AF}: cylindrical, ... \quad
\textbf{E}: appear in the kitchen, ... \quad
\textbf{F}: food preparation utensil, kitchen stuff, ... \quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Wear-1}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{hat, helmet}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: wear, put on, take off, dress, be dressed in, be clothed in, carry, get dressed, hold, keep, raise, cover, have on, ... \quad
\textbf{F}: decorate, protect against, shelter from the sun, head covering, have on, used for warmth, ...
} \\
\textbf{\normalsize{Wear-2}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{glasses}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: wear, wear on face, take off, put off, put on, raise, get, ... \quad
\textbf{AF}: two pieces of glasses, \quad
\textbf{F}: improve vision, protect eyes, used for decoration, ...
} \\
\rowcolor{mygray}
\textbf{\normalsize{Look Out}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{binoculars}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: look out, adjust, hold in hands, target to, focus, look at, set the focus, put in front of eyes, aim at, zoom, bring up to eyes, turn the focus wheel, align with view, adjust, ... \quad
\textbf{F}: see clearly \quad
\textbf{E}: outdoor \quad
\textbf{AF}: two lens, two telescopes mounted side by side
} \\
\hline
\Xhline{2.\arrayrulewidth}
\end{tabular}
\end{table*}
\subsection{Benchmark Setting} \label{setting}
We choose five broadly used metrics to comprehensively evaluate the performance of different methods, \emph{i.e.}, Intersection over Union (IoU), F-measure ($F_{\beta}$), E-measure ($E_{\phi}$), Pearson's Correlation Coefficient (CC), and Mean Absolute Error (MAE). We introduce them briefly as follows:
\begin{itemize}[leftmargin=*]
\item
\textbf{Intersection over Union (IoU) \cite{long2015fully}}: IoU is a common pixel-level evaluation metric that measures the overlap between the predicted mask and the ground-truth mask. It is defined as the ratio of the area of overlap to the area of union.
\item
\textbf{F-measure ($F_{\beta}$) \cite{arbelaez2010contour}}: $F_{\beta}$ is a widely used metric which simultaneously considers both recall $R$ and precision $P$, where $P$ is the number of true positive results divided by the number of all positive results and $R$ is the number of true positive results divided by the number of all samples that should have been identified as positive.
\item
\textbf{E-measure ($E_{\phi}$) \cite{fan2018enhanced}}: $E_{\phi}$ is a measurement which jointly utilizes local and global information to evaluate the difference between the ground-truth and predicted mask.
\item
\textbf{Pearson's Correlation Coefficient (CC) \cite{le2007predicting}}: CC is broadly applied to measure the linear correlation between two variables. In this paper, we employ CC to measure the relevance of the predicted map and the ground truth.
\item
\textbf{Mean Absolute Error (MAE) \cite{perazzi2012saliency}}: MAE measures the average over the absolute differences of the normalized predicted map and the ground-truth mask.
\end{itemize}
The evaluation code can be found at \url{https://github.com/lhc1224/OSAD_Net/tree/main/PyMetrics}.
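For concreteness, the pixel-level metrics IoU, MAE, and CC reduce to a few lines of Python; a
simplified sketch follows, where the thresholds are conventional choices and $F_{\beta}$ and
$E_{\phi}$ follow their cited definitions:
\begin{verbatim}
import numpy as np

def iou(pred, gt, thr=0.5):
    """Intersection over Union of a thresholded prediction."""
    p, g = pred >= thr, gt >= 0.5
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / max(union, 1)

def mae(pred, gt):
    """Mean absolute error of the normalized prediction map."""
    return np.abs(pred.astype(float) - gt.astype(float)).mean()

def cc(pred, gt):
    """Pearson's correlation coefficient between the two maps."""
    return np.corrcoef(pred.ravel(), gt.ravel())[0, 1]
\end{verbatim}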
\subsection{Comparison Methods} \label{compare}
To illustrate the superiority of our model, we compare several different kinds of methods, which involve \textbf{two} \textcolor[rgb]{0.8,0.0,0.3}{\textbf{Salient Detection} models (BASNet, CPD)}, \textbf{two} \textcolor[rgb]{0.99,0.5,0.0}{\textbf{Affordance Detection} models (OSAD-Net, OAFFD)}, \textbf{two} \textcolor[rgb]{0.4,0.0,0.99}{\textbf{Semantic Segmentation} models (PSPNet, DeepLabV3+)}, and \textbf{three} \textcolor[rgb]{0.1,0.8,0.1}{\textbf{Referring Segmentation} models (CMSA, BRINet, CMPC)}.
\begin{itemize}[leftmargin=*]
\item
\textcolor[rgb]{0.8,0.0,0.3}{\textbf{BASNet}} \cite{Qin_2019_CVPR}: \textbf{B}oundary-\textbf{A}ware \textbf{S}egmentation \textbf{N}etwork consists of a predict-refine architecture and a hybrid loss. The predict-refine architecture consists of an encoder-decoder network and a refinement module to predict and refine the segmentation probability map, respectively.
\item
\textcolor[rgb]{0.8,0.0,0.3}{\textbf{CPD}} \cite{Wu_2019_CVPR}: the \textbf{C}ascaded \textbf{P}artial \textbf{D}ecoder (CPD) framework leverages a partial decoder to discard large-resolution features in shallower layers and integrates features of deeper layers to generate a more precise saliency map.
\item
\textcolor[rgb]{0.99,0.5,0.0}{\textbf{OSAD-Net}} \cite{Oneluo}: \textbf{O}ne \textbf{S}hot \textbf{A}ffordance \textbf{D}etection \textbf{N}etwork first learns the intentions of the human actions and then transfers it to query images to segment objects with the same affordance by collaborative learning.
\item
\textcolor[rgb]{0.99,0.5,0.0}{\textbf{OAFFD}} \cite{zhao2020object}: OAFFD-Net mainly combines CoordConv and ASPP to refine the feature maps, and designs a relationship-aware module to explore the relationships between objects and affordance.
\item
\textcolor[rgb]{0.4,0.0,0.99}{\textbf{PSPNet}} \cite{zhao2017pspnet}: \textbf{P}yramid \textbf{S}cene \textbf{P}arsing \textbf{Net}work utilizes a pyramid parsing module to exploit global context information. Thus the local and global clues are used together to improve the performance in semantic segmentation task.
\item
\textcolor[rgb]{0.4,0.0,0.99}{\textbf{DeepLabV3+}} \cite{chen2017rethinking}: \textbf{DeepLabV3+} applies the depthwise separable convolution to an \textbf{A}trous \textbf{S}patial \textbf{P}yramid \textbf{P}ooling (ASPP) model to encode multi-scale context information at multiple filter rates and multiple fields-of-view.
\item
\textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMSA}} \cite{ye2021referring}: \textbf{C}ross-\textbf{M}odal \textbf{S}elf-\textbf{A}ttention module is able to adaptively focus on the important words in the given language expression and region in the corresponding image by utilizing self-attention mechanism.
\item
\textcolor[rgb]{0.1,0.8,0.1}{\textbf{BRINet}} \cite{hu2020bi}: \textbf{B}i-directional \textbf{R}elationship \textbf{I}nferring \textbf{N}etwork designs two kinds of attention mechanism from vision to language and language to vision to learn the bi-directional relationship between language and visual modalities.
\item
\textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMPC}} \cite{liu2021cross}: the \textbf{C}ross-\textbf{M}odal \textbf{P}rogressive \textbf{C}omprehension scheme first perceives all related entities utilizing entity and attribute words, while the remaining relational words are adopted to highlight the target entities by spatial graph reasoning.
\end{itemize}
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{T}{he} term ``Affordance'' is used to describe the interactions between humans, animals, and their environment. In other words, affordance implies the complementarity between the animal and the environment \cite{gibson1977theory}. Affordance is regarded as an inherent property of an object, independent of the user's characteristics, so investigating the affordance of objects enables agents to interact with their environments better. Computer vision techniques therefore not only need the ability to detect the content of a scene but also the capability to infer possible interactions between humans, animals, and the corresponding environments \cite{gibson2014ecological}. Recently, affordance has drawn remarkable attention and has been widely explored in various application fields. For instance, the theory of affordance is applied to design more intelligent and more robust robotic systems for complex and dynamic environments \cite{horton2012affordances}. As a result, perceiving the affordance of objects has a broad range of applications in fields such as action recognition \cite{qi2017predicting,earley1970efficient,li2021tri}, scene parsing \cite{bagautdinov2017social, fang2018demo2vec,zhu2020self} and robot grasping \cite{yamanobe2017brief,shiraki2014modeling}. \par
\begin{figure}[t]
\centering
\begin{overpic}[width=1.\linewidth]{figure/fig_1.pdf}
\end{overpic}
\caption{\textbf{Illustration of perceiving affordance.} Given a set of phrases that describe an affordance, the corresponding objects can be detected. In the first row, the phrases in blue and red indicate the affordances ``\textit{contain}'' and ``\textit{cut}'', and the corresponding objects ``\textit{beer bottle}'' and ``\textit{knife}'' with these affordances are segmented in blue and red, respectively. In the second row, the phrases in blue and red indicate the affordances ``\textit{contain}'' and ``\textit{scoop}'', and the objects ``\textit{cup}'', ``\textit{bowl}'' and ``\textit{spoon}'' are highlighted in blue and red, respectively.}
\label{FIG:fig_1}
\end{figure}
Previous work mainly addressed applications of affordance in the vision-only field. Some works \cite{sawatzky2017adaptive,zhao2020object,Oneluo} construct mapping relationships between object representations and affordance categories. However, affordance is closely related to the environment and the actor, which limits such models' generalization capability in new unseen scenes and leads to incorrect perception and localization. To address this problem, other works perceive affordance-related objects by mining human-object interaction cues from videos or images \cite{Oneluo,kjellstrom2011visual,fang2018demo2vec,zhai2021one,luo2021learning} and transferring them to target images, which enables models to better cope with the dynamic changes of affordance and to retain good generalization ability in new unseen environments. Unlike the above work, this paper explores the potential of natural language in affordance detection tasks from a multimodal perspective. We consider utilizing a set of phrases to describe the affordances of an object and then generating segmentation masks from the corresponding images. This process is also consistent with application scenarios where real intelligent agents receive information in multiple modalities from different sources to jointly perceive
the affordances of objects. \par
\begin{figure*}[t]
\centering
\begin{overpic}[width=1.\linewidth]{figure/cyclic.pdf}
\put(-1,15){\textbf{(b)}}
\put(-1,37.7){\textbf{(a)}}
\end{overpic}
\caption{\textbf{Task and method differences between the affordance-related vision-language task and traditional ones.} \textbf{(a)} shows the problems caused by the multiplicity property of affordance. In traditional V-L tasks, the appearances of objects with the same language descriptions are generally similar, while the differences are significant in the affordance-related V-L task. For the images on the left, objects referred to by the same entity phrases are similar in color, shape, and texture; nevertheless, the opposite is true for the images on the right. In \textbf{(b)}, we compare our method with conventional methods. In the traditional setting, vision features are close enough in distance in feature space, leading to easier alignment. However, for the affordance-related V-L task, vision and language features are cluttered in feature space. We design a cyclic and bilateral mechanism to cope with these problems by enhancing inter-modal semantic consistency step by step. (See Section \ref{method} for details.)}
\label{FIG:cyclic}
\end{figure*}
Therefore, we propose a phrase-based affordance detection task in this paper. That is, given a set of textual phrases describing affordances and an image, the objects that can afford the corresponding affordance are expected to be segmented precisely (as shown in Fig. \ref{FIG:fig_1}). We choose natural language phrases that describe affordances without naming specific object categories, which suits practical application scenarios. In practice, humans often communicate with each other using incomplete sentences and leave out common-sense information that does not need to be explicitly pointed out. When a human interacts with an agent, the given instructions are likely also incomplete \cite{chen2020enabling}. For example, a human may ask the agent to ``\textit{pour me some water}'' without explicitly indicating whether to use a cup or a bowl to pour the water. This shows that learning common-sense knowledge such as affordance will lead to more intelligent agents. \par
Nevertheless, affordance is a special property different from the semantic category. One object may have multiple affordances, while one affordance may correspond to different objects. For example, the affordance of \textit{``Bed''} includes two different actions: \textit{``Sit''} and \textit{``Lie''}, while \textit{``Chair''} can also afford action \textit{``Sit''}. This may lead to great differences in the visual representation of different objects referred to by the same textual descriptions. Fig. \ref{FIG:cyclic} (a) shows the differences between traditional vision-language tasks and affordance-related V-L tasks.
Such variations in the color, texture, and shape of objects render the affordance-related vision-language task more difficult in the alignment of textual and visual features compared to conventional multimodal tasks \cite{hu2016segmentation,liu2017recurrent,li2018referring,hu2020bi,jing2021locate}.
These differences can lead to significant divergence in the distribution of textual and visual representations in feature space. It may be difficult for a deep-learning-based network to align features from these two modalities through a single learning step. Because the distribution of corresponding visual features is irregular for the same textual representation, the cross-modal semantic consistency is difficult to capture if the network only updates features once. To tackle this problem, as shown in Fig. \ref{FIG:cyclic} (b), we design a cyclic bilateral update mechanism. Our model updates the visual and linguistic features with the guidance of the other modality to enhance the inter-modal semantic consistency step by step in a bilateral and cyclic manner. The inter-modal consistency is gradually enhanced after several cyclic alignments. \par
To this end, we propose a \textbf{C}yclic \textbf{B}ilateral \textbf{C}onsistency \textbf{E}nhancement Network (\textbf{CBCE-Net}), which consists of three main modules: the \textbf{V}ision guide \textbf{L}anguage \textbf{M}odule (VLM), the \textbf{L}anguage guide \textbf{V}ision \textbf{M}odule (LVM) and the \textbf{C}yclic \textbf{I}nteraction \textbf{M}odule (CIM). Utilizing the attention mechanism, VLM learns the importance of linguistic features in each visual region and derives new linguistic features. LVM then uses the textual features output by VLM, aggregating multi-level information, to guide the generation of new visual features. The VLM and LVM operations are repeated several times in the CIM module to enhance the inter-modal semantic consistency in a cyclic and bilateral manner. \par
The current affordance datasets lack explicit descriptions of affordance in natural language. To address this issue, based on the previously proposed PAD dataset \cite{Oneluo}, we annotate associated short phrases according to affordance categories, as this is more suitable for practical application scenarios. With the leverage of WordNet \cite{miller1995wordnet}, a hierarchical lexical database, we annotate the text of affordance from four different but closely related perspectives: potential actions, functions, appearance features, and the environment. We name the resulting dataset, built on PAD with natural language annotations, the PAD-Language dataset (PAD-L). \par
In summary, our contributions are four-folds:
\begin{itemize}[leftmargin=12pt]
\item [1)]
We propose a new task for object affordance detection based on text phrases. Given a set of text phrases and an image containing related objects, the corresponding segmentation masks are expected to be generated. This enables intelligent agents to better comprehend humans' intentions during interactions and to locate specific objects in the scene even if the instructions do not indicate the specific category of the object.
\item [2)]
We design a novel CBCE-Net to effectively extract the affordance information from the given set of text phrases and then segment the corresponding object regions in the given image. Our model effectively solves the vision-language alignment issue caused by the multiplicity property of affordance. The text and vision information interact and align well with each other in a cyclic and bilateral manner even though the visual appearance, texture, and color of the objects are highly diverse.
\item [3)]
We annotate affordance categories using natural language phrases based on the existing PAD dataset. A new affordance dataset with natural language descriptions, PAD-L, is constructed, which extends the affordance of objects from limited categories to unconstrained natural language. The new dataset can be used in various downstream tasks.
\item [4)]
Compared with nine different approaches chosen from four relevant fields, our model achieves the best results in both subjective and objective terms and can serve as a strong baseline for future work.
\end{itemize}
The rest of the paper is organized as follows. Section \ref{related} reviews previous work related to the phrase-based affordance detection task. Section \ref{anno} describes the details of annotating text phrases. The proposed CBCE-Net is introduced in Section \ref{method}. Section \ref{exp} presents the experimental results and analysis on PAD-L. We conclude the paper by discussing possible applications and future work in Section \ref{conclude}.
\section{Related Work} \label{related}
\subsection{Affordance Learning}
Visual affordance has been extensively studied in the computer vision and robotics communities because of its close association with action recognition, scene parsing, and human-robot interaction. Many approaches have been proposed to learn the visual affordance of objects in images. Hassan {\em et al.~} \cite{hassan2015attribute} proposed a Bayesian network-based affordance detection method that exploits the attributes of the object, actor, and environment. Grabner {\em et al.~} \cite{grabner2011makes} utilized a 3D human skeleton model to learn the action of sitting on a chair and to infer whether an object can afford the ``\textit{sitting}'' action or not. \par
With the development of deep neural networks, many methods based on deep learning have been proposed. Inspired by semantic segmentation \cite{long2015fully} approaches, affordance learning is extensively studied at the pixel level. Sawatzky {\em et al.~} \cite{sawatzky2017adaptive} proposed a weakly supervised affordance segmentation method to predict the fine segmentation masks by effectively leveraging the weakly labeled data, which is annotated in image-level and key-points level. Nguyen {\em et al.~} \cite{nguyen2017object} considered affordance segmentation as an object detection task. They employ the existing object detection models to obtain a set of candidate bounding box proposals. Afterward, atrous convolution is used to generate the final fine mask. Zhao {\em et al.~} \cite{zhao2020object} proposed an end-to-end model to exploit the symbiotic relationship between multiple affordances with the combinational relationship between affordance and objectness to produce a pixel-level affordance map. \par
In addition to using the features of objects themselves, some recent work has leveraged auxiliary information to learn visual affordance \cite{thermos2017deep,wang2017transferring}. Fang {\em et al.~} \cite{fang2018demo2vec} proposed a method to learn the affordance of unseen objects from expert demonstration videos. Their model extracts feature embedding vectors from demonstration videos to predict the interaction regions and action labels of the same objects in a given image. Luo {\em et al.~} \cite{Oneluo} proposed a one-shot detection method to detect affordance in unseen scenarios. Their model first extracts intention information from support images. Then, the intention is transferred to query images to detect objects capable of affording the intention. To this end, they constructed a new \textbf{P}urpose-driven \textbf{A}ffordance \textbf{D}ataset (PAD), which compensates for the lack of rich scenes in previous datasets. \par
Unlike all the work mentioned above, where affordance is explored only in visual mediums, we attempt to investigate affordance detection involving natural language. Inputting a set of phrases that describe affordances and an image, the corresponding objects are expected to be segmented, which meets the realistic scenarios where robots receive information in multiple modalities from multiple sources.
\subsection{Referring Expression Grounding}
Given a piece of text, the referring expression grounding task aims to comprehend the natural language content by locating the corresponding regions in the input image. Many efforts achieve localization at the bounding box level \cite{mao2016generation,yu2017joint,liu2019improving}. Yu {\em et al.~} \cite{yu2018mattnet} proposed a modular network which decomposes the input natural language description into subject, location, and relationship attributes to improve the localization performance. Liu {\em et al.~} \cite{liu2021cross} adopted graph models with an attention mechanism to capture the relationships between the object regions in the given image. In association with visual affordance, Mi {\em et al.~} \cite{mi2019object,mi2020intention} investigated the use of natural language to guide visual affordance detection. Their model first extracts the intention from the natural language and then locates the referred objects in the given image at the bounding box level. \par
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\textwidth]{figure/wordnet.pdf}
\caption{\textbf{Examples of utilizing WordNet \cite{miller1995wordnet} to explore potential actions which can be performed on specific objects}. WordNet groups words together based on specific senses. The interlinks between words in WordNet can be visually organized in the form of a tree diagram. Verbs in WordNet are classified into $9$ major categories, while nouns fall into $25$ categories. As for the word ``\textit{Kick}'' in WordNet, there are $6$ distinct senses when it is treated as a noun and $8$ different senses when it is a verb. In the tree diagram, ``$\langle noun.act \rangle$ kick \#1'' indicates that the first sense of the noun ``kick'' belongs to the ``act'' category. This sense is glossed as ``deliver a blow by foot'', which could be an annotation phrase for the affordance ``kick''. After filtering out semantic domains irrelevant to affordance, such as $\langle verb.emotion \rangle$, $\langle verb.competition \rangle$, \emph{etc}., we find that $\langle verb.act \rangle$, $\langle verb.motion \rangle$, $\langle verb.contact \rangle$ and $\langle noun.act \rangle$ are the most associated ones. Besides, we utilize the linguistic concepts in WordNet to explore richer expressions. In the tree diagram, ``\textit{hypernym}'' denotes a more abstract and generic word (\emph{e.g. } ``move'' is the hypernym of ``kick''), the expression ``\textit{sister terms}'' is used to represent a pair of synsets (sets of cognitive synonyms) which share a hypernym, and ``\textit{troponym}'' indicates a ``manner'' relation between two verbs. With the leverage of these semantic relations, nodes in the tree diagrams can be employed as affordance annotations.}
\label{FIG:anno}
\end{figure*}
Many approaches have been proposed at the pixel level. In \cite{hu2016segmentation,li2018referring,margffoy2018dynamic}, the multimodal features from a CNN and an LSTM \cite{hochreiter1997long} are directly concatenated and input into a fully convolutional network to generate the final pixel-wise mask. These methods do not exploit the intra-modal and inter-modal relationships explicitly. More recent work uses self-attention and cross-attention mechanisms for linguistic and visual information. Ye {\em et al.~} \cite{ye2021referring} proposed a cross-modal self-attention module to capture the long-range dependencies between linguistic and visual features, which can adaptively focus on essential words in the referring expression and crucial regions in the image. Hu {\em et al.~} \cite{hu2020bi} designed a bi-directional relationship inferring network to model the relationship between linguistic and visual features. Liu {\em et al.~} \cite{liu2021cross} proposed a model that first perceives all the entities in the image according to the entity and attribution words in the expression, and then infers the location of the target object with the words that represent relationships. Jing {\em et al.~} \cite{jing2021locate} first obtain a position prior of the referred object based on the language and image, and then generate the segmentation mask based on this position prior. \par
Compared to the referring segmentation task mentioned above, our proposed task has significant differences. First, the inherent multiplicity property of affordance leads to much larger variation in the visual representation of objects referred to by the same text than in traditional settings, which makes the alignment of linguistic and visual features more difficult. Second, our phrases only describe affordance without presenting entity words. Therefore, we are unable to utilize relationships between entities as in \cite{jing2021locate,liu2021cross,ye2021referring} to localize objects. We adopt short phrases rather than long sentences to describe affordances, to match practical scenarios. This prevents us from leveraging textual context information to capture the relationship between linguistic and visual features as is done in \cite{liu2017recurrent,hu2020bi,ye2021referring,liu2021cross}.
The work mentioned above also utilized natural language to learn visual affordances \cite{mi2019object,mi2020intention}. Our work differs from them in the following ways.
Firstly, we consider the inherent multiplicity problem of affordance and involve richer indoor and outdoor scenes, which meets the definition of affordance and is suitable for practical applications. Secondly, our model generates a more precise pixel-level segmentation mask instead of a bounding box, which limits the ability to capture the inherent shape of objects. The accurate shape offers downstream tasks such as ``Robot Grasping'' richer geometric features to facilitate potential actions.
In addition, technically, unlike their two-stage strategy, which relies heavily on the accuracy of the initial intention extraction from natural language, we propose an end-to-end framework that enables multi-modal information to interact adequately in a cyclic and bilateral manner.
\section{Language Annotations} \label{anno}
This section describes the process of obtaining our language annotations based on the PAD dataset. The complete PAD dataset can be found at \url{https://github.com/lhc1224/OSAD_Net}. We describe the details of the annotation process, in which we consider affordances from four perspectives with the assistance of WordNet, a hierarchical lexical database which can be explored at \url{https://wordnet.princeton.edu/}. After that, we show some statistics of the proposed PAD-Language dataset.
Instead of describing affordances with grammatically coherent sentences, we find it more effective to use several short phrases, which is closer to actual application scenarios. People tend to give computers or robots short instructions in daily life rather than long sentences. Moreover, short phrases are more representative than complicated sentences for depicting objects' affordances. \par
As an inherent property of an object, the term ``\textit{affordance}'' is related to a set of possible actions that can manipulate objects. Therefore, the linguistic descriptions of affordance must be tightly related to these potential actions or to the functionalities for human use. In addition, the environment in which objects are located and their appearance features may also affect the affordances of objects. Most of the affordances associated with our daily life are related to these aspects. Therefore, for better phrase descriptions, we consider text phrases from several different perspectives: \textbf{1)} The \textbf{actions} that can be potentially performed on the object. \textbf{2)} The \textbf{function} of the object. \textbf{3)} The \textbf{appearance features} related to the actions or functionalities. \textbf{4)} The \textbf{environment} that has the capability to afford possible interactions between actors and objects.
\textbf{\emph{\underline{1) Potential Actions:}}} Different from other properties of objects, affordance has one noteworthy distinction: an object may have several different affordances, while different objects may have the same affordance. It is difficult to focus on this issue while describing affordance using natural language. To this end, we make the phrase descriptions based on the affordance categories rather than the object categories in the PAD dataset. To explore more expressions for actions, we utilize the widely used lexical tool WordNet \cite{miller1995wordnet} to assist the annotation process. WordNet is a hierarchical lexical database that groups verbs, nouns, adjectives, and adverbs into sets of cognitive synonyms (\textit{synsets}) associated by semantic relations. For a specific word, other words or phrases expressing a similar sense can easily be found in WordNet. To describe affordances, actions can generally be indicated by a \textit{synset} rather than by individual verbs. Tree diagrams can represent the semantic relationships between words in WordNet. Two typical keyword-centered tree diagram examples are shown in Fig. \ref{FIG:anno}. For a specific action, the phrases on the nodes of the tree diagram can greatly enrich the descriptions of affordances. As shown in Fig. \ref{FIG:anno}, it is reasonable to consider that \textit{``soccer ball''} and \textit{``punching bag''} have the affordances \textit{``deliver a blow by foot''}, \textit{``drive or propel with foot''}, \textit{``strike out''}, \emph{etc}., and the affordance \textit{Throw} can be extended to phrases such as \textit{propel through the air}, \textit{deliver} or \textit{pass}, \emph{etc}., rather than a single word. It is worth noting that we adopt multiple verb tenses instead of only the original form, to exhibit more diverse application scenarios and obtain more natural language phrases.
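For illustration, such candidate phrases can be collected programmatically; the following sketch assumes the NLTK interface to WordNet (the function name and the domain filter are illustrative, and the raw candidates still require manual curation):
\begin{verbatim}
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

KEEP = {"verb.act", "verb.motion", "verb.contact", "noun.act"}

def candidate_phrases(word):
    """Glosses and lemmas of the relevant senses of `word`, plus
    its hypernyms and troponyms (verb hyponyms in WordNet)."""
    phrases = set()
    for syn in wn.synsets(word):
        if syn.lexname() not in KEEP:   # filter semantic domains
            continue
        phrases.add(syn.definition())   # e.g. 'deliver a blow by foot'
        phrases.update(l.replace("_", " ") for l in syn.lemma_names())
        for rel in syn.hypernyms() + syn.hyponyms():
            phrases.update(l.replace("_", " ")
                           for l in rel.lemma_names())
    return sorted(phrases)

# candidate_phrases("kick") -> raw candidates for manual curation
\end{verbatim}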
\textbf{\emph{\underline{2) Function:}}} Object function is an intrinsic property of an object, independent of its users. Moreover, functionality understanding plays a vital role in human-machine interactions, and sensing objects' functions is essential to building a more intelligent computer vision system. Therefore, the functions of an object are also included in our phrase annotations. For instance, the object ``\textit{Umbrella}'' has the function of ``\textit{sheltering from the wind and rain}'', the object ``\textit{Knife}'' has the function of ``\textit{cut}'', and the object ``\textit{Drum}'' has the function of ``\textit{make sound}''. We annotate the functions of objects in the PAD dataset using simple phrases without going into more detail.
\textbf{\emph{\underline{3) Appearance Feature:}}} Visual appearance and geometric characteristics can be regarded as the physical basis of affordances. For instance, the middle of a \textit{cup} is \textit{depressed}, resulting in its ability to ``\textit{hold water}'', and a ``\textit{soccer ball}'' is ``\textit{spherical}'' in appearance, causing it to have the ability to ``\textit{roll}''. Besides, in practice, one may not know the specific category of an object, but its affordances can still be inferred from appearance features.
\textbf{\emph{\underline{4) Environment:}}} In the most widely accepted definitions of affordance, the environment plays an important role. The term ``affordance'' is thought to reveal the complementary nature of the animal and the environment \cite{gibson1977theory}. In our textual annotations, we incorporate descriptions of the environment. Sometimes, specific affordances are available only when the object is located in a particular environment. For example, a ``\textit{soccer ball}'' generally only exhibits the affordance ``\textit{play}'' in an \textbf{outdoor} environment, and ``\textit{chopsticks}'' are normally present at \textbf{the dining table} or in \textbf{the kitchen}. After considering the environment surrounding the object, the description of affordance becomes more complete. More examples can be found in the Appendix. \par
Fig. \ref{FIG:stati} shows some overall statistics of the proposed PAD-L dataset, which is constructed based on the previous PAD dataset and contains $4,002$ images from
$31$ affordance categories and $72$ object categories. We split these images into $75$\% for training and $25$\% for testing. For each image in the PAD dataset, we randomly select a set of phrases (four in this paper) from the candidate annotations to build a new extended version with text information. The statistics show that PAD-L contains rich phrases covering a variety of scenarios.
\begin{figure*}[!t]
\begin{minipage}[b]{0.28\textwidth}
\centering
\small
\renewcommand{\arraystretch}{1.}
\renewcommand{\tabcolsep}{4.pt}
\begin{tabular}{l|c}
\hline
\Xhline{2.\arrayrulewidth}
\rowcolor{mygray}
\textbf{Statistics} & \textbf{Overall} \\
\hline
\Xhline{2.\arrayrulewidth}
\# affordances & 31 \\
\hline
\# objects & 72 \\
\hline
\# images & 4,002 \\
\multicolumn{1}{l|}{\#\# Train Set} & 3,202\\
\multicolumn{1}{l|}{\#\# Test Set} & 800 \\
\hline
\# phrases & 1,447 \\
\hline
\# words & 959 \\
\hline
\# words per phrases & 2.36 \\
\hline
\# phrases per affordance & 46.68 \\
\hline
\Xhline{2.\arrayrulewidth}
\end{tabular}
\end{minipage}
\begin{minipage}[htb]{0.46\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{figure/sta.png}
\end{minipage}
\begin{minipage}[htb]{0.25\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{figure/pc.pdf}
\end{minipage}
\caption{\textbf{Statistics of PAD-L.} The table on the left shows the statistics of all data. The middle chart illustrates the image number and phrase number according to affordance categories. The bars represent image numbers, with every part in the bar indicating the image number of every object category, and the line chart shows phrase numbers. The cloud of phrases on the right has the font sizes proportional to the square root of frequencies in the proposed PAD-L.}
\label{FIG:stati}
\end{figure*}
\section{Method} \label{method}
\subsection{Problem Description}
Given a set of affordance descriptions $P = \{P_1, P_2,..., P_n\}$ that describe an affordance $A_m$ and an affordance-related image $I$ which contains multiple objects $\{O_1, O_2, ..., O_n\}$, the phrase-based affordance detection task aims to obtain the segmentation mask $M$ of the object $O_m$ which can afford the affordance $A_m$ in the image. We define the input as $n$ query phrases and an image $I$ in each batch.\par
We need a vision encoder and a language encoder to extract visual and linguistic features, so that information from the two distinct modalities can be aligned and fused. The resulting features are input to a module that learns the consistency between the two modalities. After adequate alignment and fusion,
a segmentation module generates the final masks.
\subsection{Visual and Linguistic Features Extraction}
\label{Overall}
As shown in Fig. \ref{FIG:model} (a), our model takes a set of phrases and an affordance related image as inputs. In the image branch, a CNN backbone (\emph{e.g. } ResNet101\cite{7780459}) is used to extract multi-level image features. The output of the 3rd, 4th and 5th stages of CNN backbone are denoted as $\{I_3, I_4, I_5\}$ with channel dimension of $512$, $1024$ and $2048$, respectively. Afterwards, $1 \times 1$ convolution layers are employed to transform the multi-level visual features to the same size of $\mathbb{R}^{H\times W \times C_{I}}$.\par
In the textual branch, the language features $L=\{L_1, L_2,..., L_n\}$ are extracted using a language encoder (\emph{e.g. } LSTM\cite{6795963}), where $n$ is the number of phrases. The parameters of the embedding layer are initialized using GloVe word embeddings \cite{pennington-etal-2014-glove}. After encoding by the language encoder, a max pooling operation is applied to the resulting linguistic features to obtain a global language representation $L_0 \in \mathbb{R}^{C_l}$, where $C_l$ is the dimension of the language feature; $L_0$ interacts with the multi-level visual features and is fed into the proposed \textit{Cyclic Interaction Module}.\par
Afterwards, a bilinear fusion \cite{ben2017mutan} is adopted to fuse the different-level visual features with the linguistic feature $L_0$. Following prior work in referring segmentation\cite{Liu2017RecurrentMI}\cite{ye2021referring}\cite{hu2016segmentation}, to incorporate more spatial information, we concatenate an 8-D spatial coordinate feature, denoted as $P\in \mathbb{R}^{H \times W \times 8}$, with the resulting fused multi-modal features to obtain the final fused features $\{F^3_0, F^4_0, F^5_0 \} \in \mathbb{R}^{H \times W \times (C_I + C_l + 8)}$, which can be defined as follows:
\begin{equation}
\{ F^i_0 = concat(f(I_i, L_0), P)\}_{i=3,4,5}, \label{eq:1}
\end{equation}
where $concat(\cdot , \cdot)$ represents the concatenation operation along the channel dimension and $f$ denotes bilinear fusion operation. In this paper, $H=40$, $W=40$.
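A sketch of Eq. (\ref{eq:1}) in PyTorch-style code is given below; the 8-D coordinate construction follows a common referring-segmentation convention (per-cell $x$/$y$ minimum, maximum and center plus the inverse map size), and the helper \texttt{bilinear\_fusion} is a placeholder for the MUTAN-style fusion of \cite{ben2017mutan}:
\begin{verbatim}
import torch

def spatial_coords(h, w):
    """8-D per-pixel coordinates, normalized to [-1, 1]:
    x_min, x_max, x_center, y_min, y_max, y_center, 1/W, 1/H."""
    ys = torch.linspace(-1, 1, h).view(h, 1).expand(h, w)
    xs = torch.linspace(-1, 1, w).view(1, w).expand(h, w)
    sy, sx = 2.0 / h, 2.0 / w
    feats = [xs - sx / 2, xs + sx / 2, xs,
             ys - sy / 2, ys + sy / 2, ys,
             torch.full((h, w), 1.0 / w),
             torch.full((h, w), 1.0 / h)]
    return torch.stack(feats, dim=-1)          # (H, W, 8)

# Eq. (1): F_i = concat(f(I_i, L0), P), channel-wise.
# fused_i = torch.cat([bilinear_fusion(I_i, L0),
#                      spatial_coords(40, 40)], dim=-1)
\end{verbatim}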
\subsection{Cyclic Interaction Module}
\label{CIM}
Compared to traditional vision-language tasks, the difference in appearance of objects referred to by the same language descriptions in different images can be significant because of the multiplicity property of affordance. \par
The vast differences in visual appearance lead to substantial divergence in the distribution of textual and visual features in feature space. To generate accurate and consistent representations of the target object and the given affordance description phrases, the feature representation of one modality is adaptively enhanced several times, guided by the other modality, in the \textit{Cyclic Interaction Module (CIM)}, which is indicated in Fig. \ref{FIG:model} (a). CIM consists of bilateral interaction operations between the two modalities to learn the consistency step by step, which leads to adequate fusion and alignment.\par
Specifically, we propose a \textit{Vision guide Language Module} (VLM) to enhance the linguistic feature representation with the guidance of visual features and a \textit{Language guide Vision Module} (LVM)) to get improved visual feature representations. The cyclic interaction process is illustrated as following, ($i=3,4,5$ and $j,k \in \{3,4,5\} \backslash \{i\}$ in the equations):
\begin{align}
L^i_1 & = VLM(L_0, F^i_0),\label{eq:2} \\
F^i_1 & = LVM(L^i_1, F^i_0, F^j_0, F^k_0), \label{eq:3} \\
L^i_2 & = VLM(L^i_1, F^i_1),\label{eq:4} \\
F^i_2 & = LVM(L^i_2, F^i_1, F^j_1, F^k_1). \label{eq:5}
\end{align}
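The following schematic NumPy sketch renders only the call pattern of Eqs. \eqref{eq:2}--\eqref{eq:5}; the VLM and LVM bodies are toy stand-ins, not the modules defined in the next two subsections, and the small shapes are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
levels = (3, 4, 5)
L0 = rng.standard_normal(8)                              # toy global L_0
F = {i: rng.standard_normal((4, 4, 8)) for i in levels}  # toy fused features
L = {i: L0.copy() for i in levels}

def VLM(L_vec, F_map):          # stub: language refined by pooled visual context
    return 0.5 * (L_vec + F_map.mean(axis=(0, 1)))

def LVM(L_vec, F_i, F_j, F_k):  # stub: vision gated by the language feature
    gate = 1.0 / (1.0 + np.exp(-L_vec))
    return F_i + gate * (F_j + F_k)

for m in range(2):              # m = 0 -> Eqs. (2)-(3); m = 1 -> Eqs. (4)-(5)
    F_prev = {i: F[i] for i in levels}   # updates use the previous round's F
    for i in levels:
        j, k = [x for x in levels if x != i]
        L[i] = VLM(L[i], F_prev[i])
        F[i] = LVM(L[i], F_prev[i], F_prev[j], F_prev[k])
\end{verbatim}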
\subsection{Vision Guide Language Module}
\label{VLM}
\begin{figure*}[t]
\centering
\begin{overpic}[width=0.99\linewidth]{figure/model}
\put(11.5,19.6){\footnotesize{\textbf{(Eq.~\ref{eq:2})}}}
\put(9.5,11){\footnotesize{\textbf{(Eq.~\ref{eq:3})}}}
\put(25,13.2){\footnotesize{\textbf{(Eq.~\ref{eq:4})}}}
\put(34,16.5){\footnotesize{\textbf{(Eq.~\ref{eq:5})}}}
\put(68,64){\footnotesize{\textbf{(Eq.~\ref{eq:6})}}}
\put(74.5,61.5){\footnotesize{\textbf{(Eq.~\ref{eq:7})}}}
\put(92,67.5){\footnotesize{\textbf{(Eq.~\ref{eq:8})}}}
\put(93,45){\footnotesize{\textbf{(Eq.~\ref{eq:9})}}}
\put(49,12.7){\footnotesize{\textbf{(Eq.~\ref{eq:10})}}}
\put(39.3,72){\normalsize{\textbf{Section~(\ref{Overall})}}}
\put(27,22){\normalsize{\textbf{Section~(\ref{CIM})}}}
\put(87.5,54.2){\normalsize{\textbf{Section~(\ref{VLM})}}}
\put(87.5,31.2){\normalsize{\textbf{Section~(\ref{LVM})}}}
\put(87.5,5.5){\normalsize{\textbf{Section~(\ref{SegModule})}}}
\end{overpic}
\caption{\textbf{The architecture of our proposed CBCE-Net.} CBCE-Net first uses DeepLab ResNet101 \cite{chen2017deeplab} and an LSTM \cite{6795963} to extract multi-level visual and linguistic features, respectively. Subsequently, combined with spatial coordinates, multi-level multi-modal features are generated through bilinear fusion operations (see Section \ref{Overall} for details). Afterwards, the fused features are fed into the CIM module to enhance the semantic consistency in a cyclic and bilateral manner (see Section \ref{CIM} for details). In the CIM module, we design a VLM module (see Section \ref{VLM}) and an LVM module (see Section \ref{LVM}) to update visual and linguistic features bilaterally under the guidance of each other. VLM is shown in part (b) in the top right corner and LVM is illustrated in part (c). Note that in LVM, the original feature, denoted $F^i_m$, is shown at the top left corner and the updated feature at the output is denoted $F^i_{m+1}$. Finally, an ASPP module (shown in part (d)) receives the final concatenated fused features and generates the predicted masks (see Section \ref{SegModule}).
}
\label{FIG:model}
\end{figure*}
The architecture of the proposed VLM is illustrated in Fig. \ref{FIG:model} (b). To update the language feature representation $L^i_{m+1}$, we leverage the previous fused feature $F^i_m$ to guide the transformation of the previous language feature $L^i_m$ in VLM. \par
For a language feature $L^i_m \in \mathbb{R}^{C_l}$ and fused visual feature $F^i_m \in \mathbb{R}^{H \times W \times C_v}$, we can compute the element-wise correlations using inner product:
\begin{equation}
S^i_m = \phi(F^i_m)\theta(L^i_m)^T, \label{eq:6}
\end{equation}
where $\theta$ and $\phi$ are $1 \times 1$ convolution layers to transform the feature to have the same dimensions where $\theta(L^i_m) \in \mathbb{R}^{1 \times C}$, $\phi(F^i_m) \in \mathbb{R}^{HW \times C}$.
The affinity map $S^i_m \in \mathbb{R}^{HW \times 1}$ captures the correlation information of the given features. We then employ \textit{scale} and \textit{softmax} operations, following the scaled dot-product attention practice of \cite{vaswani2017attention}, to normalize and reshape the affinity map into a global affinity attention map $A^i_m \in \mathbb{R}^{1 \times HW}$. This process is as follows:
\begin{equation}
A^i_m = \text{Softmax}(\frac{S^i_m}{\sqrt{C}}). \label{eq:7}
\end{equation}
Afterwards, we multiply the reshaped original fused visual feature $F^i_m$ by the resulting attention map to generate the attention feature map $A^i_c \in \mathbb{R}^{1 \times C}$ along the channel dimension. Finally, to obtain the updated language feature representation $L^i_{m+1} \in \mathbb{R}^{1 \times 1 \times C}$, we concatenate the original language feature $L^i_m$ with $A^i_c$, followed by a convolution layer and an $L_2$ normalization operation:
\begin{equation} \label{eq:8}
L^i_{m+1} = ||conv(concat(L^i_m, A^i_c))||_2,
\end{equation}
where $conv$, $concat(\cdot , \cdot)$ and $||\cdot||_2$ denote the $1 \times 1$ convolution, concatenation and $L_2$ normalization operations, respectively.
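A minimal NumPy sketch of the VLM update in Eqs. \eqref{eq:6}--\eqref{eq:8} follows; the $1\times1$ convolutions are modeled as plain linear maps with illustrative random weights, and the small values of $H$, $W$ and $C$ are assumptions for readability.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
H, W, C_v, C_l, C = 8, 8, 16, 16, 12
F = rng.standard_normal((H * W, C_v))         # fused visual feature, flattened
L = rng.standard_normal(C_l)                  # current language feature L^i_m

theta = rng.standard_normal((C_l, C)) * 0.1   # stand-in for the 1x1 conv on L
phi   = rng.standard_normal((C_v, C)) * 0.1   # stand-in for the 1x1 conv on F

S = (F @ phi) @ (L @ theta)[:, None]          # Eq. (6): affinity map, (HW, 1)
A = np.exp(S / np.sqrt(C))
A = (A / A.sum()).T                           # Eq. (7): scaled softmax, (1, HW)
A_c = A @ F                                   # channel attention feature, (1, C_v)

conv = rng.standard_normal((C_l + C_v, C_l)) * 0.1   # stand-in final 1x1 conv
L_new = np.concatenate([L[None, :], A_c], axis=1) @ conv  # Eq. (8): concat+conv
L_new = L_new / np.linalg.norm(L_new)                     # L2 normalization
print(L_new.shape)                                        # (1, C_l)
\end{verbatim}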
\subsection{Language Guide Vision Module}
\label{LVM}
Previous work \cite{jing2021locate,li2018referring,yu2017multi} on vision-language tasks demonstrates that information exchange among multi-level features substantially benefits the vision-language interaction process. Therefore, leveraging multi-level information, we propose a novel \textit{Language Guide Vision Module} to update the visual features under the guidance of linguistic features. The operation is shown in Fig. \ref{FIG:model} (c).
The updated linguistic feature $L^i_{m+1} \in \mathbb{R}^{1 \times 1 \times C}$ contains rich multimodal context information about $F^i_m$. We utilize $L^i_{m+1}$ to select the relevant information from the other two levels of features, $F^j_m$ and $F^k_m$, after the necessary transformations. The final aggregated global context feature $F^i_{m+1}$ is obtained by adding to $F^i_m$ the relevant information from the other two levels:
\begin{equation}\label{eq:9}
F^i_{m+1} = F^i_m + \sum_{k\in \{3,4,5\} \backslash i} \sigma(conv(L^i_{m+1})) \odot F^k_m,
\end{equation}
where $\sigma(\cdot)$ denotes \textit{sigmoid} function.
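A hedged NumPy sketch of the gated aggregation in Eq. \eqref{eq:9} follows; the convolution is again a stand-in linear map and the shapes are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 8, 8, 16
F = {i: rng.standard_normal((H, W, C)) for i in (3, 4, 5)}
L_next = rng.standard_normal(C)             # updated language feature L^i_{m+1}
conv = rng.standard_normal((C, C)) * 0.1    # stand-in 1x1 convolution

gate = 1.0 / (1.0 + np.exp(-(L_next @ conv)))       # sigmoid channel gate
i = 4                                               # update level i = 4, say
F_new = F[i] + sum(gate * F[k] for k in (3, 5))     # Eq. (9), broadcast over H, W
print(F_new.shape)                                  # (8, 8, 16)
\end{verbatim}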
\subsection{Segmentation Module}
\label{SegModule}
The segmentation module aims to produce the final fine segmentation mask. First, we obtain a concatenated feature $F^C_2$ which contains multi-level information:
\begin{equation} \label{eq:10}
F^C_2 = concat(F^3_2, F^4_2, F^5_2).
\end{equation}
Next, as shown in Fig. \ref{FIG:model} (d), we utilize an ASPP module \cite{chen2017deeplab} to capture multi-scale information. The ASPP consists of five parallel sub-networks: the first learns global information via a \textit{global average pooling} operation, while the remaining four branches apply atrous convolutions with dilation rates of $\{1, 3, 7, 11\}$, respectively.
In the parallel branches, depthwise separable convolutions are applied to reduce model complexity. The multi-scale features are then concatenated, and finally a $1\times 1$ convolution and an \textit{upsample} operation are adopted to generate the final fine mask $P_m$ with the same resolution and channel dimension as the input image. \par
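To make the atrous sampling concrete, the following 1-D NumPy sketch shows how a dilation rate $r$ makes a kernel tap $k$ read input position $t+rk$; the 2-D depthwise separable version used in the ASPP follows the same pattern, and the difference kernel here is purely illustrative.
\begin{verbatim}
import numpy as np

def dilated_conv1d(x, kernel, rate):
    # 'same' padding for an odd-length kernel with the given dilation rate
    K, pad = len(kernel), (len(kernel) - 1) * rate // 2
    xp = np.pad(x, pad)
    return np.array([sum(kernel[k] * xp[t + k * rate] for k in range(K))
                     for t in range(len(x))])

x = np.arange(16, dtype=float)
kernel = np.array([1.0, 0.0, -1.0])          # simple difference kernel
for rate in (1, 3, 7, 11):                   # the four ASPP dilation rates
    print(rate, dilated_conv1d(x, kernel, rate)[:4])
\end{verbatim}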
During training, we adopt the sigmoid binary cross entropy (BCE) loss as the objective to be minimized, defined on the predicted output $P_{mask}$ and the ground truth segmentation mask $G$ as follows:
\begin{equation}
L = -\sum_{i=1}^{H \times W} \big[ G(i)\log(P_{mask}(i)) + (1-G(i))\log(1-P_{mask}(i)) \big], \label{eq:11}
\end{equation}
where $i$ indexes the elements of the ground-truth mask and $H \times W$ denotes the size of the ground-truth mask.
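A minimal NumPy version of this per-image objective (with the usual $\epsilon$ guard on the logarithms, an implementation detail we assume) is:
\begin{verbatim}
import numpy as np

def bce_loss(P_mask, G, eps=1e-7):
    P = np.clip(P_mask, eps, 1.0 - eps)      # guard the logarithms
    return -np.sum(G * np.log(P) + (1.0 - G) * np.log(1.0 - P))

rng = np.random.default_rng(0)
G = (rng.random((40, 40)) > 0.5).astype(float)   # toy ground-truth mask
P_mask = rng.random((40, 40))                    # toy predicted probabilities
print(bce_loss(P_mask, G))
\end{verbatim}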
\begin{table*}[t]
\caption{\textbf{The experimental results of $10$ models} (\textcolor[rgb]{0.8,0.0,0.3}{\textbf{BASNet}}\cite{Qin_2019_CVPR} , \textcolor[rgb]{0.8,0.0,0.3}{\textbf{CPD}}\cite{Wu_2019_CVPR}, \textcolor[rgb]{0.99,0.5,0.0}{\textbf{OSAD}}\cite{Oneluo}, \textcolor[rgb]{0.99,0.5,0.0}{\textbf{OAFFD}}\cite{zhao2020object},
\textcolor[rgb]{0.4,0.0,0.99}{\textbf{PSPNet}}\cite{zhao2017pspnet}, \textcolor[rgb]{0.4,0.0,0.99}{\textbf{DeepLabv3+ (DLabV3+)}}\cite{chen2017rethinking},
\textcolor[rgb]{0.0,0.8,0.0}{\textbf{CMSA}}\cite{ye2021referring}, \textcolor[rgb]{0.1,0.8,0.1}{\textbf{BRINet}}\cite{hu2020bi}, \textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMPC}}\cite{liu2021cross}) in terms of five metrics (IoU~$(\uparrow)$, $F_{\beta}$~$(\uparrow)$, $E_\phi$~$(\uparrow)$, CC~$(\uparrow)$, and MAE~$(\downarrow)$). \textbf{Bold} and \underline{underline} indicate the best and the second-best scores, respectively.}
\label{Table:2}
\centering
\renewcommand{\arraystretch}{1.}
\renewcommand{\tabcolsep}{3.pt}
\begin{tabular}{c||cc|cc|cc|ccc||c}
\hline
\Xhline{2.\arrayrulewidth}
\rowcolor{mygray}
\textbf{Methods} & \textcolor[rgb]{0.8,0.0,0.3}{\textbf{BASNet}} \textbf{\cite{Qin_2019_CVPR}} & \textcolor[rgb]{0.8,0.0,0.3}{\textbf{CPD}} \textbf{\cite{Wu_2019_CVPR}} & \textcolor[rgb]{0.99,0.5,0.0}{\textbf{OSAD}} \textbf{\cite{Oneluo}} & \textcolor[rgb]{0.99,0.5,0.0}{\textbf{OAFFD}} \textbf{\cite{zhao2020object}} & \textcolor[rgb]{0.4,0.0,0.99}{\textbf{PSPNet}} \textbf{\cite{zhao2017pspnet}} & \textcolor[rgb]{0.4,0.0,0.99}{\textbf{DLabV3+}} \textbf{\cite{chen2017rethinking}} & \textcolor[rgb]{0.0,0.8,0.0}{\textbf{CMSA}} \textbf{\cite{ye2021referring}} & \textcolor[rgb]{0.1,0.8,0.1}{\textbf{BRINet}} \textbf{\cite{hu2020bi}} & \textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMPC}} \textbf{\cite{liu2021cross}} & \textbf{Ours} \\
\hline
\rowcolor{mygray}
Year & 2019 & 2019 & 2021 & 2020 & 2017 & 2018 & 2021 & 2020 & 2021 & $\backslash$ \\
\hline
\Xhline{2.\arrayrulewidth}
\textbf{IoU ($\uparrow$)} & $0.491$ & $0.496$ & $0.554$ & $0.439$ & $0.464$ & $0.509$ & $0.571$ & \underline{$0.579$} & \underline{$0.579$} & \bm{$0.593$} \\
\textbf{$E_{\phi}$ ($\uparrow$)} & $0.752$ & $0.744$ & $0.777$ & $0.714$ & $0.692$ & $0.761$ & $0.799$ & $0.793$ & \underline{$0.806$} & \bm{$0.822$} \\
\textbf{CC ($\uparrow$)} & $0.557$ & $0.626$ & $0.662$ & $0.565$ & $0.573$ & $0.638$ & \underline{$0.711$} & $0.710$ & $0.706$ & \bm{$0.713$} \\
\textbf{MAE ($\downarrow$)} & $0.086$ & $0.083$ & $0.083$ & $0.098$ & $0.138$ & $0.064$ & $0.063$ & \underline{$0.061$} & $0.062$ & \bm{$0.061$} \\
\textbf{$F_{\beta}$ ($\uparrow$)} & $0.571$ & $0.573$ & $0.630$ & $0.521$ & $0.503$ & $0.631$ & $0.644$ & \underline{$0.653$} & $0.650$ & \bm{$0.665$} \\
\hline
\Xhline{2.\arrayrulewidth}
\end{tabular}
\label{objective results}
\end{table*}
\begin{table*}[t]
\centering
\renewcommand{\arraystretch}{1.}
\renewcommand{\tabcolsep}{9.pt}
\caption{\textbf{The results of different methods on the PAD-L for each affordance category.} IoU is used as the evaluation metric. \textbf{Bold} and \underline{underline} indicate the best and the second-best scores, respectively.}
\begin{tabular}{c||cc|cc|cc|ccc||c}
\hline
\Xhline{2.\arrayrulewidth}
\textbf{Classes} & \makecell[c]{\textcolor[rgb]{0.8,0.0,0.3}{\textbf{BASNet}} \\ \textbf{\cite{Qin_2019_CVPR}}} & \makecell[c]{\textcolor[rgb]{0.8,0.0,0.3}{\textbf{CPD}} \\ \textbf{\cite{Wu_2019_CVPR}}} & \makecell[c]{\textcolor[rgb]{0.99,0.5,0.0}{\textbf{OSAD}} \\ \textbf{\cite{Oneluo}}} & \makecell[c]{\textcolor[rgb]{0.99,0.5,0.0}{\textbf{OAFFD}} \\ \textbf{\cite{zhao2020object}}} & \makecell[c]{\textcolor[rgb]{0.4,0.0,0.99}{\textbf{PSPNet}} \\ \textbf{\cite{zhao2017pspnet}}} & \makecell[c]{\textcolor[rgb]{0.4,0.0,0.99}{\textbf{DLabV3+}} \\ \textbf{\cite{chen2017rethinking}}} & \makecell[c]{\textcolor[rgb]{0.0,0.8,0.0}{\textbf{CMSA}} \\ \textbf{\cite{ye2021referring}}} & \makecell[c]{\textcolor[rgb]{0.1,0.8,0.1}{\textbf{BRINet}} \\ \textbf{\cite{hu2020bi}}} & \makecell[c]{\textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMPC}} \\ \textbf{\cite{liu2021cross}}} & \textbf{Ours} \\
\hline
\Xhline{2.\arrayrulewidth}
\rowcolor{mygrayd}
\textbf{Beat} & $0.548$ & $0.625$ & $0.808$ & $0.562$ & $0.572$ & $0.671$ & \underline{$0.813$} & \bm{$0.835$} & $0.779$ & $0.766$ \\
\textbf{Bounce} & $0.362$ & $0.524$ & $0.601$ & $0.376$ & $0.427$ & $0.564$ & $0.606$ & $0.642$ & \bm{$0.652$} & \underline{$0.616$} \\
\rowcolor{mygrayd}
\textbf{Brush} & $0.275$ & $0.369$ & $0.427$ & $0.267$ & $0.292$ & $0.395$ & \underline{$0.449$} & \bm{$0.450$} & $0.440$ & $0.443$ \\
\textbf{Contain-1} & $0.290$ & $0.393$ & $0.463$ & $0.313$ & $0.355$ & $0.404$ & \underline{$0.493$} & $0.489$ & $0.481$ & \bm{$0.508$ } \\
\rowcolor{mygrayd}
\textbf{Contain-2} & $0.447$ & $0.518$ & $0.573$ & $0.449$ & $0.483$ & $0.539$ & $0.608$ & $0.609$ & \underline{$0.618$} & \bm{$0.634$} \\
\textbf{Contain-3} & $0.485$ & $0.543$ & $0.593$ & $0.482$ & $0.511$ & $0.555$ & $0.631$ & $0.629$ & \underline{$0.635$} & \bm{$0.656$} \\
\rowcolor{mygrayd}
\textbf{Cut} & $0.448$ & $0.511$ & $0.557$ & $0.446$ & $0.471$ & $0.524$ & $0.594$ & $0.587$ & \underline{$0.595$} & \bm{$0.621$} \\
\textbf{Fork} & $0.433$ & $0.490$ & $0.538$ & $0.431$ & $0.455$ & $0.507$ & \underline{$0.575$} & $0.567$ & $0.574$ & \bm{$0.603$} \\
\rowcolor{mygrayd}
\textbf{Hit} & $0.420$ & $0.475$ & $0.531$ & $0.421$ & $0.446$ & $0.500$ & $0.561$ & $0.552$ & \underline{$0.562$} & \bm{$0.590$} \\
\textbf{Jump} & $0.395$ & $0.438$ & $0.502$ & $0.388$ & $0.404$ & $0.458$ & $0.523$ & $0.520$ & \underline{$0.526$} & \bm{$0.556$} \\
\rowcolor{mygrayd}
\textbf{Kick} & $0.409$ & $0.450$ & $0.516$ & $0.400$ & $0.410$ & $0.471$ & $0.531$ & $0.533$ & \underline{$0.536$} & \bm{$0.567$} \\
\textbf{Lie} & $0.442$ & $0.476$ & $0.541$ & $0.425$ & $0.439$ & $0.491$ & $0.547$ & $0.553$ & \underline{$0.554$} & \bm{$0.579$} \\
\rowcolor{mygrayd}
\textbf{Lift} & $0.445$ & $0.480$ & $0.546$ & $0.429$ & $0.443$ & $0.494$ & $0.549$ & $0.554$ & \underline{$0.557$} & \bm{$0.581$} \\
\textbf{Look Out} & $0.448$ & $0.484$ & $0.549$ & $0.433$ & $0.447$ & $0.499$ & $0.552$ & $0.557$ & \underline{$0.558$} & \bm{$0.583$} \\
\rowcolor{mygrayd}
\textbf{Mix} & $0.465$ & $0.488$ & $0.542$ & $0.428$ & $0.449$ & $0.488$ & $0.541$ & $0.541$ & \underline{$0.547$} & \bm{$0.568$} \\
\textbf{Pick Up} & $0.469$ & $0.488$ & $0.541$ & $0.427$ & $0.448$ & $0.486$ & $0.538$ & $0.538$ & \underline{$0.546$} & \bm{$0.567$} \\
\rowcolor{mygrayd}
\textbf{Play-1} & $0.483$ & $0.498$ & $0.553$ & $0.435$ & $0.461$ & $0.503$ & $0.551$ & $0.551$ & \underline{$0.559$} & \bm{$0.578$} \\
\textbf{Play-2} & $0.497$ & $0.513$ & $0.563$ & $0.447$ & $ 0.476$ & $0.517$ & $0.561$ & $0.564$ & \underline{$0.570$} & \bm{$0.589$}\\
\rowcolor{mygrayd}
\textbf{Play-3} & $0.493$ & $0.519$ & $0.572$ & $0.452$ & $0.483$ & $0.525$ & $0.570$ & $0.572$ & \underline{$0.578$} & \bm{$0.596$} \\
\textbf{Play-4} & $0.499$ & $0.519$ & $0.574$ & $0.451$ & $0.484$ & $0.526$ & $0.571$ & \underline{$0.579$} & $0.578$ & \bm{$0.596$} \\
\rowcolor{mygrayd}
\textbf{Ride} & $0.502$ & $0.518$ & $0.575$ & $0.455$ & $0.486$ & $0.528$ & $0.574$ & $0.575$ & \underline{$0.580$} & \bm{$0.597$} \\
\textbf{Roll Dough} & $0.500$ & $0.518$ & $0.576$ & $0.454$ & $0.486$ & $0.530$ & $0.574$ & $0.576$ & \underline{$0.580$} & \bm{$0.598$} \\
\rowcolor{mygrayd}
\textbf{Rolling} & $0.500$ & $0.515$ & $0.570$ & $0.456$ & $0.479$ & $0.525$ & $0.579$ & $0.580$ & \underline{$0.588$} & \bm{$0.603$} \\
\textbf{Scoop} & $0.501$ & $0.511$ & $0.564$ & $0.451$ & $0.473$ & $0.517$ & $0.572$ & $0.573$ & \underline{$0.580$} & \bm{$0.596$} \\
\rowcolor{mygrayd}
\textbf{Shelter} & $0.495$ & $0.504$ & $0.556$ & $0.445$ & $0.465$ & $0.514$ & $0.574$ & $0.575$ & \underline{$0.581$} & \bm{$0.595$} \\
\textbf{Sit} & $0.499$ & $0.505$ & $0.559$ & $0.446$ & $0.469$ & $0.516$ & $0.572$ & $0.574$ & \underline{$0.581$} & \bm{$0.595$} \\
\rowcolor{mygrayd}
\textbf{Swing} & $0.494$ & $0.499$ & $0.555$ & $0.440$ & $0.461$ & $0.507$ & $0.572$ & $0.574$ & \underline{$0.581$} & \bm{$0.596$} \\
\textbf{Take Photo} & $0.494$ & $0.499$ & $0.555$ & $0.441$ & $0.461$ & $0.508$ & $0.573$ & $0.574$ & \underline{$0.581$} & \bm{$0.596$} \\
\rowcolor{mygrayd}
\textbf{Throw} & $0.491$ & $0.498$ & $0.555$ & $0.438$ & $0.458$ & $0.510$ & $0.571$ & $0.576$ & \underline{$0.580$} & \bm{$0.594$} \\
\textbf{Wear-1} & $0.492$ & $0.499$ & $0.557$ & $0.441$ & $0.462$ & $0.513$ & $0.574$ & $ 0.581$ & \underline{$0.584$} & \bm{$0.597$} \\
\rowcolor{mygrayd}
\textbf{Wear-2} & $0.491$ & $0.496$ & $0.553$ & $0.439$ & $0.459$ & $0.510$ & $0.571$ & \underline{$0.579$} & $0.577$ & \bm{$0.593$} \\
\hline
\Xhline{2.\arrayrulewidth}
\end{tabular}
\label{Table:IoU}
\end{table*}
\section{Experiments} \label{exp}
\begin{figure*}[t]
\centering
\small
\begin{overpic}[width=0.99\linewidth]{figure/comparefig.pdf}
\put(6.5,-1){\textbf{Phrases}}
\put(20,-1){\textbf{Image}}
\put(33,-1){\textbf{GT}}
\put(44,-1){\textbf{Ours}}
\put(53.5,-1){\textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMPC}} \textbf{\cite{liu2021cross}}}
\put(66,-1){\textcolor[rgb]{0.8,0.0,0.3}{\textbf{CPD}} \textbf{\cite{Wu_2019_CVPR}}}
\put(77,-1){\textcolor[rgb]{0.99,0.5,0.0}{\textbf{OSAD}} \textbf{\cite{Oneluo}}}
\put(87,-1){\textcolor[rgb]{0.4,0.0,0.99}{\textbf{DeepLabV3+}} \textbf{\cite{chen2017rethinking}}}
\end{overpic}
\caption{\textbf{Visual results obtained by different models}, including \textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMPC}} \cite{liu2021cross}, \textcolor[rgb]{0.8,0.0,0.3}{\textbf{CPD}} \cite{Wu_2019_CVPR}, \textcolor[rgb]{0.99,0.5,0.0}{\textbf{OSAD}} \cite{Oneluo} and \textcolor[rgb]{0.4,0.0,0.99}{\textbf{DeepLabV3+}} \cite{chen2017rethinking}.}
\label{FIG:visual}
\end{figure*}
This section elaborates the experimental details, including experiment settings, results, and analysis. Section \ref{settings} presents the evaluation metrics and the comparison methods we choose.
In Section \ref{implement}, we describe the implementation details of our experiments. Section \ref{analysis} analyzes the results of our model. Section \ref{ablation} presents the ablation study.
\subsection{Settings} \label{settings}
We choose five broadly used metrics to comprehensively evaluate the performance of different methods, \emph{i.e.}, Intersection over Union (IoU), F-measure ($F_{\beta}$), E-measure ($E_{\phi}$), Pearson's Correlation Coefficient (CC), and Mean Absolute Error (MAE). More details could be found in the Appendix.\par
To illustrate the superiority of our model, we compare several different kinds of methods, which involve \textbf{two} \textcolor[rgb]{0.8,0.0,0.3}{\textbf{Salient Detection} models (BASNet, CPD)}, \textbf{two} \textcolor[rgb]{0.99,0.5,0.0}{\textbf{Affordance Detection} models (OSAD-Net, OAFFD)}, \textbf{two} \textcolor[rgb]{0.4,0.0,0.99}{\textbf{Semantic Segmentation} models (PSPNet, DeepLabV3+)}, and \textbf{three} \textcolor[rgb]{0.1,0.8,0.1}{\textbf{Referring Segmentation} models (CMSA, BRINet, CMPC)}. More details could be found in the Appendix.\par
\subsection{Implementation Details} \label{implement}
Our method is implemented in TensorFlow. For visual feature extraction, we choose the DeepLab-ResNet101 network \cite{chen2017deeplab}, pre-trained on the PASCAL-VOC dataset \cite{everingham2010pascal}, as the backbone\footnote{The pretrained DeepLab-ResNet101 model can be downloaded at \href{https://drive.google.com/drive/folders/0B_rootXHuswsZ0E4Mjh1ZU5xZVU?resourcekey=0-9Ui2e1br1d6jymsI6UdGUQ}{Link}.}. We use the outputs of the DeepLab blocks \textit{Res3}, \textit{Res4} and \textit{Res5} as the input multi-level visual features $\{I_3, I_4, I_5\}$. The parameters of the backbone are kept fixed during the training phase. During training, the input images are randomly cropped from $360 \times 360$ to $320 \times 320$ with random horizontal flipping. The multi-level visual feature dimension $C_I$ is set to 1000 in this paper.\par
Meanwhile, for linguistic feature extraction, we first adopt the GloVe word embeddings \cite{pennington-etal-2014-glove} pre-trained on Common Crawl (840B tokens) to initialize the parameters of the embedding layers; an LSTM is then employed as the language feature extractor. The LSTM shares parameters across phrases. The phrases corresponding to each image are selected from the phrase annotations according to affordance categories. The number of phrases per image is set to $4$, and each phrase is embedded into a vector of $C_l = 1000$ dimensions. \par
We train the model using the Adam optimizer \cite{kingma2014adam}. The learning rate is initialized to $2.5 \times 10^{-4}$ with a weight decay of $5\times 10^{-4}$, and gradually decreases following a polynomial policy with a power of 0.9. The model is trained on an NVIDIA RTX3080 GPU for 100 epochs with a batch size of 1.
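As an illustration, the polynomial policy described above can be written as follows; the total number of steps is an assumption, since the number of iterations per epoch is not specified here.
\begin{verbatim}
def poly_lr(step, total_steps, base_lr=2.5e-4, power=0.9):
    # polynomial decay: base_lr * (1 - step / total_steps) ** power
    return base_lr * (1.0 - step / total_steps) ** power

total_steps = 100 * 3000    # 100 epochs x assumed 3000 iterations per epoch
for s in (0, total_steps // 2, total_steps - 1):
    print(s, poly_lr(s, total_steps))
\end{verbatim}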
\begin{figure*}[t]
\centering
\begin{overpic}[width=1.\linewidth]{figure/select.pdf}
\end{overpic}
\caption{\textbf{Single image $with$ Different descriptions.} When multiple objects with various affordances appear in the same image, our model is expected to highlight the correct object according to the description phrases. The phrases in red indicate the results in the second column, while the blue ones refer to the blue objects in the third column.}
\label{FIG:sing}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{overpic}[width=1.\linewidth]{figure/multiple2single.pdf}
\end{overpic}
\caption{\textbf{Multiple images $with$ Same descriptions.} When we use the same set of affordance phrases, our model is required to segment the objects with the same affordance in multiple images regardless of the appearance variations. The resulting regions are highlighted in red. Phrases in blue indicate affordance descriptions for the images that follow in the same row.}
\label{FIG:multiple}
\end{figure*}
\subsection{Results Analysis}\label{analysis}
We conduct a comprehensive and thorough analysis of the proposed model in this section:
\begin{comment}
\textbf{\emph{\underline{Question \#1:}}} Does our method outperform nine comparison methods chosen from four different tasks? \textbf{(In Section \ref{results})}
\textbf{\emph{\underline{Question \#2:}}} For a single image containing multiple instances with different affordances, does our model have the capability to detect different objects with the guidance from different phrase descriptions? \textbf{(In Section \ref{SingleToDiff})}
\textbf{\emph{\underline{Question \#3:}}} For several different images containing objects with the same affordance, does our model have the capability to segment these objects according to the same set of phrases? \textbf{(In Section \ref{DiffToSingle})}
\end{comment}
\textbf{\emph{Comparison with other methods:}} \label{results}
We compare our method with several other methods chosen from four fields: semantic segmentation, salient object detection, affordance detection, and referring segmentation. The results on the objective metrics are shown in Table \ref{objective results} and reveal that our method surpasses all other methods on all metrics. In particular, in terms of IoU, $E_{\phi}$ and $F_{\beta}$, our model achieves $2.4\%$, $2.0\%$ and $1.8\%$ performance improvements, respectively. Notably, the table shows that multi-modal methods generally achieve better performance than methods from the other fields because of the additional language information they use, and the cyclic interaction mechanism provides our method with better alignment still. We also present subjective visualization results in Fig. \ref{FIG:visual}. Our method generates more precise segmentation masks, closer to the ground truth, than the other methods, which indicates that our model can effectively capture the relationship between vision and language. Compared with the multi-modal method CMPC, our approach introduces less noise into the background because of the more accurate alignment of cross-modal information. For the other methods, unexpected objects may be segmented incorrectly because of the absence of the necessary language guidance; some failure cases are shown in the figure.
\par
We also report the IoU scores of all methods for every affordance category in Table \ref{Table:IoU}, which further demonstrates the superior performance of our proposed model. Our model achieves the best performance in almost all categories, the exceptions being the ``Beat'', ``Bounce'' and ``Brush'' classes. Our highest IoU score ($0.766$) occurs in the affordance class ``Beat'', which only contains the object ``drum'' with similarly simple and regular shapes. The lowest IoU score ($0.443$) appears in the class ``Brush'', containing the object ``toothbrush'', which is small and has complicated geometry. This shows that for objects with simple and regular shapes our model obtains higher IoU scores, while it slightly underperforms on small objects.
\textbf{\emph{Single image $with$ Different descriptions:}} \label{SingleToDiff}
To better comprehend the surrounding scene, when there are multiple objects with different affordances in the same image, our model is expected to segment the corresponding regions based on the natural language descriptions. Some examples are shown in Fig. \ref{FIG:sing} and illustrate that our proposed model can align vision and language information correctly even as the language changes.
\begin{table}[t]
\caption{The influence of the \textbf{number of query phrases} on the performance.}
\centering
\renewcommand{\arraystretch}{1.}
\renewcommand{\tabcolsep}{7.pt}
\begin{tabular}{c||c|c|c|c|c}
\hline
\Xhline{2.\arrayrulewidth}
\rowcolor{mygray}
N & IoU ($\uparrow$) & $F_{\beta}$ ($\uparrow$) & $E_{\phi}$ ($\uparrow$) & CC ($\uparrow$) & MAE ($\downarrow$) \\
\hline
\Xhline{2.\arrayrulewidth}
1 & $0.532$ & $0.565$ & $0.728$ & $0.671$ & $0.089$ \\
2 & $0.563$ & $0.607$ & $0.759$ & $0.700$ & $0.076$ \\
3 & $0.580$ & $0.633$ & $0.788$ & $0.707$ & $0.068$ \\
4 (Ours) & \bm{$0.593$} & \bm{$0.665$} & \bm{$0.822$} & \bm{$0.713$} & \bm{$0.061$}\\
5 & \underline{$0.585$} & \underline{$0.637$} & \underline{$0.792$} & $0.711$ & \underline{$0.067$} \\
6 & \underline{$0.585$} & $0.611$ & $0.760$ & \underline{$0.712$} & $0.077$ \\
\hline
\Xhline{2.\arrayrulewidth}
\end{tabular}
\label{tab:num_influence}
\end{table}
\begin{table}[t]
\caption{The influence of different \textbf{language encoders}. \textbf{Bold} and \underline{underline} indicate the best and the second-best scores, respectively.}
\centering
\renewcommand{\arraystretch}{1.}
\renewcommand{\tabcolsep}{6.pt}
\begin{tabular}{c||c|c|c|c|c}
\hline
\Xhline{2.\arrayrulewidth}
\rowcolor{mygray}
Encoder & IoU ($\uparrow$) & $F_{\beta}$ ($\uparrow$) & $E_{\phi}$ ($\uparrow$) & CC ($\uparrow$) & MAE ($\downarrow$) \\
\hline
\Xhline{2.\arrayrulewidth}
LSTM \cite{6795963} & \bm{$0.593$} & \bm{$0.665$} & \bm{$0.822$} & \bm{$0.713$} & \bm{$0.061$} \\
BERT \cite{devlin2018bert} & \underline{$0.513$} & \underline{$0.604$} & \underline{$0.783$} & \underline{$0.633$} & \underline{$0.075$} \\
ELMo \cite{Peters2018DeepCW} & $0.498$ & $0.580$ & $0.775$& $0.631$ & $0.079$ \\
\hline
\Xhline{2.\arrayrulewidth}
\end{tabular}
\label{tab:languge_encoder}
\end{table}
\begin{table}[t]
\caption{The influence of \textbf{the number of cycles}, denoted $N$ in the first column. In this paper, we employ the CIM module once, which can be regarded as the baseline. \textbf{Bold} and \underline{underline} indicate the best and the second-best scores, respectively.}
\centering
\renewcommand{\arraystretch}{1.}
\renewcommand{\tabcolsep}{6.pt}
\begin{tabular}{c||c|c|c|c|c}
\hline
\Xhline{2.\arrayrulewidth}
\rowcolor{mygray}
N & IoU ($\uparrow$) & $F_{\beta}$ ($\uparrow$) & $E_{\phi}$ ($\uparrow$) & CC ($\uparrow$) & MAE ($\downarrow$) \\
\hline
\Xhline{2.\arrayrulewidth}
1 (Baseline) & \bm{$0.593$} & \underline{$0.665$} & \underline{$0.822$} & \bm{$0.713$} & \underline{$0.061$} \\
2 & \underline{$0.588$} & \bm{$0.670$} & \bm{$0.830$} & $0.682$ & \bm{$0.059$} \\
3 & $0.572$ & $0.633$ & $0.788$ & \underline{$0.707$} & $0.068$ \\
\hline
\Xhline{2.\arrayrulewidth}
\end{tabular}
\label{tab:num_cim}
\end{table}
\textbf{\emph{Multiple images $with$ Same descriptions:}} \label{DiffToSingle}
In practice, multiple objects may have the same affordance despite significant appearance variations in terms of color, shape, and texture. Therefore, our model is expected to identify the corresponding objects in different images regardless of these variations. Some examples are illustrated in Fig. \ref{FIG:multiple}. From the examples, we can see that for the same set of phrases, the corresponding referred objects described by the phrases are highlighted correctly, which shows that our model can cope with the multiplicity property of affordance.
\subsection{Ablation Study} \label{ablation}
In this section, we conduct an ablation study to investigate the effects of different modules and hyper-parameter settings. We consider the following factors: the number of input phrases, the language encoder, and the number of cyclic iterations of the CIM module. \par
\textbf{\emph{Number of Text Phrases:}}
To explore the influence of the number of input phrases, we set the phrase number to $N=1,2,3,4,5,6$, respectively. The results, shown in Table \ref{tab:num_influence}, illustrate that the phrase number influences all five metrics and suggest that taking four phrases as input lets the model capture information most effectively. Contrary to intuition, performance does not keep improving as the number of phrases increases beyond four; our model may have reached a bottleneck at that point.
\textbf{\emph{Language Encoder Method:}} We also explore the effect of different language encoders. We replace the LSTM with two popular pre-trained language models, BERT \cite{devlin2018bert} and ELMo \cite{Peters2018DeepCW}. The results, illustrated in Table \ref{tab:languge_encoder}, show that the LSTM outperforms the other two language encoders on this task. A possible reason is that the two pre-trained language models are better suited to long sentences with rich textual context, whereas the affordance descriptions in the proposed PAD-L are all short phrases, which may limit their capabilities.
\textbf{\emph{Number of Cycles:}} The semantic consistency is enhanced in a cyclic and bilateral manner. To investigate the effect of the number of cycles, we repeat the CIM module several times. The results, shown in Table \ref{tab:num_cim}, demonstrate that more cycles do not necessarily lead to better performance. We set cycling once as the baseline. When CIM is repeated twice, the performance exceeds the baseline on several metrics; however, it falls below the baseline when cycling three times. Our model may get stuck in over-fitting as the number of cycles increases.
\section{Conclusion and Discussion} \label{conclude}
In this paper, we propose a novel phrase-based affordance detection task. First, based on the previously proposed PAD dataset, we annotate the affordance categories with short phrases to construct a new multi-modal dataset, PAD-Language (PAD-L). Then, to better align textual and visual features, we adopt a novel cyclic and bilateral mechanism to cope with the problem caused by the inherent multiplicity property of affordance. Specifically, we design a Cyclic Bilateral Consistency Enhancement Network (CBCE-Net), which consists of three main modules, the Vision guide Language Module (VLM), the Language guide Vision Module (LVM), and the Cyclic Interaction Module (CIM), to improve feature representations cyclically and bilaterally. Compared with nine relevant methods, our model achieves the best results in terms of all five evaluation metrics. \par
Our approach also has some limitations. First, it may not achieve satisfactory results in complicated scenes containing many objects; to improve it, we could adopt a locate-then-segment framework that first locates objects \cite{wu2021background} and then generates the mask. Second, our approach aims at detecting all possible objects in the image and cannot identify the one that best fits the intention; a ranking mechanism could be introduced to segment the objects that best match the actual scene. \par
In the future, based on PAD-L, more work could be done to explore the combination of multi-modal applications and affordance. For instance, exploring affordance detection in videos with natural language instructions would be a promising topic.
\bibliographystyle{IEEEtran}
\section{Appendix}
We show the details of our evaluation metrics in Section \ref{setting} and of the comparison methods in Section \ref{compare}.
Also, more examples of phrase annotations are shown in Table \ref{Affordance Description table}.
\begin{table*}[h!] \label{examples}
\centering
\renewcommand{\arraystretch}{1.}
\renewcommand{\tabcolsep}{2.pt}
\caption{More examples of phrase descriptions of affordances. \textbf{PA}, \textbf{F}, \textbf{AF}, \textbf{E} denote \textbf{P}otential \textbf{A}ctions, \textbf{F}unction, \textbf{A}ppearance \textbf{F}eature and \textbf{E}nvironment, respectively, in the table. Note that we only show the base form of the corresponding verbs.}
\label{Affordance Description table}
\begin{tabular}{c||c|c}
\Xhline{2.\arrayrulewidth}
\hline
\textbf{Affordance Class} & \textbf{Object Class} & \textbf{Phrase Descriptions Examples} \\
\hline
\Xhline{2.\arrayrulewidth}
\rowcolor{mygray}
\textbf{\normalsize{Kick}} & \multicolumn{1}{m{3.7cm}|}{\small{soccer ball, punching bag}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: move fast, trash out or strike, make a motion with feet or fist toward an object, strike out with feet, punt, physical strike, ...
\qquad
\textbf{E}: outdoor activities (soccer ball) } \\
\textbf{\normalsize{Sit}} & \multicolumn{1}{m{3.7cm}|}{\small{bench, sofa, stool, wheelchair}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: sit, sit down, seat, lounge, recline, be seated, sit in, lean back, lean over, lean against, ...
\quad
\textbf{F}: rest, take a rest, sleep, nap, take a break, have a rest, give feet a rest...
} \\
\rowcolor{mygray}
\textbf{\normalsize{Throw}} & \multicolumn{1}{m{3.7cm}|}{\small{frisbee, rugby ball}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}:
throw, deliver, pass, toss, toss using hands, throw away, throw forcefully, cast, ...
\quad
\textbf{E}: outdoor, out-of-doors,
} \\
\textbf{\normalsize{Shelter}} & \multicolumn{1}{m{3.7cm}|}{\small{umbrella}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: shelter, raise, lift, move up, carry, take, grip handle, take, ... \quad
\textbf{F}: cover for, protect, shade, shield \quad
\textbf{E}: in the sun, in the rain, outdoor, \quad
\textbf{AF}: circular cover
} \\
\rowcolor{mygray}
\textbf{\normalsize{Beat}} & \multicolumn{1}{m{3.7cm}|}{\small{drum}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: beat, strike, hit, strike rapidly, hit in rhythm, pulse, beat in rhythm, clout, punch, pound, ... \quad
\textbf{F}: play, sound, create sound, make sound, produce sound
}\\
\textbf{\normalsize{Hit}} & \multicolumn{1}{m{3.7cm}|}{\small{axe, hammer}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: hit, deliver an impulsive fore by striking, strike, can be lifted \quad
\textbf{F}: hit, chop, split, cut, cleave \quad
\textbf{AF}: sharp blade, knife-edged \quad
\textbf{E}: usually appears along with wood
} \\
\rowcolor{mygray}
\textbf{\normalsize{Cut}} & \multicolumn{1}{m{3.7cm}|}{\small{knife, scissors}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: cut, hold, use, sharpen, grasp, raise, slash, pull into, hold the handle, ... \quad
\textbf{F}: separate, slice, chop, divide, part, trim, ... \quad
\textbf{AF}: sharp edge, usually made of metal \quad
} \\
\textbf{\normalsize{Lie}} & \multicolumn{1}{m{3.7cm}|}{\small{baby bed, bench, sofa}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: lie, lie down, sit down, recline or lay down, lean back, lean over, be recumbent, sit back, lie on the side, prostrate, lean, ... \quad
\textbf{F}: take a break, sleep, rest, repose, ... \quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Lift}} & \multicolumn{1}{m{3.7cm}|}{\small{dumbbell}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: lift, lift up, raise, grab, put down, pick up, take down, push, hold up, uplift, cause to raise, hold high, \quad
\textbf{F}: exercise, used for exercise of muscle-building \quad
\textbf{E}: indoor exercise
} \\
\textbf{\normalsize{Pick up}} & \multicolumn{1}{m{3.7cm}|}{\small{chopsticks}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: take and lift upward, hold, grasp, move up and down, hold and lift \quad
\textbf{F}: pass food, kitchen utensil \quad
\textbf{E}: usually appears in kitchen or dining table\quad
\textbf{AF}: usually are made of wood\quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Rolling}} & \multicolumn{1}{m{3.7cm}|}{\small{baseball, croquet ball, golf ball, table tennis ball, tennis ball}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: rolling, move, can roll, move by rotating, roll over, rotate rapidly, turn round and round, rotate, move fast, spin, whirl, move around an axis or a center, cycle, revolve, change orientation or direction, twirl revolve \quad
\textbf{AF}: spherical
} \\
\textbf{\normalsize{Mix}} & \multicolumn{1}{m{3.7cm}|}{\small{chopsticks, spoon, whisk}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: mix, blend, mix together, fuse, grasp, hold, merge, move circularly, move around, agitate, ... \quad
\textbf{F}: kitchen tools \quad
\textbf{E}: usually appears in kitchen or dining table,
} \\
\rowcolor{mygray}
\textbf{\normalsize{Jump}} & \multicolumn{1}{m{3.7cm}|}{\small{skateboard, skis, snowboard, surfboard}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: jump, turn at high speed, move forward, move fast, travel fast, perform a leap, accelerate, make a turn, speed, turn left, turn right, make a turn, speed up, ... \quad
\textbf{E}: outdoor activities.
} \\
\textbf{\normalsize{Fork}} & \multicolumn{1}{m{3.7cm}|}{\small{fork}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: fork, fork up, move up and down, hold handle \quad
\textbf{F}: pass food, pick up food, used for cook, lift food\quad
\textbf{E}: appears in kitchen or dining table, used with knife\quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Scoop}} & \multicolumn{1}{m{3.7cm}|}{\small{spatula, spoon}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: scoop, scoop out, scoop up, take up, ladle out, hold the handle, grasp the handle, lade, take out or up \quad
\textbf{E}: appears in the kitchen or the dining table \quad
\textbf{AF}: concave shape
} \\
\textbf{\normalsize{Swing}} & \multicolumn{1}{m{3.7cm}|}{\small{baseball bat, table tennis bat, tennis racket}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: swing, change location by moving back and forth, change direction, cause to move around, swing back, swing forward, swing back and forth, try to hit something \quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Take photo}} & \multicolumn{1}{m{3.7cm}|}{\small{camera}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: shoot, take a shot of, target to, adjust, put in front of eyes, aim at, raise up to eyes, bring up to eyes, snap, keep \quad
\textbf{F}: take a photo of, get pictures of, capture in a photo
} \\
\textbf{\normalsize{Bounce}} & \multicolumn{1}{m{3.7cm}|}{\small{basketball}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: bounce, spring back, move up and down, rebound, bounce back, move quickly back and forth, pass, bounce against, ... \quad
\textbf{AF}: bouncy, spherical, rubber or synthetic material, ... \quad
\textbf{E}: usually in door, team sport
} \\
\rowcolor{mygray}
\textbf{\normalsize{Contain-1}} & \multicolumn{1}{m{3.7cm}|}{\small{backpack, gift box, handbag, purse, suitcase}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: contain, take, hold, have within, pack, pack into, place within, hold in, fill up, load up, make full \quad
\textbf{F}: hold household items, hold inside, store, be capable of holding
} \\
\textbf{\normalsize{Contain-2}} & \multicolumn{1}{m{3.7cm}|}{\small{beaker, beer bottle, bowl, cup or mug, milk can, pitcher, soap dispenser, vase, watering can}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: contain, pour, hold, pour in, pour out, decant, flow, store, keep, hold in, carry, bear, have within, include, take, pour off, hold in hands, dribble, spill, ... \quad
\textbf{AF}: depression in the middle, open-top container, contain liquid, liquid container, ...
} \\
\rowcolor{mygray}
\textbf{\normalsize{Contain-3}} & \multicolumn{1}{m{3.7cm}|}{\small{bowl, frying pan}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: contain, store, hold in both hands up\quad
\textbf{F}: prepare for food, hold and store food \quad
\textbf{AF}: the center is depressed, depression in the middle \quad
\textbf{E}: usually appears in kitchen or dining table
} \\
\textbf{\normalsize{Play-1}} & \multicolumn{1}{m{3.7cm}|}{\small{cell, erhu fiddle, viola, violin}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: play, bow, fiddle, chord, press strings, squeeze the bow, move bow across strings, grip the bow, ... \quad
\textbf{F}: make sound, make music, produce sound, stringed instruments, ...
}\\
\rowcolor{mygray}
\textbf{\normalsize{Play-2}} & \multicolumn{1}{m{3.7cm}|}{\small{banjo, guitar, harp, pipa}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: play, carry, move fingers up and down, pluck fingers, press the string, perform, pull slightly but sharply, ... \quad
\textbf{F}: make sound, make music, produce sound, stringed musical instrument, ...
} \\
\textbf{\normalsize{Play-3}} & \multicolumn{1}{m{3.7cm}|}{\small{accordion, piano}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: play, tune, press keys, move fingers, touch, manipulate, squeeze, ... \quad
\textbf{F}: make sound, produce music, make music, ...
} \\
\rowcolor{mygray}
\textbf{\normalsize{Play-4}} & \multicolumn{1}{m{3.7cm}|}{\small{flute, frenchhorn, harmonica, trumpet}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: play, tune, hold, blow air into the instrument, raise to lip, perform, push aside mouth, lift to lip, carry, blow through mouth, carry, wind, ... \quad
\textbf{F}: make sound, make music, produce sound
} \\
\textbf{\normalsize{Ride}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{bicycle, motorbike}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: ride, push down with foot, pedal, turn left, move rapidly, pull, control motion, slow down, stop, ... \quad
\textbf{F}: travel, change location, travel fast, ... \quad
\textbf{E}: outdoor
} \\
\rowcolor{mygray}
\textbf{\normalsize{Brush}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{toothbrush}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: brush, grasp the handle, hold handle, touch lightly and briefly, ... \quad
\textbf{F}: clean, sweep, rub, sweep across or over, wash, clean tooth, ... \quad
\textbf{AF}: head attached to a handle, a head of tightly clustered bristles, ... \quad
\textbf{E}: often appears beside a sink within the kitchen or bathroom, ...
} \\
\textbf{\normalsize{Roll dough}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{rolling pin}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: roll, press, roll the rod across the dough, grasp the handle, shape, shape by rolling, squeeze, shape by rolling, exert a force with a heavy weight, ... \quad
\textbf{AF}: cylindrical, ... \quad
\textbf{E}: appear in the kitchen, ... \quad
\textbf{F}: food preparation utensil, kitchen stuff, ... \quad
} \\
\rowcolor{mygray}
\textbf{\normalsize{Wear-1}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{hat, helmet}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: wear, put on, take off, dress, be dressed in, be clothed in, carry, get dressed, hold, keep, raise, cover, have on, ... \quad
\textbf{F}: decorate, protect against, shelter from the sun, head covering, have on, used for warmth, ...
} \\
\textbf{\normalsize{Wear-2}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{glasses}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: wear, wear on face, take off, put off, put on, raise, get, ... \quad
\textbf{AF}: two pieces of glasses, \quad
\textbf{F}: improve vision, protect eyes, used for decoration, ...
} \\
\rowcolor{mygray}
\textbf{\normalsize{Look Out}} & \multicolumn{1}{m{3.7cm}|}{\normalsize{binoculars}} & \multicolumn{1}{m{11cm}}{
\textbf{PA}: look out, adjust, hold in hands, target to, focus, look at, set the focus, put in front of eyes, aim at, zoom, bring up to eyes, turn the focus wheel, align with view, adjust, ... \quad
\textbf{F}: see clearly \quad
\textbf{E}: outdoor \quad
\textbf{AF}: two lens, two telescopes mounted side by side
} \\
\hline
\Xhline{2.\arrayrulewidth}
\end{tabular}
\end{table*}
\subsection{Benchmark Setting} \label{setting}
We choose five broadly used metrics to comprehensively evaluate the performance of different methods, \emph{i.e.}, Intersection over Union (IoU), F-measure ($F_{\beta}$), E-measure ($E_{\phi}$), Pearson's Correlation Coefficient (CC), and Mean Absolute Error (MAE). We introduce them briefly as follows:
\begin{itemize}[leftmargin=*]
\item
\textbf{Intersection over Union (IoU) \cite{long2015fully}}: IoU is a common pixel-level evaluation metric that measures the overlap between the predicted mask and the ground truth mask. It is defined as the ratio of the area of overlap to the area of union (a minimal numerical sketch of IoU and MAE follows this list).
\item
\textbf{F-measure ($F_{\beta}$) \cite{arbelaez2010contour}}: $F_{\beta}$ is a widely used metric which simultaneously considers both recall $R$ and precision $P$, where $P$ is the number of true positive results divided by the number of all positive results and $R$ is the number of true positive results divided by the number of all samples that should have been identified as positive.
\item
\textbf{E-measure ($E_{\phi}$) \cite{fan2018enhanced}}: $E_{\phi}$ is a measurement which jointly utilizes local and global information to evaluate the difference between the ground-truth and predicted mask.
\item
\textbf{Pearson's Correlation Coefficient (CC) \cite{le2007predicting}}: CC is broadly applied to measure the linear correlation between two variables. In this paper, we employ CC to measure the relevance of the predicted map and the ground truth.
\item
\textbf{Mean Absolute Error (MAE) \cite{perazzi2012saliency}}: MAE measures the average over the absolute differences of the normalized predicted map and the ground-truth mask.
\end{itemize}
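As a minimal illustration, the following NumPy sketch implements two of the five metrics, IoU and MAE, for binary masks; the binarization threshold of $0.5$ is an assumption, and the $F_{\beta}$, $E_{\phi}$ and CC implementations are more involved and omitted here.
\begin{verbatim}
import numpy as np

def iou(pred, gt, thr=0.5):
    p, g = pred >= thr, gt >= 0.5
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else 1.0

def mae(pred, gt):
    return np.abs(pred.astype(float) - gt.astype(float)).mean()

rng = np.random.default_rng(0)
gt = (rng.random((40, 40)) > 0.5).astype(float)
pred = np.clip(gt + 0.2 * rng.standard_normal((40, 40)), 0.0, 1.0)
print(iou(pred, gt), mae(pred, gt))
\end{verbatim}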
The evaluation code can be found at \url{https://github.com/lhc1224/OSAD_Net/tree/main/PyMetrics}.
\subsection{Comparison Methods} \label{compare}
To illustrate the superiority of our model, we compare several different kinds of methods, which involve \textbf{two} \textcolor[rgb]{0.8,0.0,0.3}{\textbf{Salient Detection} models (BASNet, CPD)}, \textbf{two} \textcolor[rgb]{0.99,0.5,0.0}{\textbf{Affordance Detection} models (OSAD-Net, OAFFD)}, \textbf{two} \textcolor[rgb]{0.4,0.0,0.99}{\textbf{Semantic Segmentation} models (PSPNet, DeepLabV3+)}, and \textbf{three} \textcolor[rgb]{0.1,0.8,0.1}{\textbf{Referring Segmentation} models (CMSA, BRINet, CMPC)}.
\begin{itemize}[leftmargin=*]
\item
\textcolor[rgb]{0.8,0.0,0.3}{\textbf{BASNet}} \cite{Qin_2019_CVPR}: \textbf{B}oundary-\textbf{A}ware \textbf{S}egmentation \textbf{N}etwork consists of a predict-refine architecture and a hybrid loss. The predict-refine architecture consists of an encoder-decoder network and a refinement module that predict and refine the segmentation probability map, respectively.
\item
\textcolor[rgb]{0.8,0.0,0.3}{\textbf{CPD}} \cite{Wu_2019_CVPR}: The \textbf{C}ascaded \textbf{P}artial \textbf{D}ecoder (CPD) framework leverages a partial decoder to discard large-resolution features in shallower layers and integrates features of deeper layers to generate a more precise saliency map.
\item
\textcolor[rgb]{0.99,0.5,0.0}{\textbf{OSAD-Net}} \cite{Oneluo}: \textbf{O}ne \textbf{S}hot \textbf{A}ffordance \textbf{D}etection \textbf{N}etwork first learns the intention of the human action and then transfers it to query images to segment objects with the same affordance by collaborative learning.
\item
\textcolor[rgb]{0.99,0.5,0.0}{\textbf{OAFFD}} \cite{zhao2020object}: OAFFD-Net mainly combines CoordConv and ASPP to refine the feature maps, and designs a relationship-aware module to explore the relationships between objects and affordance.
\item
\textcolor[rgb]{0.4,0.0,0.99}{\textbf{PSPNet}} \cite{zhao2017pspnet}: \textbf{P}yramid \textbf{S}cene \textbf{P}arsing \textbf{Net}work utilizes a pyramid parsing module to exploit global context information. Thus the local and global clues are used together to improve the performance in semantic segmentation task.
\item
\textcolor[rgb]{0.4,0.0,0.99}{\textbf{DeepLabV3+}} \cite{chen2017rethinking}: \textbf{DeepLabV3+} applies the depthwise separable convolution to an \textbf{A}trous \textbf{S}patial \textbf{P}yramid \textbf{P}ooling (ASPP) model to encode multi-scale context information at multiple filter rates and multiple fields-of-view.
\item
\textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMSA}} \cite{ye2021referring}: \textbf{C}ross-\textbf{M}odal \textbf{S}elf-\textbf{A}ttention module is able to adaptively focus on the important words in the given language expression and region in the corresponding image by utilizing self-attention mechanism.
\item
\textcolor[rgb]{0.1,0.8,0.1}{\textbf{BRINet}} \cite{hu2020bi}: \textbf{B}i-directional \textbf{R}elationship \textbf{I}nferring \textbf{N}etwork designs two kinds of attention mechanism from vision to language and language to vision to learn the bi-directional relationship between language and visual modalities.
\item
\textcolor[rgb]{0.1,0.8,0.1}{\textbf{CMPC}} \cite{liu2021cross}: \textbf{C}ross-\textbf{M}odal \textbf{P}rogressive \textbf{C}omprehension scheme first perceives all related entities utilizing entity and attribute words while the rest relational words are adopted to highlight the target entities by spatial graph reasoning.
\end{itemize}
\section{Introduction}
In this work, we are concerned with the Hilfer-Hadamard fractional derivative defined by \cite{kft}
\begin{equation}\label{hh}
(_{H}{\mathscr{D}}_{a^+}^{\alpha,\beta}f)(x)=(_{H}{\mathscr{I}}_{a^+}^{\beta(1-\alpha)}{_{H}{\mathscr{D}}_{a^+}^{\alpha+\beta(1-\alpha)}}f)(x),\qquad 0<\alpha<1,0\leq\beta\leq1,
\end{equation}
where $_{H}{\mathscr{I}}_{a^+}^{\beta(1-\alpha)}$ and ${_{H}{\mathscr{D}}_{a^+}^{\alpha+\beta(1-\alpha)}}$ are the Hadamard fractional integral of order $\beta(1-\alpha)$ and Hadamard fractional derivative of order $\alpha+\beta-\alpha\beta,$ respectively.
Analogous to the Hilfer derivative defined in \cite{hr}, Kassim and Tatar introduced the Hilfer-Hadamard fractional derivative, which interpolates between the Hadamard fractional derivative (for $\beta=0$) and the Caputo-Hadamard fractional derivative (for $\beta=1$); see \cite{kft}. In \cite{kt}, they established the well-posedness of the Cauchy-type problem
\begin{equation}\label{h1}\begin{cases}
&{_{H}{\mathscr{D}}_{a^+}^{\alpha,\beta}x(t)}=f(t,x),\quad t>a>0,\\
&{_{H}{\mathscr{I}}_{a^+}^{1-\gamma}x(a)}=c,\qquad \gamma=\alpha+\beta(1-\alpha),
\end{cases}\end{equation}
where $c\in\R$ and ${_{H}{\mathscr{D}}_{a^+}^{\alpha,\beta}}$ is the Hilfer-Hadamard fractional derivative of order $\alpha$ $(0<\alpha<1)$ and type $\beta$ $(0\leq\beta\leq1),$ in the weighted space of continuous functions $C_{1-\gamma,\mu}^{\alpha,\beta}[a,b]$ defined by
\begin{equation}\label{a7}
C_{1-\gamma,\mu}^{\alpha,\beta}[a,b]=\big\{x\in C_{1-\gamma,\log}[a,b]|{_{H}{\mathscr{D}}_{a^+}^{\alpha,\beta}}x\in C_{\mu,\log}[a,b]\big\},\quad 0\leq\mu<1,\gamma=\alpha+\beta(1-\alpha),
\end{equation}
where
\begin{equation}\label{w1}
C_{\gamma,\log}[a,b]=\bigg\{g:(a,b]\to\R\,\bigg|\,\big(\log{\frac{t}{a}}\big)^{\gamma}g(t)\in C[a,b]\bigg\},\quad 0\leq\gamma<1.
\end{equation}
They established the equivalence of the initial value problem (IVP) \eqref{h1} with the following Volterra integral equation of the second kind:
\begin{equation}\label{a9}
x(t)=\frac{c}{\Gamma(\gamma)}\big(\log{\frac{t}{a}}\big)^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{a}^{t}\big(\log{\frac{t}{s}}\big)^{\alpha-1}f(s,x(s))\frac{ds}{s}, \quad t>a, \, c\in\R,
\end{equation}
and, using the Banach fixed point theorem, proved the following existence result for IVP \eqref{h1} in \cite{kt}.
\begin{thm}\cite{kt}
Let $\gamma=\alpha+\beta-\alpha\beta$, where $0<\alpha<1$ and $0\leq\beta\leq1.$ Assume that $f:(a,b]\times\R\to\R,(a>0),$ is a function such that $f[\cdot,x(\cdot)]\in{C_{\mu,\log}[a,b]}$ for any $x\in{C_{1-\gamma,\log}[a,b]}$ with $1-\gamma\leq\mu<1-\beta(1-\alpha)$ and that $f$ is Lipschitz continuous with respect to its second variable. Then, there exists a unique solution $x$ of the Cauchy-type problem \eqref{h1} in the space $C_{1-\gamma,\mu}^{\alpha,\beta}[a,b].$
\end{thm}
We also point out that, when $f(t,x(t))\geq(\log{\frac{t}{a}})^{\mu}|x(t)|^{m}$ for some $m>1$ and $\mu\in\R,$ a nonexistence result for global solutions of the problem
\begin{equation}\label{h2}\begin{cases}
&{_{H}{\mathscr{D}}_{a^+}^{\alpha,\beta}x(t)}\geq(\log{\frac{t}{a}})^{\mu}|x(t)|^{m},\quad t>a>0,m>1,\mu\in\R,\\
&{_{H}{\mathscr{I}}_{a^+}^{1-\gamma}x(a)}=c,\qquad \gamma=\alpha+\beta(1-\alpha),
\end{cases}
\end{equation}
is proved in the following theorem.
\begin{thm}\cite{kft}
Assume that $\mu\in\R$ and $m<(1+\mu)/(1-\gamma).$ Then, problem \eqref{h2} does not admit global nontrivial solutions in $C_{1-\gamma,\log}^{\gamma}[a,b],$ where $C_{1-\gamma,\log}^{\gamma}[a,b]=\{y\in C_{1-\gamma,\log}[a,b]:{_{H}{\mathscr{D}}_{a^+}^{\gamma}}y\in C_{1-\gamma,\log}[a,b]\}$ and $c\geq0.$
\end{thm}
Recently, in a survey paper \cite{abl}, Abbas et al. obtained results concerning the existence and uniqueness of weak solutions for some classes of Hadamard and Hilfer fractional differential equations; some attractivity and Ulam stability results were also obtained by applying fixed point theory. The authors in \cite{db1}-\cite{db3} obtained existence, uniqueness and continuation results, using both successive approximations and fixed point techniques, for the solutions of fractional IVPs involving the Hilfer fractional derivative defined in \cite{hr}.
We find that existence and uniqueness were proved, but an iterative scheme for uniformly approximating the solution of IVP \eqref{h1} was not given in (Theorem 21, \cite{kt}). In general, finding the solution of a nonlinear fractional differential equation is not an easy task, so a numerical treatment of such problems is of considerable practical interest.
Motivated by this work, and to avoid the non-constructive nature of fixed point arguments, we adopt the method of successive approximations.
In this paper, we will study the IVP for fractional differential equation
\begin{equation}\label{s1}\begin{cases}
&{_{H}{\mathscr{D}}_{1}^{\alpha,\beta}}x(t)=f(t,x),\qquad \,\, 0<\alpha<1,0\leq\beta\leq1,\\
&\lim_{t\to{1}}\big(\log{t}\big)^{1-\gamma}x(t)=x_0,\quad \gamma=\alpha+\beta(1-\alpha).
\end{cases}
\end{equation}
Using well-known convergence criteria and Picard's sequence of functions \cite{km},\cite{yy}, we establish existence and uniqueness results for IVP \eqref{s1}. A computable iterative scheme, together with a uniform convergence criterion for the solution, is also developed. Note that the initial condition considered in IVP \eqref{s1} is more suitable than the one considered in IVP \eqref{h1}, and the nonlinear function $f$ may be singular at $t=1.$
The rest of the paper is organised as follows: the next section covers the necessary prerequisites, the main results are proved in Section 3, and the conclusion is given in the last section.
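As a concrete preview of the scheme developed in Section 3, the following Python sketch runs Picard iterations on the equivalent Volterra equation \eqref{s2}, discretizing in $u=\log s$ and integrating the weakly singular kernel exactly on each subinterval; the right-hand side $f(t,x)=-x$, the parameters $\alpha=\beta=1/2$, the grid, and the iteration count are illustrative choices only.
\begin{verbatim}
import math
import numpy as np

alpha, beta = 0.5, 0.5
gam = alpha + beta * (1.0 - alpha)        # gamma = 0.75
x0 = 1.0

def f(t, x):
    return -x                              # toy right-hand side

N = 400
U = np.linspace(1e-6, math.log(2.0), N)    # u = log t, grid on (0, log 2]
T = np.exp(U)
x = x0 * U ** (gam - 1.0)                  # Picard initial guess

for _ in range(25):                        # successive approximations
    fx = f(T, x)
    x_new = np.empty(N)
    for n in range(N):
        # exact integral of the kernel (U[n]-u)^(alpha-1) on each subinterval,
        # with f frozen at the left endpoint (product rectangle rule)
        w = ((U[n] - U[:n]) ** alpha - (U[n] - U[1:n + 1]) ** alpha) / alpha
        x_new[n] = (x0 * U[n] ** (gam - 1.0)
                    + np.dot(w, fx[:n]) / math.gamma(alpha))
    x = x_new
print(x[-1])                               # approximate x(2)
\end{verbatim}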
\section{Preliminaries}
We need the following basic definitions and properties from fractional calculus in the sequel, see \cite{kst}.
Let $\Gamma(\cdot)$ denote the Euler gamma function defined by \cite{lv}
\begin{equation*}
\Gamma(x)=\int_{0}^{+\infty}s^{x-1}e^{-s}ds,\quad x>0.
\end{equation*}
\begin{defn}\cite{kst}
Let $(1,b),1<b\leq\infty,$ be a finite or infinite interval of the half-axis ${\R}^+$ and let $\alpha>0.$ The left-sided Hadamard fractional integral ${_{H}{\mathscr{I}}_{1}^{\alpha}f}$ of order $\alpha>0$ is defined by
\begin{equation}\label{hi}
(_{H}{\mathscr{I}}_{1}^{\alpha}f)(t)=\frac{1}{\Gamma(\alpha)}\int_{1}^{t}\big(\log{\frac{t}{s}}\big)^{\alpha-1}\frac{f(s)\,ds}{s},\quad 1<t<b,
\end{equation}
provided that the integral exists. When $\alpha=0,$ we set ${_{H}{\mathscr{I}}_{1}^{0}f=f.}$
\end{defn}
\begin{defn}\cite{kst}
The left-sided Hadamard fractional derivative of order $\alpha(0\leq\alpha<1)$ on $(1,b)$ is defined by
\begin{equation}\label{hd}
(_{H}{\mathscr{D}}_{1}^{\alpha}f)(t)=\delta(_{H}{\mathscr{I}}_{1}^{1-\alpha}f)(t),\qquad 1<t<b,
\end{equation}
where $\delta=t(d/dt).$ In particular, when $\alpha=0$ we have $_{H}{\mathscr{D}}_{1}^{0}f=f.$
\end{defn}
\begin{defn}\cite{kst}
Let $(1,b)$ be a finite interval of the half-axis ${\R}^{+}.$ The fractional derivative $_{H}^{c}{\mathscr{D}}_{1}^{\alpha}f$ of order $\alpha(0<\alpha<1)$ on $(1,b)$ defined by
\begin{equation}\label{chd}
_{H}^{c}{\mathscr{D}}_{1}^{\alpha}f={_{H}{\mathscr{I}}_{1}^{1-\alpha}\delta f},
\end{equation}
is called the left-sided Hadamard-Caputo fractional derivative of order $\alpha$ of a function $f.$
\end{defn}
\begin{defn}\cite{kt}
The left-sided Hilfer-Hadamard fractional derivative of order $\alpha(0<\alpha<1)$ and type $\beta(0\leq\beta\leq1)$ with respect to $t$ is defined by
\begin{equation}\label{hh}
(_{H}{\mathscr{D}}_{1}^{\alpha,\beta}f)(t)=(_{H}{\mathscr{I}}_{1}^{\beta(1-\alpha)}{_{H}{\mathscr{D}}_{1}^{\alpha+\beta(1-\alpha)}}f)(t)
\end{equation}
of functions $f$ for which the expression on the right hand side exists, where ${_{H}{\mathscr{D}}_{1}^{\alpha+\beta(1-\alpha)}}$ is the Hadamard fractional derivative.
\end{defn}
\begin{lem}\cite{kst}
If $\alpha>0,\beta>0$ and $1<b<\infty,$ then
\begin{align}\label{pr}
\big({_{H}{\mathscr{I}}_{1}^{\alpha}}\big(\log{s}\big)^{\beta-1}\big)(t)&=\frac{\Gamma(\beta)}{\Gamma(\alpha+\beta)}\big(\log{t}\big)^{\beta+\alpha-1},\\
\big({_{H}{\mathscr{D}}_{1}^{\alpha}}\big(\log{s}\big)^{\beta-1}\big)(t)&=\frac{\Gamma(\beta)}{\Gamma(\beta-\alpha)}\big(\log{t}\big)^{\beta-\alpha-1}.
\end{align}
\end{lem}
\noindent The following lemma plays a vital role in the proofs of the main results; a detailed proof can be found in \cite{pi}.
\begin{lem}\cite{pi} Suppose that $x>0.$ Then $\Gamma(x)=\lim_{m\to+\infty}\frac{m^{x}m!}{x(x+1)(x+2)\cdots(x+m)}.$
\end{lem}
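As a quick illustration of the formula, at $x=1$ the right-hand side reduces to $\lim_{m\to+\infty}\frac{m\,m!}{(m+1)!}=\lim_{m\to+\infty}\frac{m}{m+1}=1=\Gamma(1).$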
We denote $D=[1,1+h],\, D_{h}=(1,1+h],\, I=(1,1+l],\, J=[1,1+l]$, and $E=\{x:|x(\log{t})^{1-\gamma}-x_0|\leq b\}$ for $h>0,\,b>0$ and $t\in{D_h}.$ A function $x(t)$ is said to be a solution of IVP \eqref{s1} if there exists $l>0$ such that $x\in C^{0}(I)$ satisfies the equation $_{H}{\mathscr{D}}_{1}^{\alpha,\beta}x(t)=f(t,x)$ almost everywhere on $I$ along with the condition $\lim_{t\to{1}}{(\log{t})}^{1-\gamma}x(t)=x_0.$ To construct the main results, the following hypotheses are considered:
\begin{description}
\item[(H1)] $(t,x)\to f(t,(\log{t})^{\gamma-1}x(t))$ is defined on ${D}_{h}\times E$ and satisfies:
\begin{itemize}
\item[(i)] $x\to f(t,(\log{t})^{\gamma-1}x(t))$ is continuous on $E$ for all $t\in{D_{h}}$,\\
$t\to f(t,(\log{t})^{\gamma-1}x(t))$ is measurable on $D_{h}$ for all $x\in E;$\
\item[(ii)] there exist $k>(\beta(1-\alpha)-1)$ and $M\geq0$ such that the relation $|f(t,(\log{t})^{\gamma-1}x(t))|\leq M(\log{t})^{k}$ holds for all $t\in D_{h}$ and $x\in E,$
\end{itemize}
\item[(H2)] there exists $A>0$ such that $|f(t,(\log{t})^{\gamma-1}x_1(t))-f(t,(\log{t})^{\gamma-1}x_2(t))|$ $\leq A(\log{t})^{k}|x_1-x_2|,$ for all $t\in I$ and $x_1,x_2\in E.$
\end{description}
\begin{re}
In hypothesis \textbf{(H1)}, if $(\log{t})^{-k}f(t,(\log{t})^{\gamma-1}x)$ is continuous on $D\times E,$ one may choose $M=\max_{(t,x)\in J\times E}\big|(\log{t})^{-k}f(t,(\log{t})^{\gamma-1}x)\big|.$
\end{re}
\section{Main results}
In this section, we state and prove the existence and uniqueness results for IVP \eqref{s1} under the hypotheses defined above, and we present the iterative scheme for approximating the unique solution.
For brevity let us choose $l=\min\Big\{h,\Big(\frac{b}{M}\frac{\Gamma(\alpha+k+1)}{\Gamma(k+1)}\Big)^{\frac{1}{\mu+k}}\Big\},$ where $\mu=1-\beta(1-\alpha);$ note that $\mu+k=\alpha+k+1-\gamma$ since $\gamma=\alpha+\beta(1-\alpha).$
\begin{lem}
Suppose that \textbf{(H1)} holds. Then $x:J\to\R$ is a solution of IVP \eqref{s1} if and only if $x:I\to\R$ is a solution of the Volterra integral equation of second kind:
\begin{equation}\label{s2}
x(t)=x_0{\big(\log{t}\big)}^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}f(s,x(s))\frac{ds}{s},\quad t>1.
\end{equation}
\end{lem}
\begin{proof} First we suppose that $x:I\to\R$ is a solution of IVP \eqref{s1}. Then $|{\big(\log{t}\big)}^{1-\gamma}x(t)-x_0|\leq b$ for all $t\in I.$ From \textbf{(H1)}, there exist $k>\beta(1-\alpha)-1$ and $M\geq0$ such that
\begin{equation*}
|f(t,x(t))|=|f(t,{(\log{t})}^{\gamma-1}{(\log{t})}^{1-\gamma}x(t))|\leq M{(\log{t})}^{k},\qquad \text{for all}\quad t\in I.
\end{equation*}
We have
\begin{align*}
\bigg{|}\frac{1}{{\Gamma(\alpha)}}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,x(s))\frac{ds}{s}\bigg{|}&\leq \frac{1}{{\Gamma(\alpha)}}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}M{{(\log{s})}^{k}}\frac{ds}{s}\\
&=M{(\log{t})}^{\alpha+k}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}.
\end{align*}
Clearly,
\begin{equation*}
\lim_{t\to1}{\big(\log{t}\big)}^{1-\gamma}\frac{1}{{\Gamma(\alpha)}}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,x(s))\frac{ds}{s}=0.
\end{equation*}
It follows that
\begin{equation*}
x(t)=x_0{\big(\log{t}\big)}^{\gamma-1}+\frac{1}{{\Gamma(\alpha)}}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,x(s))\frac{ds}{s},\quad t\in I.
\end{equation*}
Since $k>\beta(1-\alpha)-1,$ we conclude that $x\in{C^{0}(I)}$ is a solution of integral equation \eqref{s2}.
Conversely, it is easy to see that if $x:I\to\R$ is a solution of integral equation \eqref{s2}, then $x$ is a solution of IVP \eqref{s1} defined on $J.$ This completes the proof.
\end{proof}
To prove further main results, we choose a Picard function sequence as follows:
\begin{equation}\label{pfc}\begin{split}
\phi_0(t)&=x_0{(\log{t})}^{\gamma-1},\qquad t\in I, \\
\phi_n(t)=\phi_0(t)+&\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}f(s,\phi_{n-1}(s))\frac{ds}{s},\quad t\in I,\quad n=1,2,\cdots.
\end{split}\end{equation}
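As a simple illustration of the scheme (the particular choice of $f$ below is ours, made only to display the mechanics), take $f(t,x)={(\log{t})}^{k}$ with $k>\beta(1-\alpha)-1,$ independent of $x.$ Then Lemma 1 gives
\begin{equation*}
\phi_1(t)=x_0{(\log{t})}^{\gamma-1}+\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}{(\log{t})}^{\alpha+k},
\end{equation*}
and since $f$ does not depend on $x$ we have $\phi_n=\phi_1$ for all $n\geq1,$ i.e. the scheme converges after a single step.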
\begin{lem}
Suppose that \textbf{(H1)} holds. Then $\phi_n$ is continuous on $I$ and satisfies $|{(\log{t})}^{1-\gamma}\phi_n(t)-x_0|\leq b$ for all $t\in I.$
\end{lem}
\begin{proof} From \textbf{(H1)}, clearly $|f(t,{(\log{t})}^{\gamma-1}x)|\leq M{(\log{t})}^{k}$ for all $t\in{D_h}$ and $|x{(\log{t})}^{1-\gamma}-x_0|\leq b.$ For $n=1,$ we have
\begin{equation}\label{l1}
\phi_1(t)=x_0{(\log{t})}^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,\phi_{0}(s))\frac{ds}{s}.
\end{equation}
Then
\begin{equation*}
\bigg{|}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,\phi_0(s))\frac{ds}{s}\bigg{|}\leq \frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}M{(\log{s}\big)}^{k}\frac{ds}{s}=M{(\log{t}\big)}^{\alpha+k}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}.
\end{equation*}
This implies $\phi_1\in{C^{0}(I)}$ and from equation \eqref{l1}, we get
\begin{equation}\label{l2}
|{(\log{t})}^{1-\gamma}\phi_1(t)-x_0|\leq{(\log{t})}^{1-\gamma}M{(\log{t})}^{\alpha+k}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}\leq Ml^{\alpha+k+1-\gamma}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}\leq b.
\end{equation}
Now, by the induction hypothesis, suppose that $\phi_n\in{C^{0}(I)}$ and $|{(\log{t})}^{1-\gamma}\phi_n(t)-x_0|\leq b$ for all $t\in I.$ We have
\begin{equation}\label{l3}
\phi_{n+1}(t)=x_0{(\log{t})}^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,\phi_{n}(s))\frac{ds}{s}.
\end{equation}
From the above discussion, we obtain $\phi_{n+1}\in {C^{0}(I)}$ and from equation \eqref{l3}, we have
\begin{align*}
|{(\log{t}\big)}^{1-\gamma}\phi_{n+1}(t)-x_0|&\leq {(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}M{(\log{s}\big)}^{k}\frac{ds}{s}\\
&=M{(\log{t})}^{\alpha+k+1-\gamma}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)} \\
&\leq Ml^{\alpha+k+1-\gamma}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}\leq b.
\end{align*}
Thus the result holds for $n+1$, which completes the induction step. Therefore, by the principle of mathematical induction, the result is true for all $n.$ The proof is complete.
\end{proof}
\begin{thm}
Suppose that \textbf{(H1)}-\textbf{(H2)} hold. Then $\{{(\log{t})}^{1-\gamma}\phi_n(t)\}$ is a uniformly convergent sequence on $J.$
\end{thm}
\begin{proof} Consider the series
\begin{equation*}
{{(\log{t})}^{1-\gamma}\phi_0(t)}+{{(\log{t})}^{1-\gamma}[\phi_1(t)-\phi_0(t)]}+\cdots+{{(\log{t})}^{1-\gamma}[\phi_n(t)-\phi_{n-1}(t)]}+\cdots,\quad t\in J.
\end{equation*}
By relation \eqref{l2} derived in the proof of Lemma 4 above,
\begin{equation*}
{(\log{t})}^{1-\gamma}|\phi_1(t)-\phi_0(t)|\leq M{(\log{t})}^{\alpha+k+1-\gamma}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}, \qquad t\in J.
\end{equation*}
From Lemma 4,
\begin{align*}
{(\log{t})}^{1-\gamma}|\phi_2(t)&-\phi_1(t)|\leq{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}|f(s,\phi_1(s))-f(s,\phi_0(s))|\frac{ds}{s}\\
=&{(\log{t}\big)}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}\big|f\big(s,{(\log{s})}^{\gamma-1}{(\log{s})}^{1-\gamma}\phi_1(s)\big)\\
&\hspace{3cm}-f\big(s,{(\log{s})}^{\gamma-1}{(\log{s})}^{1-\gamma}\phi_0(s)\big)\big|\frac{ds}{s}\\
\leq&{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}A{(\log{s})}^{k}\big|{(\log{s})}^{1-\gamma}\phi_1(s)-{(\log{s})}^{1-\gamma}\phi_0(s)\big|\frac{ds}{s}\\
\leq&{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}A{(\log{s})}^{k}\big[{(\log{s})}^{1-\gamma}|\phi_1(s)-\phi_0(s)|\big]\frac{ds}{s}\\
\leq&{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}A{(\log{s})}^{k}\big[M{(\log{s})}^{\alpha+k+1-\gamma}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}\big]\frac{ds}{s}\\
=&AM\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}\frac{\Gamma(\alpha+2k+2-\gamma)}{\Gamma(2\alpha+2k+2-\gamma)}{(\log{t})}^{2(\alpha+k+1-\gamma)}.
\end{align*}
Now suppose that
\begin{equation*}
{(\log{t})}^{1-\gamma}|\phi_{n+1}(t)-\phi_n(t)|\leq A^{n}M{(\log{t})}^{(n+1)(\alpha+k+1-\gamma)}\prod_{i=0}^{n}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}.
\end{equation*}
We have
\begin{align*}
{(\log{t})}^{1-\gamma}|\phi_{n+2}(t)-\phi_{n+1}(t)|&\leq{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}|f(s,\phi_{n+1}(s))-f(s,\phi_n(s))|\frac{ds}{s}\\
&={(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}\big|f\big(s,{(\log{s})}^{\gamma-1}{(\log{s})}^{1-\gamma}\phi_{n+1}(s)\big)\\
&\hspace{3cm}-f\big(s,{(\log{s})}^{\gamma-1}{(\log{s})}^{1-\gamma}\phi_n(s)\big)\big|\frac{ds}{s}\\
&\leq{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}A{(\log{s})}^{k}\big[{(\log{s})}^{1-\gamma}|\phi_{n+1}(s)-\phi_n(s)|\big]\frac{ds}{s}\\
&\leq{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}A{(\log{s})}^{k}\big[A^{n}M{(\log{s})}^{(n+1)(\alpha+k+1-\gamma)}\\
&\hspace{2.5cm}\times\prod_{i=0}^{n}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}\big]\frac{ds}{s}\\
&={A^{n+1}M{(\log{t})}^{(n+2)(\alpha+k+1-\gamma)}}\prod_{i=0}^{n+1}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}
\end{align*}
from which the result is true for $n+1.$ Hence, by the principle of mathematical induction, and since $\log{t}\leq\log(1+l)\leq l$ on $J,$ we get
\begin{equation}\label{l4}
{(\log{t})}^{1-\gamma}|\phi_{n+2}(t)-\phi_{n+1}(t)|\leq A^{n+1}Ml^{(n+2)(\alpha+k+1-\gamma)}\prod_{i=0}^{n+1}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}.
\end{equation}
Consider
\begin{equation*}
\sum_{n=1}^{\infty}u_n=\sum_{n=1}^{\infty}MA^{n+1}l^{(n+2)(\alpha+k+1-\gamma)}\prod_{i=0}^{n+1}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}.
\end{equation*}
We have
\begin{align*}
\frac{u_{n+1}}{u_n}&=\frac{MA^{n+2}l^{(n+3)(\alpha+k+1-\gamma)}\prod_{i=0}^{n+2}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}}{MA^{n+1}l^{(n+2)(\alpha+k+1-\gamma)}\prod_{i=0}^{n+1}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}}\\
&=Al^{\alpha+k+1-\gamma}\frac{\Gamma((n+3)k+(n+2)(\alpha+1-\gamma)+1)}{\Gamma((n+3)(k+\alpha)+(n+2)(1-\gamma)+1)}.
\end{align*}
Using Lemma 2, we have
\begin{align*}
\frac{u_{n+1}}{u_n}&=Al^{\alpha+k+1-\gamma}\frac{\lim_{m\to\infty}\frac{m^{(n+3)k+(n+2)(\alpha+1-\gamma)+1}m!}{((n+3)k+(n+2)(\alpha+1-\gamma)+1)\cdots((n+3)k+(n+2)(\alpha+1-\gamma)+m+1)}}{\lim_{m\to\infty}\frac{m^{(n+3)(k+\alpha)+(n+2)(1-\gamma)+1}m!}{((n+3)(k+\alpha)+(n+2)(1-\gamma)+1)\cdots((n+3)(k+\alpha)+(n+2)(1-\gamma)+m+1)}}\\
&=Al^{\alpha+k+1-\gamma}\lim_{m\to\infty}m^{-\alpha}\,\frac{((n+3)(k+\alpha)+(n+2)(1-\gamma)+1)\cdots((n+3)(k+\alpha)+(n+2)(1-\gamma)+m+1)}{((n+3)k+(n+2)(\alpha+1-\gamma)+1)\cdots((n+3)k+(n+2)(\alpha+1-\gamma)+m+1)}.
\end{align*}
It is easy to see that
$$m^{-\alpha}\,\frac{((n+3)(k+\alpha)+(n+2)(1-\gamma)+1)\cdots((n+3)(k+\alpha)+(n+2)(1-\gamma)+m+1)}
{((n+3)k+(n+2)(\alpha+1-\gamma)+1)\cdots((n+3)k+(n+2)(\alpha+1-\gamma)+m+1)}$$
is bounded for all $m,n,$ since each factor of the numerator exceeds the corresponding factor of the denominator by exactly $\alpha,$ so the product grows no faster than $m^{\alpha}.$ Moreover, the argument of the Gamma function in the denominator of $\frac{u_{n+1}}{u_n}$ exceeds that in the numerator by exactly $\alpha,$ so $\frac{u_{n+1}}{u_n}=O\big(\big((n+3)k+(n+2)(\alpha+1-\gamma)\big)^{-\alpha}\big)$ and hence $\lim_{n\to\infty}\frac{u_{n+1}}{u_n}=0.$ This implies $\sum_{n=1}^{\infty}u_n$ is convergent. Hence the series
\begin{equation*}
{{(\log{t}\big)}^{1-\gamma}\phi_0(t)}+{{(\log{t})}^{1-\gamma}[\phi_1(t)-\phi_0(t)]}+\cdots+{{(\log{t})}^{1-\gamma}[\phi_n(t)-\phi_{n-1}(t)]}+\cdots
\end{equation*}
is uniformly convergent for $t\in J.$ Therefore $\{{(\log{t})}^{1-\gamma}\phi_n(t)\}$ is uniformly convergent sequence on $J.$
\end{proof}
\begin{thm}
Suppose that \textbf{(H1)}-\textbf{(H2)} holds. Then $\phi(t)={(\log{t})}^{\gamma-1}\lim_{n\to\infty}{(\log{t})}^{1-\gamma}\phi_n(t)$ is a unique continuous solution of integral equation \eqref{s2} defined on $J.$
\end{thm}
\begin{proof} Since $\phi(t)={(\log{t})}^{\gamma-1}\lim_{n\to\infty}{(\log{t})}^{1-\gamma}\phi_n(t)$ on $J,$ Lemma 4 gives $|{(\log{t})}^{1-\gamma}\phi(t)-x_0|\leq b.$ Then
\begin{align*}
|f(t,\phi_{n}(t))-f(t,\phi(t))|\leq A{(\log{t})}^{k}&|\phi_{n}(t)-\phi(t)|,\quad t\in I,\\
{(\log{t})}^{-k}|f(t,\phi_{n}(t))-f(t,\phi(t))|&\leq A|\phi_{n}(t)-\phi(t)|\to0
\end{align*}
uniformly as $n\to\infty$ on $I.$ Therefore
\begin{align*}
{(\log{t})}^{1-\gamma}\phi(t)&=\lim_{n\to\infty}{(\log{t})}^{1-\gamma}\phi_{n}(t)\\
&=x_0+{(\log{t})}^{1-\gamma}\lim_{n\to\infty}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}{(\log{s})}^{k}\big({(\log{s})}^{-k}f(s,\phi_{n-1}(s))\big)\frac{ds}{s}\\
&=x_0+{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}{(\log{s})}^{k}\lim_{n\to\infty}\big({(\log{s})}^{-k}f(s,\phi_{n-1}(s))\big)\frac{ds}{s}\\
&=x_0+{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,\phi(s))\frac{ds}{s}.
\end{align*}
Then $\phi$ is a continuous solution of integral equation \eqref{s2} defined on $J.$
Now we prove the uniqueness of the solution $\phi(t).$ Suppose that $\psi(t)$ defined on $I$ is also a solution of integral equation \eqref{s2}. Then $|{(\log{t})}^{1-\gamma}\psi(t)-x_0|\leq b$ for all $t\in I$ and
\begin{equation*}
\psi(t)=x_0{(\log{t})}^{\gamma-1}+\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,\psi(s))\frac{ds}{s},\quad t\in I.
\end{equation*}
It is sufficient to prove $\phi(t)\equiv\psi(t)$ on $I.$ From \textbf{(H1)}, there exist $k>\beta(1-\alpha)-1$ and $M\geq0$ such that
\begin{equation*}
|f(t,\psi(t))|=\big|f\big(t,{(\log{t})}^{\gamma-1}{(\log{t})}^{1-\gamma}\psi(t)\big)\big|\leq M{(\log{t})}^{k},
\end{equation*}
for all $t\in I.$ Therefore
\begin{align*}
{(\log{t})}^{1-\gamma}|\phi_{0}(t)-\psi(t)|=&{(\log{t})}^{1-\gamma}\bigg|\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}f(s,\psi(s))\frac{ds}{s}\bigg|\\
&\leq{(\log{t})}^{1-\gamma}\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}M{(\log{s})}^{k}\frac{ds}{s}\\
&=M{(\log{t})}^{\alpha+k+1-\gamma}\frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}.
\end{align*}
Furthermore
\begin{align*}
{(\log{t})}^{1-\gamma}|\phi_{1}(t)-\psi(t)|=&{(\log{t})}^{1-\gamma}\bigg|\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}[f(s,\phi_0(s))-f(s,\psi(s))]\frac{ds}{s}\bigg|\\
&\leq AM \frac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}\frac{\Gamma(\alpha+2k+2-\gamma)}{\Gamma(2\alpha+2k+2-\gamma)}{(\log{t})}^{2(\alpha+k+1-\gamma)}.
\end{align*}
By the induction hypothesis, we suppose that
\begin{equation*}
{(\log{t})}^{1-\gamma}|\phi_{n}(t)-\psi(t)|\leq A^{n}M{(\log{t})}^{(n+1)(\alpha+k+1-\gamma)}\prod_{i=0}^{n}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}.
\end{equation*}
Then
\begin{align*}
{(\log{t})}^{1-\gamma}|\phi_{n+1}(t)-\psi(t)|\leq&{(\log{t})}^{1-\gamma}\bigg|\frac{1}{\Gamma(\alpha)}\int_{1}^{t}{{\big(\log{\frac{t}{s}}\big)}^{\alpha-1}}[f(s,\phi_n(s))-f(s,\psi(s))]\frac{ds}{s}\bigg|\\
\leq& A^{n+1}M{(\log{t})}^{(n+2)(\alpha+k+1-\gamma)}\prod_{i=0}^{n+1}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}\\
\leq& A^{n+1}Ml^{(n+2)(\alpha+k+1-\gamma)}\prod_{i=0}^{n+1}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}.
\end{align*}
Using the same arguments as in the proof of Theorem 1, we obtain that the series
\begin{equation*}
\sum_{n=1}^{\infty}A^{n+1}Ml^{(n+2)(\alpha+k+1-\gamma)}\prod_{i=0}^{n+1}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}
\end{equation*}
is convergent. Thus $A^{n+1}Ml^{(n+2)(\alpha+k+1-\gamma)}\prod_{i=0}^{n+1}\frac{\Gamma((i+1)k+i(\alpha+1-\gamma)+1)}{\Gamma((i+1)(\alpha+k)+i(1-\gamma)+1)}\to0$ as $n\to\infty,$ so that $\lim_{n\to\infty}{(\log{t})}^{1-\gamma}\phi_n(t)={(\log{t})}^{1-\gamma}\psi(t)$ uniformly on $J.$ Since this limit also equals ${(\log{t})}^{1-\gamma}\phi(t),$ we conclude $\phi(t)\equiv\psi(t)$ on $I.$
\end{proof}
\begin{thm}
Suppose that \textbf{(H1)}-\textbf{(H2)} hold. Then IVP \eqref{s1} has a unique continuous solution $\phi$ defined on $I$, given by $\phi(t)={(\log{t})}^{\gamma-1}\lim_{n\to\infty}{(\log{t})}^{1-\gamma}\phi_{n}(t)$ with $\phi_0(t)$ and $\phi_n(t)$ defined by \eqref{pfc}.
\end{thm}
\begin{proof} From Lemma 3 and Theorem 2, we immediately obtain that $\phi(t)={(\log{t})}^{\gamma-1}\lim_{n\to\infty}{(\log{t})}^{1-\gamma}\phi_n(t)$ is the unique continuous solution of IVP \eqref{s1} defined on $I$. This completes the proof.
\end{proof}
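To indicate how the scheme \eqref{pfc} can be implemented in practice, we include a minimal numerical sketch in Python. The substitution $u=\log{s}$ turns the Hadamard kernel $\log{(t/s)}$ into a Riemann-Liouville kernel, which is then discretised by a simple product rule; the particular $f,$ $\alpha,$ $\beta$ and $x_0$ below are illustrative choices only, not data from the problem.
\begin{verbatim}
import numpy as np
from math import gamma as Gamma

alpha, beta, x0 = 0.5, 0.5, 1.0           # illustrative parameters
gam = alpha + beta * (1.0 - alpha)        # gamma = alpha + beta(1 - alpha)

def f(t, x):                              # assumed Lipschitz nonlinearity
    return np.cos(x)

N, T = 400, 2.0
u = np.linspace(0.0, np.log(T), N + 1)    # u = log t; u[0] = 0 is t = 1
t = np.exp(u[1:])                         # interior nodes (phi blows up at t = 1)

def picard_step(phi_prev):
    g = f(t, phi_prev)                    # f(s, phi_{n-1}(s)) at the nodes u[1:]
    phi_new = np.empty(N)
    for j in range(1, N + 1):
        # exact cell integrals of (u_j - v)^(alpha-1), right-endpoint rule for g
        w = ((u[j] - u[:j])**alpha - (u[j] - u[1:j+1])**alpha) / alpha
        phi_new[j - 1] = x0 * u[j]**(gam - 1.0) + (w @ g[:j]) / Gamma(alpha)
    return phi_new

phi = x0 * u[1:]**(gam - 1.0)             # phi_0(t) = x0 (log t)^(gamma-1)
for _ in range(10):                       # successive approximations
    phi = picard_step(phi)
\end{verbatim}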
\section{Concluding remarks} We considered a new class of IVPs for fractional differential equations with the Hilfer-Hadamard fractional derivative. A new criterion for the local existence and uniqueness of the solution was discussed. The uniform convergence of the Picard iterates to this local solution was then established, and a computable sequence was given for approximating the solution.
\section{Introduction}
Particle collisions at high energies produce large numbers of secondaries and it is natural to try
a statistical-thermal model to analyse these. This type of analysis has a long
and proud history~\cite{koppe,fermi,hagedorn} and led to the successful explanation of
$m_T$ scaling which is a natural consequence of such models. The behaviour in the longitudinal direction
was however very different and led many people to
discard the thermal model. \\
In relativistic heavy ion collisions a new dimension was given to the model
by the highly successful analysis of particle yields, leading to the notion of chemical
equilibrium, which is now well established in the analysis of relativistic heavy ion collisions~\cite{cleymans-satz}.
The early situation in 1999 is summarized in Fig.~1, with three points, showing a clear increase of the
chemical freeze-out temperature, $T$,
with increasing beam energy and an accompanying decrease of the baryon chemical potential $\mu_B$~\cite{becattini}.
\begin{figure}[htb]
\centerline{\epsfig{file=eovern_1999.eps,width=8cm}}
\caption{Chemical freeze-out temperature $T$ vs. the baryon chemical potential at different beam
energies together with curves corresponding to a fixed ratio of energy per hadron divided by
total number of hadrons in the resonance gas before decay of resonances.}
\label{e_1999}
\end{figure}
The situation improved substantially in the following decade~\cite{pbm,manninen,picha,takahashi} and now covers almost
the complete curve as shown in Fig.~2. Note that a last substantial gap
still exists in the energy region to be covered by NICA.
\begin{figure}[htb]
\centerline{\epsfig{file=eovern_2009.eps,width=8cm}}
\caption{Chemical freeze-out temperature $T$ vs. the baryon chemical potential at different beam
energies together with curves corresponding to a fixed ratio of energy per hadron divided by
total number of hadrons in the resonance gas before decay of resonances.}
\label{e_2009}
\end{figure}
The resulting freeze-out curve in the $T-\mu_B$ plane can also be drawn in the
energy density vs net baryon density plane as was first done in Ref.~\cite{randrup}. The
resulting curve is shown in Fig.~\ref{randrup_figure}.
\begin{figure}[htb]
\centerline{\epsfig{file=rho-eps2.eps, width=8cm}}
\caption{The hadronic freeze-out line in the $\rho_B-\eps^{*}$ phase plane
as obtained from the values of $\mu_B$ and $T$
that have been extracted from the experimental data in \cite{wheaton}.
The calculation employs values of $\mu_Q$ and $\mu_S$
that ensure $\langle S\rangle=0$ and $\langle Q\rangle=0.4\langle B\rangle$
for each value of $\mu_B$.
Also indicated are the beam energies (in GeV/N)
for which the particular freeze-out
conditions are expected at either RHIC or FAIR or NICA.
}
\label{randrup_figure}
\end{figure}
This figure shows that the highest net baryon density will be reached in the beam energy covered by the NICA
accelerator.
\section{Comparison of Chemical Freeze-Out Criteria}
In view of the success of chemical freeze-out in relativistic heavy ion collisions,
much effort has gone into finding conditions that characterize the chemical freeze-out, see e.g. Refs.~\cite{magas_satz,transition,biro}.
A comparison~\cite{wheaton} of three parameterizations is shown in Fig.~\ref{criteria}.
\begin{figure}
\epsfig{file=larry_fo_noags.eps,width=8cm}
\caption{Comparison of three chemical freeze-out criteria in the $T-\mu_B$ plane.}
\label{criteria}
\end{figure}
The corresponding dependence of the temperature and the chemical potential on beam energy is
surprisingly smooth~\cite{wheaton} as shown in Figs.~\ref{tvse} and~\ref{mubvse}.
\begin{figure}
\epsfig{file=T_e.eps,width=8cm}
\caption{Chemical freeze-out temperature $T$ as a function of the beam energy.}
\label{tvse}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=mub_e.eps,width=8cm}}
\caption{Chemical freeze-out baryon chemical potential $\mu_B$ as a function of the beam energy.}
\label{mubvse}
\end{figure}
However, despite this smoothness in the thermal freeze-out parameters, a
roller-coaster behaviour is observed in several particle ratios, e.g. the horn in the $K^+/\pi^+$ ratio and a similarly
strong variation in the $\Lambda/\pi$ ratio~\cite{NA49}.
Again, these strong variations are not observed in $p-p$ collisions, and they happen in the NICA energy region.
Within the framework of thermal-statistical models this variation has been connected to a change from
a baryon-dominated to a meson-dominated hadron gas. This conclusion is based on the observation that the entropy density
divided by the temperature to the third power, $s/T^3$, is constant over the whole energy range. The change is illustrated in Fig.~\ref{sovert3}.
\begin{figure}[htb]
\centerline{\epsfig{file=sovert3_BSQ.eps,width=8cm}}
\caption{The $s/T^3$ ratio calculated in the thermal-statistical model along the constant value consistent with
chemical freeze-out. Also shown are the contributions from the mesons and the baryons.}
\label{sovert3}
\end{figure}
Lines of constant value for the $K^+/\pi^+$ ratio are shown in Fig.~\ref{kpluspiplus} where it can be seen that the
absolute maximum in the thermal-statistical model hugs the chemical freeze-out line. The largest observed value
is just barely compatible with this maximum. Again this is right in the energy region covered by NICA.
\begin{figure}[htb]
\centerline{\epsfig{file=kplus_maxima.eps,width=8cm}}
\caption{Lines of constant value of the $K^+/\pi^+$ ratio in the $T-\mu_B$ plane showing a clear maximum
in this ratio close to the boundary given by the chemical freeze-out line.}
\label{kpluspiplus}
\end{figure}
In the thermal-statistical model
a rapid change is expected as the hadronic gas undergoes a
transition from a baryon-dominated to a meson-dominated gas, and the strong
variation seen in the particle ratios corresponds to this transition.
It occurs at a
\begin{itemize}
\item temperature $T = $ 151 MeV,
\item baryon chemical potential $\mu_B = $ 327 MeV,
\item corresponding incident energy $\sqrt{s_{NN}} = $ 11 GeV,
\end{itemize}
as the numerical sketch below illustrates.
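These numbers can be reproduced from the smooth parameterization of the freeze-out curve, $T(\mu_B)=a-b\,\mu_B^2-c\,\mu_B^4$ together with $\mu_B(\sqrt{s_{NN}})=d/(1+e\,\sqrt{s_{NN}})$. The Python sketch below uses the literature fit coefficients (quoted here as assumptions) and adds a deliberately truncated Boltzmann gas of pions and (anti)nucleons to illustrate the baryon-to-meson crossover in the entropy density; because baryonic resonances are omitted, the crossover in this toy sits at somewhat lower $\sqrt{s_{NN}}$ than in the full resonance gas.
\begin{verbatim}
import numpy as np
from scipy.special import kn

# Freeze-out curve parameterization (literature fit values, treated here
# as quoted assumptions); all energies in GeV.
def mu_B(sqrt_s):
    return 1.308 / (1.0 + 0.273 * sqrt_s)

def T_fo(mu):
    return 0.166 - 0.139 * mu**2 - 0.053 * mu**4

# Toy Boltzmann gas (pions + (anti)nucleons only, no resonances).
def entropy(m, g, mu, T):
    z = m / T
    pref = g / (2 * np.pi**2) * m**2 * T * np.exp(mu / T)
    n = pref * kn(2, z)                           # number density
    e = pref * (3 * T * kn(2, z) + m * kn(1, z))  # energy density
    return (e + n * T - mu * n) / T               # s = (e + p - mu n)/T, p = n T

for s in (4.0, 7.0, 11.0, 17.0, 200.0):
    mu = mu_B(s)
    T = T_fo(mu)
    s_mes = entropy(0.138, 3, 0.0, T)
    s_bar = entropy(0.939, 4, +mu, T) + entropy(0.939, 4, -mu, T)
    print(f"sqrt(s) = {s:5.1f} GeV: T = {1e3*T:3.0f} MeV, "
          f"mu_B = {1e3*mu:3.0f} MeV, s_bar/s_mes = {s_bar/s_mes:4.2f}")
# At sqrt(s) = 11 GeV this prints T ~ 151 MeV and mu_B ~ 328 MeV, matching
# the transition point quoted above.
\end{verbatim}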
In the
statistical model this transition leads to peaks in the
$\Lambda/\left<\pi\right>$, $K^+/\pi^+$, $\Xi^-/\pi^+$ and
$\Omega^-/\pi^+$ ratios. However, the observed ratios are sharper than the ones
calculated in thermal-statistical models and NICA will be ideally positioned to clarify this.\\
\section{Conclusions}
There are several theoretical indications that the energy region
covered by the proposed NICA accelerator in Dubna is an extremely
interesting one.
We present a review of data obtained in relativistic heavy ion collisions and show that there
is a gap around 11 GeV where more, and more precise, measurements are needed.
The theoretical interpretation can only be clarified by covering this
energy region.
In particular the strangeness content needs to be determined; data covering
the full phase space ($4\pi$) would be very helpful to determine the thermal parameters of
a possible phase transition and the existence of a quarkyonic phase, as has been discussed recently~\cite{mclerran}.
\section*{Acknowledgments}
The numerous contributions by H Oeschler, J. Randrup, K. Redlich, E. Suhonen
and S. Wheaton are gratefully acknowledged.
\section{\label{sec: Intro} Introduction}
Laser-plasma acceleration (LPA) is a particle acceleration scheme that uses an ultrafast intense laser pulse to create a plasma wave that can sustain strong acceleration gradients of hundreds of GV/m to achieve electron acceleration over short distances \cite{lpatajima, lpaesarey, LPAexp, lpamalka, lpaplasmawave, LPAhighgrad, LPAwfgen, LPAnonlinear, direcgrad}. Such accelerators have become capable of producing relativistic quasi-monoenergetic electron beams \cite{krushelnick, geddesnature, faurenature, THzIFPhysLett} in the hundreds of MeV \cite{leenatphot, Kneip, Froula, pukhovmono, ControlledInjMono, attosec} to above GeV energy level \cite{LeemansNature, XWangNature, KimPhys, wimPhysLett, KimPhysLett, PW8GeV}. Controlled LPA experiments require a well-defined interaction region between the laser pulse and the plasma target \cite{tonypaper, highdensgasjet, Lemos, cgrth}. The plasma target is typically created by ionization of a gas target at the onset of the high power laser pulses used to drive the plasma wave. A long flat-top density profile is often desired for laser propagation, plasma waveguide creation, and electron acceleration \cite{cgrth, Milchberg, gasjetSchmid, Semushin}. On the other hand, a sharp high density profile is useful for high repetition rate LPA driven by mJ-level laser pulses \cite{kHzLPA1, kHzLPA2} and electron injection \cite{liona, kkswan, haien, sambarber, SchmidInjection, CGeddesInjection, BuckInjection, GuillaumeInjection, ThauryInjection, VeiszInjection}. One common method of creating desired density profiles is by producing a supersonic gas jet through converging-diverging (C-D) nozzles. Numerous studies have been conducted in the context of LPA on nozzle manufacturing techniques \cite{3Dprinting} and nozzle design \cite{gasjetSchmid, gastargetsLorenz, liona, earlystudyJLHen, MKrish, cgrth, Semushin, Vmalka, highdensgasjet, Lemos, MinIOP, MusinskiGasJet, YMLi, froulaIF}. While the diverging section curvature of C-D nozzles has been examined and optimized for various applications in past studies \cite{Nasatrumpet, Eriksson, MKrish, Atkinson, windtunnel, DengThesis}, more investigation of the diverging section curvature's effect in an LPA context is needed. In this paper, we examine three nozzle designs with different curvatures shown in Fig. \ref{fig: nozzleprofs}(a), (b), (c). The trumpet design, which has not been commonly studied or applied in an LPA context, is based on previous designs in other fields \cite{ogtrump, Nasatrumpet}. Simulation results suggest that the nozzle curvature has a strong effect on the resulting density field outside the nozzle exit. It is also found that the trumpet nozzle, like the straight nozzle, can effectively yield flat-top density profiles if optimized, while the bell curvature creates highly focused regions of gas with large density fluctuations. Furthermore, the trumpet nozzle is found to be more versatile in producing flat-top profiles compared to the straight nozzle, as its curvature can be adjusted to suppress shocks outside the nozzle more effectively.
\begin{figure}[H]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width = \textwidth]{3D600umThroatBellProfile.png}
\caption{Concave "Bell" aka Parabolic}
\label{fig: bell}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width = \textwidth]{3D800umThroatStraightProfile.png}
\caption{Straight}
\label{fig: straight}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width = \textwidth]{3D800umThroatTrumpetProfile.png}
\caption{Convex "Trumpet"}
\label{fig: trumpet}
\end{subfigure}
\caption{Three-dimensional profile of each nozzle considered in this paper}
\label{fig: nozzleprofs}
\end{figure}
To experimentally verify simulation results, we employ neutral density interferometry, a popular gas jet characterization method \cite{earlystudyJLHen, Vmalka, gasjetSchmid, gastargetsLorenz, Semushin, highdensgasjet, MKrish, Lemos, MusinskiGasJet, YMLi, froulaIF}. The paper is structured as follows: Sec. \ref{sec: Sims} describes the principles of nozzle design and the simulation methods used. Sec. \ref{sec: Experiment Method} presents the diagnostic setup. Sec. \ref{sec: Results} presents simulation results and the comparison to measurements and Sec. \ref{sec: discon} summarizes the study and discusses future areas of interest.
\section{\label{sec: Sims} Gas Jet Simulations}
This section covers the basic theory behind supersonic nozzle design, the simulated nozzle geometries and the simulation methods employed to model the gas jets produced by each nozzle. The variables shown in the axisymmetric domain in Fig. \ref{fig:2D_Domain} are referenced throughout the paper and defined as:
\begin{enumerate}
\item $r^*, r_i, r_e$: nozzle throat, nozzle inlet and nozzle exit radius. Corresponding diameters defined the same way, ($d^*, d_i, d_e)$
\item $l_d$: length of diverging section
\item $z$: normal distance from nozzle exit. $z$ $<$ 0 means inside of nozzle
\item $R_c$: radius of curvature of diverging section
\item $\theta_e$: nozzle exit half-angle
\item O: origin of coordinate system
\item Outlet: where the gas exits in the domain
\end{enumerate}
\begin{figure}[H]
\centering
\includegraphics[width = 0.5\textwidth]{2DSimDomain.png}
\caption{Axisymmetric domain used for simulations}
\label{fig:2D_Domain}
\end{figure}
\subsection{Nozzle Geometries and Design}
For each simulated nozzle geometry, 1D isentropic flow theory was first used to choose the desired $d_e$ and $d^*$ with various radii of curvature, $R_c$, and lengths, $l_d$, being chosen after. The 1D isentropic flow model approximates important flow parameters such as density $\rho$ and Mach Number $M$ and relates these parameters to the nozzle geometry through the cross section area $A$ \cite{CompFlow, gasdynamicsbook, Nasareport, Semushin}. Any variable with subscript 0 indicates a stagnation quantity, referring to the quantities of the gas in the gas bottle. Any variable with superscript * refers to a quantity at the nozzle throat. $\kappa$ is the specific heat ratio of the gas. The isentropic flow equations used to design the nozzles were:
\begin{equation}
\frac{\rho}{\rho_0} = (1 + \frac{\kappa - 1}{2}M^2)^{-\frac{1}{\kappa - 1}}
\label{eq: rhoisen}
\end{equation}
\begin{equation}
\frac{A}{A^*} = \frac{1}{M}\left[\frac{2}{\kappa + 1}(1 + \frac{\kappa - 1}{2}M^2)\right]^{\frac{\kappa + 1}{2(\kappa - 1)}}
\label{eq: areamach}
\end{equation}
Eqn. (\ref{eq: areamach}) was used to calculate the exit Mach number for each nozzle, optimizing the nozzle geometry for a desired exit Mach number, $M_e$. All nozzles were chosen to have the same exit diameter, $d_e$ = $3$ mm, to match (including consideration of the gas flow dynamics to the interception point of the laser) the accelerating structure with laser parameters for dephasing, depletion and diffraction. The large $d_e$ also lessens the effect of boundary layers on the flow as opposed to sub-mm scale nozzles \cite{gasjetSchmid}. For $d^*$ = 0.6 mm, isentropic calculations yield $M_e$ = 7.1, whereas for $d^*$ = 0.8 mm, $M_e$ = 5.7. The higher exit Mach numbers were chosen as a follow-up to the lower-$M_e$ nozzles used previously \cite{liona, kkswan, haien}, as higher exit Mach numbers correspond to flatter density profiles \cite{Semushin}. Both $d^*$ were also chosen so that, with a backing pressure of 500 psi, the isentropic exit density $\rho_e$ would be on the order of $10^{19}$ $cm^{-3}$, the optimal LPA density range \cite{gasjetSchmid, cgrth}. $\kappa = 5/3$ for Helium (He) and Argon (Ar).
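The design arithmetic above is straightforward to reproduce numerically. The short Python sketch below (illustrative only, not part of the design workflow) solves Eq. (\ref{eq: areamach}) on the supersonic branch and evaluates Eq. (\ref{eq: rhoisen}) at a backing pressure of 500 psi, recovering $M_e \approx 5.7$ and $\rho_e \approx 2\times10^{19}$ cm$^{-3}$ for the $d^*$ = 0.8 mm nozzles.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

kappa = 5.0 / 3.0           # He and Ar
kB = 1.380649e-23           # J/K

def area_ratio(M):
    """A/A* as a function of Mach number (the area-Mach relation)."""
    return (1.0 / M) * (2.0 / (kappa + 1.0)
                        * (1.0 + 0.5 * (kappa - 1.0) * M**2)
                        ) ** ((kappa + 1.0) / (2.0 * (kappa - 1.0)))

def exit_mach(d_star, d_exit):
    ratio = (d_exit / d_star) ** 2
    return brentq(lambda M: area_ratio(M) - ratio, 1.001, 50.0)

def exit_density(P0_psi, T0, M):
    n0 = (P0_psi * 6894.76) / (kB * T0)    # stagnation number density [m^-3]
    return n0 * (1.0 + 0.5 * (kappa - 1.0) * M**2) ** (-1.0 / (kappa - 1.0))

Me = exit_mach(0.8e-3, 3.0e-3)
ne = exit_density(500.0, 300.0, Me)
print(f"M_e = {Me:.2f},  n_e = {ne * 1e-6:.2e} cm^-3")  # ~5.7, ~2e19 cm^-3
\end{verbatim}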
\begin{table*}[htbp!]
\caption{\label{tab: gases} Characteristic thermodynamic and flow values for He and Ar}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
Gas & $C_p$ [J/kg*K] & $k$ [W/m*K] & $A$ & $n$ & $\mu_0$ [Pa*s] & $S$ [K] & $T_0$ [K]\\ \hline
He & 5193 & 0.152 & 4.078$\times 10^{-7}$ & 0.6896 & & & \\
Ar & 520.64 & 0.0158 & & & 2.125$\times 10^{-5}$ & 144.4 & 273.11\\
\end{tabular}
\end{ruledtabular}
\end{table*}
Three different curvatures were simulated, with their geometry parameters defined below:
\begin{enumerate}
\item Concave "Bell" ($R_c>0$) nozzle: $d^*$ = $0.6$ mm, $R_c$ = $+31$ mm
\item Straight conical nozzle: $d^*$ = $0.8$ mm, $R_c$ = $\infty$, $l_d$ varied
\item Convex "Trumpet" ($R_c<0$) nozzle: $d^*$ = $0.8$ mm, $R_c$ varied, $l_d$ varied
\end{enumerate}
To compare the effect of curvature, three of the simulated nozzles, one from each curvature, were constrained to have $l_d$ = $9$ mm with all other parameters also kept constant. The bell nozzle having $d^*$ = 0.6 mm as opposed to $d^*$ = 0.8 mm was found to not have significant impact on the qualitative features observed. The straight and trumpet nozzles were chosen to have greater $d^*$ to loosen manufacturing constraints. The inlet diameter $d_i$, which has little effect on the gas jet \cite{Atkinson}, was chosen to match the valve diameter of $2.24$ mm. The diverging section length, $l_d$, was varied to optimize the trumpet and straight nozzles. In past studies of the straight nozzle, $l_d$ was approximately optimized using the "$1/M_e$" condition, which matched exit half-angle, $\theta_e$, to the Mach angle, also known as the shock angle, of $\sin^{-1}{(1/M_e)}$, to minimize the shock intensity \cite{Semushin, MKrish, cgrth}. The radius of curvature of the bell nozzle, $R_c$ = $+31$ mm was chosen to demonstrate the effect of the bell. The trumpet $R_c$ was varied to find the optimal trumpet geometry for producing flat-top density profiles.
\subsection{Simulation Methods}
The gas jet simulations were performed using the computational fluid dynamics (CFD) program ANSYS Fluent, which provides a range of numerical solvers for the Navier-Stokes, continuity and energy equations \cite{fluent}. Both 2D-axisymmetric and 3D simulations were performed. While 2D-axisymmetric simulations are computationally less expensive and can be more refined, 3D simulations model turbulence and flow more accurately. Thus, density maps were extracted from the 3D simulations whereas 2D-axisymmetric simulation results were used to resolve finer features such as shocks. The domain used for the 2D-axisymmetric simulations is shown in Fig. \ref{fig:2D_Domain}. The mesh for the domain consisted of about $2.5 \times 10^5$ quadrilateral elements, also called cells. Most of the cells were close to being perfectly square, which is ideal for CFD simulations \cite{flutheo}. The solver settings were the exact same as the 3D simulations settings. The boundary conditions (BCs) were also the same except with an added "axis" BC due to the axisymmetric nature. The axisymmetric profile of the 3D domain had the same nozzle profile as the 2D domain but was smaller in outlet area by 75\% in order to allow for a more refined mesh with the cells being closer to cubes, which is ideal for CFD simulations \cite{flutheo}. The profile was revolved to create the 3D domain. The mesh for 3D simulations contained $4.5 \times 10^5$ cells, with the average skewness being 0.062. The average orthogonality is 0.983, and the average aspect ratio is 3.74. An implicit coupled density-based steady-state solver was used with double precision accuracy. Turbulence was modeled using the $k-\omega$ shear stress transport (SST) model, which models turbulence both near and far from the walls well \cite{turbsst}. Spatial discretization was done with the Least Squares Cell-Based (LSCB) method given its better accuracy, stability and speed compared to other provided methods such as the Green-Gauss Node Based (GGNB) method \cite{flutheo, grad}. Turbulence was modeled with a third order method while flow was modeled with a second order method to yield more accurate solutions. The gases tested were Helium (He) and Argon (Ar), modeled by the ideal gas equation of state. Heat capacity $C_p$ and thermal conductivity $k$ were assumed to be constant for both gases. Viscosity for He was modeled with the power law model, $\mu$ = $AT^n$, while for Ar, the Sutherland 3-coefficient model was employed, $\mu$ = $\mu_0 (\frac{T}{T_0})^{\frac{3}{2}} \frac{T + S}{T_0 + S}$. The power law was interpolated from past empirical data \cite{HeData}. Otherwise ANSYS Fluent's default parameters imported from the NIST database were used \cite{flutheo}. The parameters for the gases are shown in table \ref{tab: gases}. The following BCs were applied:
\begin{enumerate}
\item inlet: Pressure BC of 500 Psi. Temperature set at 300 K.
\item outlet: Pressure BC of 1 milliTorr, the ambient vacuum pressure, $P_{amb}$. Temperature set at 300 K.
\item wall: no-slip condition with no roughness assumed.
\end{enumerate}
These conditions were chosen to closely represent typical experimental conditions. From each nozzle simulation, the 2D density map along the diameter of the nozzle exit was extracted for the output gas jet plume where z $>$ 0. Density profiles at mm-scale distances from the nozzle exit were then obtained, as are used for typical LPA experiments.
\begin{figure*}[h!tpb]
\centering
\includegraphics[width = \textwidth]{ExpMethod.png}
\caption{(Color) \textbf{(a)} Interferometry setup. The raw interferogram, in \textbf{(b)}, is compared with the reference interferogram, in \textbf{(c)}, to extract the corresponding phase distribution, shown in \textbf{(d)}. The phase distribution is then converted to a density map by performing an Abel Inversion, shown in \textbf{(e)}.}
\label{fig: IFSetup}
\end{figure*}
\begin{table*}[htpb!]
\caption{\label{tab: isentropcomp} Comparison of average exit density $\rho_e$, exit Mach number $M_e$, and throat Mach number $M^*$ between simulation results and 1D isentropic flow predictions for all three nozzle geometries}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
Nozzle & Sim. $\rho_e$ [$cm^{-3}$] & 1D $\rho_e$ [$cm^{-3}$] & Sim. $M^*$ & 1D $M^*$ & Sim. $M_e$ & 1D $M_e$\\ \hline
Bell & 1.09 $\times 10^{19}$ & 1.11 $\times 10^{19}$ & 0.83 & 1 & 6.73 & 7.09\\
Straight & 2.27 $\times 10^{19}$ & 2.00 $\times 10^{19}$ & 0.80 & 1 & 5.26 & 5.74\\
Trumpet & 2.37 $\times 10^{19}$ & 2.00 $\times 10^{19}$ & 0.86 & 1 & 5.09 & 5.74\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{\label{sec: Experiment Method} Experimental Method}
Three nozzles, one from each curvature, out of all the simulated nozzle designs were manufactured and experimentally characterized using neutral density interferometry. All three manufactured nozzles had $l_d$ = 9 mm and $d_e$ = 3 mm. The bell nozzle had $d^*$ = 0.6 mm and $R_c$ = +31 mm while the trumpet nozzle had $d^*$ = 0.8 mm, like the straight nozzle, and $R_c$ = -100 mm. The three nozzles were manufactured using a special tool to create the required curvatures with an average surface roughness of 0.8 $\mu$m. A ball endmill was used to ensure a quality surface finish. A Michelson interferometer was used to characterize the gas jet density field of each nozzle. The setup is shown in Fig. \ref{fig: IFSetup}(a). The experiment was performed using the Hundred Terawatt Thomson (HTT) laser system at the Berkeley Lab Laser Accelerator (BELLA) center, specifically using the 1 Hz mJ-level probe laser beam, which is split after the first main amplifier stage and independently compressed to 40 fs with 800 nm center wavelength and 15 mm beam diameter. It propagates through the gas jet, imaging the gas jet plane, and is focused by a f/\# = 20 lens. The beam is then split by a beamsplitter (BS) which directs the reflected beam into a retroreflecting roof mirror pair (image) and the transmitted beam into a 0-degree high reflective mirror (reference). Both beams are then recombined by the same BS and imaged onto a CCD camera calibrated to 2.64 $\mu$m/pixel in the gas jet plane with a field of view of 4.6 x 3 mm$^2$ and a resolution of 15 $\mu$m. The gas jet nozzle is mounted onto a solenoid valve that allows for continuous or pulsed gas delivery. Shot-to-shot fluctuation was observed to be low at $\sim$ 2\%, with $1/3$ of this being from the imaging system and laser pulse fluctuations (determined by analyzing the variations of reference scans that contained no gas flow). Ar gas was used due to its higher index of refraction compared to He.
The laser beam passing through the gas flow experiences a phase shift, quantified by fringe shifts on the resulting interferogram compared to the reference. The phase shift distribution is reconstructed from the fringe shifts. Because the nozzles are axisymmetric, an Abel inversion is used to symmetrize the phase shift distribution and extract the variation in the index of refraction, $n$, through the following equation,
\begin{equation}
\Delta\phi(r) = k\int (n(r, l) - 1)dl = \frac{2\pi}{\lambda}\int (n(r, l) - 1)dl,
\label{eq: phaseshifteqn}
\end{equation}
where $\Delta \phi$ is the phase shift, $k$ is the wavenumber and $\lambda$ is the laser wavelength \cite{hutch}. The variation of the index of refraction, $n$, is then related to gas atom or molecule number density, $N$, through the Lorentz-Lorenz equation,
\begin{equation}
N = \frac{3}{4\pi\alpha}\frac{n^2 - 1}{n^2 + 2},
\label{eq: LLeqn}
\end{equation}
where $\alpha$ is the mean polarizability, defined as $\alpha = \frac{3A}{4\pi N_A}$ \cite{earlystudyJLHen, born}. $A$ is the molar refractivity and $N_A$ is Avogadro's number. Substituting the relation for $\alpha$, we get:
\begin{equation}
N = \frac{N_A}{A}\frac{n^2 - 1}{n^2 + 2}
\label{eq: LLsimp}
\end{equation}
Index of refraction data for Ar was used to calculate the molar refractivity, $A$, of Ar using Eqn. (\ref{eq: LLsimp}), found to be (4.138 $\pm$ 0.012) $\times$ $10^{-6}$ m$^3$/mol \cite{Armolar}.
To maximize interferometry signal, scans for each nozzle were taken at the maximum regulator pressure of 1000 psi. The interferograms were averaged and converted to 2D density maps, outlined in Fig. \ref{fig: IFSetup}(b)-(e). Inherent noise close to the axis from the Abel inversion coupled with uncertainty of the nozzle axis led to larger apparent density fluctuations closer to the nozzle axis. The density maps and density lineouts extracted from the maps were compared to simulation results.
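The conversion chain from measured phase to density can be sketched compactly. The Python fragment below implements a simple onion-peeling Abel inversion followed by Eqn. (\ref{eq: LLsimp}); it is a minimal illustration of the procedure, without the smoothing, averaging and axis-finding steps used in the actual analysis, and the synthetic input profile is an arbitrary stand-in for real data.
\begin{verbatim}
import numpy as np

N_A = 6.02214076e23          # 1/mol
A_Ar = 4.138e-6              # m^3/mol, molar refractivity of Ar fitted above
LAM = 800e-9                 # probe wavelength [m]

def abel_invert(dphi, h):
    """Onion-peeling Abel inversion of a half phase profile dphi(y_i),
    y_i = i*h.  Returns n(r_i) - 1 on the same grid (no regularization)."""
    m = len(dphi)
    r = np.arange(m + 1) * h                 # ring boundaries
    k = 2.0 * np.pi / LAM
    dn = np.zeros(m)
    for i in range(m - 1, -1, -1):           # peel from the outside in
        y = r[i]
        outer = sum(dn[j] * 2.0 * (np.sqrt(r[j+1]**2 - y**2)
                                   - np.sqrt(r[j]**2 - y**2))
                    for j in range(i + 1, m))
        L_ii = 2.0 * np.sqrt(r[i+1]**2 - y**2)   # chord through ring i itself
        dn[i] = (dphi[i] / k - outer) / L_ii
    return dn

def density(n):
    """Atom number density [cm^-3] from the index n via Lorentz-Lorenz."""
    return (N_A / A_Ar) * (n**2 - 1.0) / (n**2 + 2.0) * 1e-6

# Example with a synthetic parabolic phase profile on a 10 um grid
y = np.arange(150) * 10e-6
dphi = 0.5 * np.clip(1.0 - (y / 1.2e-3)**2, 0.0, None)
n = 1.0 + abel_invert(dphi, 10e-6)
print(f"on-axis density ~ {density(n[0]):.2e} cm^-3")
\end{verbatim}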
\begin{figure*}[h!tpb]
\includegraphics[width = \textwidth]{DensMaps.png}
\caption{(Color) Atomic density maps extracted from 3D simulations for the three nozzle geometries. Density in units of cm$^{-3}$. The black dashed lines on each map denote the FWHM of the density profiles. Note the focusing effect of the bell nozzle, where the FWHM decreases noticeably close to the shock diamond. Density profiles at z = 1, 2, 3 and 4 mm are extracted from each map, along the colored dashed lines drawn. The left column shows the extracted maps when using He as the gas: \textbf{(a)} 600 $\mu$m Throat Bell Nozzle \textbf{(c)} 800 $\mu$m Throat Straight Nozzle \textbf{(e)} 800 $\mu$m Throat Trumpet Nozzle. The right column shows the maps for Ar as the gas:
\textbf{(b)} 600 $\mu$m Throat Bell Nozzle \textbf{(d)} 800 $\mu$m Throat Straight Nozzle \textbf{(f)} 800 $\mu$m Throat Trumpet Nozzle }
\label{fig: DensMaps}
\end{figure*}
\begin{figure*}[h!tpb]
\includegraphics[width = \textwidth]{DensProfs.png}
\caption{(Color) Axisymmetric density lineouts extracted from 3D simulations for the three nozzle geometries. The inset plots show the density lineouts along the nozzle axis extracted from 2D simulations, with the sharps discontinuities indicating presence of shock diamonds. The left column shows the extracted profiles when using He as the gas: \textbf{(a)} 600 $\mu$m Throat Bell Nozzle \textbf{(c)} 800 $\mu$m Throat Straight Nozzle \textbf{(e)} 800 $\mu$m Throat Trumpet Nozzle. The right column shows the profiles for Ar as the gas:
\textbf{(b)} 600 $\mu$m Throat Bell Nozzle \textbf{(d)} 800 $\mu$m Throat Straight Nozzle \textbf{(f)} 800 $\mu$m Throat Trumpet Nozzle}
\label{fig: DensLineouts}
\end{figure*}
\begin{figure*}[h!tpb]
\includegraphics[width = \textwidth]{ExpMeasurements.png}
\caption{(Color) Measured gas jet plume density maps for the three nozzles shown on left column. The black dashed lines on each map denote the FWHM of the density profiles. Measured profiles are compared with simulation on the right. The simulation profiles are normalized to be around the same density as the measured profiles to compare shape. The shaded regions represent the RMS density fluctuations of each profile. The $z$ = 4 mm measured bell profile is compared with the $z$ = 4.3 mm simulation bell profile to compare the shape of the density spikes at the shock diamond.}
\label{fig: ExpMes}
\end{figure*}
\section{\label{sec: Results} Results}
\subsection{\label{sec: compcurv} Effect of Diverging Section Curvature}
\begin{table*}[htpb!]
\caption{\label{tab: FWHM} FWHM of the simulation density profiles for all three geometries across both gases. Units in mm.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
z [mm] & He, Bell & He, Straight & He, Trumpet & Ar, Bell & Ar, Straight & Ar, Trumpet\\ \hline
2 & 2.27 & 2.81 & 2.92 & 2.43 & 2.91 & 3.01\\
3 & 2.09 & 2.87 & 3.08 & 2.31 & 2.99 & 3.19\\
4 & 0.75 & 2.89 & 3.23 & 2.05 & 3.05 & 3.37\\
\end{tabular}
\end{ruledtabular}
\end{table*}
2D density maps and radial density lineouts were extracted from each simulation for the manufactured geometries. The qualitative features of the maps and lineouts produced by each curvature were then compared. Test simulations were first done to confirm that wall roughnesses set at $10$ $\mu$m or below, as well as varying $d^*$ between $0.6$ and $0.8$ mm, had little effect on simulation results. The simulated 2D density maps for the three manufactured geometries are shown in Fig. \ref{fig: DensMaps} while corresponding simulated density lineouts are shown in Fig. \ref{fig: DensLineouts}. The similarity in gas jet behavior and shock features between He and Ar, with the density maps and profiles for all three geometries having like shapes between the two gases, matches expectations in accordance with the isentropic flow equations, Eqns. (\ref{eq: rhoisen}) and (\ref{eq: areamach}), as He and Ar have the same specific heat ratio, $\kappa$ = $5/3$. A further comparison, shown in Table \ref{tab: isentropcomp}, between 1D isentropic flow predictions, calculated using Eqns. \ref{eq: rhoisen} and \ref{eq: areamach}, and simulation results indicates that the simulated gas flows roughly follow 1D isentropic flow as expected. The differences are due to the 1D model not accounting for shock features and losses from 2D and 3D effects \cite{Semushin}.
Observed from the density maps in Figs. \ref{fig: DensMaps}(a), (b), the bell geometry creates a focusing effect that places a shock diamond at around $z_d$ = 3.7 mm for He and $z_d$ = 4 mm for Ar, right in the region of interest. This causes large density fluctuations, where the density is much lower at positions before the shock diamond, $z < z_d$, compared to points closer to the shock diamond position, $z \approx z_d$, preventing effective formation of flat-top density profiles, as seen in Figs. \ref{fig: DensLineouts}(a), (b). For example, for the case of He in Fig. \ref{fig: DensLineouts}(a), the density profiles for z = 1, 2 and 3 mm all have an "M" shape, dipping down to a density of $\sim 8 \times 10^{18}$ cm$^{-3}$. These "M" shape profiles have also been observed in past studies on the bell nozzle \cite{MKrish}, yielding uneven profiles \cite{Semushin}. The z = 4 mm profile, closer to the shock diamond, displays a density spike up to $\sim 4 \times 10^{19}$ cm$^{-3}$, about 5 times the density dip before the shock diamond. This density spike produced by the bell can be useful for creating short, high-density gas targets. This focusing effect, also seen in past studies \cite{MKrish, bell}, is observed in the Full Width Half Maximum (FWHM) of the profiles, listed in Table \ref{tab: FWHM}, where the FWHM decreases significantly for the z = 4 mm profile of the bell nozzle. The large fluctuations and focusing effect are caused by the formation of standing shock waves from the nozzle throat to its outer region due to the exit pressure and ambient pressure not matching \cite{liona, MusinskiGasJet}. Transitioning to the straight nozzle map, shown in Figs. \ref{fig: DensMaps}(c), (d), the shock diamond is no longer intensely concentrated to a point, demonstrating a weaker focusing effect as observed before \cite{MKrish}. While the straight nozzle profiles, shown in Figs. \ref{fig: DensLineouts}(c), (d), are closer to the flat-top shape, noticeable density variations along potential laser interaction paths remain, matching previously observed density lineouts of straight nozzles \cite{gasjetSchmid, froulaIF}. Since this could cause unwanted beam injection \cite{SBulanov, selfinjPhysLett}, further efforts were taken to approach flat-top profiles. When the curvature is inverted to the trumpet geometry, shown in Figs. \ref{fig: DensMaps}(e), (f), the shock diamond is suppressed as the trumpet geometry reverses the focusing effect of the bell curvature. The shock suppression of the trumpet nozzle prevents large density fluctuations at the output, leading to the formation of flat-top density profiles, as seen in Figs. \ref{fig: DensLineouts}(e), (f). This suppression makes the profiles more homogeneous, with longer flat-top region lengths compared to the straight nozzle profiles, as the side perturbations observed on the straight nozzle profiles are absent for the trumpet. In the case of He, the profiles have density plateaus at around 1-2$\times 10^{19}$ cm$^{-3}$. The lengths of the flat-top region, defined as the region within 10\% of the mean flat-top density, are 2.20, 2.19 and 2.06 mm for the z = 2, 3 and 4 mm profiles respectively. The flat region shortening farther from the nozzle exit is expected, as the plume expands more at larger distances, affecting the flat-top uniformity. This is also reflected in the increase of the density gradient thickness, $\Delta l$, and the FWHM.
$\Delta l$ is the length along the profile over which the density rises from 10\% to 90\% of the mean flat-top density. $\Delta l$ = 0.71, 0.96 and 1.27 mm for the z = 2, 3 and 4 mm profiles respectively, while FWHM increases from 2.92 to 3.23 mm, with these increases corresponding to the decrease of the flat-top region length as we move farther out from the exit. The simulation density gradient thicknesses agree well with the 1D isentropic theory estimates, where $\Delta l$ = $2z/M_e$ \cite{cgrth, gasdynamicsbook}, with the estimates being $\Delta l$ $\approx$ 0.70, 1.05, 1.40 mm for z = 2, 3 and 4 mm respectively.
The different shock features between the curvatures can also be observed from the density lineouts along the nozzle axis, shown in the inset plots of Fig. \ref{fig: DensLineouts}. Sudden spikes in the axial profiles correspond to shock diamonds created from standing shock waves \cite{MusinskiGasJet}, which are formed due to a sufficiently high $P_{exit}/P_{amb}$, causing compression waves to coalesce into focused shocks \cite{liona}. This is typical for under-expanded jets, where $P_{exit} > P_{amb}$, and has been extensively studied \cite{underexp}. The position of the nozzle throat and exit are marked in the inset plots to indicate the relative positions of the shock diamonds. For the bell nozzle, the second shock diamond is $\sim$4 mm from the exit, outside of the nozzle and is comparable to the first shock in magnitude. When observing the straight nozzle's axial profile, this second shock diamond is pushed back into the region inside the nozzle between the throat and exit with no observable density spike in the region outside the nozzle, indicating a weaker focusing effect. For the trumpet nozzle, in addition to the second shock diamond being pushed back behind the exit, a third shock diamond, weaker in magnitude, is also pushed to sit behind the exit, exhibiting the trumpet nozzle's shock suppression. This third shock diamond is circled in the inset plots of Figs. \ref{fig: DensLineouts}(e), (f).
\subsection{\label{sec: compres} Comparison of Simulation Results with Experimental Measurements}
The experimental measurement results are shown in Fig. \ref{fig: ExpMes}. For each nozzle, the interferograms taken closer and farther from the nozzle exit were concatenated to yield the full density map. The measured density maps matched the simulation maps well in shape and shock features, such as the FWHM lines. The strong focusing effect in the measured density field of the bell nozzle, shown in Fig. \ref{fig: ExpMes}(a), is observed, where the density before the shock diamond is low but spikes up close to the shock diamond. The weaker focusing of the straight nozzle and the shock suppression of the trumpet nozzle are similarly observed, shown in Fig. \ref{fig: ExpMes}(c) and (e) respectively.
The actual pressure delivered to the nozzle inlet is unknown and likely lower than set by the gas regulator due to the lossy connections between the valve and regulator. This explains why the measured density is lower than that from simulation, which treats the inlet pressure as the same as the regulator pressure. Because the backing pressure only changes the quantitative density and not the normalized profile shape \cite{Semushin}, the simulation profiles were normalized to the measured profiles for profile comparison. The measured profiles demonstrated the qualitative features predicted by simulation. For the bell nozzle profiles in Fig. \ref{fig: ExpMes}(b), the z = 2 mm measured profile shows a dip similar to the simulation profile. At z = 4 mm, the measured profile displays a spike, characteristic of the large spikes observed in simulation close to the shock diamond. The measured straight nozzle profiles contained the slight density dips observed in simulation, preventing them from being flat-top. The measured trumpet nozzle profiles at z = 2 and 3 mm were flat-top as predicted by simulation. For the trumpet nozzle, the simulation profiles are broader at the edges compared to the experimental profiles, which can be explained by wall slip not being modeled in the simulations, affecting the boundary layer formulation \cite{wallslip} and thereby influencing the density gradient thicknesses and profile edges \cite{gasjetSchmid}. All profiles had relatively small RMS fluctuations, indicated by the shaded regions around the measured profiles in Fig. \ref{fig: ExpMes}(b), (d) and (f).
\subsection{Optimization of the Trumpet Geometry \label{sec: opttrump}}
\begin{figure}[H]
\centering
\includegraphics[width = 0.5\textwidth]{HeAxisProfilesTrumpetNozzles.png}
\caption{(Color) Density profiles along the nozzle axis for trumpet geometries with various $R_c$. Gas used was He. Nozzle throat and exit are marked.}
\label{fig: tuningRc}
\end{figure}
Multiple trumpet geometries were simulated to optimize the flat-top profiles produced by the nozzle. The optimization procedure was then compared to the straight nozzle optimization process. The trumpet geometry's shock suppression does not automatically guarantee flat-top density profiles. The strength of the trumpet nozzle's shock suppression is inversely related to the magnitude of $R_c$, as shown in Fig. \ref{fig: tuningRc}. For example, for the case of $R_c$ = -50 mm, the third shock diamond is much closer to the throat than for trumpet geometries with larger $|R_c|$. On the other hand, for $R_c$ = -125 mm, the third shock diamond is closer to the exit than for the other radii of curvature. A larger curvature, meaning a smaller $|R_c|$, leads to shock diamonds being pushed further back into the diverging section, yielding a stronger suppression.
Because $d^*$ and $d_e$ are constrained, the optimal trumpet geometry involved finding the right combination of $R_c$, $l_d$ and $\theta_e$. Defining the start of the diverging profile as the origin and $\Delta r$ = $r_e - r^*$, the following two equations can be written to define the two endpoints of the arc, corresponding to Fig. \ref{fig: halfanglecond}:
\begin{equation}
x_c^2 + y_c^2 = R_c^2; \quad (l_d - x_c)^2 + (\Delta r - y_c)^2 = R_c^2
\label{eq: arceqn}
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width = 0.5\textwidth]{halfanglecondition.png}
\caption{The trumpet nozzle's diverging section. Important parameters labeled.}
\label{fig: halfanglecond}
\end{figure}
The slope of the nozzle exit is then $\frac{dy}{dx}$ = $\frac{-(l_d - x_c)}{\Delta r - y_c}$. Solving the two equations shown above, we find the exit slope to be:
\begin{equation}
\frac{dy}{dx} = \frac{-(\Delta r)\left(\sqrt{-(\Delta r)^2\left((\Delta r)^2 + l_d^2\right)\left((\Delta r)^2 + l_d^2 - 4R_c^2\right)} + (\Delta r)^2l_d + l_d^3\right)}{(\Delta r)^4 + (\Delta r)^2l_d^2 - l_d\sqrt{-(\Delta r)^2\left((\Delta r)^2 + l_d^2\right)\left((\Delta r)^2 + l_d^2 - 4R_c^2\right)}}
\label{eq: exitslope}
\end{equation}
This exit slope corresponds to an exit half-angle of $\theta_e$ = $\tan^{-1}{(\frac{dy}{dx})}$. The final optimization condition is then:
\begin{equation}
\theta_e = \sin^{-1}{(1/M_e)} = \tan^{-1}{(\frac{dy}{dx}(d_e, d^*, l_d, R_c))}
\label{eq: exitangle}
\end{equation}
This condition can be met by tuning the radius of curvature while keeping all other parameters the same. The calculated half-angles, $\theta_e$, corresponding to different radii of curvature for the trumpet geometry used, where $M_e = 5.7$ and $l_d$ = 9 mm, are tabulated in Table \ref{tab: table2}. The optimal $R_c$ = -85 mm is labeled.
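As a concrete check of this tuning procedure, Eq.~\eqref{eq: exitslope} can be evaluated directly for the radii of curvature in Table~\ref{tab: table2}. The Python sketch below does this; the value $\Delta r \approx 1.1$ mm is an assumption on our part, inferred from the throat diameter and the $M_e = 5.7$ area ratio rather than quoted above, and with it the sketch reproduces the tabulated half-angles to within rounding.
\begin{verbatim}
import numpy as np

def exit_half_angle(delta_r, l_d, R_c):
    # Exit half-angle (deg) of a circular-arc wall of radius |R_c|
    # through (0, 0) and (l_d, delta_r), following Eq. (exitslope).
    s = delta_r**2 + l_d**2
    root = np.sqrt(-delta_r**2 * s * (s - 4.0 * R_c**2))
    slope = -delta_r * (root + delta_r**2 * l_d + l_d**3) \
        / (delta_r**4 + delta_r**2 * l_d**2 - l_d * root)
    return np.degrees(np.arctan(slope))

l_d, delta_r = 9.0, 1.1          # mm; delta_r is an assumed value
for R_c in (-50.0, -70.0, -85.0, -100.0, -125.0):
    print(R_c, round(exit_half_angle(delta_r, l_d, R_c), 2))
# -> 12.17, 10.68, 10.03, 9.57, 9.05  (cf. Table II)
\end{verbatim}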
\begin{table}[H]
\caption{\label{tab: table2} Corresponding exit half-angles, $\theta_e$, for various $R_c$ for the trumpet nozzle with $l_d$ = 9 mm and all other parameters kept constant. The optimal half-angle, matching the Mach angle, is marked with an asterisk.}
\begin{ruledtabular}
\begin{tabular}{lc}
\textrm{$R_c$ [mm]} & \textrm{$\theta_e$ [$^\circ$]} \\
\colrule
-50 & 12.17\\
-70 & 10.68\\
-85 & 10.03*\\
-100 & 9.56\\
-125 & 9.05\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width = \textwidth]{Hez2mm800umThroatDifferentTrumpetCurvatures.png}
\caption{}
\label{fig: z=2mmCurves}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width = \textwidth]{Hez4mm800umThroatDifferentTrumpetCurvatures.png}
\caption{}
\label{fig: z=4mmCurves}
\end{subfigure}
\caption{(Color) Simulation density profiles for trumpet geometry nozzles with various radii of curvature, $R_c$. All other parameters were kept the same and all nozzles had $M_e = 5.7$. Note the consistent flat-top profiles for nozzles that have exit half-angles close to the Mach angle. The optimal radius of curvature, $R_c$ = -85 mm, is marked with an asterisk.}
\label{fig: tuningtrumpet}
\end{figure}
The radial density profiles for the various simulated $R_c$ in the trumpet geometry are shown in Fig. \ref{fig: tuningtrumpet}. For trumpet geometries with $\theta_e$ closer to the Mach angle, the density profiles remain flat-top, with z = 2 and 4 mm being shown as examples. The best consistency of the flat-top profiles is achieved at the optimal $R_c$ = -85 mm. With $R_c$ = -125 mm, the shock suppression is too weak, leading to more noticeable density variations. On the other hand, in the case of $R_c$ = -50 mm, the larger half-angle causes the output plume to diverge and disperse more, leading to a lower overall density and a nonuniform density profile. This optimization condition minimizes shocks by matching the $\theta_e$ to the Mach angle, analogous to the straight nozzle's Mach angle condition \cite{cgrth, Semushin, MKrish}.
\begin{figure}[H]
\centering
\includegraphics[width = 0.5\textwidth]{Hez4mm800umThroatTrumpetDifferentRcSameTheta.png}
\caption{(Color) Simulation density profiles of the trumpet nozzles with various $R_c$, and thus various $l_d$, with $\theta_e$ held constant. The optimal radius of curvature, $R_c$ = -85 mm, is marked with an asterisk.}
\label{fig: samethetadiffRc}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width = 0.5\textwidth]{Hez4mm800umThroatTrumpetDifferentthetaSameRc.png}
\caption{(Color) Simulation density profiles of the trumpet nozzles with various $\theta_e$ with $R_c$ held at the optimal -85 mm. Note how the flat-top profile shape is present even with different $\theta_e$. The Mach angle, $\theta_e$ = $10.03^{\circ}$, is marked with an asterisk.}
\label{fig: sameRcdifftheta}
\end{figure}
Holding $l_d$ constant while $R_c$ is varied also changes $\theta_e$, which can entangle the respective effects of the two parameters. To isolate the effect of $R_c$, $\theta_e$ was held constant at the Mach angle $\theta_e$ = $10.03^{\circ}$ while $R_c$ was varied, which in turn changed $l_d$. As seen in Fig. \ref{fig: samethetadiffRc}, a difference in profile shape is observed between the three $R_c$ geometries. In particular, the $R_c$ = $-95$ mm profile has a small bump, which can be explained by its weaker shock suppression. This indicates that while setting the trumpet nozzle's $\theta_e$ to the Mach angle approximately creates flat-top profiles, further adjustment of $R_c$ is needed afterwards to ensure such profiles are produced.
On the other hand, for the trumpet nozzle, $\theta_e$ can be varied while $R_c$ is maintained at the optimal value of $-85$ mm. As observed in Fig. \ref{fig: sameRcdifftheta}, the flat-top shape is maintained at the optimal $R_c$ even though $\theta_e$ varies. This further suggests that adjusting $R_c$ after optimizing $\theta_e$ to the Mach angle is more effective in producing consistent flat-top profiles for the trumpet nozzle than optimizing $\theta_e$ alone.
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width = \textwidth]{Hez2mm800umThroatStraightDifferentThetas.png}
\caption{}
\label{fig: z=2mmCurvesStraight}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width = \textwidth]{Hez4mm800umThroatStraightDifferentThetas.png}
\caption{}
\label{fig: z=4mmCurvesStraight}
\end{subfigure}
\caption{(Color) Simulation density profiles of straight nozzles with various exit half-angles, $\theta_e$. All other parameters were kept the same and all nozzles had $M_e = 5.7$. The Mach angle, $\theta_e$ = $10.03^{\circ}$, is marked with an asterisk.}
\label{fig: straightdifftheta}
\end{figure}
For the straight nozzle, density perturbations can also be minimized to yield flat-top profiles by varying $\theta_e$, as seen in Fig. \ref{fig: straightdifftheta}; this is done by varying $l_d$ since $d^*$ and $d_e$ are held constant. Matching the nozzle exit angle $\theta_e$ to the Mach angle $\sin^{-1}{(1/M_e)}$ roughly leads to a flatter profile, as it minimizes the shocks' perturbation of the density, in agreement with past studies on straight nozzle optimization \cite{MKrish, cgrth, Semushin}. Further refinement of $\theta_e$, specifically increasing it above the Mach angle, leads to flatter profiles, as shown in Fig. \ref{fig: straightdifftheta}. However, because the only way to control the shock features of the straight nozzle is by varying $\theta_e$ (since $d^*$ and $d_e$ are held constant), the straight nozzle lacks the extra parameter of control offered by $R_c$. This lack of curvature removes the ability to tune the shock suppression in addition to the $\theta_e$ optimization, explaining why the density profiles in Fig. \ref{fig: straightdifftheta} still show fluctuations near the optimal angle. Therefore, although both straight and trumpet nozzles can be optimized to create flat-top profiles, the trumpet nozzle allows for finer refinement.
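For comparison, the Mach-angle condition alone fixes the straight nozzle's diverging length: for a conical wall, $\tan\theta_e = \Delta r / l_d$, so $l_d = \Delta r / \tan\theta_e$ once $d^*$ and $d_e$ are chosen. A minimal sketch, reusing the assumed $\Delta r \approx 1.1$ mm from above together with the quoted Mach angle:
\begin{verbatim}
import numpy as np

# Conical (straight) diverging wall: tan(theta_e) = delta_r / l_d.
theta_e, delta_r = np.radians(10.03), 1.1   # rad, mm (delta_r assumed)
l_d = delta_r / np.tan(theta_e)
print(round(l_d, 2))                        # -> 6.22 mm
\end{verbatim}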
\section{\label{sec: discon} Discussion and Conclusion}
In this paper, we examined three different diverging-section curvatures, realized in the bell, straight and trumpet nozzles, and investigated the effect of this curvature variation on the gas jet density field as well as on the shock features. The trumpet nozzle was also optimized to produce consistent flat-top profiles at mm-scale distances from its exit. The trumpet optimization procedure was then compared to the straight nozzle optimization.
The study of the diverging section curvature's effect on gas jet density was conducted in two parts. The first part simulated various nozzle geometries to study this effect for few-millimeter-scale nozzles. The main result was that the nozzle curvature has a great impact on the resulting gas jet density field and is therefore an important parameter for LPA gas jet design. It was found that the trumpet geometry, like the straight nozzle, can be optimized to create consistent flat-top density profiles. The trumpet $\theta_e$ optimization condition is similar to the Mach angle condition of the straight nozzle, but with an added parameter of control in the radius of curvature, which allows additional adjustment of the nozzle's shock suppression strength. This added curvature parameter provides finer refinement in the trumpet optimization and makes the trumpet nozzle more versatile in producing flat-top profiles than the straight nozzle. The bell nozzle created a focusing effect that amplified shock diamonds near the nozzle exit, leading to density spikes, which can be exploited as a design concept for short, high-density kHz LPA targets driven by few-cycle laser pulses \cite{kHzLPA1, kHzLPA2}. The second part verified the simulation results with neutral density interferometry measurements, which showed very good qualitative agreement.
The manufactured trumpet nozzle will be applied in ongoing LPA-based MeV Thomson photon source experiments, leading the way to a compact, affordable and narrow-bandwidth x-ray source \cite{thomsonmps}. Future work will focus on designing nozzles with tailored density profiles, e.g., to integrate injection, acceleration and deceleration in one jet, or to optimize betatron radiation with multiple sections of varying density \cite{tomkus, phuoc, betaPhysLett}.
\section*{Acknowledgements}
This work is supported by the U.S. Department of Energy, NNSA DNN R\&D and SC HEP under Contract No.
DE-AC02-05CH11231. This material is also based upon work supported by the Department of
Energy National Nuclear Security Administration through the Nuclear Science and
Security Consortium under Award Number(s) DE-NA0003180 and/or DE-NA0000979.
The authors gratefully acknowledge the technical support from Zach Eisentraut and Tyler Sipla.
\section*{Data Availability}
Raw data was generated at Lawrence Berkeley National
Laboratory. The data that support the findings of this study are
available from the corresponding author upon reasonable request.
\section{Introduction}\label{sec:intro}
The stable set polytope of a graph $G$, $\STAB(G)$, is one of the most studied polyhedra related to set packing problems. In 1975, Chv\'atal~\cite{Ch75} gave a characterization of the adjacency of its vertices: the characteristic vectors of two stable sets $S$ and $S'$ of $G$ are adjacent in $\STAB(G)$ if, and only if, the subgraph of $G$ induced by $(S\setminus S')\cup (S'\setminus S)$ is connected.
Since a characterization of vertex adjacency may provide more insight into the associated combinatorial problem, and sometimes even the basis for efficient algorithms, it is quite natural to try to extend Chv\'atal's result to other settings. This was done by several authors, sometimes in the context of the simplex method or related to the Hirsch conjecture, see for instance Hausmann and Korte~\cite{HK78}, Ikebe and Tamura~\cite{IT95}, Alfakih and Murty~\cite{AM98}, Matsui and Tamura~\cite{MT93}, or Michini and Sassano~\cite{MS13}. See also Michini~\cite{Mi12} and references therein.
Sometimes, it may be very difficult to test adjacency: Papadimitriou~\cite{Pa78} observed the difficulty of the adjacency problem for the traveling salesman polytope, later Chung~\cite{Ch80} obtained a similar result for the set covering polytope, whereas Matsui~\cite{Ma95} showed the NP-completeness of the non-adjacency problem for the set covering polytope even when the matrix involved has exactly three ones per row. Thus, in contrast to the case of $\STAB(G)$, it is unlikely that a simple characterization of the adjacency of vertices of the set covering polyhedron may be given.
Nevertheless, in this work we go one step beyond the usual sufficient condition of connectivity of a certain graph, and give another condition which is also sufficient for the adjacency of vertices of the (unbounded version of the) set covering polyhedron, showing that more restrictive but similar conditions are also necessary in the case of row circular matrices. Thus, the adjacency problem for the set covering polyhedron is polynomial for these matrices.
This paper is organized as follows. After some preliminary comments on the setting and notation in~\tref{Section}{sec:background}, in~\tref{Section}{sec:graph} we present the graph which we associate with each pair of vertices of the set covering polyhedron defined by a binary matrix $A$. A sufficient condition for adjacency in terms of this graph is presented in~\tref{Theorem}{thm:suf} of~\tref{Section}{sec:suf}. In~\tref{Section}{sec:nec} we establish a characterization of adjacency which applies to the case where $A$ is a row circular matrix (\tref{Theorem}{thm:CharactAdj}), and give an example (\tref{Example}{exam:ppfnd:13}) showing that our sufficient condition is not always necessary even for circulant matrices. Finally, in~\tref{Section}{sec:mni} we apply our results to obtain a new infinite family of minimally nonideal matrices based on known minimally nonideal circulant matrices.
\section{Notation and background}\label{sec:background}
Let us start by establishing some notation, definitions and known results.
We denote by $\mathbb N$ the set of natural numbers, $\mathbb N = \{1,2,\dotsc\}$; by $\mathbb Z$ the set of integers; by $\mathbb R$ the set of real numbers; by $\mathbb B$ the set of binary numbers, $\mathbb B = \{0,1\}$; and by $\I$ the set $\{1,2,\dots,n\}$.
The vectors in the canonical basis of $\mathbb R^n$ will be denoted by $\mathbf{e}_1,\dots,\mathbf{e}_n$.
We denote by $\mathbf{0}_n$ and $\mathbf{1}_n$ the vectors in $\mathbb R^n$ with all zeroes and all ones, respectively, dropping the subindex $n$ if the dimension is clear from the context.
The scalar product in $\mathbb R^n$ is denoted by a dot, so that, e.g., $x\cdot\mathbf{e}_i = x_i$ for $x\in\mathbb R^n$.
Given $x$ and $y$ in $\mathbb R^n$, we say that $x$ \emph{dominates} $y$, and write $x\ge y$, if $x_i\ge y_i$ for all $i\in\I$.
$\card{X}$ denotes the cardinality of the finite set $X$.
The \emph{support} of $x\in\mathbb R^n$ is the set $\supp x = \{i\in\I \mid x_i \ne 0\}$. Conversely, given $X\subset\I$, its \emph{characteristic vector}, $\car(X)\in\mathbb B^n$, is defined by
\[
\mathbf{e}_i\cdot \car(X) = \begin{cases}
1 & \text{if $i\in X$}, \\
0 & \text{otherwise},
\end{cases}
\]
so that $\supp (\car(X)) = X$.
Given a binary matrix $A\in \mathbb B^{m\times n}$, the \emph{set covering polyhedron} associated with $A$, $Q\ch(A)$, is the convex hull of non-negative integer solutions of $Ax\ge\mathbf{1}$,
\begin{equation}
\label{defn:Q:ast}
Q\ch(A) = \conv \{x\in\mathbb Z^n\mid Ax\ge\mathbf{1}, x\ge\mathbf{0}\},
\end{equation}
and we denote by $Q(A)$ the linear relaxation
\begin{equation}
\label{defn:Q}
Q(A) = \{x\in\mathbb R^n \mid Ax\ge\mathbf{1}, x\ge\mathbf{0}\}.
\end{equation}
In this paper we assume that the binary matrix $A$ associated with the set covering polyhedra in~\eqref{defn:Q:ast} and~\eqref{defn:Q} verifies the following assumptions:
\begin{assums}\label{assums:A}
The matrix $A\in\mathbb B^{m\times n}$ satisfies:
\begin{itemize}
\item
it has no dominating rows,
\item
it has between $2$ and $n-1$ ones per row,
\item
it has no column of all ones or of all zeroes.
\end{itemize}
\end{assums}
We denote by $C_t$ the support of the $t$-th row of $A$, and we let $\cov = \{C_1,\dots,C_m\}$.
\tref{Assumptions}{assums:A} imply that $2\le\card{C}\le n-1$ for every $C\in\cov$, that $\bigcup_{C\in\cov} C = \I$, and that $\cov$ is a \emph{clutter} in the nomenclature of \cite{CN94}.
The vertices of $Q\ch(A)$ are the binary vertices of $Q(A)$. They form the {\em blocker} of $A$, $\blk(A)$, and their supports are the {\em minimal transversals} of $\cov$, which we denote by $\tra$.
That is, $T\in\tra$ if and only if $T\cap C\ne\emptyset$ for all $C\in\cov$ and if $R\subset T$ with $R\cap C\ne\emptyset$ for all $C\in\cov$ then $R = T$. Notice that
\begin{itemize}
\item
$C\cap T \ne\emptyset$ for all $C\in\cov$ and $T\in\tra$.
\item
For every $T\in\tra$ and $p\in T$,
there exists $C\in\cov$ such that $C\cap T = \{p\}$.
\end{itemize}
As $\blk(\blk(A)) = A$ if $A$ is a clutter matrix, we also have:
\begin{itemize}
\item
For every $C\in\cov$ and $p\in C$
there exists $T\in\tra$ such that $C\cap T = \{p\}$.
\end{itemize}
\begin{rem}\label{rem:card:T}
Notice that \tref{Assumptions}{assums:A} also imply that $\tra$ has properties similar to those of $\cov$: $2\le\card{T}\le n-1$ for every $T\in\tra$, and $\bigcup_{T\in\tra} T = \I$.
\end{rem}
A \emph{convex combination} of the points $x^1, \ldots ,x^\ell$ of $\mathbb R^n$ is a point of the form $\sum_{k\in \I[\ell]} \lambda_k x^k$,
where $\sum_{k\in \I[\ell]} \lambda_k = 1$ and $\lambda_k\ge 0$ for $k\in \I[\ell]$. The combination is \emph{strict} if all $x^k$ are different and $0 < \lambda_k < 1$ for all $k\in \I[\ell]$.
The following result is well known and we will use it to prove adjacency:
\begin{propo}\label{propo:adys:1}
Suppose $P = \{x\in\mathbb R^n \mid Ax\ge b, x\ge\mathbf{0}\}$, where $A\in\mathbb R^{m\times n}$ has non-negative entries and $b\ge\mathbf{0}$.
If $v$ and $v'$ are distinct vertices of $P$, the following are equivalent:
\begin{enumcona}
\item\label{propo:adys:1:a}
$v$ and $v'$ are adjacent in $P$.
\item\label{propo:adys:1:b}
If a strict convex combination $\sum_{k\in \I[\ell]} \lambda_k x^k$ of points of $P$ belongs to the segment with endpoints $v$ and $v'$,
then $x^k$ also belongs to this segment for all $k\in \I[\ell]$.
\item\label{propo:adys:1:c}
If $y = \sum_{k\in \I[\ell]} \lambda_k u^k$
is a strict convex combination of vertices $u^1, \ldots, u^\ell$ of $P$ and
$y\le \frac{1}{2}\,(v + v')$,
then $\ell =2$ and, without loss of generality, $u^1=v$ and $u^2=v'$.
\end{enumcona}
\end{propo}
We point out that it is possible to relax the usual condition $y=\frac{1}{2}\,(v + v')$ to the inequality $y\le \frac{1}{2}\,(v + v')$ in~\trrefp{Proposition}{propo:adys:1}{propo:adys:1:c} due to the fact that we assume that $A$ has non-negative entries, and so the polyhedron $P$ satisfies the following property: $x\in P$ and $z\geq x$ imply $z\in P$.
To prove that the vertices $v$ and $v'$ of $Q\ch(A)$ are not adjacent, it will be convenient to make use of a variant of \tref{Proposition}{propo:adys:1}.
Namely, suppose $v$ and $v'$ can be decomposed as nontrivial sums of binary vectors:
\[
v = z + c + d
\quad \text{and} \quad
v' = z + c' + d',
\]
so that $z$ is a ``common part'',
and the remaining parts are split into two: $c$ and $d$ for $v$ and $c'$ and $d'$ for $v'$.
Suppose also that by interchanging $d$ and $d'$ we obtain two points of $Q\ch(A)$:
\[
x = z + c + d' = v - d + d'
\quad \text{and} \quad
x' = z + c' + d = v' - d' + d .
\]
Since $\frac{1}{2}\,(x + x') = \frac{1}{2}\,(v + v')$, if we could assure that either $x$ or $x'$ does not belong to the segment with endpoints $v$ and $v'$, then by \trrefp{Proposition}{propo:adys:1}{propo:adys:1:b} we would conclude that $v$ and $v'$ are not adjacent in $Q\ch(A)$.
This is the idea behind the next proposition.
\begin{propo}\label{propo:adys:2}
Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$. Suppose there exist $d$ and $d'$ in $\mathbb R^n$ such that:
\begin{itemize}
\item
$\mathbf{0}\leq d \leq v$ and $\mathbf{0}\lneqq d'$,
\item
$v\cdot d' = 0$,
\item
$x= v - d + d'$ and $x'= v' - d' + d$ are elements of $Q\ch(A)$,
\item
$x$ is binary and different from $v'$.
\end{itemize}
Then $v$ and $v'$ are not adjacent in $Q\ch(A)$.
\end{propo}
\begin{proof}
We notice first that
\( 0\le (v - d)\cdot d' \le v\cdot d' = 0, \)
which implies \((v - d)\cdot d' = 0\).
If $x$ were equal to $v$, we would have
\[
\begin{aligned}
0
&= v\cdot d'
&& \text{by hypothesis,} \\
&= x\cdot d'
&& \text{since we are assuming $x = v$,} \\
&= (v - d + d')\cdot d'
\quad
&& \text{by definition of $x$,} \\
&= d'\cdot d'
&& \text{since $(v - d)\cdot d' = 0$,} \\
&> 0
&& \text{since $d'\ne\mathbf{0}$,}
\end{aligned}
\]
i.e., we obtain a contradiction.
Thus, $x$ is different from $v$.
Given that a segment with endpoints in $\mathbb B^n$ cannot contain other binary points, and that $x$ is binary and different from $v$ and $v'$, it follows that $x$ cannot belong to the segment with endpoints $v$ and $v'$.
The result now follows from~\trrefp{Proposition}{propo:adys:1}{propo:adys:1:b},
since $\frac{1}{2}\,(x + x') = \frac{1}{2}\,(v + v')$.
\end{proof}
\section{The joint saturation graph}\label{sec:graph}
The following definitions are essential in this paper.
\begin{defn}\label{defn:G}
Given a matrix $A\in\mathbb B^{m\times n}$ and the associated clutter $\cov$ as described in the previous section,
let $v$ and $v'$ be distinct vertices of $Q\ch(A)$. We construct a simple undirected graph $\G(v,v')$ depending on $v$, $v'$ and $A$, called the {\em joint saturation graph of $v$ and $v'$ (with respect to $A$)}, by the following setup:
\begin{itemize}
\item
the set of nodes of $\G(v,v')$ is
\[
\supp \ExtPv\bigtriangleup \supp \ExtPv' = (\supp \ExtPv \setminus \supp \ExtPv') \cup (\supp \ExtPv'\setminus \supp \ExtPv) \; ,
\]
\item
$\G(v,v')$ is bipartite with partite sets
\[
\supp \ExtPv \setminus \supp \ExtPv' \quad\text{and}\quad \supp \ExtPv'\setminus \supp \ExtPv\; ,
\]
\item
$p\in \supp \ExtPv \setminus \supp \ExtPv'$ and $p'\in \supp \ExtPv'\setminus \supp \ExtPv$ are neighbors in $\G(v,v')$ if there exists $C_t\in \cov$ such that
\begin{equation}
\label{equ:edge}
C_t\cap \supp \ExtPv = \{p\} \quad\text{and}\quad C_t\cap \supp \ExtPv' = \{p'\}.
\end{equation}
\end{itemize}
\end{defn}
See Figure~\ref{figure2} below for an illustration of~\tref{Definition}{defn:G}.
Following West~\cite{We01}, we will denote by $p\ngh p'$ and $p\nngh p'$ whether $p$ and $p'$ are neighbors in $\G(v,v')$ or not, respectively. A path of $\G(v,v')$ will be called {\em even} (resp. {\em odd}) if it contains an even (resp. odd) number of edges.
The name joint saturation graph of $v$ and $v'$ (with respect to $A$) comes from the fact that each edge of $\G(v,v')$ corresponds to an inequality in $A x \geq \mathbf{1}$ which is saturated (meaning satisfied with equality) by both vertices $v$ and $v'$ at the same time (observe that there may exist inequalities in $A x \geq \mathbf{1}$ which are saturated by both vertices $v$ and $v'$ at the same time and do not correspond to edges of $\G(v,v')$, because the saturation can be due to a coordinate in $\supp \ExtPv \cap \supp \ExtPv'$).
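Note that \tref{Definition}{defn:G} is directly algorithmic: a single pass over the rows of $A$ suffices to build $\G(v,v')$, recording an edge exactly when~\eqref{equ:edge} holds. The following Python sketch (the helper name and the $0$-based column indices are our own conventions, not part of the formal setting) illustrates this; for $A = \C{3}{2}$ (see~\tref{Example}{exam:cnk:1} below) and the vertices with supports $\{1,2\}$ and $\{2,3\}$, it returns the single edge joining columns $1$ and $3$, so $\G(v,v')$ is connected.
\begin{verbatim}
def joint_saturation_graph(A, v, vp):
    # A: 0/1 matrix as a list of rows; v, vp: binary vertices
    # of Q*(A). Returns the partite sets and edges of G(v, v').
    S  = {i for i, x in enumerate(v)  if x}     # supp v
    Sp = {i for i, x in enumerate(vp) if x}     # supp v'
    left, right = S - Sp, Sp - S                # partite sets
    edges = set()
    for row in A:
        C = {i for i, a in enumerate(row) if a} # support of the row
        cv, cvp = C & S, C & Sp
        if len(cv) == 1 and len(cvp) == 1:      # Eq. (edge)
            (p,), (q,) = cv, cvp
            if p in left and q in right:
                edges.add((p, q))
    return left, right, edges

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]           # C(3, 2)
print(joint_saturation_graph(A, [1, 1, 0], [0, 1, 1]))
# -> ({0}, {2}, {(0, 2)})
\end{verbatim}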
\begin{rem}\label{rem:contraction}
Let us recall that given a nontrivial proper subset $I$ of $\I$, the \emph{contraction minor} $A/I$ is obtained by eliminating the columns of $A$ with indices in $I$, and then removing any dominating row that might appear.
Observe that $v$ and $v'$ are adjacent in $Q\ch(A)$ if, and only if, they are adjacent in the face $\{x\in Q\ch(A)\mid x_i =0 \text{ for } i\in I\}$, where $I=\I \setminus (\supp \ExtPv \cup \supp \ExtPv')$. Note that the projection of this face on the coordinates in $\supp \ExtPv \cup \supp \ExtPv'$ is given by $Q\ch(A/I)$, and that the joint saturation graph of $v$ and $v'$ (with respect to $A$) coincides with the joint saturation graph of the projections of $v$ and $v'$ on the coordinates in $\supp \ExtPv \cup \supp \ExtPv'$ (with respect to the contraction minor $A/I$).
Finally, observe that $\G(v,v')$ is a subgraph of the graph $G$ whose edge-node incidence matrix is the row submatrix of $A/I$ given by the rows with precisely two ones (in other words, each edge of $\G(v,v')$ corresponds to a row of $A/I$ with precisely two ones, but there may exist rows of $A/I$ with precisely two ones which do not correspond to edges of $\G(v,v')$).
\end{rem}
\begin{rem}\label{rem:graph:1}
If $v$ and $v'$ are distinct vertices of $Q\ch(A)$, then:
\begin{itemize}
\item
$\supp \ExtPv \setminus \supp \ExtPv'\ne\emptyset$ since we cannot have $v\le v'$.
Similarly, we have $\supp \ExtPv' \setminus \supp \ExtPv\ne\emptyset$.
\item
Consequently, $\G(v,v')$ has at least two nodes.
\item
If $\G(v,v')$ has exactly two nodes, then it is connected: if $p\in\supp \ExtPv $ and $p'\in\supp \ExtPv' $ are the two nodes, $R = \supp v\cap \supp v'$, and $C_t\in\cov$ is such that $C_t\cap\supp \ExtPv = \{p\}$, then necessarily $C_t\cap R = \emptyset$ and $C_t\cap \supp \ExtPv' = \{p'\}$.
\end{itemize}
\end{rem}
Sufficient and necessary conditions for the adjacency of vertices of $Q\ch(A)$ will be given in terms of properties of the joint saturation graph. With this aim, it is convenient to make the following definition:
\begin{defn}\label{defn:partly}
A bipartite graph is said to be \emph{partite-connected} if one of its partite sets is contained in a component, and it is said to be \emph{almost-connected} if it has exactly two components, one of which is an isolated node.
\end{defn}
Observe that according to our definition, a connected bipartite graph is partite-connected but not almost-connected, and an almost-connected graph is always partite-connected.
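Both properties reduce to a single computation of connected components of $\G(v,v')$, and can therefore be tested in time linear in the size of the graph. A sketch in the same style as before (again with helper names of our own):
\begin{verbatim}
from collections import deque

def components(nodes, edges):
    # Connected components of an undirected graph via BFS.
    adj = {u: set() for u in nodes}
    for p, q in edges:
        adj[p].add(q); adj[q].add(p)
    seen, comps = set(), []
    for u in nodes:
        if u in seen:
            continue
        comp, queue = set(), deque([u])
        while queue:
            w = queue.popleft()
            if w not in comp:
                comp.add(w)
                queue.extend(adj[w] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_partite_connected(left, right, edges):
    comps = components(left | right, edges)
    return any(left <= c or right <= c for c in comps)

def is_almost_connected(left, right, edges):
    comps = components(left | right, edges)
    return len(comps) == 2 and min(len(c) for c in comps) == 1
\end{verbatim}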
\section{A sufficient condition for adjacency}\label{sec:suf}
In this section we present a sufficient condition for the adjacency of two vertices of $Q\ch(A)$ in terms of their joint saturation graph. For this, we will need the following lemma.
\begin{lem}\label{lem:graph:1}
Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$, and let $\G(v,v')$ be their joint saturation graph. Suppose $y = \sum_{k\in \I[\ell]} \lambda_k u^k$
is a strict convex combination of vertices
$u^1,\ldots ,u^\ell$ of $Q\ch(A)$
such that $y\le \frac{1}{2}\,(v + v')$.
Then, for each $k\in \I[\ell]$, we have:
\begin{enumcona}
\item\label{lem:graph:1:a}
$\supp \ExtPu^k \subseteq \supp \ExtPv \cup \supp \ExtPv'$.
\item\label{lem:graph:1:b}
$\Card{\{p,p'\}\cap \supp u^k} = 1$
whenever $p$ and $p'$ are neighbors in $\G(v,v')$.
\end{enumcona}
\end{lem}
\begin{proof}
Let us write $z = \frac{1}{2}\,(v + v')$.
If $q\in \supp u^k$ we must have $u^k\cdot \mathbf{e}_q > 0$, and therefore
$y\cdot\mathbf{e}_q > 0$ since the convex combination for $y$ is strict.
Hence $z\cdot\mathbf{e}_q > 0$ as $z\ge y$.
Therefore, $q\in \supp \ExtPv \cup \supp \ExtPv'$.
For the second part, let $a$ be a row of $A$ such that~\eqref{equ:edge} holds with $C_t = \supp a$.
Since $\supp \ExtPu^k \subseteq \supp \ExtPv \cup \supp \ExtPv'$, by~\eqref{equ:edge} it follows that $p$ and $p'$ are the only elements of $C_t$ which can belong to $\supp \ExtPu^k$.
Then, from $\{p, p'\}\subset C_t$ we conclude that
\[
\{p, p'\} \cap \supp \ExtPu^k =C_t\cap \supp \ExtPu^k .
\]
Moreover, by~\eqref{equ:edge} we have $a\cdot z = 1$, and since $z\ge y$, it follows that $a\cdot y \le 1$.
On the other hand, as $y\inQ\ch(A)$, we have $a\cdot y\ge 1$, and therefore $a\cdot y = 1$.
Similarly, since $y$ is a strict convex combination of the points
$u^1,\ldots ,u^\ell$
and $a\cdot u^h\ge 1$ for all $h\in \I[\ell]$, we conclude that $
a\cdot u^h = 1$ for all $h\in \I[\ell]$. Thus, in particular we have $\Card{C_t\cap \supp \ExtPu^k} = 1$, proving the lemma.
\end{proof}
\begin{lem}\label{lem:CondImplicaAdy}
Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$.
Suppose $y = \sum_{k\in \I[\ell]} \lambda_k u^k$
is a strict convex combination of vertices
$u^1,\ldots ,u^\ell$ of $Q\ch(A)$
such that $y\le \frac{1}{2}\,(v + v')$.
If the joint saturation graph of $v$ and $v'$ is partite-connected, then $\ell =2$ and, without loss of generality, $u^1=v$ and $u^2=v'$.
\end{lem}
\begin{proof}
To prove the lemma it is enough to show that, for each $k\in \I[\ell]$, either $u^k = v$ or $u^k = v'$.
Without loss of generality, assume that the partite set $\supp \ExtPv \setminus \supp \ExtPv'$ is contained in a component of $\G(v,v')$.
Let $I=\left\{ k\in \I[\ell] \mid \supp \ExtPu^k \cap (\supp \ExtPv \setminus \supp \ExtPv') = \emptyset\right\}$ and $J= \I[\ell] \setminus I$.
Observe that for $k\in I$ we must have $\supp \ExtPu^k \subseteq \supp \ExtPv'$ as
$\supp \ExtPu^k \subseteq \supp \ExtPv\cup \supp \ExtPv'$ by \trrefp{Lemma}{lem:graph:1}{lem:graph:1:a}. Thus, since $v'$ and $u^k$ are binary vertices of $Q\ch(A)$, we conclude that
\begin{equation}
\label{equ:nestor:1}
u^k = v'
\quad\text{for all }k\in I.
\end{equation}
On the other hand, for $k\in J$, let us fix $p\in \supp \ExtPu^k \cap (\supp \ExtPv \setminus \supp \ExtPv')$.
Since $\supp \ExtPv \setminus \supp \ExtPv'$ is contained in a component of $\G(v,v')$,
for each $q\in (\supp \ExtPv \setminus \supp \ExtPv')\setminus \{p\}$ there exists a path
$p = p_1$, $p'_2$,$\dots$, $p'_h$, $p_h = q$ connecting $p$ and $q$,
where $p'_i \in \supp \ExtPv' \setminus \supp \ExtPv$ and $p_i\in \supp \ExtPv \setminus \supp \ExtPv'$ for $i = 2,\dots,h$.
Using repeatedly \trrefp{Lemma}{lem:graph:1}{lem:graph:1:b}, we see that $p_1 = p\in \supp \ExtPu^k$, $p'_2\notin \supp \ExtPu^k$, $p_2\in \supp \ExtPu^k$, and so on, i.e., $p_i\in \supp \ExtPu^k$ and $p'_i\notin \supp \ExtPu^k$ for all $i = 2,\dots, h$. Thus, in particular we have $q= p_h \in \supp \ExtPu^k$. Since this holds for any $q\in (\supp \ExtPv \setminus \supp \ExtPv')\setminus \{p\}$, we conclude that
\begin{equation}
\label{equ:nestor:2}
\supp \ExtPv \setminus \supp \ExtPv'\subseteq \supp \ExtPu^k
\quad\text{for all } k\in J.
\end{equation}
Consider now any $q\in \supp \ExtPv \setminus \supp \ExtPv'$. From~\eqref{equ:nestor:1}, \eqref{equ:nestor:2} and the fact that $\frac{1}{2}\,(v + v')\geq y= \sum_{k\in \I[\ell]} \lambda_k u^k$, we obtain
\[
\frac{1}{2}=\frac{1}{2}\,(v_{q} + v'_{q})\geq \sum_{k\in I} \lambda_k u^k_{q} + \sum_{k\in J} \lambda_k u^k_{q}=\sum_{k\in J} \lambda_k\; ,
\]
and so $\sum_{k\in I} \lambda_k=1-\sum_{k\in J} \lambda_k\geq \frac{1}{2}$. Since using~\eqref{equ:nestor:1} we also have
\[
\frac{1}{2}\,(v + v')\geq \sum_{k\in I} \lambda_k u^k + \sum_{k\in J} \lambda_k u^k=\sum_{k\in I} \lambda_k v' + \sum_{k\in J} \lambda_k u^k,
\]
we conclude that $\frac{1}{2}\, v \geq \sum_{k\in J} \lambda_k u^k$, which implies $\supp \ExtPu^k\subseteq \supp \ExtPv$ for all $k\in J$. Therefore, since $v$ and $u^k$ are binary vertices of $Q\ch(A)$, it follows that $u^k = v$ for all $k\in J$. This completes the proof.
\end{proof}
Using~\tref{Proposition}{propo:adys:1} and~\tref{Lemma}{lem:CondImplicaAdy}, we obtain the main result of this section:
\begin{thm}\label{thm:suf}
If the joint saturation graph of two distinct vertices of $Q\ch(A)$ is partite-connected, then these vertices are adjacent in $Q\ch(A)$.
\end{thm}
\tref{Theorem}{thm:suf} in particular shows that given distinct vertices $v$ and $v'$ of $Q\ch(A)$, if $\G(v,v')$ is connected, then these vertices are adjacent in $Q\ch(A)$. When $A$ has precisely two ones per row, i.e., when $A$ can be thought of as the edge-node incidence matrix of a graph $G$, the connectivity of $\G(v,v')$ can be shown to be also necessary. As a matter of fact, in this case the subgraph of $G$ induced by $\supp \ExtPv\bigtriangleup \supp \ExtPv'$ turns out to be equal to $\G(v,v')$. Besides, Chv\'atal's characterization~\cite{Ch75} of the adjacency of vertices of the stable set polytope of $G$ implies that $v$ and $v'$ are adjacent in the vertex cover polytope of $G$ if and only if the subgraph of $G$ induced by $\supp \ExtPv\bigtriangleup \supp \ExtPv'$ is connected. Finally, it can be shown that two vertices of $Q\ch(A)$ are adjacent if and only if they are adjacent in the vertex cover polytope of $G$. Thus, the following theorem is a consequence of Chv\'atal's result.
\begin{thm}\label{thm:incidence}
Suppose that every row of $A\in\mathbb B^{m\times n}$ has exactly two ones. Then, two distinct vertices $v$ and $v'$ of $Q\ch(A)$ are adjacent if, and only if, their joint saturation graph is connected.
\end{thm}
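In particular, when every row of $A$ has exactly two ones, \tref{Theorem}{thm:incidence} gives a polynomial-time adjacency test. Composing the two sketches above (and keeping their conventions), a hypothetical implementation reads:
\begin{verbatim}
def adjacent_incidence(A, v, vp):
    # Adjacency in Q*(A) for A with two ones per row: v and vp
    # are adjacent iff G(v, v') is connected.
    left, right, edges = joint_saturation_graph(A, v, vp)
    return len(components(left | right, edges)) == 1
\end{verbatim}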
\begin{rem}\label{rem:incidence}
Notice that \tref{Theorems}{thm:incidence} and \tref{}{thm:suf} imply that, when $A$ has exactly two ones per row, if $\G(v,v')$ is partite-connected, then it is necessarily connected.
\end{rem}
To conclude this section, note that the sufficiency of the connectivity of $\G(v,v')$ can be alternatively proved using~\tref{Remark}{rem:contraction} and Chv\'atal's result~\cite{Ch75}. To see this, let $\bar{A}$ be the row submatrix of $A/I$, for $I=\I \setminus (\supp \ExtPv \cup \supp \ExtPv')$, obtained by keeping only the rows of $A/I$ with exactly two ones. Observe that $\bar{A}$ can be thought of as the edge-node incidence matrix of a graph $G$ with node set $\supp \ExtPv \cup \supp \ExtPv'$, and that $\supp \ExtPv$ and $\supp \ExtPv'$ are both vertex covers of this graph. Besides, it should be observed that the subgraph of $G$ induced by $\supp \ExtPv\bigtriangleup \supp \ExtPv'$ coincides with $\G(v,v')$. By Chv\'atal's result, it follows that the projections of $v$ and $v'$ on the coordinates in $\supp \ExtPv \cup \supp \ExtPv'$ are adjacent in the vertex cover polytope of $G$ if and only if the subgraph of $G$ induced by $\supp \ExtPv\bigtriangleup \supp \ExtPv'$ is connected, and by~\tref{Remark}{rem:contraction} we know that $v$ and $v'$ are adjacent in $Q\ch(A)$ if and only if the projections of $v$ and $v'$ on the coordinates in $\supp \ExtPv \cup \supp \ExtPv'$ are adjacent in $Q\ch(A/I)$. Finally, since two vertices of $Q\ch(\bar{A})$ are adjacent if and only if they are adjacent in the vertex cover polytope of $G$, we conclude that if $\G(v,v')$ is connected, then the projections of $v$ and $v'$ on the coordinates in $\supp \ExtPv \cup \supp \ExtPv'$ are adjacent in $Q\ch(\bar{A})$, which implies they are also adjacent in $Q\ch(A/I)$, and so $v$ and $v'$ are adjacent in $Q\ch(A)$.
\section{Characterization of vertex adjacency for row circular matrices}\label{sec:nec}
As mentioned in the~\thref{introduction}{sec:intro}, the sufficient condition of~\tref{Theorem}{thm:suf} is not always necessary.
In this section we show that the converse of that theorem is true when the matrix $A$ is row circular.
Actually, in this case we will give a much more detailed characterization in terms of properties of the joint saturation graph.
We will also show that being partite-connected is far from being a necessary condition when we consider the similar class of circulant matrices.
Let us recall that the \emph{circulant matrix} $\mathscr{C}(c)\in \mathbb R^{n\times n}$ associated with a vector $c = (c_1,\dots,c_n)\in\mathbb R^n$ is defined as
\[
\mathscr{C}(c) = \mathscr{C}(c_1,\dots,c_n) =
\begin{bmatrix}
c_1 & c_2 & \dots & c_n \\
c_n & c_1 & \dots & c_{n-1} \\
\vdots & \vdots & \ddots & \vdots \\
c_2 & c_3 & \dots & c_1
\end{bmatrix},
\]
where each row is a right rotation (shift) of the previous one.
Following Bartholdi et al.~\cite{BOR80}, we will say that a binary vector is \emph{circular} if its ones occur consecutively, where the first entry and the last entry of the vector are considered to be consecutive, or, alternatively, if either the ones are all consecutive or the zeroes are all consecutive.
A binary matrix is said to be \emph{row circular} if all its rows are circular.
To deal with circular vectors of $\mathbb B^n$, it is convenient to consider circular arcs of $\I$: for $i,j \in \I$, the \emph{(directed) circular arc} $\arc{i,j}$ is defined as
\[
\arc{i,j} =
\begin{cases}
\{i,\dots,j\} & \text{if } i \le j , \\
\{i,\dots,n\} \cup \{1,\dots,j\} & \text{if } j < i .
\end{cases}
\]
Thus, the rows of a row circular matrix $A\in\mathbb B^{m\times n}$ may be considered as the characteristic vectors of circular arcs of $\I$.
\begin{exam}\label{exam:cnk:1}
A particularly interesting case of row circular matrices is that of the \emph{consecutive ones circulant matrices} $\C{n}{k}$,
where the sets in the clutter $\cov$ are of the form
\[
C_t = \{t, t + 1,\dots, t + k - 1\}, \quad t\in\I,
\]
(sums are taken modulo $n$ with values in $\I$) so that $\C{n}{k}$ is also circulant.
For example,
\[
\C{3}{2} = \begin{bmatrix}
1 & 1 & 0 \\
0 & 1 & 1 \\
1 & 0 & 1
\end{bmatrix} = \mathscr{C}(1,1,0).
\]
\end{exam}
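For computations, the circular arcs and the matrices $\C{n}{k}$ are easy to generate. A small sketch with $1$-based indices to match the text (the modular step for the right endpoint is simply the sums-modulo-$n$ convention written out):
\begin{verbatim}
def arc(i, j, n):
    # The directed circular arc [i, j] of {1, ..., n} as a set.
    if i <= j:
        return set(range(i, j + 1))
    return set(range(i, n + 1)) | set(range(1, j + 1))

def consecutive_ones_circulant(n, k):
    # Rows of C(n, k): characteristic vectors of arcs [t, t+k-1].
    rows = []
    for t in range(1, n + 1):
        right_end = (t + k - 2) % n + 1   # t + k - 1, taken mod n in I
        rows.append([1 if i in arc(t, right_end, n) else 0
                     for i in range(1, n + 1)])
    return rows

print(consecutive_ones_circulant(3, 2))
# -> [[1, 1, 0], [0, 1, 1], [1, 0, 1]], matching C(3, 2) above
\end{verbatim}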
In the remainder of this section, we will make the following assumptions:
\begin{assums}\label{assums:B}
The matrix $A$ is row circular and satisfies~\tref{Assumptions}{assums:A}, and $\cov$ is the associated clutter (as described in~\tref{Section}{sec:background}).
\end{assums}
Thus, in particular we have $m = \card{\cov}\ge 2$,
$2\le\card{C_t}\le n-1$ for all $t\in \I[m]$, and
$2\le v\cdot \mathbf{1} \le n-1$ for every vertex $v$ of $Q\ch(A)$.
We will also use the following convention:
\begin{notat}
For $v\in\mathbb B^n$ we will write $\supp \ExtPv = \{p_1,p_2,\dots,p_r\}$,
with $ p_1 < p_2 <\dots< p_r$;
and for $h\notin\I[r]$ we let $p_h = p_i$
with $i\in\I[r]$ and $h\equiv i \pmod{r}$.
Similarly, for $v'\in\mathbb B^n$ we will write
$\supp \ExtPv' = \{p'_1,\dots,p'_{r'}\}$
with $p'_1 < \dots < p'_{r'}$, etc.
We will also consider that operations involving elements of the support of a vector of $\mathbb B^n$, such as $p_i+1$, are taken modulo $n$ with values in $\I$.
\end{notat}
\begin{rem}\label{rem:graph:2}
When $A$ is row circular, if $p\ngh p'$ (i.e., \eqref{equ:edge} is satisfied),
exactly one of the circular arcs $\arc{p,p'}$ or $\arc{p',p}$
is such that its intersection with $\supp \ExtPv\cup \supp \ExtPv'$ is $\{p,p'\}$. Indeed, by~\tref{Remark}{rem:card:T} we know that $\supp \ExtPv$ has at least two elements, and so there exists $q\in \supp \ExtPv$ such that $q\neq p$ (note that we also have $q\neq p'$ because $p'\in \supp \ExtPv' \setminus \supp \ExtPv$ by~\tref{Definition}{defn:G}). Besides,
if $A$ is row circular, the circular arc $C_t$ satisfying~\eqref{equ:edge} contains either $\arc{p,p'}$ or $\arc{p',p}$. Then, if $q\in \arc{p,p'}$, we have $\{ q ,p , p'\}\subset (\supp \ExtPv\cup \supp \ExtPv')\cap \arc{p,p'}$, and so by~\eqref{equ:edge} we conclude that $\arc{p',p} \subset C_t$ and $\{p,p'\} = (\supp \ExtPv\cup \supp \ExtPv')\cap \arc{p',p}$. Otherwise, i.e. if $q\in \arc{p',p}$, we have $\{ q ,p , p'\}\subset (\supp \ExtPv\cup \supp \ExtPv')\cap \arc{p',p}$, and so from~\eqref{equ:edge} it follows that $\arc{p,p'} \subset C_t$ and $\{p,p'\} = (\supp \ExtPv\cup \supp \ExtPv')\cap \arc{p,p'}$.
\end{rem}
Let us state now some simple results.
\begin{lem}\label{lem:nec:1}
Suppose $C_t\in\cov$ and $v$ is a vertex of $Q\ch(A)$.
Then
\[
1\le \card{C_t\cap \supp \ExtPv }\le 2.
\]
Moreover, if $\card{C_t\cap \supp \ExtPv } = 2$,
then $C_t\cap \supp \ExtPv = \{p_i, p_{i+1}\}$ for some $i$.
\end{lem}
\begin{proof}
Since $\supp \ExtPv$ is a transversal, we obviously have $\card{C_t\cap \supp \ExtPv }\ge 1$. If we had $\card{C_t\cap \supp \ExtPv }\ge 3$, as $C_t$ is a circular arc, there would exist three different elements $p_i$, $p_j$ and $p_h$ of $\supp \ExtPv$ such that $p_j\in \arc{p_i,p_h}\subset C_t$. Besides, since $\supp \ExtPv$ is a minimal transversal, there exists $C_s\in\cov$ such that $C_s\cap \supp \ExtPv = \{p_j\}$. Then, as $C_s$ is a circular arc which contains $p_j$ but does not contain $p_i$ nor $p_h$, we would have $C_s\subsetneq \arc{p_i,p_h}\subset C_t$, contradicting the fact that $A$ has no dominating rows (by~\tref{Assumptions}{assums:A}). Thus, we necessarily have $\card{C_t\cap \supp \ExtPv }\le 2$.
The last part follows from the fact that $C_t$ is a circular arc.
\end{proof}
\begin{lem}\label{lem:nec:4}
Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$. Suppose $p_i\in \supp \ExtPv \setminus \supp \ExtPv'$ is an isolated node of the joint saturation graph $\G(v,v')$. If $C_t\in\cov$ and $C_t\cap \supp \ExtPv = \{p_i \}$, then $\card{C_t\cap (\supp \ExtPv' \setminus \supp \ExtPv)} = 2$ and $C_t\cap \supp \ExtPv' = C_t\cap (\supp \ExtPv' \setminus \supp \ExtPv)$.
\end{lem}
\begin{proof}
In the first place, observe that $C_t\cap \supp \ExtPv' = C_t\cap(\supp \ExtPv' \setminus \supp \ExtPv)$ because $C_t\cap \supp \ExtPv = \{p_i\}$ and $p_i\in \supp \ExtPv \setminus \supp \ExtPv'$. Then, by \tref{Lemma}{lem:nec:1}, we have
$1\le \card{C_t\cap (\supp \ExtPv' \setminus \supp \ExtPv)}\le 2$. Now, note that we cannot have
$\card{C_t\cap (\supp \ExtPv' \setminus \supp \ExtPv)}=1$, because by~\tref{Definition}{defn:G} (see in particular~\eqref{equ:edge}) that would mean that there exists an edge in $\G(v,v')$ connecting $p_i$ with the unique element of $C_t\cap (\supp \ExtPv' \setminus \supp \ExtPv)$, contradicting the fact that $p_i$ is an isolated node of $\G(v,v')$.
\end{proof}
The next lemmas provide simple properties of the joint saturation graph when $A$ is row circular, which we will need to establish the characterization of vertex adjacency for $Q\ch(A)$.
\begin{lem}\label{lem:nec:5}
The joint saturation graph $\G(v,v')$ of two distinct vertices $v$ and $v'$ of $Q\ch(A)$ has the following properties:
\begin{enumcona}
\item\label{lem:nec:5:2}
If $p_i\ngh p'_j$ and $p_h \ngh p'_j$ in $\G(v,v')$, with $h\neq i$, then we must have
either $h = i - 1$ or $h = i + 1$.
\item\label{lem:nec:5:3}
If $p_i\ngh p'_j$, $p_{i+1}\ngh p'_j$ and $\card{\supp \ExtPv} > 2$, then $p'_j \in \arc{p_i, p_{i+1}}$ and there are no other elements of $\supp \ExtPv'$ (or $\supp \ExtPv$) in $\arc{p_i, p_{i+1}}$.\footnote{Notice that if $\card{\supp \ExtPv} = 2$, then $p'_j$ could be in either $\arc{p_1, p_2}$ or $\arc{p_2, p_1}$.}
\item\label{lem:nec:5:1}
The nodes of $\G(v,v')$ have degree at most $2$.
\item\label{lem:nec:5:4}
Each component of $\G(v,v')$ must be either a cycle or a path (including isolated nodes).
\item\label{lem:nec:5:5}
If a component of $\G(v,v')$ is a cycle,
then its set of nodes is equal to $\supp \ExtPv\cup \supp \ExtPv'$, and we have $\supp \ExtPv\cap \supp \ExtPv' = \emptyset$.
In particular, $\G(v,v')$ is connected.
\end{enumcona}
\end{lem}
\begin{proof}
Let us assume that $C_s\in \cov$ is such that
\begin{equation}\label{Arco1}
C_s\cap \supp \ExtPv = \{p_i\} \; \makebox{ and }\;
C_s\cap \supp \ExtPv' = \{p'_j\}\;
\end{equation}
and $C_r\in \cov$ such that
\begin{equation}\label{Arco2}
C_r\cap \supp \ExtPv = \{p_h\} \; \makebox{ and } \;
C_r\cap \supp \ExtPv' = \{p'_j\}.
\end{equation}
Since $p'_j\in C_s\cap C_r $, it follows that $C_s\cup C_r $ is a circular arc. By~\eqref{Arco1} and~\eqref{Arco2}, this circular arc intersects $\supp \ExtPv$ only at $p_i$ and $p_h$, and hence these are consecutive elements of $\supp \ExtPv$.
This shows~\refp{lem:nec:5:2}.
To prove~\refp{lem:nec:5:3}, let $C_s$ and $C_r$ be circular arcs satisfying~\eqref{Arco1} and~\eqref{Arco2}, with $h=i+1$ in~\eqref{Arco2}. Then, as above, we can conclude that $C_s\cup C_r $ is a circular arc which intersects $\supp \ExtPv$ only at $p_i$ and $p_{i+1}$, and $\supp \ExtPv'$ only at $p'_j$. Besides, since $\card{\supp \ExtPv} > 2$, there exists an element $p_k$ in $\arc{p_{i+1}, p_i}\cap \supp \ExtPv$ which is different from $p_i$ and $p_{i+1}$. Observe that \tref{Remark}{rem:graph:2} would not hold for $p_i\ngh p'_j$ if $p'_j$ belonged to $\arc{p_{i+1}, p_k}$. Analogously, note that \tref{Remark}{rem:graph:2} would not hold for $p_{i+1}\ngh p'_j$ if $p'_j$ belonged to $\arc{p_k, p_i}$. Therefore, we conclude that $p'_j\in \arc{p_i, p_{i+1}}$. Finally, since $p_k \in \arc{p_{i+1}, p_i}$ and $C_s\cup C_r $ is a circular arc which contains $p_i$ and $p_{i+1}$ but does not contain $p_k$, we have $\arc{p_i, p_{i+1}} \subset C_s\cup C_r $. Thus, from~\eqref{Arco1} and~\eqref{Arco2} (recall that $h=i+1$ in~\eqref{Arco2}), it follows that $p_i$, $p_{i+1}$ and $p'_j$ are the only elements of $\supp \ExtPv \cup \supp \ExtPv'$ in $\arc{p_i , p_{i+1}}$.
Assume that a node of $\G(v,v')$, for instance $p'_j$, has degree strictly greater than $2$. Let $p_i$, $p_h$ and $p_k$ be three different elements of $\supp \ExtPv \setminus \supp \ExtPv'$ such that $p_i\ngh p'_j$, $p_h\ngh p'_j$ and $p_k\ngh p'_j$. By~\refp{lem:nec:5:2} we can assume, without loss of generality, that $h=i-1$ and $k=i+1$. Then, by~\refp{lem:nec:5:3} we have $p'_j\in \arc{p_{i-1}, p_i}$ and $p'_j\in \arc{p_i, p_{i+1}}$, which is a contradiction because $p'_j \in \supp \ExtPv' \setminus \supp \ExtPv$ by~\tref{Definition}{defn:G} and $\arc{p_{i-1}, p_i}\cap \arc{p_i, p_{i+1}} = \{p_i\}$ (recall that $p_{i-1}=p_h\neq p_k=p_{i+1}$). This proves~\refp{lem:nec:5:1}.
Note that~\refp{lem:nec:5:4} follows readily from~\refp{lem:nec:5:1}.
By~\refp{lem:nec:5:2} and~\refp{lem:nec:5:3}, any (simple) path of $\G(v,v')$ connecting two nodes of $\supp \ExtPv\setminus \supp \ExtPv'$ is of the form $p_i$, $p'_j$, $p_{i+1}$, $p'_{j+1}$, $\dots $, $p_{i+\ell}$ for some $\ell \in \mathbb N$, $i\in\I[r]$ and $j\in\I[r']$, where $p_{i+h}\in \arc{p'_{j+h-1}, p'_{j+h}}$ for any $h\in \I[\ell-1]$, and $p'_{j+h-1}\in \arc{p_{i+h-1}, p_{i+h}}$ and
\begin{equation}\label{IntervalPath}
\arc{p_{i}, p_{i+h}}\cap (\supp \ExtPv\cup \supp \ExtPv')=\{p_i, p'_j, p_{i+1}, p'_{j+1},\dots ,p_{i+h}\}
\end{equation}
for any $h\in \I[\ell]$ (see~\tref{Example}{ExampleReferee}). Thus, if a component of $\G(v,v')$ is a cycle, we can take $p_{i+\ell}=p_i$ in~\eqref{IntervalPath}, which then implies $\I \cap (\supp \ExtPv\cup \supp \ExtPv')=\{p_i, p'_j, p_{i+1}, p'_{j+1},\dots ,p'_{j+\ell-1}\}$, that is, the set of nodes of the cycle is equal to $\supp \ExtPv\cup \supp \ExtPv'$. Finally, since by~\tref{Definition}{defn:G} the nodes of $\G(v,v')$ belong either to $\supp \ExtPv\setminus \supp \ExtPv'$ or to $\supp \ExtPv' \setminus \supp \ExtPv$, we conclude that $\supp \ExtPv\cap \supp \ExtPv' = \emptyset$. This shows~\refp{lem:nec:5:5}.
\end{proof}
\begin{figure}
\begin{center}
\begin{picture}(0,0)%
\includegraphics{AKTFigure1.pdf}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5022,1566)(6691,-9769)
\put(6706,-8686){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\I$}%
}}}}
\put(6751,-9451){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\cov$}%
}}}}
\put(10036,-8386){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_4$}%
}}}}
\put(11161,-8386){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_6$}%
}}}}
\put(10711,-8386){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_5$}%
}}}}
\put(8056,-8386){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_2$}%
}}}}
\put(8956,-8386){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_3$}%
}}}}
\put(7831,-8386){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_1$}%
}}}}
\put(7426,-9691){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_1$}%
}}}}
\put(8551,-9691){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_3$}%
}}}}
\put(9811,-9691){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_5$}%
}}}}
\put(10801,-9691){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_7$}%
}}}}
\put(7381,-8881){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_1$}%
}}}}
\put(8056,-8881){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_2$}%
}}}}
\put(8506,-8881){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_3$}%
}}}}
\put(9631,-8881){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_4$}%
}}}}
\put(10531,-8881){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_5$}%
}}}}
\put(10756,-8881){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_6$}%
}}}}
\put(11521,-8881){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$n=21$}%
}}}}
\put(7156,-8881){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$1$}%
}}}}
\put(8101,-9196){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_2$}%
}}}}
\put(10216,-9196){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_6$}%
}}}}
\put(11476,-9196){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_8$}%
}}}}
\put(9226,-9196){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_4$}%
}}}}
\end{picture}%
\caption{The clutter and the supports of the vertices $v$ and $v'$ considered in~\tref{Example}{ExampleReferee}.}\label{figure1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{picture}(0,0)%
\includegraphics{AKTFigure2.pdf}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2235,3411)(8266,-8059)
\put(10486,-7981){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_6$}%
}}}}
\put(8281,-7531){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_1$}%
}}}}
\put(8281,-6631){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_3$}%
}}}}
\put(8281,-4831){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_5$}%
}}}}
\put(8281,-5731){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p_4$}%
}}}}
\put(10486,-5281){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_4$}%
}}}}
\put(10486,-6181){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_3$}%
}}}}
\put(10486,-7081){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$p'_1$}%
}}}}
\end{picture}%
\caption{The joint saturation graph $\G(v,v')$ of the vertices $v$ and $v'$ of~\tref{Example}{ExampleReferee}.}\label{figure2}
\end{center}
\end{figure}
\begin{exam}\label{ExampleReferee}
Consider the set covering polyhedron associated with the clutter $\cov=\{C_1,\ldots ,C_8\}$, where $C_1=\arc{1,4}$, $C_2=\arc{5,6}$, $C_3=\arc{6,9}$, $C_4=\arc{8,13}$, $C_5=\arc{11,15}$, $C_6=\arc{14,16}$, $C_7=\arc{17,18}$, $C_8=\arc{18,3}$ and $n=21$ (this clutter is represented in Figure~\ref{figure1}). The joint saturation graph $\G(v,v')$ of two vertices of this polyhedron is depicted in Figure~\ref{figure2} (the supports $\supp \ExtPv=\{p_1,\ldots ,p_6\}$ and $\supp \ExtPv'=\{p'_1,\ldots ,p'_6\}$ of these vertices are represented in Figure~\ref{figure1}). By properties~\refp{lem:nec:5:2} and~\refp{lem:nec:5:3} of \tref{Lemma}{lem:nec:5}, with each path of $\G(v,v')$ it is possible to associate a sequence of consecutive circular arcs of $\I$ such that in this sequence there is precisely one circular arc for each edge of the path and the endpoints of each circular arc are the nodes defining the corresponding edge of $\G(v,v')$ (here, we call $i$ and $j$ the {\em endpoints} of the circular arc $\arc{i, j}$ of $\I$, and we say that two circular arcs are {\em consecutive} if their intersection is one of their endpoints). Besides, each circular arc of this sequence has the property that only its endpoints belong to $\supp \ExtPv\cup \supp \ExtPv'$ (one of the endpoints belongs to $\supp \ExtPv\setminus \supp \ExtPv'$ and the other one to $\supp \ExtPv' \setminus \supp \ExtPv$). For the path $p_3$, $p'_3$, $p_4$, $p'_4$, $p_5$ of Figure~\ref{figure2}, the associated sequence of circular arcs is $\arc{p_3 ,p'_3}$, $\arc{p'_3, p_4}$, $\arc{p_4, p'_4}$, $\arc{p'_4, p_5}$, and for the path $p'_6$, $p_1$, $p'_1$, the associated sequence of circular arcs is $\arc{p'_6, p_1}$, $\arc{p_1, p'_1}$ (see Figure~\ref{figure1}). We refer the reader to~\tref{Example}{ExampleDifTypes} below for more examples.
\end{exam}
\begin{lem}\label{lem:nec:6}
Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$.
Suppose $p_i$ and $p_{i+1}$
are in the same component $F$ of the joint saturation graph $\G(v,v')$,
but do not have a common neighbor in $\supp \ExtPv' \setminus \supp \ExtPv$.
Then, $\supp \ExtPv \subset F$, and so in particular $\supp \ExtPv \cap \supp \ExtPv' = \emptyset$.
\end{lem}
\begin{proof}
By properties~\refp{lem:nec:5:2} and~\refp{lem:nec:5:3} of \tref{Lemma}{lem:nec:5},
the path from $p_i$ to $p_{i+1}$ must contain all the elements of $\supp \ExtPv$, and so these must be in $\supp \ExtPv\setminus \supp \ExtPv'$.
\end{proof}
\begin{lem}\label{lem:nec:3}
Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$. Suppose the inclusion $\arc{p'_j,p'_{j+1}}\subset \arc{p_i,p_{i+1}}$ is satisfied. Then, if $p_i \ngh p'_{j-1}$, we must have $p_i\ngh p'_j$. Similarly, if $p_{i+1}\ngh p'_{j+2}$ then $p_{i+1}\ngh p'_{j+1}$.
\end{lem}
\begin{proof}
Assume $p_i \ngh p'_{j-1}$. As $\arc{p'_j,p'_{j+1}}\subset \arc{p_i,p_{i+1}}$, observe that if $C_t\in\cov$ is such that $\{p_i,p'_{j+1}\} \subset C_t$, then $\{p_{i+1},p'_{j}\} \cap C_t\neq \emptyset$. Thus, we conclude that $p_i \nngh p'_{j+1}$, and so $p'_{j-1}\neq p'_{j+1}$.
As $p'_{j-1}\neq p'_{j+1}$, we must have $p'_{j-1}\not \in \arc{p_i,p'_j}$, because otherwise we would have $\arc{p'_{j-1},p'_{j+1}}\subset \arc{p_i,p_{i+1}}$, and so $C_t\cap \supp \ExtPv=\emptyset$ for any $C_t\in\cov$ satisfying $C_t\cap \supp \ExtPv' = \{p'_j\}$ (at least one of such $C_t$ exists), contradicting the fact that $\supp \ExtPv$ is a transversal. Since $p_i$ is different from $p'_{j-1}$ and $p'_j$ (because $p_i \ngh p'_{j-1}$, and so $p_i\in \supp \ExtPv \setminus \supp \ExtPv'$), observe that $p'_{j-1}\not \in \arc{p_i,p'_j}$ is equivalent to $p'_j\not \in \arc{p'_{j-1},p_i}$.
As $p_i \ngh p'_{j-1}$ and $p'_j\not \in \arc{p'_{j-1},p_i}$, we have $\arc{p'_{j-1},p_i}\cap \supp \ExtPv =\{p_i\}$ and $\arc{p'_{j-1},p_i}\cap \supp \ExtPv' =\{p'_{j-1}\}$ (see~\tref{Remark}{rem:graph:2}). Thus, we conclude that $\arc{p'_{j-1},p'_j}\cap \supp \ExtPv = \{p_i\}$ and $\arc{p'_j,p'_{j+1}-1}\cap \supp \ExtPv =\emptyset$ because $\arc{p'_j,p'_{j+1}-1}\subset \arc{p_i+1,p_{i+1}-1}$. Then, if $C_t\in\cov$ is such that $C_t\cap \supp \ExtPv' = \{p'_j\}$, we must have $C_t\cap \supp \ExtPv = \{p_i\}$ because $C_t$ is a circular arc and it must intersect $\supp \ExtPv$. Therefore, recalling~\tref{Definition}{defn:G}, we have $p_i\ngh p'_j$.
\end{proof}
\begin{lem}\label{lem:nec:7}
Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$. If $\supp \ExtPv$ is contained in a component of the joint saturation graph $\G(v,v')$, then $\G(v,v')$ is either connected or almost-connected. Moreover, in the latter case, we have $\card{\supp \ExtPv} = \card{\supp \ExtPv'}$.
\end{lem}
\begin{proof}
Suppose $\G(v,v')$ is not connected. Then, if $F$ is the component containing $\supp \ExtPv$, by \tref{Lemma}{lem:nec:5} we conclude that $F$ must be a path connecting two consecutive elements of $\supp \ExtPv$, say $p_i$ and $p_{i+1}$, and these elements do not have a common neighbor in $\arc{p_i, p_{i+1}}$.
Observe that $\arc{p_i, p_{i+1}}$ cannot contain more than two elements of $\supp \ExtPv'$. Indeed, assuming the contrary we would have
$\arc{p'_{j-1},p'_{j+1}}\subset \arc{p_i, p_{i+1}}$ for some $j$ ($p'_{j-1}\neq p'_{j+1}$), and so there would exist $C_t\in\cov$ such that $C_t\cap \supp \ExtPv=\emptyset$ (this would hold for any $C_t\in\cov$ such that
$C_t\cap \supp \ExtPv'=\{p'_j \}$), contradicting that $\supp \ExtPv$ is a transversal.
If $\arc{p_i, p_{i+1}}$ did not contain elements of $\supp \ExtPv'$, by~\tref{Lemma}{lem:nec:5} we would conclude that $\G(v,v')$ is the path $F$, contradicting our assumption that
$\G(v,v')$ is not connected.
If there were two elements, say $p'_j$ and $p'_{j+1}$,
\tref{Lemma}{lem:nec:3} would show that $p_i\ngh p'_{j}$ and $p_{i+1}\ngh p'_{j+1}$, contradicting again that $\G(v,v')$ is not connected.
Thus, there can only be exactly one element of $\supp \ExtPv'$ in $\arc{p_i, p_{i+1}}$, say $p'_j$, and this element cannot belong to $F$ since we assume that $\G(v,v')$ is not connected.
Therefore, $\G(v,v')$ consists of the isolated node $p'_j$ and the path $F$ connecting $p_i$ with $p_{i+1}$, and so it is almost-connected and $\card{\supp \ExtPv} = \card{\supp \ExtPv'}$.
\end{proof}
Before proving a characterization of vertex adjacency for $Q\ch(A)$ when $A\in\mathbb B^{m\times n}$ is row circular, we note that with this aim we can restrict our analysis to the case where $A$ has at most three ones per row, and the vertices $v$ and $v'$ of $Q\ch(A)$ satisfy $\supp \ExtPv \cup \supp \ExtPv'=\I$ (in this case, observe that if $p_i \ngh p'_j$, then we must have either $p'_{j}=p_i-1$ or $p'_{j}=p_i+1$). This follows from~\tref{Remark}{rem:contraction} and the fact that for a row circular matrix $A$, the contraction minor $A/I$ of~\tref{Remark}{rem:contraction} has at most three ones per row, as shown in the next lemma.
\begin{lem}\label{Lemma:PropContrac}
Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$, and let $I=\I\setminus (\supp \ExtPv \cup \supp \ExtPv')$. Then, each row of the contraction minor $A/I$ has at most three ones.
\end{lem}
\begin{proof}
Let us denote by $\bar{C}_t$ the support of the $t$-th row of $A/I$.
For the sake of simplicity, in this proof we will assume that the $t$-th row of $A/I$ corresponds to the $t$-th row of $A$ (recall that when contracting, dominating rows are eliminated, so this does not necessarily hold, but it can be assumed without loss of generality).
Since by~\tref{Lemma}{lem:nec:1} each circular arc $C_t\in\cov$ can contain at most two elements of the support of each vertex of $Q\ch(A)$, after the contraction, the resulting arc $\bar{C}_t$ can contain at most four elements, two of them corresponding to elements of $\supp \ExtPv$, and two corresponding to elements of $\supp \ExtPv'$.
\begin{figure}
\begin{center}
\begin{picture}(0,0)%
\includegraphics{AKTFigure3.pdf}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5022,1251)(6691,-9589)
\put(6706,-8686){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\I$}%
}}}}
\put(7156,-8521){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$1$}%
}}}}
\put(8056,-8521){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$l$}%
}}}}
\put(8506,-8521){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$h$}%
}}}}
\put(8956,-8521){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$i$}%
}}}}
\put(9631,-8521){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$j$}%
}}}}
\put(10081,-8521){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$k$}%
}}}}
\put(11656,-8521){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$n$}%
}}}}
\put(9271,-9196){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_t$}%
}}}}
\put(8326,-9511){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$C_s$}%
}}}}
\end{picture}%
\caption{Illustration of the proof of~\tref{Lemma}{Lemma:PropContrac} (the elements of $\supp \ExtPv \cup \supp \ExtPv'$ are represented by dots).}\label{figure3}
\end{center}
\end{figure}
Assume that an arc $\bar{C}_t$ with four elements exists,
and let $h$, $i$, $j$ and $k$ be the corresponding elements of $\supp \ExtPv \cup \supp \ExtPv'$. Without loss of generality, suppose that $\arc{i,j}\subset \arc{h,k}\subset C_t$, and that $i\in\supp \ExtPv$ (see
Figure~\ref{figure3} for an illustration of the relative positions of the main elements that appear in this proof).
Let $C_s\in\cov$ be such that $C_s\cap\supp \ExtPv = \{i\}$. Note that $(C_s \setminus C_t)\cap \supp \ExtPv'$ cannot be empty
because $C_s\cap\supp \ExtPv = \{i\}\subsetneq C_t\cap\supp \ExtPv$ and $A/I$ has no dominating rows. So, let $l$ be an element of $(C_s \setminus C_t)\cap \supp \ExtPv'$. As $C_s$ is a circular arc and $\{l,i\}\subset C_s$, we either have $\{l,h,i\}\subset \arc{l,i} \subset C_s$ or $\{i,j,k,l\}\subset \arc{i,l} \subset C_s$ (recall that $\arc{i,j}\subset \arc{h,k}$ and $l\not \in C_t \supset \arc{h,k}$). Then, by the choice of $C_s$ ($i$ is the only element of $\supp \ExtPv$ which belongs to $C_s$), we either have $\{l,h\}\subset C_s \cap \supp \ExtPv'$ or $\{j,k,l\}\subset C_s \cap \supp \ExtPv'$. It follows that $\{l,h\}\subset C_s \cap \supp \ExtPv'$, because $\{j,k,l\}\subset C_s \cap \supp \ExtPv'$ contradicts~\tref{Lemma}{lem:nec:1}.
We conclude that $\{l,h,i\}\subset \arc{l,i} \subset C_s$ and $h\in \supp \ExtPv'$.
Now, let $C_r\in\cov$ be such that $C_r\cap\supp \ExtPv' = \{h\}$. Note that $l\not \in C_r$, because $l\neq h$ and $l\in \supp \ExtPv'$ by the definition of $l$ (see the previous paragraph). Besides, since $C_r$ is a circular arc and $l\not \in C_r$, we have $k\not \in C_r$. Indeed, otherwise (i.e., if $k\in C_r$) we would have $\{h,i,j,k\} \subset C_r$ (recall that $\arc{i,j} \subset \arc{h,k}$, $h\in C_r$ and $l\not \in \arc{h,k}$), and then by the choice of $C_r$ ($h$ is the only element of $\supp \ExtPv'$ in $C_r$) we could conclude that $\{i,j,k\} \subset C_r \cap \supp \ExtPv$, contradicting \tref{Lemma}{lem:nec:1}.
Note that $\arc{l,k} \subset C_s\cup C_t$ because $h \in \arc{l,i} \subset C_s$ and $\arc{h,k}\subset C_t$. Besides, we have $\arc{l,k} \cap (\supp \ExtPv \cup \supp \ExtPv')=\{l,h,i,j,k\}$. Indeed, $\{l,h,i,j,k\} \subset \arc{l,k} \cap (\supp \ExtPv \cup \supp \ExtPv')$ due to the fact that $l\not \in C_t \supset \arc{h,k} \supset \arc{i,j}$, and if this intersection contained another element, this element should belong to $C_s$ (by our assumption we know that $C_t\cap (\supp \ExtPv \cup \supp \ExtPv')=\{h,i,j,k\}$), and then also to $\supp \ExtPv'$ ($i$ is the only element of $\supp \ExtPv$ which belongs to $C_s$), which together with the fact that $\{l,h\}\subset C_s \cap \supp \ExtPv'$ would contradict \tref{Lemma}{lem:nec:1}. Finally, observe that $C_r\subset \arc{l,k}$ because $C_r$ is a circular arc which contains $h$ but, by the previous paragraph, does not contain $l$ nor $k$. Thus, we conclude that $C_r\cap (\supp \ExtPv \cup \supp \ExtPv')\subset \{h,i,j\} \subsetneq \{h,i,j,k\} = C_t\cap (\supp \ExtPv \cup \supp \ExtPv')$, which contradicts the fact that $A/I$ has no dominating rows.
\end{proof}
Finally, we need the next lemma to prove a characterization of vertex adjacency for $Q\ch(A)$.
\begin{lem}\label{lem:nec:noaristas}
Two vertices $v$ and $v'$ of $Q\ch(A)$ are not adjacent in $Q\ch(A)$ if their joint saturation graph $\G(v,v')$ has no edges.
\end{lem}
\begin{proof}
As explained above, to prove this result we may assume that $A$ has at most three ones per row and $\supp \ExtPv \cup \supp \ExtPv'=\I$.
By \tref{Remark}{rem:graph:1}, $\G(v,v')$ has at least three nodes, so
let us fix $p_i\in \supp \ExtPv \setminus \supp \ExtPv'$ and $C_t\in\cov$ satisfying $C_t \cap \supp \ExtPv = \{p_i\}$.
Then, by \tref{Lemma}{lem:nec:4} we have $\card{C_t\cap (\supp \ExtPv' \setminus \supp \ExtPv)} = 2$ and $C_t\cap (\supp \ExtPv' \setminus \supp \ExtPv) = C_t\cap \supp \ExtPv'$. Since $\supp \ExtPv \cup \supp \ExtPv'=\I$, it follows that at least one of $p_i-1$ or $p_i+1$ is in $\supp \ExtPv'\setminus\supp \ExtPv$. Without loss of generality, assume $p'_j=p_i+1$ is in $\supp \ExtPv' \setminus \supp \ExtPv$.
We claim that the sets
\[
X = (\supp \ExtPv \setminus \{p_i\})\cup \{p'_j\}
\quad\text{and}\quad
X' = (\supp \ExtPv'\setminus \{p'_j\})\cup \{p_i\}
\]
are transversals.
To see this, assume on the contrary that, for example, $X\cap C_s=\emptyset $ for some $C_s\in \cov$. Then, since $\supp \ExtPv$ is a transversal, we must have $C_s \cap \supp \ExtPv = \{p_i\}$. Using the fact that $A$ has at most three ones per row and that $p'_j=p_i+1\not \in C_s$, by \tref{Lemma}{lem:nec:4} it follows that $\{p_i-2,p_i-1\} = (\supp \ExtPv' \setminus \supp \ExtPv)\cap C_s$. Now, taking $C_r\in \cov$ such that $C_r\cap \supp \ExtPv' =\{ p_i-1\}$, we necessarily have $C_r\cap \supp \ExtPv =\{ p_i\}$ because $A$ is row circular and $\{p_i-2,p_i+1\}\subset \supp \ExtPv' \setminus \supp \ExtPv$.
This implies $p_i \ngh p'_{j-1}=p_i-1$,
which contradicts that $\G(v,v')$ has no edges. This proves that $X$ is a transversal. Similarly, it can be shown that $X'$ is also a transversal.
Observe that $X$ cannot coincide with $\supp \ExtPv'$ since we are exchanging just one element of $\supp \ExtPv' \setminus \supp \ExtPv$, but $\card{\supp \ExtPv' \setminus \supp \ExtPv}\ge 2$ by~\tref{Lemma}{lem:nec:4}.
The lemma follows now from~\tref{Proposition}{propo:adys:2} defining $x=\car(X)$, $x'=\car(X')$, $d=\car(\{p_i\})$ and $d'=\car(\{p'_j\})$.
\end{proof}
We are now ready to prove a characterization of vertex adjacency for $Q\ch(A)$.
\begin{thm}\label{thm:CharactAdj}
Let $A\in \mathbb B^{m\times n}$ be a row circular matrix. Let $v$ and $v'$ be distinct vertices of $Q\ch(A)$, and $\G(v,v')$ be their joint saturation graph.
The vertices $v$ and $v'$ are adjacent in $Q\ch(A)$ if, and only if, one of the following conditions is satisfied:
\begin{itemize}
\item
$\G(v,v')$ is connected.
\item
$\G(v,v')$ is almost-connected and $\supp \ExtPv\cap \supp \ExtPv'=\emptyset$.
\end{itemize}
\end{thm}
\begin{proof}
If one of the conditions above is satisfied, then $\G(v,v')$ is partite-connected, and so $v$ and $v'$ are adjacent in $Q\ch(A)$ by~\tref{Theorem}{thm:suf}.
Assume now that none of the conditions above is satisfied, and let us show that $v$ and $v'$ are not adjacent in $Q\ch(A)$. With this aim, as in the proof of \tref{Lemma}{lem:nec:noaristas}, we may assume that $A$ has at most three ones per row (thus, by~\tref{Assumptions}{assums:A}, $A$ has between two and three ones per row) and $\supp \ExtPv \cup \supp \ExtPv'=\I$.
If $\G(v,v')$ has no edges, the result follows from \tref{Lemma}{lem:nec:noaristas}, so we next assume that $\G(v,v')$ contains at least one edge.
Let $F$ be a component containing an edge of $\G(v,v')$.
As $\G(v,v')$ is not connected, by~\tref{Lemma}{lem:nec:5} we know that $F$ is a path, not a cycle.
In order to show that $v$ and $v'$ are not adjacent we will use~\tref{Proposition}{propo:adys:2}.
To do this, we let $R = \supp \ExtPv\cap \supp \ExtPv'$ and define $D$ and $T$ by
\begin{subequations}\label{equ:AdyImplicaCond:1}
\begin{equation}
D = F\cap \supp \ExtPv , \quad
T = \supp \ExtPv \setminus (R \cup D).
\end{equation}
Similarly, we set
\begin{equation}
D' = F\cap \supp \ExtPv', \quad
T' = \supp \ExtPv' \setminus (R \cup D').
\end{equation}
Finally, we define $X$ and $X'$ by
\begin{equation}
X = R \cup T \cup D',
\quad
X' = R \cup T' \cup D.
\end{equation}
\end{subequations}
Our first aim is to prove that $X$ and $X'$ are transversals, and we notice that it is enough to prove this only for $X$, given the symmetry of the definitions in~\eqref{equ:AdyImplicaCond:1}.
So let us show that
\begin{equation}\label{equ:compli:-4}
C_t\cap X \ne\emptyset
\end{equation}
for any $C_t\in\cov$.
Since $\supp \ExtPv= R \cup T \cup D$ is a transversal, it will be enough to consider just the case where $C_t$ intersects $\supp \ExtPv$ at some element $p_i$ of $D$, i.e., assume
\begin{equation}\label{equ:compli:-3}
p_i\in C_t\cap D.
\end{equation}
If $p_i$ is connected in $\G(v,v')$ to two elements of $\supp \ExtPv' \setminus \supp \ExtPv$ (and hence of $D'$),
then these are
$p'_j=p_i-1$ and $p'_{j+1}=p_i+1$
because $\supp \ExtPv \cup \supp \ExtPv'=\I$, and since $C_t$ is a circular arc that contains $p_i$ and must intersect $\supp \ExtPv'$, we have
\[
\emptyset
\neq
\{p_i-1,p_i+1\}\cap C_t
=
\{p'_j,p'_{j+1}\}\cap C_t
\subset D' \cap C_t
\subset X \cap C_t \; ,
\]
and~\eqref{equ:compli:-4} holds.
Suppose now that $p_i$ is a leaf of the path $F$, and let
\[
p'_j\in D'
\]
be its only neighbor.
If $p'_j\in C_t$ we are done, so we next consider the case
\begin{equation}\label{equ:compli:-1}
p'_j\not \in C_t\; .
\end{equation}
Let us assume that $p'_j=p_i-1$, the case $p'_j=p_i+1$ being similar.
We claim that $p_i$ and $p_{i+1}$ cannot have a common neighbor in $\G(v,v')$. To see this, assume the contrary. Then, if $\card{\supp \ExtPv} > 2$, by~\trrefp{Lemma}{lem:nec:5}{lem:nec:5:3} we know that the common neighbor must belong to $\arc{p_i, p_{i+1}}$, but we have assumed that the only neighbor
of $p_i$ is $p'_j$, and that $p'_j=p_i-1$, so it does not belong to $\arc{p_i, p_{i+1}}$. Thus, if $p_i$ and $p_{i+1}$ had a common neighbor, we must have $\card{\supp \ExtPv} = 2$, i.e., $\supp \ExtPv=\{p_i , p_{i+1}\}$. Note that in this case we have $\supp \ExtPv \cap\supp \ExtPv'=\emptyset$ (because $p_i$ and $p_{i+1}$ are nodes of $\G(v,v')$, and so they belong to $\supp \ExtPv \setminus \supp \ExtPv'$), and from~\tref{Lemma}{lem:nec:7} we conclude also that $\G(v,v')$ is either connected or almost-connected (because $\supp \ExtPv=\{p_i , p_{i+1}\}\subset F$). This proves our claim, since it contradicts our assumption that none of the conditions of the theorem is satisfied.
Since $p_i$ and $p_{i+1}$ do not have a common neighbor in $\G(v,v')$ by the previous paragraph, observe that $p_{i+1}$ cannot belong to $F$, because otherwise by \tref{Lemma}{lem:nec:6} we could conclude that $\supp \ExtPv \subset F$ and $\supp \ExtPv \cap \supp \ExtPv' = \emptyset$, and then by \tref{Lemma}{lem:nec:7} we could also conclude that $\G(v,v')$ is either connected or almost-connected, contradicting again our assumption that none of the conditions of the theorem is satisfied. Therefore, given that $p_{i+1}\in\supp \ExtPv$, we have
\[
p_{i+1}\in X.
\]
In order to show that $p_{i+1}$ is also in $C_t$, and therefore~\eqref{equ:compli:-4} holds, let us see that the assumption
\begin{equation}\label{equ:compli:0}
p_{i+1}\notin C_t
\end{equation}
leads to a contradiction.
Since the $t$-th row of $A$ has between two and three ones, and we have assumed $p_i\in C_t$ in~\eqref{equ:compli:-3} and $p'_j=p_i-1\not \in C_t$ in~\eqref{equ:compli:-1},
it follows that either $C_t= \{ p_i, p_i+1 \}$ or $C_t=\{ p_i, p_i+1,p_i+2 \}$. What is more, as $\supp \ExtPv \cup \supp \ExtPv'=\I$, $p'_j=p_i-1$, $p_{i}\in \supp \ExtPv \setminus \supp \ExtPv'$ (since $p_i$ is a node of $\G(v,v')$) and we have assumed $p_{i+1}\notin C_t$ in~\eqref{equ:compli:0}, we conclude that either $C_t = \{ p_i, p_i+1 \}=\{ p_i, p'_{j+1} \}$ or $C_t = \{ p_i, p_i+1, p_i+2 \}=\{ p_i, p'_{j+1},p'_{j+2} \}$. Besides, note that $p'_{j+1}\in \supp \ExtPv' \setminus \supp \ExtPv$ in both cases, because $p'_{j+1} = p_i+1\not \in \supp \ExtPv$ due to the fact that $p_{i+1} \not \in C_t$. Then, if $C_t=\{ p_i, p'_{j+1} \}$, we have $p_i\ngh p'_{j+1}$ by~\tref{Definition}{defn:G}. Similarly, assuming $C_t=\{ p_i, p'_{j+1},p'_{j+2} \}$, if $C_r\in\cov$ is such that $C_r\cap \supp \ExtPv'=\{p'_{j+1}\}$, we must have $C_r\cap \supp \ExtPv=\{p_i\}$ (more precisely, we must have $C_r=\{p_i,p'_{j+1}\}$ because $p'_{j+1}-2=p'_j\not \in C_r$ and $p'_{j+1}+1=p'_{j+2}\not \in C_r$ by the choice of $C_r$, and the $r$-th row of $A$ has at least two ones), and so again we have $p_i\ngh p'_{j+1}$ by \tref{Definition}{defn:G}. Thus, we can always conclude that $p_i\ngh p'_{j+1}$, which contradicts the fact that $p'_j$ is the only neighbor of $p_i$.
Thus, the assumption~\eqref{equ:compli:0} leads to a contradiction and~\eqref{equ:compli:-4} holds, showing that $X$ and $X'$ are transversals.
Finally, we set
\[
d = \car(D), \quad d' = \car(D'), \quad
x = v - d + d', \quad x' = v' - d' + d,
\]
so that $X = \supp x$ and $X' = \supp x'$.
We notice now that $D$ and $D'$ are not empty and different from $\supp \ExtPv$ and $\supp \ExtPv'$ (respectively), as otherwise $\supp \ExtPv\cap \supp \ExtPv' =\emptyset$ and $\G(v,v')$ would be connected or almost-connected by \tref{Lemma}{lem:nec:7}, and therefore
\[
\mathbf{0}\lneqq d \lneqq v
\quad\text{and}\quad
\mathbf{0}\lneqq d' \lneqq v'.
\]
Also, $x\ne v$ since $D\neq \emptyset$ and $D\cap X=\emptyset$, and $x\ne v'$ since otherwise we would have $T=T'= \emptyset$ and then $\G(v,v')$ would be connected.
The fact that $v$ and $v'$ are not adjacent in $Q\ch(A)$ follows now from~\tref{Proposition}{propo:adys:2}.
\end{proof}
\begin{exam}\label{ExampleDifTypes}
There are six types of joint saturation graphs of adjacent vertices of $Q\ch(A)$ when $A$ is row circular.
Among consecutive ones circulant matrices (see~\tref{Example}{exam:cnk:1}), $\C{15}{6}$ is one of the smallest exhibiting all of these types as shown in \tref{Table}{table:cnk:2}: disjoint supports and even path (type 1) or odd path (type 2), cycle (type 3), almost-connected (type 4), and finally overlapping supports and even path (type 5) or odd path (type 6).
See also~\tref{Remark}{rem:incidence} for the case $\C{n}{2}$.
\begin{table}\centering
\begin{tabular}{*{4}{c}}
type & $\supp \ExtPv$ & $\supp \ExtPv'$
& component/s \\
\hline\rule{0pt}{12pt}%
1 &
$\{1,7,13\}$ & $\{6,8,14,15\}$
& $15, 1, 6, 7, 8, 13, 14$ (even path) \\
2 &
$\{6,12,15\}$ & $\{5,11,14\}$
& $14, 15, 5, 6, 11, 12$ (odd path) \\
3&
$\{6,12,15\}$ & $\{5,8,14\}$
& $5, 6, 8, 12, 14, 15, 5$ (cycle) \\
4 &
$\{6,12,15\}$ & $\{3,9,14\}$
& $15, 3, 6, 9, 12$ (even path) + $14$ (node) \\
5 &
$\{6,12,15\}$ & $\{6,8,14,15\}$
& $8, 12, 14$ (even path) \\
6 &
$\{6,12,15\}$ & $\{6,11,15\}$
& $11, 12$ (odd path)
\end{tabular}
\caption{Examples showing each of the six possible behaviors of the joint saturation graph $\G(v,v')$ of adjacent vertices $v$ and $v'$ of $Q\ch(\C{15}{6})$.}
\label{table:cnk:2}
\end{table}
\end{exam}
\tref{Table}{table:cnk:2} also exhibits a simple consequence of our discussions:
\begin{coro}\label{coro:nec:1}
If $v$ and $v'$ are adjacent vertices of $Q\ch(A)$, then the cardinalities of their supports differ by at most one.
\end{coro}
\begin{proof}
If the joint saturation graph $\G(v,v')$ of $v$ and $v'$ is connected, then by~\tref{Lemma}{lem:nec:5} it is either a path or a cycle. Since $\G(v,v')$ is bipartite with partite sets $\supp \ExtPv \setminus \supp \ExtPv'$ and $\supp \ExtPv'\setminus \supp \ExtPv$, we conclude that the cardinalities of $\supp \ExtPv$ and $\supp \ExtPv'$ differ by at most one.
If $\G(v,v')$ is almost-connected and
$\supp \ExtPv\cap\supp \ExtPv' = \emptyset$, either $\supp \ExtPv$ or $\supp \ExtPv'$ is contained in a component of $\G(v,v')$. Then $\card{\supp \ExtPv} = \card{\supp \ExtPv'}$ by~\tref{Lemma}{lem:nec:7}.
\end{proof}
The previous corollary is also a consequence of a technique by Bartholdi et al.~\cite{BOR80}, which Eisenbrand et al.~\cite{EOSV08}
employed to show that if $A$ is row circular, then the \emph{slices} $\{x\in Q(A)\mid \mathbf{1}\cdot x = \beta\}$ are integral polytopes for $\beta\in\mathbb Z$.
When the matrix $A$ is not row circular, the behavior of the joint saturation graphs may be quite different, as shown by the following example, which in particular shows that being partite-connected is not a necessary condition for adjacency in the case of circulant matrices.
\begin{exam}\label{exam:ppfnd:13}
Let us consider the circulant matrix
\[
A = \mathscr{C}(1,1,0,1,0,0,0,0,0,1,0,0,0) \in\mathbb B^{13\times 13},
\]
which is the line-point incidence matrix of a non-degenerate finite projective plane of order $3$, and so it is a circulant matrix
not isomorphic to any $\C{n}{k}$.
It turns out that if $v$ and $v'$ are adjacent vertices of $Q\ch(A)$, then their supports cannot be disjoint,
and the components of their joint saturation graph $\G(v,v')$
are isomorphic to a complete bipartite graph:
either $K_{1,1}$ (one edge),
or $K_{2,1}$ (path with two edges),
or $K_{3,1}$,
or $K_{3,3}$.
In particular, the nodes of $\G(v,v')$ may have degree more than $2$ (compare with \tref{Lemma}{lem:nec:5}).
For instance, consider the vertex $v$ with support
$\{6, 10, 11, 13\}$, and the following choices for an adjacent vertex $v'$:
\begin{itemize}
\item
$v'$ with support
$\{5, 9, 10, 12\}$.
Then $\G(v,v')$ is isomorphic to $K_{3,3}$.
\item\label{exam:ppfnd:13:2}
$v'$ with support
$\{4, 5, 9, 10, 11, 13\}$.
In this case, $\G(v,v')$ is isomorphic to $K_{3,1}$.
\item
$v'$ with support
$\{5, 7, 8, 10, 12, 13\}$.
Then $\G(v,v')$ has two components, each isomorphic to $K_{2,1}$, so it is neither partite-connected nor almost-connected, and the supports of $v$ and $v'$ are not disjoint.
\end{itemize}
Moreover, the supports of the vertices of $Q\ch(A)$ have cardinality either $4$ or $6$, so that the conclusions of \tref{Corollary}{coro:nec:1} do not hold (for instance, the previous choice of $v$ and the second choice for $v'$).
\end{exam}
\section{Minimally nonideal matrices}\label{sec:mni}
A matrix $A\in\mathbb B^{m\times n}$ is said to be \emph{ideal} if $Q(A) = Q\ch(A)$, and \emph{minimally nonideal} (mni for short) if it is not ideal but $Q(A)\cap \{x \in\mathbb R^n \mid x_i = 0\}$ and $Q(A)\cap \{x \in\mathbb R^n \mid x_i = 1\}$ are integral polyhedra for all $i\in\I$.
There are still several interesting open questions regarding mni matrices. On one hand, there is no good characterization of them and many studies revolve around Lehman's fundamental ideas~\cite{Le79, Le79a, Le90}. On the other hand, few infinite families of mni matrices are known: $\C{n}{2}$ for odd $n$, the matrices corresponding to degenerate finite projective planes, the family described by Wang~\cite{Wa11}, as well as all of the corresponding blockers of these families.
Cornu{\'e}jols and Novick~\cite{CN94} stated that, for $n$ odd and greater than $9$, it is always possible to add to $\C{n}{2}$ one row so that the resulting matrix is still mni, obtaining another infinite family of mni matrices.
In this section we will apply our findings to prove this result, showing in addition other more elaborate infinite families of mni matrices based on the family $\C{n}{2}$. Let us start with the following definition.
\begin{defn}\label{defn:core}
If a binary matrix $A$ with no dominating rows and $n$ columns contains a row submatrix $A_1\in\mathbb B^{n\times n}$ which is nonsingular and has $r$ (where $r\ge 2$) ones per row and per column, and the other rows of $A$ have more than $r$ ones, then $A_1$ is called a \emph{core} of $A$.
\end{defn}
Notice that if $A$ has a core then it is unique (up to the permutation of rows). On the other hand, $A$ may coincide with its core.
We summarize some of Lehman's results~\cite{Le79,Le79a,Le90} on mni matrices and their consequences in the next two theorems. With this aim, let us recall that the matrix associated with the degenerate projective plane with $t+1$ points and lines is
\[
\jt = \begin{bmatrix}
0 & 1 & 1 & \dots & 1 & 1 \\
1 & 1 & 0 & \dots & 0 & 0 \\
1 & 0 & 1 & \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 0 & 0 & \cdots & 1 & 0 \\
1 & 0 & 0 & \cdots & 0 & 1
\end{bmatrix}\in \mathbb B^{(t+1)\times (t+1)}.
\]
\begin{thm}[{\cite{Le79,Le79a,Le90}}]
\label{thm:lehman:1}
If $A\in\mathbb B^{m\times n}$ is a mni matrix, then
$Q(A)$ has a unique fractional vertex
and the blocker of $A$,
$\blk(A)$, is mni.
\end{thm}
\begin{thm}[{\cite{Le79,Le79a,Le90}}]
\label{thm:lehman:2}
Let $A\in\mathbb B^{m\times n}$ be a mni matrix which is not isomorphic to $\jt$ for any $t\ge 2$.
Then $A$ has a core, say $A_1$, and its blocker
$\blk(A)$ has a core, say $B_1$,
such that:
\begin{enumcona}
\item
$A_1\mathbf{1} = r\,\mathbf{1}$ and $B_1\mathbf{1} = s\,\mathbf{1}$.
\item
The rows of $A_1$ and $B_1$ may be permuted so that
\begin{equation}\label{equ:lehman}
A_1 B_1^\text{\upshape\textsf{T}} = \mathbf{J} + (r s - n)\,\mathbf{I},
\end{equation}
where $\mathbf{J}$ is the matrix of all ones and
$\mathbf{I}$ is the identity matrix.
\item
$f^{\ast} = \frac{1}{r}\,\mathbf{1}$ is a fractional vertex of $Q(A)$.
\item\label{thm:lehman:2:d}
$f^{\ast}$ is in exactly $n$ edges of $Q(A)$. More precisely, $f^{\ast}$ is adjacent in $Q(A)$ to exactly $n$ vertices which make up the rows of $B_1$.
\item\label{thm:lehman:2:s}
$x\cdot\mathbf{1} \ge s$ defines a facet of $Q\ch(A)$, and
$ Q(A) \cap \{x \in \mathbb R^n \mid x\cdot\mathbf{1} \ge s\} = Q\ch(A)$.
\end{enumcona}
\end{thm}
L{\"u}tolf and Margot~\cite{LM98} gave a condition which ensures that a binary matrix is mni.
\begin{lem}[{\cite[Lemma~2.8]{LM98}}]\label{lem:LM98}
Suppose that $A\in\mathbb B^{m\times n} $ has core $A_1$, with $r$ ones per row, that its blocker $\blk(A)$ has core $B_1$, with $s$ ones per row, and that~\eqref{equ:lehman} holds. Then, if $Q(A)$ has just one fractional vertex, $A$ must be mni.
\end{lem}
Despite the ``minimal'' in mni, a mni matrix may have a row submatrix which is also mni. Cornu{\'e}jols and Novick~\cite{CN94}, and later L{\"u}tolf and Margot~\cite{LM98}, used this fact to construct many new mni matrices by adding rows to known ones. Of interest to us here is the possibility of adding one or more rows to $\C{n}{2}$, which is a mni matrix for $n$ odd, to obtain another mni matrix.
One of the main tools for studying the vertices of the polyhedron which results from the addition of an inequality to the system of inequalities describing a given polyhedron is the following variant of Lemma~8 of~\cite{FP96}, which essentially says that the new vertices are obtained by intersecting the edges of the original polyhedron with the hyperplane associated with the new inequality.
\begin{propo}[{variant of~\cite[Lemma~8]{FP96}}]\label{propo:FK96}
Let $A \in\mathbb R^{m\times n}$ be a matrix with non-negative entries, and suppose $P = \{x\in\mathbb R^n \mid Ax\ge b, x\ge\mathbf{0}\}$ is a full dimensional polyhedron. Let us further assume that the inequality $a\cdot x\ge c$ is independent of those defining $P$, where $a\ge\mathbf{0}$ and $c > 0$.
Then, any vertex $v$ of the polyhedron $P' = P\cap \{x\in\mathbb R^n \mid a\cdot x\ge c\}$ must satisfy one (and only one) of the following:
\begin{itemize}
\item
$v$ is a vertex of $P$ satisfying $a\cdot v\ge c$,
\item
$v$ is a convex combination
$v = \alpha w + (1-\alpha ) w'$ of adjacent vertices $w$ and $w'$ of $P$, satisfying $a\cdot w > c$, $a\cdot w' < c$, and $a\cdot v = c$, that is,
$\alpha = (c - a\cdot w') / (a\cdot w - a\cdot w')$,
\item
$v = w + \beta \mathbf{e}_h$ for some
vertex $w $ of $P$, $\beta > 0$ and $h\in\I$,
such that
$\{w + \gamma \mathbf{e}_h \mid \gamma \ge 0\}$ is an (infinite) edge of $P$,
$a\cdot w < c$ and $a\cdot v = c$, that is,
$\beta = (c - a\cdot w )/a_h$ (necessarily $a_h\ne 0$).
\end{itemize}
\end{propo}
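For concreteness, the candidate new vertices in the second and third cases of~\tref{Proposition}{propo:FK96} are given by closed-form expressions. The following minimal sketch (in Python with NumPy; the function names are ours, introduced only for illustration) computes them directly from the formulas in the proposition.
\begin{verbatim}
import numpy as np

def vertex_on_edge(w, w_prime, a, c):
    # Second case: the hyperplane a.x = c cuts the bounded edge (w, w'),
    # with a.w > c and a.w' < c.
    alpha = (c - a @ w_prime) / (a @ w - a @ w_prime)
    return alpha * w + (1 - alpha) * w_prime

def vertex_on_ray(w, a, c, h):
    # Third case: the hyperplane cuts the infinite edge {w + gamma*e_h},
    # with a.w < c and a_h > 0 (a is non-negative and a_h != 0).
    beta = (c - a @ w) / a[h]
    v = np.array(w, dtype=float)
    v[h] += beta
    return v
\end{verbatim}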
Suppose the mni matrix $A$ has core $A_1$ and its blocker $B = \blk(A)$ has core $B_1$, so that the properties of~\tref{Theorem}{thm:lehman:2} are satisfied, in particular~\eqref{equ:lehman}. Let $\mathcal{B}$ be the set consisting of the fractional vertex $f^{\ast}$ and the binary vertices of $Q(A)$ which are adjacent to it (i.e., the rows of $B_1$, see~\tref{Theorem}{thm:lehman:2}). Suppose furthermore that the binary matrix $M$ has more than $r$ ones per row, and we add to $A$ the rows of $M$ obtaining the matrix $E$, which has no dominating rows. Schematically,
\begin{equation}\label{equ:append}
E = \begin{bmatrix} A \\ M \end{bmatrix}.
\end{equation}
Then, we have:
\begin{lem}\label{lem:mni:2}
If $Mu \ge\mathbf{1}$ for all $u \in\mathcal{B}$, and any vertex of $Q(E)$ which is not in $\mathcal{B}$ is binary and has more than $s$ ones, then $E$ is mni.
\end{lem}
\begin{proof}
Since $Q(E)\subset Q(A)$, if $v\in Q(E)$ is a vertex of $Q(A)$, then it is also a vertex of $Q(E)$. Thus, the elements of $\mathcal{B}$ are vertices of $Q(E)$ because $Mu \ge \mathbf{1}$ for $u \in \mathcal{B}$. Since any vertex of $Q(E)$ which is not in $\mathcal{B}$ is binary, we conclude that $Q(E)$ has just one fractional vertex.
By~\tref{Lemma}{lem:LM98}, it is enough to show now that $E$ has core $A_1$ and $\blk(E)$ has core $B_1$. The first condition is clear ($M$ has more than $r$ ones per row), and the second one follows from the fact that the rows of $\blk(E)$ are exactly the binary vertices of $Q(E)$, and that any vertex of $Q(E)$ which is not in $\mathcal{B}$ has more than $s$ ones.
\end{proof}
The following result relates vertex adjacency in $Q(A)$ with vertex adjacency in $Q\ch(A)$ when $A$ is mni.
\begin{coro}\label{coro:mni:fast}
Let $A$ be a mni matrix not isomorphic to any $\jt$ ($t\ge 2$). Suppose the core $A_1$ of $A$ has $r$ ones per row, and the core $B_1$ of its blocker has $s$ ones per row.
Let $v$ and $v'$ be binary vertices of $Q(A)$.
Then, we have:
\begin{enumcona}
\item\label{coro:mni:fast:a}
If $\max\,\{v\cdot\mathbf{1}, v'\cdot\mathbf{1}\} > s$, the vertices $v$ and $v'$ are adjacent in $Q(A)$ if and only if they are adjacent in $Q\ch(A)$.
\item\label{coro:mni:fast:b}
If $v\cdot\mathbf{1} = v'\cdot\mathbf{1} = s$, the vertices $v$ and $v'$ are always adjacent in $Q\ch(A)$,
and they are adjacent
in $Q(A)$ if and only if $\supp v\cup \supp v' \neq \I$.
\end{enumcona}
\end{coro}
\begin{proof}
By~\trrefp{Theorem}{thm:lehman:2}{thm:lehman:2:s}, we know that $ Q(A) \cap \{x \in \mathbb R^n \mid x\cdot\mathbf{1} \ge s\} = Q\ch(A)$. If $\max\,\{v\cdot\mathbf{1}, v'\cdot\mathbf{1}\} > s$, at least one of the vertices $v$ and $v'$ does not satisfy the inequality $x\cdot\mathbf{1} \ge s$ tightly. Therefore, when we add this inequality to the system $A x\ge \mathbf{1}$, the adjacency relation between these vertices does not change. This shows~\refp{coro:mni:fast:a}.
For the first part of~\refp{coro:mni:fast:b}, we notice that $v$ satisfies with equality $n-1$ of the inequalities corresponding to the rows of $A_1$, as $v$ is adjacent to $f^{\ast} = \frac{1}{r}\,\mathbf{1}$ by~\trrefp{Theorem}{thm:lehman:2}{thm:lehman:2:d}. Since this is also true for $v'$, $v$ and $v'$ satisfy tightly $n-2$ inequalities coming from $A_1$ and the equality $x\cdot\mathbf{1} = s$ which defines a facet of $Q\ch(A)$ and is linearly independent with those of $A_1$ (as $f^{\ast}$ does not satisfy it). Thus, $v$ and $v'$ are adjacent in $Q\ch(A)$.
For the last part of~\refp{coro:mni:fast:b}, let $y = \sum_{k\in \I[\ell]} \lambda_k u^k$ be a strict convex combination of vertices of $Q(A)$, and suppose $\frac{1}{2}\,(v + v') \ge y$. If $v$ and $v'$ have a common null coordinate, say $v_h = v'_h = 0$, then $y_h = 0$ and $u^k\neq f^{\ast}$ for all $k\in \I[\ell]$. Therefore $u^k$ is a binary vertex of $Q(A)$, and so of $Q\ch(A)$, for $k\in \I[\ell]$. Since $v$ and $v'$ are adjacent in $Q\ch(A)$ by the previous paragraph, from~\tref{Proposition}{propo:adys:1} it follows that $\ell=2$ and, without loss of generality, $u^1=v$ and $u^2=v'$. Using again~\tref{Proposition}{propo:adys:1}, we conclude that $v$ and $v'$ are adjacent in $Q(A)$.
Finally, if $\supp v\cup \supp v' = \I$, we have $\frac{1}{2}\,(v + v')\ge f^{\ast}$ (since $r\ge 2$, see~\tref{Definition}{defn:core}), and then by~\tref{Proposition}{propo:adys:1} we conclude that $v$ and $v'$ are not adjacent in $Q(A)$.
\end{proof}
In the remainder of this section we will focus our attention on the mni matrix $\C{n}{2}$ with $n$ odd. This matrix coincides with its core, having exactly $2$ ones per row and per column, and the core of $\blk(\C{n}{2})$ has $s = (n + 1)/2$ ones per row and per column.
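In particular, Lehman's identity~\eqref{equ:lehman} can be verified directly for this pair of cores. A minimal sketch (Python with NumPy; $n = 9$ is an illustrative choice, and rows are paired in the natural way by rotation):
\begin{verbatim}
import numpy as np

n = 9
A1 = np.array([[1 if c in (t, (t + 1) % n) else 0
                for c in range(n)] for t in range(n)])  # rows {t, t+1}
base = np.zeros(n, dtype=int)
base[0] = base[1] = 1
base[3:n:2] = 1                  # (1,1,0,1,0,...,0,1,0), s = (n+1)/2 ones
B1 = np.array([np.roll(base, t) for t in range(n)])
# Lehman's identity with r = 2, s = (n+1)/2, so rs - n = 1:
assert np.array_equal(A1 @ B1.T,
                      np.ones((n, n), dtype=int) + np.eye(n, dtype=int))
\end{verbatim}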
Let us start with some simple properties.
\begin{lem}\label{lem:mni:extremo}
The point $v\in\mathbb B^n$ is a vertex of $Q(\cn)$ if and only if it has neither two consecutive zeroes nor three consecutive ones (cyclically).
\end{lem}
\begin{proof}
For the ``only if'' part we notice that a binary vertex cannot have two consecutive zeroes, since otherwise its support would not be a transversal, and cannot have three consecutive ones, since changing the middle $1$ to $0$ still yields a point of $Q(\cn)$.
The ``if'' part is similar.
If $v$ does not have two consecutive zeroes, then it is in $Q(\cn)$. If $\supp \ExtPv$ were not a minimal transversal, we could diminish a coordinate of $v$ from $1$ to $0$ and still stay in $Q(\cn)$; call this new point $w$. Since $v$ does not have three consecutive ones, $w$ must have two consecutive zeroes, but then $w\notin Q(\cn)$.
\end{proof}
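The characterization in~\tref{Lemma}{lem:mni:extremo} is easy to confirm computationally for small instances. The following sketch (in Python with 0-based indices; the brute-force enumeration is only an illustration, not part of the formal argument) checks, for a small odd $n$, that the cyclic condition of the lemma agrees with a direct minimal-transversal test against the rows $C_t=\{t,t+1\}$ of $\C{n}{2}$.
\begin{verbatim}
from itertools import product

def is_vertex(v):
    # Lemma lem:mni:extremo: no two consecutive zeroes and
    # no three consecutive ones, cyclically.
    n = len(v)
    return not any(
        (v[i] == 0 and v[(i + 1) % n] == 0) or
        (v[i] == 1 and v[(i + 1) % n] == 1 and v[(i + 2) % n] == 1)
        for i in range(n))

def is_minimal_transversal(v):
    # Direct check against the rows C_t = {t, t+1} of C_n^2.
    n = len(v)
    rows = [(t, (t + 1) % n) for t in range(n)]
    covers = lambda w: all(w[a] or w[b] for a, b in rows)
    if not covers(v):
        return False
    # minimality: no 1-coordinate can be lowered to 0
    return not any(covers(v[:i] + (0,) + v[i + 1:])
                   for i in range(n) if v[i])

n = 9  # any small odd n
assert all(is_vertex(v) == is_minimal_transversal(v)
           for v in product((0, 1), repeat=n))
\end{verbatim}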
\begin{lem}\label{lem:mni:intervalo}
For $n$ odd, the joint saturation graph $\G(v,v')$ of two distinct binary vertices $v$ and $v'$ of $Q(\cn)$ has the following properties:
\begin{enumcona}
\item\label{lem:mni:intervalo:a}
If $p\in \supp \ExtPv \setminus \supp \ExtPv'$, $p'\in \supp \ExtPv' \setminus \supp \ExtPv$ and $p\ngh p'$ in $\G(v,v')$, then $p' = p \pm 1$.
\item\label{lem:mni:intervalo:b}
If $i$, $i+1$,$\dots$, $j$ is a path of $\G(v,v')$, then $i$ and $j$ belong to the same partite set of $\G(v,v')$ if and only if $\arc{i,j}$ has odd cardinality.
\item\label{lem:mni:intervalo:c}
If $i$ and $j$ belong to the same component of $\G(v,v')$, then either $i$, $i+1$,$\ldots$, $j$ or $j$, $j+1$,$\ldots$, $i$ is a path of $\G(v,v')$, but not both. By~\refp{lem:mni:intervalo:a}, this is equivalent to the fact that $\G(v,v')$ has no cycles.
\item\label{lem:mni:intervalo:e}
If $\max\,\{v\cdot\mathbf{1}, v'\cdot\mathbf{1}\} > s$, then $v$ and $v'$ are adjacent in $Q(\cn)$ if and only if $\G(v,v')$ is a path. Moreover, this path must be even if $v\cdot\mathbf{1} \ne v'\cdot\mathbf{1}$.
\end{enumcona}
\end{lem}
\begin{proof}
\refp{lem:mni:intervalo:a} is a straightforward consequence of \eqref{equ:edge} and the fact that $C_t=\{t,t+1\}$ for all $t\in \I[n]$. \refp{lem:mni:intervalo:b} follows from~\refp{lem:mni:intervalo:a} and the fact that $\G(v,v')$ is bipartite.
Observe that if $i$ and $j$ belong to the same component of $\G(v,v')$, then by~\refp{lem:mni:intervalo:a} either $i$, $i+1$,$\ldots$, $j$ or $j$, $j+1$,$\ldots$, $i$ is a path of $\G(v,v')$. If both of them were paths, by~\refp{lem:mni:intervalo:b} we would conclude that the cardinalities of $\arc{i,j}$ and $\arc{j,i}$ have the same parity. However, since $n=\card{\arc{i,j}}+\card{\arc{j,i}}-2$, this would contradict the fact that $n$ is odd. This proves the first part of~\refp{lem:mni:intervalo:c}. To complete the proof of~\refp{lem:mni:intervalo:c}, it is enough to note that by~\refp{lem:mni:intervalo:a}, the existence of a cycle in $\G(v,v')$ is equivalent to the existence of $i,j\in \I$ such that $i$, $i+1$,$\ldots$, $j$ and $j$, $j+1$,$\ldots$, $i$ are both paths of $\G(v,v')$.
For the last item,
\trrefp{Corollary}{coro:mni:fast}{coro:mni:fast:a} tells us that $v$ and $v'$ are adjacent in $Q(\cn)$ if and only if they are adjacent in $Q\ch(\C{n}{2} )$, and~\tref{Theorem}{thm:incidence} tells us that $v$ and $v'$ are adjacent in $Q\ch(\C{n}{2} )$ if and only if $\G(v,v')$ is a path because $\G(v,v')$ has no cycles by~\refp{lem:mni:intervalo:c}.
Finally, if $v$ and $v'$ are adjacent and $v\cdot\mathbf{1} \ne v'\cdot\mathbf{1}$, by~\tref{Corollary}{coro:nec:1} one of $\card{\supp \ExtPv \setminus \supp \ExtPv'}$ and $\card{\supp \ExtPv' \setminus \supp \ExtPv}$ is odd and the other even, so that the path $\G(v,v')$ has an even number of edges.
\end{proof}
\begin{lem}\label{lem:componente}
For $n\ge 9$ odd, let $\{i,j,l\}\subset \I$ be such that $i < j < l$, $j - i\ge 3$ odd, $l - j\ge 3$ odd, and either $i\neq 1$ or $l\neq n$. Let $v$ and $v'$ be distinct binary vertices of $Q(\cn)$ and let $\G(v,v')$ be their joint saturation graph. If $v'_{i} = v'_{j} = v'_{l} = 0$, then any component of $\G(v,v')$ contains at most one element of the set $\{i,j,l\}$.
\end{lem}
\begin{proof}
Let us observe first that the cardinalities of $\arc{i,j}$, $\arc{j,l}$ and $\arc{l, i}$ are even as $j - i$, $l - j$ and $n$ are odd.
To prove the lemma, assume by contradiction that a component of $\G(v,v')$ contains two elements of the set $\{i,j,l\}$. Without loss of generality, let us say these are $i$ and $j$. Since $v'_{i} = v'_{j} = 0$, we have $i,j\in \supp \ExtPv \setminus \supp \ExtPv'$, and so $i$ and $j$ are in the same partite set of $\G(v,v')$. Using also the fact that the cardinality of $\arc{i,j}$ is even, from properties~\refp{lem:mni:intervalo:b} and~\refp{lem:mni:intervalo:c} of~\tref{Lemma}{lem:mni:intervalo} we conclude that $j$, $j+1$,$\dots$, $i$ is a path of $\G(v,v')$. As $l$ is in this path and $\arc{j,l}$ has an even number of elements, using~\trrefp{Lemma}{lem:mni:intervalo}{lem:mni:intervalo:b} we conclude that $l\in \supp \ExtPv' \setminus \supp \ExtPv$, but this contradicts $v'_l = 0$.
\end{proof}
We are now ready to prove the following claim by Cornu{\'e}jols and Novick~\cite{CN94}.
\begin{propo}\label{propo:CN94}
For $n\ge 9$ odd, let $\{i,j,l\}\subset \I$ be such that $i < j < l$, $j - i\ge 3$ odd, $l - j\ge 3$ odd, and either $i\neq 1$ or $l\neq n$. Let $a\in \mathbb B^n$ be the characteristic vector of the set $\{i,j,l\}$. Then, the matrix $E$ obtained by adding to $\C{n}{2}$ the row vector $a$ is mni.
\end{propo}
\begin{proof}
By~\tref{Lemma}{lem:mni:2} it will be enough to show that:
\begin{enumcona}
\item\label{propo:CN94:1}
If $\mathcal{B}$ is the set consisting of the fractional vertex $f^{\ast} =\frac{1}{2}\,\mathbf{1} $ and the vertices of $Q(\cn)$ which are adjacent to it, then $a\cdot u \ge 1$ for all $u \in\mathcal{B}$.
\item\label{propo:CN94:2}
Any vertex of $Q(E)$ not in $\mathcal{B}$ is binary and has more than $s = (n + 1)/2$ ones (i.e., more than the number of ones per row in the core of the blocker of $\C{n}{2}$).
\end{enumcona}
To show~\refp{propo:CN94:1}, notice that $a\cdot f^{\ast} = 3/2 > 1$. On the other hand, if $u$ is in the core of $\blk(\C{n}{2})$, then it is a rotation of
\[
(1,1,0,1,0,1,\dots,0,1,0,1,0)\; ,
\]
and therefore $a\cdot u \ge 1$, as the cardinalities of $\arc{i,j}$, $\arc{j,l}$ and $\arc{l, i}$ are even.
To show~\refp{propo:CN94:2} we rely on~\tref{Proposition}{propo:FK96}.
Suppose $v$ is a vertex of $Q(E)$ which is a convex combination of the adjacent vertices $w$ and $w'$ of $Q(\cn)$,
\[
v = \alpha w + (1 - \alpha ) w',
\]
with
\begin{equation}\label{equ:propo:CN94}
a\cdot v = 1, \quad
a\cdot w > 1, \quad
a\cdot w' < 1\; .
\end{equation}
Since $a\cdot f^{\ast} = 3/2$ and $a\cdot u \ge 1$ for any vertex $u$ of $Q(\cn)$ which is adjacent to $f^{\ast}$, we conclude that $f^{\ast}$ is different from both $w$ and $w'$.
Thus, $w$ and $w'$ are binary, because $f^{\ast}$ is the only fractional vertex of $Q(\cn)$ by~\tref{Theorem}{thm:lehman:1}. In particular, we have
\[
a\cdot w' = 0\; ,
\]
and so by~\refp{propo:CN94:1} it follows that $w'\notin\mathcal{B}$, which in turn implies $w'\cdot\mathbf{1} > s$ (recall that by~\tref{Theorem}{thm:lehman:2}, the vertices of $Q(\cn)$ that are adjacent to $f^{\ast}$, which together with $f^{\ast}$ constitute $\mathcal{B}$, make up the rows of the core of $\blk(\C{n}{2})$, and so each of them has $s$ ones, while the other binary vertices of $Q(\cn)$, which are also rows of $\blk(\C{n}{2})$, have more than $s$ ones).
Now, since $w$ and $w'$ are adjacent in $Q(\cn)$, by~\trrefp{Lemma}{lem:mni:intervalo}{lem:mni:intervalo:e} their joint saturation graph $\G(w, w')$ is a path (in particular, connected), and so by~\tref{Lemma}{lem:componente} we must have
\[
a\cdot w \le 1\; ,
\]
contradicting~\eqref{equ:propo:CN94}.
Thus, the second possibility described in~\tref{Proposition}{propo:FK96} cannot happen in the case of $Q(E) = Q(\cn) \cap \{x\in\mathbb R^n \mid a\cdot x\ge 1\}$.
Suppose now $v$ is a vertex of $Q(E)$ of the form
\[
v = w + \beta \mathbf{e}_h\; ,
\]
where $w$ is a vertex of $Q(\cn)$ satisfying $a\cdot w < 1$, and $a\cdot v = 1$. Once again, $w$ can be neither $f^{\ast}$ nor any of the vertices of $Q(\cn)$ which are adjacent to it. Thus, as above, by~\tref{Theorems}{thm:lehman:1} and~\ref{thm:lehman:2} it follows that $w$ is binary and $s < w\cdot\mathbf{1}$. Then, we have $a\cdot w = 0$, which implies $\beta = 1$ since $a\cdot v = 1$. Thus, we conclude that $v$ is binary, and
\[
s < w\cdot\mathbf{1} \le v\cdot\mathbf{1} \; ,
\]
proving~\refp{propo:CN94:2}.
\end{proof}
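Condition~\refp{propo:CN94:1} is also easy to confirm numerically for concrete instances. The sketch below (Python with NumPy; the instance $n = 15$ and $\{i,j,l\} = \{2,5,10\}$ is our own illustrative choice) builds all rotations of $(1,1,0,1,0,\dots,0,1,0)$ and checks that $a\cdot u \ge 1$ for each of them.
\begin{verbatim}
import numpy as np

def core_blocker_rows(n):
    # Rotations of (1,1,0,1,0,...,0,1,0), i.e. the rows of the
    # core of the blocker of C_n^2 (n odd).
    base = np.zeros(n, dtype=int)
    base[0] = base[1] = 1
    base[3:n:2] = 1
    return [np.roll(base, h) for h in range(n)]

n, (i, j, l) = 15, (2, 5, 10)  # j - i and l - j odd and >= 3, i != 1
a = np.zeros(n, dtype=int)
a[[i - 1, j - 1, l - 1]] = 1   # characteristic vector of {i, j, l}
assert all(a @ u >= 1 for u in core_blocker_rows(n))
\end{verbatim}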
One would hope that it is possible to add a circulant matrix $M$ instead of just a single row, but this is not true in general. For instance, if we add to $A = \C{15}{2}$ the matrix
\[\setcounter{MaxMatrixCols}{15}
M = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1
\end{bmatrix},
\]
the resulting matrix $E$ is not mni, whereas, by~\tref{Proposition}{propo:CN94}, adding just one row of $M$ we obtain a mni matrix.
Let us see that we may systematically obtain a mni matrix by adding to $\C{n}{2}$ all the rows of a circulant matrix.
For $x = (x_1,x_2,\dots,x_n)\in\mathbb R^n$, let us denote by $\rho^h(x)$ the rotation of $x$ in $h$ spaces to the right:
\[
\rho^h(x) = (x_{n-h+1},\dots,x_n,x_1,\dots,x_{n-h})\; .
\]
For $n = 3\nu$ ($\nu \in \mathbb N$), let
\begin{equation}\label{equ:3nu:v}
a^1 = (1,0,0,1,0,0,\dots,1,0,0)\; , \quad
a^2 = \rho^1(a^1)\; , \quad a^3 = \rho^2(a^1)\; ,
\end{equation}
that is, $a^1$ consists of $\nu$ groups of the form $(1,0,0)$, and
let $M$ be the matrix with rows $a^1$, $a^2$ and $a^3$.
We would like to show next:
\begin{thm}\label{thm:3nu}
Let $n = 3\nu$ be odd, where $\nu\ge 3$. If $E$ is the matrix obtained from $A = \C{n}{2}$ by appending the rows of $M$ (as in~\eqref{equ:append}), then $E$ is mni.
Thus, varying $\nu$ we obtain an infinite family of mni matrices.
\end{thm}
The proof will be based on several lemmas, which share the notation introduced above.
\begin{lem}\label{lem:3nu:1}
For $i\in \I[3]$ let
\begin{equation}
\label{equ:3nu:w}
w^i = \mathbf{1} - a^i,
\end{equation}
so that, for example,
$w^1 = (0,1,1,0,1,1,\dots,0,1,1)$,
and let $\mathcal{W} = \{w^1,w^2,w^3\}$.
Then, we have:
\begin{enumcona}
\item\label{lem:3nu:1:1}
$w^i$ is a vertex of $Q(\cn)$.
\item\label{lem:3nu:1:4}
$w^i\cdot\mathbf{1} = 2\nu$ and any other vertex $v$ of $Q(\cn)$ which is not in $\mathcal{W}$ satisfies $v\cdot\mathbf{1} \le 2\nu - 1$.
\item\label{lem:3nu:1:2}
$w^i\cdot a^i = 0$.
\item\label{lem:3nu:1:5}
If $i\ne j$, $w^i$ and $w^j$ are not adjacent in $Q(\cn)$.
\item\label{lem:3nu:1:3}
\( v\cdot a^i \ge 1 \)
for every vertex $v$ of $Q(\cn)$ different from $w^i$.
Therefore, a vertex $v$ of $Q(\cn)$ is in $Q(E)$ if and only if $v\notin\mathcal{W}$.
\end{enumcona}
\end{lem}
\begin{proof}
\refp{lem:3nu:1:1} and~\refp{lem:3nu:1:4} follow from~\tref{Lemma}{lem:mni:extremo}, and~\refp{lem:3nu:1:2} follows from the definition of $w^i$ in~\eqref{equ:3nu:w}.
Since $w^i\cdot\mathbf{1} = w^j\cdot\mathbf{1} = 2\nu $ is strictly greater than the number $s = (3\nu + 1)/2$ of ones per row in the core of the blocker of $\C{n}{2}$, by \trrefp{Corollary}{coro:mni:fast}{coro:mni:fast:a}
we know that $w^i$ and $w^j$ are adjacent in $Q(\cn)$ if and only if they are adjacent in $Q\ch(\C{n}{2} )$, and since $\G(w^i,w^j)$ consists of $\nu$ disjoint arcs, by \tref{Theorem}{thm:CharactAdj} we conclude that $w^i$ and $w^j$ are not adjacent in $Q\ch(\C{n}{2} )$. This shows~\refp{lem:3nu:1:5}.
To prove~\refp{lem:3nu:1:3}, assume $v$ is a vertex of $Q(\cn)$ such that $v\cdot a^i < 1$. Then $v$ is binary and $v\cdot a^i = 0$, and so $v$ is dominated by $w^i$. Thus, $v$ and $w^i$ must coincide as they are both vertices of $Q(\cn)$.
\end{proof}
\begin{lem}
\label{lem:3nu:2}
Let
\[
u^1 = (0,1,0,1,0,1,0,1,1,\dots,0,1,1)\; ,
\]
that is, $u^1$ starts with the group $(0,1,0,1,0,1)$ followed by $\nu - 2$ groups of the form $(0,1,1)$,
and let
\[
u^j = \rho^{j-1}(u^1) \quad\text{for $j = 2,\dots,n$.}
\]
Then, for fixed $i\in \I[3]$, the vertices of $Q(\cn)$ adjacent to $w^i$ are
\begin{equation}
\label{equ:3nu:u:adjs}
u^{i}, u^{i+3},\dots, u^{i+3(\nu-1)},
\end{equation}
which are the only vertices of $Q(\cn)$ in the hyperplane $\{x\in\mathbb R^n \mid x\cdot a^i = 1\}$.
\end{lem}
\begin{proof}
Let us start by observing that the points in~\eqref{equ:3nu:u:adjs} are indeed vertices of $Q(\cn)$ by~\tref{Lemma}{lem:mni:extremo}, and they belong to $\{x\in\mathbb R^n \mid x\cdot a^i = 1\}$.
Let us see that they are the only ones in $\{x\in\mathbb R^n \mid x\cdot a^i = 1\}$. Suppose $v$ is a vertex of $Q(\cn)$ such that $v\cdot a^i = 1$. Then $v$ is binary, and after possibly applying some rotations, we may assume that $i = 1$ and $v_1 = 1$. It follows that $v_4 = v_7 = \dots = v_{1 + 3(\nu - 1)} = 0$, and by~\tref{Lemma}{lem:mni:extremo} these zeroes must be surrounded by ones. Thus $v$ is of the form:
\[
(1,?,1,0,1,1,0,1,\dots,0,1,?)\; ,
\]
and since by~\tref{Lemma}{lem:mni:extremo} it cannot have three consecutive ones,
we have:
\[
v = (1,0,1,0,1,1,0,1,\dots,0,1,0)
= u^{n - 2} = u^{1 + 3\,(\nu - 1)}.
\]
To see that the vertices in~\eqref{equ:3nu:u:adjs} are adjacent to $w^i$, we observe that it is enough to show this for $u^i$, since $\rho^3(w^i) = w^i$. Moreover, without loss of generality we may restrict ourselves to showing that $u^1$ and $w^1$ are adjacent in $Q(\cn)$.
Note that $\G(w^1,u^1)$ consists of the path $3$, $4$, $5$, with $\supp \ExtPw^1\setminus \supp \ExtPu^1 = \{3,5\}$ and $\supp \ExtPu^1\setminus \supp \ExtPw^1 = \{4\}$. Since $w^1\cdot\mathbf{1} = 2\nu$ is strictly greater than the number $s = (3\nu + 1)/2$ of ones per row in the core of $\blk(\C{n}{2})$, by~\trrefp{Corollary}{coro:mni:fast}{coro:mni:fast:a} and~\tref{Theorem}{thm:CharactAdj} we conclude that $w^1$ and $u^1$ are adjacent in $Q(\cn)$.
Suppose now that $u$ is a vertex of $Q(\cn)$ which is adjacent to $w^1$. As $w^1\cdot\mathbf{1} = 2\nu > s = (3\nu + 1)/2$, by~\trrefp{Theorem}{thm:lehman:2}{thm:lehman:2:d} we conclude that $u$ is not equal to the unique fractional vertex $f^{\ast}=\frac{1}{2}\,\mathbf{1}$, and from \tref{Lemma}{lem:3nu:1} it follows that $u\notin\{w^2,w^3\}$ and $u\cdot\mathbf{1} \leq 2\nu - 1$. Then, by~\trrefp{Lemma}{lem:mni:intervalo}{lem:mni:intervalo:e} we know that $\G(u,w^1)$ is an even path. Since the ones in $w^1$ come in pairs and
$u\cdot\mathbf{1} < w^1\cdot\mathbf{1}$, this path must be of the form $j$, $j + 1$, $j + 2 \pmod{n}$, with $\{j,j+2\} = \supp \ExtPw^1 \setminus \supp \ExtPu$ and $\{j+1\} = \supp \ExtPu \setminus \supp \ExtPw^1$. Hence, $u$ is obtained from $w^1$ by replacing a group $(1,1,0,1,1)$ by $(1,0,1,0,1)$, so $u = \rho^{3k}(u^1)$ for some $k$.
\end{proof}
For future reference, we notice that
\[
u^1 + \mathbf{e}_3 + \mathbf{e}_5 = w^1 + \mathbf{e}_4 \; ,
\]
and in general, if $j = 3k + i$, with $i\in\I[3]$, then
\begin{equation}\label{equ:3nu:u:w}
u^j + \mathbf{e}_{j+2} + \mathbf{e}_{j+4} = w^i + \mathbf{e}_{j+3}\; ,
\end{equation}
where, as usual, the sums in the indices are to be understood modulo $n$.
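The identity~\eqref{equ:3nu:u:w} can be checked mechanically. The following sketch (Python with NumPy and 0-based arrays; $\nu = 5$ is chosen only as an illustration) builds $a^i$, $w^i$ and $u^j$ exactly as defined above and verifies the identity for every $j$.
\begin{verbatim}
import numpy as np

nu = 5
n = 3 * nu
rho = np.roll                     # rho(x, h): rotate h places right
a1 = np.array([1, 0, 0] * nu)
w = {i: 1 - rho(a1, i - 1) for i in (1, 2, 3)}        # w^1, w^2, w^3
u1 = np.array([0, 1, 0, 1, 0, 1] + [0, 1, 1] * (nu - 2))
u = {j: rho(u1, j - 1) for j in range(1, n + 1)}      # u^1, ..., u^n

def e(p):
    # standard basis vector at (1-indexed) position p, modulo n
    v = np.zeros(n, dtype=int)
    v[(p - 1) % n] = 1
    return v

for j in range(1, n + 1):
    i = (j - 1) % 3 + 1           # j = 3k + i with i in {1, 2, 3}
    assert np.array_equal(u[j] + e(j + 2) + e(j + 4), w[i] + e(j + 3))
\end{verbatim}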
\begin{lem}
If $E$ is the matrix $\C{n}{2}$ to which we have appended the rows $a^1$, $a^2$ and $a^3$ (defined in~\eqref{equ:3nu:v}), then the vertices of $Q(E)$ are those of $Q(\cn)$ except for $w^1$, $w^2$ and $w^3$ (defined in~\eqref{equ:3nu:w}).
\end{lem}
\begin{proof}
By~\tref{Lemma}{lem:3nu:1}, every vertex of $Q(\cn)$ not in $\mathcal{W} = \{w^1,w^2,w^3\}$ is a vertex of $Q(E)$.
Conversely, let us see that if we add one row at a time then no new vertices are created, and the points in $\mathcal{W}$ are the only vertices that are eliminated.
In the first place, let us consider the intersection of $Q(\cn)$ with the half-space
$\{x\in\mathbb R^n \mid a^i\cdot x \ge 1 \}$. If new vertices are created, then they should come from the intersection of the hyperplane $\{x\in\mathbb R^n \mid a^i\cdot x = 1\}$ with an edge of $Q(\cn)$ (\tref{Proposition}{propo:FK96}). This edge should be incident to a vertex $v$ satisfying $a^i\cdot v < 1$, and therefore $v = w^i$ is an endpoint of the edge (\tref{Lemma}{lem:3nu:1}). Given that $w^i$ is adjacent only to the vertices in~\eqref{equ:3nu:u:adjs}, and these vertices belong to $\{x\in\mathbb R^n \mid a^i\cdot x = 1\}$, any new vertex must come from the intersection of an infinite edge of the form $\{w^i + \gamma \mathbf{e}_h \mid \gamma \geq 0\}$ with the hyperplane $\{x\in\mathbb R^n \mid a^i\cdot x = 1\}$. Since $a^i\cdot w^i = 0$, this intersection must be of the form $\{w^i + \mathbf{e}_h\}$ with $h \in\supp a^i$. Hence, $h = 3k + i$ for some $k$, and so, setting $j = h - 3$ and using~\eqref{equ:3nu:u:w}, we have
\[
w^i + \mathbf{e}_h = w^i + \mathbf{e}_{j+3} = u^j + \mathbf{e}_{j+2} + \mathbf{e}_{j+4}\; .
\]
It follows that $w^i + \mathbf{e}_h$ dominates $u^j$, and then it cannot be a vertex of $Q(\cn) \cap \{x\in\mathbb R^n \mid a^i\cdot x \ge 1 \}$. Thus, we conclude that no new vertex is created and the vertices of $Q(\cn) \cap \{x\in\mathbb R^n \mid a^i\cdot x \ge 1 \}$ are the vertices of $Q(\cn)$ except for $w^i$.
Finally, since $a^i\cdot w^j = \nu >1$ for $j\neq i$, observe that the addition of the inequality $a^i\cdot x \ge 1$ does not modify the adjacency relations for $w^j$ ($j\neq i$) in the resulting polyhedron, which are still given by~\tref{Lemma}{lem:3nu:2}. Then, we can repeat the argument above each time we add a new inequality. This completes the proof.
\end{proof}
\begin{proof}[Proof of~\tref{Theorem}{thm:3nu}]
The previous lemmas show that, except for the points in $\mathcal{W} = \{w^1, w^2, w^3\}$ (defined in~\eqref{equ:3nu:w}), the vertices of $Q(\cn)$ and $Q(E)$ coincide, so that~\tref{Lemma}{lem:mni:2} yields that $E$ is mni.
\end{proof}
\section*{Acknowledgements}\label{sec:acknowledge}
\begin{itemize}
\item
The authors are very grateful to the anonymous reviewers for their comments and suggestions which helped to improve the presentation of the results in this paper.
\item
We made wide use of the freely available polyhedral computational codes \emph{PORTA} by Christof and L\"{o}bel~\cite{porta} and \emph{cdd} by
Fukuda~\cite{Fu0.94}: our thanks to their authors.
\item
This work was partially supported by grant PIP 112-201101-01026 from Consejo Nacional de Investigaciones Cient\'ificas y T\'ecnicas (CONICET), Argentina. P.~B.~Tolomei was also partially supported by grant PID-ING 416 from Universidad Nacional de Rosario (UNR), Argentina.
\end{itemize}
\section*{Acknowledgements}
The authors wish to thank the Australian Research Council for supporting this work under grant numbers FL0992016 \& CE11E0082 for MET, and DP0986932 \& FT100100025 for TLD. JMM is supported by IARPA under ARO award W911NF-09-1-0375.
\section*{The Information and Media Choice Models}
\subsection*{An Abundance of Information Drives Increasing Information Utility Rates}
When animals hunt for prey they are selective with their diet \cite{stephenskrebs1986foraging}. In times of scarcity they are less selective \cite{stephenskrebs1986foraging}, for example wolves will take on more difficult prey when starving. And in times of abundance animals are more selective \cite{stephenskrebs1986foraging} --- why waste energy hunting difficult prey when there are plenty of easy calories around? Humans act in the same way when selecting information to consume \cite{pirolli1999information, simon1969designing}. When the internet is down we will read, or watch, whatever we have available.
We can model this along the lines of the prey choice model from food foraging, which describes which types of prey are worth pursuing and consuming \cite{stephenskrebs1986foraging}. Assume an information forager is within a media environment where they are searching and encountering information of different types, $i$, each at its own Poisson rate $\lambda_i$. If consumed, the information provides a benefit $u_i$ in a handling time $t_i$, during which time the forager is not searching. Alternatively the forager can choose to ignore the information and keep searching. The forager's choices of whether to consume or ignore information items will determine the expected total time spent searching, $T_s$, and handling, $T_h$, and the total utility gained, $U$. Within these constraints, the forager is trying to optimise the expected overall rate of utility of foraging given by
\begin{equation}
R_{media} = \dfrac{U}{T_s + T_h} \,.
\end{equation}
Here \emph{media} describes the forager's local environment, such as a media platform. Media platforms are analogous to foraging patches in optimal foraging theory. The forager's choices of which information types to consume can be described as an information diet, $D$. The total expected utility is $U = \sum_D \lambda_i u_i T_s$. Similarly the total expected handling time is $T_h = \sum_D \lambda_i t_i T_s$. Substituting in and cancelling $T_s$, we can write the expected utility rate given a diet
\begin{equation}
R_{media} = \dfrac{\sum_D \lambda_i u_i}{1 + \sum_D \lambda_i t_i} \,. \label{eqn:diet_rate}
\end{equation}
Consuming an information item carries an expected opportunity cost of not spending that item's handling time looking for other items, equal to $t_i R_{media}$, and an expected utility gain of $u_i$. To maximise expected utility rate a forager should therefore consume the item if the item utility rate, $r_i = \frac{u_i}{t_i}$, is at least the overall media platform utility rate, $R_{media}$,
\begin{equation}
r_i \geq R_{media} \label{eqn:info_choice} \,.
\end{equation}
This diet threshold condition is a familiar result from foraging theory \cite{stephenskrebs1986foraging, macarthur1966optimal,pirolli1999information}. To find the optimal diet, item types can be ranked in order of $r_i$ and added to the diet one by one until this inequality fails \cite{macarthur1966optimal}. See the Supplementary Information for a more thorough derivation.
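This ranking procedure translates directly into code. The following minimal sketch (Python; function and variable names are ours, for illustration only) builds the optimal diet from Equation~\ref{eqn:diet_rate} and the threshold rule of Inequality~\ref{eqn:info_choice}.
\begin{verbatim}
def optimal_diet(items):
    # items: list of (lambda_i, u_i, t_i) tuples, one per information type
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    diet, gain, time = [], 0.0, 1.0     # R_media = gain / time
    for lam, u, t in ranked:
        # rate the forager would obtain if this type joined the diet
        r_new = (gain + lam * u) / (time + lam * t)
        if u / t >= r_new:              # threshold rule r_i >= R_media
            diet.append((lam, u, t))
            gain, time = gain + lam * u, time + lam * t
        else:
            break                       # remaining types rank even lower
    return diet, gain / time            # optimal diet and R_media
\end{verbatim}
For instance, with \texttt{items = [(0.2, 5.0, 1.0), (0.5, 2.0, 1.0), (1.0, 0.5, 1.0)]} only the first two types enter the diet, giving $R_{media} \approx 1.18$.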
We can now ask which information types a forager should include in their diet, $D$, to maximise their
expected overall utility rate as information prevalence, here $\lambda_i$, rises.
\begin{equation}
\frac{\partial R_{media}}{\partial \lambda_i} \geq 0 \quad \forall i \label{eqn:r_rises} \,.
\end{equation}
Combining this with Inequality \ref{eqn:info_choice}, increasing information prevalence raises the threshold for diet inclusion, so that higher information utility rates are needed to enter an informavore's diet (Figure \ref{fig:entropy-rising} \textbf{d}). Foragers become more selective when prey is abundant \cite{stephenskrebs1986foraging}.
We can extend traditional foraging theory to information evolution by asking how media producers will respond to this. By assuming there is some cost to media of producing more informative messages --- a standard assumption underlying Zipf's principle of least effort \cite{i2003least,zipf1949human} --- we conclude that an abundance of information creates an adaptive pressure that drives media producers to create information with a higher utility rate.
\begin{proposition}
As information prevalence increases, information utility rates will increase.
\end{proposition}
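As a quick numerical illustration of the proposition (reusing \texttt{optimal\_diet} from the sketch above, with the same illustrative parameters), scaling all encounter rates upward shrinks the diet and raises $R_{media}$:
\begin{verbatim}
items = [(0.2, 5.0, 1.0), (0.5, 2.0, 1.0), (1.0, 0.5, 1.0)]
for scale in (0.1, 1.0, 10.0):
    scaled = [(scale * lam, u, t) for lam, u, t in items]
    diet, R = optimal_diet(scaled)
    print(f"prevalence x{scale}: {len(diet)} types in diet, "
          f"R_media = {R:.2f}")
\end{verbatim}
With these numbers the diet shrinks from three types to one as prevalence grows a hundredfold, while $R_{media}$ rises from about $0.21$ to about $3.33$, the behaviour summarised in Figure~\ref{fig:entropy-rising}~\textbf{d}.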
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{entropy_rising_new.png}
\caption{ a) Word entropy of text samples in the Corpus of Historical American English has risen since around 1900. This is a robust result, with similar trends in the alternative word distribution measures of b) type token ratio and c) Zipf exponent. (Timeseries are smoothed with a moving average window of $\pm$ 5 years, and then averaged over media categories. Shaded region shows 95\% confidence interval of this average.) This is explained by d) simulations of the information choice model with varying information prevalence. For each prevalence, information items are either in the diet and consumed (blue) or ignored (grey). At higher information prevalence, foragers are more selective with items in their diet, which increases the average item utility rate, a proxy for entropy.}
\label{fig:entropy-rising}
\end{figure}
\subsection*{Competition Between Media Platforms Drives Differences Between Short and Long-form Media}
Information is not distributed evenly around the environment but is clumped in patches, which we will call media platforms. A media platform could be a newspaper, Twitter, papers on a desk, a book etc. The forager has to choose not only which information to consume within a media platform, but also which media platforms to visit. The media choice model is analogous to the information choice model in the previous section, and following that logic we find an analogous result to Inequality \ref{eqn:info_choice}: an optimal information forager will visit a media platform if the expected media utility rate is at least the background utility rate from foraging in the overall environment (see Supplementary Information for the full model),
\begin{equation}
R_{media} \geq R_{env} \label{eqn:patch_choice} \,.
\end{equation}
A media platform is characterised by the types of information it contains. The utility rate of a media platform, $R_{media}$, involves summations over separate Poisson processes (Equation \ref{eqn:diet_rate}). To simplify this, let $\bar{u}_m$ be the average utility of information items consumed in the media platform, $\bar{t}_m$ the average time spent consuming information items, and $\lambda_m$ the rate of encounter of any item in the diet. Following this, equation \ref{eqn:diet_rate} becomes a variation of Holling's disc equation \cite{holling1959some} (full derivation in Supplementary Information),
\begin{equation}
R_{media} = \dfrac{{\lambda}_m\bar{u}_m}{1 + {\lambda}_m\bar{t}_m} \label{eqn:holling} \,.
\end{equation}
Dividing numerator and denominator by ${\lambda}_m\bar{u}_m$, and substituting the average item utility rate, defined as the expected utility per unit time spent handling items in the media platform, $\bar{r}_m=\frac{\bar{u}_m}{\bar{t}_m}$,
\begin{equation}
R_{media} = \dfrac{1}{\frac{1}{{\lambda}_m\bar{u}_m} + \frac{1}{\bar{r}_m}} \label{eqn:holling_inverted} \,.
\end{equation}
This is visualised in Figure \ref{fig:media-categories} \textbf{d}. Combining Equation \ref{eqn:holling_inverted} with Inequality \ref{eqn:patch_choice},
the criterion for inclusion in an information forager's diet is
\begin{equation}
\frac{1}{{\lambda}_m\bar{u}_m} + \frac{1}{\bar{r}_m} \leq \frac{1}{R_{env}} \label{eqn:media_patch_criteria} \,.
\end{equation}
The inclusion of a media platform in the information diet is therefore determined by three properties of the information items that it contains: the average utility (i.e. size) of an item, $\bar{u}_m$; the average item utility rate, $\bar{r}_m$; and the prevalence of items within the media platform, ${\lambda}_m$.
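As a sketch of how these three properties interact, the inclusion check can be coded directly from Equation \ref{eqn:holling_inverted}; all numerical values below are hypothetical.
\begin{verbatim}
# Media platform inclusion check based on Inequality (media_patch_criteria).
def platform_rate(lam_m, u_bar, r_bar):
    # R_media = 1 / (1/(lam_m * u_bar) + 1/r_bar)
    return 1.0 / (1.0 / (lam_m * u_bar) + 1.0 / r_bar)

def in_diet(lam_m, u_bar, r_bar, R_env):
    return platform_rate(lam_m, u_bar, r_bar) >= R_env

R_env = 2.0                                  # background utility rate
print(in_diet(0.01, 500.0, 5.0, R_env))  # long-form: rare, large items -> True
print(in_diet(2.0, 3.0, 2.2, R_env))     # short-form, low r_bar        -> False
print(in_diet(2.0, 3.0, 5.0, R_env))     # short-form, high r_bar       -> True
\end{verbatim}
The last two calls illustrate the differential pressure discussed below: with small items, a platform clears the threshold only at a higher average item utility rate.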
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{media_categories.png}
\caption{Word entropy of long-form (fiction and non-fiction), short-form (news and magazines) and very short-form (social) media. a) Short-form media has higher word entropy than long-form media in varied corpora. For each media category, distributions are kernel density estimates cut to the data range, with quartile positions shown. b) Animals foraging for food and c) people foraging for information share a common search problem. d) The expected cumulative utility (solid line) and overall utility rate (dashed line) of foraging in a media platform depend on the properties of the information items it contains. The expected utility rate (and competitiveness) of a media platform is determined by the time spent searching (horizontal solid lines) and consuming (diagonal solid lines) information. e) Consuming short-form media involves more time spent searching for information items, so that the short-form media platforms need a higher average item utility rate (diagonal solid red line gradient) to give an overall utility rate (dotted grey line) equal to that of long-form media (blue line).}
\label{fig:media-categories}
\end{figure}
Short-form media platforms such as news and magazines involve more time spent switching between (and searching for) articles than long-form media platforms like books. As such, to reach the same overall media platform utility rate, $R_{media}$, the information items themselves need to be more information dense in short-form media, to balance this extra time spent switching. In foraging terms, an animal might pursue prey that is calorie-rich but small, such as berries, or it might spend all day chewing grass in a field.
This creates a differential selective pressure on short- and long-form media producers. Given some $R_{env}$, a short-form media platform needs higher average information utility rates, $\bar{r}_m$, to be accepted into the forager's diet than long-form media. Long-form media experiences a relaxed selective pressure on information utility rates because less time is wasted switching in these media platforms. This causes differences in the observed information utility rates in short- and long-form media.
\begin{proposition}
In a competitive environment, short-form media will have higher average information utility rates than long-form media.
\end{proposition}
We will investigate this proposition in the Results section by considering differences in word entropy across media categories.
Inequality \ref{eqn:media_patch_criteria} implies a weaker, necessary condition for diet inclusion, $\frac{1}{\lambda_m \bar{u}_m} \leq \frac{1}{R_{env}}$. This sets a minimal average size of information for diet inclusion at a given level of information prevalence. Even if we increase the average information utility rate, $\bar{r}_m$, to infinity, there will still be a minimal information size that foragers will tolerate within a media platform. At low information prevalence, a lot of time is spent switching between items, so foragers prefer bigger information item sizes, such as books. As information prevalence increases, less time is spent switching between information items, so foragers will tolerate media platforms with smaller and smaller information item sizes (Figure \ref{fig:viability_social_media}), such as social media. In plain language, Twitter only works in a world with instant messages --- no one would go to the library to check out a single Tweet.
\begin{proposition}
As information prevalence increases, very short-form media becomes viable.
\end{proposition}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{min_u_with_increasing_prevalence_one_r.png}
\caption{Minimum average information size, $u_{min}$, for media platform diet inclusion for varying levels of information prevalence, $\lambda_m$. Increasing average information utility rates, $\bar{r}_m$ can increase this limit only to a point. Very short-form media platforms like social media can only capture attention in a world with high information prevalence.}
\label{fig:viability_social_media}
\end{figure}
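The viability bound in Figure \ref{fig:viability_social_media} follows from rearranging Inequality \ref{eqn:media_patch_criteria}: $\bar{u}_m \geq [\lambda_m(1/R_{env} - 1/\bar{r}_m)]^{-1}$, which tends to the hard floor $R_{env}/\lambda_m$ as $\bar{r}_m \to \infty$. A short numerical sketch (with hypothetical parameter values of our own) shows the floor falling as prevalence rises.
\begin{verbatim}
# Minimum viable average item size u_min as information prevalence rises.
R_env, r_bar = 2.0, 8.0                     # hypothetical utility rates
for lam_m in [0.1, 1.0, 10.0, 100.0]:
    u_min = 1.0 / (lam_m * (1.0 / R_env - 1.0 / r_bar))
    floor = R_env / lam_m                   # limit as r_bar -> infinity
    print(lam_m, round(u_min, 3), round(floor, 3))
\end{verbatim}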
Finally, our model also quantifies the pressure on media platforms to make information more easily accessible. If a media platform can increase the prevalence of information, i.e., reduce the expected search time between information encounters, $\frac{1}{\lambda_m}$, then it can reduce the left hand side of Inequality \ref{eqn:media_patch_criteria} and become more competitive. This affects the utility amount term, $\frac{1}{\lambda_m \bar{u}_m}$, and so will be a particularly strong effect in short-form media (in long-form media this term is already small). This can help explain the drive by media platforms to make it easier to access information and minimise clicks through things like infinite scroll, autoplay videos, notifications and apps.
\section*{Results}
\subsection*{Entropy Rising in the Attention Economy}
We use word entropy as our main proxy for information utility rate. We analysed the Corpus of Historical American English (COHA), a balanced corpus with text samples from the 1810s to the 2000s categorised into news, magazines, fiction and non-fiction \cite{Davies2012Nov}. As discussed in Methods, we analysed text samples truncated to $N=2000$ words. We found a clear trend of rising entropy since approximately 1900 (Figure \ref{fig:entropy-rising} \textbf{a}). For robustness we also computed two alternative measures of lexical complexity, with similar trends found since 1900 for the type token ratio (Figure \ref{fig:entropy-rising} \textbf{b}) and Zipf exponent (Figure \ref{fig:entropy-rising} \textbf{c}).
Notably, the trends in separate media categories follow the same pattern of rising word entropy (Figure \ref{fig:timeseries}). We analysed the timeseries of annual averages since 1900 for each media category and lexical measure using Kwiatkowski–Phillips–Schmidt–Shin (KPSS) and Mann-Kendall (MK) tests, with 23 out of 24 tests showing significant evidence of trends at $p<0.05$. For full results and a deeper analysis, see the Supplementary Information.
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{coha_categories_timeseries.png}
\caption{Timeseries of word entropy across media categories in the Corpus of Historical American English. For each media category, the timeseries was smoothed using an average over a window of $\pm$ 5 years. The shaded regions are 95\% confidence intervals of this average. All media categories show an upward trend in word entropy from 1900.}
\label{fig:timeseries}
\end{figure}
\subsection*{Historical Analysis of US Publishing}
We investigated the history of media publishing in America. Magazine publishing was the most interesting. Figure \ref{fig:historical-magazines} shows the historical trend in COHA magazine word entropy alongside magazine circulation figures and important events. Magazine publishers are in a two-sided market where they sell magazines to consumers and attention to advertisers \cite{evans2020economics}, with the majority of revenue from selling attention \cite{sumner2010magazine}. This wasn't always the case in the US --- prior to the 1890s most magazine revenue was from sales, with advertising considered undesirable \cite{sumner2010magazine}. Towards the late 19th century, a combination of rapidly decreasing printing costs, growth in the literate population, discounts from the US postal service and the ability to target adverts to a niche readership allowed a new business model to emerge in magazine publishing \cite{sumner2010magazine}. This new model was to sell magazines below the price of production, increasing circulation so that those costs could be recouped by advertising revenue \cite{sumner2010magazine}. Before 1893, most magazines sold for 25 cents --- until a price war led to the magazines McClure's, Munsey's and Cosmopolitan dropping their prices to 10 cents and subsequently enjoying rises in circulation and advertising revenue \cite{sumner2010magazine}. The 10 cent magazines contributed to a tripling in total magazine readership from 1890 to 1905 \cite{sumner2010magazine}, and there was a huge jump in word entropy in the same period (Figure \ref{fig:historical-magazines}).
The Audit Bureau of Circulation was created by advertisers in 1914 \cite{sumner2010magazine} to better measure magazine readership numbers. This quantification of attention further increased pressure on magazine publishers. Other changes included moving advertisements from the back of the magazine to alongside the main content --- a move that forced copywriters to improve the appeal of the content through adding color and improving graphics \cite{sumner2010magazine}, and we hypothesise by increasing word entropy.
Word entropy continues to rise throughout the 20th century alongside magazine circulation, with a Pearson correlation coefficient $r=0.91$ ($p < 0.001$), although both rise over time so that confounding factors are not ruled out (Figure \ref{fig:historical-magazines}). After the 1890s, the biggest drop in word entropy was during the Great Depression, when magazine circulation also fell. There is a suggestion in the data that things change around the year 2000, as magazine circulation drops but word entropy continues to rise. The rise of digital media around this time is perhaps the biggest change in publishing since the printing press, so we would not expect the same trends to necessarily continue --- and digital media represents a new competitive pressure which would drive word entropy rise within our model, matching the historical trend.
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{magazine_history.png}
\caption{Historical analysis of word entropy in magazines (red dotted, timeseries calculated as in the previous figure) with key events (pink) and US monthly magazine circulation (purple).}
\label{fig:historical-magazines}
\end{figure}
Concerning the other media categories, circulation numbers and competition for attention certainly increased in the 20th century, and we see a general rise in word entropy. However the timeseries trends do not fit quite as well alongside specific historical events as in magazine publishing. This is not unexpected as they are under less direct attention economy pressure. Fiction is interesting as it had high word entropy during the 19th century, decreasing throughout that century (Figure \ref{fig:timeseries}). This is partly explained by a large number of plays and scripts in COHA in this time period, which have particularly high word entropy. However even removing these documents, the fiction word entropy is still very high in the 19th century. We believe the high word entropy here, and the downward trend, are not primarily caused by attention economy pressures. There may be a connection with changes in literary fashion in 19th century US fiction from romanticism towards realism, but this is beyond the scope of this study and so we leave the question open for other researchers.
\subsection*{Higher Entropy in Short-form Media}
The historical trend (Figure \ref{fig:timeseries}) suggests differences in entropy between media categories. We investigated this relationship further in the Corpus of Contemporary American English (COCA) and the British National Corpus (BNC), as well as social media data from Facebook and Twitter. Figure \ref{fig:media-categories} \textbf{a} shows the distribution of word entropy across different media categories. See Extended Data for equivalent figures for the other lexical measures. Within COHA (limited to 2000-2007), BNC, and COCA there were significant differences in all lexical measures across media categories (ANOVA test $p<0.01$). Short-form media categories of news and magazines (with low $u_i$ per information type) have higher entropy (with higher $r_i$) than long-form media (with high $u_i$). The shortest short-form media---Twitter and Facebook status updates---has the highest entropy. This agrees with the implications of Inequality \ref{eqn:media_patch_criteria}, as visualised in Figure \ref{fig:media-categories} \textbf{e}.
To our knowledge this is the first large scale quantitative analysis of differences in word entropy (and type token ratio, Zipf exponent) between media categories. The differences persist across different corpora, and even between American and British media. Moreover, these observed patterns are explained by our information foraging diet choice model.
\section*{Discussion}
Language evolution has been shown to follow a number of principles governed by human psychology. These principles have, for example, included features of biological and cultural evolution \cite{smith2008cultural, christiansen2008language}, learning \cite{hills2015recent, christiansen2008language, Lupyan2010Jan}, cooling by expansion \cite{petersen2012languages}, word formation and distribution \cite{i2003least}, and the decay of morphological complexity \cite{Lupyan2010Jan, Lieberman2007Oct}. Our results extend the psychological consequences on language evolution to word entropy in response to information abundance.
We use animal foraging theory to understand how competition influences information evolution. Our model describes observed empirical changes in word entropy of English both within and between media categories in response to increasing information abundance. Our analysis of historical data shows that information markets respond predictably to this increased competition. Furthermore, our model offers a simple explanation in terms of humans as information rate maximisers responding to rising information abundance. We welcome alternative explanations for the observed changes and hope to start a debate in this regard. Notably, prevailing sociolinguistic models \cite{Lupyan2010Jan} and empirical results of grammatical simplification \cite{Michel2011Jan, Lieberman2007Oct} would seemingly predict a decrease in word entropy of English --- the opposite of what we find (see Supplementary Information).
We make a key assumption that people's attention is captured and maintained by high word entropy text. There are empirical findings that support the idea of people's attention being attracted to high entropy information, such as eye tracking experiments that find participants' visual attention is attracted by information with a high Kullback-Leibler divergence \cite{itti2009bayesian} and high complexity \cite{radach2003eye}.
Essentially we assume that people are, on average, attracted to novelty and bored by repetition. Human choices are, of course, based on more than entropy --- for example, humans respond to social cues and risk \cite{hills2019dark} --- just as animals do not always maximise net energy intake but also consider other factors like macro-nutrient content and predators \cite{stephenskrebs1986foraging}. Moreover, information producers are not simply interested in capturing attention, but also in influence and selling ideas and services \cite{chen2014economic, evans2020economics}. Nonetheless, just as animal foraging models have been shown to predict human behaviour in a variety of domains \cite{winterhalder1986diet, pirolli1999information,pirolli2009elementary, fu2007snif,hills2012optimal}, our analyses suggest these models also extend to understanding the shape of information evolution, just as the co-evolutionary arguments of Darwin might have predicted \cite{darwin2011various}.
\section*{Methods}
\textbf{Text Corpora and Data Cleaning}
Several text corpora were investigated. The Corpus of Historical American English (COHA) has over 100,000 texts spanning the 1810s to 2000s, balanced between categories of fiction, non-fiction, news and magazines \cite{Davies2012Nov}. The Corpus of Contemporary American English (COCA) has over 150,000 texts from between 1990 and 2008, split equally between fiction, popular magazines, newspapers, academic journals and spoken word \cite{davies2009385+}. The British National Corpus (BNC) contains over 4,000 texts from between 1960 and 1993, including written categories of academic prose, fiction and verse, newspapers, non-academic prose and biography, other published materials and unpublished materials \cite{ByLouBurnard2007Jan}. Fiction and newspapers are common categories across the corpora. Magazines are a common category between COHA and COCA. We grouped as non-fiction the categories of COHA non-fiction, COCA academic journals and BNC academic prose.
The text sample data was cleaned before analysis in a standard way \cite{gerlach2020standardized}. COHA and COCA have similar formats and so followed the same procedure. For both:
\begin{itemize}
\item Stripped any headers that were not part of the main text samples.
\item Removed any XML text tags.
\item Removed any sentences that contained "@" symbols. COHA and COCA randomly replace words with the @ symbol in groups of ten for copyright reasons \cite{rudnicka2018variation}.
\item Removed apostrophes and extra whitespace.
\item Used python's natural language toolkit (nltk) package to convert text to tokens \cite{bird2009natural}.
\item Selected the last 2000 tokens of the text sample for processing. This avoids, as much as possible, anomalous text that sometimes appears at the start of text samples such as a contents section.
\end{itemize}
For the BNC data, python's natural language toolkit package comes with a BNC corpus reader \cite{bird2009natural}, which was used to extract tokens. The only other treatment was to remove extra whitespace and apostrophes as with COCA and COHA.
We also investigated social media. The Twitter dataset consisted of 1.6 million tweets scraped from the Twitter API between April and June 2009 \cite{go2009twitter} and available online at \url{https://www.kaggle.com/kazanova/sentiment140}. To simulate a Twitter feed, the tweets were randomly collated to create 1000 text samples with $N \geq 2000$ words each. The Facebook dataset consisted of status updates from 2016 from a range of 163 public accounts, available online at \url{https://github.com/minimaxir/interactive-facebook-reactions}. Non-English Facebook statuses were removed. The Facebook statuses were collated chronologically to simulate a news feed to produce 24 text samples with $N>2000$ words each. For both, the data was cleaned:
\begin{itemize}
\item Removed apostrophes and extra whitespace.
\item Removed any urls.
\item Removed hashtags and usernames i.e. any words containing "@" or "\#".
\item For the Facebook data, removed any non-English statuses.
\item Used python's natural language toolkit (nltk) package \cite{bird2009natural} to convert the collated samples into a list of tokens, with the last 2000 tokens taken.
\end{itemize}
Social media statuses are by nature short and do not exist in samples of $N \geq 2000$ words, and lexical measures of short text samples have little meaning. We simulated feeds by collating status updates. This will naturally create text samples with high lexical diversity. This is not a flaw in the analysis --- the high information density of a news feed arises from the collation of statuses and reflects how people actually consume social media.
\textbf{Lexical Measures}
Lexical diversity can be thought of as a proxy for information density, or entropy. We measure it using type token ratio, unigram word entropy and Zipf exponent \cite{Bentz2015Jun}. The lexical measures are all sensitive to sample size, which is why we used text samples of a fixed size of $N=2000$ words.
Type token ratio (TTR) is the number of unique words (types) divided by the total words (tokens) in a text sample.
\begin{equation}
TTR = \dfrac{\# types}{\# tokens} \,.
\end{equation}
Empirical unigram word entropy, $H_1$, is measured using the relative frequencies of words, $f_i$, given a set of $W$ unique words in the text sample. We use the maximum likelihood or plug-in estimator, which has the benefit of being simple and well known, and which has been shown to correlate well with more advanced estimators \cite{bentz2017entropy}. There is some bias in the estimator \cite{bentz2017entropy}, but this bias is systematic and so is not critical for trend analysis.
\begin{equation}
H_1 = - \sum_{i=1}^W f_i \log_2 f_i \,.
\end{equation}
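Both measures are straightforward to compute from a token list; a minimal sketch (function names are our own) is:
\begin{verbatim}
# Type token ratio and plug-in unigram word entropy for a token list.
from collections import Counter
from math import log2

def ttr(tokens):
    return len(set(tokens)) / len(tokens)

def word_entropy(tokens):
    n = len(tokens)
    return -sum((c / n) * log2(c / n) for c in Counter(tokens).values())

sample = "the cat sat on the mat and the dog sat too".split()
print(ttr(sample), word_entropy(sample))
\end{verbatim}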
Words in natural language are typically approximately distributed as a power law distribution between type frequency, $f_i$, and type rank in that frequency distribution, $r(f_i)$ \cite{clauset2009power}. This power law is parameterised by the Zipf exponent, $\alpha$, which describes the steepness of the distribution in log space. Maximum likelihood estimation was used to estimate the Zipf exponent \cite{clauset2009power}. This estimator has the benefit of being widely used and well known. It shows bias (as do all Zipf estimators) \cite{pilgrim2020bias}, but the bias is systematic so is not critical for trend analysis.
\begin{equation}
f_i \propto r(f_i)^{- \alpha} \,.
\end{equation}
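As one plausible implementation of this fit (a sketch of our own, which may differ in detail from the estimator we used following Clauset et al.), each token can be treated as a draw from $p(r) \propto r^{-\alpha}$ over the type ranks, and the likelihood maximised numerically:
\begin{verbatim}
# MLE sketch for the Zipf exponent: tokens are draws from
# p(r) = r^(-alpha) / H(alpha), with H(alpha) = sum over ranks r of r^(-alpha).
import numpy as np
from collections import Counter
from scipy.optimize import minimize_scalar

def zipf_mle(tokens):
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), float)
    ranks = np.arange(1, len(freqs) + 1)
    n = freqs.sum()                            # number of tokens
    def neg_loglik(alpha):
        H = np.sum(ranks ** (-alpha))          # normalising constant
        return n * np.log(H) + alpha * np.sum(freqs * np.log(ranks))
    return minimize_scalar(neg_loglik, bounds=(0.01, 5.0),
                           method="bounded").x
\end{verbatim}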
\textbf{Timeseries Smoothing}
The Corpus of Historical American English (COHA) provides historical text samples across fiction, non-fiction, news and magazines categories. The type token ratio, word entropy and Zipf exponent were calculated for each text sample.
The timeseries was smoothed for plotting using a moving average with measures of text samples from $\pm$ 5 years. The 95\% confidence interval was calculated as the standard error of this mean calculation multiplied by 1.96 (assuming normally distributed errors). For each lexical measure, the mean was plotted for each year with the confidence interval region shaded. We only included years where we had a minimum of 10 data points within the window.
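A sketch of this smoothing procedure (pandas-based; the column names are our own) is:
\begin{verbatim}
# Moving-average smoothing over a +/-5 year window with a 95% CI.
# df: DataFrame with an integer 'year' column and a measure column `col`.
import pandas as pd

def smooth(df, col, half=5, min_points=10):
    means, cis = {}, {}
    for y in range(df["year"].min(), df["year"].max() + 1):
        w = df.loc[df["year"].between(y - half, y + half), col]
        if len(w) >= min_points:
            means[y] = w.mean()
            cis[y] = 1.96 * w.sem()   # assumes normally distributed errors
    return pd.Series(means), pd.Series(cis)
\end{verbatim}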
For the main composite figure (Figure \ref{fig:entropy-rising} \textbf{a}-\textbf{c}), the timeseries for media categories were combined by taking an average across the timeseries annual means for the media categories that had a value for that year. The 95\% confidence interval was again calculated as 1.96 times the standard error. For each year, the standard error of the estimate of the mean, $SE_{\bar{X}}$ was computed based on the delta method,
\begin{equation}
SE_{\bar{X}} = \dfrac{\sqrt{\sum_{i=1}^{n} SE_i^2}}{n} \,,
\end{equation}
with $n$ depending on how many media categories had values for the annual mean each year.
\textbf{Timeseries Analysis}
The results were binned into years and the median taken for each year (similar results were found when using the mean). Trend analyses were carried out on these binned data between the years 1900 and 2009, the last year of data. KPSS and MK tests were carried out for each measure and media category in COHA (full results in Extended Data).
The Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test assumes the null hypothesis of a stationary timeseries. p-values below 0.05 mean that we can reject this hypothesis at 5\% significance and suggest a trend. The test was applied using python's statsmodels package \cite{seabold2010statsmodels}.
The Mann-Kendall test is a non-parametric trend test \cite{hussain2019pymannkendall}. The test assumes no serial correlation, so that errors in one observation do not predict errors in other observations \cite{hussain2019pymannkendall}. Our data is independently sampled so this is reasonable. The null hypothesis is that the data has no trend, and the p-value tells us the probability that the data was observed under the null hypothesis. At 5\% significance we reject the null hypothesis if $p<0.05$. The test was carried out using python's pymannkendall package \cite{hussain2019pymannkendall}.
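A minimal sketch of the two tests applied to one annual series, using the packages cited above, is:
\begin{verbatim}
# Trend tests for one annual series (a 1-D array of annual medians).
import pymannkendall as mk
from statsmodels.tsa.stattools import kpss

def trend_tests(series):
    kpss_stat, kpss_p, _, _ = kpss(series, regression="c", nlags="auto")
    mk_result = mk.original_test(series)
    # KPSS: small p rejects stationarity; MK: small p rejects "no trend"
    return kpss_p, mk_result.p
\end{verbatim}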
\textbf{Differences Between Media Categories}
We looked at the distributions of the lexical measures within media categories in COCA, the BNC and COHA (restricted to 2000-2007 to avoid the effect of historical changes). To test for differences between the groups we carried out ANOVA tests across categories within each corpus, separately for each of the lexical measures. At 5\% significance, $p<0.05$ provides evidence that the media categories are drawn from different underlying population distributions. Each of these tests reported very small p-values with $p<0.001$. The tests were carried out using python's statsmodels package \cite{seabold2010statsmodels}.
For Figure \ref{fig:media-categories} \textbf{a}, the distributions of word entropy for each media category are shown as a kernel density estimate with the bandwidth determined by the Scott rule and the density trimmed to the data range.
\textbf{US Magazine Circulation}
The data for magazine circulation numbers were taken from Chapter 1 of Sumner's "The Magazine Century: American Magazines Since 1900" \cite{sumner2010magazine}, which attributes the data to the Audit Bureau of Circulation. This data source does not track all US magazines, but does track well-known magazines. The data was plotted without further treatment.
\textbf{Information Diet Simulations}
For Figure \ref{fig:entropy-rising} \textbf{d} we simulated the information diet choice model for varying levels of information prevalence. For each level of prevalence, the first step was to randomly generate a number of information items with utility rates drawn from a uniform distribution, $r_i \sim U(20,30)$. The total number of items drawn was proportional to the information prevalence. The information diet choice algorithm was then applied, adding items in order of $r_i$ until Inequality \ref{eqn:info_choice} failed. The items included in the diet were considered consumed and plotted on the figure as blue points. Items that were not included in the diet were considered ignored, and were removed from the data with 80\% probability, to represent the selective pressure on these items from being unable to attract attention. The surviving items were plotted on the figure as grey points. The variables were adjusted manually to illustrate a wide range of diets.
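A self-contained sketch of one run of this simulation is given below; the scaling of item numbers with prevalence and the survival probability are illustrative choices of our own.
\begin{verbatim}
# Diet-choice simulation at a given information prevalence.
import random

def simulate(prevalence, survive_ignored=0.2):
    n_items = max(1, int(5 * prevalence))  # item count scales with prevalence
    rates = sorted((random.uniform(20, 30) for _ in range(n_items)),
                   reverse=True)           # r_i with t_i = 1, so u_i = r_i
    lam = prevalence / n_items             # per-type encounter rate
    diet, num, den = [], 0.0, 1.0
    for r in rates:                        # greedy diet construction
        if r >= num / den:                 # the diet threshold condition
            diet.append(r); num += lam * r; den += lam
        else:
            break
    ignored = rates[len(diet):]
    survivors = [r for r in ignored if random.random() < survive_ignored]
    return diet, survivors                 # consumed (blue) / ignored (grey)
\end{verbatim}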
\subsection*{Data Availability}
All data generated following analysis of text samples is available at \url{https://github.com/chasmani/PUBLIC_information_foraging_in_the_attention_economy}.
The text corpora data is not included in the public repository for copyright and size reasons. They are available at:
\begin{itemize}
\item COHA and COCA. \url{https://www.corpusdata.org/}
\item BNC. \url{http://www.natcorp.ox.ac.uk/}
\item Twitter dataset. \url{https://www.kaggle.com/kazanova/sentiment140}
\item Facebook dataset. \url{https://github.com/minimaxir/interactive-facebook-reactions}
\end{itemize}
\subsection*{Code Availability}
All code used to generate figures and analysis is available at \url{https://github.com/chasmani/PUBLIC_information_foraging_in_the_attention_economy}.
\bibliographystyle{unsrt}
\section{Supplementary Information --- Linguistic Niche Hypothesis}
The finding in the main paper of word entropy, and lexical diversity, rising in American English is the opposite of what might be predicted by the Linguistic Niche Hypothesis. The hypothesis makes predictions about the complexity of language morphology (e.g. I ate, la casita) and syntax (e.g. I did eat, la pequeña casa), with the assumption that complexity is balanced between the two. The Linguistic Niche Hypothesis \cite{Lupyan2010Jan} suggests that languages in large, spread out social systems tend to have simpler morphological forms, with the grammatical work instead being done through syntax \cite{Lupyan2010Jan}. The hypothesised mechanism for this is that second language learners prefer simpler forms so that complex morphological forms disappear over time \cite{Lupyan2010Jan}. A global lingua franca like English should therefore be undergoing morphological simplification, and evidence does suggest that this is the case with the regularisation of English past tense verbs \cite{Michel2011Jan, Lieberman2007Oct} and a loss of inflectional diversity \cite{zhu2018modern}. Further work suggests that this morphological simplification should correlate with a reduction in lexical diversity as measured by type token ratio \cite{Bentz2015Jun, kettunen2014can} (or word entropy) --- complex morphological forms are non-repetitive (many unique word types per word token) whilst syntactic grammatical modifiers are repetitive (few unique word types per word token). We find that lexical diversity is instead rising in American English. We suggest some possible explanations:
\begin{enumerate}
\item English morphology is overall becoming more complex, against the Linguistic Niche Hypothesis.
\item English morphology is becoming simpler without an increase in syntactic complexity. This would be a further refutation of the already beleaguered \cite{deutscher2009overall, sampson2009linguistic} equicomplexity assumption, which states that mature languages have broadly equal grammatical complexity, balanced between morphology and syntax.
\item Lexical diversity (and Type Token Ratio) is not a good measure of morphological complexity. The increase in lexical diversity is instead driven by more concise information and a wider, and faster switching of, contexts in written media.
\end{enumerate}
The third option here aligns well with the ideas in the main paper, and is in our opinion at least partly responsible. If people are drawn towards higher utility rate information then that could drive English to be more concise and to switch contexts more quickly.
\section{Supplementary Information --- Prey Choice Model Derivation}
In the main paper we justify the prey choice algorithm using an argument that considers the opportunity cost of spending time handling a prey item versus searching in the environment. Here we derive the same result more rigorously. As in the main paper, we have information types, $i$, that are encountered with rates $\lambda_i$ while searching. Each information item, if consumed, provides a benefit $u_i$ in a handling time $t_i$, during which the forager is not searching for other items.
In the main text, a patch expected utility rate is given by,
\begin{equation}
R_{patch} = \dfrac{\sum_D \lambda_i u_i}{1 + \sum_D \lambda_i t_i} \,. \label{eqn:diet_rate_SI}
\end{equation}
This assumes that information types are either in the diet, $D$, in which case they are always consumed upon encounter, or alternatively the items are not in the diet and never consumed. We can generalise this so that foragers have some probability of consuming an information type upon encounter, $p_i$,
\begin{equation}
R_{patch} = \dfrac{\sum \lambda_i u_i p_i }{1 + \sum \lambda_i t_i p_i} \label{eqn:diet_rate_prob_SI} \,.
\end{equation}
The forager can choose the probability of paying attention to each information type, and a forager's strategy can be defined as a vector
$\textbf{p} = [p_1, p_2, \dots, p_n]$. These choices are independent, so to find the strategy that gives the maximum utility rate we can consider each choice, $p_j$, separately. Separating $p_j$ from the summations and differentiating,
\begin{equation}
\frac{\partial R_{patch}}{\partial p_j} = \dfrac{\lambda_j u_j (1 + p_j \lambda_j t_j + \sum_{i \neq j} p_i \lambda_i t_i) - \lambda_j t_j (p_j \lambda_j u_j + \sum_{i \neq j} p_i \lambda_i u_i)}{(1 + p_j \lambda_j t_j + \sum_{i \neq j} p_i \lambda_i t_i)^2} \,.
\end{equation}
Cancelling like terms
\begin{equation}
\frac{\partial R_{patch}}{\partial p_j} = \dfrac{\lambda_j u_j (1 + \sum_{i \neq j} p_i \lambda_i t_i) - \lambda_j t_j (\sum_{i \neq j} p_i \lambda_i u_i)}{(1 + p_j \lambda_j t_j + \sum_{i \neq j} p_i \lambda_i t_i)^2} \,.
\end{equation}
The sign of this does not depend on $p_j$. So if $\frac{\partial R}{\partial p_j} > 0$, $R_{patch}$ will be maximised with $p_j=1$, and otherwise with $p_j=0$. The condition for $p_j=1$ is
\begin{equation}
\frac{u_j}{t_j} > \dfrac{\sum_{i \neq j} p_i \lambda_i u_i}{1 + \sum_{i \neq j} p_i \lambda_i t_i} \label{eqn:diet_confition_full} \,.
\end{equation}
The right hand side is the rate of utility excluding item $j$, $R_{\neg j}$. The item should be included in the diet if the utility rate of the item, $r_j = \frac{u_j}{t_j}$, is greater than the overall rate of foraging without the item.
\begin{equation}
r_j \geq R_{\neg j} \label{eqn:diet_condition} \,.
\end{equation}
This is equivalent to the diet inclusion criterion given in the main paper. To find the optimal diet, one can add items in order of their utility rate until the inequality fails.
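This zero-one structure can also be checked numerically: for random instances (parameters of our own), the greedy threshold diet attains the same rate as a brute-force search over all $2^n$ on/off strategies.
\begin{verbatim}
# Verify the greedy threshold diet against brute force over all subsets.
import itertools, random

def rate(items, mask):
    num = sum(l * u for (l, u, t), on in zip(items, mask) if on)
    den = 1.0 + sum(l * t for (l, u, t), on in zip(items, mask) if on)
    return num / den

def greedy_mask(items):
    order = sorted(range(len(items)),
                   key=lambda i: items[i][1] / items[i][2], reverse=True)
    mask = [0] * len(items)
    for i in order:
        l, u, t = items[i]
        if u / t >= rate(items, mask):
            mask[i] = 1
        else:
            break
    return mask

random.seed(1)
items = [(random.uniform(0.1, 1.0), random.uniform(1.0, 10.0),
          random.uniform(0.5, 2.0)) for _ in range(8)]
brute = max(rate(items, m) for m in itertools.product([0, 1], repeat=8))
print(brute, rate(items, greedy_mask(items)))   # the two rates coincide
\end{verbatim}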
\section{Supplementary Information --- Patch Choice Model and Non Constant Patches}
The patch choice model considered in the main paper is analogous to the information choice model. Patches of each type are randomly encountered in the environment as Poisson processes with rates $\lambda_{patch}$. We also assume that patches have a constant expected rate of utility, $R_{patch}$, and some finite time, $T_{patch}$, until the rate drops to zero, which gives each patch a total utility, $U_{patch}$. Foragers can choose to either consume or ignore a patch upon encountering it. This model is identical to the information choice model, so we can follow that derivation and jump to the conclusion that a patch will be included in the diet if the patch utility rate is greater than or equal to the overall rate of foraging in the environment, $R_{patch} \geq R_{env}$.
Information patches in the real world have non-constant utility rates. Commonly patch marginal utility will decrease with time \cite{stephenskrebs1986foraging, Charnov1976Apr}. This can happen as finite prey are consumed \cite{bettinger2016marginal, stephenskrebs1986foraging}. For example, within a patch an optimal forager will consume the most profitable items first if they can, which then makes those items more scarce and reduces the overall utility rate in the patch as time goes on \cite{bettinger2016marginal}. Examples are collecting raspberries from a bush, or checking your email. Information items themselves may degrade while being consumed, for example news articles often follow an inverted pyramid structure where the most important information is presented first, with extra paragraphs adding marginally diminishing extra information \cite{po2003news}. Magazines, fiction and non-fiction have their own styles and utility curves. Overall we can say that utility rates in patches, and information, are not constant.
An optimal forager now has to choose both which patches to consume and how long to spend in those patches. This problem was famously solved by Charnov's marginal value theorem \cite{Charnov1976Apr}, which we derive here. We follow the model and derivation given by Stephens and Krebs \cite{stephenskrebs1986foraging}. We characterise each patch type, $k$, with an expected utility return rate as a function of time spent within the patch, $g_k(t_k)$. We assume that patches are encountered randomly with rate $\lambda_k$ as Poisson processes. The forager's decision is now how long to spend in each patch type, with a strategy described as $\textbf{t} = [t_1, t_2, \dots, t_k]$ ($t_k=0$ meaning the patch is ignored). We can rewrite equation \ref{eqn:holling} as
\begin{equation}
R_{patch} = \dfrac{\sum_k \lambda_k g_k(t_k)}{1 + \sum_k \lambda_k t_k} \,.
\end{equation}
Similarly to the prey choice derivation, we differentiate with respect to the time spent in a patch type, $t_j$,
\begin{equation}
\frac{\partial R_{patch}}{\partial t_j} = \dfrac{\lambda_j g'_j(t_j) (1 + \sum_k \lambda_k t_k) - \lambda_j (\sum_k \lambda_k g_k(t_k))}{(1 + \sum_k \lambda_k t_k)^2} \,,
\end{equation}
where $g'_j(t_j) = \frac{\partial g_j(t_j)}{\partial t_j}$. Setting this equal to zero, we find the maximum $R_{env}$ when
\begin{equation}
g'_j(t_j) = R_{env} \quad \quad \forall j \,. \label{eqn:charnov_criteria}
\end{equation}
This is Charnov's marginal value theorem \cite{Charnov1976Apr}, which states that an optimal forager will leave a patch when the marginal utility rate of the patch equals the overall rate of utility from foraging in the environment. Foragers will not spend any time in a patch if the marginal rate never reaches the environmental rate, i.e., $g'_j(t_j) < R_{env} \quad \forall t_j$. This makes sense intuitively --- time spent in a patch with rate $g_j$ carries an opportunity cost of time not spent foraging in the wider environment with utility rate $R_{env}$.
We can find which patches will be visited using the "patches as prey" algorithm \cite{stephenskrebs1986foraging}. This is a similar algorithm to the diet choice model, but with patches ranked in order of their maximum profitability, $\frac{g_k(t_k^*)}{t_k^*}$. Patch types are added to the diet one at a time, with the marginal value theorem applied to all included patches after adding each new patch to recalculate the environmental utility rate. This is done with all patch types, or until the condition in Equation \ref{eqn:charnov_criteria} can no longer be satisfied for the next patch type.
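As an illustration of this self-consistency, the residence times and environmental rate can be found by fixed-point iteration for a chosen family of gain curves; here we use saturating curves $g_k(t) = U_k(1 - e^{-t/\tau_k})$ with parameters of our own choosing.
\begin{verbatim}
# Fixed-point sketch of the marginal value theorem.
import numpy as np

lams = np.array([0.2, 0.5])    # patch encounter rates (illustrative)
U = np.array([10.0, 4.0])      # asymptotic patch utilities
tau = np.array([2.0, 1.0])     # depletion timescales

R = 0.1                        # initial guess for R_env
for _ in range(200):
    # g'(t) = (U/tau) exp(-t/tau) = R  =>  t = tau * log(U / (tau * R))
    t = np.maximum(tau * np.log(U / (tau * R)), 0.0)  # t = 0: patch ignored
    g = U * (1.0 - np.exp(-t / tau))
    R = np.sum(lams * g) / (1.0 + np.sum(lams * t))   # update overall rate
print(R, t)                    # converged R_env and residence times
\end{verbatim}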
How would this model of patches affect the conclusions of the main paper? As in the main paper, we assume that media producers have an incentive to create information patches that attract and hold attention. People are still driven towards patches with high patch utility rates. If patch degradation occurs through consuming the most attractive items first, then there would still be a selective pressure toward high utility rate information items, as this would make the patch more attractive before degradation and keep foragers in the patch for longer as it degrades. And this pressure would still apply more strongly to short-form media than long-form media (due to more time switching between short-form media). The conclusions in the main paper would still follow, although the full model would be more complicated. We are confident that the conclusions would hold under any reasonable model of patch degradation.
\section{Supplementary Information --- The Merged Poisson Process for Patches}
Here we justify using average values to describe the expected patch utility rates, instead of summations over information types. We have not seen this derivation before in the foraging literature, but it is relatively straightforward. The result is used in \cite{pirolli2009information}.
In the main text we write down an equation for the expected patch rate in terms of the characteristics of the information within the patch diet, $D$,
\begin{equation}
R_{patch} = \dfrac{\sum_{i \in D} \lambda_{i} u_{i}}{1 + \sum_{i \in D} \lambda_{i} t_{i}} \label{eqn:patch_rate_SI} \,.
\end{equation}
In this model, information types are encountered as independent Poisson processes with rates, $\lambda_i$, during time spent searching, with total searching time $T_s$. Items have utilities $u_i$ and handling times $t_i$. With some simple algebraic manipulation we can write down
\begin{equation}
R_{patch} = \dfrac{ (\sum_D \lambda_i) \frac{\sum_D \lambda_i u_i T_s}{\sum_D \lambda_i T_s}}{1 + (\sum_D \lambda_i) \frac{\sum_D \lambda_i t_i T_s}{\sum_D \lambda_i T_s}} \label{eqn:patch_rate_expanded} \,.
\end{equation}
The rate of a combined Poisson process is equal to the sum of the rate of the independent Poisson processes, $\lambda_p = \sum_D \lambda_i$ \cite{gallager2012discrete}.
We define the average utility of items encountered in the patch as the total utility gained divided by the total number of items handled,
\begin{equation}
\bar{u}_p = \frac{\sum_D \lambda_i u_i T_s}{\sum_D \lambda_i T_s} \,.
\end{equation}
Similarly the average time spent handling items encountered is the total time spent handling divided by the number of items handled,
\begin{equation}
\bar{t}_p = \frac{\sum_D \lambda_i t_i T_s}{\sum_D \lambda_i T_s} \,.
\end{equation}
Substituting these relations into equation \ref{eqn:patch_rate_expanded},
\begin{equation}
R_{patch} = \dfrac{\lambda_p \bar{u}_p}{1 + \lambda_p \bar{t}_p} \label{eqn:Holling_avg_SI} \,.
\end{equation}
We can therefore replace the patch rate equation (equation \ref{eqn:patch_rate_SI}) with averages taken over the merged Poisson process. This is a variation of Holling's disc equation \cite{holling1959some}, considering average values.
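This identity is easy to confirm numerically; a short check with random parameters (our own) is:
\begin{verbatim}
# Check that the averaged (merged-process) patch rate equals the
# type-by-type summation form for random parameters.
import random
random.seed(2)
lam = [random.uniform(0.1, 2.0) for _ in range(6)]
u = [random.uniform(1.0, 9.0) for _ in range(6)]
t = [random.uniform(0.2, 3.0) for _ in range(6)]

summed = sum(l * x for l, x in zip(lam, u)) / \
         (1.0 + sum(l * x for l, x in zip(lam, t)))

lam_p = sum(lam)                                    # merged Poisson rate
u_bar = sum(l * x for l, x in zip(lam, u)) / lam_p  # rate-weighted averages
t_bar = sum(l * x for l, x in zip(lam, t)) / lam_p
merged = lam_p * u_bar / (1.0 + lam_p * t_bar)
print(summed, merged)                               # identical
\end{verbatim}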
\section{Supplementary Information --- Historical Analysis of US Publishing}
We investigated the history of media publishing in America. Magazine publishing was the most interesting. Figure \ref{fig:historical-magazines} shows the historical trend in COHA magazine word entropy alongside magazine circulation figures and important events. Magazine publishers are in a two-sided market where they sell magazines to consumers and attention to advertisers \cite{evans2020economics}, with the majority of revenue from selling attention \cite{sumner2010magazine}. This wasn't always the case in the US --- prior to the 1890s most magazine revenue was from sales, with advertising considered undesirable \cite{sumner2010magazine}. Towards the late 19th century, a combination of rapidly decreasing printing costs, growth in the literate population, discounts from the US postal service and the ability to target adverts to a niche readership allowed a new business model to emerge in magazine publishing \cite{sumner2010magazine}. This new model was to sell magazines below the price of production, increasing circulation so that those costs could be recouped by advertising revenue \cite{sumner2010magazine}. Before 1893, most magazines sold for 25 cents --- until a price war led to the magazines McClure's, Munsey's and Cosmopolitan dropping their prices to 10 cents and subsequently enjoying rises in circulation and advertising revenue \cite{sumner2010magazine}. The 10 cent magazines contributed to a tripling in total magazine readership from 1890 to 1905 \cite{sumner2010magazine}, and there was a huge jump in word entropy in the same period (Figure \ref{fig:historical-magazines}).
The Audit Bureau of Circulation was created by advertisers in 1914 \cite{sumner2010magazine} to better measure magazine readership numbers. This quantification of attention further increased pressure on magazine publishers. Other changes included moving advertisements from the back of the magazine to alongside the main content --- a move that forced copywriters to improve the appeal of the content through adding color and improving graphics \cite{sumner2010magazine}, and we hypothesise by increasing word entropy.
Word entropy continues to rise throughout the 20th century alongside magazine circulation, with a Pearson correlation coefficient $r=0.91$ ($p < 0.001$), although both rise over time so that confounding factors are not ruled out (Figure \ref{fig:historical-magazines}). After the 1890s, the biggest drop in word entropy was during the Great Depression, when magazine circulation also fell. There is a suggestion in the data that things change around the year 2000, as magazine circulation drops but word entropy continues to rise. The rise of digital media around this time is perhaps the biggest change in publishing since the printing press, so we would not expect the same trends to necessarily continue --- and digital media represents a new competitive pressure which would drive word entropy rise within our model, matching the historical trend.
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{magazine_history.png}
\caption{Historical analysis of word entropy in magazines (red dotted, timeseries calculated as in the previous figure) with key events (pink) and US monthly magazine circulation (purple).}
\label{fig:historical-magazines}
\end{figure}
Concerning the other media categories, circulation numbers and competition for attention certainly increased in the 20th century, and we see a general rise in word entropy. However the timeseries trends do not fit quite as well alongside specific historical events as in magazine publishing. This is not unexpected as they are under less direct attention economy pressure. Fiction is interesting as it had high word entropy during the 19th century, decreasing throughout that century (Figure \ref{fig:timeseries}). This is partly explained by a large number of plays and scripts in COHA in this time period, which have particularly high word entropy. However even removing these documents, the fiction word entropy is still very high in the 19th century. We believe the high word entropy here, and the downward trend, are not primarily caused by attention economy pressures. There may be a connection with changes in literary fashion in 19th century US fiction from romanticism towards realism, but this is beyond the scope of this study and so we leave the question open for other researchers.
\section{Extended Data --- Timeseries Trend Analysis Table}
\begin{table}[ht]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
 & Unigram Word Entropy & Type Token Ratio & Zipf Exponent \\ \hline
news & (\textbf{$<$0.01}, \textbf{0.00}) & (\textbf{0.02}, \textbf{0.00}) & (\textbf{$<$0.01}, \textbf{0.00})\\ \hline
mag & (\textbf{$<$0.01}, \textbf{0.00}) & (\textbf{0.02}, \textbf{0.00}) & (\textbf{$<$0.01}, \textbf{0.00})\\ \hline
fic & (\textbf{0.02}, \textbf{0.00}) & (\textbf{0.04}, \textbf{0.00}) & (\textbf{0.01}, \textbf{0.00})\\ \hline
nf & (\textbf{0.01}, \textbf{0.00}) & (0.08, \textbf{0.01}) & (\textbf{0.01}, \textbf{0.00})\\ \hline
\end{tabular}
\caption{Timeseries analysis across different categories and measures for text samples from COHA between 1900 and 2009. In each cell, the p-values of a Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test and a Mann-Kendall (MK) test are shown respectively. Significant trends are emboldened. For the KPSS test, p-values below 0.05 reject the null hypothesis of stationarity at 5\% significance; for the MK test, p-values below 0.05 reject the null hypothesis of no trend. Unigram word entropy and Zipf exponent show trends across all categories and both tests. Stationarity could not be ruled out for type token ratio for non-fiction by the KPSS test. See Methods for further details.}
\label{tbl:timeseries}
\end{table}
\section{Extended Data --- COHA Timeseries for Type Token Ratio and Zipf exponent}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{timeseries_with_ci_ttr.png}
\caption{Historical timeseries of type token ratio in the Corpus of Historical American English. Type token ratio was calculated for text samples from COHA truncated with $N=2000$ words. For each media category and year, a moving average of all valid samples with $\pm 5$ years was calculated. The shaded region shows a 95\% confidence interval for this average.}
\label{fig:timeseries-ttr}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{timeseries_with_ci_zipf_clauset.png}
\caption{Historical timeseries of Zipf exponent in text samples in written media categories in American English. The timeseries was calculated in the same way as in the previous figure. }
\label{fig:timeseries-zipf}
\end{figure}
\section{Extended Data --- Corpora Boxplot Distributions for Word Entropy, Type Token Ratio and Zipf exponent}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{boxplot_distributions_ttr.png}
\caption{Distribution snapshots of type token ratio across different text corpora for text samples with $N=2000$ words. COHA samples are from the year 2000 onwards only. Social media text samples were collated from status updates.}
\label{fig:boxplots-ttr}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{boxplot_distributions_H_1.png}
\caption{Distribution snapshots of unigram word entropy across different text corpora for text samples with $N=2000$ words. COHA samples are from the year 2000 onwards only. Social media text samples were collated from status updates.}
\label{fig:boxplots-word-entropy}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{boxplot_distributions_zipf_clauset.png}
\caption{Distribution snapshots of the Zipf exponent across different text corpora for text samples with $N=2000$ words. COHA samples are from the year 2000 onwards only. Social media text samples were collated from status updates.}
\label{fig:boxplots-zipf}
\end{figure}
\bibliographystyle{unsrt}
\section{Preparation of $\ket{D_m}$ \label{appendixI}}
\subsection{Coupling to a common environment.}
Let us first consider the Hamiltonian describing the atom-photon coupling for an optical transition $\ket{g}\longleftrightarrow\ket{e}$ coupled to a 1D photonic reservoir. In this case, the Hamiltonian is given by $H = H_0 + H_{\rm I}$, where $H_0$ is the free term:
$H_0 = H_{\rm qb} + H_{\rm field}$, (using $\hbar = 1$)
\begin{equation}
H_{\rm qb} = \omega_\aa \sum_{n=1}^{N+1} \sigma^{n}_{ee}, \ \ \ \
H_{\rm field} = \sum_q \omega_q a^\dagger_q a_q,
\label{eqS:H0}
\end{equation}
where $\omega_\aa$ is the atomic transition energy and $\omega_q$ is the field frequency given by the dispersion relation of the waveguide modes. We consider coupling to a single polarization in order to focus on the most relevant physics of our work, which can be justified for appropriately designed dielectric waveguides \cite{hung13a}. We consider a dipolar coupling of the form
\begin{equation}
H_{\rm I} = \sum_{n=1}^{N+1} \left( \sigma_{ge}^n E(z_n) + \rm {H.c.} \right),
\label{Hint}
\end{equation}
with $E(z) = \sum_q g_q a^\dagger_q e^{-i q z}$, and $g_q$ the single-photon coupling constant. When the 1D-bath degrees of freedom relax on a much faster timescale than the system, we can describe our atomic system via a density matrix, $\rho$, which, in the Born-Markov limit, is governed by a master equation: $d \rho /d t = {\cal L_{\rm D}}(\rho)$ \cite{gardiner_book00a,lehmberg70a,lehmberg70b}, with the superoperator
\begin{equation}
\label{eqS:mequation1}
{\cal L_{\rm D}}(\rho) =
\sum_{n,m} \Gamma_{n,m} \left( \sigma_{ge}^n \rho \sigma_{eg}^m - \rho \sigma_{eg}^m \sigma_{ge}^n \right)
+ \rm {H.c.} \,,
\end{equation}
where $\Gamma_{n,m} = \frac{\Gamma_{\mathrm{1D}}}{2} e^{i q(\omega_\aa) |z_n - z_m |}$, with $\Gamma_{\mathrm{1D}}$ the spontaneous decay rate of the atoms due to the interaction with the 1D photonic reservoir. For completeness, it is interesting to write the connection between $\Gamma_{\mathrm{1D}}$ and $g_q$, which can be easily computed from \cite{gardiner_book00a}:
\begin{align}
\Gamma_{\mathrm{1D}}=2\pi\sum_q |g_q|^2\delta(\omega_\aa-\omega_q)= L\int_{-\infty}^\infty {\rm d} q |g_q|^2\delta(\omega_\aa-\omega_q)=\frac{2L |g_{q_\aa}|^2}{v_g(\omega_\aa)}\,, \label{eqS:gammaoned}
\end{align}
where we have introduced $L$ the quantization length of the guided modes and used the fact that $\omega_q=\omega_{-q}$ and $|g_q|=|g_{-q}|$.
\subsection{Emergence of subradiant and superradiant states.}
In the main manuscript we stipulated a homogeneous coupling to the environment, which can be achieved naturally with a 1D reservoir by appropriately choosing the atomic positions along the waveguide, i.e., $z_n = n \lambda_\aa=n 2\pi/q(\omega_\aa)$, with $n \in \mathbb{N}$. With this choice, the effective interaction induced by the reservoir modes yields a pure Dicke model \cite{dicke54a} decay described by
\begin{equation}
{\cal L}_{\rm D}(\rho) =
\frac{\Gamma_{\mathrm{1D}}}{2} \left(S_{ge} \rho S_{eg} - S_{eg} S_{ge} \rho \right) + \rm H.c.,
\label{eqS:Dicke}
\end{equation}
where we have introduced the following notation for the collective spin operators $S_{ij} = \sum_{n=1}^{N+1} \sigma^n_{ij}$; $S_{eg}$ is just the collective operator for the spin dipole $\sigma_{eg}$. One of the assets of the model described by Eq. \ref{eqS:Dicke} is the emergence of \emph{sub} and \emph{super}radiant states as depicted in Fig. \ref{fig1}(c), which can be seen easily by examining $S_{ge}\ket{J,m_J,\alpha_J}=\sqrt{(J+m_J)(J-m_J+1)}\ket{J,m_J-1,\alpha_J}$ in the collective angular momentum basis $\{\ket{J,m_J,\alpha_J}\}$, with $J=N/2,N/2-1,\dots$ and $m_J=-J,-J+1,\dots, J$, where $\alpha_J$ is an index accounting for the degeneracy of the states (which we drop from here on as it does not play an important role in what follows). The states satisfying $S_{ge}\ket{\Psi}=0$ (i.e., $m_J=-J$) are dark states of the Liouvillian of Eq.~\ref{eqS:Dicke}; they form a so-called Decoherence-Free Subspace (DFS) \cite{zanardi97a,lidar98a} and are therefore uncoupled from dissipation, whereas the states with $J=N/2,\ m_J >-N/2$ are superradiant, with an enhanced decay rate proportional to the atom number $N$. It is interesting to emphasize that we can find similar physics, i.e., a pure Dicke model, using an atomic configuration where $z_n = n \lambda_\aa/2=n \pi/q(\omega_\aa)$ ($n \in \mathbb{N}$). The difference between the two configurations is the symmetry of the super/subradiant states, which has to be taken into account in the generation of these states.
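The counting of dark states is easy to verify numerically for small $N$; the sketch below (our own check, for $N$ two-level atoms) builds the collective lowering operator and computes its nullspace dimension, which equals $\binom{N}{\lfloor N/2 \rfloor}$.
\begin{verbatim}
# Count the dark states of the collective dipole S_ge for N two-level atoms.
import numpy as np
from math import comb

def collective_sge(N):
    sge = np.array([[0.0, 1.0], [0.0, 0.0]])   # |g><e| for one atom
    eye = np.eye(2)
    S = np.zeros((2**N, 2**N))
    for n in range(N):
        term = np.array([[1.0]])
        for m in range(N):
            term = np.kron(term, sge if m == n else eye)
        S += term
    return S

for N in [2, 3, 4]:
    sv = np.linalg.svd(collective_sge(N), compute_uv=False)
    print(N, int(np.sum(sv < 1e-10)), comb(N, N // 2))  # nullity = C(N,N//2)
\end{verbatim}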
\subsection{Controlling atomic states under strong dissipation.}
Our first goal is to generate particular superpositions of atomic states within the DFS; therefore we need to include in our system some additional fields that allow us to control the individual atomic states. We find it convenient to introduce an extra auxiliary level $\ket{s}_n$, as sketched in Fig. \ref{fig1}(b) of the main text, and use the following fields to control the atomic state:
\begin{align}
H_{\mathrm{las}}&=\left( \frac{\Omega_r}{2}\sum_{n=1}^{N} \sigma^n_{se} + \frac{\Omega_\mathrm{anc} }{2} \sigma^{N+1}_{se}+\mathrm{h.c.} \right) + \Delta_e \sum_{n=1}^{N+1} \sigma_{ee}^n\,, \label{eqS:laser} \\
H_{\mathrm{c}}&=\frac{\Omega_{\mathrm{c}}}{2} \sigma^{N+1}_{sg} + \mathrm{h.c.}\,,
\label{eqS:laserb}
\end{align}
where $H_{\mathrm{las}}$ allows to control both the coupling between atom and 1D-reservoir, while $H_{\mathrm{c}}$ allows to control the atomic state of the ancilla atom independently of the coupling to the reservoir. We are interested in working in the regime of strong collective dissipation,
where $N\Gamma_{\mathrm{1D}}\gg\Omega_r, \Omega_\mathrm{anc},\Omega_{c},\Delta_e$.
In this situation, we can intuitively consider that the 1D bath is continuously \emph{monitoring} the atomic state, as in the Quantum Zeno regime \cite{zanardi97a,lidar98a,beige00a,facchi02a}, and projecting the atomic state into the DFS of the Liouvillian $\mathcal{L}_{\rm D}$. Notice that when including the extra auxiliary level $\ket{s}$, the $\mathrm{DFS}\equiv \{\ket{\Psi}:\ S_{ge}\ket{\Psi}=0\}$ contains all superposition states of the metastable states $\ket{g}$ and $\ket{s}$ plus all the excited states from the nullspace of the collective dipole $S_{ge}$. Formally, we obtain effective dynamics within the DFS by using a projector operator $\mathbb{P}$ satisfying: $\mathbb{P}{\cal L}_{\rm D}={\cal L}_{\rm D} \mathbb{P}=0$, and its orthogonal part: $\mathbb{Q}=1-\mathbb{P}$. Using these
projectors, one can formally integrate out the fast dynamics outside of the DFS,
described by $\mathbb{Q} \rho$, and obtain effective dynamics of the atomic system within the DFS given by \cite{gardiner_book00a}
\begin{equation}
\mathbb{P} \dot{\rho}=\mathbb{P} W \mathbb{P} \rho-\mathbb{P} W \mathbb{Q} \frac{1}{{\cal L}_{\rm D}}\mathbb{Q} W \mathbb{P} \rho+\mathrm{O}\left[\frac{\tau^{-3}}{\Gamma_{\mathrm{1D}}^2}\right]\,.
\end{equation}
where $W$ is any perturbation acting on the atomic system (with characteristic timescale $\tau$), e.g., $W=H_{\mathrm{las}}+H_{\mathrm{c}}$; the expansion is a good approximation provided $(\tau\Gamma_\mathrm{1D})^{-1}\ll 1$.
\subsection{Preparation of many-body entangled states: neglecting losses.}
As explained in the main manuscript, the first step of our protocol consists of creating a certain class of states satisfying two requirements: i) they must be easily mapped to the superradiant states of the atomic ensemble; ii) they must be created using only states within the DFS, such that the dissipation induced by the waveguide is avoided. We propose in Fig.\ref{fig1}(a) of the main manuscript a configuration of $N$ atoms and a separately addressable ancilla atom. Because of the high symmetry of the states that we aim to create among the first $N$ atoms, it is convenient to introduce the following notation to describe any symmetric combination of states over these atoms:
\begin{align}
\ket{F_{m,k}}=\mathcal{N}(m,k)^{-1/2}\mathrm{sym}\{\ket{s}^{\otimes m}\otimes\ket{e}^{\otimes k}\otimes\ket{g}^{\otimes N-m-k}\}\,,
\end{align}
where $\mathcal{N}(m,k) = \binom {N} {m,k,N-m-k}$ is the multinomial coefficient that gives the normalization of these states. This notation encompasses the many-body entangled states that we aim to create, i.e., $ \ket{F_{m,0}}\equiv \ket{D_m}$ and $ \ket{F_{0,m}}\equiv \ket{S_m}$. For the ancilla atom, we use the notation $\ket{\psi}_\mathrm{A}$.
In Eqs. \ref{eqS:laser}-\ref{eqS:laserb} we introduced the laser configuration that we use, namely, a symmetric excitation over the first $N$ atoms and a different one for the ancilla. Interestingly, the combination of this configuration and the collective dissipation imposes certain symmetry conditions on the states that can exist within the DFS, namely, the atomic states with excited states ($\ket{e}$) must be symmetric under any permutation of the first $N$ atoms and antisymmetric under the permutation with the ancilla. This reduces the exponential Hilbert space of all states ($3^m$) to a set of $5m$ relevant states, which are depicted in Fig.~\ref{fig4}. For example, among all the combinations of states $\ket{F_{m,1}}$ with one atomic excited state, for each $m$ only one of them (denoted by $\ket{\Psi_e^{(m)}}$) belongs to the DFS. Moreover, the combinations of the ancilla with $\ket{F_{m,2}}$ are all superradiant, as none of them can fulfil the symmetry to be within the DFS of the collective
dipole $S_{eg}$.
\begin{figure}
\centering
\includegraphics[width=0.98\textwidth]{fig4v2.pdf}
\caption{(a) Product of the Hilbert space of $N$ permutation-invariant emitters and of one ancilla for $m$ excitations, that is, $m$ emitters in state $\ket{s}$ or $\ket{e}$. (b) Separation of the whole Hilbert space (for $m$ excitations) into DFS states (blue background) and non-DFS states, which makes obvious the emergence of the effective $\Lambda$-type transitions within the DFS.}\label{fig4}
\end{figure}
Now, let us explain how to generate the $\ket{D_m}$ using the tools that we have introduced. As shown in Fig. \ref{fig4}(b), for each $m$, there exists an effective $\Lambda$-scheme within the DFS of the Liouvillian $\mathcal{L}_D$ that couples the states $\ket{\Psi_{s}^{(m)}}=\ket{D_{m-1}}\otimes\ket{s}_{A}$ to $\ket{\Psi_{g}^{(m)}}=\ket{D_m}\otimes
\ket{g}_{A}$, through an excited state $\ket{\Psi_{e}^{(m)}}$ that also belongs to the DFS. If no projection into the DFS is considered, the states $\ket{\Psi_{g,s}^{(m)}}$ are coupled with excited states as follows:
\begin{align}
\label{eqs:las}
H_{\mathrm{las}}\ket{\Psi_{s}^{(m)}}&= \frac{\Omega_{r}}{2} \sqrt{m-1} \ket{F_{m-2,1}}\otimes\ket{s}_\mathrm{A}+ \frac{\Omega_{\mathrm{anc}}}{2}\ket{F_{m-1,0}}\otimes\ket{e}_\mathrm{A}\,,\\
H_{\mathrm{las}}\ket{\Psi_{g}^{(m)}}&= \frac{\Omega_{r}}{2}\sqrt{m}\ket{F_{m-1,1}}\otimes\ket{g}_\mathrm{A}\,,\nonumber
\end{align}
Interestingly, it is possible to write Eqs.~\ref{eqs:las} separating the contributions of the states in and out of the DFS:
\begin{align}
\label{eqs:las2}
H_{\mathrm{las}}\ket{\Psi_{s}^{(m)}}&= -\frac{\Omega_{\mathrm{anc}}}{2}\sqrt{\frac{N_m}{N_m+1}}\ket{\Psi_{e}^{(m)}}+\frac{\Omega_{\mathrm{anc}}}{2}\sqrt{\frac{1}{N_m+1}}\ket{\chi_g^{(m)}}+\frac{\Omega_{r}}{2}\sqrt{m-1} \ket{\chi_s^{(m)}}\,, \\
H_{\mathrm{las}}\ket{\Psi_{g}^{(m)}}&= \sqrt{\frac{m}{N_m+1}}\frac{\Omega_{r}}{2}\ket{\Psi_e^{(m)}} +\frac{\sqrt{m N_m}}{N_m+1}\frac{\Omega_{r}}{2}\ket{\chi_g^{(m)}}\,,\nonumber
\end{align}
where we have introduced the notation $N_m=N-m+1$, and $\ket{\Psi_e^{(m)}}$ is a state within the DFS that couples to both $\ket{\Psi_{s,g}^{(m)}}$, given by:
\begin{align}
\ket{\Psi_e^{(m)}}&=\sqrt{\frac{N_m}{N_m+1}}\ket{F_{m-1,0}}\otimes\ket{e}_\mathrm{A}\nonumber -\frac{1}{\sqrt{N_m+1}}\ket{F_{m-1,1}}\otimes\ket{g}_\mathrm{A}\,.
\end{align}
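As a consistency check (ours, using $S_{ge}\ket{F_{m-1,1}}=\sqrt{N_m}\,\ket{F_{m-1,0}}$, which follows from counting the symmetrized terms), one can verify that $\ket{\Psi_e^{(m)}}$ is indeed annihilated by the full collective dipole $S_{ge}$:
\begin{align}
S_{ge}\ket{\Psi_e^{(m)}}=\left(\sqrt{\frac{N_m}{N_m+1}}-\frac{\sqrt{N_m}}{\sqrt{N_m+1}}\right)\ket{F_{m-1,0}}\otimes\ket{g}_\mathrm{A}=0\,,\nonumber
\end{align}
so that it belongs to the DFS.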
$\ket{\chi_{s/g}^{(m)}}$ are two states outside the DFS defined as
\begin{align}
\label{eq:super}
\ket{\chi_s^{(m)}}&=\ket{F_{m-2,1}}\otimes\ket{s}_\mathrm{A}\,,\\ \nonumber
\ket{\chi_g^{(m)}}&=\frac{1}{\sqrt{N_m+1}}\ket{F_{m-1,0}}\otimes\ket{e}_\mathrm{A}+\sqrt{\frac{N_m}{N_m+1}}\ket{F_{m-1,1}}\otimes\ket{g}_\mathrm{A}\,,\nonumber
\end{align}
and can be shown to have an enhanced decay rate $\Gamma_e=(N_m+1)\Gamma_{\mathrm{1D}}$, by looking at the action of the collective operator $S_{ge}$ on these states. We first discuss the effect considering only perturbations up to first order within the Zeno dynamics. The superradiant states can then be neglected, as they are only virtually populated due to their enhanced decay rate. Thus, we first consider the effective $\Lambda$ system, with effective Raman couplings given by (see Fig. \ref{fig4}(b)):
\begin{align}
\Omega_{se}^{(m)} & = \bra{\Psi_e^{(m)}} H_\mathrm{las} \ket{\Psi_s^{(m)}} =-\frac{\Omega_{\mathrm{anc}}}{2}\sqrt{\frac{N_m}{N_m+1}}\,,\\
\Omega_{ge}^{(m)} & = \bra{\Psi_e^{(m)}} H_\mathrm{las} \ket{\Psi_g^{(m)}} =\frac{\Omega_{r}}{2}\sqrt{\frac{m}{N_m+1}}\,, \nonumber
\end{align}
where we see the importance of addressing the ancilla atom separately from the other $N$ emitters and keeping $\Omega_r\neq \Omega_{\mathrm{anc}}$, as we can now set them such that $|\Omega_{se}^{(m)}|=|\Omega_{ge}^{(m)}|$ by choosing $| \Omega_{\mathrm{anc}}| =|\Omega_{r}| \sqrt{m/N_m}$. This choice allows us to compensate the different Stark shifts that are introduced by the projection $\mathbb{P}$ and yields an off-resonant two-photon transition with Rabi frequency:
\begin{align}
|\Omega^{(m)}| =\frac{|\Omega_r|^2}{2\Delta_e} \frac{m}{N_m+1}\,.
\end{align}
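For clarity, we spell out the intermediate step (standard adiabatic elimination of $\ket{\Psi_e^{(m)}}$; this derivation is our addition):
\begin{align}
|\Omega^{(m)}| = \frac{2|\Omega_{se}^{(m)}||\Omega_{ge}^{(m)}|}{\Delta_e}
=\frac{2}{\Delta_e}\left(\frac{|\Omega_r|}{2}\sqrt{\frac{m}{N_m+1}}\right)^{2}
= \frac{|\Omega_r|^2}{2\Delta_e} \frac{m}{N_m+1}\,,\nonumber
\end{align}
where we used $|\Omega_{se}^{(m)}|=|\Omega_{ge}^{(m)}|=\frac{|\Omega_r|}{2}\sqrt{m/(N_m+1)}$ for the matched choice of $\Omega_{\mathrm{anc}}$.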
Notice then that by flipping the state of the ancilla with $\Omega_{\mathrm{c}}$, one also flips $\ket{\Psi_{g}^{(m)}}\rightarrow \ket{\Psi_s^{(m+1)}}$, which re-initializes the process. Thus, by using a combination of $m$ off-resonant Raman transitions and $m$ control fields, we can generate any $\ket{D_m}$ (or superpositions thereof).
\subsection{Preparation of many-body entangled states: error analysis.\label{subsec:error}}
By using the effective Hamiltonian $H_{\mathrm{eff}}=\mathcal{P}(H_{\mathrm{las}}+H_{\mathrm{c}})\mathcal{P}$ (with $\mathcal{P}$ being the projection of the Hamiltonian onto the DFS) under the conditions described in the previous Section, we see that the operation time for a complete transfer of population from $\ket{\Psi_{s}^{(m)}}$ to $\ket{\Psi_{g}^{(m)}}$ via the off-resonant transition, i.e., $\Delta_e\gg \Omega_{ge,(se)}^{(m)}$, is given by:
\begin{equation}
t_{\mathrm{op}}^{(m)}=\frac{\pi }{|\Omega^{(m)}|}\approx \frac{2\pi \Delta_e N}{m|\Omega_r|^2}\,.
\end{equation}
So far, we have considered the ideal situation; however, to estimate the fidelities in the preparation of these states, we need to analyze the errors that may occur within $t_{\mathrm{op}}^{(m)}$. The errors come from:
\begin{itemize}
\item The spontaneously emitted photons from $\ket{\Psi_e^{(m)}}$ into decay channels other than the waveguide, described by a Lindblad term: ${\cal L}_{*}(\rho) =
\sum_{n} \frac{\Gamma^*}{2} \left( \sigma^n_{ge} \rho \sigma^n_{eg} - \rho \sigma^n_{ee} \right)
+ \rm {H.c.}\, $, which we lump into a single decay rate $\Gamma^*$. As we are using an off-resonant Raman transition, this source of error scales as (for $N\gg m \ge 1$):
%
\begin{equation}
\epsilon_{\Psi_e}^{(m)}=\Gamma^*\frac{m|\Omega_r|^2}{4(N_m+1)(\Delta_{e}^2+(\Gamma^*)^2)}\approx \Gamma^*\frac{m|\Omega_r|^2}{4 N\Delta_{e}^2}\,,
\end{equation}
%
where we used that $\Delta_e\gg \Gamma^*$.
\item Other errors may appear due to photons emitted from states outside the DFS, i.e., through the $\ket{F_{m,1}}$-like states. The rates of these errors are estimated to be (for $N\gg m \ge 1$):
\begin{align}
\label{eqS:erroff}
&\epsilon_{\chi_s^{(m)}}=\big(\Gamma^*+N\Gamma_{\mathrm{1D}}\big)\frac{m|\Omega_r|^2}{4N^2\big(\Delta_{e}^2+(\Gamma^*+N\Gamma_\mathrm{1D})^2\big)}\,,\\
&\epsilon_{\chi_g^{(m)}}=\big(\Gamma^*+N \Gamma_{\mathrm{1D}}\big)\frac{m|\Omega_r|^2}{4\big(\Delta_{e}^2+(\Gamma^*+N \Gamma_{\mathrm{1D}})^2\big)}+\big(\Gamma^*+N\Gamma_{\mathrm{1D}}\big)\frac{m|\Omega_r|^2}{4N\big(\Delta_{e}^2+(\Gamma^*+N\Gamma_{\mathrm{1D}})^2\big)}\,, \label{eqS:erroff1}
\end{align}
\end{itemize}
Using these estimates, and assuming the hierarchy of parameters $N\gg m\ge1$ and $\Gamma_{\mathrm{1D}}\gg\Delta_e\gg \Gamma^*$ to simplify the expressions, we find the infidelity of step $m$ to be
\begin{equation}
1-F^m=t_{\mathrm{op}}^{(m)}\Big(\epsilon_{\Psi_{e}}^{(m)}+\epsilon_{\chi_s^{(m)}}+\epsilon_{\chi_g^{(m)}}\Big)\approx \frac{\pi }{2}\big(\frac{\Gamma^*}{\Delta_e}+\frac{\Delta_e}{\Gamma_{\mathrm{1D}}}\big)\,,
\end{equation}
which is optimized for $\Delta_{e,\mathrm{opt}}=\sqrt{\Gamma^* \Gamma_{\mathrm{1D}}}$, yielding the scaling $1-F^m_{\mathrm{opt}}\propto 1/\sqrt{P_{\mathrm{1D}}}$. Interestingly, the scaling depends neither on the number of atoms, $N$, nor on the number of excitations, $m$.
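Explicitly (we spell out the optimization step): minimizing $g(\Delta_e)=\Gamma^*/\Delta_e+\Delta_e/\Gamma_{\mathrm{1D}}$ gives $g'(\Delta_{e,\mathrm{opt}})=0$ at $\Delta_{e,\mathrm{opt}}=\sqrt{\Gamma^*\Gamma_{\mathrm{1D}}}$, so that
\begin{align}
1-F^m_{\mathrm{opt}}\approx \pi\sqrt{\frac{\Gamma^*}{\Gamma_{\mathrm{1D}}}}=\frac{\pi}{\sqrt{P_{\mathrm{1D}}}}\,,\nonumber
\end{align}
with $P_{\mathrm{1D}}=\Gamma_{\mathrm{1D}}/\Gamma^*$ the Purcell factor.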
So far, we have focused our discussion on the $m$-th step that goes from $\ket{D_{m-1}}\rightarrow \ket{D_{m}}$. As depicted in Fig.~\ref{fig2} of the main manuscript, by combining this process with a transition over the ancilla qubit that initializes it in $\ket{s}_A$, we can generate any arbitrary superposition of states (over the first $N$ atoms) up to a $m_{\mathrm{max}}$, namely $\ket{\Psi_{D}}=\sum_{m=0}^{m_{\mathrm{max}}} d_m \ket{D_{m}}$, with a total of $m_{\mathrm{max}}$ Raman transitions together with $m_{\mathrm{max}}$ initialization gates on the ancilla. Neglecting the errors of the microwave transition over the ancilla, the total infidelity to generate $\ket{\Psi_{D}}$ is:
\begin{equation}
1-F_{\mathrm{opt}}\propto \frac{m_{\mathrm{max}}}{\sqrt{P_{\mathrm{1D}}}}\,.
\end{equation}
\subsection{Preparation of many-body entangled states: numerical analysis.}
In order to validate our scaling analysis, we study the preparation of two relevant sets of states without doing any approximation and considering all the possible states (including super-radiant ones). The two sets of states are i) the general class of states $\ket{D_m}$ and ii) the superpositions $\ket{\Phi_m}=\frac{1}{\sqrt{2}}\big(\ket{D_0}+\ket{D_m}\big)$.
First, it is interesting to realize that the symmetry conditions found from our analysis of the ideal situation tell us that the relevant Hilbert space of the problem can be written in terms of the states $\ket{F_{m,k}}\otimes \ket{\psi}_\mathrm{A}$, where $\ket{\psi}_\mathrm{A}=\{\ket{g}_\mathrm{A},\ket{s}_\mathrm{A},\ket{e}_\mathrm{A}\}$ spans the Hilbert space of the ancilla atom, $k$ can be restricted to $k\le 2$ (higher excited states are only weakly populated), and $m=0,1,\dots, m_{\mathrm{max}}$, where $m_{\mathrm{max}}$ is the highest excitation that we want to achieve. Notice that the Hilbert space does not depend directly on the total atom number $N$, but only on $m_{\mathrm{max}}$. The number of atoms, $N$, only enters in the two-photon resonance condition that fixes $\Omega^{(m)}$. In this Hilbert space we can use a non-Hermitian evolution, where the effective Hamiltonian is determined by the action of $H_{\mathrm{las}}+H_{c}$ plus the imaginary energies determined by the coupling to the
waveguide through the $S_{ge}$ operator. The non-Hermitian Hamiltonian elements are given by:
\begin{align}
\label{eqS:nhham}
&\bra{F_{n,q}}\otimes \bra{\phi}_\mathrm{A} (H_{\mathrm{las}}+H_{c})\ket{F_{m,k}}\otimes \ket{\psi}_\mathrm{A}= k[\Delta_e-i((N-m-k+1)\Gamma_\mathrm{1D}/2 +\Gamma^*/2)]\delta_{k,q}\delta_{m,n}\delta_{\psi,\phi}+\nonumber \\
&+ [ \frac{\Omega_r}{2} \sqrt{m(k+1)} \delta_{k+1,q}\delta_{m-1,n}\delta_{\psi,\phi}+\mathrm{H.c.}]+ [\frac{\Omega_\mathrm{anc}}{2} \delta_{k,q}\delta_{m,n}\delta_{\psi,s}\delta_{\phi,e}+ \mathrm{H.c.}]+ [\frac{\Omega_\mathrm{c}}{2} \delta_{k,q}\delta_{m,n}\delta_{\psi,s}\delta_{\phi,g}+ \mathrm{H.c.}]\,.
\end{align}
A diagram of the relevant transitions in the complete Hilbert space is depicted in Fig.~\ref{fig4}(a). For the generation of individual Fock states, the pulse sequence can be easily deduced. One just needs to ensure a complete transfer of populations from $\ket{F_{0,0}}\otimes\ket{g}_\mathrm{A}\rightarrow \ket{F_{0,0}}\otimes\ket{s}_\mathrm{A} \rightarrow \ket{F_{1,0}}\otimes\ket{g}_\mathrm{A} \rightarrow \ket{F_{1,0}}\otimes\ket{s}_\mathrm{A}\rightarrow \ket{F_{2,0}}\otimes\ket{g}_\mathrm{A}\dots \rightarrow \ket{F_{m,0}}\otimes\ket{g}_\mathrm{A}$, which can be done by fixing the time of interaction, $t$, to $t\Omega_{\mathrm{c}}=\pi$ ($t\Omega^{(m)}=\pi$) for the microwave (two-photon Raman) transitions.
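Before quoting the numerical results, it may be useful to see how compact such a simulation is. The following is a minimal sketch (ours, not part of the original analysis; all parameter values and helper names are illustrative assumptions). It builds the non-Hermitian evolution in the truncated symmetric basis $\ket{F_{m,k}}\otimes\ket{\psi}_\mathrm{A}$, keeping the full collective jump operator $S_{ge}+\sigma^{N+1}_{ge}$ so that the ancilla--emitter interference responsible for the DFS is retained, and it propagates a single Raman step $\ket{D_0}\otimes\ket{s}_\mathrm{A}\rightarrow\ket{D_1}\otimes\ket{g}_\mathrm{A}$ (the trivial microwave step on the ancilla is omitted):
\begin{verbatim}
# Sketch: one Raman step |D_0>|s>_A -> |D_1>|g>_A under non-Hermitian
# evolution in the symmetric basis |F_{m,k}> x |a>_A (truncated at k <= 2).
# All parameters are illustrative; units such that Gamma_1D = 1.
import numpy as np
from scipy.linalg import expm

N, m_max, k_max = 10, 2, 2
basis = [(m, k, a) for m in range(m_max + 1)
         for k in range(k_max + 1) if m + k <= N
         for a in ('g', 's', 'e')]
idx = {b: i for i, b in enumerate(basis)}
dim = len(basis)

def op(triples):
    """Matrix from (bra, ket, amplitude) triples; out-of-basis bras dropped."""
    M = np.zeros((dim, dim), complex)
    for bra, ket, amp in triples:
        if bra in idx:
            M[idx[bra], idx[ket]] += amp
    return M

# Collective jump operator L = S_ge + sigma_ge^A, using
# S_ge |F_{m,k}> = sqrt(k (N - m - k + 1)) |F_{m,k-1}>.
L = op([((m, k - 1, a), (m, k, a), np.sqrt(k * (N - m - k + 1)))
        for (m, k, a) in basis if k > 0] +
       [((m, k, 'g'), (m, k, 'e'), 1.0) for (m, k, a) in basis if a == 'e'])

g1d, gstar = 1.0, 1e-3              # Gamma_1D and Gamma^*  (P_1D = 10^3)
delta_e = np.sqrt(gstar * g1d)      # near-optimal detuning
om_r = 0.02                         # symmetric drive on the N emitters
m = 1                               # target excitation number
om_anc = om_r * np.sqrt(m / (N - m + 1))            # Stark-shift matching
om_eff = om_r**2 * m / (2 * delta_e * (N - m + 2))  # two-photon Rabi

# Drive: Sum_n sigma_es^n |F_{m,k}> = sqrt(m (k+1)) |F_{m-1,k+1}>.
Hd = op([((n - 1, k + 1, a), (n, k, a), 0.5 * om_r * np.sqrt(n * (k + 1)))
         for (n, k, a) in basis if n > 0] +
        [((n, k, 'e'), (n, k, 's'), 0.5 * om_anc)
         for (n, k, a) in basis if a == 's'])
n_exc = np.diag([k + (a == 'e') for (_, k, a) in basis]).astype(complex)
H = Hd + Hd.conj().T + delta_e * n_exc \
    - 0.5j * g1d * (L.conj().T @ L) - 0.5j * gstar * n_exc

psi = np.zeros(dim, complex)
psi[idx[(0, 0, 's')]] = 1.0                      # |D_0> |s>_A
psi = expm(-1j * H * (np.pi / om_eff)) @ psi     # two-photon pi pulse
print("fidelity:", abs(psi[idx[(1, 0, 'g')]])**2)
\end{verbatim}
With the values chosen here ($P_{\mathrm{1D}}=10^3$), the printed fidelity should be close to $1-\pi/\sqrt{P_{\mathrm{1D}}}\approx 0.9$, consistent with the scaling derived above.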
In Fig.~\ref{fig3}(a) of the main text, we show the numerical fidelities obtained when fixing the off-resonant transition to the optimal $\Delta_{e,\mathrm{opt}}$ that we explored in the previous Section: the dots correspond to the numerical fidelities, whereas the solid lines depict the scaling $\propto 1/\sqrt{P_{\mathrm{1D}}}$ and show how our general arguments give us the right scaling. For a more complicated state, such as $\ket{\Phi_m}$, the pulse sequence can be calculated numerically. In Fig.~\ref{fig3}(b), we show the optimal fidelities for generating these states up to 5 excitations, showing again how the $1/\sqrt{P_\mathrm{1D}}$ scaling of fidelities also holds for superpositions.
\subsection{Conditional preparation of many-body entangled states: using post-selection.}
In the error analysis of Section~\ref{subsec:error}, we realized that some of the errors come from the small populations of the superradiant states $\ket{\chi_{g,s}^{(m)}}$, which emit quickly into the waveguide. Actually, in our atom-waveguide configuration one can think of using another atomic ensemble that acts as an efficient photonic absorber, mapping the photonic excitation into a collective atomic one that can afterwards be detected via fluorescence with very high fidelity. Moreover, it is possible to use a more elaborate scheme, as sketched in Ref. \cite{chang12a}, where collective atomic excitations can be mapped to a single impurity atom, making fluorescence detection much more efficient. As it is not the purpose of our manuscript to elaborate on conditional preparation, we just assume that we can perfectly detect all the photons emitted through the waveguide and study the scaling of the fidelities. Using these assumptions, we can cancel the errors in Eqs.~\ref{eqS:erroff} and~\ref{eqS:erroff1} that are
proportional to $N\Gamma_{\mathrm{1D}}$, which in the limit $N\gg m\ge1$ yields:
\begin{align}
\label{eqS:erroffpost}
&\epsilon_{\chi_s^{(m)}}\approx\Gamma^*\frac{m|\Omega_r|^2}{4N^2\big(\Delta_{e}^2+(\Gamma^*+N\Gamma_\mathrm{1D})^2\big)}\,,\\
&\epsilon_{\chi_g^{(m)}}\approx\Gamma^*\frac{m|\Omega_r|^2}{4\big(\Delta_{e}^2+(\Gamma^*+N \Gamma_{\mathrm{1D}})^2\big)}+\Gamma^*\frac{m|\Omega_r|^2}{4N\big(\Delta_{e}^2+(\Gamma^*+N\Gamma_{\mathrm{1D}})^2\big)}\,.
\end{align}
We can immediately see that the leading error comes from the first contribution of $\epsilon_{\chi_g^{(m)}}$. Taking the leading error only, we can then estimate the infidelity of the step $m$ to be
\begin{equation}
1-F^m=t_{\mathrm{op}}^{(m)}\Big(\epsilon_{\Psi_{e}}^{(m)}+\epsilon_{\chi_s^{(m)}}+\epsilon_{\chi_g^{(m)}}\Big)\approx \frac{\pi N \Gamma^*}{2}\Big(\frac{1}{\Delta_e}+\frac{\Delta_e}{(N\Gamma_{\mathrm{1D}})^2}\Big)\,,
\end{equation}
which is optimized for $\Delta_{e,\mathrm{opt}}=N\Gamma_{\mathrm{1D}}$, yielding a scaling $1-F^m_{\mathrm{opt}}\propto 1/P_{\mathrm{1D}}$. It is important to highlight that the optimal condition cannot be realized, as it implies $\Delta_{e,\mathrm{opt}}\gg \Gamma_{\mathrm{1D}}$, which violates the conditions under which we derived our effective Hamiltonian. However, the linear improvement with $1/P_{\mathrm{1D}}$ is still obtained even if we do not reach the optimal conditions. More details on how to take advantage of the atom-nanophotonic waveguide for conditional preparation will be presented elsewhere \cite{workinprogress15}.
\section{Atom-photon mapping starting from initial atomic excitations.}
Let us review first the general derivation for a Hamiltonian of the form $H = H_\mathrm{S} + H_\mathrm{B} + H_\mathrm{SB}$, with
\begin{align}
H_\mathrm{SB} =& \sum_{n,q} g_q a_q^\dagger O^n e^{-{\bf i} q z_n} + \mathrm{H.c.}\,,
\end{align}
where $H_S$ ($H_B$) are the system (1D-bath) Hamiltonians, $H_\mathrm{SB}$ is the interaction between them, and $O^n$ ($a_q$) are the system (1D-bath) operators. Using the generalized input-output formalism \cite{caneva15a}, everything boils down to calculating the scattering amplitude
\begin{align}
A(t) = \bra{\phi_\mathrm{out}} \bra{B_\mathrm{out}} e^{-{\bf i} H t } \ket{B_\mathrm{in}} \ket{\phi_\mathrm{in}},
\end{align}
where $\ket{\phi_\mathrm{in(out)}}=\gamma_\mathrm{in(out)}^\dagger(t)\ket{\mathrm{vac}}$ denotes the system input (output) state at time $t$ and $\ket{B_\mathrm{in(out)}}$ the input (output) state of the bath, which in our case is the electromagnetic field inside the waveguide. Let us particularize to our situation of interest, where we decay from an initial system state with $m$ excitations, that is, $\gamma_\mathrm{in}^\dagger (0) \ket{\mathrm{vac}}=\ket{S_m}$, $\gamma_\mathrm{out}(T) = \mathbf{1}$ and $\mathcal{F}_\mathrm{in} = \mathbf{1}$, and the operators $O^n=\sigma_{ge}^n$. Moreover, we also linearize the waveguide dispersion relationship, i.e., $\omega_q\approx v_g(q_\aa) |q|$, and assume for simplicity that $g_q\approx g_{q_\aa}\equiv g$. With these considerations, the scattering amplitude for $m$ excitations simplifies to \cite{caneva15a,shi15a}
\begin{align}
& A_{ \{ q\}} (t) = \left( -{\bf i} \right)^m g^m \int_{0}^{t} {\rm d} s_1 \cdots \int_{0}^{t} {\rm d} s_m\ e^{-{\bf i} \sum_{i=1}^{m} \omega_{q_i} (T-s_i)} \times \bra{\mathrm{vac}} \mathcal{T} O_{q_1}(s_1) O_{q_2}(s_2) \cdots O_{q_m}(s_{m}) \ket{S_m}\,,
\end{align}
where $\{q\}=\{q_1,\dots,q_m\}$ is the set of relevant momenta of the $m$-photon state, each of which runs over the whole Brillouin zone, $q_i\in\mathrm{B.Z.}$, and $\mathcal{T}$ is the time-ordering operator that guarantees $s_1>s_2>\dots>s_m$. A further simplification is obtained if we assume that we are within the Markov approximation and use the fact that we work with an atomic configuration such that $q_\aa z_n=2\pi n$. Then,
\begin{align}
O_q = \sum_n \sigma^n_{ge} e^{-{\bf i} q z_n} \approx \sum_n \sigma^n_{ge} e^{-{\bf i} q_\aa z_n} = S_{ge}\,.\label{eqS:approxO}
\end{align}
With this approximation the time ordering simply ensures that the final (bosonic) state is symmetrized over all sets $\{ q \}$. Notice that the output photonic state associated with this scattering amplitude can be written as
\begin{equation}
\ket{\Psi_{B}^{(m)}}=\sum_{\{ q\}} \frac{A_{ \{ q\}} (t)}{m!} \ud{a}_{q_1} \ud{a}_{q_2} \dots \ud{a}_{q_m}\ket{\mathrm{vac}}\,,
\end{equation}
where the sum over ${ \{ q\}}=\{q_1,\dots,q_m\}$ extends over all momenta. The state $\ket{\Psi_{B}^{(m)}}$ is normalized with the $1/m!$ factor, as it cancels the $m!$ terms that appear from the permutations of all the $a_{q_i}$'s and the $m!$-factor of the scattering amplitude normalization in the whole ${ \{ q\}}$-space, i.e., $\sum_{ \{ q\}} | A_{ \{ q\}} (t)|^2=m!$ (notice that the scattering amplitude $A_{ \{ q\}}$ is normalized to one only if $\sum_{q_1>q_2>\dots> q_m} | A_{ \{ q\}} (t)|^2=1$).
Therefore, it is enough to calculate the contribution of one time ordering, e.g., $s_1>s_2>\dots >s_m$, and then sum over all the permutations of $\{ q\}$. As was shown in the previous sections, the effective (non-Hermitian) system Hamiltonian is
\begin{align}
H_\mathrm{eff} = \omega_\mathrm{a} S_{ee} - {\bf i} (\Gamma_\mathrm{1D}/2) S_{eg} S_{ge}.
\end{align}
Interestingly, our initial state $\ket{S_m}$ is an eigenstate of this effective Hamiltonian, and we can now calculate the action of the operator
\begin{align}
S_{ge}(s) \ket{S_m} =\sqrt{N_m} \sqrt{m}\ e^{\left[ -{\bf i} \omega_\aa -\Gamma_{\mathrm{1D}} (m N_m-(m-1)N_{m-1})/2 \right] s} \ket{S_{m-1}}
\end{align}
and hence the correlator
\begin{align}
&\bra{\mathrm{vac}} S_{ge}(s_1) S_{ge}(s_2) \cdots S_{ge}(s_{m}) \gamma_\mathrm{in}^\dagger (0) \ket{\mathrm{vac}}= \prod_{r=1}^{m} \sqrt{ r N_r} \exp\Big[\left[-{\bf i} \omega_\aa -\Gamma_\mathrm{1D} (r N_r-(r-1)N_{r-1})/2 \right] s_r \Big].
\end{align}
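The exponent can be cross-checked directly (this remark is ours): using $S_{ge}\ket{F_{0,r}}=\sqrt{r N_r}\ket{F_{0,r-1}}$ one finds $S_{eg}S_{ge}\ket{S_r}=r N_r\ket{S_r}$, so that $\ket{S_r}$ is an eigenstate of $H_\mathrm{eff}$ with eigenvalue $r\omega_\aa-{\bf i}\,\Gamma_\mathrm{1D} r N_r/2$; each factor in the correlator therefore oscillates and decays with the difference of consecutive eigenvalues, $-{\bf i}\omega_\aa-\Gamma_\mathrm{1D}\left[rN_r-(r-1)N_{r-1}\right]/2$, which is precisely the exponent appearing above.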
When doing the integral, one also needs to take care of the particular time ordering considered. For the ordering we have chosen, $s_1>\dots>s_m$, the integral can be rearranged as $\int_0^t {\rm d} s_m \int_{s_{m}}^{t} {\rm d} s_{m-1} \dots \int_{s_2}^t {\rm d} s_{1}$. The choice of $t$ as the upper limit of integration is not accidental: each time integral produces a term proportional to $e^{- N \Gamma_{\mathrm{1D}}t/2 }$, which disappears for times $t\gg 1/(N\Gamma_\mathrm{1D})$, the ones we are interested in. For example, the first integral:
\begin{align}
\int_{s_2}^t {\rm d} s_1 \exp\Big[\left[{\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 \right] s_1 \Big]&=-\frac{1}{{\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1 /2 } \Big( \exp\Big[\left({\bf i} (\omega_{q_1}- \omega_\aa)-\Gamma_\mathrm{1D} N_1/2 \right) s_2 \Big]-\\ \nonumber &\exp\Big[\left({\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1 /2 \right) t \Big]\Big)\\ \nonumber
&\approx -\frac{1}{{\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 } \,\exp\Big[\left({\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 \right) s_2 \Big]\,,
\end{align}
where the last approximation was done for $t\gg 1/(N\Gamma_\mathrm{1D})$. The second integral then reads:
\begin{align}
-&\int_{s_3}^t {\rm d} s_2 \frac{\exp\left[\left[{\bf i} (\omega_{q_1}+\omega_{q_2}- 2\omega_\aa) -\Gamma_\mathrm{1D} 2 N_2/2 \right] s_2 \right]}{{\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 }=\\ \nonumber
&\approx \frac{1}{[{\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 ][{\bf i}(\omega_{q_1}+\omega_{q_2}- 2\omega_\aa)- \Gamma_\mathrm{1D} 2 N_2/2 ]}\,\exp\Big[\left[{\bf i} (\omega_{q_1}+\omega_{q_2}- 2\omega_\aa) -\Gamma_\mathrm{1D} 2 N_2 /2 \right] s_3 \Big]\,,
\end{align}
Iterating this integration and considering the permutations of the $\{q\}$ due to the different time-orderings, we obtain the following expression for the scattering amplitude
\begin{align}
A_{ \{ q \} }(t) = {\bf i}^m g^m \prod_{r=1}^{m} \frac{\sqrt{r N_r}\ e^{-{\bf i} \omega_{q_r} t} }{ {\bf i} ( \sum_{l=1}^r \omega_{q_l}-r \omega_\aa) + r \Gamma_\mathrm{1D} N_r/2 } +[\{q\}-\mathrm{permutations}]\,
\label{eqS:amplitude}
\end{align}
for sufficiently large times $t \gg 1/(N_m \Gamma_\mathrm{1D})$, that is, when the system state has completely decayed and all the excitations have been transferred to the bath. Notice that the only dependence on $t$ in this case enters through $e^{-{\bf i} \sum_{r=1}^m \omega_{q_r} t} $, which describes the center-of-mass motion of the wavepacket in real space. In the low excitation regime, one can either do a Holstein-Primakoff approximation \cite{porras08a} or change $N_m\rightarrow N$ in the expression of Eq. \ref{eqS:amplitude}. In both cases we obtain:
\begin{align}
A^{\mathrm{HP}}_{ \{ q \} }(t) = \sqrt{m!} e^{-{\bf i} \sum_{r=1}^m \omega_{q_r} t} \prod_{r=1}^{m} \frac{{\bf i} g \sqrt{N} }{ {\bf i} ( \omega_{q_r}- \omega_\aa) + \Gamma_\mathrm{1D} N/2 } =\sqrt{m!} e^{-{\bf i} \sum_{r=1}^m \omega_{q_r} t} \prod_{r=1}^{m} C_{\Gamma_{\mathrm{1D}} N}(q_r) \,,
\end{align}
which represents a single mode wavepacket with spectral shape $C_{\Gamma_{\mathrm{1D}} N}(q)$. To emphasize the connection between the linear and non-linear scattering amplitudes we exemplify the results for the $m=2$ photon wavepacket. The non-linear scattering amplitude is given in this case by
\begin{align}
A_{ q_1,q_2 }(t) &=-g^2 e^{-{\bf i} \sum_{r=1}^2 \omega_{q_r} t} \sqrt{2 N_2 N_1} \frac{1}{{\bf i}(\omega_{q_1}+\omega_{q_2}- 2\omega_\aa)- \Gamma_\mathrm{1D} 2 N_2/2 }\Big[\frac{1}{{\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 }+\frac{1}{{\bf i} (\omega_{q_2}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 }\Big]=\nonumber\\
&=-g^2 \sqrt{2 N_2 N_1} e^{-{\bf i} \sum_{r=1}^2 \omega_{q_r} t} \frac{1}{[{\bf i} (\omega_{q_2}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 ][{\bf i} (\omega_{q_1}- \omega_\aa) -\Gamma_\mathrm{1D} N_1/2 ]}\Big[1+\Gamma_{\mathrm{1D}}\frac{(N_2-N_1)}{{\bf i}(\omega_{q_1}+\omega_{q_2}- 2\omega_\aa)- \Gamma_\mathrm{1D} 2 N_2/2 }\Big]\,,
\end{align}
where one can clearly see that if $N_2=N_1$, then $A_{ q_1,q_2 }(t)\equiv A^{\mathrm{HP}}_{ q_1,q_2 }(t)$. Moreover, as $N_1-N_2=1$, it is straightforward to see that the correction due to the $N_2-N_1$ term is of order $O(1/N)$. To generalize to higher photon numbers it is more convenient to directly calculate the overlap between the linear and non-linear approximations before doing the time integral. The reason is that one can formally integrate over the $\{q\}$ variables before the time variables $s_i$, to obtain:
\begin{equation}
\sum_{q_i} |g|^2 \exp\left[{\bf i} (\omega_{q_i} -\omega_{\aa})(s_r-\tilde{s}_r )\right]=\frac{2L|g|^2}{v_g(\omega_\aa)}\delta(s_r-\tilde{s}_r)=\Gamma_{\mathrm{1D}}\delta(s_r-\tilde{s}_r)\,.
\end{equation}
With these $\delta$'s the double time integral appearing when calculating the overlap is much simplified:
\begin{align}
\label{overlap}
\braket{\Psi_{B}^{(m)}}{\Psi_{B,\mathrm{HP}}^{(m)}}=\sum_{\{q\}}\frac{A^*_{\{q\}}(t)A^{\mathrm{HP}}_{\{q\}}(t)}{m!}= \Gamma_{\mathrm{1D}}^m \int_{0}^{t}{\rm d} s_m \cdots \int_{s_2}^{t} {\rm d} s_1 \prod_{r=1}^{m} r \sqrt{N N_r} \exp[-\Gamma_\mathrm{1D} (r N_r-(r-1)N_{r-1}) s_r/2 ]\exp[-\Gamma_\mathrm{1D} N s_r/2 ]\,,
\end{align}
where in the last equality we have used the fact that the contributions of the $m!$ different time orderings are all the same. Then, the multi-time integral can be calculated iteratively, yielding the overlap:
\begin{align}
\label{overlap2}
1-\braket{\Psi_{B}^{(m)}}{\Psi_{B,\mathrm{HP}}^{(m)}}= 1-2^m\prod_{r=1}^{m} \frac{\sqrt{N N_r}}{N+N_r}\approx 1-\prod_{r=1}^{m} \frac{\sqrt{1-r/N}}{1-r/(2N)}\approx \frac{m^3}{20 N^2}+O(m^4/N^3)\,.
\end{align}
From the expression above, one can also check that $\ket{\Psi_{B}^{(m)}}$ is normalized by setting $N_r\equiv N$. For consistency, one can also check that each $C_{\Gamma_{\mathrm{1D}} N}(q)$ in the linear expression of $A_{q}^{\mathrm{HP}}(t)$ is normalized independently:
\begin{align}
\sum_q |C_{\Gamma_{\mathrm{1D}} N}(q)|^2 &=\int_{-\infty}^\infty \frac{{\rm d} q\, |g|^2 N L}{2 \pi} \frac{1}{ (\omega_{q}- \omega_\aa)^2 +(\Gamma_\mathrm{1D} N/2 )^2} \nonumber \\
&\approx \int_0^\infty \frac{{\rm d} \omega\, |g|^2 N L}{v_g(\omega_\aa) \pi} \frac{1}{ (\omega- \omega_\aa)^2 +(\Gamma_\mathrm{1D} N/2 )^2}=\frac{ L |g|^2 N }{\pi v_g(\omega_\aa)} \frac{2\pi}{\Gamma_{\mathrm{1D}} N} =1\,.
\end{align}
\section{Implementation details: Photonic Crystal Waveguides.}
A particularly promising platform to implement our proposal is that of atoms coupled to 1D nanophotonic systems, in which the first proof-of-principle examples have been realized using ``alligator'' photonic crystal waveguides \cite{goban13a,goban15a}. In these systems, the renormalized spontaneous decay rate is given by:
\begin{equation}
\frac{\Gamma_{\mathrm{1D}}}{\Gamma_\aa}=\frac{n_g \sigma \xi}{2 A_m}\,,
\end{equation}
where $n_g=c/v_g$ is the group index, $\sigma=3 \lambda_0^2/(2\pi)$ the radiative cross-section, $A_m$ the effective mode area, and $\xi$ a dimensionless cavity-enhancement factor due to reflections at the ends of the dielectric waveguide. Current SiN structures \cite{goban13a,goban15a} have $A_m\approx 0.2$ $\mu$m$^2$, $n_g\approx 10$ and cavity enhancement $\xi\sim 5$. There are several sources of errors in these systems:
\begin{enumerate}
\item Spontaneous emission into modes other than the chosen guided mode. Current structures show $\Gamma^*\sim \Gamma_\aa$; however, improved designs may further reduce the spontaneous emission, e.g., thicker dielectric structures may reach $\Gamma^*\sim 0.1\Gamma_\aa$ \cite{gonzaleztudela14c}. Depending on the reduction of spontaneous emission, $\Gamma^*=\alpha\Gamma_{\aa}$, the Purcell factor with current designs can be $P_\mathrm{1D}\sim 50/\alpha$.
\item Intrinsic losses of the material yield finite $Q$-factors, which can be calculated as:
\begin{equation}
Q=\frac{n_r}{2n_i}\,
\end{equation}
%
with $n=n_r-in_i$ the refractive index of the material. The $Q$-factor can be easily related to the attenuation of the intensity of the field traveling through the dielectric as follows:
\begin{equation}
L_{\mathrm{prop}}\approx \frac{\lambda_0}{4\pi n_i n_g}\approx \frac{Q \lambda_\aa}{2\pi n_g}\,,
\end{equation}
%
with $\lambda_\aa=\lambda_0/n_r$, and where $L_{\mathrm{prop}}$ incorporates both material absorption via $n_i$ and the effect of the reduced group velocity. We notice that the $Q$-factor also has contributions from scattering losses due to material imperfections, and therefore one must consider state-of-the-art values for the estimates. For Cs atoms ($\lambda_0=894$ nm) and SiN structures ($n_r=2$, $Q\sim 10^6$, $n_g=10$), this yields $L_\mathrm{prop}/\lambda_\aa \gtrsim 10^4$ (see the numerical estimate after this list). The main effect is that $J_{mn}$ must be corrected by this attenuation length: $J_{mn}\approx\Gamma_{\mathrm{1D}} e^{iq(\omega_\aa)|z_{mn}|} e^{-|z_{mn}|/L_{\mathrm{prop}}}$ \cite{gonzaleztudela11a}, with $z_{mn}=z_m-z_n$. As $L_{\mathrm{prop}}/\lambda_\aa$ is very large, in our situation of interest with $z_n=n\lambda_\aa$, the effect of the finite propagation can be treated as a perturbation to the collective Liouvillian given by:
\begin{equation}
\label{eqS:mequation1}
\mathcal{L}_{\mathrm{prop}}(\rho)=
\sum_{n,m} \frac{\Gamma_{\mathrm{1D}}}{2}(1- e^{-|z_{mn}|/L_{\mathrm{prop}}})\left( \sigma_{ge}^n \rho \sigma_{eg}^m - \rho \sigma_{eg}^m \sigma_{ge}^n \right)
+ \rm {H.c.}\approx
\sum_{n,m}\frac{\Gamma_{\mathrm{1D}}}{2}\frac{|z_{mn}|}{L_{\mathrm{prop}}}\left( \sigma_{ge}^n \rho \sigma_{eg}^m - \rho \sigma_{eg}^m \sigma_{ge}^n \right)
+ \rm {H.c.} \,.
\end{equation}
%
and introduces small corrections to the superradiant decay rate of $\ket{S_m}$ and to the spontaneous emission rate of $\ket{\Psi_{e}^{(m)}}$, as long as the size of the atomic ensemble satisfies $N\lambda_\aa \ll L_{\mathrm{prop}}$.
\item If one thinks of increasing $\Gamma_{\mathrm{1D}}$ only through group-velocity reduction, another effect to take into account is retardation. The worst-case correction of this effect appears after doing a fast resonant $\pi/2$ pulse to switch from $\ket{D_m}$ to $\ket{S_m}$ for the atom-photon mapping. To observe superradiant behaviour in that case, the condition $N\Gamma_{\mathrm{1D}}<2 v_g/(N \lambda_\aa)$ must be satisfied (assuming $\lambda_\aa/2$ separation of the atoms), which with current state-of-the-art parameters \cite{goban13a,goban15a} leads to $N\lesssim 500$. Notice that in the preparation of superpositions of $\ket{D_m}$, this critical number goes up to $N\lesssim 10^4$, as the characteristic timescales do not show the collective enhancement.
Furthermore, there are several ways of overcoming retardation in these set-ups: i) increasing $\Gamma_{\mathrm{1D}}$ not only through $v_g$ but through cavity enhancement, e.g., by placing mirrors at the ends of the dielectric \cite{goban15a}; ii) more easily, performing the atom-photon mapping from $\ket{D_m}$ to $\ket{S_m}$ off-resonantly by setting a finite $\Delta_e\neq 0$; this reduces $\Gamma_{\mathrm{1D}}$ (and $\Gamma^*$) by a factor $(\Omega_r/\Delta_e)^2$, which relaxes the retardation requirements while keeping $P_{\mathrm{1D}}$ constant.
\item Moreover, a typical way of increasing the group index is to use the regions of slow light that appear close to 1D band gaps, where one can approximate the dispersion relationship by $\omega_q\approx \omega_c+A(q-q_c)^2$. This dispersion will generate corrections with respect to the linear propagation of the wavepacket, which must be kept small within the bandwidth $m N\Gamma_{\mathrm{1D}}$.
\end{enumerate}
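To make the estimate in item 2 concrete (the arithmetic is ours, using only the parameter values quoted above): for Cs in SiN, $\lambda_\aa=\lambda_0/n_r=447$ nm, and
\begin{equation}
\frac{L_{\mathrm{prop}}}{\lambda_\aa}\approx\frac{Q}{2\pi n_g}=\frac{10^6}{2\pi\times 10}\approx 1.6\times 10^4\,,
\end{equation}
i.e., $L_{\mathrm{prop}}\approx 7$ mm, consistent with the bound $L_\mathrm{prop}/\lambda_\aa \gtrsim 10^4$ quoted above.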
|
2,869,038,155,856 | arxiv | \section{Introduction}
The vibrant experimental programs pursued at the Relativistic Heavy
Ion Collider (RHIC) and at the
Large Hadron Collider (LHC) have ushered in a new era of
exploration of systems governed by the nuclear strong interaction. One of the remarkable features that emerged from investigating the physics of relativistic heavy-ion collisions is the fact that the created systems could be modeled theoretically by relativistic fluid dynamics \cite{Gale:2013da,Heinz:2013th}. This realization led to developments in the formulation of relativistic viscous hydrodynamics in which observable consequences of the dissipative effects were isolated
\cite{Baier:2007ix,Betz:2009zz,Romatschke:2009im,Kovtun:2012rj,Jeon:2015dfa,Florkowski:2017olj,Denicol:2011fa,Koide:2009sy,Denicol:2010xn,Denicol:2012es,Denicol:2012cn}. Currently, second-order viscous hydrodynamics
provides a description of the fluid behavior
\cite{Israel:1979wp,Denicol:2010xn,Denicol:2012es,Denicol:2012cn} which remedies the main failure of the Navier-Stokes -- or first-order -- formulation: acausal
signal propagation and numerical instabilities plaguing relativistic systems.
While the
hydrodynamic equations are universal and provide a macroscopic picture of a
relativistic fluid behavior in terms of conservation laws, transport
coefficients are governed by the underlying
microscopic theory which must be used for their extraction.
Although the first applications of viscous hydrodynamics focused on
the shear viscosity, it has recently become clear that bulk viscosity
also plays an important role in the evolution of the QGP system
\cite{Ryu:2015vwa,Ryu:2017qzn,Paquet:2015lta}. The calculation of bulk viscosity from first principles,
however, remains a challenging project. It is on this aspect that we concentrate in this paper.
The equations of the second-order hydrodynamics describe very efficiently
the expansion of the system produced in heavy-ion collisions.
This is a strong indication that the system must thermalize very
rapidly, which in turn indicates that the system is strongly interacting at
presently achievable energies.
Current estimates of the bulk viscosity of QCD are mainly
based on the equation of state obtained from lattice QCD simulations \cite{Guenther:2017dou,Borsanyi:2016bzg}, or rely on empirical extractions based on simulations of relativistic nuclear collisions \cite{Ryu:2015vwa,Ryu:2017qzn,Paquet:2015lta,Paquet:2017mny}. Application of lattice QCD findings
\cite{Romatschke:2009ng,Karsch:2007jc,Hama:2005dz} and hadron resonance gas results
\cite{NoronhaHostler:2008ju,Denicol:2009am} made it possible to determine that the bulk viscosity
is notably enhanced near the critical temperature of the QCD phase
transition while the shear viscosity is substantially decreased in this
region \cite{Nakamura:2004sy,Csernai:2006zz}.
Furthermore, the importance of bulk viscosity near
the transition temperature region was shown to have a remarkable impact on the elliptic
flow coefficient $v_2$ \cite{Song:2008hj,Denicol:2009am,Denicol:2010tr} and
other heavy-ion observables
\cite{Ryu:2015vwa,Ryu:2017qzn,Paquet:2015lta,Paquet:2017mny,Bozek:2017kxo,Monnai:2016kud}. Recently, the
behavior of bulk viscosity was also obtained from hydrokinetic theory,
which incorporates thermal noise \cite{Akamatsu:2017rdu}.
Despite the progress described above, there is still a need to develop methods which provide a better
insight in the effects of bulk viscosity at different energy
scales. In particular, one may be interested in having a consistent
analytical approach to bulk viscosity physics in the regime of very high
temperatures. At this energy scale the coupling constant is small and
fundamental quantum field theoretical tools can be used to study
bulk viscosity systematically. Having
a comprehensive fluid dynamic formulation of a weakly coupled gas may also
provide an essential benchmark for different approaches and
phenomenological applications.
In Refs.~\cite{Jeon:1994if,Jeon:1995zm} it was shown that quantum field
theory is equivalent, at least at leading order of perturbative
expansion, to kinetic theory. Later calculations then could
use this efficient and intuitive kinetic theory
framework to study transport phenomena; see
\cite{Arnold:2000dr,Arnold:2002zm,Arnold:2003zc}. It has also provided a
natural language to formulate fluid dynamics concepts. Within the kinetic
approaches, the Chapman-Enskog and Grad's 14-moment methods are
commonly employed to study the nonequilibrium processes of a fluid. They,
however, rely on different treatments of the distribution function. While
the Chapman-Enskog theory deals directly with solving the Boltzmann
equation \cite{Chapman:1970}, Grad's approach is based on an expansion
of the nonequilibrium function in terms of the powers of momenta
\cite{Grad:1949}. To date, great progress has been made in extraction of
different transport coefficients within different theories. It seems,
however, that the comprehensive analysis of transport processes in a system
exhibiting conformal anomaly is not yet complete, especially in cases involving a mean field interaction.
A violation of conformal symmetry has a different impact on
different transport coefficients. It does not affect shear viscosity much: its leading order behavior is dominated by the kinetic energy scale in weakly
interacting systems. On the other hand, the breaking of scale invariance
dominates the physics of bulk viscosity. Consequently, the behavior of bulk
viscosity is largely determined by the sources of conformal symmetry breaking:
either the physical mass of plasma constituents or the Callan-Symanzik
$\beta_\lambda$ function, which fixes the coupling as a function of the
energy scale \cite{Jeon:1994if}. The parametric form of bulk viscosity
should then be dictated by the sources of scale invariance breaking squared,
as shown in Ref.~\cite{Arnold:2006fz} for QCD. The bulk viscosity of systems
exhibiting a conformal anomaly, due to the presence of a constant mass only, was
later studied within the Chapman-Enskog approach and the 14-moment
approximation, mostly in the relaxation time approximation \cite{Denicol:2014vaa,Jaiswal:2014isa,Florkowski:2015lra}, and
also within other approaches \cite{Huang:2010sa}. Moreover, quasiparticle
models were also examined for systems of various matter content in
Refs.~\cite{Sasaki:2008fg,Chakraborty:2010fr,Dusling:2011fd,Bluhm:2010qf,Romatschke:2011qp,Albright:2015fpa,Chakraborty:2016ttq,Tinti:2016bav,Alqahtani:2017jwl}.
We observe, however, that there is still a need to revisit a
formulation of nonequilibrium fluid dynamics with the mean field background.
Such a formulation is essential when one needs to include variable thermal masses
consistently in the equations of viscous hydrodynamics. Having the correct form
of a nonequilibrium momentum distribution is also critical while studying
some aspects of nuclear matter behavior phenomenologically, in particular,
when implementing the Cooper-Frye prescription in hydrodynamic simulations or
examining electromagnetic probes in heavy-ion collisions
\cite{Shen:2013cca,Shen:2014nfa,Paquet:2015lta,Hauksson:2016nnm}.
Furthermore, such a consistent approach
allows for an exhaustive calculation of transport coefficients.
The central part of this paper is devoted to derivation of the
nonequilibium correction to the distribution function where thermal effects
are consistently included. Subsequently, it is shown how the correction
influences the bulk viscosity behavior in the relaxation time
approximation. The analysis is done systematically and it comprises
different cases, namely, formulation of equilibrium and nonequilibrium
fluid dynamics and then computation of the ratio of bulk viscosity to relaxation time. A computation is provided for gases of Boltzmann and Bose-Einstein statistics
in both the Anderson-Witting model of the Chapman-Enskog method and the
14-moment approximation. The analysis performed in this paper is specific to single-component bosonic degrees of freedom. Consequently, when the explicit forms of the thermal mass
and the $\beta_\lambda$ function are needed, we will use those of the
scalar $\lambda\phi^4$ theory \cite{Jeon:1994if,Jeon:1995zm}.
The method developed here is not appropriate for a one-component system
following a Fermi-Dirac distribution function. Such a system would be a system of
noninteracting fermionic degrees of freedom where the thermal mass and bulk viscosity
cannot be determined. To count fermions accurately one needs to consider a
many-component system with the inclusion of bosons mediating the interaction. This is not done here and is left for future work.
The correction to the distribution function is found by noticing that there
is a twofold source of departure from equilibrium. First, there are hydrodynamic
forces that generate a deviation in the distribution function $\delta f$,
that is, they change the functional form of the distribution function. The
other source is related directly to interparticle interactions, the effect
of which is statistically averaged and emerges as the mean field.
Therefore, the correction is expressed by two terms; for the
Bose-Einstein gas the correction is
\ba
\Delta f = \delta f - T^2 \frac{d m^2_{\text{eq}}}{dT^2}
\frac{f_0(1+f_0)}{E_k}
\frac{\int dK \delta f}{\int dK E_k f_0(1+f_0)}.\nn\\
\ea
For the description of quantities, see Table \ref{tab-quantities}.
The obtained form of the correction allows one to formulate hydrodynamic
equations in a coherent way, where the Landau matching condition and
thermodynamic relations are guaranteed. Since the thermal mean field has a
negligible impact on shear viscosity, we further concentrate on bulk
viscosity dynamics, where the influence of the thermal background reveals
itself through the Landau condition and the speed of sound.
We show that both the Chapman-Enskog and
the 14-moment approaches lead to the same final
expressions for the $\zeta/\tau_R$ ratio in the small mass limit, where $\tau_R$ is the bulk relaxation time. In general,
temperature-dependent mass results in the emergence of the $\beta_\lambda$ function,
which dictates the very high temperature form of the ratio. In the
Boltzmann case the ratio is
\be
{\zeta_{\rm Boltz}\over \tau_R}
\approx
T^4 \left({1\over 3} - c_s^2\right)^2
\left(
{60\over \pi^2} - {36m_{x} \over \pi T}
\right),
\ee
where $(1/3-c_s^2)$ is directly related to $M_c$, the nonconformality parameter; see Table~\ref{tab-quantities}.
This shows the expected behavior of the source of scale invariance
breaking.
One may observe that one factor of the scale invariance breaking
parameter is introduced directly by the Landau matching, which comes from
the small departure from equilibrium. The other factor emerges as a
correction to the pressure given by purely equilibrium quantities, but not
provided by the equation of state, as argued in \cite{Arnold:2006fz}.
For a system with Bose-Einstein statistics, the result is
\ba
{\zeta \over\tau_R}
\approx
T^4 \left({1\over 3} - c_s^2\right)^2
\left( {2\pi^3 T \over 25 m_{x}} - {4\pi^2 \over 75 }
\left( 1-\frac{9 m^2_{\rm eq}}{8 m^2_x} \right) \right).\nn\\
\ea
The leading order term is not of the expected dependence because of the
factor $T/m_{x}$, which comes from an infrared cutoff. The
same behavior is reflected if we neglect either the constant mass term or
thermally affected quantities. Therefore, it rather indicates that the relaxation time
approximation, which assumes that $\tau_R$ is energy independent, may not
allow one to entirely capture microscopic physics, in particular, of soft
momenta in quantum gases following a Bose-Einstein distribution function.
A similar conclusion was reached in Ref.~\cite{Arnold:2006fz}.
The paper is organized as follows. In Sec.~\ref{sec-deviation} the
ingredients of the effective kinetic theory are briefly summarized and the
derivation of the noneqilibrium thermal correction is provided.
Section~\ref{sec-boltz-hydro} is devoted to the formulation of fluid dynamic basic
equations with the mean field background. In
Sec.~\ref{sec-bulk-ce}, the analysis of the ratio of bulk viscosity to relaxation time
is presented in the Chapman-Enskog theory, within which we solve the Anderson-Witting model. In Sec.~\ref{sec-evolution} we use the 14-moment approximation to derive the
evolution equation for the bulk pressure and then to calculate the bulk
viscosity over the relaxation time ratio and other transport coefficients in the bulk channel in the relaxation time approximation.
Sec.~\ref{sec-summary} summarizes and concludes the work. Appendices
contain some technical details.
\begin{widetext}
\begin{table}[!h]
\centering
\begin{tabular}{lllll}
\hline \hline
Description && Equilibrium quantity && Nonequilibrium quantity
\\
\hline \hline
Physical, zero-temperature mass of a particle && $m_0$
&& $m_0$ \vspace{1mm}
\\
Quasiparticle thermal mass
&&
$m_{\text{eq}}$ && $m_{\text{th}}$ \vspace{1mm}
\\
Quasiparticle mass && $m_x= \sqrt{m_0^2+ m_{\text{eq}}^2}$
&&
$\tilde m_x= \sqrt{m_0^2+ m_{\text{th}}^2}$ \vspace{1mm}
\\
Quasiparticle energy &&
$E_k = \sqrt{{\bf k}^2+ m_x^2}$ \vspace{1mm}
&&
$\mathcal{E}_k = \sqrt{{\bf k}^2+ \tilde m_x^2}$ \vspace{1mm}
\\
Quasiparticle four-momentum &&
$k^\mu \equiv (k_0,{\bf k})=(E_k,{\bf k})$
&&
$\tilde k^\mu \equiv (\tilde k_0,{\bf k})=(\mathcal{E}_k,{\bf k})$ \vspace{1mm}
\\
Lorentz invariant measure &&
$dK = d^3 {\bf k}/[(2\pi)^3 E_k]$
&&
$d\mathcal{K} = d^3 {\bf k}/[(2\pi)^3 \mathcal{E}_k]$ \vspace{1mm}
\\
Distribution function (in the local rest frame) &&
$f_0=1/[e^{\beta E_k}-1]$, with $\beta=1/T $
&& $f=f_0 + \Delta f$ \vspace{1mm}
\\
\hline \hline
Beta function for a coupling constant $\lambda$ & \multicolumn{4}{c}
{$\beta_\lambda = T d\lambda/dT=3\lambda^2/(16\pi^2)$ }
\vspace{1mm}
\\
Temperature dependence of the thermal mass & \multicolumn{4}{c}{$T^2 d m^2_{\text{eq}}/dT^2=m^2_{\text{eq}}+aT^2 \beta_\lambda$, with $a=1/48$ }
\vspace{1mm}
\\
Nonconformality parameter & \multicolumn{4}{c}{$M=(-m^2_0+aT^2 \beta_\lambda)/3$}
\\
\hline \hline
\end{tabular}
\caption{\label{tab-quantities} The quantities characterizing the
equilibrium and nonequilibrium dynamics of a gas with
Bose-Einstein statistics. For a classical gas with Boltzmann
statistics, some of these quantities have different values or forms, and whenever
there is a need to distinguish them we add the subscript $c$: $m_{\text{eq},c}$, $f_{0,c}=e^{-\beta E_k}$, $f_c$, $m_{\text{th},c}$,
$a_c=1/(8\pi^2)$, and $M_c$.}
\end{table}
\end{widetext}
\section{Nonequilibrium deviation from the equilibrium distribution
function}
\label{sec-deviation}
\subsection{Boltzmann equation with the mean field effect}
\label{sec-boltz}
Kinetic theory provides an efficient classical description of complex
microscopic dynamics of an interacting many-body system.
It is a good alternative to quantum field theory to study transport phenomena
in the weakly coupled limit dominated by quasiparticle dynamics.
By quasiparticles one means particles which,
apart from zero temperature mass, gain additional thermal mass due to
interactions with the medium: the effect of the mean field.
They are characterized by a mean free path which is much larger than the
Compton wavelength of the system's constituents, and by a mean free time which is
much larger than the duration of individual collisions \cite{Arnold:2002zm}.
The dynamics of quasiparticles is encoded in the phase-space distribution
function which evolves according to the Boltzmann equation.
We consider a system of uncharged thermally influenced particles of a
single species for which the Boltzmann equation reads
\ba
\label{boltz}
(\tilde k^\mu \partial_\mu -\mathcal{E}_k \nabla \mathcal{E}_k \cdot \nabla_k)
f=C[f],
\ea
where $C[f]$ is the collision term, $f=f(x,k)$ is a distribution function
of quasiparticles,\footnote{We use here such a notation that whenever $x$
and $k$ appear as arguments of a function, we mean $x^\mu$ and $\tilde k^\mu$ (or $k^\mu$ in the case of $f_0$),
respectively.} and the second term of the left-hand side involves the force
${\bf F}=d{\bf k}/dt = -\nabla \mathcal{E}_k$. The quasiparticle
four-momentum is defined as $\tilde k^\mu=(\tilde k^0, {\bf k})$, where $\tilde k_0 \equiv
\mathcal{E}_k$ is the nonequilibrium energy given by
\ba
\label{energy-noneq}
\mathcal{E}_k = \sqrt{{\bf k}^2+\tilde m_x^2},
\ea
which is a time- and space-dependent variable since $\tilde m_x^2 \equiv
\tilde m^2(x)=m_0^2+m^2_\text{th}(x)$, where $m_0$ is the physical
mass and $m_\text{th}(x)$ is the nonequilibrium thermal mass, which varies
in time and space. Knowing the $x$ dependence of the energy, one may rewrite
Eq.~(\ref{boltz}) as
\ba
\label{boltz-2}
\big(\tilde k^\mu \partial_\mu -\frac{1}{2} \nabla \tilde m_x^2 \cdot
\nabla_k\big) f=C[f].
\ea
The central object of the kinetic theory is the phase-space density
function $f(x,k)$. What we assume about the system is that its departure
from the equilibrium state is small, which, in turn, means that the process
of system equilibration is controlled by a small deviation in the
distribution function, which we denote as
\ba
\label{deltaf-0}
\Delta f(x,k) = f(x,k) - f_0(x,k),
\ea
where $f_0(x,k)$ is the equilibrium Bose-Einstein distribution function
and, in a general frame, it has the form
\ba
\label{distrib-function-zero-gen}
f_0(x,k) = \frac{1}{\exp[u_\mu(x)k^\mu(x)\beta(x)]-1},
\ea
where $\beta\equiv \beta(x)=1/T(x)$ with $T(x)$ being the local temperature, and
$u_\mu\equiv u_\mu(x)$ is the fluid four-velocity. The four-velocity in the
local rest frame is $u^\mu=(1,0,0,0)$. The quasiparticle four-momentum is
$k^\mu=(k^0, {\bf k})$, where the $k^0$ component is the equilibrium
$x$-dependent energy
\ba
E_k=\sqrt{{\bf k}^2+ m^2_x},
\ea
where the dependence on $x$ enters through the mass $m^2_x \equiv m^2(x)=
m_0^2 + m^2_{\text{eq}}(x)$ with $m^2_{\text{eq}}(x)$ being the equilibrium
thermal mass, which is not the same as $m^2_{\text{th}}(x)$, the nonequilibrium thermal mass. The
Bose-Einstein density function in the fluid rest frame takes the form
\ba
\label{distrib-function-zero}
f_0(x,k) = \frac{1}{\exp\big( E_k(x)\beta(x) \big)-1}.
\ea
Let us add that in the forthcoming parts we will be deriving all equations
for the Bose-Einstein gas, but these equations may be analogously found for
the classical Boltzmann gas with the distribution function
\ba
\label{boltz-fun}
f_{0,c}(x,k)= \text{exp}(-\beta(x) u_\mu(x) k^\mu(x))
\ea
and these will be briefly presented as well. Our aim is to reformulate the
equations of viscous hydrodynamics when the effect of the fluctuating
thermal mass is incorporated. Therefore, we assume that the thermal influence
on the process of the system equilibration is controlled by the
nonequilibrium correction to the thermal mass,
$\Delta m^2_{\text{th}} = m_{\rm th}^2 - m_{\rm eq}^2$,
which will be specified further.
\subsection{Form of $\Delta f$}
\label{sec-ff}
As stated earlier, in this work we study systems with distribution functions that are perturbed from their equilibrium value.
More specifically, the nonequilibrium phase space density can be written as
\be
f(x,k) = f_{\rm th}(x,k) + \delta f(x,k)\,.
\ee
The first part, $f_{\rm th}(x,k)$, still retains the local-equilibrium
form of the distribution function, but the thermal mass
contains the nonequilibrium corrections
\ba
&&f_{\text{th}}(x,k) \equiv
\left. f_0(x,k)
\right|_{m_0^2 + m^2_{\text{eq}}(x)\to m_0^2 + m^2_{\text{eq}}(x)+\Delta m^2_{\text{th}}(x)}
\\
&& \;\;= \bigg[ \exp \Big( \sqrt{{\bf k}^2 + m^2_0 + m^2_{\text{eq}}(x)
+ \Delta m^2_{\text{th}}(x)} \beta(x) \Big)-1\bigg]^{-1}.\nn
\ea
The second part, $\delta f(x,k)$, is a change in the functional form of $f_0(x,k)$
caused by hydrodynamic forces, or equivalently,
nonvanishing gradients of energy and momentum densities.
The nonequilibrium correction $\Delta f$ then has two parts,
\ba
\label{Delta-f_not_used}
\Delta f(x,k) &=& f(x,k) - f_0(x,k) \nn \\
&=& \delta f(x,k) + \delta f_{\text{th}}(x,k),
\ea
where, to the leading order in small change,
$\delta f_{\rm th}(x,k) = f_{\rm th}(x,k) - f_0(x,k)$ is
\ba
\label{delta-th}
\delta f_{\text{th}}(x,k) &=&
-f_0(x,k) \big(1+f_0(x,k)\big)
\frac{\Delta m^2_{\text{th}}(x)}{2 E_k(x)} \beta(x), \qquad\;
\ea
which is obtained by expanding $f_{\rm th}$.
Since $\Delta m_{\rm th}^2$ is the nonequilibrium deviation,
it itself is going to be a functional of $\Delta f$.
Hence, the equation
\ba
\label{Delta-f}
\Delta f = \delta f - \beta f_0(1+f_0) \frac{\Delta m_{\text{th}}^2}{2E_k}
\ea
must be solved self-consistently for $\Delta f$.
\subsection{Form of $\Delta m^2_{\rm{th}}$}
\label{sec-mm}
Recalling the basic foundations of effective kinetic theory,
the analysis here relies
heavily on findings within the scalar $\lambda\phi^4$ theory,
as provided in Refs.~\cite{Jeon:1994if,Jeon:1995zm},
which makes the introduction of thermal
corrections analytically feasible.
But the analysis presented here works equally well whenever the equilibrium
thermal mass has the form $\sim g^n T^2$, where $g$ is the dimensionless coupling
constant and $n$ is a positive integer.
We
intend to provide an effective macroscopic framework to study weakly
interacting systems, where the strength of interaction is determined by the
coupling constant $\lambda \ll 1$.
The coupling constant is scale
(temperature) dependent and the analysis performed here pertains only to
the perturbative regime. Within this approach the equilibrium thermal mass
is found to be
\ba
\label{mass-qq}
m^2_{\text{eq}} = \frac{\lambda(q_0)}{2} q_0,
\ea
where we have introduced the equilibrium scalar quantity $q_0$. The
function $q_0$ and its nonequilibrium counterpart $q$ are defined through
the corresponding distribution functions as
\ba
\label{q-fun-0}
q_0 &=& \int d K f_0,\\
\label{q-fun}
q &=& \int d \mathcal{K} f.
\ea
For the definitions of the symbols,
see Table\,\ref{tab-quantities}.
Therefore, one can observe that Eq.~(\ref{mass-qq}) contains the coupling
constant $\lambda(q_0)$, which is temperature dependent since $q_0$ is
temperature dependent.
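As a quick numerical illustration (the sketch is ours; the integration cutoff and mass values are illustrative assumptions), one can check by direct quadrature that $q_0$ as defined in Eq.~(\ref{q-fun-0}) approaches $T^2/12$ for a small quasiparticle mass, the leading-order value used below:
\begin{verbatim}
# Numerical check: q0 = \int dK f0 -> T^2/12 for m_x -> 0, where
# dK = k^2 dk / (2 pi^2 E_k) after the angular integration.
import numpy as np
from scipy.integrate import quad

def q0(T, mx):
    E = lambda k: np.sqrt(k**2 + mx**2)
    integrand = lambda k: k**2 / (2 * np.pi**2 * E(k)) \
                          / (np.exp(E(k) / T) - 1.0)
    return quad(integrand, 1e-10, 50.0 * T)[0]

T = 1.0
for mx in (0.5, 0.1, 0.01):
    print(f"m_x = {mx:4.2f}:  q0/T^2 = {q0(T, mx)/T**2:.5f}"
          f"   (T^2/12 = {1/12:.5f})")
\end{verbatim}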
Throughout the analysis we always keep the assumption that all
nonequilibrium quantities are slowly varying functions of the space-time
points, which justifies that the nonequilibrium dynamics is governed by
small deviations of the quantities from their equilibrium values.
Therefore, we further assume that the nonequilibrium thermal mass is a
function of the scalar quantity $q$ only.
The same assumption is applied to the running coupling $\lambda(q)$.
Thus, the nonequilibrium thermal mass can be
expanded as
\ba
\label{mas-del}
m^2_{\text{th}}(q) = m^2_{\text{th}}(q_0+\Delta q)
= m^2_{\text{eq}}(q_0) + \Delta m_{\text{th}}^2
\ea
with
\ba
\label{mas-del-q}
\Delta m_{\text{th}}^2 = \frac{d m^2_{\text{eq}}}{dq_0} \Delta q.
\ea
The function $q$ is uniquely defined by Eq.~(\ref{q-fun}) and should be
obtained self-consistently from this
equation. Hence to evaluate $\Delta m_{\text{th}}^2$, we need to find
$\Delta q$ which is itself a function of $ \Delta
m_{\text{th}}^2$. The
deviation of the scalar quantity $q$ can be written as
\ba
\Delta q &=&\int dK \delta f + \frac{\partial q_0}{\partial
m^2_{\text{eq}}} \Delta m_{\text{th}}^2.
\ea
Equation~(\ref{mas-del-q}) then takes the form
\ba
\label{mas-del1}
\Delta m^2_{\text{th}} = \frac{1}{1- \frac{dm^2_{\text{eq}}}{dq_0}
\frac{\partial q_0}{\partial m^2_{\text{eq}} }}
\frac{dm^2_{\text{eq}}}{dq_0} \int dK \delta f.
\ea
On the other hand, both $m^2_{\text{eq}}$ and $q_0$ are related through the
temperature, so that one can find
\ba
&&\frac{dm^2_{\text{eq}}}{dT}= \frac{dm^2_{\text{eq}}}{dq_0} \frac{dq_0}{dT} \nn\\
&& \;\;=\frac{dm^2_{\text{eq}}}{dq_0} \bigg( \beta^2 \int dK E_k f_0(1+f_0) +
\frac{dm^2_{\text{eq}}}{dT} \frac{\partial q_0}{ \partial m^2_{\text{eq}}}
\bigg).\qquad
\ea
Extracting further $\frac{dm^2_{\text{eq}}}{dq_0} \frac{\partial
q_0}{\partial m^2_{\text{eq}} }$ and inserting it into Eq.~(\ref{mas-del1})
leads to
\ba
\label{Delta-m}
\Delta m^2_{\text{th}} =2 T^2 \frac{d m^2_{\text{eq}}}{dT^2} \frac{\int dK
\delta f}{\beta \int dK E_k f_0(1+f_0)},
\ea
where we used $d m^2_{\text{eq}}/dT = 2T d
m^2_{\text{eq}}/ dT^2$.
\vspace{0.2cm}
Inserting Eq.~(\ref{Delta-m}) into
Eq.~(\ref{Delta-f}), one gets
\ba
\label{Delta-f1}
\Delta f = \delta f - T^2 \frac{d m^2_{\text{eq}}}{dT^2}
\frac{f_0(1+f_0)}{E_k}
\frac{\int dK \delta f}{\int dK E_k f_0(1+f_0)}.\qquad\;
\ea
Analogously, the correction for the Boltzmann gas is
\ba
\label{Delta-f1-bol}
\Delta f_c = \delta f_c - T^2 \frac{d m^2_{\text{eq},c}}{dT^2}
\frac{f_{0,c}}{E_k}
\frac{\int dK \delta f_c}{\int dK E_k f_{0,c}},
\ea
where the subscript $c$ has been used to emphasize that the formula holds
for the classical gas.
Equations~(\ref{Delta-f1}) and (\ref{Delta-f1-bol}) are among the main results of
this paper. In previous analyses \cite{Sasaki:2008fg,Chakraborty:2010fr,Bluhm:2010qf,Romatschke:2011qp,Albright:2015fpa,Chakraborty:2016ttq,Tinti:2016bav,Alqahtani:2017jwl}, the second term in Eq.~(\ref{Delta-f1}) was missing or was incomplete. When applying the Cooper-Frye formula in viscous hydrodynamics,
it is $\Delta f$, not $\delta f$, that should be used.
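As an illustration of how Eq.~(\ref{Delta-f1}) acts on a given deviation, the following minimal Python sketch evaluates the correction numerically for a Bose gas. It assumes the standard measure $dK = k^2\,dk/(2\pi^2 E_k)$; the temperature, the quasiparticle mass, $T^2\,dm^2_{\text{eq}}/dT^2$, and the trial $\delta f$ are arbitrary illustrative inputs, not values fixed by the text.
\begin{verbatim}
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

T = 1.0                    # temperature (illustrative units)
m_x = 0.1 * T              # quasiparticle mass (test value)
dm2_dT2 = 0.05             # T^2 derivative of m_eq^2, treated as input

k = np.linspace(1e-4, 30 * T, 40000)   # momentum grid
E = np.sqrt(k**2 + m_x**2)             # E_k
f0 = 1.0 / np.expm1(E / T)             # Bose-Einstein distribution
w = f0 * (1.0 + f0)                    # f0 (1 + f0)
dK = k**2 / (2.0 * np.pi**2 * E)       # measure dK per unit dk

def Delta_f(delta_f):
    # Eq. (Delta-f1): Delta f = delta f
    #   - T^2 (dm_eq^2/dT^2) [f0(1+f0)/E_k]
    #     * int dK delta f / int dK E_k f0(1+f0)
    num = trapz(dK * delta_f, k)
    den = trapz(dK * E * w, k)
    return delta_f - T**2 * dm2_dT2 * (w / E) * num / den

delta_f = w * (E - 3.0 * T) / T        # arbitrary scalar trial deviation
Df = Delta_f(delta_f)
print("int dK delta_f:", trapz(dK * delta_f, k))
print("int dK Delta_f:", trapz(dK * Df, k))
\end{verbatim}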
\subsection{Temperature dependence of the thermal mass}
\label{sec-tm}
The thermal mass is a function of the
scalar quantity $q_0$ and is defined by Eq.~(\ref{mass-qq}).
Its temperature dependence is dictated by
\ba
\label{mass-t}
\frac{dm^2_{\text{eq}}}{dT} = \frac{\lambda(q_0)}{2} \frac{dq_0}{dT}
+ \frac{q_0}{2} \frac{d\lambda(q_0)}{dT}.
\ea
$q_0$ is one of the thermodynamic functions discussed in detail in Appendix
\ref{bessel-fun}, and its leading order value is found to be $T^2/12$.
Additionally, the second term in Eq.~(\ref{mass-t})
encodes the running of the coupling constant as a function of the energy
scale, which is the essence of the renormalization group
$\beta_\lambda$ function, defined by
\ba
\beta_\lambda \equiv \beta(\lambda) = T \frac{d\lambda(q_0)}{dT}.
\ea
It should be obtained using diagrammatic methods.
In the case of scalar theory,
$\beta_\lambda$ is positive and proportional to $\lambda^2$.
Collecting these contributions, one finds
\ba
\label{mass-temp}
T^2\frac{dm^2_{\text{eq}}}{dT^2} = m^2_{\text{eq}} + a T^2 \beta_\lambda,
\ea
where $m^2_{\text{eq}} = \lambda T^2/24$ and $a=1/48$.
One can analogously consider a temperature-dependent scaling for the
classical Boltzmann gas. In this case, the thermal effective mass may
be assumed to have the same form as (\ref{mass-qq}).
The only difference is that one uses the Boltzmann distribution function
$f_{0,c}$ instead of $f_0$. This gives
$q_{0,c} = T^2/(2\pi^2) + O(m_x^2)$, as given
by Eq.~(\ref{I00}), and it leads to
\ba
\label{mass-temp_2}
T^2\frac{dm^2_{\text{eq},c}}{dT^2} = m^2_{\text{eq},c} + a_c T^2
\beta_\lambda,
\ea
where $m^2_{\text{eq},c} = \lambda T^2/(4\pi^2)$ and $a_c=1/(8\pi^2)$.
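The leading-order values quoted above can be checked by direct quadrature. The short Python sketch below, assuming the measure $dK = k^2\,dk/(2\pi^2 E_k)$ and the massless limit, reproduces $q_0 \simeq T^2/12$ for the Bose gas and $q_{0,c} \simeq T^2/(2\pi^2)$ for the Boltzmann gas; the temperature is an arbitrary test value.
\begin{verbatim}
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

T = 1.3                                   # arbitrary temperature
k = np.linspace(1e-6, 40 * T, 200000)
E = k                                     # massless limit
dK = k**2 / (2.0 * np.pi**2 * E)

q0_bose = trapz(dK / np.expm1(E / T), k)  # Bose-Einstein f0
q0_boltz = trapz(dK * np.exp(-E / T), k)  # Boltzmann f0c

print(q0_bose, T**2 / 12.0)               # both ~ 0.14083
print(q0_boltz, T**2 / (2.0 * np.pi**2))  # both ~ 0.08562
\end{verbatim}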
\section{Equations of hydrodynamics with thermal corrections}
\label{sec-boltz-hydro}
\subsection{Local equilibrium hydrodynamics}
\label{sec-hydro-eq}
First consider a system under strict local equilibrium.
By that we mean that the functional form of the distribution function
is still $f_0$ given in Eq.~(\ref{distrib-function-zero-gen})
or in Eq.~(\ref{boltz-fun}),
but the temperature as well as the thermal mass are $x$ dependent.
Such a system
possesses a conserved stress-energy tensor of the form
\ba
\label{T-zero}
T_0^{\mu\nu} = \int dK k^\mu k^\nu f_0 -g^{\mu\nu}U_0,
\ea
where the metric tensor we use is $g^{\mu\nu} = \text{diag}(1,-1,-1,-1)$.
The extra term $U_0\equiv U_0(x)$ is the mean-field contribution that guarantees the
thermodynamic consistency of hydrodynamic equations and the conservation of
energy and momentum, via the following condition:
\ba
\label{U-zero}
dU_0 = \frac{q_0}{2} dm^2_{\text{eq}},
\ea
where $q_0$ is the Lorentz scalar defined by Eq.~(\ref{q-fun-0}).
Since we study here a system with no conserved charges, the
Landau frame is a natural kinetic framework to define the four-velocity
$u^\mu$ via
\ba
\label{eigen}
u_\mu T_0^{\mu\nu} = \epsilon_0 u^\nu,
\ea
where the eigenvalue $\epsilon_0$ can be identified as the
local energy density. With this definition the energy-momentum tensor may
be decomposed using two orthogonal projections $u^\mu u^\nu$ and
$\Delta^{\mu\nu} = g^{\mu\nu} - u^\mu u^\nu$.
The equilibrium energy-momentum tensor becomes
\ba
\label{tensor-zero}
T_0^{\mu\nu} = \epsilon_0 u^\mu u^\nu - P_0 \Delta^{\mu\nu},
\ea
where $P_0$ is the local thermodynamic pressure. The energy density and the
pressure are in turn given by
\ba
\label{energy-pressure}
\epsilon_0 &=& \bar \epsilon_0- U_0, \\
\label{energy-pressure2}
P_0 &=& \bar P_0+U_0,
\ea
where
\ba
\bar \epsilon_0 &=& \big \langle (u_\mu k^\mu)^2 \big \rangle_0, \\
\bar P_0 &=& -\frac{1}{3} \big \langle \Delta^{\mu\nu} k_\mu k_\nu \big \rangle_0
\ea
with the notation $\big \langle \dots \big \rangle_0 = \int dK \dots f_0$.
Let us point out that the enthalpy is not changed by the mean field:
$\bar\epsilon_0+\bar P_0 = \epsilon_0+ P_0 $. One may also check that the
definitions of energy density (\ref{energy-pressure}) and pressure (\ref{energy-pressure2}), together
with the condition (\ref{U-zero}), guarantee that the thermodynamic
relation
\ba
T s_0=T\frac{d P_0}{dT} = \epsilon_0+ P_0,
\ea
where $ s_0$ is the entropy density, is satisfied.
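This consistency can also be verified numerically. The sketch below takes the leading-order mass $m^2_{\text{eq}}(T)=\lambda T^2/24$, computes $\bar P_0$, $\bar\epsilon_0$, and $q_0$ by quadrature, forms $dP_0/dT$ as the total derivative of $\bar P_0$ (including the $T$ dependence of the mass) plus $dU_0/dT=(q_0/2)\,dm^2_{\text{eq}}/dT$ from the condition (\ref{U-zero}), and checks it against $(\epsilon_0+P_0)/T=(\bar\epsilon_0+\bar P_0)/T$. The coupling $\lambda$ and the temperature are arbitrary test values, and $dK = k^2\,dk/(2\pi^2 E_k)$ is assumed.
\begin{verbatim}
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

lam = 0.5                                  # test coupling

def m2(T):                                 # leading-order m_eq^2
    return lam * T**2 / 24.0

def dm2_dT(T):
    return lam * T / 12.0

def thermo(T):
    k = np.linspace(1e-6, 50.0, 200000)    # fixed grid, adequate for T ~ 1
    E = np.sqrt(k**2 + m2(T))
    f0 = 1.0 / np.expm1(E / T)
    dK = k**2 / (2.0 * np.pi**2 * E)
    Pbar = trapz(dK * k**2 / 3.0 * f0, k)  # \bar P_0
    ebar = trapz(dK * E**2 * f0, k)        # \bar epsilon_0
    q0 = trapz(dK * f0, k)                 # q_0
    return Pbar, ebar, q0

T, h = 1.0, 1e-4
Pp, _, _ = thermo(T + h)
Pm, _, _ = thermo(T - h)
Pbar, ebar, q0 = thermo(T)
# dP0/dT = d(\bar P_0)/dT (with m(T) varying) + dU_0/dT,
# where dU_0/dT = (q_0/2) dm_eq^2/dT from Eq. (U-zero)
dP0_dT = (Pp - Pm) / (2.0 * h) + 0.5 * q0 * dm2_dT(T)
print(dP0_dT, (ebar + Pbar) / T)           # the two values should agree
\end{verbatim}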
\subsection{Nonequilibrium hydrodynamics}
\label{sec-hydro-non}
The stress-energy tensor of fluid dynamics out of equilibrium takes the
following form:
\ba
\label{T-noneq}
T^{\mu\nu} = \int d\mathcal{K} \tilde k^\mu \tilde k^\nu f -g^{\mu\nu}U,
\ea
which is formally the same as Eq.~(\ref{T-zero}).
The mean-field correction $U$ must be now a function of
$q = \int d{\cal K} f$ only \cite{Jeon:1995zm}.
We emphasize that the formulation of the fluid
hydrodynamic framework with the thermal correction still has to
conform with all assumptions that were made to provide the effective kinetic
theory, discussed in Sec. \ref{sec-deviation}.
In particular, such a
description requires the system to be sufficiently dilute and the
quasiparticles' mean free paths to be much longer than the thermal width of
its constituents, which is maintained when the strength of interaction is
weak. Furthermore, to ensure the validity of hydrodynamics, the system has
to be characterized by some macroscopic length scale at which macroscopic
variables, such as pressure and energy density, vary. Under these
assumptions, a nonequilibrium hydrodynamic description applies to systems
where departures of all quantities from their equilibrium values are
characterized by small corrections. Therefore, the nonequilibrium function
$U$, in particular, may be expanded as
\ba
\label{U-Del}
U=U_0 + \Delta U ,
\ea
where
\ba
\label{U-Delq}
\Delta U = \frac{dU_0}{dq_0} \Delta q.
\ea
However, as discussed before and explicitly
shown by Eqs.~(\ref{mas-del}) and (\ref{mas-del-q}), the thermal mass is
also a function of $q$ only. Therefore, applying the relation
(\ref{mas-del-q}) to (\ref{U-Delq}), one finds
\ba
\label{U1}
\Delta U
= \frac{q_0}{2} \Delta m^2_{\text{th}}.
\ea
As before, this is also the condition that $U$
must satisfy to maintain the energy-momentum conservation law $\partial_\mu
T^{\mu\nu}=0$.
The stress-energy tensor of the viscous hydrodynamics (\ref{T-noneq}) may
next be decomposed into the local equilibrium part and the nonequilibrium
deviation
\ba
\label{tensor-delta}
T^{\mu\nu} = T^{\mu\nu}_0 + \Delta T^{\mu\nu},
\ea
where $T^{\mu\nu}_0$ is given by (\ref{tensor-zero}) and $\Delta
T^{\mu\nu}$ carries all dynamical
information needed in order to determine how the nonequilibrium system
evolves into equilibrium. Note that a separation of the viscous
correction from the equilibrium part in Eq.~(\ref{tensor-delta}) has
been done not as a rearrangement of Eq.~(\ref{T-noneq}) but rather as an
expansion of the stress-energy tensor around its local equilibrium value.
As shown in Appendix \ref{components}, we have
\ba
\label{T-00}
\Delta T^{00} &=& \int dK E_k^2 \Delta f, \\
\label{T-0i}
\Delta T^{0i} &=& \int dK E_k k^i \Delta f , \\
\label{T-ij}
\Delta T^{ij} &=&
\int dK k^i k^j \Delta f
-\frac{\Delta m^2_{\text{th}}}{2} \int dK \frac{k^i k^j}{E_k^2} f_0 \nn\\
&&+\delta^{ij} \frac{\Delta m^2_{\text{th}}}{2}
\int dK f_0,
\ea
where $\Delta m^2_{\text{th}}$ and $\Delta f$ are given by (\ref{Delta-m})
and (\ref{Delta-f1}), respectively. Equations (\ref{T-00}) and (\ref{T-0i})
shall dictate the form of the Landau matching condition,
and Eq.~(\ref{T-ij}) contains the definitions of the viscous corrections.
\subsection{Landau matching condition in the rest frame}
\label{sec-hydro-landau}
The Landau matching is defined by the eigenvalue problem
\ba
\label{eigen-non}
u_\mu T^{\mu\nu} = \epsilon u^\nu,
\ea
where $\epsilon$ is the energy density of the nonequilibrium state
including the thermal correction $U$. In the
fluid rest frame it comes down to two equations, corresponding to the
conditions on the energy density and the momentum density:
\ba
T^{00} = \epsilon, \qquad\qquad T^{0i} = 0.
\ea
Under the Landau matching condition,
the local equilibrium state is defined to have the same local energy and momentum densities:
\ba
\label{T-00-landau}
\Delta T^{00} = 0,
\qquad\qquad\qquad
\label{T-0i-landau}
\Delta T^{0i} = 0.
\ea
Using Eqs.~(\ref{T-00}) and (\ref{T-0i}) with the correction to the
distribution function $\Delta f$
given by Eq.~(\ref{Delta-f1}), we obtain
\ba
\label{landau-1}
\Delta \epsilon \! &=&\! \int \! dK \bigg[E_k^2 - T^2
\frac{dm^2_{\text{eq}}}{dT^2} \bigg] \delta f , \\
\label{landau-2}
0 \! &=& \! \int \! dK \bigg[E_k k^i - T^2 \frac{dm^2_{\text{eq}}}{dT^2}
\! \frac{\int \!dK' k^{\prime i} f_0(f_0+1)}{\int \!dK' E_k' f_0(f_0+1)}\bigg] \delta f.\qquad\;
\ea
However, the second term in Eq.~(\ref{landau-2}) vanishes because of rotational symmetry in
equilibrium.
Hence the Landau matching conditions are
\ba
\label{landau-e}
\int dK \bigg[E_k^2 - T^2 \frac{dm^2_{\text{eq}}}{dT^2} \bigg] \delta f &=&
0 , \\
\label{landau-p}
\int dK E_k k^i \delta f &=& 0.
\ea
The second condition indicates that $\delta f$ cannot have
a vector component: it can only contain a spin 0 part
and a spin 2 part.
\subsection{Shear-stress tensor and bulk pressure in the local rest frame}
\label{sec-hydro-viscous}
The shear tensor $\pi^{ij}$ and the bulk pressure $\Pi$ are found from Eq.~(\ref{T-ij}) in the local rest frame, where
Eqs.~(\ref{Delta-m}) and (\ref{Delta-f1}) are inserted. Then, as shown
in Appendix~\ref{components}, one obtains
\ba
\label{T-ij-del}
\Delta T^{ij} = \int dK k^i k^j \delta f.
\ea
We can reorganize (\ref{T-ij-del}) to separate
the spin 0 part and the spin 2 part as follows:
\ba
\Delta T^{ij} = \pi^{ij} + \delta^{ij} \Pi,
\ea
where
\ba
\label{pi-ij}
\pi^{ij} &=& \int dK k^{\langle i} k^{j \rangle}
\delta f, \\
\label{Pi-expr}
\Pi &=& \frac{1}{3} \int dK {\bf k}^2 \delta f,
\ea
where $k^{\langle i}k^{j\rangle} = k^i k^j - {\bf k}^2\delta^{ij}/3$.
These coincide with the commonly known forms of the shear-stress tensor and bulk
pressure in the local rest frame.
\subsection{General frame}
\label{sec-hydro-gen}
In a general frame where the flow velocity $u^\mu$ may be arbitrary,
the energy-momentum tensor
is\footnote{In Ref.~\cite{Jeon:1994if}, the energy-momentum tensor
correction was written down incorrectly, but the error cancels once the Landau matching condition is imposed, so the subsequent
derivations remain valid.}
\ba
T^{\mu\nu} &=& \int dK k^\mu k^\nu f_0 - g^{\mu\nu}U_0 \nn\\
&&
+ \int dK \bigg[ k^\mu k^\nu -u^\mu u^\nu T^2\frac{dm^2_{\text{eq}}}{dT^2}
\bigg] \delta f.
\ea
The Landau condition then becomes
\ba
\int dK \bigg[ (u_\mu k^\mu) k^\nu - u^\nu
T^2\frac{dm^2_{\text{eq}}}{dT^2} \bigg]\delta f =0
\ea
and the viscous corrections are given by
\ba
\label{viscous-corr}
\pi^{\mu\nu} = \big\langle k^{\langle \mu} k^{\nu \rangle}
\big\rangle_\delta ,
\qquad
\label{viscous-pi}
\Pi = -\frac{1}{3}\big\langle \Delta_{\mu\nu} k^\mu k^\nu
\big\rangle_\delta,
\ea
where $\langle \dots \rangle_\delta \equiv \int dK (\dots) \delta f$. We
have also used the notation
$A^{\langle \mu\nu \rangle} \equiv \Delta^{\mu\nu}_{\alpha \beta}
A^{\alpha\beta}$, where
$\Delta^{\mu\nu}_{\alpha \beta} \equiv (\Delta^\mu_\alpha \Delta^\nu_\beta
+ \Delta^\mu_\beta \Delta^\nu_\alpha - 2/3
\Delta^{\mu\nu}\Delta_{\alpha\beta})/2$.
The definitions (\ref{viscous-corr}) have
well-known structures, but the thermal mass that enters
them is now $x$ dependent and the Landau matching contains a correction due
to the temperature-dependent mass. These arguments are essential when one
aims at examining transport properties of the medium.
\section{Nonequilibrium correction in the Chapman-Enskog approach}
\label{sec-bulk-ce}
Chapman-Enskog theory provides
a way to directly find the solution to the
Boltzmann equation for near-equilibrium systems.
Solving the full Boltzmann equation, however, is a formidable task.
In this paper, we use the Anderson-Witting model \cite{Anderson:1973} to find
the explicit leading order solution.
In this section, we focus on the bosonic quantum gas case.
Treatment for the Boltzmann gas case is identical
if one replaces $f_0(1+f_0)$ with the Boltzmann factor $f_{0,c}$.
\subsection{Solution of the Anderson-Witting equation in the rest frame}
With the medium-dependent thermal mass, the Anderson-Witting model is
given by
\be
\left( \tilde k^\mu\partial_\mu - {\cal E}_k \nabla {\cal E}_k\cdot \nabla_k \right)f =
-{(u\cdot \tilde k)\over \tau_R}\Delta f,
\label{eq:AW_eq}
\ee
where $\tilde k^\mu = ({\cal E}_k, {\bf k})$. In the fluid cell rest frame
$u^\mu = (1, 0, 0, 0)$ and $u\cdot \tilde k = {\cal E}_k$.
To use the Chapman-Enskog method, we let
\be
f = f_0 + f_1 + f_2 + \cdots
\ee
where each $f_n$ contains only the $n$-th derivatives of
the thermodynamic quantities and the flow velocity.
The first-order equation is obtained by identifying $\Delta f = f_1$
on the right-hand side and using all other quantities in their equilibrium forms:
\be
\left(k^\mu\partial_\mu
- {1\over 2}\partial_i m_{\rm eq}^2
{\partial\over \partial k_i}\right)f_0(x,k)
= -{E_k\over \tau_R} \Delta f(x,k),
\ee
where now $k^\mu = (E_k, {\bf k})$.
Evaluating the left-hand side yields
\ba
&&
\Big( k^\mu \partial_\mu - {1\over 2}\partial_i m_{\rm eq}^2
{\partial \over \partial k_i}\Big)f_0(x,k)
=
-\beta
f_0(x,k)(1 + f_0(x,k)) \nn \\
&& \;\;\;\times \bigg[
\bigg(
c_s^2\left(E_k^2 - T^2{dm_{\rm eq}^2\over dT^2}\right)
- {{\bf k}^2\over 3}
\bigg)
(\partial_i u^i)
-
k^{\langle j} k^{i\rangle} \partial_j u_i
\bigg], \nn \\
\label{eq:AW-LHS}
\ea
where the equations of motion from the ideal hydrodynamics
\ba
\label{part-u}
\partial_0 u^i &=& \frac{\partial^i T}{T},\\
\label{dtemp}
\partial_0 T &=& - T c_s^2 \partial_i u^i
\ea
are used to remove time derivatives.
The $\Delta f$ on the right-hand side of the Anderson-Witting model
is just Eq.~(\ref{Delta-f1}).
Letting $\delta f = f_0(1+f_0)\phi$, we get
\ba
&&
\Delta f(k)
=
f_0(k)(1 +f_0(k)) \nn \\
&&\times \left(\phi(k) -
{T^2\over E_k} {dm_{\rm eq}^2\over dT^2}
{\int dK\phi(k) f_0(k)(1 + f_0(k))
\over \int dK E_{k}f_0(k)(1 + f_0(k))}
\right)\!\!,\;\;\;\;\;
\label{eq:AW-RHS}
\ea
where the $x$ dependence of all quantities is suppressed for the sake of brevity.
In previous derivations, the last term
was missing \cite{Bluhm:2010qf,Tinti:2016bav,Alqahtani:2017jwl}.
Dividing $\phi$ into the shear and the bulk parts
$\phi = \phi_{\rm s} + \phi_{\rm b}$, and comparing
Eqs.~(\ref{eq:AW-LHS}) and (\ref{eq:AW-RHS}),
the shear part of $\phi$ is trivially obtained as
\be
\phi_{\rm s}(k)
=
-{\tau_R \over TE_k}
k^{\langle j} k^{i\rangle} \partial_j u_i ,
\label{phi_s}
\ee
since the angle integration over the spin-2 tensor $k^{\langle j}k^{i\rangle}$
vanishes.
For the bulk part,
letting
\be
\phi_{\rm b}(k) = \left(aE_k + {b\over E_k}\right) \partial_i u^i
\ee
and comparing Eqs.~(\ref{eq:AW-RHS}) and (\ref{eq:AW-LHS}),
we get
\be
a = \tau_R \beta \left(c_s^2 - {1\over 3}\right)
\label{a-expr}
\ee
and
\ba
b
&=&
{-M\beta\tau_R J_{1,0} \over J_{1,0} - T^2(dm_{\rm eq}^2/dT^2) J_{-1,0}},
\label{b-expr}
\ea
where we defined
\be
M = -{1\over 3}\left(m_x^2 - T^2{dm_{\rm eq}^2\over dT^2}\right) .
\label{eq:M}
\ee
With $m_{\rm eq}^2 \propto \lambda T^2$, we have
\be
M = -{1\over 3}\left(m_0^2 - a\beta_\lambda T^2\right),
\ee
where $\beta_\lambda$ is the coefficient function of the coupling constant
renormalization group and $a = O(1)$ depends on the theory.
The parameter $M$
can be identified as the parameter of nonconformality of the system (or
the source of the conformal invariance violation).
We have also introduced a notation for thermodynamic integrals,
\ba
\label{J}
J_{n,q} &=& a_q\int dK (u \cdot k)^{n-2q}
(-\Delta_{\mu\nu} k^\mu k^\nu )^q \, f_0(k)(1+f_0(k)),\nn\\
&&
\ea
where $a_q=1/(2q+1)!!$, which can be evaluated in the fluid cell rest frame.
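For orientation, these integrals are straightforward to evaluate by quadrature in the rest frame, where $u\cdot k = E_k$ and $-\Delta_{\mu\nu}k^\mu k^\nu = {\bf k}^2$. The sketch below (with $dK = k^2\,dk/(2\pi^2 E_k)$ assumed and arbitrary test values of $T$ and the mass) checks the massless value $J_{1,0}=T^3/6$ and the elementary identity $J_{3,0}-3J_{3,1}=m_x^2 J_{1,0}$, which is of the kind used below to relate the speed of sound to the $J_{n,q}$.
\begin{verbatim}
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def dfact(n):                   # double factorial n!!
    out = 1
    while n > 1:
        out, n = out * n, n - 2
    return out

def J(n, q, T, m):
    # J_{n,q} of Eq. (J) in the rest frame, Bose-Einstein statistics
    k = np.linspace(1e-6, 60.0 * T, 400000)
    E = np.sqrt(k**2 + m**2)
    f0 = 1.0 / np.expm1(E / T)
    w = f0 * (1.0 + f0)
    dK = k**2 / (2.0 * np.pi**2 * E)
    return trapz(dK * E**(n - 2 * q) * k**(2 * q) * w, k) / dfact(2 * q + 1)

T, m = 1.0, 0.3
print(J(1, 0, T, 0.0), T**3 / 6.0)          # massless: J_{1,0} = T^3/6
lhs = J(3, 0, T, m) - 3.0 * J(3, 1, T, m)
print(lhs, m**2 * J(1, 0, T, m))            # J_{3,0}-3J_{3,1} = m^2 J_{1,0}
\end{verbatim}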
The bulk part of the leading order Chapman-Enskog solution
of the Anderson-Witting equation is then
\ba
\label{cs2}
&&\phi_{\rm b}(k)=\tau_R \beta (\partial_i u^i) \nn\\
&&\times \left(
(c_s^2 - 1/3)E_k
-{1\over E_k}
{M J_{1,0}\over J_{1,0} - T^2(dm_{\rm eq}^2/dT^2) J_{-1,0}}
\right).\nn\\
&&
\ea
To show that $\phi_{\rm b}(k)$ is in fact proportional to $(c_s^2-1/3)$,
we can use
\ba
\label{sound}
c_s^2 = \frac{d P_0/dT}{d \epsilon_0/dT}
= \frac{J_{3,1} }
{J_{3,0} - T^2 (dm_{\rm eq}^2/dT^2)J_{1,0}},
\ea
where $ P_0$ and $ \epsilon_0$
are the pressure and the energy density
given in Eqs.~(\ref{energy-pressure2}) and (\ref{energy-pressure}).
Using the identities from Appendix~\ref{bose-details}, one can also show
that
\ba
{1\over 3} - c_s^2
&=&
-
\frac{ MJ_{1,0} }
{
J_{3,0} - T^2(dm_{\rm eq}^2/dT^2)J_{1,0 }
}.
\label{speed-be}
\ea
Hence finally
\ba
\label{eq:phi_b}
&&\phi_{\rm b}(k) =
\tau_R \beta(\partial_i u^i)(c_s^2 - 1/3) \nn\\
&&\qquad \times \left(E_k -{1\over E_k}{J_{3,0} - T^2(dm_{\rm eq}^2/dT^2)J_{1,0}\over J_{1,0}
- T^2(dm_{\rm eq}^2/dT^2) J_{-1,0}}
\right).\qquad
\ea
Equation~(\ref{eq:phi_b}) is another main result of this work. This equation differs slightly from the analogous one for Boltzmann statistics shown in Refs.~\cite{Romatschke:2011qp,Chakraborty:2010fr,Paquet:2015lta}.
In hydrodynamic simulations, it is practical to replace the system expansion rate by the bulk viscous pressure using the Navier-Stokes relation $\Pi = - \zeta \theta$, which gives
\ba
&&\phi_{\rm b}(k) =
\beta \left( - \frac{\Pi}{\zeta/\tau_R} \right)(c_s^2 - 1/3) \nn\\
&&\qquad \times \left(E_k -{1\over E_k}{J_{3,0} - T^2(dm_{\rm eq}^2/dT^2)J_{1,0}\over J_{1,0}
- T^2(dm_{\rm eq}^2/dT^2) J_{-1,0}}
\right).\qquad
\label{eq:phi_b_fin}
\ea
Having given the solution of the Anderson-Witting equation, one can also find $\Delta f$ explicitly. Inserting Eqs.~(\ref{eq:phi_b}) and (\ref{phi_s}) into (\ref{eq:AW-RHS}) one finds
\ba
\label{del-II}
\Delta f(k)&=&f_0(k)(1+f_0(k))\tau_R \beta
\bigg[-(\partial_j u_i) \frac{ k^{\langle j} k^{i\rangle} }{E_k} \nn\\
&&+ (\partial_i u^i)(c_s^2-1/3)
\bigg(E_k - \frac{1}{E_k}\frac{J_{3,0}}{J_{1,0}}\bigg)\bigg].\;\;
\ea
The phase space density correction $\Delta f$ has a much
simpler form than $\phi$. However, for transport coefficient calculations, it is
$\phi$ (equivalently $\delta f$),
rather than $\Delta f$, that is needed.
\subsection{Energy conservation and Landau matching in the Anderson-Witting case}
By multiplying $\tilde k^\nu = ({\cal E}_k, {\bf k})$ and integrating over $d{\cal K}$,
the left-hand side of the Anderson-Witting equation (\ref{eq:AW_eq}) turns into
$\partial_\mu T^{\mu\nu}$, where the stress-energy tensor $T^{\mu\nu}$ is defined in Eq.~(\ref{T-noneq}). Assuming that the mean-field contribution $U$ satisfies
\be
\partial_\mu U(x) = {\partial_\mu \tilde{m}_{x}^2(x)\over 2}\int d{\cal K} f(x,k),
\ee
we get $\partial_\mu T^{\mu\nu} = 0$.
Under the same condition, the right-hand side of the
Anderson-Witting model within the Chapman-Enskog approach must also vanish,
\be
-{1\over \tau_R} \int dK\, E_k k^\mu\, \Delta f = 0,
\ee
to ensure energy-momentum conservation.
This condition for energy-momentum conservation is in fact
exactly the same as the Landau conditions we derived in
Sec.~\ref{sec-hydro-landau}.
Upon using $\Delta f$ in Eq.~(\ref{Delta-f1}) in the fluid rest frame,
these become
\be
0 = \int dK \left(E^2_k - T^2{dm_{\rm eq}^2\over dT^2} \right)\delta f
\label{eq:Landau_cond_ene}
\ee
and
\be
0
=
\int dK E_k k^i \delta f .
\label{eq:Landau_cond_pi}
\ee
Eq.~(\ref{eq:Landau_cond_pi}) is automatically satisfied
by the $\delta f = f_0(1+f_0)(\phi_{\rm s} + \phi_{\rm b})$ obtained
in the previous subsection since it does not contain a vector part.
In the condition (\ref{eq:Landau_cond_ene}), the shear part $\phi_{\rm s}$
also vanishes because it contains a spin-2 tensor.
Using Eqs.~(\ref{eq:phi_b}) and (\ref{J}), it is easy to show
that the energy conservation and the Landau condition are indeed fulfilled.
This automatic fulfillment of the Landau condition for the quasiparticle case
would not have been possible if one missed the $\Delta m^2_{\rm th}$ correction in $\Delta f$.
\subsection{The shear and the bulk viscosities in the Anderson-Witting model}
\label{sec-ce-ratio}
The full leading order Chapman-Enskog solution to the Anderson-Witting model is
given by Eq.~(\ref{eq:AW-RHS}) with $\phi_{\rm s}$ and $\phi_{\rm b}$ obtained above.
The shear viscosity can be evaluated by using Eq.~(\ref{pi-ij}) for $\pi^{ij}$
and Eq.~(\ref{phi_s}) for $\phi_{\rm s}$ as
\be
\pi^{ij} =
\frac{2\beta}{15} \tau_R \int dK\, f_0(1+f_0)\frac{{\bf k}^4}{E_k} \sigma^{ij},
\ee
where $\sigma^{ij}=-1/2(\partial^i u^j + \partial^j u^i -2/3 g^{ij} \partial_k u^k)$.
Identifying $\pi^{ij} = 2\eta \sigma^{ij}$,
we get
\be
{\eta\over\tau_R} = \beta J_{3,2}
\ee
and subsequently find the shear viscosity in the relaxation time
approximation, which was examined in a few papers (see, for example,
\cite{Romatschke:2011qp,Denicol:2014vaa,Florkowski:2015lra}) and has the form
\ba
\label{shear-v}
\frac{\eta}{\tau_R} = \frac{\epsilon_0+P_0}{5}.
\ea
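As a numerical cross-check, the sketch below evaluates both sides of Eq.~(\ref{shear-v}) for a massless Bose gas, using $\eta/\tau_R=\beta J_{3,2}$ and $\epsilon_0+P_0=\bar\epsilon_0+\bar P_0$; both come out as $2\pi^2 T^4/225$. The measure $dK = k^2\,dk/(2\pi^2 E_k)$ and the test temperature are illustrative assumptions.
\begin{verbatim}
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

T = 1.0
k = np.linspace(1e-6, 60 * T, 400000)
E = k                                     # massless Bose gas
f0 = 1.0 / np.expm1(E / T)
w = f0 * (1.0 + f0)
dK = k**2 / (2.0 * np.pi**2 * E)

J32 = trapz(dK * k**4 / E * w, k) / 15.0  # J_{3,2}, a_2 = 1/5!! = 1/15
eps = trapz(dK * E**2 * f0, k)            # \bar epsilon_0
P = trapz(dK * k**2 / 3.0 * f0, k)        # \bar P_0

print(J32 / T, (eps + P) / 5.0)           # both ~ 2 pi^2 T^4/225 = 0.0877
\end{verbatim}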
For the bulk viscosity, we start with Eq.~(\ref{Pi-expr})
\be
\Pi = \int dK\, {{\bf k}^2\over 3}\, \delta f.
\ee
Using the Landau condition, Eq.~(\ref{landau-e}), one gets
\ba
\Pi &=& M \int dK \delta f,
\label{eq:Pi_final}
\ea
in which only the bulk part is relevant:
\ba
\Pi
&=&
M\int dK\, f_0(k) (1+f_0(k))\phi_b(k)
\label{eq:Pi_bulk}
\ea
with $\phi_b(k)$ given by Eq.~(\ref{eq:phi_b}). Since $\Pi = -\zeta \partial_i u^i$,
one can read off the ratio of bulk viscosity to the relaxation time from Eq.~(\ref{eq:Pi_bulk}) as
\ba
&&{\zeta\over \tau_R}
=
\beta M^2
\bigg(
{J_{-1,0}J_{1,0}\over J_{1,0} - T^2(dm_{\rm eq}^2/dT^2) J_{-1,0}} \qquad\qquad\nn \\
&&\qquad\qquad\qquad
-\frac{ J_{1,0} J_{1,0} } {J_{3,0} - T^2(dm_{\rm eq}^2/dT^2)J_{1,0}}
\bigg).
\label{bulk-be1}
\ea
The integrals present in Eq.~(\ref{bulk-be1}) have been computed
in Appendix \ref{bose-details}. Using them, one gets the value of the ratio as
\ba
\label{bulk-be-2-ce}
\frac{\zeta}{\tau_R} \approx
\frac{M^2}{2\pi^2}
\bigg( \frac{\pi T}{4 m_x} - \frac{11}{12}\bigg( 1-\frac{9m^2_{\rm eq}}{44m^2_x} \bigg) \bigg).
\ea
For the application in relativistic viscous hydrodynamics,
it is more useful to express the result through the speed of sound. Applying Eqs.~(\ref{speed-be}) and (\ref{bulk-be1}), one can explicitly show that the ratio is proportional to $(1/3-c_s^2)^2$, namely
\ba
\label{eq:zeta_be}
{\zeta \over\tau_R}
&\approx&
T^4 \left({1\over 3} - c_s^2\right)^2
\left( {2\pi^3 T \over 25 m_{x}} - {4\pi^2 \over 75 }
\left( 1-\frac{9 m^2_{\rm eq}}{8 m^2_x} \right) \right).\nn\\
&&
\ea
Note the appearance of $T/m_{x}$ in the expression (\ref{eq:zeta_be}).
This is in clear contrast to the Boltzmann statistics case
which does not show such a behavior.
The analysis for the Boltzmann statistics case is identical to the analysis above
except that in place of $J_{n,q}$ we have
\ba
\label{I}
I_{n,q} &=& a_q \int dK (u \cdot k)^{n-2q}
(-\Delta_{\mu\nu} k^\mu k^\nu )^q f_{0,c}(k),\qquad
\ea
where $f_{0,c}(k) = e^{-\beta k^\mu u_\mu}$.
In this case, one gets
\be
\label{zeta-boltzmann}
{\zeta_{\rm Boltz}\over \tau_R}
\approx
T^4 \left({1\over 3} - c_s^2\right)^2
\left(
{60\over \pi^2} - {36m_{x} \over \pi T}
\right).
\ee
The origin of this discrepancy is
the fact that the Bose-Einstein factor behaves like
$f(k)\sim T/E_k$ in the infrared limit, which makes the thermodynamic integral
$J_{-1,0}$ in Eq.~(\ref{bulk-be1}) diverge in the $m_x\to 0$ limit while
$I_{-1,0}$ does not.
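This behavior is easy to exhibit numerically: halving the mass roughly doubles $J_{-1,0}$, while $I_{-1,0}$ approaches its finite massless value $T/(2\pi^2)$. The sketch below assumes $dK = k^2\,dk/(2\pi^2 E_k)$ and arbitrary decreasing test masses.
\begin{verbatim}
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

T = 1.0
for m in (0.1, 0.05, 0.025):              # decreasing test masses
    k = np.linspace(1e-7, 60 * T, 800000)
    E = np.sqrt(k**2 + m**2)
    f0 = 1.0 / np.expm1(E / T)
    dK = k**2 / (2.0 * np.pi**2 * E)
    Jm10 = trapz(dK / E * f0 * (1.0 + f0), k)   # J_{-1,0}: grows ~ T/m
    Im10 = trapz(dK / E * np.exp(-E / T), k)    # I_{-1,0}: stays finite
    print(m, Jm10, Im10)                        # Im10 -> T/(2 pi^2)
\end{verbatim}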
As a result, soft
momenta govern the structure of $\zeta/\tau_R$. However, since the
calculation was performed in the relaxation time approximation, which
assumes that $\tau_R$ is independent of energy,
it may not capture the right
soft physics. A similar behavior was seen
in Ref.~\cite{Arnold:2006fz}, where QCD bulk viscosity is
studied. The authors claim that the correct behavior of bulk viscosity is
obtained in the relaxation time approximation
by neglecting the infrared divergent term.
But in principle there is no reason why this term
should be ignored within the present framework.
Further, notice that starting from Eq.~(\ref{eq:AW-LHS}), the spin 0 part (the bulk
part) and the spin 2 part (the shear part) of the analysis are totally independent.
Hence, it is possible to generalize the leading order Anderson-Witting equation
as
\ba
&&\left(k^\mu\partial_\mu
- {1\over 2}\partial_i m_{\rm eq}^2
{\partial\over \partial k_i}\right)f_0(x,k) \nn \\
&& \qquad\qquad\qquad
= -{E_k\over \tau_\pi} \Delta f_{\rm s}(x,k)
-{E_k\over \tau_\Pi} \Delta f_{\rm b}(x,k),\qquad
\ea
where $\Delta f_{\rm s}$
and $\Delta f_{\rm b}$ are the shear and bulk parts of $\Delta f$.
In fact, when the dominant physical
processes for the shear relaxation and the bulk relaxation are different,
this is the most natural form of the Anderson-Witting model.
The analysis of this generalized Anderson-Witting model follows exactly the same
route as for the single $\tau_R$, except that the shear viscosity
and the bulk viscosity have different relaxation times.
As discussed in Refs.~\cite{Jeon:1994if,Jeon:1995zm}, the dominant physical processes for the
shear relaxation and the bulk relaxation can be indeed very different, and the bulk
relaxation can be dominated by the soft sector.
Hence, the appearance of $T/m_x$ is not entirely unnatural given that
$\tau_\Pi$ can have very different $m_x$ dependence from $\tau_\pi$ and
the bulk relaxation is dominated by the soft number-changing process.
\subsection{Comparison of $\Delta f$ to previous works}
The phase space correction $\Delta f$ in Eq.~(\ref{del-II}) ultimately
comes from solving the first-order Chapman-Enskog approximation.
Hence, it should come as no surprise that Eq.~(\ref{del-II})
is consistent with results found in other works,
provided that the right expression for the speed of sound is used.
For instance, in Ref.~\cite{Romatschke:2011qp}
the bulk part of the phase space correction in
the Boltzmann case is derived to be
\ba
\label{delta-Rom}
\Delta f_R(k) = f_{0,c}(k) \phi_R(k)
\ea
with
\ba
\label{phi-Rom}
\phi_R(k) &=&\tau_R \beta (\partial_i u^i) \bigg((c_{sR}^2-1/3)E_k \\ \nn
&&-\frac{1}{E_k}\bigg( c_{sR}^2 m_x T \frac{d m_x}{dT} - \frac{m^2_x}{3}\bigg) \bigg),
\ea
where the speed of sound is $c^2_{sR}=(3+zK_2(z)/K_3(z))^{-1}$, with $z=m_x/T$ and $K_n(z)$ being the modified Bessel functions of the second kind.
This $\phi_R$ is different from $\phi_b$ in Eq.~(\ref{eq:phi_b_fin})
since $\phi_R$ is a part of $\Delta f$ while $\phi_b$ is a part of
$\delta f$. The phase space correction $\Delta f_R$ is, however,
equivalent to the bulk part of $\Delta f$ in Eq.~(\ref{del-II})
if one uses the speed of sound expression
(\ref{sound}) with $J_{n,q}\to I_{n,q}$.
As mentioned above, this is as it should be since both are solutions of the first-order
Chapman-Enskog approximation.
The big difference between the previous treatments and ours is in computing the bulk viscosity.
The bulk viscosity must be calculated using $\delta f$ and
{\em not} $\Delta f$ as explained in the previous section.
If one uses $\Delta f$ (or $\Delta f_R$) instead of $\delta f$,
the ratio $\zeta/\tau_R$ would be incorrectly calculated.
\section{Transport coefficients in the 14-moment approximation}
\label{sec-evolution}
When a system features a conformal anomaly, first-order
transport coefficients reveal different
sensitivity to the source of the conformal symmetry violation, as explicitly shown in the previous section. In particular, shear viscosity is fully determined by the dominant energy
scale, which is the temperature $T$, and thus the ratio of the shear viscosity to its relaxation time behaves as $T^4$ at leading order in the conformal symmetry breaking, making the effects of the scale anomaly negligible. On the other hand, the bulk
viscosity over the relaxation time is fully determined by the breaking of conformal symmetry. This difference justifies
omitting the analysis of shear viscous effects and evaluating first- and second-order
transport coefficients related to the bulk pressure, because the additional term in Eq.~(\ref{Delta-f1})
indeed concerns only the scalar part. The analysis is performed at leading order
in the conformal breaking parameter
while including the thermal mass consistently.
The bulk pressure is given by Eq.~(\ref{eq:Pi_final}). Noting that Eq.~(\ref{Delta-f1}) can be expressed as
\ba
M \Delta f &=& M \delta f - T^2 \frac{d m^2_{\text{eq}}}{dT^2}
\frac{f_0(1+f_0)}{E_k}
\frac{\Pi}{\int dK E_k f_0(1+f_0)}, \nn\\
&&
\ea
one can rewrite Eq.~(\ref{eq:Pi_final}) as
\be
\Pi =
\tilde M \int dK\,\Delta f,
\ee
where
\be
\tilde M =
{MJ_{1,0}\over J_{1,0} - T^2(dm_{\rm eq}^2/dT^2)J_{-1,0}}.
\ee
To obtain the
equation of motion for the bulk pressure, we first take
the time derivative of $\Pi$,
\ba
\label{Pi-eom}
\dot \Pi &=&
\dot{\tilde{M}} \int dK\,\Delta f \nn\\
&&
+
\tilde{M} \bigg[ \int dK \Delta \dot f
- \frac{\dot m^2_{\text{eq}}}{2}\int dK \frac{1}{E_k^2} \Delta f\bigg] ,
\ea
where we adopted the notation $\dot A = u^\mu \partial_\mu A$ for an
arbitrary quantity $A$, which reduces to the time derivative
in the rest frame of the fluid.
From the Boltzmann equation
\be
\left( \tilde k^\mu\partial_\mu - {\cal E}_k \nabla {\cal E}_k\cdot \nabla_k \right)f =
C[f],
\label{eq:Boltzmann_eq}
\ee
where $C[f]$ is the collision integral, one finds
\ba
\label{delta-eom}
u^\mu \partial_\mu(\Delta f)&=&
\frac{1}{(u\cdot \tilde k)} \bigg[ C[f] - \tilde k^\mu \partial_\mu f_0
- \tilde k^\mu \nabla_\mu \Delta f \nn\\
&&\qquad
+\frac{1}{2} \nabla \tilde m_x^2 \nabla_k f_0
+\frac{1}{2} \nabla \tilde m_x^2 \nabla_k \Delta f \bigg].\qquad
\ea
Inserting the expression (\ref{delta-eom})
into Eq.~(\ref{Pi-eom}) and keeping only leading order terms, that is, terms which are evaluated
with $\tilde k \to k$, we have
\ba
\label{Pi-eom-1}
\dot \Pi - C &=&
- \tilde{M}\bigg[- \dot\beta \Big(J_{1,0} -
T^2(dm^2_{\text{eq}}/dT^2) J_{-1,0}\Big)
\nn\\
&&
+ \frac{\beta}{3} \theta \Big( J_{1,0} - m_x^2 J_{-1,0} \Big) \bigg]
+\left( \frac{\dot{\tilde M}}{\tilde M}- \frac{2}{3} \theta \right) \Pi
\nn\\
&&
-M\Big( \frac{\dot m_{\text{eq}}^2}{2} + \frac{m_x^2}{3} \theta \Big) \rho_{-2}
- M \rho_{-2}^{\mu\nu} \sigma_{\mu\nu} ,
\ea
where $\theta \equiv \nabla_\mu u^\mu$ and $\sigma_{\mu\nu} = \partial_{\langle\mu} u_{\nu\rangle}$ is the Navier-Stokes shear tensor. In Eq.~(\ref{Pi-eom-1}) we adopted the following notation for the collision term:
\ba
\label{C}
C &=& \tilde M \int dK (u \cdot k)^{-1} C[f]
\ea
and, for the irreducible moments,
\ba
\label{rho-1}
\rho_n =\langle (u^\alpha k_\alpha)^n \rangle_\delta, \qquad
\rho_n^{\mu\nu} = \langle (u^\alpha k_\alpha)^n k^{\langle \mu} k^{\nu \rangle} \rangle_\delta.
\;\;\;
\ea
Evaluating $u_\nu \partial_\mu T^{\mu\nu}=0$ and implementing the formula (\ref{sound}) for the speed of sound squared, one obtains
\ba
\label{beta-dot}
\dot \beta = \frac{ \Pi \theta -\pi^{\mu\nu} \sigma_{\mu\nu} }
{J_{3,0} - T^2 (dm^2_{\text{eq}}/dT^2)J_{1,0}} + c_s^2 \beta \theta .
\ea
Next, calculating time derivatives $\dot{\tilde M}$ and $\dot m^2_{\text{eq}}$, Eq.~(\ref{Pi-eom-1}) simplifies to
\ba
\label{Pi-eom-2}
\dot \Pi - C &=&
-
\beta\tilde{M} \bigg[\bigg(\frac{1}{3}- c_s^2\bigg)
\left( J_{1,0} - T^2\frac{dm_{\rm eq}^2}{dT^2}J_{-1,0}\right)\nn\\
&&
+ M J_{-1,0} \bigg] \theta
- \Big(\frac{2}{3} + \frac{2c_s^2 aT^2\beta_\lambda}{3\tilde M}-A \Big)\theta \Pi \nn\\
&&
- \pi^{\mu\nu}\sigma_{\mu\nu} A
+M^2 \rho_{-2} \theta
- M \rho_{-2}^{\mu\nu} \sigma_{\mu\nu} ,
\ea
where
\ba
\label{A}
A=\tilde M \frac{J_{1,0}- T^2(dm^2_{\text{eq}}/dT^2) J_{-1,0}}{J_{3,0}- T^2(dm^2_{\text{eq}}/dT^2)J_{1,0}} = c_s^2 -\frac{1}{3}\qquad
\ea
with the quantity $(c_s^2-1/3)$ given by Eq.~(\ref{speed-be}).
To close Eq.~(\ref{Pi-eom-2}) in terms of $\Pi$ and $\pi^{\mu\nu}$, one can apply the 14-moment approximation, which allows one to express the irreducible moments by $\Pi$ and $\pi^{\mu\nu}$ as follows:
\ba
\label{mom-1}
\rho_{-2} &=& \gamma^{(0)}_{2} \Pi,\\
\label{mom-2}
\rho^{\mu\nu}_{-2} &=&\gamma^{(2)}_2 \pi^{\mu\nu},
\ea
where the coefficients $\gamma^{(0)}_{2}$ and $\gamma^{(2)}_2$ are combinations of different thermal functions $J_{n,q}$. Their particular forms are presented in Appendix~\ref{moments}. Also, using the Anderson-Witting model for the collision term
\ba
C[f] = - (u\cdot k) \frac{\Delta f}{\tau_R},
\ea
where $\Delta f$ is given by Eq.~(\ref{Delta-f1}), the collision integral becomes
\ba
\label{C1}
C &=& -\frac{\Pi}{\tau_R}.
\ea
Applying the collision term in the relaxation time approximation (\ref{C1}), the irreducible moments, Eqs.~(\ref{mom-1}) and (\ref{mom-2}), and the relation for the speed of sound (\ref{speed-be}) to the evolution equation (\ref{Pi-eom-2}), one obtains
\ba
\label{Pi-eom-3}
\dot \Pi + \frac{\Pi}{\tau_R} &=&
-{\zeta\theta\over\tau_R} - \frac{\delta_{\Pi\Pi}}{\tau_R}\theta \Pi + \frac{\lambda_{\Pi \pi}}{\tau_R} \pi^{\mu\nu}\sigma_{\mu\nu},
\ea
where
\ba
{\zeta\over \tau_R}
&=&
\beta
M^2
\bigg[
{J_{1,0}J_{-1,0}\over J_{1,0} - T^2(dm_{\rm eq}^2/dT^2)J_{-1,0}}\qquad\qquad\nn\\
&&\qquad
-{J_{1,0}J_{1,0}\over J_{3,0} - T^2(dm_{\rm eq}^2/dT^2)J_{1,0}}
\bigg]
\ea
is identical to the expression obtained in the Chapman-Enskog
approach found in the previous section, Eq.~(\ref{bulk-be1}). The remaining transport coefficients are
\ba
\label{bulk-del-bose}
\frac{\delta_{\Pi\Pi}}{\tau_R} &=&1 - c_s^2 +M^2 \gamma_2^{(0)} + \frac{2 aT^2\beta_\lambda}{9\tilde M}, \\
\label{bulk-lam-bose}
\frac{\lambda_{\Pi \pi}}{\tau_R} &=&\frac{1}{3} - c_s^2 - M\gamma_2^{(2)}.
\ea
Converting $M$ to the speed of sound and taking the $m_0 \to 0$ limit, one gets
\ba
\label{bulk-del-boltz}
\frac{\delta_{\Pi\Pi}}{\tau_R} &\approx&
\frac{4}{3} \left(1+ \frac{T^2}{2}\frac{dm_{\rm eq}^2}{dT^2}\frac{J_{-1,0}}{J_{1,0}}\right)
+ \left(\frac{1}{3}-c_s^2 \right) \nn\\
&&
+ \gamma_2^{(0)} \left({J_{3,0}\over J_{1,0}} - T^2\frac{dm_{\rm eq}^2}{dT^2} \right)^2
\left( \frac{1}{3}-c_s^2 \right)^2, \\
\label{bulk-lam-boltz}
\frac{\lambda_{\Pi \pi}}{\tau_R} &\approx&
\left( 1+ \gamma_2^{(2)} \left({ J_{3,0} \over J_{1,0}} - T^2\frac{dm_{\rm eq}^2}{dT^2} \right) \right) \left( \frac{1}{3}-c_s^2 \right) ,\qquad
\ea
where $\gamma_2^{(0)}$ and $\gamma_2^{(2)}$ are calculated in Appendix \ref{moments} and are given by Eqs.~(\ref{gam-1a-bose}) and (\ref{gam-2a-bose}), respectively. When inserted, one gets the leading orders of the coefficients,
\ba
\label{bulk-del-boltz-1}
\frac{\delta_{\Pi\Pi}}{\tau_R}
&\approx&
\frac{4}{3}\left(1+ \frac{3}{8\pi}\frac{m_{\rm eq}}{T}
- \frac{3}{16\pi^2} \frac{m^2_{\rm eq}}{T^2} \right) \nn \\
&&
+ \left(\frac{1}{3}-c_s^2 \right) \left(\frac{6}{15\pi} \frac{T}{m_{\rm eq}} +1 \right) \nn \\
&&
+ 0.97 \left( \frac{1}{3}-c_s^2 \right)^2 \frac{T^4}{m_{\rm eq}^4}, \\
\label{bulk-lam-boltz-1}
\frac{\lambda_{\Pi \pi}}{\tau_R} &\approx& 1.05 \left(\frac{1}{3} - c_s^2\right),
\ea
where the numerical factors come from evaluating $g_0 \left(12/15 \right)^2 \approx 0.97$ and $\left(1+12g_2/15 \right) \approx 1.05$ with $g_0$ and $g_2$ given by Eqs.~(\ref{g0}) and (\ref{g2}).
As seen, the coefficient $\delta_{\Pi\Pi}/\tau_R$ is affected by the soft physics even more strongly than the bulk viscosity, as is manifested by the factors $1/m_{\rm eq}$ and $1/m_{\rm eq}^4$.
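To illustrate how the evolution equation (\ref{Pi-eom-3}) behaves in practice, the toy integration below relaxes the bulk pressure at constant expansion rate toward its stationary value (the Navier-Stokes value $-\zeta\theta$, corrected by the $\delta_{\Pi\Pi}$ term), with the shear coupling neglected. All parameter values ($\tau_R$, $\zeta$, $\delta_{\Pi\Pi}$, $\theta$) are illustrative placeholders, not values derived in the text.
\begin{verbatim}
tau_R = 1.0        # relaxation time (arbitrary units)
zeta = 0.05        # bulk viscosity (placeholder)
delta_PP = 1.2     # delta_{Pi Pi} (placeholder, same units as tau_R)
theta = 0.3        # constant expansion rate

def rhs(Pi):
    # Eq. (Pi-eom-3) with the shear coupling dropped
    return -Pi / tau_R - (zeta / tau_R) * theta \
           - (delta_PP / tau_R) * theta * Pi

Pi, dt = 0.0, 1.0e-3
for _ in range(int(10 * tau_R / dt)):   # evolve for ten relaxation times
    Pi += dt * rhs(Pi)

# relaxed value vs the fixed point -zeta*theta/(1 + delta_PP*theta)
print(Pi, -zeta * theta / (1.0 + delta_PP * theta))
\end{verbatim}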
Repeating the same analysis for the Boltzmann gas, which leads simply to replacement of the thermodynamic functions $J_{n,q} \to I_{n,q}$, one obtains the same value of $\zeta_{\rm Boltz}/\tau_R$ as within the Chapman-Enskog approach, Eq.~(\ref{zeta-boltzmann}). The other two coefficients have the forms (\ref{bulk-del-boltz}) and (\ref{bulk-lam-boltz}) with $\gamma_2^{(0)}$ and $\gamma_2^{(2)}$ given by Eqs.~(\ref{gam-1a}) and (\ref{gam-2a}). The explicit expressions in the $m_0 \to 0$ limit are then found to be
\ba
\label{bulk-del-boltz2}
\frac{\delta_{\Pi\Pi,{\rm Boltz}}}{\tau_R}
&\approx&
\frac{4}{3}\left( 1+\frac{1}{4} \frac{m^2_{\rm eq,\;c}}{T^2} \right)
+5 \left(\frac{1}{3} - c_s^2\right) \nn\\
&&
-10.8\left(\frac{1}{3} - c_s^2\right)^2 , \\
\label{bulk-lam-boltz2}
\frac{\lambda_{\Pi \pi,{\rm Boltz}}}{\tau_R} &\approx& 1.6 \left(\frac{1}{3} - c_s^2\right) ,
\ea
where the numerical factors were found from $144g_{0c} \approx -10.8$ and $\left(1+12g_{2c} \right) \approx 1.6$ with $g_{0c}$ and $g_{2c}$ given below Eq.~(\ref{gam-2a}). One can also see from
Eqs.~(\ref{bulk-del-bose}) and (\ref{bulk-lam-bose}) that when thermal quantities are neglected and the constant mass is kept, we reproduce $\zeta_{{\rm Boltz}}/\tau_R$, $\lambda_{\Pi \pi,{\rm Boltz}}/\tau_R$, and the first two terms of $\delta_{\Pi \Pi,{\rm Boltz}}/\tau_R$ from Ref.~\cite{Denicol:2014vaa}.
\section{Summary and conclusions}
\label{sec-summary}
In this paper we analyzed the influence of the mean field on fluid
dynamics in weakly interacting systems of a single species, where all
occurring masses are much smaller than the system's temperature. Our main
attention was paid to proper determination of the form of the
nonequilibrium correction to the distribution function, which arises
because the mass varies with the temperature. The correction guarantees a consistent
hydrodynamic description which satisfies thermodynamic relations and the conservation of energy
and momentum, and furthermore gives an accurate determination of the temperature through Landau matching.
The correction
plays a central role in studying thermal dependence of bulk viscous
dynamics. Therefore, we further considered the Anderson-Witting model
of the Chapman-Enskog approach and
computed $\zeta/\tau_R$ of single-component Bose-Einstein and Boltzmann
gases. We also derived the evolution equation for the bulk pressure in the
14-moment approximation and obtained relevant transport coefficients. Both
methods provide the same result for~$\zeta/\tau_R$.
The ratio $\zeta/\tau_R$ obtained for the Boltzmann statistics behaves as
expected, that is, it is given by the nonconformality parameter squared.
When thermal effects are omitted, we reproduce the result from
Refs.~\cite{Denicol:2014vaa,Florkowski:2015lra}. On the other hand, for very high
temperatures the ratio gets dominated by the $\beta_\lambda$ function. We
also see that in spite of breaking conformal invariance, bulk viscosity
vanishes at some critical temperature where $c_s^2=1/3$. In the case of the Bose-Einstein gas, we
have shown that the leading order term of $\zeta/\tau_R$ differs from what is expected
if we neglect either the physical mass or thermal effects. The ratio in this case is strongly influenced by the infrared
physics, which introduces an additional energy-scale-dependent factor $T/m_x$. We suspect that the relaxation time approximation
used here does not include the entire microscopic physics of a quantum gas; in
particular, it is insensitive to phenomena at the soft scale.
Therefore, we conclude that to compute the bulk viscosity over its
relaxation time for quantum gases of Bose-Einstein statistics, one needs to
use more advanced methods and solve an integral equation. This can be done
starting from either the linearized Boltzmann equation or Kubo formulas; for
the latter, note that the formula for the bulk relaxation time was recently found in Ref.~\cite{Czajka:2017bod}.
\section*{Acknowledgments}
It is a pleasure to thank G. Denicol, J. Kapusta, and J.-F. Paquet for useful discussions. This work is supported in part by the Natural Sciences and Engineering Research Council of Canada, and by the US DOE, under Contract No. DE-SC0012704. In addition, we gratefully acknowledge support from the program Mobility Plus of the
Polish Ministry of Science and Higher Education (A. C.), from the Canada Council for the Arts through its Killam Research Fellowship program (C. G.), and from the Goldhaber Distinguished Fellowship program from Brookhaven Science Associates (C. S.).
In Statistical Mechanics the Ising model is one of the most important models. One possible extension of the Ising model is the Potts model. An excellent review is \cite{W}. Recent works about the Potts model are \cite{CS}, \cite{BR} and \cite{CS2}. The Potts model can be described as follows: consider $N$ particles, and to each particle $i$ we associate a number $\sigma_i\in\Sigma=\{1,2,\ldots,r\}$, called the spin of the particle $i$, where $r$ is a positive integer bigger than one. The vector $\sigma=(\sigma_1,\ldots,\sigma_N)\in\Sigma^N$ is called the system configuration. In the nineteenth century people started using one function to describe a physical system: the energy function. In the Potts model the energy function is:
\begin{align}
E(\sigma)=\sum_{1\leq i<j\leq N} J_{i,j}\delta(\sigma_i,\sigma_j)+\sum_{i=1}^NB_i\delta(1,\sigma_i)
\end{align}
where the numbers $(J_{i,j})_{1\leq i<j\leq N},(B_i)_{1\leq i\leq N}$ will be considered non-negative real numbers and the function $\delta(x,y)=1$ if $x=y$ and zero otherwise. The $J_{i,j}$ represents the exchange interaction between the particles $i$ and $j$. As $J_{i,j}\geq 0$ we have only ferromagnetic interactions. The $B_i$ is the external field in the direction of the particle $i$. The probabilistic model is defined by the triple $(\Sigma^N,\mathcal{P}(\Sigma^N),\mathbb{P})$ where
\begin{align}
\mathbb{P}(\sigma)=\frac{e^{E(\sigma)}}{Z_N}
\end{align}
the normalization constant $Z_N=\sum_{\sigma\in\Sigma^N}e^{E(\sigma)}$ is called the partition function. We obtain the Ising model if we take $r=2$. Let $\underline{J}=(J_{1,2},\ldots,J_{N-1,N})$ and $\underline{B}=(B_1,\ldots,B_N)$; we define the local magnetization as:
\begin{align}
m_i(\underline{J},\underline{B})=<\delta(1,\sigma_i)>=\sum_{\sigma\in\Sigma^N}\delta(1,\sigma_i)\mathbb{P}(\sigma).
\end{align}
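For small systems the local magnetization can be computed exactly by enumerating all $r^N$ configurations, which is useful for testing the statements below. The following Python sketch does this brute-force computation; the couplings and fields are arbitrary non-negative test values. Since the enumeration grows as $r^N$, this is only practical for small systems.
\begin{verbatim}
import itertools, math

def magnetization(i, N, r, J, B):
    # m_i = <delta(1, sigma_i)> under the Gibbs weight exp(E(sigma))
    Z, m = 0.0, 0.0
    for s in itertools.product(range(1, r + 1), repeat=N):
        E = sum(J[a][b] * (s[a] == s[b])
                for a in range(N) for b in range(a + 1, N))
        E += sum(B[a] * (s[a] == 1) for a in range(N))
        w = math.exp(E)
        Z += w
        m += w * (s[i] == 1)
    return m / Z

N, r = 4, 3
J = [[0.2] * N for _ in range(N)]   # ferromagnetic couplings J_{i,j} >= 0
B = [0.1, 0.3, 0.0, 0.2]            # external fields B_i >= 0
print(magnetization(0, N, r, J, B))
\end{verbatim}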
The following result, known as the GHS inequality, was proved in \cite{GHS}.
\begin{theorem}\label{T:first} Let $r$ be equal to two. Then, for all $i,j,k\in\{1,\ldots,N\}$
\begin{align}
\frac{\partial^2 m_i(\underline{J},\underline{B})}{\partial B_j\partial B_k}\leq 0.
\end{align}
\end{theorem}
Alternative proofs can be found in \cite{EM} or \cite{L}. Naturally, we can ask what happens when $r$ is bigger than two. This work gives an answer to this question.
In fact, we prove:
\begin{theorem}\label{T:convexity}(GHS inequality for the Potts model) Given vectors $\underline{J}$ and $\underline{B}$ with non-negative entries, the local magnetization has the following properties:
\begin{itemize}
\item[i)] if $r=2$ then
\begin{align}
\frac{\partial^2 m_i(\underline{J},\underline{B})}{\partial B_j\partial B_k}&\leq 0
\end{align}
\item[ii)] if $r\geq 3$ then
\begin{align}
\frac{\partial^2 m_i(\underline{J},\underline{B})}{\partial B_j\partial B_k}&\geq 0.
\end{align}
\end{itemize}
\end{theorem}
Item i) is Theorem~\ref{T:first}, which we obtain by a different method.
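The theorem can be checked numerically on small systems by a finite-difference evaluation of the mixed second derivative. In the sketch below (with arbitrary non-negative test couplings and fields) the computed value is non-positive for $r=2$ and non-negative for $r=3$, in agreement with the statement above.
\begin{verbatim}
import itertools, math

def m_i(i, N, r, J, B):
    Z, m = 0.0, 0.0
    for s in itertools.product(range(1, r + 1), repeat=N):
        E = sum(J[a][b] * (s[a] == s[b])
                for a in range(N) for b in range(a + 1, N))
        E += sum(B[a] * (s[a] == 1) for a in range(N))
        w = math.exp(E)
        Z += w
        m += w * (s[i] == 1)
    return m / Z

def d2m(i, j, kk, N, r, J, B, h=1e-3):
    # central finite difference for d^2 m_i / dB_j dB_kk
    def f(bj, bk):
        Bp = list(B)
        Bp[j] += bj
        Bp[kk] += bk
        return m_i(i, N, r, J, Bp)
    return (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4.0 * h * h)

N = 3
J = [[0.3] * N for _ in range(N)]
B = [0.2, 0.4, 0.1]
for r in (2, 3):
    print(r, d2m(0, 1, 2, N, r, J, B))  # <= 0 for r = 2, >= 0 for r = 3
\end{verbatim}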
The proof of the concavity of the magnetization for a model of spins that are
real-valued random variables was obtained in \cite{E}. In the work \cite{EMN} the GHS inequality was proved for families of random variables which
arise in certain ferromagnetic models of statistical mechanics and quantum
field theory.
In the next section, we obtain an expression for the second derivative of the local magnetization. In the following section, we determine that this derivative can be seen as a polynomial. After that, we verify that this polynomial can have almost all the variables separated. In the last section, we evaluate some coefficients and prove the GHS inequality for the Potts model.
\section{The second derivative of the local magnetization}
The first step is to evaluate the second derivative of the local magnetization.
\begin{proposition}\label{P:second} Let $r$ be a positive integer bigger than one, and let $\underline{J}$ and $\underline{B}$ be vectors with non-negative entries. Then, we have
\begin{align}
&\frac{\partial^2 m_i(\underline{J},\underline{B})}{\partial B_j\partial B_k}=<\delta(1,\sigma_i)\delta(1,\sigma_j)\delta(1,\sigma_k)>-<\delta(1,\sigma_i)\delta(1,\sigma_k)><\delta(1,\sigma_j)>-\nonumber\\
&-<\delta(1,\sigma_i)\delta(1,\sigma_j)><\delta(1,\sigma_k)>-<\delta(1,\sigma_k)\delta(1,\sigma_j)><\delta(1,\sigma_i)>+\nonumber\\&+2<\delta(1,\sigma_i)><\delta(1,\sigma_k)><\delta(1,\sigma_j)>.\nonumber
\end{align}
\end{proposition}
\begin{proof}If we take the first derivative we have:
\begin{align}
\frac{\partial m_i(\underline{J},\underline{B})}{\partial B_k}=\sum_{\sigma\in\Sigma^N}\delta(1,\sigma_i)\delta(1,\sigma_k)\mathbb{P}(\sigma)-\sum_{\sigma\in\Sigma^N}\sum_{\hat\sigma\in\Sigma^N}\delta(1,\sigma_i)\delta(1,\hat\sigma_k)\mathbb{P}(\sigma)\mathbb{P}(\hat\sigma)\nonumber
\end{align}
Now, we take the second derivative:
\begin{align}
\frac{\partial^2 m_i(\underline{J},\underline{B})}{\partial B_j\partial B_k}&=\sum_{\sigma\in\Sigma^N}\delta(1,\sigma_i)\delta(1,\sigma_k)\delta(1,\sigma_j)\mathbb{P}(\sigma)-\nonumber\\
&-\sum_{\sigma\in\Sigma^N}\sum_{\hat\sigma\in\Sigma^N}\delta(1,\sigma_i)\delta(1,\sigma_k)\delta(1,\hat\sigma_j)\mathbb{P}(\sigma)\mathbb{P}(\hat\sigma)\nonumber\\
&-\sum_{\sigma\in\Sigma^N}\sum_{\hat\sigma\in\Sigma^N}\delta(1,\sigma_i)\delta(1,\hat\sigma_k)\delta(1,\sigma_j)\mathbb{P}(\sigma)\mathbb{P}(\hat\sigma)\nonumber\\
&-\sum_{\sigma\in\Sigma^N}\sum_{\hat\sigma\in\Sigma^N}\delta(1,\sigma_i)\delta(1,\hat\sigma_k)\delta(1,\hat\sigma_j)\mathbb{P}(\sigma)\mathbb{P}(\hat\sigma)\nonumber\\
&+2\sum_{\sigma\in\Sigma^N}\sum_{\hat\sigma\in\Sigma^N}\sum_{\check\sigma\in\Sigma^N}\delta(1,\sigma_i)\delta(1,\hat\sigma_k)\delta(1,\check\sigma_j)\mathbb{P}(\sigma)\mathbb{P}\hat\sigma)\mathbb{P}(\check\sigma)\nonumber
\end{align}
\qed
\end{proof}
We pick up the idea introduced in \cite{GHS} and we consider a ghost spin $\sigma_0$. Consequently, we consider the energy as
\begin{align}
E(\sigma)=\sum_{1\leq i<j\leq N} J_{i,j}\delta(\sigma_i,\sigma_j)+\sum_{i=1}^NB_i\delta(\sigma_0,\sigma_i)=\sum_{0\leq i<j\leq N} J_{i,j}\delta(\sigma_i,\sigma_j)\nonumber
\end{align}
where $J_{0,i}=B_i$; if we set $i=1,j=2,k=3$ then the second derivative becomes:
\begin{align}
& \frac{\partial^2 m_1(\underline{J},\underline{B})}{\partial B_2\partial B_3}=<\delta(\sigma_0,\sigma_1)\delta(\sigma_0,\sigma_2)\delta(\sigma_0,\sigma_3)>-\nonumber\\&-<\delta(\sigma_0,\sigma_1)\delta(\sigma_0,\sigma_3)><\delta(\sigma_0,\sigma_2)>-
<\delta(\sigma_0,\sigma_1)\delta(\sigma_0,\sigma_2)><\delta(\sigma_0,\sigma_3)>-\nonumber\\&-<\delta(\sigma_0,\sigma_3)\delta(\sigma_0,\sigma_2)><\delta(\sigma_0,\sigma_1)>+\nonumber\\&+2<\delta(\sigma_0,\sigma_1)><\delta(\sigma_0,\sigma_2)><\delta(\sigma_0,\sigma_3)>.\nonumber
\end{align}
We will study the sign of this last expression. By a remark in \cite{GHS}, this sign determines the sign of the second derivative in Proposition~\ref{P:second}.
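The ghost-spin rewriting can be verified by brute force: adding a spin $\sigma_0$ coupled to each particle $i$ with $J_{0,i}=B_i$ reproduces the original expectations, $<\delta(\sigma_0,\sigma_i)>=<\delta(1,\sigma_i)>$. The sketch below checks this on a small system with arbitrary test parameters.
\begin{verbatim}
import itertools, math

N, r = 3, 3
J = [[0.25] * N for _ in range(N)]
B = [0.1, 0.5, 0.3]

def avg_original(i):
    # <delta(1, sigma_i)> in the original N-particle model
    Z, m = 0.0, 0.0
    for s in itertools.product(range(1, r + 1), repeat=N):
        E = sum(J[a][b] * (s[a] == s[b])
                for a in range(N) for b in range(a + 1, N))
        E += sum(B[a] * (s[a] == 1) for a in range(N))
        w = math.exp(E)
        Z += w
        m += w * (s[i] == 1)
    return m / Z

def avg_ghost(i):
    # <delta(sigma_0, sigma_i)> with the ghost spin s[0] = sigma_0
    Z, m = 0.0, 0.0
    for s in itertools.product(range(1, r + 1), repeat=N + 1):
        E = sum(J[a - 1][b - 1] * (s[a] == s[b])
                for a in range(1, N + 1) for b in range(a + 1, N + 1))
        E += sum(B[a - 1] * (s[a] == s[0]) for a in range(1, N + 1))
        w = math.exp(E)
        Z += w
        m += w * (s[i + 1] == s[0])
    return m / Z

for i in range(N):
    print(avg_original(i), avg_ghost(i))   # the two columns agree
\end{verbatim}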
\section{The recursive property of the sign of the second derivative}
We define
$$H=H(\sigma)=\exp{(\displaystyle\sum_{0\leq i<j\leq N}J_{i,j}\delta(\sigma_i,\sigma_j))}.$$
Thus, we focus on the following term:
\begin{align}\label{E:def_I}
&I(\underline{J},\underline{B})\colon= Z_N^3\frac{\partial^2 m_1(\underline{J},\underline{B})}{\partial B_2\partial B_3}=\sum_{\begin{array}{c}\sigma\end{array}}H\sum_{\begin{array}{c}\sigma\end{array}}H\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_1=\sigma_2=\sigma_3\end{array}}H-\nonumber\\&\sum_{\begin{array}{c}\sigma\end{array}}H\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_1=\sigma_2\end{array}}H\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_3\end{array}}H-\nonumber\\&-\sum_{\begin{array}{c}\sigma\end{array}}H\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_1=\sigma_3\end{array}}H\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_2\end{array}}H\nonumber\\&-\sum_{\begin{array}{c}\sigma\end{array}}H\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_2=\sigma_3\end{array}}H\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_1\end{array}}H+\nonumber\\&+2\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_1\end{array}}H\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_2\end{array}}H\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_3\end{array}}H.
\end{align}
Given a positive integer $s$, let $\mathcal{A}_{s}$ be the set of matrices $A=(a_{i,j})$ with indices $1\leq i\leq \binom{N+1}{2}$ and $ 1\leq j\leq 3$, where $a_{i,j}\in\{0,1\}$ if the index $i$ is greater than or equal to $\binom{N+1}{2}-s+1$ and less than or equal to $\binom {N+1}{2}$, and $a_{i,j}=0$ otherwise. We consider the pairs of particles in a specific order $\mathcal{O}=\{(0,1),(0,2),\ldots,(N-1,N)\}=\{o_1,\ldots o_{\binom{N+1}{2}}\}$. The row $p$ in $A$ is associated to the pair of particles $o_p$. Let $H_{s}$ be $H_{s}(\sigma)=\exp{(\displaystyle\sum_{(i,j)\in \mbox{ the first }\binom{N+1}{2}-s\mbox{ terms in } \mathcal{O}}J_{i,j}\delta(\sigma_i,\sigma_j))}$. For each matrix $A\in \mathcal{A}_s$ we associate the term
$I_A=I_{1,A}+I_{2,A}-I_{3,A}-I_{4,A}+I_{5,A}$ with
\begin{align}\label{E:def_I12345}
&I_{1,A}=\big(\sum_{\begin{array}{c}\sigma\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,1}=1\end{array}}H_s\sum_{\begin{array}{c}\sigma\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,2}=1\end{array}}H_s\big)\times\nonumber\\&\times\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_1=\sigma_2=\sigma_3\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,3}=1\end{array}}H_s\nonumber
\end{align}
\begin{align}
I_{2,A}&=\big(\sum_{\begin{array}{c}\sigma\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,1}=1\end{array}}H_s\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_1=\sigma_3\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,2}=1\end{array}}H_s\big)\times\nonumber\\&\times
\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_2\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,3}=1\end{array}}H_s\nonumber
\end{align}
\begin{align}
I_{3,A}&=\big(\sum_{\begin{array}{c}\sigma\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,1}=1\end{array}}H_s\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_1=\sigma_3\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,2}=1\end{array}}H_s\big)\times\nonumber\\&\times\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_2\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,3}=1\end{array}}H_s\nonumber
\end{align}
\begin{align}
I_{4,A}&=\big(\sum_{\begin{array}{c}\sigma\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,1}=1\end{array}}H_s\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_2=\sigma_3\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,2}=1\end{array}}H_s\big)\times\nonumber\\
&\times\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_1\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,3}=1\end{array}}H_s\nonumber
\end{align}
\begin{align}
I_{5,A}&=2\big(\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_1\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,1}=1\end{array}}H_s\sum_{\begin{array}{c}\sigma\\\sigma_0=\sigma_2\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,2}=1\end{array}}H_s\big)\times\nonumber\\&\times\sum_{\begin{array}{c}\sigma\\
\sigma_0=\sigma_3\\\sigma_i=\sigma_j\mbox{ if }\\o_r=(i,j),a_{r,3}=1\end{array}}H_s.\nonumber
\end{align}
This means that we have many constraints of the type $\sigma_i=\sigma_j$, which are defined by the element $A$.
For each row $p$ of $A$ we have a pair $o_p=(i,j)$; we consider its weight defined by $n(p)\colon=a(p,1)+a(p,2)+a(p,3)$ and we also designate $J_p\colon=J_{i,j}$. Now, we can announce the following result, which can be interpreted as saying that $I(\underline{J},\underline{B})$ is a polynomial in the variables $X_p\colon=(e^{J_p}-1)$ with $p=1,\ldots,\binom{N+1}{2}$.
\begin{proposition}Let $\underline{J}$ and $\underline{B}$ be vectors with non-negative entries and let $s$ be a positive integer that belongs to $\{1,\ldots,\binom{N+1}{2}\}$. Then
\begin{align}
I(\underline{J},\underline{B})=\sum_{A\in \mathcal{A}_s}I_AX_{\binom{N+1}{2}-s+1}^{n(\binom{N+1}{2}-s+1)}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}
\end{align}
\end{proposition}
A particular case in which we are interested is $s=\binom{N+1}{2}$, for which we have \begin{align}\label{E:simplificada}
I(\underline{J},\underline{B})=\sum_{A\in \mathcal{A}_{\binom{N+1}{2}}}I_AX_{1}^{n(1)}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})};
\end{align}
we remark that in this case $H_{\binom{N+1}{2}}=1$.
\begin{proof}
The proof is by induction on $s$. Let $s$ be equal to one. We observe that
\begin{align}\label{E:eq1}
\sum_{\begin{array}{c}\sigma\end{array}}H&=\sum_{\begin{array}{c}\sigma\\\sigma_{N-1}=\sigma_{N}\end{array}}H_1e^{J_{\binom{N+1}{2}}}+\sum_{\begin{array}{c}\sigma\\\sigma_{N-1}\neq\sigma_N\end{array}}H_1\nonumber\\&=
\sum_{\begin{array}{c}\sigma\\\sigma_{N-1}=\sigma_{N}\end{array}}H_1e^{J_{\binom{N+1}{2}}}+\sum_{\begin{array}{c}\sigma\end{array}}H_1-\sum_{\begin{array}{c}\sigma\\\sigma_{N-1}=\sigma_N\end{array}}H_1\nonumber\\
&=\sum_{\begin{array}{c}\sigma\\\sigma_{N-1}=\sigma_{N}\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\end{array}}H_1.
\end{align}
In the second step we have just used the fact that $$\sum_{\begin{array}{c}\sigma\end{array}}H_1=\sum_{\begin{array}{c}\sigma\\\sigma_{N-1}=\sigma_N\end{array}}H_1+\sum_{\begin{array}{c}\sigma\\\sigma_{N-1}\neq\sigma_N\end{array}}H_1.$$
We use the notation $[i_1,\ldots,i_n]$ if $\sigma_{i_1}=\ldots=\sigma_{i_n}$.
Then, if we substitute expression~(\ref{E:eq1}) for each one of the factors in equation~(\ref{E:def_I}), we obtain:
\begin{align}
I(\underline{J},\underline{B})=&\big(\sum_{\begin{array}{c}\sigma\\\, [N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\end{array}}H_1\big)\times\nonumber\\&\times\big(\sum_{\begin{array}{c}\sigma\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,1,2,3]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,1,2,3]\end{array}}H_1\big)-\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\big(\sum_{\begin{array}{c}\sigma\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,1,2]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,1,2]\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,3]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,3]\end{array}}H_1\big)-\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\big(\sum_{\begin{array}{c}\sigma\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,1,3]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,1,3]\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,2]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,2]\end{array}}H_1\big)-\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\big(\sum_{\begin{array}{c}\sigma\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,2,3]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,2,3]\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,1]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,1]\end{array}}H_1\big)+\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&+2\big(\sum_{\begin{array}{c}\sigma\\\,[0,1]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,1]\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,2]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,2]\end{array}}H_1\big)\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\big(\sum_{\begin{array}{c}\sigma\\\,[0,3]\\\,[N-1,N]\end{array}}H_1(e^{J_{\binom{N+1}{2}}}-1)+\sum_{\begin{array}{c}\sigma\\\,[0,3]\end{array}}H_1\big).\nonumber
\end{align}
Thus, the result follows. Now, suppose that we have proved the formula for a positive integer $s$; that is,
\begin{align}
I(\underline{J},\underline{B})=\sum_{A\in \mathcal{A}_s}I_AX_{\binom{N+1}{2}-s+1}^{n(\binom{N+1}{2}-s+1)}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}.\label{E:def_Is}
\end{align}
Let $p=\binom{N+1}{2}-s$ with $o_{p}=(i,j)$. Then, since
\begin{align}
H_s=H_{s+1}e^{J_p\delta(\sigma_i,\sigma_j)},
\end{align}
we have that
\begin{align}
\sum_{\begin{array}{c}\sigma\\\mbox{condition on the }\sigma_k\mbox{'s}\end{array}}H_s&=\sum_{\begin{array}{c}\sigma\\\mbox{condition on the }\sigma_k\mbox{'s}\\\sigma_i=\sigma_j\end{array}}H_{s+1}e^{J_p}+\nonumber\\&+\sum_{\begin{array}{c}\sigma\\\mbox{condition on the }\sigma_k\mbox{'s}\\\sigma_i\neq\sigma_j\end{array}}H_{s+1}\nonumber\\
&=\sum_{\begin{array}{c}\sigma\\\mbox{condition on the }\sigma_k\mbox{'s}\\\sigma_i=\sigma_j\end{array}}H_{s+1}(e^{J_p}-1)+\nonumber\\&+\sum_{\begin{array}{c}\sigma\\\mbox{condition on the }\sigma_k\mbox{'s}\end{array}}H_{s+1}.\label{E:eq2}
\end{align}
In the last step, we used the fact that
\begin{align}
\sum_{\begin{array}{c}\sigma\\\mbox{condition on the }\sigma_k\mbox{'s}\\\sigma_i\neq\sigma_j\end{array}}H_{s+1}&=\sum_{\begin{array}{c}\sigma\\\mbox{condition on the }\sigma_k\mbox{'s}\end{array}}H_{s+1}-\nonumber\\&-\sum_{\begin{array}{c}\sigma\\\mbox{condition on the }\sigma_k\mbox{'s}\\\sigma_i=\sigma_j\end{array}}H_{s+1}.\nonumber
\end{align}
If we use expression~(\ref{E:eq2}) in the definition of the term $I_A$ and replace the result in formula~(\ref{E:def_Is}), we prove the proposition.
\qed
\end{proof}
\section{The separation formula}
We now obtain the separation formula for the variables $X_i$. Let $O_1=O_2\cup O_3$, where $O_2=\{(1,2),(1,3),(2,3)\}$ and $O_3=\{(0,1),(0,2),(0,3)\}$. The pairs of particles in $O_2$ will play a particularly important role; we denote them by $p_1=(1,2)$, $p_2=(1,3)$ and $p_3=(2,3)$.
\begin{proposition}(Separation formula)
\begin{align}
I(\underline{J},\underline{B})&=\prod_{\begin{array}{c}p=1\\o_p\notin O_1 \end{array}}^{\binom{N+1}{2}}(1+r^{-1}X_p)^3\times\nonumber\\&\times\prod_{\begin{array}{c}p=1\\o_p\in O_3 \end{array}}^{\binom{N+1}{2}}(1+(1+2r^{-1})X_p+(2r^{-1}+r^{-2})X_p^2+r^{-2}X_p^3)\times\nonumber\\&\times\sum_{\begin{array}{c}A\in \mathcal{A}_{\binom{N+1}{2}}\\\mbox{if }o_p\notin O_2\\a(p,1)=a(p,2)=a(p,3)=0\end{array}}I_AX_{p_1}^{n(p_1)}X_{p_2}^{n(p_2)}X_{p_3}^{n(p_3)}.\nonumber
\end{align}
\end{proposition}
\begin{proof}
We start from expression~(\ref{E:simplificada}). Given $A$, we define $\hat A$ as the matrix whose entries equal those of $A$ in all rows except row $p$, where we set $\hat a(p,1)=\hat a(p,2)=\hat a(p,3)=0$.
We first consider the case $o_p\notin O_1$. Let $n(p)$ be the weight of row $p$ of $A$. We observe the following relations: if $n(p)=0$ then $I_A=I_{\hat A}$; if $n(p)=1$ then $I_A=r^{-1}I_{\hat A}$; if $n(p)=2$ then $I_A=r^{-2}I_{\hat A}$; and if $n(p)=3$ then $I_A=r^{-3}I_{\hat A}$. We carry out the calculation for $n(p)=1$. Let $p$ be the pair $(i,j)$; then $n(p)=1$ means that the condition $\sigma_i=\sigma_j$ appears in one of the terms $I_{1,A},I_{2,A},I_{3,A},I_{4,A}$ and $I_{5,A}$ defined in (\ref{E:def_I12345}). Furthermore, this is the only additional constraint if we compare the conditions implied by $A$ and $\hat A$. These constraints define a partition of the elements $\{0,1,\ldots,N\}$, where two elements $\ell,\hat\ell$ belong to the same subset if $\sigma_{\ell}=\sigma_{\hat\ell}$. When we sum up $$\displaystyle\sum_{\begin{array}{c}\sigma\\\mbox{ some constraints of the type}\\\sigma_{\ell}=\sigma_{\hat\ell}\end{array}}1$$ we obtain $r^{S}$, where $S$ is the number of subsets of the partition defined by the constraints. In the case under consideration, we have one additional constraint when comparing $A$ with $\hat A$. Thus, the partition implied by the constraints of $A$ has one subset fewer than the partition implied by $\hat A$. Further, as $(i,j)\notin O_1$, the constraint $\sigma_i=\sigma_j$ is a genuinely new constraint; it is not already implied by an existing condition such as $\sigma_0=\sigma_1=\sigma_2=\sigma_3$ in $I_{1,A}$. Thus, $I_A=r^{-1}I_{\hat A}$.
We observe that there exist eight elements of $\mathcal{A}_{\binom{N+1}{2}}$ associated with the same $\hat A$. They differ from one another by the possible values of $a(p,1)$, $a(p,2)$ and $a(p,3)$. We have one element with $n(p)=0$, three elements with $n(p)=1$, three elements with $n(p)=2$ and one element with $n(p)=3$. Hence, the formula
\begin{align}
I(\underline{J},\underline{B})&=\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=0\end{array}}I_{A}X_{1}^{n(1)}\ldots X_{p}^{0}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}+\nonumber\\&+\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=1\end{array}}I_{A}X_{1}^{n(1)}\ldots X_{p}^{1}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}+\nonumber\\&+\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=2\end{array}}I_{A}X_{1}^{n(1)}\ldots X_{p}^{2}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}+\nonumber\\&+\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=3\end{array}}I_{A}X_{1}^{n(1)}\ldots X_{p}^{3}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}\nonumber
\end{align}
can be simplified
\begin{align}
I(\underline{J},\underline{B})&=
\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=0\end{array}}I_{A}X_{1}^{n(1)}\ldots \hat X_{p}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}+\nonumber\\&+3r^{-1}\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=0\end{array}}I_{A}X_{1}^{n(1)}\ldots \hat X_{p}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}X_{p}+\nonumber\\&+3r^{-2}\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=0\end{array}}I_{A}X_{1}^{n(1)}\ldots \hat X_{p}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}X_{p}^{2}+\nonumber\\&+r^{-3}\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=0\end{array}}I_{A}X_{1}^{n(1)}\ldots \hat X_{p}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}X_{p}^{3}\nonumber
\end{align}
where $\hat X_{p}$ means that this factor is omitted. After further simplification, we obtain
\begin{align}
I(\underline{J},\underline{B})&=
\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=0\end{array}}I_{A}X_{1}^{n(1)}\ldots \hat X_{p}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}(1+3r^{-1}X_p+\nonumber\\&+3r^{-2}X_p^2+r^{-3}X_p^3)\nonumber\\&
= \sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p)=0\end{array}}I_{A}X_{1}^{n(1)}\ldots \hat X_{p}\ldots X_{\binom{N+1}{2}}^{n(\binom{N+1}{2})}(1+r^{-1}X_p)^3.\nonumber
\end{align}
We can repeat the procedure for every $p$ with $o_p\notin O_1$, which produces
\begin{align}
I(\underline{J},\underline{B})&=\prod_{\begin{array}{c}p=1\\o_p\notin O_1 \end{array}}^{\binom{N+1}{2}}(1+r^{-1}X_p)^3\times\nonumber\\&\times\sum_{\begin{array}{c}A\in \mathcal{A}_{\binom{N+1}{2}}\\\mbox{if }o_p\notin O_3\\a(p,1)=a(p,2)=a(p,3)=0\end{array}}I_A\prod_{\begin{array}{c}p=1\\o_p\in O_3\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)}.\label{E:parcial}
\end{align}
Now, we pay attention to the pairs that belong to $O_1$. Let $p_4$ be $(0,1)$. To a matrix $A\in\mathcal{A}_{\binom{N+1}{2}}$ with entries $a(p,1)=a(p,2)=a(p,3)=0$ for all $p\notin O_3$, we associate the matrix $\hat A$ with the same entries, except that in row $p_4$ we have $\hat a(p_4,1)=\hat a(p_4,2)=\hat a(p_4,3)=0$. We can show that if $n(p_4)=0$ then $I_A=I_{\hat A}$. Also, we obtain
\begin{align}
&\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=1\\n(p)=0\forall p\in O_3\end{array}}I_AX_{p_4}\prod_{\begin{array}{c}p=1\\o_p\in O_3\\p\neq p_4\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)} \nonumber\\&=(1+2r^{-1})X_{p_4}\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=0\\n(p)=0\forall p\in O_3\end{array}}I_A\prod_{\begin{array}{c}p=1\\o_p\in O_3\\p\neq p_4\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)}.\label{E:eq3}
\end{align}
For $n(p_4)=2$ we have
\begin{align}
&\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=2\\n(p)=0\forall p\in O_3\end{array}}I_AX_{p_4}^2\prod_{\begin{array}{c}p=1\\o_p\in O_3\\p\neq p_4\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)} \nonumber\\&=(2r^{-1}+r^{-2})X_{p_4}^2\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=0\\n(p)=0\forall p\in O_3\end{array}}I_A\prod_{\begin{array}{c}p=1\\o_p\in O_3\\p\neq p_4\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)}.\label{E:eq4}
\end{align}
And if $n(p_4)=3$ we get $I_{A}=r^{-2}I_{\hat A}$.
Again, there are eight matrices $A$ associated with the same $\hat A$. We first consider the three matrices with $n(p_4)=1$, i.e., the matrix $A_1$ with $a(p_4,1)=1$, the matrix $A_2$ with $a(p_4,2)=1$ and $A_3$ with $a(p_4,3)=1$. We have that $I_{1,A_1}=r^{-1}I_{1,\hat A}$, $I_{2,A_1}=r^{-1}I_{2,\hat A}$, $I_{3,A_1}=r^{-1}I_{3,\hat A}$, $I_{4,A_1}=r^{-1}I_{4,\hat A}$ and $I_{5,A_1}=I_{5,\hat A}$. The factor $r^{-1}$ does not appear in the last term because the condition $\sigma_0=\sigma_1$ holds there independently of the matrix $A_1$. We also obtain that $I_{1,A_2}=r^{-1}I_{1,\hat A}$, $I_{2,A_2}=I_{2,\hat A}$, $I_{3,A_2}=I_{3,\hat A}$, $I_{4,A_2}=r^{-1}I_{4,\hat A}$ and $I_{5,A_2}=r^{-1}I_{5,\hat A}$. Two of the terms lack the factor $r^{-1}$; this happens because, again, the condition $\sigma_0=\sigma_1$ holds independently of the matrix $A_2$. For $A_3$ we have $I_{1,A_3}=I_{1,\hat A}$, $I_{2,A_3}=r^{-1}I_{2,\hat A}$, $I_{3,A_3}=r^{-1}I_{3,\hat A}$, $I_{4,A_3}=I_{4,\hat A}$ and $I_{5,A_3}=r^{-1}I_{5,\hat A}$. Again, the terms without the factor $r^{-1}$ are those in which the constraint $\sigma_0=\sigma_1$ holds independently of the matrix $A_3$. Equation~(\ref{E:eq3}) is a consequence of these facts. Now we look at the three matrices with $n(p_4)=2$: let $A_4$ be the matrix with $a(p_4,1)=a(p_4,2)=1$, $A_5$ be the matrix with $a(p_4,1)=a(p_4,3)=1$, and $A_6$ the one with $a(p_4,2)=a(p_4,3)=1$. We obtain that $I_{1,A_4}=r^{-2}I_{1,\hat A}$, $I_{2,A_4}=r^{-1}I_{2,\hat A}$, $I_{3,A_4}=r^{-1}I_{3,\hat A}$, $I_{4,A_4}=r^{-2}I_{4,\hat A}$ and $I_{5,A_4}=r^{-1}I_{5,\hat A}$. The factor $r^{-1}$ appears when one of the two added conditions already holds independently of $A_4$, and the factor $r^{-2}$ appears when both conditions are new. By the same reasoning, for the matrix $A_5$ we obtain that $I_{1,A_5}=r^{-1}I_{1,\hat A}$, $I_{2,A_5}=r^{-2}I_{2,\hat A}$, $I_{3,A_5}=r^{-2}I_{3,\hat A}$, $I_{4,A_5}=r^{-1}I_{4,\hat A}$ and $I_{5,A_5}=r^{-1}I_{5,\hat A}$. And for the matrix $A_6$ we get $I_{1,A_6}=r^{-1}I_{1,\hat A}$, $I_{2,A_6}=r^{-1}I_{2,\hat A}$, $I_{3,A_6}=r^{-1}I_{3,\hat A}$, $I_{4,A_6}=r^{-1}I_{4,\hat A}$ and $I_{5,A_6}=r^{-2}I_{5,\hat A}$. This proves equation~(\ref{E:eq4}). As a consequence, we obtain from equation~(\ref{E:parcial}) that
\begin{align}
I(\underline{J},\underline{B})&=\prod_{\begin{array}{c}p=1\\o_p\notin O_1 \end{array}}^{\binom{N+1}{2}}(1+r^{-1}X_p)^3\times\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&\times\left[\sum_{\begin{array}{c}A\in \mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=0\\n(p)=0\mbox{ if }o_p\notin O_3\end{array}}I_A\prod_{\begin{array}{c}p=1\\o_p\in O_3\\p\neq p_4\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)}\right.+\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&+\sum_{\begin{array}{c}A\in \mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=1\\n(p)=0\mbox{ if }o_p\notin O_3\end{array}}I_AX_{p_4}\prod_{\begin{array}{c}p=1\\o_p\in O_3\\p\neq p_4\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)}+\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&+\sum_{\begin{array}{c}A\in \mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=2\\n(p)=0\mbox{ if }o_p\notin O_3\end{array}}I_AX_{p_4}^2\prod_{\begin{array}{c}p=1\\o_p\in O_3\\p\neq p_4\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)}+\nonumber
\end{align}
\begin{align}
\hspace{3cm}
&+\left.\sum_{\begin{array}{c}A\in \mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=3\\n(p)=0\mbox{ if }o_p\notin O_3\end{array}}I_AX_{p_4}^3\prod_{\begin{array}{c}p=1\\o_p\in O_3\\p\neq p_4\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)}\right].\nonumber
\end{align}
By the remarks above and equations~(\ref{E:eq3}) and~(\ref{E:eq4}), we conclude that
\begin{align}
I(\underline{J},\underline{B})&=\prod_{\begin{array}{c}p=1\\o_p\notin O_1 \end{array}}^{\binom{N+1}{2}}(1+r^{-1}X_p)^3\times\nonumber\\&\times
(1+(1+2r^{-1})X_{p_4}+(2r^{-1}+r^{-2})X_{p_4}^2+r^{-2}X_{p_4}^3)\times
\nonumber\\&\times\sum_{\begin{array}{c}A\in \mathcal{A}_{\binom{N+1}{2}}\\n(p_4)=0\\n(p)=0\mbox{ if }o_p\notin O_3\end{array}}I_A\prod_{\begin{array}{c}p=1\\p\neq p_4,\ o_p\in O_3\end{array}}^{\binom{N+1}{2}}X_{p}^{n(p)}.
\end{align}
We can repeat this last procedure for the other two elements of $O_3$, which completes the proof.
\qed
\end{proof}
\section{Some coefficients and the main theorem}
Now, we evaluate the constants $I_A$ for $A\in\mathcal{A}_{\binom{N+1}{2}}$ with $n(p)=0$ if $o_p\notin O_2$. These matrices can have nonzero entries only in three rows, so we represent them by $3\times 3$ matrices, where the first row is associated with the pair $(1,2)$, the second with the pair $(1,3)$, and the third with $(2,3)$. We also define the coefficients $$\alpha(x,y,z)=\sum_{\begin{array}{c}A\in\mathcal{A}_{\binom{N+1}{2}}\\n(p_1)=x,\,n(p_2)=y,\,n(p_3)=z\\n(p)=0\mbox{ for the other }p\end{array}}I_A.$$
Using the fact that $\sigma_1=\sigma_2$ and $\sigma_1=\sigma_3$ together imply $\sigma_2=\sigma_3$, and the symmetries, we get
\begin{align}
\alpha(3,3,3)&=\alpha(3,0,3)=\alpha(0,3,3)=\alpha(3,3,0)=I_{\left(\begin{smallmatrix}1&1&1\\1&1&1\\1&1&1\end{smallmatrix}\right)}=\nonumber\\
&=r^{n-1}r^{n-1}r^{n-2}-r^{n-1}r^{n-2}r^{n-2}-r^{n-1}r^{n-2}r^{n-2}-\nonumber\\&-r^{n-1}r^{n-2}r^{n-2}
+2r^{n-2}r^{n-2}r^{n-2}=r^{3n-6}(r^2-3r+2).\nonumber
\end{align}
By the last remark, we also have
\begin{align}
\alpha(3,2,3)&=\alpha(2,3,3)=\alpha(3,3,2)=\alpha(3,1,3)=\alpha(1,3,3)=\alpha(3,3,1)=\nonumber\\
&=I_{\left(\begin{smallmatrix}1&1&1\\1&1&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&1&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&1&1\\0&0&1\end{smallmatrix}\right)}=\nonumber\\
&=3r^{3n-6}(r^2-3r+2)\nonumber
\end{align}
and
\begin{align}
&\alpha(3,2,2)=\alpha(2,3,2)=\alpha(2,2,3)=I_{\left(\begin{smallmatrix}1&1&1\\1&1&0\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&1&0\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&1&0\\0&1&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&1&1\\1&0&1\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&0&1\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&0&1\\0&1&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&1\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&1\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&1\\0&1&1\end{smallmatrix}\right)}=\nonumber\\
&=r^{3n-5}(-2r+1)+r^{3n-6}(r^2-3r+2)+r^{3n-6}(r^2-3r+2)+\nonumber\\&+r^{3n-6}(r^2-3r+2)+r^{3n-4}(r-1)+r^{3n-6}(r^2-3r+2)+\nonumber\\&+r^{3n-6}(r^2-3r+2)+r^{3n-6}(r^2-3r+2)+r^{3n-5}(r^2-3r+2)=\nonumber\\&=r^{3n-6}(2r^{3}-15r+12).\nonumber
\end{align}
We also obtain
\begin{align}
&\alpha(3,2,1)=\alpha(2,3,1)=\alpha(3,1,2)=\alpha(1,3,2)=\alpha(1,2,3)=\alpha(2,1,3)=\nonumber\\&=I_{\left(\begin{smallmatrix}1&1&1\\1&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&1&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&1&1\\1&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&0&1\\0&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&1\\0&0&1\end{smallmatrix}\right)}=\nonumber\\
&=r^{3n-5}(-2r+1)+r^{3n-5}(-2r+1)+r^{3n-6}(r^2-3r+2)+r^{3n-4}(r-1)+\nonumber\\&+r^{3n-6}(r^2-3r+2)+r^{3n-4}(r-1)+r^{3n-6}(r^2-3r+2)+\nonumber\\
&+r^{3n-5}(r^2-3r+2)+r^{3n-5}(r^2-3r+2)=r^{3n-6}(4r^{3}-9r^2-3r+6).\nonumber
\end{align}
For the element
\begin{align}
&\alpha(3,2,0)=\alpha(2,3,0)=\alpha(3,0,2)=\alpha(0,3,2)=\alpha(0,2,3)=\alpha(2,0,3)=\nonumber\\&=I_{\left(\begin{smallmatrix}1&1&1\\1&1&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&0&1\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&1\\0&0&0\end{smallmatrix}\right)}=\nonumber\\
&=r^{3n-5}(-2r+1)+r^{3n-4}(r-1)+r^{3n-5}(r^2-3r+2)=r^{3n-5}(2r^2-6r+3).\nonumber
\end{align}
Now, the element
\begin{align}
&\alpha(3,1,1)=\alpha(1,3,1)=\alpha(1,1,3)=I_{\left(\begin{smallmatrix}1&1&1\\1&0&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&0&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\1&0&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&1&1\\0&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&0\\0&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&0&1\\0&0&1\end{smallmatrix}\right)}=\nonumber\\
&=0+r^{3n-5}(-2r+1)+r^{3n-4}(r-1)+r^{3n-5}(-2r+1)+r^{3n-4}(-2r+2)+\nonumber\\&+r^{3n-5}(r^2-3r+2)+r^{3n-4}(r-1)+r^{3n-5}(r^2-3r+2)+r^{3n-3}(r-1)=\nonumber\\
&=r^{3n-5}(r^{3}+r^2-10r+6)\nonumber
\end{align}
and
\begin{align}
&\alpha(3,1,0)=\alpha(0,3,1)=\alpha(0,1,3)=\alpha(3,0,1)=\alpha(1,0,3)=\alpha(1,3,0)=\nonumber\\&=I_{\left(\begin{smallmatrix}1&1&1\\1&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&1&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&1\\0&0&1\\0&0&0\end{smallmatrix}\right)}=\nonumber\\
&=0+r^{3n-4}(-2r+2)+r^{3n-3}(r-1)=\nonumber\\
&=r^{3n-4}(r^2-3r+2)\nonumber
\end{align}
also
\begin{align}
&\alpha(3,0,0)=\alpha(0,3,0)=\alpha(0,0,3)=I_{\left(\begin{smallmatrix}1&1&1\\0&0&0\\0&0&0\end{smallmatrix}\right)}=0.\nonumber
\end{align}
One of the longest terms is
\begin{align}
&\alpha(2,2,2)=I_{\left(\begin{smallmatrix}1&1&0\\1&1&0\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&1&0\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&1&0\\0&1&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&1&0\\1&0&1\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&0&1\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&0&1\\0&1&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&1&1\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&1&1\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&1&1\\0&1&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&0&1\\1&1&0\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&1&0\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&1&0\\0&1&1\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}1&0&1\\1&0&1\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&0&1\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&0&1\\0&1&1\end{smallmatrix}\right)}+\nonumber\\&+I_{\left(\begin{smallmatrix}1&0&1\\0&1&1\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&1&1\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&1&1\\0&1&1\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}0&1&1\\1&1&0\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&1&0\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&1&0\\0&1&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}0&1&1\\1&0&1\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&0&1\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&0&1\\0&1&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&1\\1&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&1\\1&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&1\\0&1&1\end{smallmatrix}\right)}=\nonumber
\end{align}
\begin{align}
\hspace{1cm}
&=r^{3n-4}(-3r+3)+r^{3n-5}(-2r+2)+r^{3n-5}(-2r+2)+r^{3n-5}(-2r+2)+\nonumber\\
&+r^{3n-4}(r-1)+r^{3n-6}(r^2-3r+2)+r^{3n-5}(-2r+2)+r^{3n-6}(r^2-3r+2)+\nonumber\\
&+r^{3n-5}(r^2-3r+2)+r^{3n-5}(-2r+2)+r^{3n-4}(r-1)+r^{3n-6}(r^2-3r+2)+\nonumber\\
&+r^{3n-4}(r-1)+r^{3n-5}(r^3-3r+2)+r^{3n-4}(r-1)+r^{3n-6}(r^2-3r+2)+\nonumber\\
&+r^{3n-4}(r-1)+r^{3n-5}(r^2-3r+2)+r^{3n-5}(-2r+2)+\nonumber\\
&+r^{3n-6}(r^2-3r+2)+r^{3n-5}(r^2-3r+2)+r^{3n-6}(r^2-3r+2)+\nonumber\\
&+r^{3n-4}(r-1)+r^{3n-5}(r^2-3r+2)+r^{3n-5}(r^2-3r+2)+\nonumber\\
&+r^{3n-5}(r^2-3r+2)+r^{3n-4}(r^2-3r+2)=\nonumber\\
&=r^{3n-6}(2r^4+6r^3-28r^2+8r+12)\nonumber
\end{align}
and
\begin{align}
&\alpha(2,2,1)=\alpha(2,1,2)=\alpha(1,2,2)=I_{\left(\begin{smallmatrix}1&1&0\\1&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&1&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&1&0\\1&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&0&1\\0&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&1&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&1&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&1&1\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&0&1\\1&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&1&0\\0&0&1\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}1&0&1\\1&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&0&1\\0&0&1\end{smallmatrix}\right)}+\nonumber\\&+I_{\left(\begin{smallmatrix}1&0&1\\0&1&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&1&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&1&1\\0&0&1\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}0&1&1\\1&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&1&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}0&1&1\\1&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&0&1\\0&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&1\\0&0&1\end{smallmatrix}\right)}=\nonumber
\end{align}
\begin{align}
&=r^{3n-4}(-3r+3)+r^{3n-4}(-3r+3)+r^{3n-5}(-2r+2)+0+\nonumber\\
&+r^{3n-5}(-2r+2)+r^{3n-4}(r-1)+r^{3n-5}(-2r+2)+r^{3n-4}(-2r+2)+\nonumber\\
&+r^{3n-5}(r^2-3r+2)+0+r^{3n-5}(-2r+2)+r^{3n-4}(r-1)+\nonumber\\
&+r^{3n-5}(r^3-3r+2)+r^{3n-4}(r-1)+r^{3n-5}(r^3-3r+2)+r^{3n-4}(r-1)+\nonumber\\
&+r^{3n-5}(r^2-3r+2)+r^{3n-3}(r-1)+r^{3n-5}(-2r+2)+r^{3n-4}(-2r+2)+\nonumber
\end{align}
\begin{align}
\hspace{1cm}
&+r^{3n-5}(r^2-3r+2)+r^{3n-4}(r-1)+r^{3n-5}(r^2-3r+2)+r^{3n-3}(r-1)+\nonumber\\
&+r^{3n-5}(r^2-3r+2)+r^{3n-4}(r^2-3r+2)+r^{3n-4}(r^2-3r+2)=\nonumber\\
&=r^{3n-5}(5r^3-7r^2-22r+24).\nonumber
\end{align}
Now, the element
\begin{align}
&\alpha(2,2,0)=\alpha(2,0,2)=\alpha(0,2,2)=I_{\left(\begin{smallmatrix}1&1&0\\1&1&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&0&1\\0&0&0\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}1&1&0\\0&1&1\\0&0&0\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&0&1\\1&1&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&0&1\\0&0&0\end{smallmatrix}\right)}+
I_{\left(\begin{smallmatrix}1&0&1\\0&1&1\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&1&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&0&1\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&1\\0&0&0\end{smallmatrix}\right)}\nonumber\\
&=r^{3n-4}(-3r+3)+0+r^{3n-4}(-2r+2)+0+r^{3n-5}(r^3-3r+2)\nonumber\\
&+r^{3n-3}(r-1)+r^{3n-4}(-2r+2)+r^{3n-3}(r-1)+r^{3n-4}(r^2-3r+2)\nonumber\\
&=r^{3n-5}(4r^3-12r^2+6r+2)\nonumber
\end{align}
and
\begin{align}
&\alpha(2,1,1)=\alpha(1,1,2)=\alpha(1,2,1)=I_{\left(\begin{smallmatrix}1&1&0\\1&0&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&0&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\1&0&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&1&0\\0&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&1&0\\0&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&0&1\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&0&1\\1&0&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&0&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\1&0&0\\0&0&1\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}1&0&1\\0&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&1&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&0&1\\0&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&0&1\\0&0&1\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}0&1&1\\1&0&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&0&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\1&0&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}0&1&1\\0&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&0\\0&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&0&1\\0&0&1\end{smallmatrix}\right)}\nonumber\\
&=r^{3n-3}(-r+1)+r^{3n-4}(-3r+3)+0+r^{3n-4}(-3r+3)+r^{3n-3}(-3r+3)+\nonumber\\
&+r^{3n-4}(-2r+2)+0+r^{3n-4}(-2r+2)+r^{3n-3}(r-1)+r^{3n-3}(r-1)+\nonumber\\
&+0+r^{3n-5}(r^3-3r+2)+0+r^{3n-4}(-2r+2)+r^{3n-3}(r-1)+\nonumber\\
&+r^{3n-5}(r^3-3r+2)+r^{3n-3}(r-1)+r^{3n-3}(r^2-1)+0+r^{3n-4}(-2r+2)\nonumber\\
&+r^{3n-3}(r-1)+r^{3n-4}(-2r+2)+r^{3n-3}(-2r+2)+r^{3n-4}(r^2-3r+2)+\nonumber\\
&+r^{3n-3}(r-1)
+r^{3n-4}(r^2-3r+2)+r^{3n-2}(r-1)=\nonumber\\&=r^{3n-5}(2r^4+3r^3-23r^2+14r+4).\nonumber
\end{align}
We also obtain
\begin{align}
&\alpha(2,1,0)=\alpha(0,1,2)=\alpha(0,2,1)=\alpha(2,0,1)=\alpha(1,0,2)=\alpha(1,2,0)=\nonumber\\&=I_{\left(\begin{smallmatrix}1&1&0\\1&0&0\\0&0&0\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}1&1&0\\0&1&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&1&0\\0&0&1\\0&0&0\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}1&0&1\\1&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&1\\0&1&0\\0&0&0\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}1&0&1\\0&0&1\\0&0&0\end{smallmatrix}\right)}+\nonumber\\&+I_{\left(\begin{smallmatrix}0&1&1\\1&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&1&0\\0&0&0\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}0&1&1\\0&0&1\\0&0&0\end{smallmatrix}\right)}=\nonumber\\
&=r^{3n-3}(-r+1)+r^{3n-3}(-3r+3)+0+\nonumber\\
&+r^{3n-3}(r-1)+0+r^{3n-3}(r^2-1)+0+r^{3n-3}(-2r+2)+r^{3n-2}(r-1)=\nonumber\\
&=r^{3n-3}(2r^2-6r+4).\nonumber
\end{align}
Here, we get
\begin{align}
&\alpha(2,0,0)=\alpha(0,2,0)=\alpha(0,0,2)=I_{\left(\begin{smallmatrix}1&1&0\\0&0&0\\0&0&0\end{smallmatrix}\right)}+
I_{\left(\begin{smallmatrix}1&0&1\\0&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&1\\0&0&0\\0&0&0\end{smallmatrix}\right)}=\nonumber\\
&=r^{3n-2}(-r+1)+r^{3n-2}(r-1)+0=0\nonumber
\end{align}
and we can also get
\begin{align}
&\alpha(1,1,1)=I_{\left(\begin{smallmatrix}1&0&0\\1&0&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\1&0&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\1&0&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}1&0&0\\0&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\0&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\0&1&0\\0&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\0&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\0&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\0&0&1\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}0&1&0\\1&0&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\1&0&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\1&0&0\\0&0&1\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}0&1&0\\0&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\0&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\0&1&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}0&1&0\\0&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\0&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\0&0&1\\0&0&1\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}0&0&1\\1&0&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\1&0&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\1&0&0\\0&0&1\end{smallmatrix}\right)}+\nonumber\\
&+I_{\left(\begin{smallmatrix}0&0&1\\0&1&0\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\0&1&0\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\0&1&0\\0&0&1\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\0&0&1\\1&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\0&0&1\\0&1&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\0&0&1\\0&0&1\end{smallmatrix}\right)}=\nonumber\\
&=0+r^{3n-3}(-r+1)+r^{3n-3}(r-1)+r^{3n-3}(-r+1)+r^{3n-3}(-3r+3)\nonumber\\
&+0+r^{3n-3}(r-1)+0+r^{3n-3}(r^2-1)+r^{3n-3}(-r+1)+\nonumber\\
&+r^{3n-3}(-3r+3)+0+r^{3n-3}(-3r+3)+r^{3n-2}(-3r+3)+r^{3n-3}(-2r+2)\nonumber\\
&+0+r^{3n-3}(-2r+2)+r^{3n-2}(r-1)+r^{3n-3}(r-1)+0+\nonumber\\
&+r^{3n-3}(r^2-1)+0+r^{3n-3}(-2r+2)+r^{3n-2}(r-1)+r^{3n-3}(r^2-1)\nonumber\\
&+r^{3n-2}(r-1)+r^{3n-2}(r^2-1)=r^{3n-3}(r^3+3r^2-16r+12).\nonumber
\end{align}
For the element
\begin{align}
&\alpha(1,1,0)=\alpha(1,0,1)=\alpha(0,1,1)=I_{\left(\begin{smallmatrix}1&0&0\\1&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\0&1&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}1&0&0\\0&0&1\\0&0&0\end{smallmatrix}\right)}+\nonumber\\&
+I_{\left(\begin{smallmatrix}0&1&0\\1&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\0&1&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\0&0&1\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\1&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\0&1&0\\0&0&0\end{smallmatrix}\right)}
+I_{\left(\begin{smallmatrix}0&0&1\\0&0&1\\0&0&0\end{smallmatrix}\right)}=\nonumber\\
&=0+r^{3n-2}(-r+1)+r^{3n-2}(r-1)+r^{3n-2}(-r+1)+r^{3n-2}(-3r+3)+\nonumber\\
&+0+r^{3n-2}(r-1)+0+r^{3n-2}(r^2-1)=\nonumber\\
&=r^{3n-2}(r^2-3r+2)\nonumber
\end{align}
and
\begin{align}
&\alpha(1,0,0)=\alpha(0,0,1)=\alpha(0,1,0)=I_{\left(\begin{smallmatrix}1&0&0\\0&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&1&0\\0&0&0\\0&0&0\end{smallmatrix}\right)}+I_{\left(\begin{smallmatrix}0&0&1\\0&0&0\\0&0&0\end{smallmatrix}\right)}=\nonumber\\
&=0+r^{3n-1}(-r+1)+r^{3n-1}(r-1)=0.\nonumber
\end{align}
Finally, we have
\begin{align}
&\alpha(0,0,0)=I_{\left(\begin{smallmatrix}0&0&0\\0&0&0\\0&0&0\end{smallmatrix}\right)}=0.\nonumber
\end{align}
We can now prove Theorem~\ref{T:convexity}:
\begin{proof}(GHS inequality for the Potts model)
The sign of the quantity $\frac{\partial^2 m_i(\underline{J},\underline{B})}{\partial B_j\partial B_k}$ is determined by the sign of $I(\underline{J},\underline{B})$ (see equation (\ref{E:def_I})). By the separation formula, the sign of $I(\underline{J},\underline{B})$ is determined by the sign of the polynomial
\begin{align}
&\sum_{\begin{array}{c}A\in \mathcal{A}_{\binom{N+1}{2}}\\\mbox{if }o_p\notin O_2\\a(p,1)=a(p,2)=a(p,3)=0\end{array}}I_AX_{p_1}^{n(p_1)}X_{p_2}^{n(p_2)}X_{p_3}^{n(p_3)}=\nonumber\\&=\sum_{n(p_1),n(p_2),n(p_3)}\alpha(n(p_1),n(p_2),n(p_3))X_{p_1}^{n(p_1)}X_{p_2}^{n(p_2)}X_{p_3}^{n(p_3)}
\end{align}
One can easily verify that the coefficients $\alpha(n(p_1),n(p_2),n(p_3))$ are all non-positive if $r=2$ and all non-negative if $r\geq 3$.
\qed
\end{proof}
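The sign check used in the proof above is mechanical and can be reproduced numerically. The following sketch (our own verification aid, assuming Python with \texttt{sympy}; it is not part of the proof) transcribes the polynomial factors of the coefficients $\alpha(x,y,z)$ computed above, dropping the positive powers of $r$ in front since they do not affect the sign, and tests the claimed sign pattern:
\begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)

# Polynomial factors of the alpha(x, y, z) computed above; the common
# positive power of r in front of each is dropped (no effect on the sign).
alpha = {
    (3, 3, 3): r**2 - 3*r + 2,
    (3, 2, 3): 3*(r**2 - 3*r + 2),
    (3, 2, 2): 2*r**3 - 15*r + 12,
    (3, 2, 1): 4*r**3 - 9*r**2 - 3*r + 6,
    (3, 2, 0): 2*r**2 - 6*r + 3,
    (3, 1, 1): r**3 + r**2 - 10*r + 6,
    (3, 1, 0): r**2 - 3*r + 2,
    (3, 0, 0): sp.Integer(0),
    (2, 2, 2): 2*r**4 + 6*r**3 - 28*r**2 + 8*r + 12,
    (2, 2, 1): 5*r**3 - 7*r**2 - 22*r + 24,
    (2, 2, 0): 4*r**3 - 12*r**2 + 6*r + 2,
    (2, 1, 1): 2*r**4 + 3*r**3 - 23*r**2 + 14*r + 4,
    (2, 1, 0): 2*r**2 - 6*r + 4,
    (2, 0, 0): sp.Integer(0),
    (1, 1, 1): r**3 + 3*r**2 - 16*r + 12,
    (1, 1, 0): r**2 - 3*r + 2,
    (1, 0, 0): sp.Integer(0),
    (0, 0, 0): sp.Integer(0),
}

# All coefficients are non-positive at r = 2 ...
assert all(p.subs(r, 2) <= 0 for p in alpha.values())
# ... and non-negative for r >= 3 (spot-checked on a range of values).
for rv in range(3, 51):
    assert all(p.subs(r, rv) >= 0 for p in alpha.values())
print("sign pattern of the alpha coefficients confirmed")
\end{verbatim}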
\section{Introduction}
With a few exceptions, models of optimal stopping time problems
assume that the player is able to terminate the underlying
stochastic dynamics immediately after the decision to stop, or to
bring a new project online without any delays after the decision to
invest. In fact, both stopping stochastic dynamics and initiating a
new project take time. In this paper, we consider a general class of
optimal stopping problems, where there exists a time lag between the
player's decision time and the time that the payoff is delivered.
As an example, we study American put options with delivery lags in
details. In practice, there may exist a time lag between the time
that the option holder decides to excise the option and the time
that the payoff is delivered. Such delivery lags may be specified in
financial contracts, where the decision to exercise must be made
before the exercise takes place. They are called
\emph{make-your-mind-up options} (see Chapter 6 of \cite{Jiang} and
Chapter 9 of \cite{Wilmott}). For example, the option holder must
give a notice period before she exercises, and she cannot change her
mind. On the other hand, even for standard American derivatives, the
option holder may not be able to exercise immediately, when there
exist liquidation constraints in financial markets.
The purpose of this paper is to study the effects of delivery lags
in the model of American put options. Our paper makes three specific
contributions.
We first solve a general optimal stopping problem with a given time
lag to deliver the payoff. To this end, we use the reflected
backward stochastic differential equation (BSDE) method and,
therefore, no Markovian assumption is required. This is in contrast
to the existing literature (see \cite{Keppo}, \cite{Bar-ILan},
\cite{Oksendal} and \cite{Oksendal2} with more references therein),
where the Markovian property is required. We refer to
\cite{ElKaroui1997_1} and \cite{ElKaroui1997_3} for an introduction
of BSDE and reflected BSDE. To solve the problem, we introduce a new
obstacle process, which is the projection (conditional expectation)
of the payoff of the original optimal stopping problem with delivery
lags. We then transform the original problem to a standard optimal
stopping problem (without delivery lags) with this new obstacle
process as the modified payoff (see Lemma \ref{proposition1}).
Due to the existence of the projection operator, if the original
payoff is nonlinear (as the payoff of the American put option
herein), the nonlinearity will propagate via the conditional
expectation, resulting in a more complicated nonlinear function as
the modified payoff. In our case, it is the corresponding European
put option price. This makes the analysis of the associated optimal
exercise boundary much more challenging. In the existing literature
of optimal stopping with delivery lags (see \cite{Keppo} and
\cite{Oksendal} for example), however, the authors assume that
the original payoff is linear, so the modified payoff is also linear as
a consequence of the linearity of the conditional expectation.
Hence, the treatments of the optimal stopping problems with and
without delivery lags are essentially the same in their models.
Our second contribution is an early exercise premium decomposition
formula for American put options with delivery lags. This helps us
overcome the difficulty of handling the European option price as the
modified payoff. We show that an American put option with delivery
lags can be decomposed into a European put option and another
American-style derivative. The latter is an option for which the
investor receives the Greek Theta of the corresponding European
option as the running payoff, and decides an optimal stopping time
to terminate the contract (see Lemma \ref{lemma1}). The
decomposition formula (\ref{Decomposition}) in Lemma \ref{lemma1}
can also be regarded as a counterpart of the early exercise premium
representation of standard American options, and is crucial to the
analysis of the associated optimal exercise boundary.
Using the free-boundary method, we then give a detailed analysis of
the associated optimal exercise boundary. An essential difficulty
herein is the non-monotonicity of the value function with respect to
the stock price (a similar phenomenon also appears in \cite{Dai}).
As a result, it is not even clear \emph{ex ante} whether the optimal
exercise boundary exists or not. This is in contrast to standard
American options, for which the value function, subtracted by the
payoff, is monotonic with respect to the stock price, so the
stopping and continuation regions can be easily separated.
As the third contribution, we prove that the optimal exercise
boundary exists and is a strictly increasing and smooth curve, with
its end point closely related to the zero crossing point of the
Greek Theta of the corresponding European option (see Theorem
\ref{Th5}). Intuitively, when Theta is positive, the running payoff
of the new American-style derivative is also positive, so the
investor will hold the option to receive the positive Theta
continuously. On the contrary, when Theta is negative, one may think
that the investor would then exercise the option to stop her losses.
However, we show that when Theta is negative but not too small, the
investor may still hold the option and wait for Theta to rally at a
later time to recover her previous losses. We further quantify such
negative values of Theta by identifying the asymptotic line of the
optimal exercise boundary, which turns out to be the optimal
exercise boundary of the corresponding perpetual option (see Theorem
\ref{theorem}). We also prove the convergence of the optimal
exercise boundary as the time lag tends to zero. As expected, it
will converge to the optimal exercise boundary of standard American
options (see Theorem \ref{Th5.2}).
The paper is organized as follows. In section 2, we solve a general
optimal stopping problem with delivery lags via the reflected BSDE
method. In section 3, we introduce the model of American put options
with delivery lags, together with their early exercise premium
decomposition and their perpetual version. We then give a detailed
analysis of the associated optimal exercise boundary via the
free-boundary method in section 4, and conclude the paper with the
properties of the Greek Theta in the appendix.
\section{Optimal stopping with delivery lags}
In this section, we introduce a general optimal stopping problem
with delivery lags, which includes American put options in the next
section as a special case.
Let $W$ be a one-dimensional Brownian motion on a probability space
$(\Omega,\mathcal{F},\mathbf{P})$. Denote by
$\mathbb{F}=\{\mathcal{F}_t\}_{t\geq 0}$ the augmented filtration
generated by $W$. Let $T>0$ represent the fixed maturity, and
$\delta\in[0,T)$ represent the time lag. {For $t\in[0,T]$, we
introduce the admissible set}
$${\mathcal{R}_t^{0}:=\{\tau^{0}:\Omega\rightarrow[t,T],\
\text{and}\ \{\tau^{0}\leq s\}\in\mathcal{F}_{s}\ \text{for\ any}\
s\in[t,T]\}.}$$
The player chooses an optimal stopping time
$\tau^{0,*}\in\mathcal{R}_t^{0}$ in order to maximize {the following
objective functional}
$$
{{\cal Y}_t^\delta(\tau^0):=
\mathbf{E}\left[\int_t^{(\tau^{0}+\delta)\wedge
T}\frac{R_s}{R_t}f_sds+\frac{R_{\tau^{0}+\delta}}{R_t}S_{\tau^{0}+\delta}\mathbf{1}_{\{\tau^0+\delta<T\}}+
\frac{R_{T}}{R_t}\xi\mathbf{1}_{\{\tau^{0}+\delta\geq
T\}}|\mathcal{F}_t\right],}
$$
with the terminal data $\xi$, the running payoff $f$, the discount
factor $R$, and the payoff $S$ as the given data, and the stopping
time $\tau^0\in\mathcal{R}_t^{0}$.
{If the maximum value exists, then $\tau^{0,*}$ is called the
\emph{optimal stopping time}, and the maximum value $y_t^{\delta}$
is called the \emph{value process}, where}
\begin{equation}\label{optimal_stopping_delay_2}
{y_t^{\delta}:=\esssup_{\tau^{0}\in\mathcal{R}_t^{0}}{\cal
Y}_t^\delta(\tau^0).}
\end{equation}
Note that $\delta=0$ corresponds to classical optimal stopping
problems (see, for example, \cite{Detemple}, \cite{Lamberton} and
\cite{Peskir} among others). For $\delta>0$, if the player decides
to stop at some stopping time $\tau^{0}$, then the payoff will be
delivered at $\tau^0+\delta$ rather than $\tau^0$, so there is a
time lag of the delivery of the payoff. For this reason, the problem
(\ref{optimal_stopping_delay_2}) is referred to as the \emph{optimal
stopping problem with delivery lags}.
We also observe that (\ref{optimal_stopping_delay_2}) is trivial for
$t\in(T-\delta,T]$ since, in this situation, ${\cal
Y}_t^\delta(\tau^0)$ is independent of the choice of $\tau^0$, and we
may simply choose the optimal stopping time as the maturity $T$.
Thus, we focus on the case $t\in[0,T-\delta]$ throughout the paper.
The Markovian case of (\ref{optimal_stopping_delay_2}) in an
infinite horizon setting has been considered in the literature (see,
for example, \cite{Oksendal} with more references therein). The
problem has been further applied to irreversible investment
(\cite{Keppo}), reversible investment (\cite{Bar-ILan2},
\cite{Bar-ILan}) and impulse control (\cite{Erhan}, \cite{Pham},
\cite{Oksendal2}). Herein, we generalize to the non-Markovian case
in a finite horizon setting.
\begin{remark}
The problem (\ref{optimal_stopping_delay_2}) is closely related to
the following optimal stopping problem with delayed information, as
was introduced in \cite{Oksendal} under the Markovian assumption in
an infinite horizon setting. The player chooses an optimal stopping
time $\tau^{\delta,*}\in\mathcal{R}_t^{\delta}$ in order to maximize
\begin{equation}\label{optimal_stopping_delay}
{\mathbf{E}\left[\int_t^{\tau^{\delta}}\frac{R_s}{R_t}f_sds+\frac{R_{\tau^{\delta}}}{R_t}S_{\tau^{\delta}}\mathbf{1}_{\{\tau^{\delta}<T\}}+\frac{R_T}{R_t}\xi\mathbf{1}_{\{\tau^{\delta}=
T\}}|\mathcal{F}_t\right],}
\end{equation}
with ${\tau^\delta\in\mathcal{R}_t^{\delta}}$. Herein, the
admissible set $\mathcal{R}_t^{\delta}$ is defined as, for
$t\in[0,T-\delta]$,
$$\mathcal{R}_t^{\delta}:=\{\tau^{\delta}:\Omega\rightarrow[t+\delta,T],\ \text{and}\ \{\tau^{\delta}\leq s\}\in\mathcal{F}_{s-\delta}\ \text{for\ any}\ s\in
[t+\delta,T]\},$$ and for $t\in(T-\delta,T]$,
$\mathcal{R}_t^{\delta}=\{T\}$.
The new feature of (\ref{optimal_stopping_delay}) is that given the
stopping time $\tau^{\delta}$, the player stops based on the
information up to $\tau^{\delta}-\delta$, rather than
$\tau^{\delta}$ itself. It is proved in Lemma 1.2 of \cite{Oksendal}
that if $\tau^{0,*}$ is the optimal stopping time for
(\ref{optimal_stopping_delay_2}), then
$\tau^{\delta,*}=\tau^{0,*}+\delta$ {for $t\in[0,T-\delta]$, and
$\tau^{\delta,*}=\tau^{0,*}=T$ for $t\in(T-\delta,T]$}.
\end{remark}
We solve (\ref{optimal_stopping_delay_2}) via the reflected BSDE
method under the following assumption.
\begin{assumption}\label{Assumption}
(i) The terminal data $\xi\in\mathcal{F}_T$ is square integrable
$\mathbf{E}[|\xi|^2]<\infty;$
(ii) The running payoff $f$ is an $\mathbb{F}$-adapted process, and
is {$\mathbb{H}^2$-}square integrable, i.e.
$\mathbf{E}[\int_0^T|f_t|^2dt]<\infty$;
(iii) The discount factor $R$ satisfies $dR_t=-r_tR_tdt$ with
$R_0=1$, and $r$ is a uniformly bounded $\mathbb{F}$-adapted
process.
(iv) The payoff $S$ is a continuous $\mathbb{F}$-adapted process,
and is uniformly square integrable, i.e.
$\mathbf{E}\left[\sup_{t\in[0,T]}|S_t|^2\right]<\infty.$
\end{assumption}
\begin{lemma}\label{proposition1} Suppose that Assumption \ref{Assumption}
holds. Let the process $\widehat{Y}_t^{\delta}$, $t\in[0,T-\delta]$,
be given as
\begin{equation}\label{BSDE1}
\widehat{Y}_t^{\delta}=\mathbf{E}\left[\int_t^{t+\delta}\frac{R_s}{R_t}f_sds+
\frac{R_{t+\delta}}{R_t}S_{t+\delta}\mathbf{1}_{\{t+\delta<
T\}}+\frac{R_T}{R_t}\xi\mathbf{1}_{\{t+\delta=
T\}}|\mathcal{F}_t\right].
\end{equation}
Then, the following assertions hold:
(i) The reflected BSDE
\begin{eqnarray}\label{RBSDE1} \left\{
\begin{array}{ll}
Y_t^{\delta}=\widehat{Y}_{T-\delta}^{\delta}+\int_t^{T-\delta}(f_s-r_sY_s^{\delta})ds+\int_t^{T-\delta}dK_s^{\delta}-\int_t^{T-\delta}Z_s^{\delta}dW_s;\\[+0.2cm]
Y_t^{\delta}\geq \widehat{Y}_t^{\delta},\ \ \ \text{for}\ t\in[0,T-\delta];\\[+0.2cm]
\int_0^{T-\delta}(Y_t^{\delta}-\widehat{Y}_t^{\delta})dK_t^{\delta}=0,\
\ \ \ \text{(Skorohod condition)},
\end{array}
\right.
\end{eqnarray}
admits a unique $\mathbb{F}$-adapted solution
$(Y^{\delta}_t,Z_t^{\delta},K^{\delta}_t)$, $t\in[0,T-\delta]$,
where $Y^{\delta}$ is uniformly square integrable, $Z^{\delta}$ is
{$\mathbb{H}^2$-}square integrable, and $K^{\delta}$ is an
increasing and continuous process with $K_0=0$ and $K_T$ being
square integrable.
(ii) The value process of (\ref{optimal_stopping_delay_2}) is given
as $y_t^{\delta}=Y_t^{\delta}$, $t\in[0,T-\delta]$, and the optimal
stopping time is given as
$$\tau^{0,*}=\inf\{s\in[
t, T-\delta]:Y_s^{\delta}=\widehat{Y}_s^{\delta}\}.$$
\end{lemma}
\begin{proof}
(i) It is standard to check that the process $\hat{Y}^{\delta}_t$,
$t\in[0,T-\delta)$, is continuous and uniformly square integrable.
Hence, it follows from sections 5 and 6 in \cite{ElKaroui1997_1}
that the reflected BSDE (\ref{RBSDE1}) admits a unique solution
$(Y^{\delta},Z^{\delta},K^{\delta})$.
(ii) The proof is adapted from Proposition 2.3 in
\cite{ElKaroui1997_1}. We provide its details for the reader's
convenience. We first show that $Y_t^{\delta}\geq y_t^{\delta}$.
Applying It\^o's formula to $R_tY_t^{\delta}$, we obtain that
\begin{equation}\label{Ito}
d(R_tY_t^{\delta})=- {R_{t}}f_tdt-{R_t}dK_t^{\delta}+
{R_t}Z_t^{\delta}dW_t.
\end{equation}
For any stopping time $\tau^0\in\mathcal{R}_t^{0}$, it follows from
(\ref{Ito}) that
\begin{align*}
Y_t^{\delta}=&
\frac{R_{\tau^0}}{R_t}{Y}_{\tau^0}^{\delta}+\int_t^{\tau^0}\frac{R_s}{R_t}f_sds
+\int_t^{\tau^0}\frac{R_s}{R_t}dK_s^{\delta}
-\int_t^{\tau^0}\frac{R_s}{R_t}Z_s^{\delta}dW_s\\
\geq&\ \mathbf{E}\left[\frac{R_{\tau^0}}{R_t}{Y}_{\tau^0}^{\delta}+\int_t^{\tau^0}\frac{R_s}{R_t}f_sds|\mathcal{F}_t\right]\\
=&\ \mathbf{E}\left[\int_t^{\tau^0}\frac{R_s}{R_t}f_sds
+\frac{R_{T-\delta}}{R_t}{Y}^{\delta}_{T-\delta}\mathbf{1}_{\{\tau^0=
T-\delta\}}+\frac{R_{\tau^0}}{R_t}Y^{\delta}_{\tau^0}\mathbf{1}_{\{\tau^0<
T-\delta\}} |\mathcal{F}_t\right].
\end{align*}
Note that on $\{\tau^0=T-\delta\}$,
\begin{equation*}
Y^{\delta}_{T-\delta}=\widehat{Y}^{\delta}_{T-\delta}=\mathbf{E}\left[\frac{R_T}{R_{T-\delta}}\xi+\int_{T-\delta}^T\frac{R_s}{R_{T-\delta}}f_sds|\mathcal{F}_{T-\delta}\right],
\end{equation*}
and on $\{\tau^0<T-\delta\}$,
\begin{equation*}
Y_{\tau^0}^{\delta}\geq
\widehat{Y}_{\tau^0}^{\delta}=\mathbf{E}\left[\frac{R_{\tau^0+\delta}}{R_{\tau^0}}S_{\tau^0+\delta}
+\int_{\tau^0}^{\tau^0+\delta}\frac{R_s}{R_{\tau^0}}f_sds|\mathcal{F}_{\tau^0}\right],
\end{equation*}
where we have used (\ref{BSDE1}) in both equations. It follows that
\begin{align*}
Y_t^{\delta}\geq&\
\mathbf{E}\left[\int_t^{\tau^0}\frac{R_s}{R_t}f_sds
+\int_{T-\delta}^T\frac{R_s}{R_t}f_sds\mathbf{1}_{\{\tau^0=
T-\delta\}}+\int_{\tau^0}^{\tau^0+\delta}\frac{R_s}{R_t}f_sds\mathbf{1}_{\{\tau^0<
T-\delta\}}
\right.\\
&\left.+\frac{R_{T}}{R_t}\xi\mathbf{1}_{\{\tau^0=T-\delta\}}+\frac{R_{\tau^0+\delta}}{R_t}S_{\tau^0+\delta}\mathbf{1}_{\{\tau^0<
T-\delta\}}|\mathcal{F}_t\right]\\
=&\ \mathbf{E}\left[\int_t^{\tau^{0}+\delta
}\frac{R_s}{R_t}f_sds+\frac{R_{\tau^0+\delta}}{R_t}S_{\tau^{0}+\delta}\mathbf{1}_{\{\tau^0+\delta<T\}}+\frac{R_T}{R_t}\xi\mathbf{1}_{\{\tau^{0}+\delta=
T\}}|\mathcal{F}_t\right].
\end{align*}
Taking supremum over $\tau^0\in\mathcal{R}_t^{0}$ gives
$Y_t^{\delta}\geq y_t^{\delta}$.
Next, we prove the reverse inequality using $\tau^{0,*}$. Note that
the Skorohod condition in (\ref{RBSDE1}) implies that the measure
$dK_s^{\delta}$ is carried by the set $\{s:
Y_s^{\delta}=\hat{Y}_s^{\delta}\}$. Hence, $dK_s^{\delta}=0$ for
$s\in[t,\tau^{0,*}]$. It then follows from (\ref{Ito}) that
\begin{align*}
Y_t^{\delta}&=\frac{R_{\tau^{0,*}}}{R_t}\widehat{Y}_{\tau^{0,*}}^{\delta}+\int_t^{\tau^{0,*}}\frac{R_s}{R_t}f_sds
+\int_t^{\tau^{0,*}}\frac{R_s}{R_t}dK_s^{\delta}-\int_t^{\tau^{0,*}}\frac{R_s}{R_t}Z_s^{\delta}dW_s\\
&=\mathbf{E}\left[\frac{R_{\tau^{0,*}}}{R_t}\widehat{Y}_{\tau^{0,*}}^{\delta}+\int_t^{\tau^{0,*}}\frac{R_{s}}{R_t}f_sds|\mathcal{F}_t\right]\\
&=\mathbf{E}\left[\int_t^{\tau^{0,*}+\delta}\frac{R_s}{R_t}f_sds+\frac{R_{\tau^{0,*}+\delta}}{R_t}S_{\tau^{0,*}+\delta}\mathbf{1}_{\{\tau^{0,*}+\delta<T\}}+\frac{R_{T}}{R_t}\xi\mathbf{1}_{\{\tau^{0,*}+\delta=
T\}}|\mathcal{F}_t\right],
\end{align*}
from which we conclude that $Y_t^{\delta}\leq y_t^{\delta}.$
\end{proof}
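Before specializing, it may help to see the lemma in a discrete-time setting: once the obstacle is replaced by the projected payoff $\widehat{Y}^{\delta}$, the lagged problem becomes a standard Snell-envelope recursion. The sketch below (our own illustration in Python; the geometric Brownian motion, the put payoff and all parameter values anticipate the example of the next section and are assumptions, not part of the lemma) builds $\widehat{Y}^{\delta}$ on a CRR binomial tree by rolling the payoff back over the lag, and then takes the Snell envelope up to $T-\delta$.
\begin{verbatim}
import numpy as np

def delayed_snell_binomial(S0=100.0, K=100.0, r=0.05, q=0.02, sigma=0.3,
                           T=1.0, delta=0.1, n=500):
    """Discrete-time sketch of the lemma above: build the modified obstacle
    hatY_t = E[e^{-r delta}(K - X_{t+delta})^+ | F_t] by rolling the put
    payoff back over the lag, then take its Snell envelope up to T - delta.
    Assumes delta is an integer multiple of the step T/n."""
    dt = T / n
    d_steps = int(round(delta / dt))           # lag measured in tree steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = np.exp(-r * dt)

    def rollback(values, steps):
        # repeated one-step discounted expectations on the tree
        for _ in range(steps):
            values = disc * (p * values[1:] + (1 - p) * values[:-1])
        return values

    m = n - d_steps                            # index of the time T - delta
    hatY = []
    for t in range(m + 1):
        x = S0 * u ** np.arange(-(t + d_steps), t + d_steps + 1, 2)
        hatY.append(rollback(np.maximum(K - x, 0.0), d_steps))

    Y = hatY[m]                                # terminal value at T - delta
    for t in range(m - 1, -1, -1):             # Y_t = max(hatY_t, E_t[disc*Y_{t+1}])
        Y = np.maximum(hatY[t], disc * (p * Y[1:] + (1 - p) * Y[:-1]))
    return float(Y[0])

print(delayed_snell_binomial())   # approximates y_0^delta for these inputs
\end{verbatim}
As $\delta\to0$, the recursion collapses to the usual American-put rollback.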
\section{American put option with delivery lags}
As an application of the optimal stopping with delivery lags, we
consider a concrete model of American put option with delivery lags
in this section.
\begin{assumption}\label{Assumption2} The terminal data $\xi=(K-X_T)^+$, the running payoff $f_t=0$, the
interest rate $r_t=r$ and the payoff $S_t=(K-X_t)^+$, where $X$
represents the stock price and follows
$$dX_t/X_t=(r-q)dt+\sigma dW_t,$$
and the strike price $K$, the interest rate $r$, the dividend rate
$q<r$, and the volatility {$\sigma$} are all positive constants.
\end{assumption}
Under the above assumption, the optimal stopping problem
(\ref{optimal_stopping_delay_2}) reduces to
\begin{equation}\label{optimal_stopping_delay_special_3}
y_t^{\delta}= \esssup_{\tau^{0}\in\mathcal{R}_t^{0}}
\mathbf{E}\left[e^{-r(\tau^{0}+\delta-t)}(K-X_{\tau^{0}+\delta})^+|\mathcal{F}_t\right],
\end{equation}
for $t\in[0,T-\delta]$, so $y^{\delta}$ is the value process of the
American put option with time lag $\delta$, if the probability
measure $\mathbf{P}$ is interpreted as the risk-neutral measure.
It follows from (\ref{BSDE1}) in Lemma \ref{proposition1} that, for
$t\in[0,T-\delta]$,
\begin{equation}\label{european}
\widehat{Y}_t^{\delta}=\mathbf{E}\left[e^{-r\delta}(K-X_{t+\delta})^+|\mathcal{F}_t\right],
\end{equation}
which is the time $t$ value of the corresponding European put option
with maturity $t+\delta$. Denote by $P(\cdot,\cdot)$ the value
function of the European put option with maturity $T$. Then, we have
$\hat{Y}_t^{\delta}=P(T-\delta, X_t)$.
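For concreteness, the obstacle $P(T-\delta,\cdot)$ is simply the Black-Scholes put value with time to maturity $\delta$ and continuous dividend yield $q$. A minimal sketch (ours, assuming Python with \texttt{scipy}; the helper name \texttt{bs\_put} is not from any library):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def bs_put(tau, x, K, r, q, sigma):
    """Black-Scholes European put with time to maturity tau, spot x,
    strike K, rate r, continuous dividend yield q, volatility sigma."""
    tau = np.maximum(tau, 1e-12)               # guard against tau = 0
    d1 = (np.log(np.maximum(x, 1e-12) / K)
          + (r - q + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return (K * np.exp(-r * tau) * norm.cdf(-d2)
            - x * np.exp(-q * tau) * norm.cdf(-d1))

# The obstacle hatY_t = P(T - delta, X_t): a put with time to maturity delta.
print(bs_put(0.1, 90.0, 100.0, 0.05, 0.02, 0.3))
\end{verbatim}
Note that the time to maturity entering the obstacle is the constant $\delta$, not $T-t$, so the obstacle is the same function of the spot at every $t\in[0,T-\delta]$.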
Moreover, Lemma \ref{proposition1} (ii) implies that the value
process $y^{\delta}$ is given as $y^{\delta}_t=Y_t^{\delta}$,
$t\in[0,T-\delta]$, where $Y^{\delta}$ solves the reflected BSDE
{(\ref{RBSDE1})}. Due to Assumption 2, the Feynman-Kac formula in
section 9 of \cite{ElKaroui1997_1} further yields that
$Y_t^{\delta}=V^{\delta}(t,X_t)$, $t\in[0,T-\delta]$, where
$V^{\delta}(\cdot,\cdot)$ is the unique (viscosity) solution to the
variational inequality
\begin{eqnarray}\label{VI11}
\left\{
\begin{array}{ll}
(-\p_t-{{\cal L}})V^{\delta}(t,X)=0,&\mbox{if}\; V^{\delta}(t,X)> P(T-\delta,X),\\
&\mbox{for}\;(t,X)\in{\Omega}_{T-\delta};
\vspace{2mm} \\
(-\p_t-{{\cal L}})V^{\delta}(t,X)\geq 0,&\mbox{if}\; V^{\delta}(t,X)=P(T-\delta,X),\\
&\mbox{for}\;(t,X)\in{\Omega}_{T-\delta};
\vspace{2mm} \\
V^{\delta}(T-\delta,X)=P(T-\delta,X),
&\mbox{for}\;X\in\mathbb{R}_+,
\end{array}
\right.
\end{eqnarray}
with ${\Omega}_{T-\delta}= [0,T-\delta\,)\times\mathbb{R}_+$, and
the operator $\mathcal{L}$ given as the Black-Scholes differential
operator
$$
{\cal L} ={1\over2}\,\sigma^2X^2\p_{XX}
+(r-q)X\p_X-r. $$
Note that if $\delta=0$, $P(T-\delta,X)=(K-X)^+$, and variational
inequality (\ref{VI11}) reduces to the standard variational
inequality for American put options (see, for example,
\cite{Detemple}, \cite{Lamberton} and \cite{Peskir}).
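Numerically, (\ref{VI11}) can be treated like a standard American-option variational inequality with this modified obstacle. The sketch below (ours and only illustrative; an explicit finite-difference step followed by pointwise projection onto the obstacle, with the truncation, grid sizes and parameter values chosen as assumptions) is one simple way to approximate $V^{\delta}$; its output can be cross-checked against the binomial sketch of the previous section.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def bs_put(tau, x, K, r, q, sigma):
    # Black-Scholes European put with dividend yield q (as in the sketch above).
    tau = np.maximum(tau, 1e-12)
    d1 = (np.log(np.maximum(x, 1e-12) / K)
          + (r - q + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return (K * np.exp(-r * tau) * norm.cdf(-d2)
            - x * np.exp(-q * tau) * norm.cdf(-d1))

def delayed_american_put_fd(K=100.0, r=0.05, q=0.02, sigma=0.3, T=1.0,
                            delta=0.1, xmax=400.0, nx=400, nt=20000):
    """March the Black-Scholes operator backward from t = T - delta and
    project the value onto the obstacle P(T - delta, .) = bs_put(delta, .)
    at every step (an explicit scheme; nt is taken large for stability)."""
    x = np.linspace(0.0, xmax, nx + 1)
    dx, dt = x[1] - x[0], (T - delta) / nt
    obstacle = bs_put(delta, x, K, r, q, sigma)
    V = obstacle.copy()                        # terminal condition at T - delta
    i = np.arange(1, nx)
    for _ in range(nt):
        LV = (0.5 * sigma ** 2 * x[i] ** 2
              * (V[i + 1] - 2 * V[i] + V[i - 1]) / dx ** 2
              + (r - q) * x[i] * (V[i + 1] - V[i - 1]) / (2 * dx) - r * V[i])
        V[i] = V[i] + dt * LV                  # one explicit backward step
        V[0], V[-1] = K * np.exp(-r * delta), 0.0   # boundary values
        V = np.maximum(V, obstacle)            # projection onto the obstacle
    return x, V

xg, Vg = delayed_american_put_fd()
print(np.interp(100.0, xg, Vg))                # approximates V^delta(0, 100)
\end{verbatim}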
Furthermore, since variational inequality (\ref{VI11}) has
smooth coefficients and a smooth obstacle, the solution
$V^\delta(\cdot,\cdot)$ to (\ref{VI11}) admits the following
regularities. The proof follows along arguments similar to those used in
Chapter 1 of \cite{Friedman2} (or, more recently, \cite{Yan1} and
\cite{Yan2}) and is, thus, omitted.
\begin{proposition}\label{regularity1}
{Suppose that Assumption 2 holds. Then, the viscosity solution
$V^{\delta}(\cdot,\cdot)$ is also the unique bounded strong solution
to variational inequality~\eqref{VI11}, and satisfies $V^\delta\in
W^{2,1}_{p,loc}(\Omega_{T-\delta})\cap
C(\overline{\Omega_{T-\delta}})$ for any $p\geq 1$. Moreover, $\p_x
V^\delta\in C(\overline{\Omega_{T-\delta}})$.}
Herein, $W^{2,1}_{p,loc}(\Omega_{T-\delta})$ is the set of all
functions whose restriction on any compact subset
$\Omega_{T-\delta}^*\subset\Omega_{T-\delta}$ belong to
$W^{2,1}_p(\Omega_{T-\delta}^{*})$, where
$W^{2,1}_p(\Omega_{T-\delta}^{*})$ is the completion of
$C^{\infty}(\Omega_{T-\delta}^{*})$ under the norm
$$||V^\delta||_{W^{2,1}_p(\Omega_{T-\delta}^{*})}=
\left[\int_{\Omega_{T-\delta}^*}(|V^\delta|^p+|\partial_tV^\delta|^p+|\partial_xV^\delta|^p+|\partial_{xx}V^\delta|^{p})dxdt\right]^{\frac{1}{p}}
.$$
\end{proposition}
\begin{remark}\label{remark0}
In the existing literature of optimal stopping with delivery lags
(see \cite{Keppo} and \cite{Oksendal} for example), the authors assume that
the payoff $S$ is a linear function of the underlying asset $X$. A
consequence of this assumption is that the process
$\hat{Y}^{\delta}$ is also linear in $X$, which follows from the
linearity of the conditional expectation, and the obstacle in the
variational inequality is therefore also a linear function.
In our case, the payoff $S$ is a piecewise linear function of the
underlying asset $X$ (with a kink point at $K$), and this kink point
propagates via the conditional expectation, resulting in a nonlinear
obstacle $P(T-\delta,\cdot)$. This differentiates our problem from
the existing optimal stopping problems with delivery lags, and makes
the analysis of the corresponding optimal exercise boundary much
more challenging.
\end{remark}
\subsection{Comparison with standard American put option}
We first make a comparison between the American put options with and
without delivery lags. Intuitively, the existence of delivery lags
results in the loss of the opportunities to exercise the option and,
therefore, the option value is smaller than the value of the
standard American put option. Moreover, the longer the lag period
is, the smaller the option value should be. The following
Proposition~\ref{Th5.1} confirms the above intuition.
\begin{proposition}\label{Th5.1}
Suppose that Assumption 2 holds.
Then, the American put option's value function $V^\delta(\cdot,\cdot)$ with time lag $\delta$ is decreasing with respect to
$\delta$ for $\delta\in[0,T)$, and moreover,
\begin{equation}\label{bound}
V^{0}(t,X)\geq V^{\delta}(t,X)\geq V^{0}(t,X)-\delta rK,
\end{equation}
for $(t,X)\in[0,T-\delta]\times\mathbb{R}_+$.
\end{proposition}
\begin{proof}
To prove the monotone property of $V^{\delta}(\cdot,\cdot)$ with
respect to $\delta$, we first establish that
\begin{equation}\label{derivative with respect to t}
{\partial_{t}V^{\delta}\leq 0 \;\;\mbox{a.e.
in}\;\;\Omega_{T-\delta}.}
\end{equation}
To this end, let $V^{\delta}_\ep(t,X)=V^{\delta}(t-\ep,X)$ for
$\ep>0$. Then,
$V^{\delta}_\ep(T-\delta,X)=V^{\delta}(T-\delta-\ep,X)\geq
P(T-\delta,X)$ and, moreover, $V^{\delta}_{\ep}(\cdot,\cdot)$
satisfies
the variational inequality
\bee
\left\{
\begin{array}{ll}
(-\p_t-{{\cal L}})V^{\delta}_{\ep}(t,X)=0,&\mbox{if}\; V^{\delta}_{\ep}(t,X)> P(T-\delta,X),\\
&\mbox{for}\;(t,X)\in[\ep,T-\delta]\times\mathbb{R}_+;
\vspace{2mm} \\
(-\p_t-{{\cal L}})V^{\delta}_{\ep}(t,X)\geq 0,&\mbox{if}\;
V^{\delta}_{\ep}(t,X)=P(T-\delta,X),\\
&\mbox{for}\;(t,X)\in[\ep,T-\delta]\times\mathbb{R}_+;
\vspace{2mm}\\
V^{\delta}_\ep(T-\delta,X)\geq
P(T-\delta,X),&\mbox{for}\;X\in\mathbb{R}_+.
\end{array}
\right.
\eee Hence, $V^{\delta}_{\ep}(t,X)$ is a supersolution of
(\ref{VI11}) for $(t,X)\in[\epsilon,T-\delta]\times\mathbb{R}_+$. It
follows from the comparison principle (see {\cite{Friedman2} or
\cite{Yan1}}) for variational inequality (\ref{VI11}) in the domain
$[\ep,T-\delta)\times\mathbb{R}_+$ that
$V^{\delta}_{\ep}(t,X)=V^{\delta}(t-\epsilon,X)\geq V^{\delta}(t,X)$
for any $\ep>0$. Thus, we conclude that $\p_t V^{\delta}\leq 0$
{a.e. in $\Omega_{T-\delta}$}.
Now suppose $0\leq\delta_2\leq \delta_1<T$. We first compare
$V^{\delta_2}(t,X)$ and $V^{\delta_1}(t,X)$ at $t=T-\delta_1$.
Recall (\ref{optimal_stopping_delay_special_3}), and we have
$$V^{\delta_2}(T-\delta_1,X_{T-\delta_1})=\esssup_{\tau^0\in\mathcal{R}_{T-\delta_{1}}^0}\mathbf{E}\left[e^{-r(\tau^0+\delta_2-(T-\delta_1))}(K-X_{\tau^0+\delta_2})^+|\mathcal{F}_{T-\delta_1}\right].$$
Taking $\tau^0=T-\delta_2$ further yields that
$$V^{\delta_2}(T-\delta_1,X_{T-\delta_1})\geq
\mathbf{E}\left[e^{-r\delta_1}(K-X_{T})^+|\mathcal{F}_{T-\delta_1}\right]=P(T-\delta_1,X_{T-\delta_1}).
$$
On the other hand, it follows from (\ref{european}) that
$$V^{\delta_1}(T-\delta_1,X_{T-\delta_1})=\hat{Y}^{\delta_1}_{T-\delta_1}=\mathbf{E}\left[e^{-r\delta_1}(K-X_{T})^+|\mathcal{F}_{T-\delta_1}\right]=P(T-\delta_1,X_{T-\delta_1}).$$
In turn, $V^{\delta_2}(T-\delta_1,X)\geq
V^{\delta_1}(T-\delta_1,X)$.
In general, for $t\in[0,T-\delta_1]$, since
$\partial_tV^{\delta_2}\leq 0$, it follows that
$$V^{\delta_2}(t,X)\geq V^{\delta_2}(T-\delta_1,X)\geq V^{\delta_1}(T-\delta_1,X)=P(T-\delta_1,X).$$ Hence, $V^{\delta_2}(t,X)$ is a
supersolution of variational inequality (\ref{VI11}) with
$\delta=\delta_1$,
\begin{eqnarray*}
\left\{
\begin{array}{ll}
\min\left\{(-\p_t-{{\cal L}})V^{\delta_2}(t,X),V^{\delta_2}(t,X)-P(T-\delta_1,X)\right\}\geq 0,&\mbox{for}\;(t,X)\in{\Omega}_{T-\delta_1};
\vspace{2mm} \\
V^{\delta_2}(T-\delta_1,X)\geq P(T-\delta_1,X),&\mbox{for}\;X\in\mathbb{R}_+,
\end{array}
\right.
\end{eqnarray*}
from which we obtain that $V^{\delta_2}(t,X)\geq V^{\delta_1}(t,X)$,
for $(t,X)\in\Omega_{T-\delta_1}$, using the comparison principle
for (\ref{VI11}) in the domain $\Omega_{T-\delta_1}$.
To prove the second inequality in (\ref{bound}), we use the early
exercise premium representation of the standard American put option
(see \cite{Jiang} for its proof):
\begin{equation}\label{Decomposition0}
V^{0}(t,X)=P(t,X)+e(t,X),
\end{equation}
where $e$ is the early exercise premium given as
$$e(t,X)=\int_t^{T}ds\int_{0}^{X^{0}(s)}(rK-q\xi)p(t,X;s,\xi)d\xi,$$
with $X^{0}(s)$ being the corresponding optimal exercise boundary, and
$p(t,X;s,\xi)$ being the transition density/the fundamental solution
of the operator $\mathcal{L}$.
Note that for $t\in[T-\delta, T]$,
\begin{align*}
e(t,X)&\leq rK\int_t^{T}ds\int_{0}^{X^{0}(s)}p(t,X;s,\xi)d\xi\\
&\leq
rK\int_t^{T}ds\int_{0}^{\infty}p(t,X;s,\xi)d\xi \leq \delta rK,
\end{align*}
which in turn yields that
$$
{P(T,X)\leq V^0(T-\delta,X)\leq
P(T-\delta,X)+\delta rK=V^{\delta}(T-\delta,X)+\delta rK.}
$$
In general, for $t\in[0,T-\delta]$, since
$(-\partial_t-\mathcal{L})(-\delta rK)=-\delta r^2K\leq 0$, it
follows that $(V^0-\delta rK)$ is a subsolution of variational
inequality (\ref{VI11}), i.e.
\begin{eqnarray*}
\left\{
\begin{array}{lr}
\min\left\{(-\p_t-{{\cal L}})(V^{0}(t,X)-\delta rK),(V^{0}(t,X)-\delta rK)-P(T-\delta,X)\right\}\leq 0,\vspace{2mm}\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mbox{for}\;(t,X)\in{\Omega}_{T-\delta};
\vspace{2mm} \\
V^{0}(T-\delta,X)-\delta rK\leq P(T-\delta,X),\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mbox{for}\;X\in\mathbb{R}_+,
\end{array}
\right.
\end{eqnarray*}
from which we conclude that $V^{0}(t,X)-\delta rK\leq
V^{\delta}(t,X)$, for $(t,X)\in\Omega_{T-\delta}$, using the
comparison principle for (\ref{VI11}) in the domain
$\Omega_{T-\delta}$.
\end{proof}
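The bounds in (\ref{bound}) are straightforward to check numerically.
The following is a minimal sketch (ours, not part of the proofs, and
all parameter values are hypothetical): it prices the standard
American put and the lagged option on the same binomial tree,
treating the lagged option as an optimal stopping problem with
obstacle $P(T-\delta,X)$, i.e. the Black--Scholes European put with
time to maturity $\delta$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

r, q, sigma, K, T, delta = 0.05, 0.02, 0.3, 1.0, 1.0, 0.25  # hypothetical
N = 2000                                      # binomial steps on [0, T]
dt = T / N
up = np.exp(sigma * np.sqrt(dt)); dn = 1.0 / up
p = (np.exp((r - q) * dt) - dn) / (up - dn)   # risk-neutral up-probability
disc = np.exp(-r * dt)

def bs_put(X, tau):  # European put with dividend yield q; the obstacle P
    d1 = (np.log(X / K) + (r - q + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - X * np.exp(-q * tau) * norm.cdf(-d1)

def price(lag_steps):
    M = N - lag_steps                         # last admissible exercise step
    if lag_steps > 0:
        obstacle = lambda X: bs_put(X, lag_steps * dt)
    else:
        obstacle = lambda X: np.maximum(K - X, 0.0)
    j = np.arange(M + 1)
    V = obstacle(K * up**j * dn**(M - j))     # value at the last exercise date
    for n in range(M - 1, -1, -1):
        j = np.arange(n + 1)
        X = K * up**j * dn**(n - j)
        V = np.maximum(disc * (p * V[1:] + (1 - p) * V[:-1]), obstacle(X))
    return V[0]

V0, Vd = price(0), price(int(round(delta / dt)))
print(V0 >= Vd >= V0 - delta * r * K, V0, Vd)  # the bounds (bound) at t=0, X=K
\end{verbatim}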
\subsection{An early exercise premium decomposition formula}
We derive a decomposition formula for the American put option with
delivery lags, which can be regarded as a counterpart of the early
exercise premium representation (\ref{Decomposition0}) for the
standard case. Such a decomposition formula is crucial to the
analysis of the optimal exercise boundary in sections 3.3 and 4.
We show that the American put option with delivery lags can be
decomposed as the sum of the European put option $P(T-\delta,X)$ and
another American-style derivative with the running payoff
$\Theta^{\delta}(X)$, where $\Theta^{\delta}(\cdot)$ is the Greek
Theta of the European option:
$\Theta^{\delta}(X)=-\partial_tP(T-\delta,X)$.
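Although the precise expression for $\Theta^{\delta}$ is recorded in
(\ref{freeterm}) in the appendix, it may help to keep in mind that,
writing $d_1$ and $d_2$ for the usual Black--Scholes arguments with
time to maturity $\delta$ (cf. (\ref{defofN})), this Greek takes the
familiar form
$$
\Theta^{\delta}(X)={\sigma Xe^{-q\delta}N'(d_1)\over 2\sqrt{\delta}}
+qXe^{-q\delta}N(-d_1)-rKe^{-r\delta}N(-d_2),
$$
which is consistent with the limit
$\theta^{\delta}(x)\rightarrow qKe^{x}-rK$ for $x<\overline{X}$ as
$\delta\rightarrow0^+$ in Proposition \ref{Pro3}(ii) below.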
\begin{lemma}\label{lemma1} Suppose that Assumption $\ref{Assumption2}$ holds. Then, for $t\in[0,T-\delta]$, the
value of the American put option with time lag $\delta$ can be
decomposed as
\begin{equation}\label{Decomposition}
Y_t^{\delta}=P(T-\delta,X_t)+U^{\delta}(t,X_t),
\end{equation}
where
\begin{equation}\label{optimal_stopping_delay_special_4}
U^{\delta}(t,X_t)=\esssup_{\tau^{0}\in\mathcal{R}_t^{0}}\mathbf{E}\left[\int_t^{\tau^{0}}e^{-r(s-t)}\Theta^{\delta}(X_s)ds|\mathcal{F}_t\right].
\end{equation}
The optimal stopping time for
(\ref{optimal_stopping_delay_special_4}) is $\tau^{0,*}$ as given in
Lemma \ref{proposition1}.
\end{lemma}
\begin{proof} Recall that
$\widehat{Y}_t^{\delta}=\mathbf{E}\left[e^{-r\delta}(K-X_{t+\delta})^+|\mathcal{F}_t\right]
=P(T-\delta,X_t)$. Define
$U^{\delta}_t=Y^{\delta}_t-\widehat{Y}_t^{\delta}$, for
$t\in[0,T-\delta]$. It then follows from the reflected BSDE
(\ref{RBSDE1}) and It\^o's formula that
\begin{align*}
U^{\delta}_t=&\ P(T-\delta,X_{T-\delta})-P(T-\delta,X_t)-\int_t^{T-\delta}r(U_s^{\delta}+P(T-\delta,X_s))ds\\
&+\int_t^{T-\delta}dK_s^{\delta}-\int_t^{T-\delta}Z_s^{\delta}dW_s\\
=&\int_t^{T-\delta}(\mathcal{L}P(T-\delta,X_s)-rU_s^{\delta})ds\\
&+\int_t^{T-\delta}dK_s^{\delta}+\int_t^{T-\delta}(\sigma
X_s\partial_XP(T-\delta,X_s)-Z_s^{\delta})dW_s.
\end{align*}
Since
$\mathcal{L}P(T-\delta,X_s)-\Theta^{\delta}(X_s)=(\mathcal{L}+\partial_t)P(T-\delta,X_s)=0$,
we further have
\begin{align*}
U^{\delta}_t=&\int_t^{T-\delta}(\Theta^{\delta}(X_s)-rU_s^{\delta})ds\\
&+\int_t^{T-\delta}dK_s^{\delta}+\int_t^{T-\delta}(\sigma
X_s\partial_XP(T-\delta,X_s)-Z_s^{\delta})dW_s.
\end{align*}
Moreover, the reflected BSDE (\ref{RBSDE1}) implies that
$U_t^{\delta}=Y^{\delta}_t-\widehat{Y}_t^{\delta}\geq 0$, and
$$\int_0^{T-\delta}U_t^{\delta}dK_t^{\delta}
=\int_0^{T-\delta}(Y^{\delta}_t-\widehat{Y}_t^{\delta})dK_t^{\delta}=0.$$
In turn, using Proposition 2.3 in \cite{ElKaroui1997_1}, we conclude
that $U_t^{\delta}$ is the time $t$ value of the optimal stopping
time problem (\ref{optimal_stopping_delay_special_4}) with the
optimal stopping time given as $\inf\{s\in[t,T-\delta]:
U_s^{\delta}=0\}$, and moreover, by Assumption \ref{Assumption2},
$U_t^{\delta}=U^{\delta}(t,X_t)$ for some measurable function
$U^{\delta}(\cdot,\cdot)$.
Finally, it is immediate to check that $\tau^{0,*}$ is the optimal
stopping time for (\ref{optimal_stopping_delay_special_4}) using the
definition of $U^{\delta}_t$.
\end{proof}
\begin{remark}\label{remark}
One of the advantages of the optimal stopping formulation
(\ref{optimal_stopping_delay_special_4}) is that it has no terminal
payoff but only a running payoff, which will facilitate our analysis
of the associated optimal exercise boundary. {In the
rest of the paper, we shall focus our analysis on the optimal
stopping problem (\ref{optimal_stopping_delay_special_4}) and its
associated optimal exercise boundary, which will in turn solve the
original optimal stopping problem
(\ref{optimal_stopping_delay_special_3})}.
\end{remark}
The Feynman-Kac formula in Section 9 of \cite{ElKaroui1997_1} yields
that $U^{\delta}(t,X)$ is the unique (viscosity) solution of the
variational inequality \be \label{VI1}
\left\{
\begin{array}{ll}
(-\p_t-{{\cal L}})U^{\delta}(t,X)=\Theta^{\delta}(X),\ &\mbox{if}\;U^{\delta}(t,X)>0,\;\mbox{for}\;(t,X)\in{\Omega}_{T-\delta};
\vspace{2mm} \\
(-\p_t-{{\cal L}})U^{\delta}(t,X)\geq \Theta^{\delta}(X),\ &\mbox{if}\;U^{\delta}(t,X)=0,\;\mbox{for}\;(t,X)\in{\Omega}_{T-\delta};
\vspace{2mm}\\
U^{\delta}(T-\delta,X)=0,\ &\mbox{for}\ X\in\mathbb{R}_+.
\end{array}
\right.
\ee Introduce the transformation\footnote{For notational simplicity,
we suppress the superscript $\delta$ in $u^{\delta}$ and
$\theta^{\delta}$, and use $u$ and $\theta$ instead. The same
convention applies to the optimal exercise boundary $x(\tau)$ in
section 4.}
\begin{equation}\label{transform}
x=\ln X-\ln K,\quad \tau=T-\delta-t,\quad
u(\tau,x)=U^{\delta}(t,X),\quad
\theta(x)=\Theta^{\delta}(X).
\end{equation}
Then (\ref{VI1}) reduces to \be \label{VI2}
\left\{
\begin{array}{ll}
(\p_\tau -\widetilde{{\cal L}})u(\tau,x)=\theta(x),\ &\mbox{if}\;u(\tau,x)>0,\;\mbox{for}\;(\tau,x)\in{\cal N}_{T-\delta};
\vspace{2mm} \\
(\p_\tau -\widetilde{{\cal L}})u(\tau,x)\geq \theta(x),\ &\mbox{if}\;u(\tau,x)=0,\;\mbox{for}\;(\tau,x)\in{\cal N}_{T-\delta};
\vspace{2mm}\\
u(0,x)=0,\ &\mbox{for}\ x\in\mathbb{R},
\end{array}
\right.
\ee
where $
{\cal N}_{T-\delta}=(0,T-\delta\,]\times\mathbb{R}$, and
\begin{equation*}
\widetilde{{\cal
L}}={\sigma^2\over2}\,\p_{xx}+\left(\,r-q-{\sigma^2\over2}\,\right)\p_x-r.
\end{equation*}
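This is the standard log-price reduction: since $X\p_X=\p_x$ and
$X^2\p_{XX}=\p_{xx}-\p_x$ under $x=\ln X-\ln K$, the Black--Scholes
operator ${\cal L}={\sigma^2\over2}X^2\p_{XX}+(r-q)X\p_X-r$ turns
into the constant-coefficient operator $\widetilde{\cal L}$ above,
while $-\p_t$ becomes $\p_\tau$ under $\tau=T-\delta-t$.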
For later use, we present some basic properties of the Greek Theta
$\Theta^{\delta}(X)$, or equivalently, of the function $\theta(x)$
with $x=\ln X-\ln K$; the proof is given in Appendix A.1.
\begin{proposition}\label{Pro3}
{(i)} There exists a unique zero crossing point
$\overline{X}\in\mathbb{R}$ such that
$\theta(\overline{X})=0$.
In addition, $\theta(x)<0$ for any $x<\overline{X}$, $\theta(x)>0$ for any
$x>\overline{X}$, and $\theta^{\prime}(\overline{X})>0$.
{(ii) For any $x<\overline{X}$, $\theta^\delta(x)\rightarrow qKe^x-rK$ as $\delta\rightarrow0^+$, where we use the
superscript
$\delta$ to emphasize the dependence of $\theta(\cdot)$ on $\delta$.}
\end{proposition}
{Thanks to Proposition \ref{regularity1} and transformation
(\ref{transform}), we deduce that the solution $u(\cdot,\cdot)$
admits the following regularities.}
\begin{proposition}\label{Th1}
Suppose that Assumption 2 holds. Then, the viscosity solution
$u(\cdot,\cdot)$ is also the unique {bounded} strong solution to
variational inequality~\eqref{VI2}, and satisfies $u\in
W^{2,1}_{p,loc}({\cal N}_{T-\delta})\cap C(\overline{{\cal
N}_{T-\delta}})$ for $p\geq
1$.
Moreover, $\p_x u\in C(\overline{{\cal N}_{T-\delta}})$.
\end{proposition}
\subsection{The perpetual case and its optimal exercise boundary}
We consider the perpetual version of the optimal stopping problem
(\ref{optimal_stopping_delay_special_4}), whose solution admits
explicit expressions (cf. (\ref{ivisolution}) and (\ref{l0}) below).
The perpetual problem is also closely related to the asymptotic
analysis of the optimal exercise boundary in section 4.
Suppose that Assumption 2 holds. For any $\mathbb{F}$-stopping time
$\tau^{0}\geq t$, we consider
\begin{equation}\label{optimal_stopping_delay_special_5}
U^{\delta}_{\infty}(X_t)=\esssup_{\tau^{0}\geq
t}\mathbf{E}\left[\int_t^{\tau^0}e^{-r(s-t)}\Theta^{\delta}(X_s)ds|\mathcal{F}_t\right].
\end{equation}
Using arguments similar to those in section 3.2, we obtain that
$U^{\delta}_{\infty}(X)=u_{\infty}(x)$, where $x=\ln X-\ln K$, and
$u_{\infty}(\cdot)$ is the unique {bounded} strong solution to the
stationary variational inequality \be\label{IVI}
\left\{
\begin{array}{ll}
-\widetilde{{\cal L}}\,u_\infty(x)=\theta(x),
&\mbox{if}\;u_\infty(x)>0,\;\mbox{for}\;x\in\mathbb{R};
\vspace{2mm} \\
-\widetilde{{\cal L}}\,u_\infty(x)\geq \theta(x),
&\mbox{if}\;u_\infty(x)=0,\;\mbox{for}\;x\in\mathbb{R},
\end{array}
\right.
\ee with $u_\infty\in W^2_{p,loc}(\mathbb{R})$ for
$p\geq1$, and $(u_\infty)^\prime\in C(\mathbb{R})$.
In the domain $\{u_{\infty}(x)=0\}$, the investor will exercise the
option. Since $-\widetilde{\cal{L}}\, 0=0\geq \theta(x)$ by
(\ref{IVI}), and $\{\theta(x)\leq 0\}=\{x\leq \overline{X}\}$ by
Proposition \ref{Pro3}, it follows that
\begin{equation}\label{region}
\{x\leq \overline{X}\}\supseteq\{u_{\infty}(x)=0\},\ \text{and}\
\{x> \overline{X}\}\subseteq\{u_{\infty}(x)>0\}.
\end{equation}
We can then define the \emph{optimal exercise boundary}
$\underline{X}$ as\footnote{{Note that from the definition of
$\underline{X}$, it may be possible that $\underline{X}=-\infty$. We
will however exclude such a situation in Proposition \ref{le1}.}}
\begin{equation}\label{freeboundary_0}
\underline{X}=\inf\{x\in\mathbb{R}: u_{\infty}(x)>0\}.
\end{equation}
The continuity of $u_{\infty}(\cdot)$ implies that $u_{\infty}(x)=0$
for $x\leq \underline{X}$ and, therefore, the investor will exercise
the option in $(-\infty,\underline{X}]$. Moreover, it follows from
(\ref{region}) and (\ref{freeboundary_0}) that $\underline{X}\leq
\overline{X}$.
The next proposition relates variational inequality (\ref{IVI}) to a
free-boundary problem, which in turn provides the explicit
expressions for $u_{\infty}(\cdot)$ and $\underline{X}$.
\begin{proposition}\label{le1}
For $x>\underline{X}$, it holds that $u_{\infty}(x)>0$. Moreover,
$(u_{\infty}(\cdot),\underline{X})$ is the unique {bounded} solution
to the free-boundary problem \be\label{free_boundary_problem_1}
\left\{
\begin{array}{ll}
-\widetilde{{\cal L}}\,u_\infty(x)=\theta(x),
&\mbox{for}\;x>\underline{X};
\vspace{2mm} \\
u_\infty(x)=0,
&\mbox{for}\;x\leq \underline{X};
\vspace{2mm} \\
(u_{\infty}){'}(\underline{X})=0,&\text{(smooth-pasting
condition)},
\end{array}
\right.
\ee and satisfies {$\overline{X}> \underline{X}>-\infty$}.
\end{proposition}
\begin{proof} {\emph{Step 1}. We prove that
$(u_{\infty}(\cdot),\underline{X})$ satisfies the free-boundary
problem (\ref{free_boundary_problem_1}).} To this end, we first show
what $u_{\infty}(x)>0$ for $x>\underline{X}$. Since
$u_{\infty}(x)>0$ for $x>\overline{X}$, we only need to show that
$u_{\infty}>0$ on $(\underline{X},\overline{X}]$. If not, let
$x_1,\,x_2\in[\,\underline{X},\overline{X}\,]$ be such that
$$x_1<x_2,\,u_{\infty}(x_1)=u_{\infty}(x_2)=0,\ \text{and}\
u_{\infty}(x)>0\ \text{for\ any}\ x\in(x_1,x_2).$$ Using variational
inequality (\ref{IVI}) and Proposition \ref{Pro3}, we obtain that
\begin{eqnarray*}
\left\{
\begin{array}{ll}
-\widetilde{{\cal L}}\,u_\infty(x)=\theta(x)\leq 0,
&\mbox{for}\ x\in(x_1,x_2);
\vspace{2mm} \\
u_\infty(x_1)=u_{\infty}(x_2)=0.
&
\end{array}
\right.
\end{eqnarray*}
The comparison principle then implies that $u_{\infty}(x)\leq 0$ for
$x\in(x_1,x_2)$, which is a contradiction.
To prove the smooth-pasting condition, we observe that
$(u_{\infty})^{\prime}$ is continuous, and that $u_{\infty}(x)=0$
for $x\leq \underline{X}$. Therefore,
$(u_{\infty})^{\prime}(\underline{X}+0)=(u_{\infty})^{\prime}(\underline{X}-0)=0$,
and $(u_{\infty}(\cdot),\underline{X})$ indeed satisfies the free
boundary problem (\ref{free_boundary_problem_1}).\smallskip
\emph{Step 2.} We prove that $(u_{\infty}(\cdot),\underline{X})$ is
actually the unique solution to (\ref{free_boundary_problem_1}). To
this end, we first show that if
$(u_{\infty,1}(\cdot),\underline{X}_1)$ is any solution solving
(\ref{free_boundary_problem_1}), then it is necessary that
$\underline{X}_1<\overline{X}$. If not, by
(\ref{free_boundary_problem_1}) and Proposition \ref{Pro3}, we have
\begin{eqnarray*}
\left\{
\begin{array}{ll}
-\widetilde{{\cal L}}\,u_{\infty,1}(x)=\theta(x)>0,
&\mbox{for}\;x>\underline{X}_1\geq \overline{X};
\vspace{2mm} \\
u_{\infty,1}(\underline{X}_1)=(u_{\infty,1}){'}(\underline{X}_1)=0.
\end{array}
\right.
\end{eqnarray*}
The strong comparison principle (see \cite{Evans}) then implies that
$u_{\infty,1}(x)>0$ for $x>\underline{X}_1$.
Next we compare $u_{\infty,1}(x)$ with an auxiliary function
$$\underline{w}(x)=u_{\infty,1}(\underline{X}_1+1)w(x;\underline{X}_1,\underline{X}_1+1)$$
in the interval $(\underline{X}_1,\underline{X}_1+1)$, where
$$
w(x;a,b)={e^{\lambda^+(x-a\,)}-e^{\lambda^-(x-a\,)}\over e^{\lambda^+(b-a)}-e^{\lambda^-(b-a)}},
$$ with $\lambda^+$ and $\lambda^-$ being, respectively, the positive and negative
characteristic roots of $\widetilde{\cal{L}}$:
$$
{\sigma^2\over 2}\lambda^2+\left(\,r-q-{\sigma^2\over
2}\,\right)\lambda-r=0.
$$
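Explicitly, the quadratic formula gives
$$
\lambda^{\pm}={-\left(r-q-{\sigma^2\over2}\right)
\pm\sqrt{\left(r-q-{\sigma^2\over2}\right)^2+2\sigma^2 r}\over\sigma^2},
$$
and since the product of the two roots equals $-2r/\sigma^2<0$, one
root is indeed positive and the other negative.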
It is clear that
$$w(a;a,b)=0,\quad w(b;a,b)=1,\quad w'(a;a,b)>0,\quad -\widetilde{\cal{L}}w=0\ \text{in}\ (a,b).$$
In turn,
\begin{eqnarray*}
\left\{
\begin{array}{ll}
-\widetilde{{\cal
L}}\underline{w}(x)=0<-\widetilde{{\cal L}}u_{\infty,1}(x),\qquad\mbox{for}\;x\in(\underline{X}_1,\underline{X}_1+1);
\vspace{2mm} \\
u_{\infty,1}(\underline{X}_1)=\underline{w}(\underline{X}_1),\qquad\ \
u_{\infty,1}(\underline{X}_1+1)=\underline{w}(\underline{X}_1+1).
\end{array}
\right.
\end{eqnarray*}
Hence, the comparison principle implies that $
u_{\infty,1}(x)\geq \underline{w}(x)$ for
$x\in(\underline{X}_1,\underline{X}_1+1)$.
In turn, $(u_{\infty,1})^\prime(\underline{X}_1)\geq
\underline{w}^\prime(\underline{X}_1)>0,
$
which contradicts the smooth-pasting condition $(u_{\infty,1})^\prime(\underline{X}_1)=0$.
Now we show that $(u_{\infty}(\cdot),\underline{X})$ is the unique
solution to (\ref{free_boundary_problem_1}). If not, let
$(u_{\infty,\,1},\underline{X}_1)$ be another solution of the free-boundary problem \eqref{free_boundary_problem_1}. Without loss of
generality, we may assume that
$\underline{X}_1<\underline{X}<\overline{X}$.
It is immediate to check that
\begin{eqnarray*}
\left\{
\begin{array}{ll}
-\widetilde{\cal L}u_{\infty,\,1}(x)=\theta(x)\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq \theta(x) I_{\{x>\underline{X}\}}
=-\widetilde{\cal L}u_{\infty}(x),\ \text{for}\
x\in(\,\underline{X}_1,\infty);
\vspace{2mm}\\
u_{\infty,\,1}(\underline{X}_1)=u_{\infty}(\underline{X}_1)=0;\\
(u_{\infty,\,1})^\prime(\underline{X}_1)=(u_{\infty})^\prime(\underline{X}_1)=0,
\end{array}
\right.
\end{eqnarray*}
where we have used the fact $\theta(x)<0$ for any $x\leq \underline{X}<\overline{X}$. {The comparison principle} then implies that
$u_{\infty,\,1}(x)\leq u_{\infty}(x)$ and, in particular,
$u_{\infty,\,1}(x)\leq u_{\infty}(x)=0$ for
$x\in[\,\underline{X}_1,\underline{X}]$.
On the other hand, applying Taylor's expansion to $u_{\infty,1}(x)$
yields
$$u_{\infty,1}(x)=\frac12u_{\infty,1}''(\underline{X}_1+0)(x-\underline{X}_1)^2(1+o(1))=\frac{-\theta(\underline{X}_1)}{\sigma^2}
(x-\underline{X}_1)^2(1+o(1)),$$ which further implies that
$u_{\infty,1}(x)>0$ if $x$ is close enough to $\underline{X}_{1}$.
Thus, we obtain a contradiction.
\smallskip
\emph{Step 3}. We prove that $\overline{X}> \underline{X}>-\infty$.
Since we have already shown that $\underline{X}<\overline{X}$ in
Step 2, it is sufficient to prove that $\underline{X}>-\infty$.
{In fact, using} the free-boundary formulation
(\ref{free_boundary_problem_1}), we further obtain that its solution
must have the form
\begin{equation*}
u_\infty(x)=CK e^{\lambda^-x}-p(x),\ \text{for}\
x>\underline{X},
\end{equation*}
where the constant $C$ is to be determined,
$\lambda^-$ is the negative root of the characteristic equation
for $\widetilde{\cal{L}}$,
and
$p(x)=p(T-\delta,x)$ is the price of the European put option (cf.
(\ref{price_european}) with $t=T-\delta$).
In order to fix the constant $C$ and the optimal exercise boundary
$\underline{X}$, we make use of the boundary and smooth-pasting
conditions in (\ref{free_boundary_problem_1}), and obtain that
\begin{eqnarray*}
\left\{
\begin{array}{ll}
CKe^{\lambda^-\underline{X}}=p(\underline{X}\,)=\left[\,Ke^{-r\delta}N(-\underline{d}\,_2)
-Ke^{\underline{X}-q\delta}N(-\underline{d}\,_1)\right];
\vspace{2mm}\\
CK\lambda^-e^{\lambda^-\underline{X}}=p^\prime(\underline{X}\,)
=-Ke^{\underline{X}-q\delta}N(-\underline{d}\,_1),
\end{array}
\right.
\end{eqnarray*} where $\underline{d}\,_1$ and $\underline{d}\,_2$ are the same
as $d_1$ and $d_2$ in (\ref{defofN}) except that $x$ is replaced by
$\underline{X}$ (see Appendix A.1 for the notations). Thus, we
obtain that \be\label{ivisolution}
u_\infty(x)=\left\{
\begin{array}{ll}
p(\underline{X}) \,e^{\lambda^-(x-\underline{X}\,)}-p(x) ,
&\text{for}\ x>\underline{X}\;;
\vspace{2mm}\\
0 &\text{for}\ x\leq\underline{X}\,,
\end{array}
\right.
\ee and $\underline{X}$ is the zero crossing point of the algebraic
equation \be\label{l0}
l(x)=\lambda^-e^{-r\delta}N(-{d}_2)+(1-\lambda^-)e^{x-q\delta}N(-{d}_1)=0.
\ee
{Next, we prove that the zero crossing point of $l(x)=0$ exists and
is unique. It is clear that, when $x\rightarrow-\infty$,
$$
d_1,\,d_2\rightarrow-\infty,\;\;N(-d_1),\,N(-d_2)\rightarrow1,\;\;
l(x)\rightarrow\lambda^-e^{-r\delta}+o(1)<0.
$$
Hence, $l(x)$ is negative provided $x$ is small enough. On the other
hand, by (\ref{inequ1}) and (\ref{inequ2}), we have
$$
d_1,\,d_2\rightarrow+\infty,\;\;
N(-d_1)={N'(-d_1)\over d_1}\Big(1+o(1)\Big),\;\;
N(-d_2)={N'(-d_2)\over d_2}\Big(1+o(1)\Big),
$$
as $x\rightarrow+\infty$, and therefore,
\begin{eqnarray*}
{l(x)e^{r\delta}\over N'(-d_2)}
&=&{\lambda^-\over d_2}\Big(1+o(1)\Big)
+{1-\lambda^-\over d_1}\Big(1+o(1)\Big)
\\[2mm]
&=&{d_2+\lambda^-(d_1-d_2)\over
d_1\,d_2}\Big(1+o(1)\Big)={1\over d_1}\Big(1+o(1)\Big).
\end{eqnarray*}
Hence, $l(x)$ is positive provided $x$ is large enough. Thus, we
deduce that there exists at least one zero crossing point of
$l(x)=0$.} Thanks to the uniqueness of the solution to the
free-boundary problem (\ref{free_boundary_problem_1}), we know that
the zero crossing point of the algebraic equation (\ref{l0}) is also
unique, from which we conclude that $\underline{X}>-\infty$.
\end{proof}
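As an aside, the boundary $\underline{X}$ is easy to compute in
practice from (\ref{l0}). The sketch below (ours, with hypothetical
parameter values) assumes the standard Black--Scholes form
$d_1=\big(x+(r-q+\sigma^2/2)\delta\big)/(\sigma\sqrt{\delta})$ and
$d_2=d_1-\sigma\sqrt{\delta}$; the precise notation is fixed in
Appendix A.1.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

r, q, sigma, delta = 0.05, 0.02, 0.3, 0.25     # hypothetical parameters

# negative root of (sigma^2/2) lam^2 + (r - q - sigma^2/2) lam - r = 0
b = r - q - 0.5 * sigma**2
lam_m = (-b - np.sqrt(b**2 + 2.0 * sigma**2 * r)) / sigma**2

def l(x):                                      # the function in (l0)
    d1 = (x + (r - q + 0.5 * sigma**2) * delta) / (sigma * np.sqrt(delta))
    d2 = d1 - sigma * np.sqrt(delta)
    return (lam_m * np.exp(-r * delta) * norm.cdf(-d2)
            + (1.0 - lam_m) * np.exp(x - q * delta) * norm.cdf(-d1))

# l < 0 for small x and l > 0 for large x (see the proof above), so any
# bracket with a sign change isolates the unique zero crossing point.
x_low = brentq(l, -10.0, 1.0)
print("lambda^- =", lam_m, "  underline{X} =", x_low)
\end{verbatim}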
\section{Analysis of the optimal exercise boundary}
We study the optimal exercise boundary of the American put option
with time lag $\delta$ under Assumption 2. Our main results are
first illustrated in Figures 1 and 2 below.
\begin{picture}(0,130)(140,0)
\put(190,10){\vector(1,0){150}} \put(260,6){\vector(0,1){100}}
\put(258,108){$\tau$}\put(343,10){$x$}
{\thicklines\qbezier(280,9)(220,46)(210,100)}
\put(207,10){\line(0,1){10}}\put(207,25){\line(0,1){10}}\put(207,40){\line(0,1){10}}
\put(207,55){\line(0,1){10}}\put(207,70){\line(0,1){10}}\put(207,85){\line(0,1){10}}
\put(277,7){$\bullet$}\put(282,12){$\overline{X}$}
\put(200,50){${\bf ER}$}\put(251,50){${\bf CR}$}
\put(200,105){${ x(\tau)}$}
\put(200,0){$\underline{X}$}
\put(175,80){$u=0$}\put(250,80){$u>0$}
\end{picture}
\begin{center}
\small{Figure 1: Optimal exercise boundary $x(\tau)$ under the
coordinates $(\tau,x)$.}
\end{center}
\begin{picture}(0,130)(140,0)
\put(190,10){\vector(1,0){150}} \put(200,6){\vector(0,1){110}}
\put(195,115){$t$}\put(343,10){$X$}
{\thicklines\qbezier(225,10)(240,85)(310,90)}
\put(220,10){\line(0,1){10}}\put(220,25){\line(0,1){10}}\put(220,40){\line(0,1){10}}
\put(220,55){\line(0,1){10}}\put(220,70){\line(0,1){10}}\put(220,85){\line(0,1){10}}
\put(200,90){\line(1,0){10}}\put(215,90){\line(1,0){10}}\put(230,90){\line(1,0){10}}
\put(245,90){\line(1,0){10}}\put(260,90){\line(1,0){10}}\put(275,90){\line(1,0){10}}
\put(290,90){\line(1,0){10}}\put(305,90){\line(1,0){10}}\put(310,90){\line(1,0){8}}
\put(325,90){\line(1,0){10}}
\put(198,87){$\bullet$}\put(174,84){$T-\delta$}
\put(200,105){\line(1,0){10}}\put(215,105){\line(1,0){10}}\put(230,105){\line(1,0){10}}
\put(245,105){\line(1,0){10}}\put(260,105){\line(1,0){10}}\put(275,105){\line(1,0){10}}
\put(290,105){\line(1,0){10}}\put(305,105){\line(1,0){10}}\put(310,105){\line(1,0){8}}
\put(325,105){\line(1,0){10}}
\put(198,102){$\bullet$}\put(182,101){$T$}
\put(210,50){${\bf ER}$}\put(320,50){${\bf CR}$}
\put(260,65){${X^\delta(t)}$}
\put(218,7){$\bullet$}\put(200,-3){$K e^{\underline{X}}$}
\put(308,87){$\bullet$}\put(300,92){$K e^{\overline{X}}$}
\put(205,63){$U^\delta=0$}\put(310,63){$U^\delta>0$}
\end{picture}
\begin{center}
\small{Figure 2: Optimal exercise boundary $X^{\delta}(t)$ under the
coordinates $(t,X)$.}
\end{center}
Figure 1 is under the coordinates $(\tau,x)$, and Figure 2 is under
the coordinates $(t,X)$,
where $\tau=T-\delta-t$ and $x=\ln X-\ln
K$ (cf. the transformation (\ref{transform})).
Figure 2 illustrates that the whole region $\Omega_{T-\delta}$ is
divided by a curve $X^{\delta}(t)$ into two parts. In the left
region, the investor will exercise the option (with time lag
$\delta$), and in the right region the investor will hold the
option. Hence, $X^{\delta}(t)$ is called the \emph{optimal exercise
boundary}. We will prove that it is a strictly increasing and smooth
function, with $Ke^{\underline{X}}$ as its asymptote when the
maturity $T$ goes to infinity, and $Ke^{\overline{X}}$ as its end
point at time $T-\delta$. If we denote by $x(\cdot)$ the optimal
exercise boundary under the coordinates $(\tau,x)$, as shown in
Figure 1, then we have the
relationship
\begin{equation}\label{relation}
X^{\delta}(t)=K\exp{\{x(T-\delta-t)\}}.
\end{equation}
\subsection{The optimal exercise boundary}
Due to Remark \ref{remark}, we will mainly work with variational
inequality (\ref{VI2}) for $u(\cdot,\cdot)$ in the following. Recall
${\cal N}_{T-\delta}=(0,T-\delta\,]\times\mathbb{R}$. Define the
exercise domain ${\bf ER}$ and the continuation domain ${\bf CR}$ as
\begin{align*}
{\bf ER}&=\{(\tau,x)\in\mathcal{N}_{T-\delta}:u(\tau,x)=0\};\qquad
\\{\bf CR}&= \{(\tau,x)\in\mathcal{N}_{T-\delta}:u(\tau,x)>0\}.
\end{align*}
\begin{lemma}\label{Th3}
Let $\overline{X}$ and $\underline{X}$ be given in Proposition
\ref{Pro3} and (\ref{freeboundary_0}), respectively. Then, it holds
that
$$\{\,x\leq \overline{X}\,\}\supseteq{\bf ER}\supseteq\{\,x\leq\underline{X}\,\},\ \text{and}\ \{\,x> \overline{X}\,\}\subseteq{\bf CR}\subseteq\{\,x>\underline{X}\,\}.$$
\end{lemma}
\begin{proof}
If $(\tau,x)\in {\bf ER}$, then $u(\tau,x)=0$ and
variational inequality (\ref{VI2}) reduces to
$$
(\p_\tau-\widetilde{{\cal L}})u(\tau,x)=0\geq \theta(x).
$$
Since $\{\theta(x)\leq 0\}=\{x\leq \overline{X}\}$, it follows that
${\bf ER}\subseteq\{\,x\leq \overline{X}\,\}$.
In order to prove that ${\bf ER}\supseteq\{\,x\leq \underline{X}\,\}$, we compare $u(\cdot,\cdot)$ and $u_\infty(\cdot)$,
the latter of which is the solution to variational
inequality~\eqref{IVI}. Note that
\bee
\left\{
\begin{array}{ll}
(\p_\tau -\widetilde{{\cal L}})u_\infty(x)=\theta(x),&\mbox{if}\;u_\infty(x)>0,\;\mbox{for}\;(\tau,x)\in{\cal N}_{T-\delta};
\vspace{2mm} \\
(\p_\tau-\widetilde{{\cal L}})u_\infty(x)\geq \theta(x),&\mbox{if}\;u_\infty(x)=0,\;\mbox{for}\;(\tau,x)\in{\cal N}_{T-\delta};
\vspace{2mm}\\
u_\infty(x)\geq 0=u(0,x),&\text{for}\ x\in\mathbb{R}.
\end{array}
\right.
\eee
The comparison principle for variational inequality (\ref{VI2}) in the domain $\mathcal{N}_{T-\delta}$ then implies that $u(\tau,x)\leq u_\infty(x)$. But
if $x\leq \underline{X}$, according to the free-boundary problem (\ref{free_boundary_problem_1}), $u_{\infty}(x)=0$. In turn, $u(\tau,x)=0$. This proves
that
$\{x\leq \underline{X}\,\}\subseteq {\bf ER}$.
\end{proof}
Intuitively, when $\theta(x)$ is positive (i.e. $x> \overline{X}$),
the running payoff in (\ref{optimal_stopping_delay_special_4}) is
also positive, so the investor will hold the option. On the
contrary, when $\theta(x)$ is non-positive (i.e. $x\leq
\overline{X}$), one may think that the investor would then exercise
the option to stop her losses. However, the above lemma shows that
for $x\in(\underline{X},\overline{X}]$ the investor may still hold
the option, and wait for the running payoff to rally at a later time
to recover her previous losses.
Next, we define the \emph{optimal exercise boundary} $x(\tau)$ as
\begin{equation}\label{freeboundary_10}
x(\tau)=\inf\{x\in\mathbb{R}: u(\tau,x)>0\},
\end{equation} for any $\tau\in(0,T-\delta]$. It follows from Lemma \ref{Th3}
that $x(\tau)\in[\underline{X},\overline{X}]$, and by the continuity
of $u(\cdot,\cdot)$, $u(\tau,x)=0$ for $x\leq x(\tau)$.
\begin{lemma}\label{lemma11} For $\tau\in(0,T-\delta]$, let
$$x_1(\tau)=\sup\{x\in\mathbb{R}: u(\tau,x)=0\}.$$
Then, $x(\tau)=x_{1}(\tau)$. Hence, $x(\tau)$ is the unique curve
separating $\mathcal{N}_{T-\delta}$ such that $u(\tau,x)=0$ for
$x\leq x(\tau)$, and $u(\tau,x)>0$ for $x> x(\tau)$.
\end{lemma}
\begin{proof} The definition of $x_{1}(\tau)$ implies that $x(\tau)\leq
x_1(\tau)$ and $u(\tau,x)>0$ for $x> x_1(\tau)$. Moreover, it
follows from Lemma \ref{Th3} that
$x_1(\tau)\in[\underline{X},\overline{X}]$.
Suppose $x(\tau^*)< x_1(\tau^*)$ for some $\tau^*\in(0,T-\delta]$.
The continuity of $u$ implies that
$u(\tau^*,x(\tau^*))=u(\tau^*,x_1(\tau^*))=0$. Let $x^*$ be a
maximum point of $u(\tau^*,\cdot)$ in the interval
$[x(\tau^*),x_1(\tau^*)]$. Note that $u(\tau^*,x^*)> 0$;
otherwise {$u(\tau^*,x)\equiv0$ in the interval
$[x(\tau^*),x_1(\tau^*)]$, which contradicts the definition of
$x(\tau^*)$}. Since $u(\tau^*,x^*)>0$, $\partial_xu(\tau^*,x^*)=0$
and $\partial_{xx}u(\tau^*,x^*)\leq0$, we have
$$-\widetilde{{\cal L}}u(\tau^*,x^*)=-{\sigma^2\over2}\,\p_{xx}u(\tau^*,x^*)-\left(\,r-q-{\sigma^2\over2}\,\right)\p_xu(\tau^*,x^*)+ru(\tau^*,x^*)> 0.$$
On the other hand, by the continuity of $u$, there exists a
neighborhood of $(\tau^*,x^*)$ such that $u>0$, so
$\p_{\tau}u-\widetilde{{\cal L}}u=\theta$. In turn,
$$-\widetilde{{\cal L}}u(\tau^*,x^*)=\theta(x^*)-\p_{\tau}u(\tau^*,x^*).$$
Since \begin{equation}\label{timederivative}
\partial_{\tau}u(\tau,x)=-\partial_{t}U^{\delta}(t,X)=-\partial_{t}V^{\delta}(t,X)\geq
0,
\end{equation}
where we have used the transformation (\ref{transform}) and the
decomposition (\ref{Decomposition}) in the first two equalities, and
(\ref{derivative with respect to t}) in the last inequality, we
further get
$$-\widetilde{{\cal L}}u(\tau^*,x^*)\leq \theta(x^*)< 0.$$ This is a
contradiction. Thus, we must have $x(\tau^*)=x_1(\tau^*)$.
\end{proof}
From the above lemma, we deduce that the exercise region and the
continuation region are equivalent to
\begin{align*}
{\bf ER}&=\{(\tau,x)\in {\cal N}_{T-\delta}:x\leq x(\tau)\};\ \\
{\bf CR}&=\{(\tau,x)\in {\cal N}_{T-\delta}:x> x(\tau)\}.
\end{align*}
We state the first main result of this section, which is about the
monotone and regularity properties of the optimal exercise boundary
$x(\tau)$.
\begin{theorem} (Properties of optimal exercise boundary) \label{Th5}
Let $x(\tau)$ be the optimal exercise boundary given in
(\ref{freeboundary_10}). Then, the following assertions {hold}:
(i) Monotonicity: ${x(\tau)}$ is strictly decreasing in $\tau$;
(ii) Position: $x(\tau)$ has the starting point
$x(0)=\lim\limits_{\tau\rightarrow0^+}x(\tau)=\overline{X}$;
(iii) Regularity: ${x(\cdot)}\in C^\infty(0,T-\delta\,]$, and
$${u(\cdot,\cdot)}\in C^\infty(\{x\geq x(\tau):\tau\in(0,T-\delta\,]\}).$$
\end{theorem}
\begin{proof}
(i) We first show that $x(\tau)$ is non-increasing. For any
$0\leq \tau_1<\tau_2\leq T-\delta$, we then have
$0=u(\tau_2,x(\tau_2))\geq u(\tau_1,x(\tau_2))\geq 0$. Thus,
$u(\tau_2,x(\tau_2))=u(\tau_1,x(\tau_2))=0$, and together with Lemma
\ref{lemma11}, we deduce that $x(\tau_1)\geq x(\tau_2)$, i.e.
$x(\tau)$ is non-increasing.
If $x(\tau)$ is not strictly decreasing, then there exist
$x_1\in[\underline{X},\overline{X}]$ and
$0\leq\tau_1<\tau_2\leq T-\delta$ such that $x(\tau)=x_1$ for any
$\tau\in[\tau_1,\tau_2]$. See Figure 3 below.
Note that $\partial_xu(\tau,x_1)=0$ and, moreover,
$\partial_{\tau}\partial_{x}u(\tau,x_1)=0$ for any
$\tau\in[\tau_1,\tau_2]$.
On the other hand, we observe that in the domain
$[\tau_1,\tau_2]\times(x_1,x_1+1)$, $u(\cdot,\cdot)$ satisfies
\begin{eqnarray*}
\left\{
\begin{array}{ll}
(\partial_{\tau}-\widetilde{{\cal L}})u(\tau,x)=\theta(x),
&\mbox{for}\ (\tau, x)\in[\tau_1,\tau_2]\times(x_1,x_1+1);
\vspace{2mm} \\
u(\tau,x_1)=0,
&\mbox{for}\ \tau\in[\tau_1,\tau_2].
\end{array}
\right.
\end{eqnarray*}
In turn, $\partial_{\tau}u(\cdot,\cdot)$ satisfies
\begin{eqnarray*}
\left\{
\begin{array}{ll}
(\partial_{\tau}-\widetilde{{\cal L}})\partial_{\tau}u(\tau,x)=\partial_{\tau}\theta(x)= 0,
&\mbox{for}\ (\tau, x)\in[\tau_1,\tau_2]\times(x_1,x_1+1);
\vspace{2mm} \\
\partial_{\tau}u(\tau,x_1)=0,
&\mbox{for}\ \tau\in[\tau_1,\tau_2].
\end{array}
\right.
\end{eqnarray*}
For any $x_2>\overline{X}$, since $(\tau_2,x_2)\in\mathbf{CR}$, we
have $u(\tau_2,x_2)>0$, and $u(0,x_2)=0$. Hence, there exists
$\tau\in(0,\tau_2)$ such that $\p_\tau u(\tau,x_2)>0$. Note,
however, that $\partial_{\tau}u\geq 0$ (cf. (\ref{timederivative}))
and, therefore, the strong maximum principle (see \cite{Evans})
implies that $\p_\tau u>0$ in ${\bf CR}$.
Finally, together with $\partial_{\tau}u(\tau,x_1)=0$ for any
$\tau\in[\tau_1,\tau_2]$, we deduce that
$\partial_x\partial_{\tau}u(\tau,x_1)>0$ using the Hopf lemma (see
\cite{Evans}). But this is a contradiction to
$\partial_{\tau}\partial_{x}u(\tau,x_1)=0$ for any
$\tau\in[\tau_1,\tau_2]$.
(ii) It is obvious that $x(0)\leq \overline{X}$ from Lemma
\ref{Th3}, so it is sufficient to prove that $x(0)\geq
\overline{X}$. If not, in the domain
$(0,T-\delta]\times(x(0),\overline{X})\subset\mathbf{CR}$, we
consider
\begin{eqnarray*}
\left\{
\begin{array}{ll}
(\partial_{\tau}-\widetilde{{\cal L}})u(\tau,x)=\theta(x)<0,
&\mbox{for}\ (\tau, x)\in(0,T-\delta]\times(x(0),\overline{X});
\vspace{2mm} \\
u(0,x)=0,
&\mbox{for}\ x\in(x(0),\overline{X}).
\end{array}
\right.
\end{eqnarray*}
Then,
$\partial_{\tau}u(0,x)=\widetilde{\cal{L}}u(0,x)+\theta(x)=\theta(x)<0$,
which is a contradiction to $\partial_{\tau}u\geq 0$ in
(\ref{timederivative}).
(iii) We first prove that $x(\tau)$ is continuous. If not, then
there exist $\tau_2\in(0,T-\delta)$ and $\underline{X}\leq
x_3<x_1\leq \overline{X}$ such that $x(\tau_2+0)=x_3$ and
$x(\tau_2-0)=x_1$. See Figure 3 below.
In the domain $(\tau_2,T-\delta]\times(x_3,x_1)\subset\mathbf{CR}$,
we consider
\begin{eqnarray*}
\left\{
\begin{array}{ll}
(\partial_{\tau}-\widetilde{{\cal L}})u(\tau,x)=\theta(x)<0,
&\mbox{for}\ (\tau, x)\in[\tau_2,T-\delta]\times(x_3,x_1);
\vspace{2mm} \\
u(\tau_2,x)=0,
&\mbox{for}\ x\in(x_3,x_1).
\end{array}
\right.
\end{eqnarray*}
Then,
$\partial_{\tau}u(\tau_2,x)=\widetilde{\cal{L}}u(\tau_2,x)+\theta(x)=\theta(x)<0$,
which is a contradiction to $\partial_{\tau}u\geq 0$ in
(\ref{timederivative}).
Finally, since $\partial_{\tau}u\geq 0$, the smoothness of both the
optimal exercise boundary $x(\tau)$ and the value function
$u(\cdot,\cdot)$ in the continuation region follows from arguments
similar to those used in \cite{Fr2}.
\end{proof}
\begin{picture}(0,130)(140,0)
\put(240,10){\vector(1,0){140}} \put(360,6){\vector(0,1){100}}
\put(363,100){$\tau$}\put(383,10){$x$}
{\thicklines\qbezier(355,10)(340,15)(330,30)
\put(300,40){\line(1,0){30}}
\qbezier(300,40)(282,42)(275,50)
\qbezier(275,50)(265,60)(260,92)}
\multiput(300,40)(0,-5){6}{\line(0,-1){2}}
\multiput(330,40)(0,-5){6}{\line(0,-1){2}}
\put(328,7){$\bullet$}\put(328,-3){$x_1$}
\put(252,7){$\bullet$}\put(250,-3){$\underline{X}$}
{\thicklines\put(330,30){\line(0,1){10}}}
\put(366,7){$\bullet$}\put(366,-3){$x_2$}
\put(297,7){$\bullet$}\put(298,-3){$x_3$}
\put(353,7){$\bullet$}\put(353,-3){$\overline{X}$}
\multiput(330,40)(5,0){7}{\line(1,0){2}}
\multiput(255,10)(0,5){17}{\line(0,1){2}}
\multiput(330,30)(5,0){7}{\line(1,0){2}}
\put(358,37){$\bullet$}\put(368,37){$\tau_2$}
\put(358,27){$\bullet$}\put(368,27){$\tau_1$}
\put(240,60){${\bf ER}$}\put(235,72){$u=0$}\put(321,60){${\bf CR}$}\put(317,72){$u>0$}
\put(280,50){$x(\tau)$}
\end{picture}
\begin{center}
\small{Figure 3: Non-strictly decreasing and discontinuous free boundary
$x(\tau)$.}
\end{center}
\begin{remark} Under the original coordinates $(t,X)$, the optimal
exercise boundary $X^{\delta}(t)$ then satisfies the following
properties:
(i) Monotonicity: $X^{\delta}(t)$ is strictly increasing in $t$;
(ii) Position: $X^{\delta}(t)$ has the end point
$$X^{\delta}(T-\delta)=\lim\limits_{t\rightarrow(T-\delta)^{-}}X^{\delta}(t)=Ke^{\overline{X}};$$
(iii) Regularity: ${X^{\delta}(\cdot)}\in C^\infty[0,T-\delta\,)$,
and
$${U^{\delta}(\cdot,\cdot)}\in C^\infty(\{X\geq
X^{\delta}(t):t\in[0,T-\delta\,)\}).$$
\end{remark}
\subsection{Asymptotic behavior for large maturity}
We study the asymptotic behavior of the optimal exercise boundary
$x(\tau)$ and the value function $u(\tau,x)$ as
$\tau\rightarrow\infty$, and as the second main result in this
section, we show that they converge to their stationary counterparts
$\underline{X}$ and $u_{\infty}(x)$, respectively.
To this end, we consider the following auxiliary optimal stopping
time problem perturbed by $r\ep$.
\begin{equation}\label{optimal_stopping_delay_special_5.1}
U^{\epsilon}_{\infty}(X_t)=\esssup_{\tau^{0}\geq
t}\mathbf{E}\left[\int_t^{\tau^0}e^{-r(s-t)}(\Theta(X_s)-r\epsilon)ds|\mathcal{F}_t\right],
\end{equation}
for any $\mathbb{F}$-stopping time $\tau^{0}\geq t$ and any
$\epsilon\geq 0$. This will help us to achieve the lower bound and,
therefore, the asymptotic behavior of the optimal exercise boundary
$x(\tau)$.
Following along arguments similar to those used in section 3.3, we
obtain that $u^{\epsilon}_{\infty}(x)=U^{\epsilon}_{\infty}(X)$, where
$x=\ln X-\ln K$, and $u^{\ep}_{\infty}(\cdot)$ is the unique strong solution
to the stationary variational inequality \be\label{IVI.1}
\left\{
\begin{array}{ll}
-\widetilde{{\cal L}}\,u^\ep_\infty(x)=\theta(x)-r\ep,
&\mbox{if}\;u^\ep_\infty(x)>0,\;\mbox{for}\;x\in\mathbb{R};
\vspace{2mm} \\
-\widetilde{{\cal L}}\,u^\ep_\infty(x)\geq \theta(x)-r\ep,
&\mbox{if}\;u^\ep_\infty(x)=0,\;\mbox{for}\;x\in\mathbb{R},
\end{array}
\right.
\ee with $u^\ep_\infty\in W^2_{p,loc}(\mathbb{R})$ for
$p\geq1$, and $(u^\ep_\infty)^\prime\in C(\mathbb{R})$.
In contrast to variational inequality (\ref{IVI}), it is not clear
how to reduce variational inequality (\ref{IVI.1}) to a
free-boundary problem, and to obtain its explicit solution.
Nevertheless, we are able to derive a local version of the
free-boundary problem with $\epsilon>0$ small enough, which is
sufficient to obtain the asymptotic behavior of the optimal exercise
boundary later on.
\begin{proposition}\label{Pro5}
For $\ep>0$ small enough, it holds that $u_{\infty}(x)\geq u^\epsilon_\infty(x)\geq
u_\infty(x)-\epsilon$. Define $\underline{X}^{\epsilon}$ as
\begin{equation*}
\underline{X}^{\epsilon}=\inf\{x\in(-\infty,\overline{X}]:
u^{\epsilon}_{\infty}(x)>0\}.
\end{equation*}
Then $\underline{X}\leq \underline{X}^{\epsilon}< \overline{X}$, and
$u^\epsilon_\infty(x)>0$ for any
$x\in(\underline{X}^{\epsilon},\overline{X})$, where $\underline{X}$
and $\overline{X}$ are given in (\ref{freeboundary_0}) and
Proposition \ref{Pro3}, respectively. Moreover,
$\underline{X}^\epsilon\rightarrow \underline{X}$ as
$\ep\rightarrow{0^+}$. {See Figure 4 below.}
\vspace{-1cm}
\begin{picture}(0,130)(140,0)
\put(150,20){\vector(1,0){200}}\put(350,10){$x$}
\put(300,65){$\theta(x)$}
{\thicklines\qbezier(170,2)(220,24)(260,60)}
{\thicklines\qbezier(260,60)(280,75)(295,65)}
{\thicklines\qbezier(295,65)(320,46)(355,39)}
\put(204,17){$\bullet$}\put(204,7){$\overline{X}$}
\put(185,17){$\bullet$}\put(185,25){$\underline{X}^\epsilon$}
\put(155,17){$\bullet$}\put(155,25){$\underline{X}$}
\end{picture}
\begin{center}
\small{Figure 4: The graph of $\underline{X}$,
$\underline{X}^{\epsilon}$ and $\overline{X}$.}
\end{center}
\end{proposition}
\begin{proof} Note that the running payoff in the optimal stopping problem
(\ref{optimal_stopping_delay_special_5.1}) satisfies
\begin{align*}
\int_t^{\tau^0}e^{-r(s-t)}\Theta(X_s)ds&\geq
\int_t^{\tau^0}e^{-r(s-t)}(\Theta(X_s)-r\epsilon)ds\\
&=\int_t^{\tau^0}e^{-r(s-t)}\Theta(X_s)ds+\epsilon
e^{-r(\tau^0-t)}-\epsilon \\
&\geq \int_t^{\tau^0}e^{-r(s-t)}\Theta(X_s)ds-\epsilon,
\end{align*}
for any $\mathbb{F}$-stopping time
$\tau^0\geq t$. It follows that $u_{\infty}(x)\geq
u^\epsilon_\infty(x)\geq u_\infty(x)-\epsilon$.
Since $u_{\infty}(x)>0$ for $x>\underline{X}$, and
$\overline{X}>\underline{X}$ by Proposition \ref{le1}, it holds that
$u_{\infty}(\overline{X})>0$. Let $\epsilon>0$ be small enough such
that $\epsilon<u_{\infty}(\overline{X})$. Using the inequality
$u^{\epsilon}_{\infty}(x)\geq u_{\infty}(x)-\epsilon$, we obtain
that
$$u_{\infty}^{\epsilon}(\overline{X})\geq
u_{\infty}(\overline{X})-\epsilon>0.$$ In turn, the definition of
$\underline{X}^{\epsilon}$ and the continuity of
$u^{\epsilon}_{\infty}(\cdot)$ imply that
$\underline{X}^{\epsilon}<\overline{X}$.
Repeating the similar arguments used in Proposition \ref{le1}, we
obtain that $u^{\epsilon}_\infty(x)>0$ for
$x\in(\underline{X}^{\epsilon},\overline{X})$. Furthermore, the
inequality $u_{\infty}(x)\geq u^\epsilon_\infty(x)$ and Proposition
\ref{le1} imply that
$$
0=u_{\infty}(\underline{X})\geq u_{\infty}^{\ep}(\underline{X}).
$$
In turn, the definition of $\underline{X}^{\epsilon}$ implies that
$\underline{X}^{\epsilon}\geq \underline{X}$.
{Next, we prove that
$\underline{X}^{\epsilon}\rightarrow\underline{X}$ as
$\epsilon\rightarrow0^+$. In fact, from the definition of
$\underline{X}^\epsilon$ and the continuity of $u^\epsilon_\infty$,
we know that $u^\epsilon_\infty(\underline{X}^\epsilon)=0$. Using
the inequality $u^{\epsilon}_{\infty}(x)\geq u_{\infty}(x)-\epsilon$
again, we obtain $u_{\infty}(\underline{X}^\epsilon)\leq\epsilon$.}
{On the other hand, applying Taylor's expansion to $u_{\infty}(x)$
yields
$$u_{\infty}(x)=\frac12u_{\infty}''(\underline{X}+0)(x-\underline{X})^2(1+o(1))=\frac{-\theta(\underline{X})}{\sigma^2}
(x-\underline{X})^2(1+o(1)),$$ which further implies that
$u_{\infty}(x)>\kappa (x-\underline{X})^2$ with some positive
constant $\kappa$ if $x$ is close enough to $\underline{X}$.
Moreover, since $u_{\infty}(x)>0$ in the interval
$(\underline{X},\overline{X}\,]$ and is continuous, we deduce that
if $\epsilon$ is small enough, then $\underline{X}^\epsilon\leq
\underline{X}+\sqrt{\epsilon/\kappa}$. Recalling
$\underline{X}^\epsilon\geq \underline{X}$, we conclude that
$\underline{X}^{\epsilon}\rightarrow\underline{X}$ as
$\epsilon\rightarrow0^+$.}
\end{proof}
\begin{theorem} (Asymptotic behavior of optimal exercise boundary for $\tau\rightarrow\infty$)
\label{theorem}
Let $u(\cdot,\cdot)$ and $x(\tau)$ be the solution to variational
inequality (\ref{VI2}) and its associated optimal exercise boundary
(\ref{freeboundary_10}), respectively. Then,
$$u(\tau,\cdot)\rightarrow u_\infty(\cdot),\ \text{and}\ \ x(\tau)\rightarrow
\underline{X},$$
as $\tau\rightarrow\infty$, where $u_\infty(\cdot)$ and $\underline{X}$ are
the solution of the stationary
variational inequality (\ref{IVI}) and its associated optimal exercise boundary (\ref{freeboundary_0}), respectively.
\end{theorem}
\begin{proof}
To prove the asymptotic results, we use an idea introduced in
\cite{Yan2}. From the optimal stopping problems
(\ref{optimal_stopping_delay_special_4}) and
(\ref{optimal_stopping_delay_special_5}), it is immediate that
$u(\cdot,\cdot)\leq u_{\infty}(\cdot)$.
For $t\leq (T-\delta)/2$, define
$$
{u}^t(\tau,x)=u^{\exp\{-rt\}}_\infty(x)-e^{-r(\tau-t)}+e^{-rt},
$$
where ${u}^{\exp\{-rt\}}_\infty(\cdot)$ is the solution of variational inequality~\eqref{IVI.1}
with $\ep=\exp\{-rt\}$. It is routine to check that
${u}^t\in W^{2,\,1}_{p,\,loc}({\cal N}_{2t})\cap C(\overline{{\cal
N}_{2t}}\,)$, and satisfies
\bee
\left\{
\begin{array}{ll}
(\p_{\tau}-\widetilde{{\cal L}}){u}^t(\tau,x)=\theta(x),
\ \ \ \mbox{if}\;{u}^t(\tau,x)>-e^{-r(\tau-t)}+e^{-rt},\;\mbox{for}\;(\tau,x)\in{\cal N}_{2t};
\vspace{2mm} \\
(\p_{\tau} -\widetilde{{\cal L}}){u}^t(\tau,x)\geq \theta(x),\ \ \
\mbox{if}\;{u}^t(\tau,x)=-e^{-r(\tau-t)}+e^{-rt},\;\mbox{for}\;(\tau,x)\in{\cal N}_{2t};
\vspace{2mm} \\
{u}^t(0,x)={u}^{\exp\{-rt\}}_\infty(x)-e^{rt}+e^{-rt}<0,\ \ \
\mbox{for}\ x\in\mathbb{R},
\end{array}
\right.
\eee
provided that $t$ and $T$ are large enough. Since the obstacle
$-e^{-r(\tau-t)}+e^{-rt}\leq0$ in the domain ${\cal N}_{2t}$, using the
comparison principle (see
\cite{Friedman2} or \cite{Yan1}) for variational inequality
(\ref{VI2}) in the domain ${\cal N}_{2t}$, we
deduce that
$
u(\tau,x)\geq{u}^t(\tau,x)$ for $(\tau,x)\in{\cal N}_{2t}$. In turn, Proposition
\ref{Pro5} implies that
\begin{equation}\label{inequ3}
u(2t,\cdot)\geq {u}^t(2t,\cdot)={u}^{\exp\{-rt\}}_\infty(\cdot)
\geq {u}_\infty(\cdot)-e^{-rt}.
\end{equation}
Together with $u(2t,\cdot)\leq u_{\infty}(\cdot)$, we obtain that
$u(2t,\cdot)\rightarrow u_{\infty}(\cdot)$ as $t\rightarrow\infty$.
To prove the convergence of the optimal exercise boundary $x(\tau)$
to $\underline{X}$, we choose $t$ large enough such that
$\underline{X}^{\exp\{-rt\}}+\exp\{-rt\}<\overline{X}$. Then,
(\ref{inequ3}) yields that
$$
u\left(2t,\underline{X}^{\exp\{-rt\}}+\exp\{-rt\}\right)\geq{u}^{\exp\{-rt\}}_\infty\left(\underline{X}^{\exp\{-rt\}}+\exp\{-rt\}\right)>0,$$
where we have used ${u}^{\exp\{-rt\}}_\infty\left(x\right)>0$ for
$x\in(\underline{X}^{\exp\{-rt\}},\overline{X})$ (cf. Proposition
\ref{Pro5}) in the second inequality. It then follows from the
definition of $x(\tau)$ in (\ref{freeboundary_10}) that $$x(2t)\leq
\underline{X}^{\exp\{-rt\}}+\exp\{-rt\}.$$ By Lemma \ref{Th3}, we
also have $x(\tau)\geq \underline{X}$ for any
$\tau\in[\,0,T-\delta\,]$. Hence, we have proved that $$\underline{X}\leq x(2t)\leq
\underline{X}^{\exp\{-rt\}}+\exp\{-rt\}.
$$
Finally, we send $t\rightarrow\infty$ in the above inequalities, and
conclude the convergence of $x(2t)$ to $\underline{X}$ by
Proposition \ref{Pro5}.
\end{proof}
\begin{remark} Under the original coordinates $(t,X)$, it follows
from the relationship (\ref{relation}) and Theorem \ref{theorem}
that $X^{\delta}(t)\rightarrow K e^{\underline{X}}$ as
$T\rightarrow\infty$, so $Ke^{\underline{X}}$ is the asymptote of
the optimal exercise boundary $X^{\delta}(t)$.
Theorem \ref{theorem} also establishes the connection between the
optimal stopping problems (\ref{optimal_stopping_delay_special_4})
and (\ref{optimal_stopping_delay_special_5}):
$U^{\delta}(t,X)\rightarrow U^{\delta}_{\infty}(X)$ uniformly in
$X\in\mathbb{R}_+$ as $T\rightarrow\infty$. Moreover, it follows
from the decomposition formula (\ref{Decomposition}) that the value
function of the American put option with time lag $\delta$ has the
long maturity limit: $V^{\delta}(t,X)\rightarrow
P(T-\delta,X)+U_{\infty}^{\delta}(X)$ uniformly in
$X\in\mathbb{R}_+$ as $T\rightarrow\infty$.
\end{remark}
\subsection{Asymptotic behavior for small time lag}
Our third main result in this section is about the convergence of
the optimal exercise boundary when the time lag $\delta\rightarrow
0$.
Denote by $X^0(t)$ the optimal exercise boundary of the
corresponding standard American put option. It is well known that
$X^{0}(t)$ is a strictly increasing and smooth function with
$X^0(T)=K$. We refer to \cite{Jiang} for its proof.
\begin{theorem}\label{Th5.2} (Asymptotic behavior of optimal exercise boundary for $\delta\rightarrow 0$)
Let $X^{\delta}(t)$ be the optimal exercise boundary given in (\ref{relation}). Then,
$X^\delta(t)$ converges to $X^0(t)$ for any $t\in[0,T)$
as $\delta\rightarrow 0$.
\end{theorem}
\begin{proof}
We first extend variational inequality (\ref{VI11}) from
$\Omega_{T-\delta}$ to $\Omega_T$ by defining
$V^{\delta}(t,X)=P(t,X)$ for
$(t,X)\in[T-\delta,T]\times\mathbb{R}_+$, and rewrite (\ref{VI11})
as
\begin{align}\label{PDE}
(-\p_t -{\cal L})V^\delta
(t,X)&=I_{\{V^\delta=P(T-\delta,X)\}}(-\p_t-{\cal L})P(T-\delta,X)\notag\\
&=-I_{\{V^\delta=P(T-\delta,X)\}}\Theta(X),
\end{align}
for $(t,X)\in\Omega_{T}$, and $V^{\delta}(T,X)=(K-X)^+$ for
$X\in\mathbb{R}_+$.
Denote $\mathcal{N}_T^{n}:=(0,T]\times\mathcal{N}^n$ and
$\mathcal{N}^n:=(-n,K-\frac1n)$. Then, we apply the
$W^{2,1}_p$-estimates (see Lemma A.4 in \cite{Yan1} for example) to
the above PDE (\ref{PDE}) for $V^{\delta}(\cdot,\cdot)$, and obtain
that for any $n\in\mathbb{N}$, \be\label{estimate}
\|V^\delta\|_{W^{2,1}_p(\mathcal{N}_T^{n})}\leq
C\Big(\,\|V^\delta\|_{L^p(\mathcal{N}_T^{2n})}+\|\Theta\|_{L^p(\mathcal{N}^{2n})}+\|K-X\|_{W^{2,1}_p(\mathcal{N}^{2n})}
\Big). \ee Note that the right hand side of the above inequality is
independent of $\delta$ due to the fact that $V^{0}(t,X)-\delta
rK\leq V^{\delta}(t,X)\leq V^{0}(t,X)$ (cf. (\ref{bound})), and the
formula (\ref{freeterm}) for $\Theta(X)$.
From Proposition \ref{Th5.1}, $V^\delta$ converges to $V^0$ in
$C(\overline{\Omega_T})$ as $\delta\rightarrow 0$. Hence, the above
estimate~\eqref{estimate} implies that $V^\delta$ also converges
weakly to $V^0$ in $W^{2,1}_p(\mathcal{N}^n_T)$ and, therefore,
\begin{equation*}
-I_{\{V^\delta=P(T-\delta,X)\}}\Theta(X)=(-\p_t -{\cal L})V^{\delta}(t,X)\rightharpoonup
(-\p_t-{\cal
L})V^0(t,X)
\end{equation*}
weakly in $L^p(\mathcal{N}^n_T)$ as $\delta\rightarrow 0$. But note
that $$(-\p_t-{\cal
L})V^0(t,X)=I_{\{V^0=K-X\}}(rK-qX).$$ In turn,
\begin{equation}\label{convergence}
-I_{\{V^\delta=P(T-\delta,X)\}}\Theta(X)\rightharpoonup
I_{\{V^0=K-X\}}(qX-rK)
\end{equation}
weakly in $L^p(\mathcal{N}^n_T)$.
Now suppose that $X^{\delta}(t)$ does not converge to $X^{0}(t)$.
Then there exist $t_0\in[0,T)$ and a sequence
$\{X^{\delta_{m}}\}_{m=1}^{\infty}$ such that when
$\delta_m\rightarrow 0$, $X^{\delta_{m}}(t_0)$ does not converge to
$X^{0}(t_0)$.
Since $X^{0}(t)$ is continuous and strictly increasing with
$X^{0}(T)=K$, we may assume there exists $\epsilon>0$ and an integer
$M$ such that $X^{0}(t_0)+2\epsilon<\min\{X^{\delta_m}(t_0),K\}$ for
any $m\geq M$. See Figure 5 below. Other cases can be treated in a
similar way.
By the continuity and strictly increasing property of both
$X^{0}(t)$ and $X^{\delta}(t)$, we can find $\eta>0$ such that the
compact set $[t_{0},t_0+\eta]\times
[X^0(t_0)+\epsilon,X^0(t_0)+2\epsilon]$ is in the exercise region of
$V^{\delta_m}$ and the continuation region of $V^0$. Therefore, in
this compact set,
$V^{\delta_m}(t,X)=P(T-\delta_m,X),\,V^0(t,X)>K-X$, and
\begin{align*}
&-I_{\{V^{\delta_m}=P(T-\delta_m,X)\}}\Theta(T-\delta_m,X)-I_{\{V^0=K-X\}}(qX-rK)\\
=&-\Theta(T-\delta_m,X),
\end{align*}
where we use the notation $\Theta(T-\delta_m,\cdot)$ to emphasize
its dependence on $T-\delta_m$. However, from Proposition
\ref{Pro3}, it is immediate to check that
$$
\lim_{\delta_m\rightarrow
0}\Theta(T-\delta_m,X)=qX-rK<0,{\;\text{for}\;X<K,}
$$
which is a contradiction to (\ref{convergence}).
\end{proof}
\begin{picture}(0,130)(140,0)
\put(190,10){\vector(1,0){150}} \put(200,6){\vector(0,1){110}}
\put(195,115){$t$}\put(343,10){$X$}
{\thicklines\qbezier(225,10)(226,60)(310,90)}
{\thicklines\qbezier(230,10)(235,85)(312,105)}
\put(200,90){\line(1,0){10}}\put(215,90){\line(1,0){10}}\put(230,90){\line(1,0){10}}
\put(245,90){\line(1,0){10}}\put(260,90){\line(1,0){10}}\put(275,90){\line(1,0){10}}
\put(290,90){\line(1,0){10}}\put(305,90){\line(1,0){10}}\put(310,90){\line(1,0){8}}
\put(325,90){\line(1,0){10}}
\put(198,87){$\bullet$}\put(174,84){$T-\delta$}
\put(200,105){\line(1,0){10}}\put(215,105){\line(1,0){10}}\put(230,105){\line(1,0){10}}
\put(245,105){\line(1,0){10}}\put(260,105){\line(1,0){10}}\put(275,105){\line(1,0){10}}
\put(290,105){\line(1,0){10}}\put(305,105){\line(1,0){10}}\put(310,105){\line(1,0){8}}
\put(325,105){\line(1,0){10}}
\put(200,75){\line(1,0){10}}\put(215,75){\line(1,0){10}} \put(230,75){\line(1,0){10}}
\put(245,75){\line(1,0){5}}\put(255,75){\line(1,0){20}}
\put(267,82){\line(1,0){6}}\put(267,75){\line(0,1){7}}\put(273,75){\line(0,1){7}}
\put(198,102){$\bullet$}\put(182,101){$T$}
\put(198,72){$\bullet$}\put(190,71){$t_0$}
\put(210,35){${\bf ER}$}\put(320,35){${\bf CR}$}
\put(260,56){${X^\delta(t)}$}\put(218,56){${X^0(t)}$}
\put(308,87){$\bullet$}\put(300,92){$K e^{\overline{X}}$}
\put(308,102){$\bullet$}\put(300,107){$K$}
\end{picture}
\begin{center}
\small{Figure 5: Non-convergence of the free boundaries $X^{\delta}(t)$ to
$X^{0}(t)$ as $\delta\rightarrow 0$.}
\end{center}
\textit{Model and method.}
We investigate the extended spin-$\frac{1}{2}$ XY model with a uniform scalar chiral term using both infinite and finite size DMRG methods~\cite{ITensorandTenPy,tenpy} in the language of matrix product states~\cite{schollwock2011density}. We use the cylindrical geometry with circumference up to 6 (8) unit cells in the finite (infinite) size systems, except for the calculations of the spin gap, which are based on smaller tori to reduce boundary effects.
The Hamiltonian of the model is given as
\begin{equation}
\label{eq1}
\begin{split}
H=J_{1}\sum\limits_{\left\langle i,j\right\rangle
}(S_{i}^{+}S_{j}^{-}+h.c.)+J_{2}\sum\limits_{\left\langle \left\langle
i,j\right\rangle \right\rangle }(S_{i}^{+}S_{j}^{-}+h.c.)\\ +J_{\chi
}\sum\limits_{i,j,k\in \triangle }\overrightarrow{S}_{i}\cdot (%
\overrightarrow{S}_{j}\times \overrightarrow{S}_{k})
\end{split}
\end{equation}
Here, $\left\langle i,j\right\rangle$ refers to the nearest-neighbor sites and $\left\langle \left\langle i,j\right\rangle \right\rangle $ refers to the next-nearest-neighbor sites. $\left \{ i,j,k \right \}$ in the summation $\sum _{\Delta }$ refers to the three neighboring sites of the smallest triangles, taken clockwise as shown in Fig.\ref{Fig1}. The chiral term can be derived as an effective Hamiltonian of the extended Hubbard model with an additional $\Phi $ flux through each elementary honeycomb plaquette~\cite{bauer2014chiral,hickey2016haldane,motrunich2006orbital,sen1995large}.
We set $J_{1}=1$ as the unit for the energy scale, and use the spin U(1) symmetry for better convergence.
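For orientation, the Hamiltonian in Eq.~(\ref{eq1}) can also be built by brute force on a small cluster. The sketch below (ours, independent of the DMRG calculations) takes the nearest-neighbor bonds, the next-nearest-neighbor bonds, and the clockwise triangles as input lists; the single-hexagon connectivity in the usage comment is purely illustrative and is not the cylinder geometry used in our simulations.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

s_ops = {
    'x': np.array([[0, 1], [1, 0]], dtype=complex) / 2,
    'y': np.array([[0, -1j], [1j, 0]]) / 2,
    'z': np.array([[1, 0], [0, -1]], dtype=complex) / 2,
    '+': np.array([[0, 1], [0, 0]], dtype=complex),
    '-': np.array([[0, 0], [1, 0]], dtype=complex),
}

def op(label, site, L):
    # S^label acting on `site`, identity elsewhere (sparse 2^L x 2^L)
    out = sp.identity(1, format='csr', dtype=complex)
    for i in range(L):
        m = s_ops[label] if i == site else np.eye(2)
        out = sp.kron(out, sp.csr_matrix(m), format='csr')
    return out

def hamiltonian(L, nn, nnn, triangles, J1=1.0, J2=0.2, Jchi=0.1):
    H = sp.csr_matrix((2**L, 2**L), dtype=complex)
    for J, bonds in ((J1, nn), (J2, nnn)):
        for i, j in bonds:                 # S_i^+ S_j^- + h.c.
            H = H + J * (op('+', i, L) @ op('-', j, L)
                         + op('-', i, L) @ op('+', j, L))
    eps = {('x','y','z'): 1, ('y','z','x'): 1, ('z','x','y'): 1,
           ('x','z','y'): -1, ('y','x','z'): -1, ('z','y','x'): -1}
    for i, j, k in triangles:              # clockwise triples, cf. Fig. 1
        for (a, b, c), sgn in eps.items():
            H = H + Jchi * sgn * (op(a, i, L) @ op(b, j, L) @ op(c, k, L))
    return H

# Illustrative usage on a single hexagon with made-up connectivity:
# H = hamiltonian(6, nn=[(0,1),(1,2),(2,3),(3,4),(4,5),(5,0)],
#                 nnn=[(0,2),(2,4),(4,0),(1,3),(3,5),(5,1)],
#                 triangles=[(0,2,4),(1,3,5)])
# from scipy.sparse.linalg import eigsh; E0 = eigsh(H, k=1, which='SA')[0]
\end{verbatim}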
\begin{figure}
\centering
\input{Spin_Corr_Stru_Order.tex}
\caption{\label{Fig2}(Color online) $(a)$ shows the peak value at the $\Gamma $ point in the spin structure $ S\left (q \right ) $ at $J_{2}=0.2$ for various $J_{\chi }$, where the peak vanishes at $J_{\chi }\approx 0.15$. The inset of $(a)$ is the spin structure of the XY-Neel order at $J_{2}=0.2,J_{\chi }=0.06$, where there are clear peaks at the $\Gamma $ points in the second Brillouin zone. $(b)$ shows the $M$-point peak value at $J_{2}=0.4$ for various $J_{\chi }$. The peak shows a sudden drop at $J_{\chi }\approx 0.06$, indicating a phase transition. The inset of $(b)$ is the spin structure of the collinear order at $J_{2}=0.4,J_{\chi }=0.01$, where the dominant peak is located at the $M$ points in the second Brillouin zone. $(c)$ shows the antiferromagnetic order (blue line) and the scalar chiral order (red line) at $J_{2}=0.3$ for various $J_{\chi }$, where the three corresponding phases from left to right are the Ising antiferromagnetic state, the CSL, and the chiral spin state. The left dashed line is determined by the sudden drop of the antiferromagnetic order, while the right dashed line is determined by the vanishing of the quasi-degenerate pattern in the entanglement spectrum. $(d)$ shows the spin correlations at $J_{2}=0.3$ for various $J_{\chi }$ representing the different phases. The phases at $J_{\chi }=0.01$, $0.04$, $0.08$, and $0.14$($0.25$) refer to the Ising antiferromagnetic state, the phase boundary, the CSL, and the chiral spin state, respectively. The reference site $ x_{0}$ is chosen away from the open boundary, and $x$ refers to the horizontal distance between the two spins. All of the correlations except that at $J_{\chi }=0.04$ fall on straight lines in the log plot, indicating exponential decay. The plots above are based on finite DMRG results with $L_{y}=4\times 2$.}
\end{figure}
\textit{Phase diagram.}
The ground state phase diagram is illustrated in Fig.\ref{Fig1}.
We use spin structure factors to identify the magnetically ordered phases, and the entanglement spectrum to identify the topologically ordered CSL. For larger $J_{\chi }$, a magnetically ordered chiral spin state with nonzero scalar chiral order is also identified.
The static spin structure in the Brillouin zone is defined as
\begin{equation}
\label{eq2}
S\left ( \overrightarrow{q} \right ) = \frac{1}{N}\sum_{i,j}\left \langle \overrightarrow{S}_{i}\cdot \overrightarrow{S}_{j} \right \rangle e^{i\overrightarrow{q}\cdot \left ( \overrightarrow{r}_{i} - \overrightarrow{r}_{j} \right ) }
\end{equation}
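As a side note, Eq.~(\ref{eq2}) can be evaluated directly from the measured two-point correlations; the following minimal numpy sketch (with placeholder array names, not tied to our actual DMRG data structures) illustrates this:
\begin{verbatim}
import numpy as np

def structure_factor(corr, pos, q):
    # corr[i, j] = <S_i . S_j> (real symmetric), pos[i] = r_i, q = wave vector
    phase = np.exp(1j * (pos @ q))            # e^{i q . r_i} for every site
    return (phase @ corr @ phase.conj()).real / len(pos)

# scan q over the (extended) Brillouin zone to locate the Gamma / M peaks
\end{verbatim}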
For the XY-Neel state there are peaks at the Brillouin zone $\Gamma $ points in the static spin structure, as shown in the inset of Fig.\ref{Fig2}(a). The magnitude of the peak is plotted as a function of $J_{\chi }$ in Fig.\ref{Fig2}(a). It decreases rapidly as $J_{\chi }$ increases, and disappears as the system transitions into the CSL at $J_{\chi }\approx 0.15$. Similarly, the peak for the collinear order at various $J_{\chi }$ is given in Fig.\ref{Fig2}(b). The inset of Fig.\ref{Fig2}(b) shows the spin structure at $J_{\chi }=0.01$, where the phase is dominated by the collinear order. The phase boundary can be identified by the sudden drop and the disappearance of the peak at $J_{\chi }\approx 0.06$. In the intermediate regime at $J_{2}=0.3$ and small $J_{\chi }$, the staggered on-site magnetization serves as the order parameter, as shown in Fig.\ref{Fig2}(c). This quantity shows a sudden drop from the Ising antiferromagnetic state to the CSL at $J_{\chi }\approx 0.04$, which determines the phase boundary. A finite-size analysis of this quantity indicates a possible first-order phase transition for $J_{2}$ close to 0.34, and a higher-order transition for smaller $J_{2}$ (see supplemental material~\cite{SuppMaterial}).
Besides the magnetic order parameters, other properties such as the spin correlations, the entanglement entropy, and the entanglement spectrum are also used to identify the phase boundaries, and we find these different measurements to be consistent. As shown in Fig.\ref{Fig2}(d), the spin correlations are strongly enhanced at $J_{\chi}\approx 0.04$ near the phase boundary between the Ising antiferromagnetic phase and the CSL, while both phases have exponentially decaying spin correlations. The phase boundary determined by the spin correlations is the same as the one determined by the staggered magnetization.
Both the CSL and the chiral spin state in the larger $J_{\chi}$ regime have a finite scalar chiral order, defined as
\begin{equation}
\label{eq3}
\left \langle \chi \right \rangle = \frac{1}{3N}\sum\limits_{i,j,k\in \triangle }\overrightarrow{S}_{i}\cdot (\overrightarrow{S}_{j}\times \overrightarrow{S}_{k})
\end{equation}
As shown by the red curve in Fig.\ref{Fig2}(c), the chiral order increases monotonically with $J_{\chi}$ in the CSL and the chiral spin state, and saturates around $\left\langle \chi \right\rangle \approx 0.177$. The spin correlations in these two states are given in Fig.\ref{Fig2}(d), with examples at $J_{\chi}=0.08$ and $0.14$($0.25$) respectively, where they still decay exponentially. However, the spin correlations generally grow as $J_{\chi}$ increases. As shown in Fig.\ref{Fig4}(b), for the parameters we label as the chiral spin state, the spin structure factors show sharp peaks, with the magnitudes of the peak values increasing with system size, suggesting a magnetically ordered state in the larger $J_{\chi}$ regime. We also notice that the spin structure in this chiral spin state shares the same peaks as the tetrahedral phase~\cite{hickey2016haldane,hickey2017emergence} (see supplemental material~\cite{SuppMaterial}), and we do not rule out the possibility of tetrahedral magnetic order in this regime.
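For completeness, a minimal sketch of the normalization in Eq.~(\ref{eq3}), evaluating the triple product on every oriented elementary triangle for classical spin vectors; in the DMRG calculation the three-spin expectation value is of course measured directly, so the classical vectors here (and the identification of $N$ with the number of sites) only illustrate the bookkeeping.
\begin{verbatim}
import numpy as np

def scalar_chirality(spins, triangles):
    # <chi> = (1/3N) sum over small triangles of S_i . (S_j x S_k), cf. Eq. (3)
    # spins: (N, 3) spin vectors; triangles: oriented (i, j, k) site triples
    chi = sum(np.dot(spins[i], np.cross(spins[j], spins[k]))
              for i, j, k in triangles)
    return chi / (3 * len(spins))
\end{verbatim}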
The extended regimes of $J_{2}>0.6$ and $J_{2}<0.1$ are not our main focus in this letter, since we are interested in the intermediate-$J_{2}$ regime with strong frustration, but we do find that the CSL extends to a relatively large $J_{\chi}\approx 0.5$ at $J_{2}=0$. This implies that the CSL could survive even without the frustration induced by second-nearest-neighbor interactions in the XY model, which may be interesting for future study. In the regime labeled as collinear/dimer, we also find a non-magnetic dimer ground state in close competition with the collinear state at $J_{\chi}> 0.55$. As pointed out in Ref.~\cite{zhu2013unexpected}, the actual ground state depends on the system size and the XC/YC geometry, and we will not try to resolve this close competition here.
The phase near the critical point of $J_{2}\approx 0.36$, $J_{\chi}\approx 0.02$ is hard to identify numerically because different spin orders are mixed together in the low-energy spectrum, so the spin correlations are generally large. Here the phase boundary is determined from the unique properties of the CSL through the entanglement spectrum, as discussed below, and it is marked by the dashed line as a guide to the eye.
\begin{figure}
\centering
\input{EntanSpecKspace_J2.tex}
\caption{\label{Fig3}(Color online) The ES for the spinon ground state (a) and the vacuum ground state (b) in the CSL phase at $J_{2}=0.26, J_{\chi}=0.09$, and the ES in the chiral spin state (c) at $J_{2}=0.26, J_{\chi}=0.2$, resolved into different spin sectors. The spectrum is calculated using infinite DMRG with $L_{y}=6\times 2$. Here $\lambda_{i}$ denotes the eigenvalues of the reduced density matrix, and $k_{y}$ increases in increments of $\frac{2\pi}{L_{y}}$. The quasi-degenerate eigenvalues are labeled by the number below each momentum. The spin sectors are separated with the help of the total $S_{z}$ conservation implemented in the algorithm.}
\end{figure}
\textit{Chiral spin liquids.}
The CSL is characterized by twofold topologically degenerate ground states, which are referred to as the ground states in the vacuum and spinon sectors~\cite{gong2014emergent,he2014obtaining}, respectively. The entanglement spectrum (ES) of the ground state corresponds to the physical edge spectrum that is created by cutting the system in half~\cite{qi2012general,PhysRevB.86.125441,PhysRevLett.110.067208}. Following the chiral $SU(2)_{1}$ conformal field theory~\cite{francesco2012conformal}, the leading ES of a gapped CSL has the degeneracy pattern $\{1,1,2,3,5,\ldots\}$~\cite{wen1990chiral}. As shown in Fig.\ref{Fig3}(a) and (b), the ES in the CSL phase has such a quasi-degenerate pattern with decreasing momentum in the y-direction for each spin sector, though higher degeneracy levels may not be observed due to the finite number of momentum sectors. The ES of the spinon ground state has a symmetry about $S_{z}=\frac{1}{2}$, which corresponds to a spinon at the edge of the cylinder, while that of the vacuum ground state has a symmetry about $S_{z}=0$. The ES is robust in the bulk part of the CSL phase for various parameters and system sizes, but as we approach the phase boundary, additional eigenstates may also mix into the spectrum (see supplemental material~\cite{SuppMaterial}).
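As an aside, the quoted counting $\{1,1,2,3,5,\ldots\}$ coincides, for the levels shown, with the integer partition numbers $p(n)$, which can be generated in a few lines for comparison with the labels in Fig.\ref{Fig3}; this is only a bookkeeping aid, not part of the DMRG calculation.
\begin{verbatim}
def level_counting(nmax):
    # partition numbers p(0..nmax) = 1, 1, 2, 3, 5, 7, ...
    p = [1] + [0] * nmax
    for k in range(1, nmax + 1):      # allow parts of size k
        for n in range(k, nmax + 1):
            p[n] += p[n - k]
    return p

print(level_counting(4))              # [1, 1, 2, 3, 5]
\end{verbatim}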
The main difference between the CSL and the chiral spin state is the topological edge state, which can be identified through the ES. An example of the ES in the chiral spin state is given in Fig.\ref{Fig3}(c), where the quasi-degenerate pattern disappears and additional low-lying states emerge, as opposed to the ES in the CSL shown in Fig.\ref{Fig3}(a) and (b). The phase boundary between these two states is determined mainly by the ES.
The finite chiral order represents the time-reversal-symmetry-breaking chiral current in each small triangle, as shown in Fig.\ref{Fig2}(c). The chiral order is significantly enhanced as the system undergoes a phase transition from the Ising antiferromagnetic state to the CSL. However, the spin correlations continue to decay exponentially, as shown by the $J_{\chi}=0.08$ line in Fig.\ref{Fig2}$(d)$. We further confirm the absence of any conventional spin order in the CSL by obtaining the spin structure in Fig.\ref{Fig4}$(a)$ and comparing it with that in the chiral spin state in Fig.\ref{Fig4}$(b)$. There is no significant peak in the CSL phase, as opposed to the other magnetic phases.
\begin{figure}
\centering
\input{correlation_Gap_structure.tex}
\caption{\label{Fig4}(Color online) $(a)$ shows the spin structure in the CSL phase at $J_{2}=0.2, J_{\chi}=0.16$, where there is no peak, as opposed to the other magnetic phases. This result is based on the cluster of $20\times 4\times 2$. $(b)$ shows the spin structure peaks at fixed $k_{x}=-\frac{2\pi}{\sqrt{3}}$ for various parameters. The blue and red lines are obtained in the chiral spin state with clusters of $20\times 4\times 2$ and $30\times 6\times 2$, respectively. The magnitude of the peak increases as the cluster size increases. The black and grey lines are obtained in the CSL with the same two clusters, where there is no significant peak. $(c)$ shows the finite-size scaling of the spin gap on the torus geometry with clusters of $3\times 3\times 2$, $4\times 3\times 2$, $4\times 4\times 2$, $6\times 4\times 2$, and $8\times 5\times 2$. $(d)$ shows the entanglement entropy for various clusters on finite cylinders in the CSL phase. Here $x$ denotes the distance of the cut in the x direction. All of the results are obtained at $J_{2}=0.2$.}
\end{figure}
In order to identify the excitation properties of the CSL, we obtain the spin-1 excitation gap from the energy difference between the lowest states in the $S=0$ and $S=1$ sectors. To measure the bulk excitation gap, we use the torus geometry to reduce boundary effects. The finite-size scaling of the spin gap using rectangle-like clusters is shown in Fig.\ref{Fig4}$(c)$. The spin gap decays slowly as the cluster grows, and remains finite after the extrapolation, suggesting a gapped phase in the thermodynamic limit. In addition, we study the entanglement entropy of subsystems obtained by cutting at different bonds. As shown in Fig.\ref{Fig4}$(d)$, the entropy becomes flat away from the boundary, which corresponds to a zero central charge in the conformal field theory interpretation~\cite{calabrese2004entanglement}. This supports a gapped CSL phase, consistent with the finite spin gap.
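A hedged sketch of the kind of finite-size extrapolation behind Fig.\ref{Fig4}$(c)$, assuming a leading $1/N$ form in the total number of sites $N$; the gap values below are placeholders, not the actual DMRG data.
\begin{verbatim}
import numpy as np

# torus clusters used in Fig. 4(c); gaps are illustrative placeholders
sites = np.array([3*3*2, 4*3*2, 4*4*2, 6*4*2, 8*5*2])
gaps = np.array([0.35, 0.32, 0.30, 0.28, 0.27])

slope, intercept = np.polyfit(1.0 / sites, gaps, 1)   # fit gap = a/N + b
print("extrapolated spin gap:", intercept)            # finite => gapped phase
\end{verbatim}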
\textit{Summary and discussions.}
Using large-scale DMRG, we identify the long-sought CSL under the perturbation of three-spin chiral interactions in the spin-$\frac{1}{2}$ XY model on the honeycomb lattice. The CSL extends to the intermediate regime with a small $J_{\chi}$, providing evidence of the important interplay between frustration and chiral interactions in driving the CSL. Here, we demonstrate that the chiral interactions are essential for the emergence of the CSL, because the minimum critical $J_{\chi}$ of the phase transition is around $0.02$, which is stable as the system size increases, and below the critical $J_{\chi}$ there is no such quasi-degenerate pattern in the ES (see supplemental material~\cite{SuppMaterial}).
A chiral spin state is also obtained at larger $J_{\chi}$, extending over a wider range of $J_{2}$. The chiral spin state has a spin structure factor peak that grows with system size. Future studies include determining the exact nature of this chiral spin state and the nature of its phase transition into the CSL.
Experimentally, of all the honeycomb materials that show quantum-spin-liquid-like behavior~\cite{nakatsuji2012spin,PhysRevLett.107.197204,PhysRevB.93.214432}, the Co-based compounds such as $BaCo_{2}\left( PO_{4} \right)_{2}$~\cite{nair2018short,zhong2018field} and $BaCo_{2}\left( AsO_{4} \right)_{2}$~\cite{zhong2019weak} are the most studied in the context of the XY model, so it would be extremely interesting to search for the quantum spin liquid in such systems. On the other hand, the CSL results may be tested in cold-atom experiments~\cite{goldman2016topological,aidelsburger2015measuring}, as the spin XY model can be mapped to the bosonic Kane-Mele model in the Mott regime~\cite{plekhanov2018emergent,kane2005quantum}.
\textit{Acknowledgments.}
Y.H. and C.S.T. were supported by the Texas Center for Superconductivity and the Robert A. Welch Foundation Grant No. E-1146. Work at CSUN was supported by National Science Foundation Grant PREM DMR-1828019. Numerical calculations were completed in part with resources provided by the Center for Advanced Computing and Data Science at the University of Houston.
\onecolumngrid
|
2,869,038,155,860 | arxiv | \section{Networks of firms}
\label{intro}
Firms are not simply independent agents competing for customers in markets. Their activity involves many interactions, and some of them even involve some kind of cooperation.
Interactions among firms might include:
\begin{itemize}
\item information exchange\cite{Davis},\cite{Bat03a},\cite{Bat03b};
\item loans\cite{stig},\cite{cats};
\item common endeavours\cite{pow};
\item partial ownership\cite{Bat03c};
\item and of course economic transactions allowing
production\cite{bak} (the present paper).
\end{itemize}
Economic activity can be seen as occurring on an economic network (``le tissu \'economique''): firms are represented by vertices and their interactions by edges. The edges are most often asymmetric (think for instance of provider/customer interactions). The availability of empirical data has provoked research on the structure of these networks: many papers discuss their ``small world properties''\cite{wat} and frequently report scale-free distributions\cite{BA} of the connections among firms.
The long-term interest of economic network research is rather the dynamics creating, or occurring on, these networks: how connections evolve, and what the fluxes of information, decisions\cite{Bat03a},\cite{Bat03b}, economic transactions, etc., are. But dynamical studies lag behind statistical approaches because of conceptual difficulties, and because time series of individual transactions are harder to obtain than time-aggregated statistics.
The recent cascade of bankruptcies that occurred in Eastern Asia in 1997 provoked some research on the influence of the loan network structure on the propagation of ``bad debts'' and the resulting avalanches of bankruptcies (\cite{stig},\cite{cats}). One of the earliest papers on avalanche distributions in economic networks is due to Bak {\it et al}\cite{bak}. It concerns production networks: edges represent supplier/customer connections among firms engaged in batch production activity. The authors describe the distribution of production avalanches triggered by random independent demand events at the output boundary of the production network. These papers (\cite{stig},\cite{cats} and \cite{bak}) are not based on any empirical description of the network structure, but assume a very simple interaction structure: a star structure in the first case\cite{stig},\cite{cats}, and a periodic lattice in the Bak {\it et al} paper\cite{bak}. Nor do they take price dynamics into account.
The present paper is along these lines: we start from a very simple lattice structure and study the consequences of simple local processes of orders/production (with or without failure)/delivery/profit/investment on the global dynamics: the evolution of global production and wealth in connection with their distribution and local patterns. In the spirit of complex systems analysis, our aim is not to present specific economic predictions, but primarily to concentrate on the generic properties (dynamical regimes, transitions, scaling laws) common to a large class of models of production networks.
A minimal model of a production network will first
be introduced in section 2. Simulation results are presented in
section 3. Section 4 is a discussion
of the genericity of the obtained results:
reference is made to comparable soluble models.
We also summarise the results of several variants
of the simplest model. The conclusion is a
discussion of possible applications to
geographical economics.
\section{ A simple model of a production network}
We can schematise the supplier/customer interactions among firms by a production network, where firms are located at the vertices and directed edges represent the delivery of products from one firm to its customers (see figure 1). Independent local failures to produce (or to deliver) by a firm might give rise to the propagation of shortages across the production network.
\begin{figure}[htbp]
\centerline{\epsfxsize=120mm\epsfbox{resfa.eps}}
\caption{Firms are located at the nodes of the lattice. Production ($Y^d$) flows from the resource input layer ($k=l$) to the output layer ($k=0$); orders ($Y$) flow backward.}
\end{figure}
We have chosen a simple periodic lattice with three input connections of equal importance and three output connections per firm. The network is oriented from an input layer (say natural resources) towards an output layer (say the shelves of supermarkets). The transverse axis can be thought of as representing either geographical position or some product space, while the longitudinal axis relates to production. We here use a one-dimensional transverse space to facilitate the representation of the dynamics by two-dimensional patterns, but there is no reason to suppose geographical or product space to be one-dimensional in the real world.
In real economies, the network structure is more heterogeneous, with firms of unequal importance and connectivity. Furthermore, some delivery connections go backwards. Most often these backward connections concern equipment goods; neglecting them as we do here implies considering equipment goods dynamics as much slower than consumption goods dynamics. In any case, since these backward connections enter positive feedback loops, we have no reason to suppose that they would qualitatively disrupt the dynamics that we describe below.
At each time step two opposite flows get across the lattice:
orders are first transmitted upstream from the output layer;
production is then transmitted downstream from the
input layer to the output layer.
\begin{itemize}
\item Orders at the output layer
We suppose that orders are only limited by the production
capacity\footnote{A number of simplifying assumptions of our
model are inspired from \cite{cats}, especially the assumption that
production is limited by production capacity, not by market.}
$A_{0i}$ of the firm in position ${0,i}$, where $0$ indicates the output
layer, and $i$ the transverse position in the layer.
\begin{eqnarray}
Y_{0i} &=& q \cdot A_{0i}
\end{eqnarray}
$Y_{0i}$ is the order in production units, and $q$ a
technological proportionality coefficient relating the quantity
of product $Y$ to the production capacity $A$, combining the
effect of capital and labor. $q$ is further taken equal to 1 without
loss of generality.
\item Orders
Firms at each layer $k$, including the output layer, transfer orders
upstream to get products from layer $k+1$ allowing them to produce.
These orders are evenly distributed across their 3 suppliers upstream.
But any firm can only produce according to its own production capacity
$A_{ki}$. The planned production $Y_{ki}$ is then the minimum of the production capacity and the orders coming from downstream:
\begin{eqnarray}
Y_{ki} &=& \min \left( q \cdot A_{ki} , \sum_{i'\in v_i} \frac{Y_{(k-1)i'}}{3} \right)
\end{eqnarray}
$v_i$ stands for the supplied neighborhood, here supposed to be the three firms served by firm $k,i$ (see figure 1).
We suppose that resources at the input layer are always in excess
and here too, production is limited only by orders and production capacity.
\item Production downstream
Starting from the input layer, each firm then starts producing according to its inputs and its production capacity; but production itself is random, subject to random events. We suppose that at each time step some catastrophic event might occur with constant probability $\mathcal{P}$ and completely destroy production. Failures result in canceling production at the firm where they occur, but also reduce production downstream, since firms downstream have to reduce their own production for lack of input. These failures to produce are uncorrelated in time and location on the grid.
The delivered production $Y^d_{ki}$ of firm $k,i$ then depends upon the production delivered from upstream by its delivering neighborhood $v'_i$ at level $k+1$:
\begin{eqnarray}
Y^d_{ki} &=& (\sum_{i'\in v'_i} Y^d_{(k+1)i'} \cdot \frac{Y_{ki}}{\sum_{i''\in v_{i'}} Y_{ki''}} ) \cdot \epsilon(t)
\end{eqnarray}
\begin{itemize}
\item Whenever any of the firms $i'\in v'_i$ at level $k+1$ is not able to deliver according to the orders it received, it delivers downstream at level $k$ to its delivery neighbourhood $v_{i'}$ in proportion to the initial orders it received, which corresponds to the fraction term;
\item
$\epsilon(t)$ is a random term equal to 0 with probability $\mathcal{P}$ and 1 with probability $1-\mathcal{P}$.
\end{itemize}
The propagation of production deficits due to local independent catastrophic events is the collective phenomenon we are interested in. (A compact implementation of one full order/production/investment sweep is sketched after this list.)
\item Profits and production capacity increase
Production delivery results in payments, which never fail. For each firm, profits are the difference between the valued quantity of delivered products and production costs, minus capital decay.
Profits $\Pi_{ki}$ are then written:
\begin{eqnarray}
\Pi_{ki} &=& p\cdot Y^d_{ki} - c \cdot Y^d_{ki} - \lambda A_{ki}
\end{eqnarray}
where $p$ is the unit sale price,
$c$ is the unit cost of production,
and $\lambda$ is the capital decay constant due to interest rates and
material degradation.
We suppose that all profits are re-invested into production.
Production capacities of all firms are
thus upgraded (or downgraded in case
of negative profits) according to:
\begin{eqnarray}
A_{ki}(t+1)=A_{ki}(t)+ \Pi_{ki} (t)
\end{eqnarray}
\item Bankruptcy and re-birth.
We suppose that firms whose capital becomes negative go into bankruptcy. Their production capacity goes to zero and they neither produce nor deliver. In fact, we even destroy firms whose capital falls under a minimum fraction of the average firm's (typically 1/500). A re-birth process occurs for the corresponding vertex after a latency period: re-birth is due to the creation of new firms which use the business opportunity to produce for the downstream neighborhood of the previously bankrupted firm. New firms are created with a fixed capital, a small fraction of the average firm capital (typically 1/250).\footnote{Adjusting these capital values relative to the average firm capital $<A>$ is a standard hypothesis in many economic growth models: one supposes that in evolving economies such processes depend upon the actual state of the economy\cite{solGLV} and not upon fixed and predefined values.}
\end{itemize}
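To make the update rules concrete, here is a minimal Python/NumPy sketch of one complete time step (orders upstream, production downstream with failures, profits re-invested), following equations (1)-(5) with $q=1$; the neighbour convention, the parameter values and the omission of the bankruptcy/re-birth bookkeeping are our simplifications.
\begin{verbatim}
import numpy as np

L, W = 10, 200                        # layers (k = 0 is the output), width
p, c, lam, P = 1.185, 0.8, 0.2, 0.05  # price, cost, decay, failure probability
rng = np.random.default_rng(0)
A = rng.uniform(1.0, 1.1, (L, W))     # initial production capacities

def nn(x):                            # sum over the 3 transverse neighbours
    return np.roll(x, 1) + x + np.roll(x, -1)

def step(A):
    # orders, upstream (eqs. 1-2): each firm splits its order over 3 suppliers
    Y = np.empty_like(A)
    Y[0] = A[0]
    for k in range(1, L):
        Y[k] = np.minimum(A[k], nn(Y[k - 1]) / 3.0)
    # production, downstream (eq. 3), with local catastrophic failures
    eps = (rng.random(A.shape) > P).astype(float)
    Yd = np.empty_like(A)
    Yd[L - 1] = Y[L - 1] * eps[L - 1]       # resources in excess at the input
    for k in range(L - 2, -1, -1):
        recv = nn(Y[k]) / 3.0               # orders received by each supplier
        ratio = np.where(recv > 0, Yd[k + 1] / recv, 0.0)
        Yd[k] = nn(ratio) * Y[k] / 3.0 * eps[k]
    # profits re-invested into production capacities (eqs. 4-5)
    return A + (p - c) * Yd - lam * A, Yd
\end{verbatim}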
The dynamical system that we have defined here belongs to a large class of nonlinear systems from chemical physics called reaction-diffusion systems (see e.g. \cite{solAB}). The reaction part here is the autocatalytic loop of production and capital growth coupled with capital decay and death processes. The diffusion part is the diffusion of orders and production across the lattice. We can a priori expect dynamical behaviour with spatio-temporal patterns, well-characterised dynamical regimes separated in parameter space by transitions or crossovers, and scale-free distributions, since the dynamics is essentially multiplicative and noisy. These expectations guided our choices of quantities to monitor during the simulations.
\section{ Simulation results}
\subsection{ Methods and parameter choice}
Unless otherwise stated, the following results were obtained for a
production network with 1200 nodes and ten layers between the input and
the output.
Initial wealth is uniformly and randomly distributed
among firms:
\begin{equation}
A_{ki} \in [1.0,1.1]
\end{equation}
One time step corresponds to the double sweep of orders and production across the network, plus the update of capital according to profits. The simulations were run for typically 5000 time steps.
The figures further displayed correspond to:
\begin{itemize}
\item a capital threshold for bankruptcy of $<A>/500$;
\item an initial capital level of new firms of $<A>/250$;
\end{itemize}
Production costs were $c=0.8$ and the capital decay rate $\lambda=0.2$. In the absence of failures, stability of the economy would be ensured by a sale price $p=1.0$. In fact, only the relative difference between these parameters influences stability. But their relative magnitude with respect to the inverse delay between bankruptcy and the creation of new firms also qualitatively influences the dynamics.
In the limit of a low probability of failure, when bankruptcies are absent, the linear relation between the failure probability $\mathcal{P}$ and the equilibrium price $p$ is written:
\begin{eqnarray}
p=c+\lambda+ \frac{l}{2} \cdot \mathcal{P}
\end{eqnarray}
where $l$ is the total number of layers.
The $\frac{l}{2}$ comes from the fact that the integrated damage
due to an isolated failure is proportional to the average number of
downstream layers. The slopes at the origin of
the breakeven lines of figure 2 verify this equation.
Most simulations were monitored online: we directly observed the evolution of the local patterns of wealth and production, which our choice of a lattice topology made possible. Most of our understanding comes from these direct observations, but we can only display global dynamics or static patterns in this manuscript.
\subsection{Monitoring global economic performance}
The performance of the economic system under failures can be tested by checking which prices correspond to breakeven: the capital dynamics being essentially exponential, the parameter space is divided into two regions, where economic growth or collapse is observed. Drawing the breakeven manifolds, for instance in the failure probability $\mathcal{P}$ versus sale price $p$ plane, allows one to compare the influence of the other parameters. The growth regime is observed in the low-$\mathcal{P}$, high-$p$ region, and the collapse regime in the high-$\mathcal{P}$, low-$p$ region.
\begin{figure}[htbp]
\centerline{\epsfxsize=120mm\epsfbox{dichotlay3.eps}}
\caption{Regime diagram in the sale price versus probability of failure plane. The time lag between bankruptcy and re-birth is 20. The two regions of growth and economic collapse at large times are separated by lines whose position is fixed by the simulation parameters. We here varied the production network depth: the red '+' line was obtained for a 3-layer net, the green 'x' line for a 5-layer net, the blue '*' line for an 8-layer net, and the pink square line for a 10-layer net.}
\end{figure}
Figure 2 displays four breakeven manifolds corresponding to different lattice depths. At low failure probability, the breakeven lines follow equation 7. At higher values of $\mathcal{P}$, interactions among firm failures become important, hence the nonlinear increase of the compensating prices. Breakeven manifolds are a simple test of the economic performance of the network: when performance is poor, the compensating sale price has to be larger. We checked for instance that increasing the bankruptcy threshold and the initial capital of new firms increases global economic performance. On the other hand, increasing the time lag between bankruptcy and the appearance of new firms increases breakeven sale prices in the nonlinear region.
Among other systematic tests, we checked parent models
with more realistic representations of production costs such as:
\begin{itemize}
\item
Influence of capital inertia: production costs do not instantly readjust to orders; capital and labour have some inertia, which we modeled by writing that production costs are a maximum function of actual costs and costs at the previous period.
\item
Influence of the cost of credit: production failures increase
credit rates.
\end{itemize}
Both variants of course yield higher breakeven sale prices; nevertheless, they display the same generic properties that we discuss in the next sections. Most further results, dynamical and statistical, are based on runs close to the breakeven price, in order to avoid systematic drifts and recalibrations.
\subsection{Time evolution}
The simplest way to monitor the evolution of the system is to display the time variations of some of its global performance measures. Figure 3 displays the time variations of the total delivered production $Y^d$, the total wealth $A$, the total undelivered production due to failures, and the fraction of active firms for a 1200x10 lattice, with a probability of failure of 0.05 and a compensating sale price of 1.185. The time lag between bankruptcy and new firm creation is either 1 (for the left diagram) or 5 (for the right diagram).
\begin{figure}[htbp]
\centerline{\epsfxsize=100mm\epsfbox{evol10lag1agr.eps}\epsfxsize=100mm\epsfbox{evol10lag5.eps}}
\caption{Time evolution of wealth (red '+'), production (blue '*'), destroyed production (green 'x'), active firms (magenta empty squares) and production by the largest firm (cyan hollow squares). The network has 10 layers and 200 firms per layer, with failure probability $\mathcal{P}=0.05$. The left diagram corresponds to a small time lag (1) between bankruptcy and firm re-birth, the right diagram to a larger time lag (5). The vertical scale is logarithmic, which permits the displayed quantities to share the same time plot but reduces the apparent amplitude of the fluctuations occurring when time is larger than 1000.}
\end{figure}
The features that we report here are generic to most simulations at breakeven prices. During the initial steps of the simulation, here say 1000, the wealth distribution widens due to the influence of failures. Bankruptcy cascades do not occur, as observed by checking the number of active firms, until the lowest wealth values reach the bankruptcy threshold. All quantities have smooth variations. Later, for $t>1000$, one observes large production and wealth fluctuations characteristic of critical systems. At the larger time lag (5) between bankruptcy and firm re-birth, when bankruptcies become frequent, they can cascade across the lattice and propagate in both network directions, as seen on the right diagram of figure 3. A surprising feature of the dynamics is that avalanches of bankruptcies are not correlated with the production level. Even when only one tenth of the firms are active, the total production is still high. In fact, in this model, most of the total production is dominated by large firms, and avalanches, which concern mostly small firms, are of little consequence for the global economy.
Battiston {\it et al} study more thoroughly the time dynamics of a related model (large sale price fluctuations possibly inducing bankruptcies and lack of payment) in \cite{bat-galleg}.
\subsection{Wealth and production patterns}
Like most reaction-diffusion systems, the dynamics is not uniform in space and displays patterns. The wealth and production patterns displayed after 5000 time steps in figures 4 and 5 were obtained for $\mathcal{P}=0.05$. They reflect wide distributions and spatial organisation. In these diagrams, production flows upward. The upper diagram displays wealth $A$ and the lower one production $Y^d$. The intermediate bar is the colour scale: black=0, violet is the maximum wealth or production. (We in fact displayed the square roots of $A$ and $Y^d$ in order to increase the visual dynamics of the displays; otherwise large regions of the patterns would have been red because of the scale-free distributions of $A$ and $Y^d$, see further.) The important result is that although production has random fluctuations and diffuses across the lattice, the inherent multiplicative (or autocatalytic) process of production plus re-investment, coupled with local diffusion, results in a strong metastable local organisation: the dynamics clusters rich and productive firms in ``active regions'' separated by ``poor regions'' (in red or black).
\begin{figure}[htbp]
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0 3]{patlag1t2500.eps}}
\caption{Patterns of wealth (upper pattern) and production (lower pattern) after 5000 iteration steps with the parameter set-up of figure 3 (left) (time lag = 1), for a 200x10 lattice. For both patterns the output layer is the last one above. The intermediate line is the colour code, with minimal amplitude at the extreme left. We observe an alternation of highly productive regions (in pink, blue and green) with less active regions (in red). Local production failures, represented by black dots, are randomly distributed across the production pattern. Only one bankrupted firm is observed on the wealth pattern.}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0 3]{patlag5t2500.eps}}
\caption{Patterns of wealth (upper pattern) and production (lower pattern) after 5000 iteration steps with the parameter set-up of figure 3 (right) (time lag = 5). The same alternation of active and less active regions is observed, but with the larger time lag (5) we also get large zones of bankrupted firms, in black.}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0 3]{patfi250.eps}}
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0 3]{patfi750.eps}}
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0 3]{patfi1250.eps}}
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0 3]{patfi1750.eps}}
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0 3]{patfi2250.eps}}
\caption{Successive patterns of wealth after 250, 750, 1250, 1750 and 2250
time steps with the parameter set-up of figure 3 (right, time lag = 5) for a 1200x10 lattice.}
\end{figure}
These patterns evolve in time, but are metastable on a long time scale, say of the order of several hundred time steps, as one can observe from the succession of patterns at different steps of the simulation in figure 6. The relative importance of the active (and richer) regions can be checked by a Zipf plot\cite{Zipf}. We first isolate active regions by ``clipping'' the downstream (along the $k$ axis) integrated wealth at a level of one thousandth of the total\footnote{Clipping here means that when the integrated wealth is lower than the threshold it is set to zero}.
\begin{figure}[htbp]
\centerline{\epsfxsize=120mm\epsfbox{clip.eps}}
\caption{ Separating richer regions. Downstream integrated
wealth levels (green '+') are plotted as a function of their transverse position.
The clipping level indicated by the red line isolates richer regions
(those wealth peaks above the red line).}
\end{figure}
We then transversally (along the $i$ axis) integrate the wealth of the active regions and rank-order these regional wealths to obtain the Zipf plots.
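A minimal NumPy sketch of this clip-and-integrate construction; the clipping fraction is the one quoted above, and, as a simplification, a region wrapping around the periodic boundary is split in two.
\begin{verbatim}
import numpy as np

def regional_wealths(A, clip_frac=1e-3):
    # A: (L, W) wealth array; returns rank-ordered regional wealths
    w = A.sum(axis=0)                      # integrate downstream (k axis)
    w[w < clip_frac * w.sum()] = 0.0       # clip at a fraction of the total
    regions, run = [], 0.0
    for v in w:                            # transverse (i axis) integration
        if v > 0.0:
            run += v
        elif run > 0.0:
            regions.append(run)
            run = 0.0
    if run > 0.0:
        regions.append(run)
    return sorted(regions, reverse=True)   # plot versus rank for a Zipf plot
\end{verbatim}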
\begin{figure}[htbp]
\centerline{\epsfxsize=150mm\epsfbox{Zipfr.eps}}
\caption{Zipf plot of wealth of the most active regions for
the standard and adaptive firms models (cf. section 4.2).
The vertical axis displays the production relative to
the total production.
The red '+' correspond to the standard model with time lag = 5,
green 'x' to time lag = 1, and blue '*' to the
adaptive firms model with time lag = 1.}
\end{figure}
All three Zipf plots display some resemblance with standard Zipf\cite{Zipf} plots of individual wealth, firm size and city size. For the model discussed here, the sizes decrease following approximately a power law. The apparent\footnote{The approximate algorithm that we use to isolate high-productivity regions is responsible for the kinks in the Zipf plot} exponent is one when the time lag is 1. It is much higher when the time lag is 5.
Zipf plots of the output\footnote{Rather than vertically integrating production, we applied the clipping, horizontal integration and ordering algorithm to the firms at the output layer ($k=0$)} of active regions (not shown here) display the same characteristics. When the time lag is 5, the most productive region accounts for more than 50 perc. of total production; the figure is 18 perc. for the second peak. The distribution is typically ``winner takes all''. The equivalent figures when the time lag is 1 are 10 and 8.5 perc.
In conclusion, the patterns clearly display some intermediate-scale organisation into active and less active zones: strongly correlated active regions are responsible for most of the production. The relative importance of these regions obeys a Zipf distribution.
\subsection{Wealth and production histograms}
The multiplicative random dynamics of capital and the direct observation of wealth and production would lead us to predict a scale-free distribution\footnote
{What we mean here by scale free is that no characteristic
scale is readily apparent from the distribution
as opposed for instance to gaussian distributions.
Power law distributions are scale free.
A first discussion of power law distributions
generated by multiplicative processes appeared in
\cite{kes}.}
of
wealth and production.
The cumulative distribution functions (cdf) of wealth and production observed on figure 8 are indeed wide-ranging and do not display any characteristic scale. The wealth and production data were taken under the same conditions as in the previous figures, at the end of the simulation, i.e. after 5000 time steps. The medium range of the cdf when the time lag is 1 (figure 8a) extends over one and a half decades with an apparent slope of $1 \pm 0.05$ in log-log scale. This observed dependence of the wealth cdf, log-normal at lower $A$ values followed by a power law at intermediate $A$ values, is consistent with expressions derived for the pdf in the literature on coupled differential equations with multiplicative noise.
Bouchaud and M\'ezard\cite{bouch} e.g. obtained:
\begin{equation}
P(w)= Z \, \frac{\exp\left(-\frac{\mu-1}{w}\right)}{w^{1+\mu}}
\end{equation}
(where $w$ stands for the wealth relative to average wealth $\bar{A}$),
from the differential system:
\begin{equation}
\frac{dA_i}{dt}= \eta_i(t)\cdot A_i + J \cdot (\bar{A}-A_i) .
\end{equation}
where $\eta_i(t)$ is a random multiplicative noise,
with variance $\sigma^2$; $\mu=1+\frac{J}{\sigma^2}$.
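For comparison, a minimal Euler-Maruyama sketch of the mean-field dynamics of equation 9; the parameter values and the discretization are our illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, J, sigma, dt = 2000, 0.05, 0.3, 0.01
A = np.ones(N)
for _ in range(20000):                      # dA = A eta dt + J (Abar - A) dt
    eta = rng.normal(0.0, sigma * np.sqrt(dt), N)
    A += A * eta + J * (A.mean() - A) * dt
print("predicted tail exponent mu =", 1 + J / sigma**2)   # cf. eq. 8
\end{verbatim}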
At higher wealth, the straight line wiggles and drops much faster: this is because of the underlying region structure. The last 80 perc. of the wealth is concentrated in two rich regions, and its distribution is dominated by local diffusion phenomena in these regions. The departure from the standard (eq. 8) distribution is even more noticeable when avalanches are present. The large-wealth shoulder is bigger (95 perc. of production) and the first point, at zero wealth, stands well above the rest of the distribution: it corresponds to those 50 perc. of the firms which are momentarily bankrupted. The fraction of bankrupted firms fluctuates in time, and so does the slope of the linear segment\footnote{Both fluctuations are correlated, since the slope of the linear segment depends upon the number of firms in the distribution}.
\begin{figure}[htbp]
\centerline{\epsfxsize=100mm\epsfbox{cdfl10lag1.eps}\epsfxsize=100mm\epsfbox{cdfl10lag5.eps}}
\caption{Cumulative distribution of wealth (red '+') after 5000 iteration steps. Parameter
choices are the same as the previous figures.}
\end{figure}
In conclusion, the observed statistics largely reflect the underlying region structure: at intermediate levels of wealth, the different wealth peaks overlap (in wealth, not in space!), and we then observe a smooth cdf obeying equation 8. At the large-wealth extreme, the fine structure of the peaks is revealed.
\section{Conclusions}
The simple model of production networks that we
proposed presents some remarkable properties:
\begin{itemize}
\item Scale free distributions of wealth and production.
\item Large spatial distribution of wealth and production.
\item A few active regions are responsible for most production.
\item Avalanches of bankruptcies occur for larger values
of the time lag between bankruptcy and firm re-birth.
But even when most firms are bankrupted, the global
economy is little perturbed.
\end{itemize}
Are these properties generic to a large class of models? We will first briefly report on equations which display similar behaviour, and then examine the results which we obtained with variants of the model.
\subsection{Formal approaches of similar dynamics}
A number of models which display equivalent
phenomena have been proposed and formally solved.
We kept our own notation to display similarities:
\begin{itemize}
\item Growth by deposition on surfaces\cite{THH}, Edwards/Wilkinson:
\begin{eqnarray}
\frac{dA}{dt}= D \cdot \Delta A + \eta(x,t)
\end{eqnarray}
$A$ stands for the distance to the interface, $D$ is the surface diffusion constant of the deposited material, and $\eta(x,t)$ is an additive noise. Other models were proposed by Kardar/Parisi/Zhang, Derrida/Spohn\cite{THH}, etc.
\item Generalised Lotka-Volterra models from econophysics
(Bouchaud\cite{bouch}, Cont, Mezard, Sornette, Solomon\cite{solGLV}, etc.):
\begin{eqnarray}
\frac{dA_i}{dt}= A_i \cdot \eta_i(t) + \sum_{j} J_{ij} A_j - \sum_{j} J_{ji} A_i
\end{eqnarray}
$A_i$ stands for the individual wealth of agents and $\eta_i(t)$ is a multiplicative noise. Agents are involved in binary transactions of ``intensity'' $J_{ij}$. Mean-field formal solutions display a scale-free distribution of wealth. Simulations display patterns on lattice structures (Souma {\it et al}\cite{souma}).
\item
Solomon {\it et al}\cite{solAB}: reaction-diffusion AB models.
\begin{eqnarray}
\frac{dA}{dt}= k \cdot A \cdot \eta(x,t) + D \cdot \Delta A
\end{eqnarray}
$A$ is the chemical concentration of a product involved in an autocatalytic chemical reaction, and $D$ is its diffusion constant. Simulations and formal derivations yield spatio-temporal patterns similar to ours.
\end{itemize}
\subsection{Variants of the original model}
We started checking three variants, for instance with more realistic production costs, taking into account:
\begin{itemize}
\item
Influence of capital inertia: production costs do not instantly readjust to orders; capital and labour have some inertia, which we modeled by writing that production costs are a maximum function of actual costs and costs at the previous period.
\item
Influence of the cost of credit: production failures increase
credit rates.
\end{itemize}
The preliminary simulations confirm the genericity of our
results.
The third variant is a model with ``adaptive firms''. The lattice connection structure supposes a passive, reactive behaviour of firms. But if a firm consistently delivers less than the orders it receives, its customers should order less from it and look for alternative suppliers. Such adaptive behaviour, leading to an evolving connection structure, would be more realistic. We therefore also checked an adaptive version of the model by writing that the orders of firm $i$ are proportional to the production capacity $A$ of the upstream firms connected to firm $i$. Simulations gave qualitative results similar to those obtained with fixed structures.
\begin{figure}[htbp]
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0 3]{patadapt1500.eps}}
\vskip 1cm
\centerline{\includegraphics[width=18cm, clip=true, trim= 0 0 0
3]{patadapt1998.eps}}
\caption{Wealth and production patterns for a network of ``adaptive'' firms. The conventions and parameters are the same as for figures 3, 4 and 5, for a 200x10 lattice. The time lag is 1; the two upper patterns correspond to $t=1500$, the lower ones were taken at $t=1998$.}
\end{figure}
We observe that adaptation strongly reinforces the local structure of the economy. The general picture is the same: scale-free distributions of production and wealth with metastable patterns. Due to the strong local character of the economy:
\begin{itemize}
\item Avalanches of production are observed (see figure 9), even when the time lag is short (time lag of 1).
\item The spatial periodicity of the active zones is increased (see figure 9, with a larger density of smaller zones). But again the activity distribution among zones is ``winner takes all'' (figure 7).
\end{itemize}
\subsection{Checking stylised facts}
Even though the present model is quite primitive\footnote{We e.g. discuss a ``Mickey Mouse'' economy with fixed prices independent of supply and demand. Introducing price dynamics is not a major challenge: we would simply face an ``embarras de richesse'', having to choose among either local or global prices. In fact, both kinds of adjustment have already been tested: global adjustment in the case of production costs connected to production failures through credit costs, and local adjustment in the case of adaptive behaviour. We have already shown that they don't change the generic properties of the dynamics.},
it is still tempting to draw some conclusions
that could apply to real economies.
The most striking result, to our mind, is the strong and relatively stable spatial disparities that it yields. Let us compare this prediction to the economic situation of developing countries: large and persistent disparities in wealth and production levels as compared to developed countries.
We can even go further and raise questions about the influence
of the depth of the production network or the kind of
investment needed:
\begin{itemize}
\item One generally agrees that disparities between developing and developed countries have increased since the industrial revolution. This is also a period during which production became more specialised, which translates in our model into an increase of the network depth: for instance, a shoemaker would in the past make and sell a pair of shoes from leather obtained from a cattle breeder, while nowadays the shoe production and delivery process involves many more stages. Our simulations have shown that increasing depth increases the fragility of economies to failures and bankruptcies. The new industrial organisation may thus have detrimental effects on developing economies.
\item
Obviously, investment policies in developing countries require some coordination across the whole production chain. Bad economic results might be due to very local conditions, but they can also reflect the lack of supplier/producer connections.
\end{itemize}
The above remarks are not counter-intuitive, and these conclusions could have been reached by verbal analysis. What the model brings is the dramatic and persistent consequences of such apparently trivial details.
Acknowledgments: We thank Bernard Derrida and Sorin Solomon for illuminating discussions, as well as the participants of the CHIEF Ancona Thematic Institute, especially Mauro Gallegati. CHIEF was supported by the EXYSTENCE network of excellence, EC grant FET IST-2001-32802. This research was also supported by the COSIN FET IST-2001-33555, E2C2 NEST 012975 and CO3 NEST 012410 EC grants.
|
2,869,038,155,861 | arxiv |
\section{Introduction}
Federated Learning (FL) allows distributed clients to collaborate and train a centralized global model without the transmission of local data.
In practice, mobile and edge devices that are equipped with drastically different computation and communication capabilities are becoming the dominant source for FL \cite{lim2020federated}. This has prompted significant recent attention to a family of FL algorithms focusing on training heterogeneous local models (often obtained through pruning a global model). It includes algorithms like HeteroFL \cite{diao2021heterofl} that employ heterogeneous local models with fixed structures, algorithms utilizing pre-trained local models like \cite{frankle2019lottery}, as well as algorithms like PruneFL \cite{jiang2020model} that update local models adaptively during training. However, the success of these algorithms has only been demonstrated
empirically (e.g., \cite{diao2021heterofl, jiang2020model}). Unlike standard FL, which has received rigorous theoretical analysis \cite{wang2018cooperative,bonawitz2019towards,yu2019parallel,convergenceNoniid}, the convergence of heterogeneous FL with adaptive online model pruning is still an open question, and little is known about whether such algorithms converge to a solution of standard FL.
To answer these questions, in this paper we present a unifying framework for heterogeneous FL algorithms with {\em arbitrary} adaptive online model pruning and provide a general convergence analysis. There have been many existing efforts to establish convergence guarantees for FL algorithms, such as the popular FedAvg \cite{fedavg}, on both IID and non-IID\footnote{Throughout this paper, ``non-IID data'' means that the data among local clients are not independent and identically distributed.} data distributions, but they all rely on the assumption that a single uniform model structure is shared by all client devices. By considering {\em arbitrary} pruning strategies in our framework, we formally establish the convergence conditions for a general family of FL algorithms with both (i) heterogeneous local models to accommodate different resource constraints on client devices and (ii) time-varying local models to continuously refine pruning results during training. We prove that these FL algorithms with arbitrary pruning strategies satisfying certain sufficient conditions can indeed converge (at a speed of $O(\frac{1}{\sqrt{Q}})$, where $Q$ is the number of communication rounds) to a stationary point of standard FL for general smooth cost functions.
To the best of our knowledge, this is the first convergence analysis for heterogeneous FL with arbitrary adaptive online model pruning. The framework captures a number of existing FL algorithms as important special cases and provides a general convergence guarantee for them, including HeteroFL \cite{diao2021heterofl}, which employs fixed-structure local models, PruneFL \cite{jiang2020model}, which requires periodically training a full-size model, and S-GaP \cite{ma2021effective}, which can be viewed as a single-client version. Moreover, we show that the convergence gap is affected by both pruning-induced noise (i.e., modeled through a constant $\delta^2$) and a new notion of minimum coverage index (i.e., any parameter in the global model is covered by at least $\Gamma_{\rm min}$ local models). In particular, this advocates a joint design of efficient local-model pruning strategies (e.g., leveraging \cite{wen2016learning,li2016pruning,ciresan2011flexible}) for efficient training. Our results provide solid theoretical support for designing heterogeneous FL algorithms with efficient pruning strategies while ensuring convergence similar to that of standard FL.
We carried out extensive experiments on two datasets, which suggest that, for a given level of model sparsity, client models should also be designed to maximize the coverage index rather than only keep the largest parameters through pruning. As an example, a federated learning network with 85\% sparsity obtained via our design to maximize the coverage index achieves up to 8\% improvement compared to a network of identical architecture generated by pruning alone, without posing any additional computation overhead.
In summary, our paper makes the following key contributions:
\begin{itemize}
\vspace{-0.07in}
\item We propose a unifying framework for heterogeneous FL with arbitrary adaptive online model pruning. It captures a number of existing algorithms (whose success has been demonstrated empirically) as special cases and allows convergence analysis.
\vspace{-0.07in}
\item The general convergence of these algorithms is established. On both IID and non-IID data, we prove that, under standard assumptions and certain sufficient conditions on the pruning strategy, the algorithms converge to a stationary point of standard FL for smooth cost functions.
\vspace{-0.07in}
\item We further analyze the impact of the key factors contributing to the convergence and advocate a joint design of local pruning masks with respect to both the pruning-induced error and a notion of minimum coverage index. The results are validated on the MNIST and CIFAR10 datasets.
\end{itemize}
\section{Background}
\noindent {\bf Standard Federated Learning.} A standard federated learning problem considers a distributed optimization over $N$ clients:
\begin{equation}
\min_{\theta} \left\{ F(\theta) \triangleq \sum_{i=1}^N p_i F_i(\theta) \right\}, \ {\rm with} \ F_i(\theta) = \mathbb{E}_{\xi\sim D_i}\, l(\xi,\theta).
\end{equation}
Here $\theta$ is a set of trainable weights/parameters, $F_i(\theta)$ is a cost function defined on the data set $D_i$ with respect to a user-specified loss function $l(\xi,\theta)$, and $p_{i}$ is the weight of the $i$-th client, such that $p_{i}\geq 0$ and $\sum_{i=1}^{N} p_{i} = 1$.
The FL procedure, e.g., FedAvg \cite{fedavg}, typically consists of a sequence of stochastic gradient descent steps performed distributedly on each local objective, followed by a central step collecting the workers' updated local parameters and computing an aggregated global parameter.
For the $q$-th round of training, the central server first broadcasts the latest global model parameters ${\theta}_{q}$ to clients $n=1,\ldots,N$, who perform local updates as follows:
$$
{{\theta}_{q,n,t} = {\theta}_{q,n,t-1} - \gamma \nabla F_{n}({\theta}_{q,n,t-1};\xi_{n,t-1}) {\rm \ with \ } {\theta}_{q,n,0} = {\theta}_{q}}
$$
where $\gamma$ is the local learning rate. After all available clients have concluded their local updates (in $T$ epochs), the server will aggregate parameters from them and generate the new global model for the next round, i.e.,
$$
{{\theta}_{q+1} = \sum_{n=1}^N p_n {{\theta}_{q,n,T}}}
$$
The formulation captures FL with both IID and non-IID data distributions.
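For intuition, a minimal Python/NumPy sketch of one FedAvg round on a toy least-squares objective; the quadratic loss and the hyper-parameter values are illustrative choices, not part of the formulation above.
\begin{verbatim}
import numpy as np

def local_sgd(theta, X, y, gamma=0.01, T=5):
    # T local epochs of SGD on a least-squares client objective (illustrative)
    for _ in range(T):
        for i in np.random.permutation(len(X)):
            theta = theta - gamma * (X[i] @ theta - y[i]) * X[i]
    return theta

def fedavg_round(theta, clients, p):
    # broadcast theta_q, run local updates, aggregate with weights p_n
    locals_ = [local_sgd(theta.copy(), X, y) for X, y in clients]
    return sum(pn * th for pn, th in zip(p, locals_))
\end{verbatim}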
\noindent {\bf Model Pruning.} Model pruning via weight and connection pruning is one of the promising methods to enable efficient neural networks: it sets a proportion of the weights and biases to zero and thus reduces both computation and memory usage. Most works on weight pruning require three phases of training: a pre-training phase, a pruning-to-sparse phase, and a fine-tuning phase. For a neural network $F(\theta;\xi)$ with parameters $\theta$ and input data $\xi$, the pruning process takes $F(\cdot)$ as input and generates a new model $F(\theta \odot m;\xi)$, where $m \in \{0,1\}^{|{\theta}|}$ is a binary mask that sets certain parameters to zero and $\odot$ denotes element-wise multiplication. The pruning mask $m$ is computed from a certain pruning policy $\mathbb{P}$, e.g., layer-wise parameter pruning that removes weights below a certain percentile, or neuron pruning that removes neurons with small average weights. We use $\theta \odot m$ to denote the pruned model, which has a reduced model size and is more efficient for communication and training.
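A minimal sketch of one common instance of $\mathbb{P}$, global magnitude pruning; the threshold rule and the 85\% sparsity level are illustrative.
\begin{verbatim}
import numpy as np

def magnitude_mask(theta, sparsity):
    # binary mask m keeping the largest-magnitude (1 - sparsity) fraction
    k = int(sparsity * theta.size)          # number of weights to zero out
    thresh = np.sort(np.abs(theta).ravel())[k]
    return (np.abs(theta) >= thresh).astype(theta.dtype)

theta = np.random.randn(1000)
m = magnitude_mask(theta, 0.85)             # 85% sparsity
pruned = theta * m                          # element-wise product: theta, m
\end{verbatim}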
\section{Related Work}
\noindent {\bf Federated Averaging and Communication Efficient FL}. FedAvg \cite{fedavg} is considered the first and the most commonly used federated learning algorithm, where in each round of training local clients train using their own data, with their parameters averaged at the central server. FedAvg is able to reduce communication costs by training clients for multiple local rounds. Several works have shown the convergence of FedAvg under several different settings, with both homogeneous (IID) data \cite{wang2018cooperative,woodworth2018graph} and heterogeneous (non-IID) data \cite{convergenceNoniid,bonawitz2019towards,yu2019parallel}, even with partial client participation. Specifically, \cite{yu2019parallel} demonstrated that LocalSGD achieves $O(\frac{1}{\sqrt{NQ}})$ convergence for non-convex optimization, and \cite{convergenceNoniid} established a convergence rate of $O(\frac{1}{Q})$ for strongly convex problems on FedAvg, where $Q$ is the number of SGD steps and $N$ is the number of participating clients.
Several works \cite{karimireddy2020scaffold,wang2019adaptive,wang2019adaptive22} have been proposed to further reduce the communication costs. One direction is to use data compression such as quantization \cite{konevcny2016federated, bonawitz2019towards,mao2021communication,yao2021fedhm}, sketching \cite{alistarh2017qsgd, ivkin2019communication}, split learning \cite{thapa2020splitfed} and learning with gradient sparsity \cite{han2020adaptive}. This type of work does not consider computation efficiency.
\noindent {\bf Neural Network Pruning and Sparsification}. To reduce the computation costs of a neural network, neural network pruning is a popular research topic. A magnitude-based prune-from-dense methodology \cite{han2015learning,guo2016dynamic,yu2018nisp,liu2018rethinking,real2019regularized} is widely used, where weights smaller than a certain preset threshold are removed from the network. In addition, there are one-shot pruning at initialization \cite{lee2018snip}, iterative pruning approaches \cite{zhu2017prune,narang2017exploring} and adaptive pruning approaches \cite{lin2020dynamic, ma2021effective} that allow the network to grow and prune.
In \cite{frankle2019lottery, morcos2019one} a ``lottery ticket hypothesis'' was proposed: with an optimal substructure of the neural network acquired by weight pruning, directly training a pruned model can reach results similar to those of pruning a pre-trained network. The other direction is sparse mask exploration \cite{bellec2017deep, mostafa2019parameter,evci2020rigging}, where sparsity is maintained in the neural network during the training process, while the fraction of retained weights is explored based on random or heuristic methods. \cite{frankle2019lottery,mostafa2019parameter} empirically observed that training models with static sparse parameters converges to a solution with higher loss than dynamic sparse training. Note that efficient sparse matrix multiplication sometimes requires special libraries or hardware, e.g., the sparse tensor cores in the NVIDIA A100 GPU, to achieve an actual reduction in memory footprint and computational resources.
\noindent {\bf Efficient FL with Heterogeneous Neural Networks}. Several works have been proposed to address the reduction of both computation and communication costs, including approaches utilizing lossy compression and dropout techniques \cite{caldas2018expanding, xu2019elfish}. Although early works mainly assume that all local models share the same architecture as the global model \cite{li2020federated}, recent works have empirically demonstrated that federated learning with heterogeneous client models, to save both computation and communication, is feasible. PruneFL \cite{jiang2020model} proposed an approach with adaptive parameter pruning during federated learning. \cite{li2021fedmask} proposed federated learning with personalized and structured sparse masks. HeteroFL \cite{diao2021heterofl} proposed to generate heterogeneous local models as subnets of the global network by picking the leading continuous parameters layer-wise with the help of a proposed static batch normalization, while \cite{li2021hermes} finds the small subnetworks by applying structured pruning. Despite their empirical success, these methods lack theoretical convergence guarantees, even in convex optimization settings.
\section{Our Main Results}
We rigorously analyze the convergence of heterogeneous FL under arbitrary adaptive online model pruning and establish conditions for converging to a stationary point of standard FL with general smooth cost functions. The theoretical results in this paper not only illuminate key convergence properties but also provide solid support for designing adaptive pruning strategies in heterogeneous FL algorithms.
\subsection{FL under Arbitrary Adaptive Online Pruning}
In this paper, we focus on a family of FL algorithms that leverage adaptive online model pruning to train heterogeneous local models on distributed clients. By considering {\em arbitrary} pruning strategies in our formulation, it relaxes a number of key limitations in standard FL: (i) Pruning masks are allowed to be time-varying, enabling online adjustment of pruned local models during the entire training process. (ii) The pruning strategies may vary for different clients, making it possible to optimize the pruned local models with respect to individual clients' heterogeneous computing resource and network conditions.
More precisely, we use a series of masks $m_{q,n} \in \{0,1\}^{|{\theta}|}$ to model an adaptive online pruning strategy that may change the pruning mask $m_{q,n}$ in any round $q$ and for any client $n$. Let $\theta_q$ denote the global model at the beginning of round $q$ and $\odot$ be the element-wise product. Thus, $\theta_q\odot m_{q,n}$ defines the trainable parameters of the pruned local model\footnote{While a pruned local model has fewer parameters than the global model, we adopt the notation in \cite{a,b} and use $\theta_q\odot m_{q,n}$ with an element-wise product to denote the pruned local model: only parameters corresponding to a 1-value in the mask are accessible and trainable in the local model.} for client $n$ in round $q$.
Here, we describe one round (say the $q$th) of the algorithm. First, the central server employs a pruning function $\mathbb{P}(\cdot)$ to prune the latest global model $\theta_{q}$ and broadcasts the resulting local models to the clients:
\begin{eqnarray}
\theta_{q,n,0} = \theta_{q} \odot m_{q,n}, {\rm \ with \ } m_{q,n}=\mathbb{P}(\theta_{q}, n), \ \forall n.
\end{eqnarray}
Each client $n$ then trains its pruned local model by performing $T$ local updates, for $t=1,\ldots,T$:
\begin{eqnarray}
\theta_{q,n,t}=\theta_{q,n,t-1}-\gamma \nabla F_n(\theta_{q,n,t-1}, \xi_{n,t-1})\odot m_{q,n},
\end{eqnarray}
where $\gamma$ is the learning rate and $\xi_{n,t-1}$ are independent samples uniformly drawn from local data $D_n$. We note that $\nabla F_n(\theta_{q,n,t-1}, \xi_{n,t-1})\odot m_{q,n}$ is a local stochastic gradient evaluated using only the local parameters in $\theta_{q,n,t-1}$ (due to pruning) and that only locally trainable parameters are updated by the stochastic gradient (due to the element-wise product with mask $m_{q,n}$).
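To make the update concrete, the following minimal Python/NumPy sketch implements the masked local update above; \texttt{grad\_fn} and \texttt{sample\_fn} are illustrative placeholders for the local stochastic gradient oracle and data sampler, not part of the paper's implementation.
\begin{verbatim}
# Minimal sketch of the masked local update; grad_fn and sample_fn
# are illustrative placeholders, not the paper's released code.
import numpy as np

def local_update(theta_global, mask, grad_fn, sample_fn, gamma=0.01, T=5):
    theta = theta_global * mask        # pruning: theta_{q,n,0}
    for t in range(T):
        xi = sample_fn()               # independent sample from local data D_n
        g = grad_fn(theta, xi) * mask  # only unpruned coordinates are updated
        theta = theta - gamma * g      # masked SGD step
    return theta
\end{verbatim}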
Finally, the central server aggregates the local models $\theta_{q,n,T}$, $\forall n$, and produces an updated global model $\theta_{q+1}$. Due to the use of arbitrary pruning masks in this paper, different global parameters are broadcast to and updated at different subsets of clients. To this end, we partition the global model $\theta_q$ into $i=1,\ldots,K$ disjoint regions, such that parameters of region $i$, denoted by $\theta_{q}^{(i)}$, are included and {\em only} included by the same subset of local models.
Let $\mathcal{N}_q^{(i)}$ be the set of clients\footnote{Clearly $\mathcal{N}_q^{(i)}$ is determined by the pruning mask $m_{q,n}$ since we have $m_{q,n}^{(i)} = \mathbf{1}$ for $n\in \mathcal{N}_q^{(i)}$ and $m_{q,n}^{(i)} = \mathbf{0}$ otherwise.}, whose local models contain parameters of region $i$ in round $q$. The global model update of region $i$ is performed by aggregating local models at clients $n\in \mathcal{N}_q^{(i)}$, i.e.,
\begin{eqnarray}
\theta_{q+1}^{(i)} = \frac{1}{|\mathcal{N}_q^{(i)}|} \sum_{n\in \mathcal{N}_q^{(i)} } \theta_{q,n,T}^{(i)}, \ \forall i.
\end{eqnarray}
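As a sanity check, the region-wise average above is equivalent to averaging each coordinate over the clients whose masks cover it, since a region is precisely a maximal set of coordinates covered by the same subset of clients. A minimal sketch, with illustrative variable names:
\begin{verbatim}
# Minimal sketch of the aggregation step: average each coordinate over
# the clients whose mask covers it (equivalent to region-wise averaging).
import numpy as np

def aggregate(local_thetas, masks):
    thetas = np.stack(local_thetas)            # shape (N, d)
    M = np.stack(masks).astype(float)          # 0/1 masks, shape (N, d)
    counts = M.sum(axis=0)                     # coverage of each coordinate
    assert counts.min() >= 1, "every parameter must be covered at least once"
    return (thetas * M).sum(axis=0) / counts   # per-coordinate average
\end{verbatim}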
We summarize the algorithm in Algorithm 1.
{\bf Remark 1.} We hasten to note that our framework captures heterogeneous FL with arbitrary adaptive online pruning strategies, and so does our convergence analysis. It recovers many recently proposed FL algorithms as special cases with particular masks, including HeteroFL \cite{diao2021heterofl}, which uses fixed masks $m_{n}$ over time, PruneFL \cite{jiang2020model}, which periodically trains a full-size local model $m_{q,n}={\bf 1}$ for some $n,q$, Prune-and-Grow \cite{ma2021effective}, which can be viewed as a single-client algorithm without parameter aggregation, as well as FedAvg \cite{fedavg}, which employs full-size local models at all clients. Our unifying framework provides solid support for incorporating arbitrary model pruning strategies (such as weight or neuron pruning, CNN pruning, and sparsification) into heterogeneous FL algorithms. Our analysis establishes general conditions under which {\em any} heterogeneous FL with arbitrary adaptive online pruning converges to standard FL.
\begin{algorithm}[h]
\SetKwInput{KwData}{Input}
\caption{Our unifying framework.}\label{alg:cap}
\KwData {Local data $D_{n}$ on $N$ clients, pruning policy $\mathbb{P}$.
}
\SetKwFunction{FMain}{Local Update}
\SetKwProg{Fn}{Function}{:}{}
\textbf{Executes:} \\
Initialize $\theta_0$\\
\For{round $q=1,2,\ldots, Q$ }{
\For{local workers $n=1,2,\ldots, N$ (In parallel)} {
{\rm Generate mask} $m_{q,n} = \mathbb{P}(\theta_q, n) $ \\
{\rm Prune} $\theta_{q,n,0} = \theta_q \odot m_{q,n} $ \\
{\rm $/ /$ } Update local models: \\
\For{epoch $t=1,2,\ldots, T$ }
{ {\rm } $\theta_{q,n,t}=\theta_{q,n,t-1}-\gamma \nabla F_n(\theta_{q,n,t-1}, \xi_{n,t-1})\odot m_{q,n} $
}
}
{\rm $/ /$ } Update global model: \\
\For{region $i=1,2,\ldots, K$ }{
{\rm Find} $\mathcal{N}_q^{(i)}=\{ n: m_{q,n}^{(i)} = \mathbf{1} \} $ \\
{\rm Update} $\theta_{q+1}^{(i)} = \frac{1}{|\mathcal{N}_q^{(i)}|} \sum_{n\in \mathcal{N}_q^{(i)} } \theta_{q,n,T}^{(i)}$
}}
Output $\theta_{Q+1}$
\end{algorithm}
\subsection{Notations and Assumptions}
We make the following assumptions on $F_{1}, \dots, F_{N}$. Assumption~1 is standard. Assumption~2 follows \cite{ma2021effective} and implies that the noise introduced by pruning is relatively small and bounded.
Assumptions~3 and~4 are standard for FL convergence analysis, following \cite{zhang2013communication, stich2018local, yu2019parallel, convergenceNoniid}, and assume the stochastic gradients to be bounded and unbiased.
\begin{assumption} (Smoothness).
Cost functions $F_{1}, \dots ,F_{N}$ are all L-smooth: $\forall \theta,\phi\in \mathcal{R}^d$ and any $n$, we assume that there exists $L>0$:
\begin{eqnarray}
\| \nabla F_n(\theta) - \nabla F_n(\phi) \| \le L \|\theta-\phi\|.
\end{eqnarray}
\end{assumption}
\begin{assumption}(Pruning-induced Noise).
We assume that for some $\delta^{2} \in[0,1)$ and any $q,n$, the pruning-induced error is bounded by
\begin{eqnarray}
\left\|\theta_{q}-\theta_{q} \odot m_{q,n}\right\|^{2} \leq \delta^{2}\left\|\theta_{q}\right\|^{2}.
\end{eqnarray}
\end{assumption}
\begin{assumption} (Bounded Gradient).
The expected squared norm of stochastic gradients is bounded uniformly, i.e., for constant $G>0$ and any $n,q,t$:
\begin{eqnarray}
\mathbb{E} \left\| \nabla F_{n}(\theta_{q,n,t},\xi_{q,n,t})\right\|^{2} \leq G.
\end{eqnarray}
\end{assumption}
\begin{assumption} (Gradient Noise for IID data).
Under IID data distribution, for any $q,n,t$, we assume that
\begin{eqnarray}
& \mathbb{E}[\nabla F_n(\theta_{q,n,t}, \xi_{n,t})] = \nabla F(\theta_{q,n,t}) \\
& \mathbb{E}\| \nabla F_n(\theta_{q,n,t}, \xi_{n,t}) - \nabla F(\theta_{q,n,t}) \|^2 \le \sigma^2
\end{eqnarray}
for constant $\sigma^2>0$ and independent samples $\xi_{n,t}$.
\end{assumption}
\subsection{Convergence Analysis}
We now analyze heterogeneous FL under arbitrary adaptive online pruning. To the best of our knowledge, this is the first proof of general convergence for this family of algorithms to a stationary point of standard FL (in Section~2.1) with smooth cost functions. We will first show convergence for IID data distributions and then, by replacing Assumption~4 with a similar Assumption~5, show convergence for non-IID data distributions. We define an important quantity:
\begin{eqnarray}
\Gamma_{min}=\min_{q,i} |\mathcal{N}_q^{(i)}|,
\end{eqnarray}
referred to in this paper as the minimum covering index. Since $|\mathcal{N}_q^{(i)}|$ is the number of local models containing parameters of region $i$, $\Gamma_{min}$ measures the minimum occurrence of any parameter in the local models. Intuitively, if a parameter is never included in any local model, it is impossible for it to be updated; conditions based on the covering index are therefore necessary for convergence to standard FL. Our analysis establishes sufficient conditions for convergence. All proofs are collected in the Appendix.
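For concreteness, the minimum covering index can be computed directly from the pruning masks; a short sketch, where \texttt{masks[q][n]} is assumed to hold the 0/1 mask of client $n$ in round $q$:
\begin{verbatim}
# Sketch: compute the minimum covering index from all rounds and clients.
import numpy as np

def min_covering_index(masks):
    gamma_min = None
    for round_masks in masks:                       # one list of masks per round
        coverage = np.stack(round_masks).sum(axis=0)  # per-parameter coverage
        g = int(coverage.min())
        gamma_min = g if gamma_min is None else min(gamma_min, g)
    return gamma_min
\end{verbatim}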
\begin{theorem}
Under Assumptions~1-4 and for arbitrary pruning satisfying $\Gamma_{min}\ge 1$, heterogeneous FL with adaptive online pruning converges as follows:
\begin{eqnarray}
\frac{1}{Q} \sum_{q=1}^Q \mathbb{E} \|\nabla F(\theta_q)\|^2 \le \frac{G_0}{\sqrt{TQ}} + \frac{V_0}{Q} + \frac{I_0}{\Gamma_{min}} \cdot \frac{\delta^2}{Q}\sum_{q=1}^Q \mathbb{E} \| \theta_q \|^2 \nonumber
\end{eqnarray}
where $V_0=3L^2NG/\Gamma_{min}$, $I_0=3L^2 N$, and $G_0=4\mathbb{E} [ F(\theta_{0})]+6LN \sigma^2/\Gamma_{min}^2$ are constants depending on the initial model parameters and the gradient noise.
\end{theorem}
\textbf{Remark 2.} Theorem~1 shows convergence to a stationary point of standard FL as long as $\Gamma_{min}\ge 1$ (albeit with some pruning-induced noise). The result is somewhat surprising, since $\Gamma_{min}\ge 1$ only requires every parameter to be included in at least one local model (which is necessary for all parameters to be updated during training), yet we show that this is a sufficient condition for convergence to standard FL. Moreover, we also establish a convergence rate of $O(\frac{1}{\sqrt{Q}})$ for arbitrary pruning strategies satisfying the condition.
\textbf{Remark 3.} Impact of pruning-induced noise. In Assumption~2, we assume the pruning-induced noise is relatively small and bounded with respect to the global model: $\left\|\theta_{q}-\theta_{q} \odot m_{q,n}\right\|^{2} \leq \delta^{2}\left\|\theta_{q}\right\|^{2}$. This is satisfied in practice since most pruning strategies tend to eliminate insignificant weights/neurons, thereby keeping $\delta^2$ small.
We note that pruning incurs an error term $\delta^2\frac{1}{Q}\sum_{q=1}^Q \mathbb{E} \| \theta_q \|^2 $ in our convergence analysis, which is proportional to $\delta^2$ and the average model norm (averaged over $Q$ rounds). It implies that more aggressive pruning in heterogeneous FL may lead to a larger error, deviating from standard FL at a speed quantified by $\delta^{2}$. We note that this error is affected by both $\delta^{2}$ and $\Gamma_{min}$.
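The quantity $\delta^2$ in Assumption~2 can also be measured empirically for any model/mask pair; a minimal sketch:
\begin{verbatim}
# Sketch: empirical pruning-induced noise, i.e., the ratio
# ||theta - theta*m||^2 / ||theta||^2 for a given mask (illustrative).
import numpy as np

def pruning_noise(theta, mask):
    num = np.sum((theta - theta * mask) ** 2)
    den = np.sum(theta ** 2)
    return num / den if den > 0 else 0.0
\end{verbatim}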
\textbf{Remark 4.} Impact of minimum covering index $\Gamma_{min}$. It turns out that the minimum number of occurrences of any parameter in the local models is a key factor for convergence. As $\Gamma_{min}$ increases, the constants $G_0,V_0$ and the convergence error are inversely proportional to $\Gamma_{min}$. This may seem counter-intuitive, since pruning is premised on certain parameters being small enough to ignore. However, recall that our analysis shows convergence of all parameters in $\theta_q$ to a stationary point of standard FL (rather than of a subset of parameters or to a random point). The more often a parameter is covered by local models, the sooner it gets updated and converges to the desired target. This is quantified in our analysis by showing that the error term due to pruning noise decreases inversely with $\Gamma_{min}$.
\textbf{Remark 5.} When the cost function is strongly convex (e.g., for softmax classifiers, logistic regression, and linear regression with $\ell_2$-regularization), a stationary point becomes the global optimum. Thus, Theorem~1 shows convergence to the global optimum of standard FL for strongly convex cost functions.
\textbf{Remark 6.} Theorem~1 inspires new designs of adaptive online pruning for heterogeneous FL. Since the convergence gap is affected by both the pruning-induced noise $\delta^2$ and the minimum covering index $\Gamma_{min}$, we may want to design pruning masks that preserve the largest parameters while sufficiently covering all parameters across different local models. The example shown in Figure~1 illustrates three alternative pruning strategies for $N=10$ clients. It can be seen that to achieve the best performance in heterogeneous FL, pruning masks need to be optimized to mitigate the noise $\delta^2$ and achieve a high covering index. Due to space limitations, optimal pruning mask design with respect to clients' resource constraints will be considered in future work. We present numerical examples with different pruning mask designs (with improved performance for low $\delta^2$ and high $\Gamma_{min}$) in Section~5 to support this observation.
\begin{figure}[hb]
\includegraphics[width=0.99\textwidth]{pics/Gamma_MIN3.pdf}
\caption{Illustration of our method. (a): Existing methods utilizing pruning always discard the parameters below the threshold. (b,c): Our method utilizes different partitions to achieve a higher $\Gamma_{min}$. Note that FL under these three settings has nearly identical communication and computation costs.}
\end{figure}
When the data distribution is non-IID, we need a stronger assumption to ensure that stochastic gradients computed on a subset of clients' datasets still provide an unbiased estimate for each parameter region. To this end, we replace Assumption~4 by a similar Assumption~5 for the non-IID case.
\begin{assumption} (Gradient Noise for non-IID data).
Under non-IID data distribution, we assume that for constant $\sigma^2>0$ and any $q,n,t$:
\begin{eqnarray}
\mathbb{E}[\frac{1}{|\mathcal{N}_q^{(i)}|} \sum_{n\in \mathcal{N}_q^{(i)} } \nabla F_n^{(i)}(\theta_{q,n,t}, \xi_{n,t})] = \nabla F^{(i)}(\theta_{q,n,t}) \nonumber
\end{eqnarray}
\vspace{-0.3in}
\begin{eqnarray}
\mathbb{E}\left\| \frac{1}{|\mathcal{N}_q^{(i)}|} \sum_{n\in \mathcal{N}_q^{(i)} } \nabla F_n^{(i)}(\theta_{q,n,t}, \xi_{n,t}) -\nabla F^{(i)}(\theta_{q,n,t}) \right\|^2 \le \sigma^2 \nonumber.
\end{eqnarray}
\end{assumption}
\begin{theorem}
Under Assumptions 1-3 and 5, heterogeneous FL with an arbitrary adaptive online pruning strategy satisfying $\Gamma_{min}\ge 1$ converges as follows:
\begin{eqnarray}
\frac{1}{Q} \sum_{q=1}^Q \mathbb{E} \|\nabla F(\theta_q)\|^2 \le \frac{H_0}{\sqrt{TQ}} + \frac{U_0}{{Q}} + \frac{I_0}{\Gamma_{min}} \cdot \frac{\delta^2}{Q}\sum_{q=1}^Q \mathbb{E} \| \theta_q \|^2 \nonumber
\end{eqnarray}
where $H_0=4\mathbb{E} [ F(\theta_{0})]+6LK\sigma^2$, $U_0=3L^2NG$, and $I_0$ is the same constant as before.
\end{theorem}
\textbf{Remark 7.} With Assumption~5, the convergence under non-IID data distributions is very similar to that in Theorem~1, except for the different constants $H_0=4\mathbb{E} [ F(\theta_{0})]+6LK\sigma^2$ and $U_0=3L^2NG$. Thus, most remarks made for Theorem~1, including those on convergence speed, pruning-induced noise, and pruning mask design, still apply. We notice that $\Gamma_{min}$ no longer appears in the gradient-noise constants. This is because the stochastic gradients computed by different clients in $\mathcal{N}_q^{(i)}$ are now based on different datasets and jointly provide an unbiased estimate, no longer resulting in smaller statistical noise.
\textbf{Remark 8.} We note that Assumption~5 can be satisfied in practice by jointly designing pruning masks and data partitions among the clients.
For example, for $N=4$ clients with local data $(D_1, D_2+D_3, D_1+D_2, D_3)$ respectively, we can design 4 pruning masks like $m_{q,1}=[1, 1 ,0]$, $m_{q,2}=[1, 1, 0]$, $m_{q,3}=[0, 1, 1]$, $m_{q,4}=[0, 1, 1]$ or $m_{q,1}=[1, 0 ,1]$, $m_{q,2}=[1, 0, 1]$, $m_{q,3}=[1, 1, 1]$, $m_{q,4}=[1, 1, 1]$. It is easy to show that these satisfy the gradient noise assumption for non-IID data distribution. Due to space limitations, optimal pruning mask design based on data partitioning will be considered in future work. Nevertheless, we present numerical examples under non-IID data distribution with different pruning mask designs in Section~5. When the conditions in Theorem~2 are satisfied, we observe convergence and significant improvement in performance.
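The two example designs can be checked mechanically; a small sketch verifying per-region coverage for the $N=4$, three-region toy setting above:
\begin{verbatim}
# Sketch: per-region coverage for the two Remark 8 mask designs.
import numpy as np

design_a = np.array([[1, 1, 0], [1, 1, 0], [0, 1, 1], [0, 1, 1]])
design_b = np.array([[1, 0, 1], [1, 0, 1], [1, 1, 1], [1, 1, 1]])

for name, masks in (("A", design_a), ("B", design_b)):
    coverage = masks.sum(axis=0)   # number of clients covering each region
    print(name, "coverage per region:", coverage, "min:", coverage.min())
\end{verbatim}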
\section{Experiments}
\subsection{Experiment settings}
In this section we evaluate different pruning techniques from state-of-the-art designs and verify our proposed theory under a unifying pruning framework using two datasets. Unless stated otherwise, the accuracy reported is defined as $\frac{1}{n} \sum_{i} p_{i}\sum_{j}\text{Acc}(f_{i}(x_{j}^{(i)},\theta_{i}\odot m_{i}),y_{j}^{(i)})$, averaged over three random seeds with the same randomly initialized starting point $\theta_{0}$.
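In code, this metric amounts to a weighted average of per-client accuracies under each client's pruned model; a sketch with illustrative names:
\begin{verbatim}
# Sketch of the reported metric: weighted average over clients of the
# accuracy of each client's pruned model (names are illustrative).
import numpy as np

def federated_accuracy(clients):
    # clients: list of dicts with weight "p", a "predict" fn, and "test_set"
    total = 0.0
    for c in clients:
        acc = np.mean([float(c["predict"](x) == y) for x, y in c["test_set"]])
        total += c["p"] * acc
    return total
\end{verbatim}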
We focus on three points in our experiments: (i) the general convergence of federated learning with heterogeneous models by pruning; (ii) the impact of the minimum coverage index $\Gamma_{min}$; and (iii) the impact of the pruning-induced noise $\delta$.
The experimental results provide a comprehensive comparison among several pruning techniques, modified by our new design, to verify the correctness of our theory.
We examine theoretical results on the following two common image classification datasets: MNIST \cite{minist} and CIFAR10 \cite{krizhevsky2009learning}, among $N=100$ workers with IID and non-IID data and participation ratio $c = 0.1$. For IID data, we follow the design of balanced MNIST by \cite{convergenceNoniid}, and similarly obtain a balanced CIFAR10. For non-IID data, we obtain a balanced partition with skewed label distribution, where the samples on each device are drawn from at most two out of the ten possible classes.
\subsection{Baselines and Test Case Notations}
To empirically verify the correctness of our theory, we pick FedAvg, which can be considered federated learning with full local models, and 4 other pruning techniques\footnote[1]{FullNets can be considered as FedAvg \cite{fedavg} without any pruning. For notational simplicity, we use ``WP'' for weights pruning as used in PruneFL \cite{jiang2020model}, ``NP'' for neuron pruning as used in \cite{shao2019privacy}, ``FS'' for fixed sub-network as used in HeteroFL \cite{diao2021heterofl}, and ``PT'' for pruning with a pre-trained mask as used in \cite{frankle2019lottery}.} from state-of-the-art federated learning designs with heterogeneous models as baselines. Let $P_{m}=\frac{\|m\|_{0}}{|\theta|}$ be the sparsity of mask $m$, e.g., $P_{m}=75\%$ for a model when 25\% of its weights are pruned. Due to page limits, we show selected combinations over 3 pruning levels named \textit{L} (Large), \textit{M} (Medium), and \textit{S} (Small): \textit{L}: 60\% of workers with the full model and 40\% with a 75\%-pruned model; \textit{M}: 40\% of workers with the full model and 60\% with a 75\%-pruned model; \textit{S}: 10\% of workers with the full model, 30\% with a 75\%-pruned model, and 60\% with a 50\%-pruned model.
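For reference, the three cases map to per-worker mask sparsities as in the following sketch (taking ``75\%-pruned'' to mean $P_m=75\%$ in the convention above, which is our reading of the text; the assignment order is arbitrary):
\begin{verbatim}
# Sketch: per-worker mask sparsity P_m for the three cases L/M/S,
# taking "75% pruned" to mean P_m = 75% (an assumption).
def case_sparsities(case, N=100):
    if case == "L":
        return [1.00] * (60 * N // 100) + [0.75] * (40 * N // 100)
    if case == "M":
        return [1.00] * (40 * N // 100) + [0.75] * (60 * N // 100)
    if case == "S":
        return [1.00] * (10 * N // 100) + [0.75] * (30 * N // 100) \
             + [0.50] * (60 * N // 100)
    raise ValueError(case)
\end{verbatim}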
In each round, 10 devices are randomly selected to run $E$ steps of SGD. We evaluate the averaged model after each global aggregation on the corresponding global objective and show the global loss in Figure 2. We present the key model characteristics as well as the model accuracy after $Q$ rounds of training on MNIST (IID and non-IID) and CIFAR10 in Tables 1 and 2. FLOPs and Space stand for the amortized FLOPs of one local step and the memory needed to store the parameters of one model, with their ratios representing the corresponding computation and communication savings compared with FedAvg, which uses a full-size model.
In the experiments, NP, FS, WP, and PT use the same architecture, but the latter two are trained with sparse parameters, while FS and NP are trained on actually reduced network sizes.
To better exemplify and examine the results, we run all experiments on small model architectures: an MLP with a single hidden layer for MNIST and a LeNet-5-like network with 2 convolutional layers for CIFAR10. As some large DNN models have been shown to maintain their performance under a reasonable level of pruning, we use smaller networks to avoid the potential influence of very large networks, as well as of other tricks and model characteristics of each framework. More details regarding the models and experiment design can be found in Appendix 2.
More results, including other possible combinations, pruning details with analysis, and other experiment details, can be found in Appendix 3.
\input{0Figure1}
\clearpage
\input{0tables}
\subsection{Impact of minimum coverage index}
Our theory suggests that, for a given pruning level, the global loss bound depends on the minimum coverage index $\Gamma_{min}$ through a hyperbolic (i.e., $O(\frac{1}{\Gamma_{min}})$) term, as Theorem~1 indicates. Thus, for a given pruning level, a higher minimum coverage index may lower the convergence error, since $\frac{1}{Q} \sum_{q=1}^Q \mathbb{E} \|\nabla F(\theta_q)\|^2$ contains a term of order $O(\frac{1}{\Gamma_{min}})$, which could potentially lead to better performance. Note that existing pruning techniques, and federated learning with heterogeneous models by pruning, always discard the partition in which the parameters are smaller than a certain threshold determined by the pruning policy and pruning level.
To illustrate the importance of the minimum coverage index, suppose the parameters of a model are argsorted according to a pruning policy $\mathbb{P}$ and partitioned into four sets, from the highest 25\% to the lowest 25\%: $\mathbb{P}_{1} (\theta)= \{\textsl{S}_{1},\textsl{S}_{2},\textsl{S}_{3},\textsl{S}_{4}\}$. A mask generated by an existing pruning technique for a 75\%-sparsity model is then defined by $m_{i} = 1 \ \text{if} \ \theta_{i}\in \textsl{S}_{1}\cup \textsl{S}_{2}\cup \textsl{S}_{3}$ and $m_{i} = 0$ otherwise. It is easy to see that $\Gamma_{min}$ is then directly determined by the number of models with the highest pruning level, e.g., $\Gamma_{min} = 4$ for experiment case 2: 40\% of workers with full models and 60\% of workers with 75\%-pruned models.
To verify the impact of the minimum coverage index, we propose a way to increase it without changing the network architecture or introducing extra computation overhead: we simply increase the usage of parameters below the pruning threshold by letting some local models train with otherwise-pruned partitions.
As an example, shown in Figure 1, consider the model with code name *WP-M1, in which 2 out of 6 pruned models use the regular 75\% weights pruning mask, while the other 4 are split into two pairs: one pair uses $\mathbb{P}_{2} (\theta)= \{\textsl{S}_{1},\textsl{S}_{3},\textsl{S}_{4}\}$ with $m_{i} = 1 \ \text{if} \ \theta_{i}\in \textsl{S}_{1}\cup \textsl{S}_{3}\cup \textsl{S}_{4}$ and $m_{i} = 0$ otherwise, and the other pair uses $\mathbb{P}_{3} (\theta)= \{\textsl{S}_{1},\textsl{S}_{2},\textsl{S}_{4}\}$ with $m_{i} = 1 \ \text{if} \ \theta_{i}\in \textsl{S}_{1}\cup \textsl{S}_{2}\cup \textsl{S}_{4}$ and $m_{i} = 0$ otherwise, so that $\Gamma_{min} = 8$ is achieved. We denote such designs, which maximize the minimum coverage index on top of current pruning techniques, by STAR(*) in the results; a code sketch follows below.
Detailed case settings and the corresponding pruning techniques are given in Appendix 2.
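A minimal sketch of this STAR(*) construction, sorting parameters into quartiles $S_1,\ldots,S_4$ by magnitude (the scoring rule is illustrative) and assigning different quartile unions to different 75\%-sparsity clients:
\begin{verbatim}
# Sketch: quartile-based STAR(*) masks. Regular WP keeps S1+S2+S3; the two
# STAR variants keep S1+S3+S4 and S1+S2+S4 to raise the coverage index.
import numpy as np

def star_masks(theta):
    order = np.argsort(-np.abs(theta))   # descending magnitude (assumed score)
    q = len(theta) // 4                  # quartile size; remainder ignored
    S = [order[i * q:(i + 1) * q] for i in range(4)]
    def mask_from(parts):
        m = np.zeros(len(theta))
        m[np.concatenate([S[p] for p in parts])] = 1.0
        return m
    return mask_from([0, 1, 2]), mask_from([0, 2, 3]), mask_from([0, 1, 3])
\end{verbatim}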
As shown in Figure 2(c), under identical model settings with the same pruning level, pruning techniques with different minimum coverage indices show different convergence behavior; specifically, the design with the higher minimum coverage index reaches a solution with lower loss within the training round limit. Tables 1 and 2 also show that such designs reach higher accuracy on both IID and non-IID data, with more significant improvements for non-IID data.
There are even cases where settings under our design with lower communication and computation costs perform better than a regular design with higher costs, e.g., ``*WP-M1'' over ``WP-L1'' on both IID and non-IID data. More examples and results can be found in Appendix 3.
\subsection{Impact of pruning-induced noise}
As suggested by our theory, another key factor affecting convergence is the pruning-induced noise $\delta$. When a client model is treated with pruning or a sparse mask, the pruning-induced noise $\delta$ inevitably affects convergence and model accuracy. Given the same minimum coverage index, a smaller pruning-induced noise will generally lead to a lower convergence error and potentially a more accurate model.
To isolate this phenomenon, we focus on the Fixed Sub-network method, which does not involve adaptive changes of the mask, and test higher pruning levels as shown in Figure 2(d), which confirms this trend. As shown in Figure 2(b), all selected pruning methods are affected by changes of the pruning level. In Figure 2(f), model WP2-1 trains with a relatively steady trend, while model WP3-1 becomes unsteady, which could be due to the pruning mask changing before local convergence. This suggests that at high pruning levels, without a carefully designed pruning technique, using a fixed sub-network may yield a more robust and accurate model.
\subsection{More Discussions and Empirical Findings}
Besides verifying our theory, we have noticed several additional phenomena, some of which confirm previous research while others may require further investigation for theoretical support.
In Figure 2(a), under a similar pruning level, PT converges much more slowly than the others with its pre-trained mask, in line with previous works suggesting that models with static sparse parameters converge to solutions with higher loss than models with dynamic sparse training. It also suggests that a lottery ticket, i.e., an optimal mask, is unlikely to be found within a limited number of training rounds, especially without a carefully designed algorithm.
Although a higher pruning level generally results in a higher training loss, different pruning techniques have different sensitivities to it. For example, Fixed Sub-network has relatively low sensitivity to high pruning levels; this could be because its static, contiguous mask avoids the situation where the pruning mask changes before local convergence, making it more stable across pruning levels.
Nevertheless, while in most cases our design of increasing the minimum coverage index improves model accuracy and reduces the global loss, pruning-induced noise is another key factor to keep in mind: at higher pruning levels, a design that merely focuses on increasing the minimum coverage index may not bring significant improvements.
Finally, in Appendix 4 we also show a synthetic special case in which the proposed necessary condition is not met: the local clients' masks do not jointly cover the whole model. In this situation the model does not learn a usable solution.
\section{Proof of Theorems 1 and 2}
\subsection{Problem summary and notations}
We summarize the algorithm in a way that presents the convergence analysis more easily. We use a superscript, as in $\theta^{(i)}$, $m_{q,n}^{(i)}$, and $\nabla{F}^{(i)}$, to denote the sub-vector of the parameters, mask, and gradient corresponding to region $i$. In each round $q$, the parameters in each region $i$ are contained in, and only in, a set of local models denoted by $\mathcal{N}_q^{(i)}$, implying that $m_{q,n}^{(i)} = \mathbf{1}$ for $n\in \mathcal{N}_q^{(i)}$ and $m_{q,n}^{(i)} = \mathbf{0}$ otherwise. We define $\Gamma^*=\min_{q,i} |\mathcal{N}_q^{(i)}|$ as the minimum coverage index, since it is the minimum number of local models that contain any parameter of $\theta_q$. With slight abuse of notation, we use $\nabla F_n(\theta)$ and $\nabla F_n(\theta,\xi)$ to denote the gradient and the stochastic gradient, respectively.
\begin{algorithm}[h]
\SetKwInput{KwData}{Input}
\caption{Heterogeneous FL with adaptive online model pruning}\label{alg:cap}
\KwData {Local data $D_{n}$ on $N$ local workers, learning rate $\gamma$, pruning policy $\mathbb{P}$, number of local epochs $T$, global model parameterized by $\theta$.
}
\SetKwFunction{FMain}{Local Update}
\SetKwProg{Fn}{Function}{:}{}
\textbf{Executes:} \\
Initialize $\theta_0$\\
\For{round $q=1,2,\ldots, Q$ }{
\For{local workers $n=1,2,\ldots, N$ (In parallel)} {
{\rm Generate mask} $m_{q,n} = \mathbb{P}(\theta_q, n) $ \\
{\rm Prune} $\theta_{q,n,0} = \theta_q \odot m_{q,n} $ \\
{\rm $/ /$ } Update local models: \\
\For{epoch $t=1,2,\ldots, T$ }
{ {\rm Update} $\theta_{q,n,t}=\theta_{q,n,t-1}-\gamma \nabla F_n(\theta_{q,n,t-1}, \xi_{n,t-1})\odot m_{q,n} $
}
}
{\rm $/ /$ } Update global model: \\
\For{region $i=1,2,\ldots, K$ }{
{\rm Find} $\mathcal{N}_q^{(i)}=\{ n: m_{q,n}^{(i)} = \mathbf{1} \} $ \\
{\rm Update} $\theta_{q+1}^{(i)} = \frac{1}{|\mathcal{N}_q^{(i)}|} \sum_{n\in \mathcal{N}_q^{(i)} } \theta_{q,n,T}^{(i)}$
}}
Output $\theta_{Q+1}$
\end{algorithm}
\subsection{Assumptions}
\begin{assumption} (Smoothness).
Cost functions $F_{1}, \dots ,F_{N}$ are all L-smooth: $\forall \theta,\phi\in \mathcal{R}^d$ and any $n$, we assume that there exists $L>0$:
\begin{eqnarray}
\| \nabla F_n(\theta) - \nabla F_n(\phi) \| \le L \|\theta-\phi\|.
\end{eqnarray}
\end{assumption}
\begin{assumption}(Pruning-induced Error).
We assume that for some $\delta^{2} \in[0,1)$ and any $q,n$, the pruning-induced error is bounded by
\begin{eqnarray}
\left\|\theta_{q}-\theta_{q} \odot m_{q,n}\right\|^{2} \leq \delta^{2}\left\|\theta_{q}\right\|^{2}.
\end{eqnarray}
\end{assumption}
\begin{assumption} (Bounded Gradient).
The expected squared norm of stochastic gradients is bounded uniformly, i.e., for constant $G>0$ and any $n,q,t$:
\begin{eqnarray}
\mathbb{E} \left\| \nabla F_{n}(\theta_{q,n,t},\xi_{q,n,t})\right\|^{2} \leq G.
\end{eqnarray}
\end{assumption}
\begin{assumption} (Gradient Noise for IID data).
Under IID data distribution, for any $q,n,t$, we assume that
\begin{eqnarray}
& \mathbb{E}[\nabla F_n(\theta_{q,n,t}, \xi_{n,t})] = \nabla F(\theta_{q,n,t}) \\
& \mathbb{E}\| \nabla F_n(\theta_{q,n,t}, \xi_{n,t}) - \nabla F(\theta_{q,n,t}) \|^2 \le \sigma^2
\end{eqnarray}
where $\sigma^2>0$ is a constant and $\xi_{n,t}$ are independent samples for different $n,t$.
\end{assumption}
\begin{assumption} (Gradient Noise for non-IID data).
Under non-IID data distribution, we assume that for constant $\sigma^2>0$ and any $q,n,t$:
\begin{eqnarray}
& \mathbb{E}\left[\frac{1}{|\mathcal{N}_q^{(i)}|} \sum_{n\in \mathcal{N}_q^{(i)} } \nabla F_n^{(i)}(\theta_{q,n,t}, \xi_{n,t})\right] = \nabla F^{(i)}(\theta_{q,n,t}) \\
& \mathbb{E}\left\| \frac{1}{|\mathcal{N}_q^{(i)}|} \sum_{n\in \mathcal{N}_q^{(i)} } \nabla F_n^{(i)}(\theta_{q,n,t}, \xi_{n,t}) -\nabla F^{(i)}(\theta_{q,n,t}) \right\|^2 \le \sigma^2.
\end{eqnarray}
\end{assumption}
\subsection{Convergence Analysis}
We now analyze the convergence of heterogeneous FL under adaptive online model pruning with respect to any pruning policy $\mathbb{P}(\theta_q,n)$ (and the resulting mask $m_{q,n}$) and prove the main theorems in this paper. We need to overcome a number of challenges as follows:
\begin{itemize}
\vspace{-0.05in}
\item We begin the proof by analyzing the change of the loss function in one round as the model goes from $\theta_q$ to $\theta_{q+1}$, i.e., $F(\theta_{q+1})-F(\theta_q)$. It includes three major steps: pruning to obtain heterogeneous local models $\theta_{q,n,0}=\theta_q\odot{m_{q,n}}$, training local models in a distributed fashion to update $\theta_{q,n,t}$, and parameter aggregation to update the global model $\theta_{q+1}$.
\vspace{-0.05in}
\item Due to the use of heterogeneous local models, whose masks $m_{q,n}$ both vary over rounds and differ across workers, we first characterize the difference between the local model $\theta_{q,n,t}$ at any epoch $t$ and the global model $\theta_{q}$ at the beginning of the current round. It is easy to see that this can be factorized into two parts: a pruning-induced error $\|\theta_{q,n,0}- \theta_{q}\|^2$ and a local training term $\|\theta_{q,n,t}- \theta_{q,n,0}\|^2$, which are analyzed in Lemma~1.
\vspace{-0.05in}
\item We characterize the impact of heterogeneous local models on the global parameter update. Specifically, we use an ideal local gradient $\nabla F_n(\theta_q)$ as a reference point and quantify the difference between the aggregated local gradients and this ideal gradient, presented in Lemma~2. We also quantify the norm difference between a gradient and a stochastic gradient (with respect to the global update step) using the gradient noise assumptions, in Lemma~3.
\vspace{-0.05in}
\item Since IID and non-IID data distributions in our model differ in the gradient noise assumption (i.e., Assumption~4 and Assumption~5), we present a unified proof for both cases. We will explicitly state IID and non-IID data distributions only if the two cases require different treatment (when the gradient noise assumptions are needed). Otherwise, the derivations and proofs are identical for both cases.
\end{itemize}
We will begin by proving a number of lemmas and then use them for convergence analysis.
\begin{lemma}
Under Assumption~2 and Assumption~3, for any $q$, we have:
\begin{eqnarray}
\sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \| \theta_{q,n,t-1} - \theta_{q} \|^2 \le \gamma^2 T^2NG+ \delta^2 NT \cdot \mathbb{E} \|\theta_{q} \|^2 .
\end{eqnarray}
\end{lemma}
\begin{proof}
We note that $\theta_{q}$ is the global model at the beginning of current round. We split the difference $\theta_{q,n,t-1} - \theta_{q}$ into two parts: changes due to local model training $\theta_{q,n,t-1} - \theta_{q,n,0}$ and changes due to pruning $\theta_{q,n,0} - \theta_{q}$. That is
\begin{eqnarray}
& & \sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \| \theta_{q,n,t-1} - \theta_{q} \|^2 \nonumber \\
& & \ \ \ \ \ = \sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \| \left( \theta_{q,n,t-1} - \theta_{q,n,0} \right) + \left( \theta_{q,n,0} - \theta_{q} \right) \|^2 \nonumber \\
& & \ \ \ \ \ \le \sum_{t=1}^T \sum_{n=1}^N 2\mathbb{E} \| \theta_{q,n,t-1} - \theta_{q,n,0} \|^2 + \sum_{t=1}^T \sum_{n=1}^N 2\mathbb{E} \| \theta_{q,n,0} - \theta_{q} \|^2 \label{l1_1}
\end{eqnarray}
where we used the fact that $\|\sum_{i=1}^s a_i \|^2\le s \sum_{i=1}^s \|a_i \|^2 $ in the last step.
For the first term in Eq.(\ref{l1_1}), we notice that $\theta_{q,n,t-1}$ is obtained from $\theta_{q,n,0}$ through $t-1$ epochs of local model updates on worker $n$. Using the local gradient updates from the algorithm, it is easy to see:
\begin{eqnarray}
& & \sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \| \theta_{q,n,t-1} - \theta_{q,n,0} \|^2 \nonumber \\
& & \ \ \ \ \ = \sum_{t=1}^T \sum_{n=1}^N \mathbb{E}\left\| \sum_{j=1}^{t-1} -\gamma \nabla F_n(\theta_{q,n,j-1}, \xi_{n,j-1})\odot m_{q,n} \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \sum_{t=1}^T \sum_{n=1}^N (t-1) \sum_{j=1}^{t-1} \mathbb{E}\left\| -\gamma \nabla F_n(\theta_{q,n,j-1}, \xi_{n,j-1})\odot m_{q,n} \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \sum_{t=1}^T \sum_{n=1}^N (t-1) \gamma^2 G \nonumber \\
& & \ \ \ \ \ \le \frac{\gamma^2 T^2 NG}{2}, \label{l1_2}
\end{eqnarray}
where we use the fact that $\|\sum_{i=1}^s a_i \|^2\le s \sum_{i=1}^s \|a_i \|^2 $ in the second step, and the fact that $m_{q,n}$ is a binary mask together with Assumption~3 (bounded gradient) in the third step.
For the second term in Eq.(\ref{l1_1}), the difference results from model pruning with mask $m_{q,n}$ of worker $n$ in round $q$. We have
\begin{eqnarray}
& \sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \| \theta_{q,n,0} - \theta_{q} \|^2 & = \sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \| \theta_{q}\odot m_{q,n} - \theta_{q} \|^2 \nonumber \\
& & \le \sum_{t=1}^T \sum_{n=1}^N \delta^2 \mathbb{E} \|\theta_{q} \|^2 \nonumber \\
& & = \delta^2 NT \cdot \mathbb{E} \|\theta_{q} \|^2, \label{l1_3}
\end{eqnarray}
where we used the fact that $\theta_{q,n,0}=\theta_{q}\odot m_{q,n}$ in the first step, and Assumption~2 in the second step.
Plugging Eq.(\ref{l1_2}) and Eq.(\ref{l1_3}) into Eq.(\ref{l1_1}), we obtain the desired result.
\end{proof}
\begin{lemma}
Under Assumptions~1-3, for any $q$, we have:
\begin{eqnarray}
& & \sum_{i=1}^K \mathbb{E} \left\| \frac{1}{\Gamma_q^{(i)}T} \sum_{t=1}^T \sum_{n\in\mathcal{N}_q^{(i)}} \left[ \nabla F_n^{(i)}({\theta_{q,n,t-1}})-\nabla F_n^{(i)} (\theta_q) \right] \right\|^2 \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \le \frac{L^2 \gamma^2 T NG}{\Gamma^*} + \frac{L^2\delta^2 N}{\Gamma^*} \mathbb{E} \|\theta_{q} \|^2
\label{l2_0}.
\end{eqnarray}
\end{lemma}
\begin{proof}
Recall that $\Gamma_q^{(i)}=|\mathcal{N}_q^{(i)}|$ is the number of local models containing parameters of region $i$ in round $q$. The left-hand-side of Eq.(\ref{l2_0}) denotes the difference between an average gradient of heterogeneous models (through aggregation and over time) and an ideal gradient. The summation over $i$ adds up such difference over all regions $i=1,\ldots,K$, because the average gradient takes a different form in different regions.
From the inequality $\|\sum_{i=1}^s a_i \|^2\le s \sum_{i=1}^s \|a_i \|^2 $, we obtain $\| \frac{1}{s} \sum_{i=1}^s a_i \|^2\le \frac{1}{s} \sum_{i=1}^s \|a_i \|^2 $. We use this inequality on the left-hand-side of Eq.(\ref{l2_0}) to get:
\begin{eqnarray}
& & \sum_{i=1}^K \mathbb{E} \left\| \frac{1}{\Gamma_q^{(i)}T} \sum_{t=1}^T \sum_{n\in\mathcal{N}_q^{(i)}} \left[ \nabla F_n^{(i)}({\theta_{q,n,t-1}})-\nabla F_n^{(i)} (\theta_q) \right] \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \sum_{i=1}^K \frac{1}{\Gamma_q^{(i)}T} \sum_{t=1}^T \sum_{n\in\mathcal{N}_q^{(i)}} \mathbb{E} \left\| \nabla F_n^{(i)}({\theta_{q,n,t-1}})-\nabla F_n^{(i)} (\theta_q) \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \frac{1}{T\Gamma^{*}} \sum_{t=1}^T \sum_{n=1}^N \sum_{i=1}^K \mathbb{E} \left\| \nabla F_n^{(i)}({\theta_{q,n,t-1}})-\nabla F_n^{(i)} (\theta_q) \right\|^2 \nonumber \\
& & \ \ \ \ \ = \frac{1}{T\Gamma^{*}} \sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \left\| \nabla F_n({\theta_{q,n,t-1}})-\nabla F_n (\theta_q) \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \frac{1}{T\Gamma^{*}} \sum_{t=1}^T \sum_{n=1}^N L^2 \mathbb{E} \left\| {\theta_{q,n,t-1}} - \theta_q \right\|^2 {\label{l2_2}},
\end{eqnarray}
where we relax the inequality by choosing the smallest $\Gamma^*=\min_{q,i} \Gamma_q^{(i)}$ and extending the summation over $n$ to all workers in the second step. In the third step, we use the fact that the squared $L_2$ norm of a vector equals the sum of the squared norms of all its sub-vectors (i.e., regions $i=1,\ldots,K$). This allows us to consider $\nabla F_n$ instead of its sub-vectors on different regions.
Finally, the last step is directly from L-smoothness in Assumption~1. Under Assumptions~2-3, we notice that the last step of Eq.(\ref{l2_2}) is further bounded by Lemma~1, which yields the desired result of this lemma after re-arranging the terms.
\end{proof}
\begin{lemma}
For IID data distribution under Assumptions~4, for any $q$, we have:
\begin{eqnarray}
\sum_{i=1}^K \mathbb{E} \left\| \frac{1}{\Gamma_q^{(i)}T} \sum_{t=1}^T \sum_{n\in\mathcal{N}_q^{(i)}} \left[ \nabla F_n^{(i)}({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F^{(i)} (\theta_{q,n,t-1}) \right] \right\|^2 \le \frac{N\sigma^2}{T({\Gamma^*})^2}
\nonumber \label{l3_0}.
\end{eqnarray}
For non-IID data distribution under Assumption~5, for any $q$, we have:
\begin{eqnarray}
\sum_{i=1}^K \mathbb{E} \left\| \frac{1}{\Gamma_q^{(i)}T} \sum_{t=1}^T \sum_{n\in\mathcal{N}_q^{(i)}} \left[ \nabla F_n^{(i)}({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F^{(i)} (\theta_{q,n,t-1}) \right] \right\|^2 \le \frac{K\sigma^2}{T}
\nonumber \label{l3_0}.
\end{eqnarray}
\end{lemma}
\begin{proof}
This lemma quantifies the square norm of the difference between gradient and stochastic gradient in the global parameter update. We present results for both IID and non-IID cases in this lemma under Assumption~4 and Assumption~5, respectively.
We first consider IID data distributions. Since all the samples $\xi_{n,t-1}$ are independent of each other for different $n$ and $t-1$, the differences between gradients and stochastic gradients, i.e., $\nabla F_n^{(i)}({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F_n^{(i)} (\theta_{q,n,t-1})$, are independent gradient noise terms. Due to Assumption~4, these noise terms have zero mean. Using the fact that $\mathbb{E}\|\sum_i \mathbf{x}_i\|^2 = \sum_i \mathbb{E} \|\mathbf{x}_i\|^2 $ for zero-mean and independent $\mathbf{x}_i$'s, we get:
\begin{eqnarray}
& & \sum_{i=1}^K \mathbb{E} \left\| \frac{1}{\Gamma_q^{(i)}T} \sum_{t=1}^T \sum_{n\in\mathcal{N}_q^{(i)}} \left[ \nabla F_n^{(i)}({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F_n^{(i)} (\theta_{q,n,t-1}) \right] \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \sum_{i=1}^K \frac{1}{(\Gamma_q^{(i)}T)^2} \sum_{t=1}^T \sum_{n\in\mathcal{N}_q^{(i)}} \mathbb{E} \left\| \nabla F_n^{(i)}({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F_n^{(i)} (\theta_{q,n,t-1}) \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \frac{1}{(T\Gamma^*)^2}\sum_{i=1}^K \sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \left\| \nabla F_n^{(i)}({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F_n^{(i)} (\theta_{q,n,t-1}) \right\|^2 \nonumber \\
& & \ \ \ \ \ = \frac{1}{(T\Gamma^*)^2} \sum_{t=1}^T \sum_{n=1}^N \mathbb{E} \left\| \nabla F_n({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F_n (\theta_{q,n,t-1}) \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \frac{1}{(T\Gamma^*)^2} \cdot TN\sigma^2
\end{eqnarray}
where we used the property of zero-mean, independent gradient noise in the first step, relaxed the inequality by choosing the smallest $\Gamma^*=\min_{q,i} \Gamma_q^{(i)}$ and extending the summation over $n$ to all workers in the second step, and in the third step used the fact that the squared $L_2$ norm of a vector equals the sum of the squared norms of all its sub-vectors (i.e., regions $i=1,\ldots,K$), which allows us to consider $\nabla F_n$ instead of its sub-vectors. Finally, we apply Assumption~4 to bound the gradient noise and obtain the desired result.
For non-IID data distributions under Assumption~5 (instead of Assumption~4), we notice that
$\mathbb{E}\left[\frac{1}{|\mathcal{N}_q^{(i)}|} \sum_{n\in \mathcal{N}_q^{(i)} } \nabla F_n^{(i)}(\theta_{q,n,t-1}, \xi_{n,t-1})\right] = \nabla F^{(i)}(\theta_{q,n,t-1})$ is an unbiased estimate for any epoch $t$, with bounded gradient noise. Again, due to independent samples $\xi_{n,t-1}$, we have:
\begin{eqnarray}
& & \sum_{i=1}^K \mathbb{E} \left\| \frac{1}{\Gamma_q^{(i)}T} \sum_{t=1}^T \sum_{n\in\mathcal{N}_q^{(i)}} \left[ \nabla F_n^{(i)}({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F^{(i)} (\theta_{q,n,t-1}) \right] \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \frac{1}{T^2} \sum_{i=1}^K \sum_{t=1}^T \mathbb{E} \left\| \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \nabla F_n^{(i)}({\theta_{q,n,t-1}},\xi_{n,t-1})-\nabla F^{(i)} (\theta_{q,n,t-1}) \right\|^2 \nonumber \\
& & \ \ \ \ \ \le \frac{1}{T^2} \sum_{i=1}^K \sum_{t=1}^T \sigma^2 \nonumber \\
& & \ \ \ \ \ = \frac{K\sigma^2}{T},
\end{eqnarray}
where we used the property of zero-mean, independent gradient noise in the first step, the fact that the norm of a sub-vector (in region $i$) is bounded by that of the entire vector in the second step, and Assumption~5. This completes the proof of this lemma.
\end{proof}
\vspace{0.07in}
\noindent {\bf Proof of the main result}. Now we are ready to present the main proof. We begin with the L-smoothness property in Assumption~1, which implies
\begin{eqnarray}
F(\theta_{q+1}) - F(\theta_q) \le \left< \nabla F(\theta_q) , \ \theta_{q+1} - \theta_{q} \right> + \frac{L}{2 }\left\| \theta_{q+1} - \theta_{q} \right\|^2.
\end{eqnarray}
We take expectations on both sides of the inequality and get:
\begin{eqnarray}
\mathbb{E} [ F(\theta_{q+1})] - \mathbb{E}[F(\theta_q)] \le \mathbb{E}\left< \nabla F(\theta_q) , \ \theta_{q+1} - \theta_{q} \right> + \frac{L}{2 } \mathbb{E}\left\| \theta_{q+1} - \theta_{q} \right\|^2. \label{mm_0}
\end{eqnarray}
In the following, we bound the two terms on the right-hand-side above and finally combine the results to complete the proof.
\vspace{0.07in}
\noindent {\bf Upperbound for $\mathbb{E}\left< \nabla F(\theta_q) , \ \theta_{q+1} - \theta_{q} \right>$}. We notice that the inner product can be broken down and reformulated as the sum of inner products over all regions $i=1,\ldots,K$. This is necessary because the global parameter update is different for different regions. More precisely, for any region $i$, we have:
\begin{eqnarray}
& \theta_{q+1}^{(i)} - \theta_{q}^{(i)} & = \left(\frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \theta_{q,n,T}^{(i)} \right)- \theta_{q}^{(i)} \nonumber \\
& & = \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \left[ \theta_{q,n,0}^{(i)} - \sum_{t=1}^T \gamma \nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1})\odot m_{q,n}^{(i)} \right] - \theta_{q}^{(i)} \nonumber \\
& & = - \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1})\odot m_{q,n}^{(i)} + \theta_{q}^{(i)}\odot m_{q,n}^{(i)} - \theta_{q}^{(i)} \nonumber \\
& & = - \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1}), \label{m_1}
\end{eqnarray}
where the global parameter update is used in the first step, the local parameter update in the second step, and the third step follows from the fact that any worker $n\in\mathcal{N}_q^{(i)}$ participating in the global update of $\theta^{(i)}_q$ contains the model parameters of region $i$, i.e., $m_{q,n}^{(i)}={\bf 1}$. We also use $ \theta_{q,n,0}^{(i)}=\theta_{q}^{(i)}\odot m_{q,n}^{(i)}$ in the third step due to pruning.
Next we analyze $\mathbb{E}\left< \nabla F(\theta_q) , \ \theta_{q+1} - \theta_{q} \right>$ by considering a sum of inner products over $K$ regions. We have
\begin{eqnarray}
& & \mathbb{E}\left< \nabla F(\theta_q) , \ \theta_{q+1} - \theta_{q} \right> \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ = \sum_{i=1}^K \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ \theta_{q+1}^{(i)} - \theta_{q}^{(i)} \right> \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ = \sum_{i=1}^K \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ - \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1}) \right> \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ = \sum_{i=1}^K \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ - \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \mathbb{E} \left[ \nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1}) | \theta_q \right]\right> \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ = \sum_{i=1}^K \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ - \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \nabla F_n^{(i)} (\theta_{q,n,t-1}) \right> \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ = - \sum_{i=1}^K \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ \gamma T\nabla F^{(i)}(\theta_q) \right> \label{m_2} \\
& & \ \ \ \ \ \ \ \ \ \ \ \ \ -\sum_{i=1}^K \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \left[ \nabla F_n^{(i)} (\theta_{q,n,t-1}) - \nabla F^{(i)}(\theta_q)\right] \right> \nonumber
\end{eqnarray}
where the first step reformulates the inner product as a sum over regions, the second step follows from Eq.(\ref{m_1}), the third step employs a conditional expectation over the random samples given $\theta_q$, and the last step splits the result into two parts with respect to a reference point $\gamma T \nabla F^{(i)}(\theta_q)$.
For the first term on the right-hand-side of Eq.(\ref{m_2}), it is easy to see that
\begin{eqnarray}
& - \sum_{i=1}^K \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ \gamma T \nabla F^{(i)}(\theta_q) \right> & = - \gamma T \sum_{i=1}^K \left\| \nabla F^{(i)}(\theta_q) \right\|^2 \nonumber \\
& & = - \gamma T \left\| \nabla F(\theta_q) \right\|^2, \label{m_3}
\end{eqnarray}
where we add up the norms over the $K$ regions in the last step. For the second term on the right-hand-side of Eq.(\ref{m_2}), we use the inequality $\langle a,b\rangle \le \frac{1}{2} \|a\|^2 + \frac{1}{2} \|b\|^2 $ for any vectors $a,b$. Applying this inequality to the second term, we have
\begin{eqnarray}
& & -\sum_{i=1}^K \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \left[ \nabla F_n^{(i)} (\theta_{q,n,t-1}) - \nabla F^{(i)}(\theta_q)\right] \right> \nonumber \\
& & \ \ \ \ \ = -\sum_{i=1}^K T\gamma \cdot \mathbb{E}\left< \nabla F^{(i)}(\theta_q) , \ \frac{1}{T\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \left[ \nabla F_n^{(i)} (\theta_{q,n,t-1}) - \nabla F^{(i)}(\theta_q)\right] \right> \nonumber \\
& & \ \ \ \ \ \le \frac{T\gamma}{2} \sum_{i=1}^K \mathbb{E} \left\| \nabla F^{(i)}(\theta_q) \right\|^2 + \frac{T\gamma}{2} \sum_{i=1}^K \mathbb{E} \left\| \frac{1}{T\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \left[ \nabla F_n^{(i)} (\theta_{q,n,t-1}) - \nabla F^{(i)}(\theta_q)\right] \right\|^2 \nonumber \\
& & \ \ \ \ \ = \frac{T\gamma}{2} \mathbb{E} \left\| \nabla F(\theta_q) \right\|^2 + \frac{T\gamma}{2} \left( \frac{L^2 \gamma^2 T NG}{\Gamma^*} + \frac{L^2\delta^2 N}{\Gamma^*} \mathbb{E} \|\theta_{q} \|^2\right) \label{m_4}
\end{eqnarray}
where the second step uses the inequality and the third step follows directly from Lemma~2. Plugging Eq.(\ref{m_3}) and Eq.(\ref{m_4}) results into Eq.(\ref{m_2}), we obtain the desired upperbound:
\begin{eqnarray}
\mathbb{E}\left< \nabla F(\theta_q) , \ \theta_{q+1} - \theta_{q} \right> \le - \frac{T\gamma}{2} \mathbb{E} \left\| \nabla F(\theta_q) \right\|^2 + \frac{T\gamma}{2} \left( \frac{L^2 \gamma^2 T NG}{\Gamma^*} + \frac{L^2\delta^2 N}{\Gamma^*} \mathbb{E} \|\theta_{q} \|^2\right). \label{mm_1}
\end{eqnarray}
\vspace{0.07in}
\noindent {\bf Upperbound for $\frac{L}{2 } \mathbb{E}\left\| \theta_{q+1} - \theta_{q} \right\|^2$}. We again use the result in Eq.(\ref{m_1}) and apply it to $\theta_{q+1} - \theta_{q}$, which gives:
\begin{eqnarray}
& & \frac{L}{2 } \mathbb{E}\left\| \theta_{q+1} - \theta_{q} \right\|^2 \nonumber \\
& & \ \ \ \ = \frac{L}{2 } \sum_{i=1}^K \mathbb{E}\left\| \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1})\right\|^2 \nonumber \\
& & \ \ \ \ \le \frac{3L}{2 } \sum_{i=1}^K \mathbb{E}\left\| \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \left[ \nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1}) - \nabla F_n^{(i)} (\theta_{q,n,t-1})\right]\right\|^2 \nonumber \\
& & \ \ \ \ \ \ \ \ + \frac{3L}{2} \sum_{i=1}^K \mathbb{E}\left\| \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \left[ \nabla F_n^{(i)} (\theta_{q,n,t-1}) - \nabla F_n^{(i)} (\theta_{q}) \right] \right\|^2 \nonumber \\
& & \ \ \ \ \ \ \ \ + \frac{3L}{2} \sum_{i=1}^K \mathbb{E}\left\| \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \nabla F_n^{(i)} (\theta_{q}) \right\|^2, \label{mm_2}
\end{eqnarray}
where in the second step we use the inequality $\|\sum_{i=1}^s a_i \|^2\le s \sum_{i=1}^s \|a_i \|^2 $ and split the stochastic gradient $\nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1})$ into $s=3$ parts, i.e., $[\nabla F_n^{(i)} (\theta_{q,n,t-1},\xi_{n,t-1}) - \nabla F_n^{(i)} (\theta_{q,n,t-1})]$, $[\nabla F_n^{(i)} (\theta_{q,n,t-1}) - \nabla F_n^{(i)} (\theta_{q})]$, and $[\nabla F_n^{(i)}(\theta_{q})]$.
Next, we notice that the third term on the right-hand-side of Eq.(\ref{mm_2}) can be simplified, because (i) for IID data distribution, the cost function of each worker $n$ is the same as the global cost function, i.e., $\nabla F_n(\theta_q) = \nabla F(\theta_q)$, and (ii) for non-IID data distribution, the gradient noise assumption (Assumption~5) implies that $\frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \nabla F_n^{(i)} (\theta_{q}) = \nabla F^{(i)}(\theta_{q})$. Thus in both cases, we have:
\begin{eqnarray}
& \frac{3L}{2} \sum_{i=1}^K \mathbb{E}\left\| \frac{1}{\Gamma_q^{(i)}} \sum_{n\in\mathcal{N}_q^{(i)}} \sum_{t=1}^T \gamma \nabla F_n^{(i)} (\theta_{q}) \right\|^2 & \le \frac{3LT^2\gamma^2}{2}\sum_{i=1}^K \mathbb{E}\| \nabla F^{(i)} (\theta_{q}) \|^2 \nonumber \\
& & = \frac{3LT^2\gamma^2}{2} \mathbb{E}\| \nabla F (\theta_{q}) \|^2 , \label{mm_3}
\end{eqnarray}
where we again used the sum of norm of $K$ regions in the last step.
Now we notice that the first and second terms of Eq.(\ref{mm_2}) have been bounded by Lemma~3 and Lemma~2, respectively, except for the constants $\gamma$ and ${1}/{T}$. Applying these results directly and plugging Eq.(\ref{mm_3}) into Eq.(\ref{mm_2}), we obtain the desired upperbound:
\begin{eqnarray}
& \frac{L}{2 } \mathbb{E}\left\| \theta_{q+1} - \theta_{q} \right\|^2 & \le \frac{3LTN\gamma^2\sigma^2}{2({\Gamma^*})^2} {\rm \ (for \ IID) \ or \ } \frac{3LTK\gamma^2\sigma^2}{2} {\rm \ (for \ non-IID)} \nonumber \\
& & \ \ \ \ + \frac{3L^3 \gamma^4 T^3 NG}{2\Gamma^*} + \frac{3L^3T^2\gamma^2\delta^2 N}{2\Gamma^*} \mathbb{E} \|\theta_{q} \|^2 \nonumber \\
& & \ \ \ \ + \frac{3LT^2\gamma^2}{2} \mathbb{E}\| \nabla F (\theta_{q}) \|^2. \label{mm_4}
\end{eqnarray}
\vspace{0.07in}
\noindent {\bf Combining the two Upperbounds}. Finally, we will apply the upperbound for $\mathbb{E}\left< \nabla F(\theta_q) , \ \theta_{q+1} - \theta_{q} \right>$ in Eq.(\ref{mm_1}) as well as the upperbound for $\frac{L}{2} \mathbb{E}\left\| \theta_{q+1} - \theta_{q} \right\|^2$ in Eq.(\ref{mm_4}), and plug them into Eq.(\ref{mm_0}). First we take the sum over $q=1,\ldots,Q$ on both sides of Eq.(\ref{mm_0}), which becomes:
\begin{eqnarray}
& & \mathbb{E} [ F(\theta_{Q+1})] - \mathbb{E} [ F(\theta_{0})] \nonumber \\
& & \ \ \ \ \ \ \ \ = \sum_{q=1}^Q \mathbb{E} [ F(\theta_{q+1})] - \sum_{q=1}^Q \mathbb{E}[F(\theta_q)] \nonumber \\
& & \ \ \ \ \ \ \ \ \le \sum_{q=1}^Q \mathbb{E}\left< \nabla F(\theta_q) , \ \theta_{q+1} - \theta_{q} \right> + \sum_{q=1}^Q \frac{L}{2 } \mathbb{E}\left\| \theta_{q+1} - \theta_{q} \right\|^2. \label{mm_5}
\end{eqnarray}
Now plugging in the two upperbounds and re-arranging the terms, for IID data distribution, we derive:
\begin{eqnarray}
& & \mathbb{E} [ F(\theta_{Q+1})] - \mathbb{E} [ F(\theta_{0})] \nonumber \\
& & \ \ \ \ \ \ \ \ \le - \frac{T\gamma}{2} \left( 1-3LT\gamma \right) \sum_{q=1}^Q \mathbb{E}\| \nabla F(\theta_q)\|^2 \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ \ \ + \frac{\gamma TQ}{2} \left( \frac{TL^2\gamma^2 NG}{\Gamma^*} +\frac{3LN\gamma \sigma^2}{(\Gamma^*)^2} + \frac{3L^3\gamma^3T^3NG}{\Gamma^*} \right) \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ \ \ +\frac{T\gamma}{2} \left( \frac{L^2\delta^2 N}{\Gamma^*} +\frac{3L^3T\gamma \delta^2N}{\Gamma^*} \right) \sum_{q=1}^Q \mathbb{E} \| \theta_q \|^2.
\end{eqnarray}
We choose learning rate $\gamma\le 1/(6LT)$ and use the fact that $\mathbb{E} [ F(\theta_{Q+1})]$ is non-negative. The inequality above becomes:
\begin{eqnarray}
& \frac{T\gamma}{4} \sum_{q=1}^Q \mathbb{E}\| \nabla F(\theta_q)\|^2 & \le \mathbb{E} [ F(\theta_{0})] + \frac{T\gamma Q}{2}\left( \frac{3LN\gamma \sigma^2}{(\Gamma^*)^2} + \frac{3L^2\gamma^2TNG}{2\Gamma^*}\right) \nonumber \\
& & \ \ \ \ + \frac{T\gamma}{2}\left( \frac{3L^2\delta^2 N}{2\Gamma^*} \right)\sum_{q=1}^Q \mathbb{E} \| \theta_q \|^2.
\end{eqnarray}
Multiplying both sides above by $4/(QT\gamma)$ and choosing $\gamma = 1/\sqrt{TQ}$, we have:
\begin{eqnarray}
& \frac{1}{Q} \sum_{q=1}^Q \mathbb{E}\| \nabla F(\theta_q)\|^2 & \le \frac{ 4\mathbb{E} [ F(\theta_{0})]}{\sqrt{TQ}} + \frac{6LN \sigma^2}{\sqrt{TQ}(\Gamma^*)^2} + \frac{3L^2NG}{Q\Gamma^*} \nonumber \\
& & \ \ \ \ + \frac{3L^2\delta^2 N}{\Gamma^*} \cdot \frac{1}{Q}\sum_{q=1}^Q \mathbb{E} \| \theta_q \|^2 \nonumber \\
& & = \frac{G_0}{\sqrt{TQ}} + \frac{V_0}{Q} + \frac{I_0}{\Gamma^*} \cdot \frac{\delta^2}{Q}\sum_{q=1}^Q \mathbb{E} \| \theta_q \|^2,
\end{eqnarray}
where we introduce the constants $G_0=4\mathbb{E} [ F(\theta_{0})]+6LN \sigma^2/(\Gamma^*)^2$, $V_0=3L^2NG/\Gamma^*$, and $I_0=3L^2 N$. This completes the proof of Theorem~1.
Finally, for non-IID data distribution, we plug the two upperbounds into Eq.(\ref{mm_5}) and re-arrange the terms. We follow a similar procedure and choose learning rate $\gamma=1/\sqrt{TQ}$ and $\gamma \le 1/(6LT)$. It is straightforward to show that for non-IID data distribution:
\begin{eqnarray}
\frac{1}{Q} \sum_{q=1}^Q \mathbb{E}\| \nabla F(\theta_q)\|^2 \le \frac{H_0}{\sqrt{TQ}} + \frac{U_0}{Q} + \frac{I_0}{\Gamma^*} \cdot \frac{\delta^2}{Q}\sum_{q=1}^Q \mathbb{E} \| \theta_q \|^2,
\end{eqnarray}
where $H_0=4\mathbb{E} [ F(\theta_{0})]+6LK\sigma^2$ and $U_0=3L^2NG$ are the corresponding constants. This completes the proof of Theorem~2.
\section{Experimental Details}
\subsection{Experiment Setup}
The code implementation is open-sourced and can be found at
\noindent\href{https://github.com/hanhanAnderson/FL_Converge}{\text{https://github.com/hanhanAnderson/FL\_Converge}}.
In this experimental section we evaluate different pruning techniques from state-of-the-art designs and verify our proposed theory under a unifying pruning framework, using two datasets.
Unless stated otherwise, the reported accuracy is defined as $$\frac{1}{n} \sum _{i} p_{i}\sum_{j}\text{Acc}(f_{i}(x_{j}^{(i)},\theta_{i}\odot m_{i}),y_{j}^{(i)})$$ averaged over three random seeds with the same randomly initialized starting point $\theta_{0}$. Key hyper-parameters include total training rounds $Q = 100$, local training epochs $T = 5$, testing batch size $bs = 128$ and local batch size $bl = 10$. Momentum for SGD is set to 0.5, and standard batch normalization is used.
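For concreteness, this metric can be sketched in a few lines of Python; the helper below (with hypothetical names \texttt{models}, \texttt{masks}, \texttt{weights} and \texttt{loaders}) only illustrates the definition and is not the API of the released code:
\begin{verbatim}
import torch

def weighted_masked_accuracy(models, masks, weights, loaders):
    """Weighted accuracy sum_i p_i * Acc_i, where client i evaluates
    its masked model f_i(x; theta_i * m_i) on its own test data.
    All argument names are illustrative, not the repository's API."""
    total = 0.0
    for model, mask, p, loader in zip(models, masks, weights, loaders):
        with torch.no_grad():
            # Apply the binary mask to the weights (in place, for brevity).
            for param, m in zip(model.parameters(), mask):
                param.mul_(m)
            correct, n = 0, 0
            for x, y in loader:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                n += y.numel()
        total += p * correct / n
    return total
\end{verbatim}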
We focus on three points in our experiments: (i) the general convergence of federated learning with heterogeneous models obtained by pruning; (ii) the impact of the coverage index $\Gamma_{min}$; (iii) the impact of the mask error $\delta$.
We examine the theoretical results on two common image classification datasets, MNIST and CIFAR-10, among $N=100$ workers with IID and non-IID data and participation ratio $c = 0.1$. For IID data, we follow the balanced-MNIST design of previous research and obtain a balanced CIFAR-10 analogously. For non-IID data, we obtain a balanced partition with skewed label distribution, where each device holds samples from at most two out of the ten possible classes.
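A minimal sketch of this label-skewed yet balanced partition (assuming ten classes and a dataset size divisible by the number of clients; the helper is hypothetical):
\begin{verbatim}
import numpy as np

def label_skew_partition(labels, n_clients=100, seed=0):
    """Balanced non-IID partition: each client receives samples
    from at most two of the ten classes (illustrative only)."""
    rng = np.random.default_rng(seed)
    by_class = [rng.permutation(np.flatnonzero(labels == c))
                for c in range(10)]
    take = len(labels) // n_clients // 2   # samples per class per client
    cursors = [0] * 10
    parts = []
    for i in range(n_clients):
        chunk = []
        for c in ((2 * i) % 10, (2 * i + 1) % 10):
            chunk.append(by_class[c][cursors[c]:cursors[c] + take])
            cursors[c] += take
        parts.append(np.concatenate(chunk))
    return parts
\end{verbatim}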
\subsection{Pruning Techniques}
In the paper we select four pruning techniques as baselines, and we elaborate their details here.
Let $P_{m}=\frac{\|m\|_{0}}{|\theta|}$ be the sparsity of mask $m$; e.g., $P_{m}=75\%$ for a model when 25\% of its weights are pruned, and let $M$ be the number of parameters in the model.
Then a mask for weights pruning can be defined as:
\begin{equation}
m_{i} = \begin{cases}
1 & \text{, if } \mathit{argsort}(\theta[i]) < P_{m} * M\\
0 & \text{, otherwise }
\end{cases} , \quad i \in \{1,\ldots,M\}
\end{equation}
Similarly, we have the definition for neuron pruning:
\begin{equation}
m_{i} = \begin{cases}
1 & \text{, if } \mathit{argsort}(\sum \theta_{i}) < P_{m} * N\\
0 & \text{, otherwise }
\end{cases} , \theta_{i} \in \textbf{Neuron}\ i
\end{equation}
where $N$ is the total number of neurons in the network. For a fixed sub-network, the mask is:
\begin{equation}
m_{i} = \begin{cases}
1 & \text{, if } i < P_{m} * M\\
0 & \text{, otherwise }
\end{cases} , \quad i \in \{1,\ldots,M\}
\end{equation}
where $M$ is the total number of parameters in the network.
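The three mask definitions above can be implemented directly. The NumPy sketch below is illustrative only; in particular, we read $\mathit{argsort}$ as magnitude ranking, which is an assumption on our part:
\begin{verbatim}
import numpy as np

def weight_pruning_mask(theta, p_m):
    """Keep the p_m * M largest-magnitude weights (flattened view)."""
    flat = np.abs(theta).ravel()
    keep = np.argsort(-flat)[: int(p_m * flat.size)]
    mask = np.zeros(flat.size)
    mask[keep] = 1.0
    return mask.reshape(theta.shape)

def neuron_pruning_mask(theta, p_m):
    """Keep whole rows (neurons) with the largest summed weights.
    We score by the absolute sum, an assumption on our part."""
    score = np.abs(theta).sum(axis=1)   # one score per neuron (row)
    keep = np.argsort(-score)[: int(p_m * score.size)]
    mask = np.zeros_like(theta)
    mask[keep, :] = 1.0
    return mask

def fixed_subnetwork_mask(theta, p_m):
    """Keep the first p_m * M parameters, independent of their values."""
    mask = np.zeros(theta.size)
    mask[: int(p_m * theta.size)] = 1.0
    return mask.reshape(theta.shape)
\end{verbatim}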
Note that in adaptive pruning such a mask is subject to change after each round of global aggregation.
For pruning with a pre-trained mask, the mask is generated adaptively for the first 3 rounds and then fixed for the rest of the training.
An illustration of these pruning techniques can be found in the figure below.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Appedix/pics/PruningMethod.png}
\caption{Illustration of pruning techniques used in this paper}
\label{fig:pruning-methods}
\end{figure}
\section{More Results on MNIST dataset}
In this section we present supplementary experimental results on the MNIST dataset. Specifically, we present the training progress with respect to global loss and accuracy for selected pruning techniques. For the final training results we focus on WP, FS and NP, since PT is not competitive without a carefully designed algorithm; we nevertheless keep the training details for PT.
\subsection{Change of Notations}
In the main paper we use code names for simplicity of notation and better readability. Here we present the results with their detailed settings.
For a full model without pruning it can be described as
$\mathbb{P}_{1} (\theta)= \{\textsl{S}_{1},\textsl{S}_{2},\textsl{S}_{3},\textsl{S}_{4}\}$, where
$$m_{i} = 1 \ \text{if} \ \theta_{i}\in \{\textsl{S}_{1}\cup \textsl{S}_{2}\cup \textsl{S}_{3}\cup \textsl{S}_{4}\} \ \text{ otherwise} \ m_{i} = 0\,.$$
Similarly, we have another three pruning policies, as follows:
$$\mathbb{P}_{2} (\theta)= \{\textsl{S}_{1},\textsl{S}_{3},\textsl{S}_{4}\}$$
$$\mathbb{P}_{3} (\theta)= \{\textsl{S}_{1},\textsl{S}_{2},\textsl{S}_{4}\}$$
$$\mathbb{P}_{4} (\theta)= \{\textsl{S}_{1},\textsl{S}_{2},\textsl{S}_{3}\}$$
We further denote a local client by its pruning policy. As an example, the case "*WP-M1" uses 4 local clients with full models, 2 local clients pruned with policy $\mathbb{P}_{2}$, 2 with policy $\mathbb{P}_{3}$ and 2 with policy $\mathbb{P}_{4}$; we therefore denote its code name as "1111223344" for simpler notation. Note that we continue to use the code name "FedAvg" as a baseline rather than "1111111111". For the rest of the appendix we use these code names to denote the pruning policy settings.
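As a trivial illustration of the convention, a code name decodes digit by digit into per-client policies (hypothetical helper, not part of the released code):
\begin{verbatim}
def decode_codename(code):
    """Digit i of the code name gives client i's pruning policy."""
    return ["P" + d for d in code]

print(decode_codename("1111223344"))
# ['P1', 'P1', 'P1', 'P1', 'P2', 'P2', 'P3', 'P3', 'P4', 'P4']
\end{verbatim}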
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}cccccccccccc@{}}
\toprule
\multirow{2}{*}{codename} &
\multirow{2}{*}{100\%} &
\multirow{2}{*}{75\%} &
\multirow{2}{*}{50\%} &
\multirow{2}{*}{PARAs} &
\multirow{2}{*}{FLOPs} &
\multirow{2}{*}{$\Gamma_{min}$} &
\multirow{2}{*}{\%PARA} &
\multirow{2}{*}{\%FLOPS} &
IID &
\multicolumn{2}{c}{Non-IID} \\ \cmidrule(l){10-12}
& & & & & & & & & Accuracy & Global & Local \\ \midrule
1111111111 & 10 & & & 159010 & 158800 & 10 & 1.00 & 1.00 & 98.045 & 93.59 & 93.82 \\
1111114444 & 6 & 4 & & 143330 & 143120 & 6 & 0.90 & 0.90 & 98.18 & 95.15 & 95.49 \\
1111144447 & 5 & 4 & 1 & 135490 & 135280 & 5 & 0.85 & 0.85 & 97.51 & 89.13 & 89.29 \\
1111223344 & 4 & 6 & & 135490 & 135280 & 8 & 0.85 & 0.85 & 98.325 & 95.48 & 95.82 \\
1111234444 & 4 & 6 & & 135490 & 135280 & 6 & 0.85 & 0.85 & 98.395 & 95.45 & 95.96 \\
1111234567 & 4 & 3 & 3 & 123730 & 123520 & 7 & 0.77 & 0.77 & 96.735 & 88.99 & 88.9 \\
1111444444 & 4 & 6 & & 135490 & 135280 & 4 & 0.85 & 0.85 & 97.85 & 89.13 & 89.29 \\
1111444477 & 4 & 4 & 2 & 127650 & 127440 & 4 & 0.80 & 0.80 & 96.99 & 93.02 & 93.12 \\
1111556677 & 4 & & 6 & 111970 & 111760 & 6 & 0.70 & 0.70 & 95.545 & 80.07 & 79.34 \\
1114556677 & 3 & 1 & 6 & 108050 & 107840 & 5 & 0.67 & 0.67 & 95.80 & 79.3 & 79.75 \\
1234556677 & 1 & 3 & 6 & 100210 & 100000 & 5 & 0.63 & 0.62 & 95.315 & 81.66 & 81.64 \\
1455666777 & 1 & 1 & 8 & 92370 & 92160 & 3 & 0.58 & 0.58 & 94.795 & 79.15 & 79.08 \\
2233445677 & 0 & 6 & 4 & 104130 & 103920 & 5 & 0.65 & 0.65 & 95.955 & 81.27 & 81.17 \\
1444777777 & 1 & 3 & 6 & 92370 & 92160 & 6 & 0.65 & 0.65 & 95.10 & 72.19 & 71.64 \\ \bottomrule
\end{tabular}%
}
\caption{Results For Weights Pruning on MNIST}
\label{tab:wp-mnist}
\end{table}
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}cccccccccccc@{}}
\toprule
\multirow{2}{*}{codename} &
\multirow{2}{*}{100\%} &
\multirow{2}{*}{75\%} &
\multirow{2}{*}{50\%} &
\multirow{2}{*}{PARAs} &
\multirow{2}{*}{FLOPs} &
\multirow{2}{*}{$\Gamma_{min}$} &
\multirow{2}{*}{\%PARA} &
\multirow{2}{*}{\%FLOPS} &
IID &
\multicolumn{2}{c}{Non-IID} \\ \cmidrule(l){10-12}
& & & & & & & & & Accuracy & Global & Local \\ \midrule
1111111111 & 10 & & & 159010 & 158800 & 10 & 1.00 & 1.00 & 98.13 & 95.31 & 95.33 \\
1111114444 & 6 & 4 & & 143110 & 142920 & 6 & 0.90 & 0.90 & 97.97 & 93.6 & 93.82 \\
1111144447 & 5 & 4 & 1 & 135160 & 134980 & 5 & 0.85 & 0.85 & 97.395 & 91.92 & 92.18 \\
1111223344 & 4 & 6 & & 135160 & 134980 & 8 & 0.85 & 0.85 & 97.865 & 91.9 & 92.42 \\
1111234444 & 4 & 6 & & 135160 & 134980 & 6 & 0.85 & 0.85 & 97.86 & 92.99 & 92.93 \\
1111234567 & 4 & 3 & 3 & 123235 & 123070 & 7 & 0.77 & 0.775 & 96.645 & 83.82 & 83.61 \\
1111444444 & 4 & 6 & & 135160 & 134980 & 4 & 0.85 & 0.85 & 97.53 & 91.8 & 92.07 \\
1111444477 & 4 & 4 & 2 & 127210 & 127040 & 4 & 0.80 & 0.80 & 96.775 & 84.91 & 85.02 \\
1111556677 & 4 & & 6 & 111310 & 111160 & 6 & 0.70 & 0.70 & 96.575 & 69.11 & 69.63 \\
1114556677 & 3 & 1 & 6 & 107335 & 107190 & 5 & 0.67 & 0.675 & 95.345 & 77.53 & 77.7 \\
1234556677 & 1 & 3 & 6 & 99385 & 99250 & 5 & 0.62 & 0.625 & 95.475 & 72.8 & 72.4 \\
1455666777 & 1 & 1 & 8 & 91435 & 91310 & 3 & 0.57 & 0.575 & 94.41 & 61.96 & 62.49 \\
2233445677 & 0 & 6 & 4 & 103360 & 103220 & 5 & 0.65 & 0.65 & 96.375 & 60.23 & 61.01 \\
1444777777 & 1 & 3 & 6 & 99385 & 99250 & 5 & 0.62 & 0.625 & 95.23 & 60.54 & 61.85 \\ \bottomrule
\end{tabular}%
}
\caption{Results For Neuron Pruning on MNIST}
\label{tab:np-mnist}
\end{table}
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}cccccccccccc@{}}
\toprule
\multirow{2}{*}{codename} &
\multirow{2}{*}{100\%} &
\multirow{2}{*}{75\%} &
\multirow{2}{*}{50\%} &
\multirow{2}{*}{PARAs} &
\multirow{2}{*}{FLOPs} &
\multirow{2}{*}{$\Gamma_{min}$} &
\multirow{2}{*}{\%PARA} &
\multirow{2}{*}{\%FLOPS} &
IID &
\multicolumn{2}{c}{Non-IID} \\ \cmidrule(l){10-12}
& & & & & & & & & Accuracy & Global & Local \\ \midrule
1111111111 & 10 & & & 159010 & 158800 & 10 & 1.00 & 1.00 & 97.67 & 94.12 & 94.45 \\
1111114444 & 6 & 4 & & 143110 & 142920 & 6 & 0.9 & 0.90 & 97.76 & 92.33 & 92.55 \\
1111144447 & 5 & 4 & 1 & 135160 & 134980 & 6 & 0.85 & 0.85 & 97.34 & 93.79 & 93.92 \\
1111444444 & 4 & 6 & & 135160 & 134980 & 4 & 0.85 & 0.85 & 97.62 & 92.05 & 92.33 \\
1111444477 & 4 & 4 & 2 & 127210 & 127040 & 4 & 0.80 & 0.8 & 97.32 & 92.67 & 92.95 \\
1111444777 & 4 & 3 & 3 & 123235 & 123070 & 4 & 0.77 & 0.775 & 97.35 & 91.34 & 91.73 \\
1111777777 & 4 & & 6 & 111310 & 111160 & 4 & 0.70 & 0.7 & 97.18 & 93.6 & 93.48 \\
1114777777 & 3 & 1 & 6 & 107335 & 107190 & 3 & 0.67 & 0.675 & 97.12 & 93.7 & 93.57 \\
1444777777 & 1 & 3 & 6 & 99385 & 99250 & 1 & 0.62 & 0.625 & 97.01 & 90.74 & 90.57 \\
1477777777 & 1 & 1 & 8 & 91435 & 91310 & 1 & 0.57 & 0.575 & 96.88 & 90.73 & 90.67 \\ \bottomrule
\end{tabular}%
}
\caption{Results For Fixed Sub-network on MNIST}
\label{tab:fs-mnist}
\end{table}
\subsection{More Results}
\subsubsection{Case for IID data}
We present the full results of training for the IID case in Figs. 2--5.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/WPl.png}
\caption{Global Loss}
\label{fig:wp-iid-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/WPn.png}
\caption{Accuracy}
\label{fig:wp-iid-acc}
\end{subfigure}
\caption{Results on Weights Pruning on MNIST IID}
\label{fig:wp-iid}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/FSl.png}
\caption{Global Loss}
\label{fig:fs-iid-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/FSn.png}
\caption{Accuracy}
\label{fig:fs-iid-acc}
\end{subfigure}
\caption{Results on Fixed Sub-network on MNIST IID}
\label{fig:fs-iid}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/NPl.png}
\caption{Global Loss}
\label{fig:np-iid-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/NPn.png}
\caption{Accuracy}
\label{fig:np-iid-acc}
\end{subfigure}
\caption{Results on Neuron Pruning on MNIST IID}
\label{fig:np-iid}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/PTl.png}
\caption{Global Loss}
\label{fig:pt-iid-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/PTn.png}
\caption{Accuracy}
\label{fig:pt-iid-acc}
\end{subfigure}
\caption{Results on Pruning with pre-trained mask on MNIST IID}
\label{fig:pt-iid}
\end{figure}
\subsubsection{Case for non-IID data}
We present the full results of training for the non-IID case in Figs. 6--9.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/non-iid/WPl.png}
\caption{Global Loss}
\label{fig:wp-noniid-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/non-iid/WPn.png}
\caption{Accuracy}
\label{fig:wp-noniid-acc}
\end{subfigure}
\caption{Results on Weights Pruning on MNIST non-IID}
\label{fig:wp-noniid}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/non-iid/NPl.png}
\caption{Global Loss}
\label{fig:np-noniid-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/non-iid/NPn.png}
\caption{Accuracy}
\label{fig:np-noniid-acc}
\end{subfigure}
\caption{Results on Neuron Pruning on MNIST non-IID}
\label{fig:np-noniid}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/non-iid/FSl.png}
\caption{Global Loss}
\label{fig:fs-noniid-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/non-iid/FSn.png}
\caption{Accuracy}
\label{fig:fs-noniid-acc}
\end{subfigure}
\caption{Results on Fixed Sub-network on MNIST non-IID}
\label{fig:fs-noniid}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/non-iid/PTl.png}
\caption{Global Loss}
\label{fig:pt-noniid-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/non-iid/PTn.png}
\caption{Accuracy}
\label{fig:pt-noniid-acc}
\end{subfigure}
\caption{Results on Pruning with pre-trained mask on MNIST non-IID}
\label{fig:pt-noniid}
\end{figure}
\section{More Results for CIFAR-10 IID}
In this section we present supplementary experimental results on the CIFAR-10 dataset to test the effects of pruning on convolutional layers. Specifically, we present the training progress with respect to global loss and accuracy for selected pruning techniques, where we focus on WP and FS.
\subsection{Change of Notations}
In the main paper we use code names for simplicity of notation and better readability. Here we present the results with their detailed settings.
For a full model without pruning it can be described as
$\mathbb{P}_{1} (\theta)= \{\textsl{S}_{1},\textsl{S}_{2},\textsl{S}_{3},\textsl{S}_{4}\}$, where
$$m_{i} = 1 \ \text{if} \ \theta_{i}\in \{\textsl{S}_{1}\cup \textsl{S}_{2}\cup \textsl{S}_{3}\cup \textsl{S}_{4}\} \ \text{ otherwise} \ m_{i} = 0\,.$$
Having demonstrated the effects of pruning MLP layers, on the CIFAR-10 dataset we focus on the effects of pruning conv2d layers.
We have another three pruning policies for conv2d layers, as follows:
$$\mathbb{P}_{2} (\theta)= \{\textsl{S}_{1},\textsl{S}_{3},\textsl{S}_{4}\}$$
$$\mathbb{P}_{3} (\theta)= \{\textsl{S}_{1},\textsl{S}_{2},\textsl{S}_{4}\}$$
$$\mathbb{P}_{4} (\theta)= \{\textsl{S}_{1},\textsl{S}_{2},\textsl{S}_{3}\}$$
For WP and PT, when using $\mathbb{P}_{2}$ the top 75\% of kernels will be kept, i.e., for the first conv2d layer, the 5 largest kernels out of a total of 6 will be kept and the 6th kernel will be pruned. Under all pruning policies the MLP layers will be pruned at 75\% accordingly. Note that under such settings, code names without a full model '1', e.g. '2222333444', will not satisfy our necessary condition for convergence.
For FS, we denote by $\mathbb{P}_{2}$ the analogous policy that keeps only the first contiguous parameters, i.e., for the first conv2d layer, the first 5 kernels out of a total of 6 will be kept and the 6th kernel will be pruned, together with pruning the MLP layers at 75\%. We denote by $\mathbb{P}_{3}$ the policy that prunes only conv2d layers and by $\mathbb{P}_{4}$ the one that prunes only MLP layers. Note that, as a consequence, even with the same code name the WP and FS results are NOT directly comparable.
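The kernel-level WP and FS policies described above may be sketched as follows in PyTorch; shapes and helper names are assumptions made for illustration:
\begin{verbatim}
import torch

def prune_conv_kernels_wp(weight, keep=5):
    """WP: zero all but the `keep` largest output kernels of a conv2d
    weight of shape (out_channels, in_channels, kH, kW)."""
    scores = weight.abs().sum(dim=(1, 2, 3))   # one score per kernel
    keep_idx = torch.argsort(scores, descending=True)[:keep]
    mask = torch.zeros_like(weight)
    mask[keep_idx] = 1.0
    return weight * mask

def prune_conv_kernels_fs(weight, keep=5):
    """FS: keep the first `keep` kernels, regardless of magnitude."""
    mask = torch.zeros_like(weight)
    mask[:keep] = 1.0
    return weight * mask

w = torch.randn(6, 3, 5, 5)   # e.g. a first conv2d layer with 6 kernels
print((prune_conv_kernels_wp(w) != 0).sum(dim=(1, 2, 3)))
\end{verbatim}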
As in the MNIST section, we denote each local client by its pruning policy and use the same code-name convention, with "FedAvg" standing for the unpruned baseline "1111111111". For the final training results we focus on WP and FS, since PT is not competitive without a carefully designed algorithm; we nevertheless keep the training details for PT.
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}llcccc@{}}
\toprule
Codename & PARAs (K) & \%PARA & FLOPs (K) & \%FLOPs & Testing Accuracy \\ \midrule
1111111111 & 512.80 & 1.00 & 653.8 & 1.00 & 53.63 \\
1111111122 & 482.34 & 0.94 & 619.6 & 0.94 & 53.12 \\
1111112222 & 451.936 & 0.88 & 587.0 & 0.89 & 52.66 \\
1111112223 & 451.936 & 0.88 & 587.0 & 0.89 & 52.98 \\
1111112233 & 451.936 & 0.88 & 587.0 & 0.89 & 54.20 \\
1111113333 & 451.936 & 0.88 & 587.0 & 0.89 & 52.96 \\
1111114444 & 451.936 & 0.88 & 587.0 & 0.89 & 51.61 \\
1111222222 & 421.504 & 0.82 & 553.7 & 0.84 & 51.69 \\
1111222334 & 421.504 & 0.82 & 553.7 & 0.84 & 52.20 \\
1111223344 & 421.504 & 0.82 & 553.7 & 0.84 & 52.54 \\
1222333444 & 375.856 & 0.73 & 503.6 & 0.77 & 49.15 \\ \bottomrule
\end{tabular}%
}
\caption{Results For Weights Pruning on CIFAR 10}
\label{tab:wp-cifar}
\end{table}
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}llcccc@{}}
\toprule
Codename & PARAs (K) & \%PARA & FLOPs (K) & \%FLOPs & Testing Accuracy \\ \midrule
1111111111 & 512.81 & 1.00 & 653.80 & 1.00 & 54.78 \\
1111111122 & 476.37 & 0.92 & 619.68 & 0.94 & 54.10 \\
1111112222 & 439.93 & 0.85 & 585.57 & 0.89 & 52.87 \\
1111113333 & 471.28 & 0.91 & 589.48 & 0.90 & 53.96 \\
1111113344 & 467.92 & 0.91 & 589.06 & 0.90 & 53.90 \\
1111114444 & 464.57 & 0.90 & 588.64 & 0.90 & 54.44 \\
1111222222 & 403.49 & 0.78 & 551.46 & 0.84 & 52.74 \\
2222333444 & 372.59 & 0.72 & 488.47 & 0.74 & 52.35 \\ \bottomrule
\end{tabular}%
}
\caption{Results For Fixed Sub-network on CIFAR 10}
\label{tab:fs-cifar}
\end{table}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/CIFAR/WPl.png}
\caption{Global Loss}
\label{fig:wp-cifar-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/CIFAR/WPn.png}
\caption{Accuracy}
\label{fig:wp-cifar-acc}
\end{subfigure}
\caption{Results on Weights Pruning on CIFAR10 IID}
\label{fig:wp-cifar}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/CIFAR/PTl.png}
\caption{Global Loss}
\label{fig:pt-cifar-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/CIFAR/PTn.png}
\caption{Accuracy}
\label{fig:pt-cifar-acc}
\end{subfigure}
\caption{Results on Pruning with pre-trained mask on CIFAR10 IID}
\label{fig:pt-cifar}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/CIFAR/FSl.png}
\caption{Global Loss}
\label{fig:fs-cifar-loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Appedix/pics/CIFAR/FSn.png}
\caption{Accuracy}
\label{fig:fs-cifar-acc}
\end{subfigure}
\caption{Results on Fixed Sub-network Pruning on CIFAR10 IID}
\label{fig:fs-cifar}
\end{figure}
\section{Conclusion}
In this paper, we establish (for the first time) sufficient conditions for FL with heterogeneous local models and arbitrary adaptive pruning to converge to a stationary point of standard FL, at a rate of $\frac{1}{\sqrt{Q}}$. The result applies to general smooth cost functions and recovers a number of important FL algorithms as special cases. The analysis advocates designing pruning strategies with respect to both the minimum coverage index $\Gamma_{\rm min}$ and the pruning-induced noise $\delta^2$.
We further demonstrate empirically the validity of the theory and the performance of the proposed design.
Our work provides a theoretical understanding of FL with heterogeneous clients and dynamic pruning, and presents valuable insights for FL algorithm design, which we will pursue in future work.
\bibliographystyle{plain}
The hadronic final states that can be produced in $\tau$ lepton decays provide a clean environment to study the dynamics of strong interactions at energies below the $\tau$
lepton mass. The leading weak interactions that drive the flavor transitions in these decays are dressed by the strong and electromagnetic interactions to generate a large
diversity of hadronic and photonic states. The hadronic vertices can be cleanly extracted and used to test several properties of QCD and electroweak interactions, or to
extract fundamental parameters of the Standard Model~\cite{Pich:2013kg}.
In this paper we study the $\tau^{\pm} \to \pi^{\pm}\nu_{\tau}\ell^+\ell^-$ ($\ell =e$ or $\mu$) decays, which have been considered previously \cite{Dib:2011hc} in
the context of sterile neutrino exchange, overlooking the Standard Model contribution which, to our knowledge, has not been studied before. We will present the results of this
calculation and analyze the associated phenomenology in this article, ignoring all possible new physics contributions. No measurement of these decay channels has been attempted
so far although, as we will show, they are likely to be detected at near-future facilities.
The $\tau$ lepton decays under consideration are the crossed channels of the $\pi^{\pm} \to \ell^{\pm} \nu_{\ell} e^+e^-$ decays, which have been studied
in the past \cite{Bryman:1982et, Kersch:1985rn} and have already been observed \cite{Beringer:1900zz}. Both decays are interesting because they involve the
$\gamma^*W^{*\mp}\pi^{\pm}$ vertex with the two gauge bosons off their mass-shells. The analogous radiative $\tau^{\pm} \to \pi^{\pm}\nu_{\tau}\gamma$ and
$\pi^{\pm} \to \ell^{\pm} \nu_\ell\gamma$ decays, which have been widely studied before \cite{Bijnens:1992en, radpidec, radtaudec, Guo:2010dv}, provide information on the same
vertex in the case of a real photon. The knowledge of the $\gamma W \pi$ vertex in the full kinematical range is of great importance, not only for testing QCD predictions,
but also because it plays a relevant role in computing the radiative corrections to $\pi \to \ell \nu$, $\tau \to \pi \nu_{\tau}$ decays or in the evaluation of the hadronic
light-by-light contributions to the muon anomalous magnetic moment \cite{Hlbl}.
These four-body decays of pions and $\tau$ leptons explore different virtualities of the photon and $W$ boson and can provide complementary information on the relevant form
factors. The low energies involved in pion decays are sensitive to $QCD$ predictions in the chiral and isospin limits, while $\tau$ lepton decays involve energy scales where
the resonance degrees of freedom become relevant. As is well known, rigorous predictions from $QCD$ for the form factors that describe the $\gamma W\pi$ and $\gamma \gamma \pi$
vertices can be obtained only in the chiral and short-distance limits. Therefore the information provided by $\tau$ lepton decays is valuable in order to understand the
extrapolation between these two limiting cases.
The vector and axial-vector form factors relevant to our study are calculated in the framework of the Resonance Chiral Theory ($\RCT$) \cite{Ecker:1988te, RChT}. In order
to fix the free couplings appearing in these calculations we also impose available short-distance constraints in the large $N_C$ limit of $QCD$. As a result, we are able to
predict the branching ratios and the invariant-mass spectrum of the lepton pair in $\tau^{\pm} \to \pi^{\pm}\nu_{\tau}\ell^+\ell^-$ decays.
In Sec. \ref{ME and DR} we decompose the matrix element in terms of the model-independent ($QED$) and the $SD$ (vector and axial-vector) contributions, where the latter
depend on the corresponding hadronic form factors. These are studied in detail in Sec. \ref{SD FFs} and the QCD constraints on their short-distance behaviour
in the $N_C\to\infty$ limit are discussed in Sec. \ref{Shortdistance}. The related phenomenological analysis is presented in Sec. \ref{Pheno} and we give
our conclusions in Sec. \ref{Concl}. An appendix with the results of the spin-averaged squared matrix element completes our discussion.
\section{Matrix element and decay rate}\label{ME and DR}
We consider the process $\tau^-(p_\tau)\to\pi^-(p)\nu_\tau(q)\ell^+(p_+)\ell^-(p_-)$. This decay is generated by demanding that the photon in the $\tau^-(p_\tau)\to\pi^-(p)\nu_\tau(q)\gamma(k)$
decays becomes virtual and then converts into a lepton pair (lepton pair production mediated by the $Z$ boson is negligible); at the amplitude level, it suffices to
change the photon polarization $\epsilon^{\mu}$ in the radiative decay by $e\bar{u}(p_-)\gamma_{\mu}v(p_+)/k^2$, with $k=p_++p_-$ the photon momentum and $e$ the positron
charge. Therefore, one can relate the description of the structure dependent contributions in the former to that in the latter \cite{Guo:2010dv}. In analogy with the
radiative pion and one-meson tau decays, the matrix element can be written as the sum of four contributions:
\begin{equation}\label{matrix element decomposition}
\mathcal{M} \left[\tau^-(p_\t) \to \pi^-(p) \nu_\tau(q) \lmas(p_+)\lmenos(p_-)\right]
= \mathcal{M}_{IB_\tau} + \mathcal{M}_{IB_\pi} + \mathcal{M}_{V} + \mathcal{M}_{A}\,.
\end{equation}
The relevant diagrams are depicted in Fig.\ref{Fig:1}. The notation introduced for the amplitudes describes the four kinds of
contributions: $\mathcal{M}_{IB_{\tau}}$ is the bremsstrahlung off the tau lepton, (figure \ref{Fig:1}(a)); $\mathcal{M}_{IB_{\pi}}$ is the sum of the bremsstrahlung off the
$\pi$ meson (figure \ref{Fig:1}(b)), and the diagram with the local $W^*\gamma^*\pi$ vertex (figure \ref{Fig:1}(c)); $\mathcal{M}_{V}$ is the structure dependent vector
contribution (figure \ref{Fig:1}(d)) and $\mathcal{M}_{A}$ the structure dependent axial-vector contribution (figure \ref{Fig:1}(e)). Our imprecise knowledge of the exact
mechanism of hadronization in the last two terms is parametrized in terms of hadronic form factors, which are functions of $p\cdot k$ and $k^2$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{diagramstaupillnu.eps}
\caption{Feynman diagrams for the different kinds of contributions to the $\tau^-\to\pi^-\nu_\tau\ell^+\ell^-$ decays, as explained in the main text. The
dot indicates the hadronization of the $QCD$ currents. The solid square (triangle) represents the $SD$ contribution mediated by the axial-vector (vector) current.}
\label{Fig:1}
\end{figure}
The decay amplitude is composed of the following set of gauge-invariant contributions ($G_F$ is the Fermi constant, $V_{ud}=0.9742$ the $ud$ element of the quark mixing matrix,
$F_\pi=92.2$ MeV \cite{Beringer:1900zz} and we have defined $ \mathcal{M}_{IB}=\mathcal{M}_{IB_{\tau}}+\mathcal{M}_{IB_{\pi}}$),
\begin{eqnarray}\label{explicit expressions matrix element}
\mathcal{M}_{IB} & = & -i G_F V_{ud}\frac{e^2}{k^2}F_\pi M_\tau \bar{u}(p_-)\gamma_\mu v(p_+)\bar{u}(q)(1+\gamma_5)\left[\frac{2p^\mu}{2p\cdot k+k^2} +
\frac{2\pt^\mu-\slashed{k}\gamma^\mu}{-2\pt\cdot k+k^2}\right]u(\pt)\,,\nonumber\\
\mathcal{M}_{V} & = & - G_F V_{ud} \frac{e^2}{k^2} \bar{u}(p_-)\gamma^\nu v(p_+) F_V(p\cdot k,k^2)\epsilon_{\mu\nu\rho\sigma}k^\rho p^\sigma \bar{u}(q)\gamma^\mu(1-\gamma_5)u(\pt)\,,\\
\mathcal{M}_{A} & = & i G_F V_{ud} \frac{2e^2}{k^2}\bar{u}(p_-)\gamma_\nu v(p_+) \left\lbrace F_A(p\cdot k,k^2)\left[(k^2+p\cdot k)g^{\mu\nu}-k^\mu p^\nu\right] - \frac{1}{2} A_2(k^2) k^2g^{\mu\nu} \right. \nonumber \\ && \ \ \ \ \left. + \frac{1}{2} A_4(k^2) k^2(p+k)^\mu p^\nu\right\rbrace\bar{u}(q)\gamma_\mu(1-\gamma_5)u(\pt)\nonumber\, .
\end{eqnarray}
The structure-dependent contributions are described in terms of one vector and three axial-vector Lorentz invariant form factors. These form factors will be discussed in detail
later in the article and, in particular, the dependence on $k^2$ of $F_A(p\cdot k,k^2)$ and $F_V(p\cdot k,k^2)$ will be given in section \ref{SD FFs}. It can be easily
checked that the decay amplitudes corresponding to the radiative $\tau^- \to \pi^-\nu_\tau\gamma$ decays can be obtained from Eq. (\ref{explicit expressions matrix element})
by replacing $e\bar{u}(p_-)\gamma^{\mu}v(p_+)/k^2 \rightarrow \epsilon^{\mu}$, where $\epsilon^{\mu}$ is the polarization four-vector of the real photon, and then by setting
$k^2=0$. In this case, the decay amplitude depends only upon two form factors, $F_A(p\cdot k,k^2=0)$ and $F_V(p\cdot k,k^2=0)$, whose expressions can be read from
Ref.~\cite{Guo:2010dv}. The additional axial-vector form factors $A_2(k^2)$, and $A_4(k^2)$ can be found in Ref.~\cite{Bijnens:1992en}.
Eq.(\ref{explicit expressions matrix element}) can be checked from the corresponding expressions for $K^+ \to \mu^+\nu_\mu\ell^+\ell^-$ in
eq.(4.9) in Ref.~\cite{Bijnens:1992en} by using crossing symmetry and the conservation of the
electromagnetic current. As noted in this reference, the parametrization of the axial-vector form factor used by the Particle Data Group \cite{Beringer:1900zz} for the
analogous $\pi^+ \to \mu^+\nu_{\mu} e^+e^-$ decays, neglects the $A_4(k^2)$ form factor \footnote{The other form factors are related via
$-\sqrt{2}m_\pi\left[F_A(p\cdot k,k^2),A_2(k^2),F_V(k^2)\right]=\left[F_A,R,F_V\right]$ to the ones used in Ref.~\cite{Beringer:1900zz}.}. Given the different kinematics of
our problem we will keep it in the following.
As we will see later, at next-to-leading order in $\chi PT$, $A_2(k^2)$ and $A_4(k^2)$ can be expressed in terms of only one form factor (this is no longer true
at the next order \cite{Bijnens:1992en}, whose contributions we neglect). If we define this form factor as $B(k^2)\equiv-\frac{1}{2}A_2(k^2)$, then
$\frac{1}{2}A_4(k^2)=-B(k^2)/(k^2+2p\cdot k)$ and the axial-vector SD amplitude is simplified to
\begin{eqnarray}\label{A matrix element}
\mathcal{M}_{A} & = & i G_F V_{ud} \frac{2e^2}{k^2}\bar{u}(p_-)\gamma_\nu v(p_+) \left\lbrace F_A(p\cdot k,k^2)\left[(k^2+p\cdot k)g^{\mu\nu}-k^\mu p^\nu\right]\right.\nonumber\\
& & \left. +B(k^2) k^2 \left[g^{\mu\nu}-\frac{(p+k)^\mu p^\nu}{k^2+2p\cdot k}\right]\right\rbrace\bar{u}(q)\gamma_\mu(1-\gamma_5)u(\pt)\,.
\end{eqnarray}
The results of summing the different contributions to the squared matrix element over polarizations are collected in the appendix.
The $IB$ contributions are model-independent in the sense that they are determined in terms of the parameters of the well known non-radiative $\tau^- \to \pi^- \nu_\tau$ decays
and using $QED$. They provide the dominant contribution to the decay rate in the case of real photon emission \cite{Guo:2010dv} owing to the well-known infrared divergent behavior. For the decay
under consideration we can expect that this behaviour is softened since $k^2 \geq 4m_{\ell}^2$. The $SD$ (or model-dependent) contributions require the modelling of the
$\gamma^* W^*\pi$ vertex for photon and $W$ boson virtualities of the order of $1$ GeV. Those terms can be split into vector ($V$) and axial-vector ($A$) contributions
according to Eq. (\ref{explicit expressions matrix element}) and must include the resonance degrees of freedom that are relevant at such energies (see Sec. \ref{SD FFs}).
Therefore, the decay rate can be conveniently separated into six terms which correspond to three moduli squared ($IB,\ VV,\ AA$) and three interference terms ($IB-V,\ IB-A,\
V-A$). Thus, we can write the decay rate as follows:
\begin{eqnarray} \label{parts Gamma radiative decay tau one pG}
\Gamma_{\rm total} = \Gamma_{IB} + \Gamma_{VV} + \Gamma_{AA}+ \Gamma_{IB-V} + \Gamma_{IB-A}+\Gamma_{V-A}\ .
\end{eqnarray}
In terms of the five independent kinematical variables needed to describe a four-body decay, the differential decay rate is given by
\begin{equation}\label{differential decay rate}
{\rm d}\Gamma\left(\tau^-\to\nu_\tau\pi^-\ell^+\ell^-\right) = \frac{X \beta_{12}\beta_{34}}{4(4\pi)^6M_\tau^3}\overline{|\mathcal{M}|^2} {\rm d}s_{34} {\rm d}s_{12}
{\rm d}({\rm cos}\theta_1) {\rm d}({\rm cos}\theta_3) {\rm d}\phi_3\,,
\end{equation}
where $\overline{|\mathcal{M}|^2}$ is the spin-averaged unpolarized decay probability,
\begin{equation}\label{definitions differential decay rate}
X = \frac{\lambda^{1/2}(M_\tau^2,s_{12},s_{34})}{2}\,,\quad \beta_{ij}\,=\,\frac{\lambda^{1/2}(s_{ij},m_i^2,m_j^2)}{s_{ij}}\,,
\end{equation}
and $\lambda(a,b,c)=a^2+b^2+c^2-2ab-2ac-2bc$.
The five independent kinematical variables in eq. (\ref{differential decay rate}) were chosen as
$\left\lbrace s_{12},\,s_{34},\,\theta_1,\,\theta_3,\,\phi_ 3\right\rbrace$, where $s_{12}:=(p_1+p_2)^2$ and $s_{34}:=(p_3+p_4)^2$; the momenta were relabelled
\footnote{We decided to write eqs.(\ref{differential decay rate}) and (\ref{definitions differential decay rate}) in terms of the second set of momenta in
eq.(\ref{relabelling}) for its general usefulness in four-body decays. See Ref.\cite{AlainGabriel} for details. On the contrary, we prefer to present the rest of
eqs.(\ref{matrix element decomposition}) to (\ref{averaged VA}) in terms of the first set of momenta in eq.(\ref{relabelling}) for an easier interpretation.} as
\begin{equation}\label{relabelling}
\left\lbrace p_\tau,\, q,\, p,\, p_+,\, p_-\right\rbrace \to \left\lbrace p,\, p_1,\, p_2,\, p_3,\, p_4\right\rbrace.
\end{equation}
The definition of the angles is the standard one. Finally, the integration limits are
\begin{eqnarray}\label{integration limits}
& & s_{34}^{min}=(m_3+m_4)^2\,,\;s_{34}^{max}=(M-m_1-m_2)^2\,,\quad \theta_{1,3}\in [0,\pi]\,,\quad \phi_3\in[0,2\pi]\,,\nonumber\\
& & s_{12}^{min}=(m_1+m_2)^2\,,\;s_{12}^{max}=\left(M-\sqrt{s_{34}}\right)^2\,.
\end{eqnarray}
In this way, the outermost integration corresponds to the squared invariant mass $s_{34}$ of the lepton-antilepton pair, assuming it is the easiest spectrum to measure in the
considered decays.
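For illustration, the kinematic ingredients of Eqs. (\ref{definitions differential decay rate}) and (\ref{integration limits}) translate directly into code. The short Python sketch below (masses in GeV) is only meant to make the phase-space conventions explicit:
\begin{verbatim}
import math

def kallen(a, b, c):
    """Kallen function lambda(a, b, c)."""
    return a*a + b*b + c*c - 2.0*(a*b + a*c + b*c)

def X_factor(M, s12, s34):
    return 0.5 * math.sqrt(kallen(M*M, s12, s34))

def beta(s_ij, m_i, m_j):
    return math.sqrt(kallen(s_ij, m_i*m_i, m_j*m_j)) / s_ij

# Limits of the lepton-pair invariant mass squared s34 in
# tau- -> pi- nu ell+ ell-  (m1 = m_nu = 0, m2 = m_pi, m3 = m4 = m_ell).
M_TAU, M_PI, M_E = 1.77686, 0.13957, 0.000511
s34_min, s34_max = (2.0 * M_E)**2, (M_TAU - M_PI)**2
print(s34_min, s34_max)   # ~1.0e-6 and ~2.68 GeV^2
\end{verbatim}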
\section{Structure-dependent form factors}\label{SD FFs}
Although the hadronic form factors cannot be computed from the underlying theory, the symmetries of QCD are nonetheless the guiding principle to write the effective Lagrangian
that will be used. At very low energies, the strong interaction Lagrangian exhibits a chiral $SU(n_f)\otimes SU(n_f)$ symmetry in the approximate limit of ($n_f$) massless light quarks.
This symmetry allows one to develop $\CPT$ \cite{CPT} as an expansion in powers of momenta and masses of the lightest mesons (which acquire mass through explicit chiral symmetry breaking),
over a typical hadronic scale which can be identified with the lightest resonances or the chiral symmetry breaking scale. Since the energies probed in hadronic tau decays are
larger than these hadronic scales, the $\CPT$ expansion no longer converges at high invariant masses. In parallel, new degrees of freedom, the lightest resonances,
become excited and they should be introduced as dynamical fields in the action. This is done in $\RCT$ \cite{Ecker:1988te} working in the convenient antisymmetric tensor
formalism, which guarantees that the contact interactions of next-to-leading order ($NLO$) $\CPT$ are already included in the $\RCT$ Lagrangian, as can be seen by integrating
the resonances out. Now the expansion parameter is $1/N_C$ ($N_C$ being the number of colours of the gauge group) \cite{LargeN} and the theory at leading order has a spectrum
of infinitely many stable states with only tree level interactions. In our case, we will see that the kinematics of the problem damps very strongly the observables above $1$
GeV, which justifies considering only the exchange of the lightest vector and axial-vector resonance multiplets \footnote{Given the (axial-)vector character of the Standard
Model couplings of the hadronic matrix elements in $\tau$ decays, form factors for these processes are ruled by vector and axial-vector resonances.}. We will introduce the most
important $NLO$ correction in the $1/N_C$ counting given by the meson widths as they are needed to achieve a sensible description of the propagating resonances.
The relevant effective Lagrangian reads~:
\begin{eqnarray}
\label{eq:ret} {\cal L}_{\rm R\chi T} & \doteq & {\cal L}_{WZW} \,+ \,
{\cal L}_{\rm kin}^{\rm V}\, + \, \frac{F_\pi^2}{4}\langle u_{\mu} u^{\mu} + \chi _+
\rangle \, + \, \frac{F_V}{2\sqrt{2}} \langle V_{\mu\nu} f_+^{\mu\nu}\rangle
\, \nonumber \\
& & \hspace{-1.9cm} + \ i \,\frac{G_V}{\sqrt{2}} \langle V_{\mu\nu} u^\mu
u^\nu\rangle \, +\, \sum_{i=1}^{7} \, \frac{c_i}{M_V} \, {\cal O}^i_{\rm
VJP} \,+\, \sum_{i=1}^{4} \, d_i \, {\cal O}^i_{\rm VVP} \,+\,
\sum_{i=1}^{5} \, \lambda_i \, {\cal O}^i_{\rm VAP} \ ,
\label{lagrangian}
\end{eqnarray}
where all coupling constants are real and $M_V$ is the mass of the lightest vector meson resonance nonet \cite{Cirigliano:2003yq}.
We follow here the notation in Refs.~\cite{Ecker:1988te,RuizFemenia:2003hm,GomezDumm:2003ku}, where the explicit form of these operators can be found.
The structure-dependent form factors in $\tau^-\to\nu_\tau\pi^-\ell^+\ell^-$ decays that appear in Eq. (\ref{explicit expressions matrix element}), can be obtained from the
same Feynman diagrams considered in Ref.~\cite{Guo:2010dv} for the $\tau^-\to\nu_\tau\pi^-\gamma$ decays. This is achieved by replacing the real photon by a virtual one,
which then converts into the lepton-antilepton pair. These diagrams are given in Figs.\ref{fig.v} and \ref{fig.a} for the vector and axial-vector current contributions,
respectively.\\
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{piv}
\caption{Vector current contributions to the $W^{-*}\rightarrow \pi^- \gamma^*$ vertex. \label{fig.v}}
\end{center}
\end{figure}
\\
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{pia}
\caption{Axial-vector current contributions to the $W^{-*}\rightarrow \pi^- \gamma^*$ vertex. \label{fig.a}}
\end{center}
\end{figure}\\
Since both the $W$ gauge boson and the photon are virtual in the present case, the form factors defining the $\gamma^*W^* \pi$ vertex will depend upon two invariant variables
which we choose as $t:=(p+k)^2=k^2+2p\cdot k+m_\pi^2$ and $k^2$. The other important difference is that the second diagram of Fig.\ref{fig.a} (which was zero for real
photons~\cite{Guo:2010dv}) will now contribute, giving rise to the additional form factor $B\left(k^2\right)$. This term can be related to the isovector component of the
electromagnetic $\pi^+\pi^-$ form factor \cite{Bijnens:1992en} and it accounts for the off-shellness of the photon that is not contained in the pure scalar $QED$ contribution.
In the framework of the $\RCT$ the vector form factor $F_V\left(t, k^2\right)$, defined in eq.(\ref{explicit expressions matrix element}), adopts the following expression:
\begin{eqnarray}\label{F_V}
F_V(t,k^2) &=& -\frac{N_C}{24\pi^2 F_\pi}+ \frac{2\sqrt2 F_V}{3 F_\pi M_V
}\bigg[ (c_2-c_1-c_5) t +
(c_5-c_1-c_2-8c_3) m_\pi^2 + 2 (c_6-c_5) k^2\bigg]\times\nonumber \\
& & \left[ \frac{\mathrm{cos}^2\theta}{M_\phi^2-k^2-iM_\phi\Gamma_\phi}\left(1-\sqrt{2} \mathrm{tg}\theta \right)
+ \frac{\mathrm{sin}^2\theta}{M_\omega^2-k^2-iM_\omega\Gamma_\omega}\left(1+\sqrt{2} \mathrm{cotg}\theta \right)\right]
\nonumber \\
& & + \frac{2\sqrt2 F_V}{3 F_\pi M_V }\, D_\rho(t)\, \bigg[ ( c_1-c_2-c_5+2c_6) t +
(c_5-c_1-c_2-8c_3) m_\pi^2 + (c_2-c_1-c_5)k^2\bigg] \nonumber \\
& & + \frac{4 F_V^2}{3 F_\pi }\, D_\rho(t)\, \bigg[ d_3 (t+4k^2) +
(d_1+8d_2-d_3) m_\pi^2 \bigg]\times\nonumber \\
& & \left[ \frac{\mathrm{cos}^2\theta}{M_\phi^2-k^2-iM_\phi\Gamma_\phi}\left(1-\sqrt{2} \mathrm{tg}\theta \right)
+ \frac{\mathrm{sin}^2\theta}{M_\omega^2-k^2-iM_\omega\Gamma_\omega}\left(1+\sqrt{2} \mathrm{cotg}\theta \right)\right]\,,
\end{eqnarray}
where
\begin{equation}
D_\rho(t) = \frac{1}{M_\rho^2 - t - i M_\rho \Gamma_\rho(t)}\,\,,
\end{equation}
and $\Gamma_\rho(t)$ stands for the decay width of the $\rho(770)$ resonance included following the definition given in Ref. \cite{GomezDumm:2000fz}:
\begin{equation}\label{rhowidth}
\Gamma_\rho(s)=\frac{sM_\rho}{96\pi F_\pi^2}\left[\sigma_\pi^3(s)\theta\left(s-4m_\pi^2\right)+\frac{1}{2}\sigma_K^3(s)\theta\left(s-4m_K^2\right)\right]\,,
\end{equation}
with $\sigma_P(s)=\sqrt{1-\frac{4m_P^2}{s}}$.
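As a numerical cross-check of these conventions, Eq. (\ref{rhowidth}) is straightforward to evaluate; the Python sketch below (our production code is a separate implementation) reproduces an on-shell width of about $150$ MeV:
\begin{verbatim}
import math

M_RHO, F_PI = 0.775, 0.0922       # GeV
M_PI, M_K = 0.13957, 0.493677     # GeV

def sigma(s, m):
    """Phase-space factor sigma_P(s) above threshold, zero below."""
    return math.sqrt(1.0 - 4.0*m*m/s) if s > 4.0*m*m else 0.0

def gamma_rho(s):
    """Energy-dependent rho(770) width of Eq. (rhowidth)."""
    return (s * M_RHO / (96.0 * math.pi * F_PI**2)
            * (sigma(s, M_PI)**3 + 0.5 * sigma(s, M_K)**3))

print(gamma_rho(M_RHO**2))        # ~0.147 GeV at the rho peak
\end{verbatim}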
For the purposes of numerical evaluation, we will assume ideal mixing for the $\omega-\phi$ system of vector resonances, namely:
\begin{eqnarray}
\omega_1 = \mathrm{cos}\theta \;\omega - \mathrm{sin}\theta\;\phi \; \sim \sqrt{\frac{2}{3}} \omega - \sqrt{\frac{1}{3}} \phi \,, \nonumber \\
\omega_8 = \mathrm{sin}\theta \;\omega + \mathrm{cos}\theta\;\phi \; \sim \sqrt{\frac{2}{3}} \phi + \sqrt{\frac{1}{3}} \omega \, .
\end{eqnarray}
In this limit, the contribution of the $\phi$ meson to eq.(\ref{F_V}) vanishes; in addition we will neglect any energy-dependence in their off-shell widths given that they
are rather narrow resonances.
Similarly, the axial-vector form-factor $F_A(t, k^2)$ is given by
\begin{eqnarray} \label{F_A}
F_A(t, k^2) &=& \frac{F_V^2}{F_\pi}\left(1-\frac{2G_V}{F_V}\right)\,D_\rho(k^2) - \frac{F_A^2}{F_\pi} D_{\mathrm{a}_1}(t)
+ \frac{F_A F_V}{\sqrt{2} F_\pi}\,D_\rho(k^2)\, D_{\mathrm{a}_1}(t)\, \bigg( - \lambda'' t +
\lambda_0 m_\pi^2 \bigg)\,,\nonumber\\
\end{eqnarray}
where we have used the notation
\begin{eqnarray}
\sqrt{2}\lambda_0 &=&-4\lambda_1- \lambda_2-\frac{\lambda_4}{2}-\lambda_5\,, \nonumber \\
\sqrt{2} \lambda'' &=& \lambda_2-\frac{\lambda_4}{2}-\lambda_5\,,
\end{eqnarray}
for the relevant combinations of the couplings in $\mathcal{L}_2^{VAP}$, eq.(\ref{lagrangian}).
The energy-dependent a$_1(1260)$ resonance width entering $D_{\mathrm{a}_1}(t)$ was studied within this framework in Ref.~\cite{Dumm:2009va} where the dominant
$\pi\pi\pi$ and $KK\pi$ absorptive cuts were obtained in terms of the corresponding three-meson form factors \cite{Dumm:2009va, Dumm:2009kj}. Here we have used the updated
fit results of Ref.~\cite{TAUOLA2} which were obtained using the complete multi-dimensional distributions measured by BaBar \cite{Nugent:2013ij}.
Finally, the additional axial-vector form factor $B(k^2)$, is
\begin{equation} \label{B}
B(k^2) = F_\pi \frac{F_V^{\pi^+\pi^-}|_\rho(k^2)-1}{k^2}\,,
\end{equation}
where $F_V^{\pi^+\pi^-}|_\rho$ corresponds to the $I=1$ part of the $\pi^+\pi^-$ vector form factor. Based on the effective field theory description of Ref.\cite{Guerrero:1997ku}
including only the $\rho(770)$ contribution and reproducing the $\chi PT$ results \cite{Gasser:1990bv, Bijnens:1998fm, Bijnens:2002hp}, several phenomenological approaches
including the effect of higher excitations have been developed \cite{SanzCillero:2002bs, Roig:2011iv}. This form factor has also been addressed
within dispersive representations exploiting analyticity and unitarity constraints \cite{Pich:2001pj, De Troconiz:2001wt, Ananthanarayan:2011xt, Hanhart:2012wi}. Here we will
follow the approach of Ref.~\cite{Dumm:2013zh} and will use a dispersive representation of the form factor at low energies matched to a phenomenological description at
intermediate energies including the excited resonances contribution. A three-times subtracted dispersion relation will be used
\begin{equation}\label{FV_3_subtractions}
F_V^\pi(s) \,=\,\exp \Biggl[ \alpha_1\, s\,+\,\frac{\alpha_2}{2}\,
s^2\,+\,\frac{s^3}{\pi}\! \int^\infty_{s_{\rm thr}}\!\!ds'\,
\frac{\delta_1^1(s')} {(s')^3(s'-s-i\epsilon)}\Biggr] \, ,
\end{equation}
where \cite{Boito:2008fq}
\begin{equation}
\label{delta}
\tan \delta_1^1(s) = \frac{\Im m F_V^{\pi(0)}(s)}{\Re e
F_V^{\pi(0)}(s)} \ ,
\end{equation}
with
\begin{eqnarray} \label{SU2formula}
\hspace{-.5cm} F_V^{\pi\,(0)}(s) & = & \frac{M_\rho^2}{M_\rho^2
\left[1+\frac{s}{96\pi^2 F_\pi^2}\left(A_\pi(s)+
\frac12 A_K(s)\right)\right]-s}\nonumber \\
& = & \frac{M_\rho^2}{M_\rho^2 \left[1+\frac{s}{96\pi^2 F_\pi^2}\Re e
\left(A_\pi(s) + \frac12 A_K(s)\right)\right]-s-i M_\rho
\Gamma_\rho(s)}\ .
\end{eqnarray}
The loop function is ($\mu$ can be taken as $M_\rho$)
\begin{equation}\label{loopfun_2pi}
A_{P}(k^2) \, = \, \ln{\left( \frac{m^2_P}{\mu^2}\right)} + \frac{8 m^2_P}{k^2} -
\frac{5}{3} + \sigma_P^3(k^2) \,\ln{\left(\frac{\sigma_P(k^2)+1}{\sigma_P(k^2)-1}\right)}\,,
\end{equation}
and the phase--space factor $\sigma_P(k^2)$ was defined after Eq. (\ref{rhowidth}).
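A direct transcription of Eq. (\ref{loopfun_2pi}) reads as follows; this numerical sketch keeps the expression exactly as written (in Eq. (\ref{SU2formula}) only the real part of $A_P$ enters, the absorptive part being supplied by $\Gamma_\rho(s)$):
\begin{verbatim}
import cmath, math

M_RHO = 0.775   # GeV; the scale mu is taken as M_rho, as in the text

def A_P(k2, m, mu=M_RHO):
    """Loop function A_P(k^2) of Eq. (loopfun_2pi); complex arithmetic
    handles the continuation below the two-meson threshold."""
    s = cmath.sqrt(1.0 - 4.0*m*m/k2)          # sigma_P(k^2)
    return (math.log(m*m/(mu*mu)) + 8.0*m*m/k2 - 5.0/3.0
            + s**3 * cmath.log((s + 1.0)/(s - 1.0)))

print(A_P(0.5, 0.13957))   # pion loop at k^2 = 0.5 GeV^2
\end{verbatim}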
The parameters $\alpha_1,\,\alpha_2$ and the $\rho(770)$ resonance parameters entering $B(k^2)$ will be extracted \cite{Dumm:2013zh} from fits to BaBar
$\sigma(e^+e^-\to\pi^+\pi^-)$ data \cite{Aubert:2009ad} excluding the $\omega(782)$ contribution. We have used the preliminary values
$\alpha_1=1.87,\,\alpha_2=4.26$ in the numerics.
\section{Short-distance constraints}\label{Shortdistance}
The form factors derived in the previous Section satisfy the constraints imposed by chiral symmetry. Some of the remaining free parameters can be fixed by requiring that they
satisfy the short-distance $QCD$ behavior. The study of two-point spin-one Green functions within perturbative QCD \cite{Floratos:1978jb} showed that both of them go to a
constant value at infinite momentum transfer. Assuming local duality, the imaginary part of the quark loop can be understood as the sum of infinitely many positive contributions
of intermediate hadron states. If these must add up to a constant it should be expected that each of the contributions vanishes in that limit. This vanishing should be
accomplished asymptotically and, consequently, it is expected that all resonance excitations up to the QCD continuum contribute to the meson form factors in this limit. This
conclusion is also derived from the large-$N_C$ limit of QCD, where these requirements find their most natural application.
On the contrary, phenomenology suggests that the effect of excited resonances on the short-distance relations is rather small. To give just two examples, if the effects of
the $\rho(1450)$ resonance are ignored in the pion vector form factor \cite{Guerrero:1997ku}, the generic asymptotic constraint ($i$ corresponds to the index of the multiplet)
\begin{equation}
\sum_i F_V^i G_V^i = F_\pi^2\,,
\end{equation}
that is obtained in the $N_C\to\infty$ limit reduces to $F_VG_V=F_\pi^2$. Upon integration of the resonances, this produces the prediction of the $\chi PT$ low-energy coupling
\begin{equation}
L_9=\frac{F_VG_V}{2M_\rho^2}=\frac{F_\pi^2}{2M_\rho^2}=7.2\cdot10^{-3}\,,
\end{equation}
in remarkably good agreement with the phenomenologically extracted value, which shows that the corrections to the high-energy constraint induced by considering only the lightest
multiplet are smaller than $5\%$ in this case.
Our second example concerns the study of the $\tau^-\to (K\pi)^-\nu_\tau$ decays. In Ref.~\cite{Jamin:2006tk} the effect of the $K^\star(1410)$ resonance was included through
\begin{equation}
\gamma = -\frac{F_V'G_V'}{F^2}=\frac{F_VG_V}{F^2}-1\,.
\end{equation}
While $\gamma=0$ if the second multiplet is neglected, in the subsequent analyses \cite{Jamin:2008qg, Boito:2008fq, Boito:2010me} it was found $\gamma=-0.05\pm0.02$, which
supports the idea that the modifications introduced by the second multiplet to the short distance constraints are at the $5\%$ level \footnote{This conclusion is supported
by the analysis of the $\tau^-\to K^-\eta\nu_\tau$ decays \cite{Escribano:2013bca}.}.
This number should, however, be enlarged when estimating the error associated with the neglect of the heavier multiplets on the high-energy constraints in our problem. The
previous examples were given for two-meson form factors and we are dealing with the form factors corresponding to an (axial-vector) current coupled to a pseudoscalar
and a photon (giving the lepton-antilepton pair), which has a much richer dynamics. Our estimate on the error is discussed at the end of this section.
The vanishing of the vector form factor in eq.(\ref{F_V}) for $t\to\infty$ and $k^2\to\infty$ yields
\begin{eqnarray}
c_1-c_2+c_5&=&0\,, \nonumber \\
2(c_6-c_5)&=&\frac{-N_C M_V}{32\sqrt{2}\pi^2F_V}\,,
\end{eqnarray}
in agreement with the results of Ref.~\cite{RuizFemenia:2003hm} for the $VVP$ Green's function. No restrictions are found on the other couplings entering eq.(\ref{F_V}). The
high-energy conditions found in Ref.~\cite{RuizFemenia:2003hm} for them are
\begin{eqnarray}
-c_1-c_2-8c_3+c_5&=&0\,,\nonumber \\ d_1+8d_2-d_3&=&\frac{F_\pi^2}{8F_V^2}\,, \\
d_3&=& \frac{-N_C}{64\pi^2}\frac{M_V^2}{F_V^2}+\frac{F_\pi^2}{8F_V^2}\, . \nonumber
\end{eqnarray}
No short-distance requirements are obtained for the axial-vector form factor in eq.(\ref{F_A}), which already vanishes in the limit of $k^2$ and $t$ simultaneously large. The
corresponding couplings are constrained by the high-energy conditions on the two-point Green functions of vector and axial-vector currents \cite{Ecker:1988te}
\begin{equation}
F_V G_V = F_\pi^2\,,\quad 2 F_V G_V - F_V^2=0\,,
\end{equation}
and by the short-distance constraints applying in the $VAP$ Green's function \cite{Cirigliano:2004ue} and three-meson hadronic form factors \cite{Dumm:2009va, Dumm:2009kj}:
\begin{equation}
\lambda'=\frac{F_\pi^2}{2\sqrt{2}F_AG_V}\,,\quad \lambda''=\frac{2G_V-F_V}{2\sqrt{2}F_A}\,,\quad \lambda_0=\frac{\lambda'+\lambda''}{4}\,.
\end{equation}
If the Weinberg sum rules \cite{Weinberg:1967kj} ($F_V^2-F_A^2=F_\pi^2$, $F_V^2 M_V^2=F_A^2M_A^2$) are imposed, all couplings are predicted in terms of $F_\pi$ and $M_V$:
\begin{eqnarray}\label{Couplings}
c_1-c_2+c_5&=&0\,,\nonumber \\ 2(c_6-c_5)&=&\frac{-N_C M_V}{64\pi^2F_\pi}\,,\nonumber \\ c_1-c_2-8c_3+c_5&=&0\,, \nonumber \\ d_1+8d_2-d_3&=& \frac{1}{16}\,, \\
d_3&=&\frac{-N_C M_V^2}{128\pi^2F_\pi^2}+\frac{1}{16}\,,\nonumber \\ G_V&=& \frac{F_\pi}{\sqrt{2}}\,,\nonumber \\ F_V&=&\sqrt{2}F_\pi\,,\nonumber \\ F_A&=& F_\pi\,,\nonumber \\ \lambda'&=& \frac{1}{2}\,,\nonumber \\ \lambda''&=&0 \,,\nonumber \\ \lambda_0&=&\frac{1}{8}\,. \nonumber
\end{eqnarray}
In numerical evaluations we will take $M_V=775$ MeV.
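For reference, the non-vanishing combinations in Eq. (\ref{Couplings}) are easily evaluated numerically with $F_\pi=92.2$ MeV and $M_V=775$ MeV; the script below is a numerical sketch of the constraints and nothing more:
\begin{verbatim}
import math

F_PI, M_V, N_C = 0.0922, 0.775, 3     # GeV, GeV, number of colors

couplings = {
    "c6 - c5":        -N_C * M_V / (128.0 * math.pi**2 * F_PI),
    "d1 + 8 d2 - d3":  1.0 / 16.0,
    "d3":             -N_C * M_V**2 / (128.0 * math.pi**2 * F_PI**2)
                       + 1.0 / 16.0,
    "G_V (GeV)":       F_PI / math.sqrt(2.0),
    "F_V (GeV)":       math.sqrt(2.0) * F_PI,
    "F_A (GeV)":       F_PI,
    "lambda'":         0.5,
    "lambda_0":        1.0 / 8.0,
}
for name, value in couplings.items():
    print(f"{name:>16s} = {value:+.4f}")
# e.g. c6 - c5 ~ -0.020 and d3 ~ -0.105 with these inputs
\end{verbatim}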
In order to estimate the error of our predictions we may be conservative and consider uncorrelated variations of the above relations (\ref{Couplings}) of around $1/3$.
Comparison to hadronic tau decay data suggests, however, that the typical error of our approach is smaller \cite{Roig:2012zj}, $\lesssim20\%$, and we will take this figure
for estimating the error ranges. We will, nonetheless, keep $c_1-c_2+c_5=0$ to avoid the leading powers violating the asymptotic behaviour \cite{Dumm:2012vb}. In this way, we
will assume variations of $\pm 20\%$ for the non-vanishing combinations of couplings in eq.(\ref{Couplings}): $c_6-c_5$, $d_1+8d_2-d_3$, $d_3$, $G_V$, $F_V$, $F_A$, $\lambda'$
and $\lambda_0$ and we will set $|c_1+c_2+8c_3-c_5|\leq0.01$ and $|\lambda''|\leq0.04$ so that they are smaller than analogous non-vanishing couplings according to
eq.(\ref{Couplings}).
\section{Phenomenological analysis}\label{Pheno}
Using the results of previous sections, we have evaluated the branching fractions and the invariant-mass spectrum of the $\ell^+\ell^-$ pair for the decays
$\tau^-\to\pi^-\nu_\tau\ell^+\ell^-$ ($\ell=e,\,\mu$). In order to assess the structure-dependent ($SD$) and inner-bremsstrahlung ($IB$) contributions, we
have evaluated separately the moduli squared and interference terms in both observables, as discussed in Section 2.
The form factors that describe $SD$ contributions were given in eqs.(\ref{F_V}) to (\ref{loopfun_2pi}) and the coupling constants involved were fixed using short-distance $QCD$
constraints in eq.(\ref{Couplings}). The branching ratios that are predicted using these form factors are shown in the second and third columns of Table \ref{Tab:1}; the
corresponding allowed ranges that are obtained by letting the couplings vary within $20\%$ of their central values, as described in the previous section, are shown in the
fourth and fifth columns of Table \ref{Tab:1}. The couplings which were predicted to vanish ($c_1+c_2+8c_3-c_5$ and $\lambda''$) have a marginal influence on the error estimates.
Also, the impact of the variations of $\lambda_0$, $\lambda'$ and of $d_1+8d_2-d_3$ is rather mild, and the error ranges are basically determined by the uncertainties on the
remaining couplings: $F_V$, $F_A$, $G_V$, $c_5-c_6$ and $d_3$.
\begin{table*}[h!]
\begin{center}
\begin{tabular}{|c||c|c||c|c|}
\hline
& $\ell=e$ & $\ell=\mu$& $\ell=e$ & $\ell=\mu$\\
\hline
IB& $1.461\cdot10^{-5}$ & $1.600\cdot10^{-7}$ & $\pm 0.006\cdot10^{-5}$& $\pm 0.007\cdot10^{-7}$\\
IB-V& $-2\cdot10^{-8}$ & $1.4\cdot10^{-8}$ & $\left[-1\cdot10^{-7},1\cdot10^{-7}\right]$ & $\left[-4\cdot10^{-9},4\cdot10^{-8}\right]$\\
IB-A& $-9\cdot10^{-7}$ & $1.01\cdot10^{-7}$ & $\left[-3\cdot10^{-6},2\cdot10^{-6}\right]$ & $\left[-2\cdot10^{-7},6\cdot10^{-7}\right]$\\
VV & $1.16\cdot10^{-6}$ & $6.30\cdot10^{-7}$ & $\left[4\cdot10^{-7},4\cdot10^{-6}\right]$ & $\left[1\cdot10^{-7},3\cdot10^{-6}\right]$\\
AA& $2.20\cdot10^{-6}$ & $1.033\cdot10^{-6}$ & $\left[1\cdot10^{-6},9\cdot10^{-6}\right]$ & $\left[2\cdot10^{-7},6\cdot10^{-6}\right]$\\
V-A& $2\cdot10^{-10}$ & $-5\cdot10^{-11}$ & $\sim10^{-10}$ & $\sim10^{-10}$\\
\hline
TOTAL& $1.710\cdot10^{-5}$& $1.938\cdot10^{-6}$ & $\left(1.7^{+1.1}_{-0.3}\right)\cdot 10^{-5}$& $\left[3\cdot10^{-7},1\cdot10^{-5}\right]$\\
\hline
\end{tabular}
\caption{\small{The central values of the different contributions to the branching ratio of the $\tau^-\to\pi^-\nu_\tau\ell^+\ell^-$ decays ($\ell=e,\,\mu$) are displayed
on the left-hand side of the table. The error bands of these branching fractions are given in the right-hand side of the table. The error bar of the IB contribution stems
from the uncertainties on the $F_{\pi}$ decay constant and $\tau$ lepton lifetime \cite{Beringer:1900zz}.}} \label{Tab:1}
\end{center}
\end{table*}
The normalized invariant-mass distribution of the lepton pair,
\begin{equation}\label{spectrum}
\frac{1}{\Gamma_\tau}\cdot \frac{d\Gamma\left(\tau^-\to\pi^-\nu_\tau e^+e^-\right)}{ds_{34}}\, ,
\end{equation}
is shown in Fig. \ref{Fig:4}. As can be observed, the $IB$ contribution dominates the spectrum for values of $s_{34}\lesssim 0.1$ GeV$^2$. For larger
values (which can be better appreciated in Fig. \ref{Fig:5}) the $SD$ part overcomes the former and the $AA$ contribution dominates in the rest of the spectrum apart from
the $\rho(770)$ peak region where the $VV$ part overtakes it. The interference terms $IB-V$ and $IB-A$ are negative for most of the spectrum and do not appear in the
figure.
\begin{figure}[h!]
\centering
\vspace{1.3cm}
\includegraphics[scale=0.55,angle=-90]{e_low.eps}
\caption{\small{The different contributions to the normalized $e^+e^-$ invariant mass distribution defined in Eq. (\ref{spectrum}) are plotted. A double logarithmic scale was
needed.} \label{Fig:4}}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.55,angle=-90]{e_high.eps}
\caption{\small{The different contributions to the normalized $e^+e^-$ invariant mass distribution defined in Eq. (\ref{spectrum}) are plotted in a magnification for
$s_{34}\gtrsim 0.1$ GeV$^2$ intended to better appreciate the $SD$ contributions. A double logarithmic scale was needed.} \label{Fig:5}}
\end{figure}
The normalized $\mu^+\mu^-$ invariant mass distribution (defined in analogy with Eq. (\ref{spectrum})) is shown in Fig. \ref{Fig:6}. In this case the $IB$ and
$SD$ contributions (essentially $AA$ apart from the $\rho(770)$ peak region) are comparable for $s_{34}\lesssim 0.1$ GeV$^2$. For higher values of the squared photon invariant
mass the main contribution comes from the $AA$ part and the $VV$ contribution shows up through the peak at the $\rho(770)$ mass.
\begin{figure}[!h]
\begin{center}
\vspace*{1.2cm}
\includegraphics[scale=0.55,angle=-90]{mu.eps}
\caption[]{\small{The different contributions to the normalized $\mu^+\mu^-$ invariant mass distribution are plotted. A double
logarithmic scale allows us to display the different contributions more clearly.}} \label{Fig:6}
\end{center}
\end{figure}
In Figs.~\ref{Fig:4}-\ref{Fig:6}, vertical fluctuations can be appreciated in certain energy regions of the normalized invariant-mass distributions. In order to compute these
distributions in the $s_{34}$ variable, we have integrated numerically the decay probability over the remaining four independent kinematical variables by using a Fortran code
based on the VEGAS routine. The observed fluctuations arise from the Monte Carlo evaluation of the four-body phase-space integration. The branching fractions shown in Table
\ref{Tab:1} were obtained by integrating numerically these invariant-mass distributions and checked against a direct integration over the five independent kinematical variables.
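The quoted cross-check, a direct integration over the five variables of Eq. (\ref{differential decay rate}), can be outlined as follows. Our actual computation uses a Fortran implementation of VEGAS; the plain-sampling Python sketch below, with a placeholder for the spin-averaged squared matrix element of the appendix, merely illustrates the procedure:
\begin{verbatim}
import math, random

M_TAU, M_PI, M_E = 1.77686, 0.13957, 0.000511   # GeV

def kallen(a, b, c):
    return a*a + b*b + c*c - 2.0*(a*b + a*c + b*c)

def msq(s12, s34, c1, c3, phi3):
    return 1.0   # placeholder for the |M|^2 given in the appendix

def gamma_mc(n=200000, seed=1):
    rng = random.Random(seed)
    s34_lo, s34_hi = (2.0*M_E)**2, (M_TAU - M_PI)**2
    acc = 0.0
    for _ in range(n):
        s34 = rng.uniform(s34_lo, s34_hi)
        s12_hi = (M_TAU - math.sqrt(s34))**2
        s12 = rng.uniform(M_PI**2, s12_hi)       # m1 = m_nu = 0
        c1, c3 = rng.uniform(-1, 1), rng.uniform(-1, 1)
        phi3 = rng.uniform(0.0, 2.0*math.pi)
        X = 0.5 * math.sqrt(kallen(M_TAU**2, s12, s34))
        b12 = math.sqrt(kallen(s12, 0.0, M_PI**2)) / s12
        b34 = math.sqrt(kallen(s34, M_E**2, M_E**2)) / s34
        vol = (s34_hi - s34_lo) * (s12_hi - M_PI**2) * 8.0 * math.pi
        acc += vol * X * b12 * b34 * msq(s12, s34, c1, c3, phi3)
    return acc / n / (4.0 * (4.0*math.pi)**6 * M_TAU**3)
\end{verbatim}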
We have found that the $SD$ contribution is sizable ($15\%$) in the case of $\ell=e$ decays and dominant ($92\%$) for $\ell=\mu$. Accordingly, it will be easy to
pin it down from the experimental data if enough statistics are accumulated: in $\ell=e$ decays by confirming that the differential decay width ceases to decrease as expected
from $IB$ around $s_{34}\sim 0.1$ GeV$^2$ and starts increasing up to the $\rho(770)$ peak region; in the $\ell=\mu$ case, first because it falls off more slowly than expected
from a pure $QED$ contribution \footnote{The $(1/\Gamma)\, d\Gamma/ds_{34}$ distribution and the $IB$ contribution to it can be well approximated by $a+b\,\log(s_{34})$ in the range
$\left[0.11,0.19\right]$ GeV$^2$. We find $b^{TOT}=-1.314(3)\cdot10^{-6}$ and $b^{IB}=-8.87(3)\cdot10^{-7}$, quantifying the effect of $SD$ contributions in this region. We
quote for completeness our results $a^{TOT}=-5.63(6)\cdot10^{-7}$ and $a^{IB}=-1.221(5)\cdot10^{-6}$.} and, from $s_{34}\sim 0.3$ GeV$^2$ onwards, because it starts to rise up
to the $\rho(770)$ peak region. If a fine binning is achieved in this zone it will also be possible to confirm the expected $VV$ contribution in either decay mode.
The fact that, in both decays, the contribution to the decay width from the $s_{34}>1$ GeV$^2$ region is negligible justifies our assumption of including only the lightest
multiplet of vector and axial-vector resonances. This result is not trivial in the axial-vector case; in the vector case it is not modified even if the
$\rho(1450)$ exchange is included phenomenologically \cite{Dumm:2009va}.
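As an aside, the logarithmic behaviour quoted in the footnote above is straightforward to check numerically. The following is a minimal sketch (in Python, for illustration); the binned spectrum used as input is a synthetic placeholder built from the quoted $IB$ parameters, not our actual simulation output.
\begin{verbatim}
import numpy as np

# Hypothetical binned spectrum (1/Gamma) dGamma/ds34 on the fit window
# [0.11, 0.19] GeV^2; the input is a synthetic placeholder built from
# the quoted IB parameters, not actual simulation output.
s34 = np.linspace(0.11, 0.19, 9)
spectrum = -1.221e-6 - 8.87e-7 * np.log(s34)

# Least-squares fit of spectrum = a + b*log(s34).
b, a = np.polyfit(np.log(s34), spectrum, 1)
print(a, b)   # recovers a ~ -1.221e-6, b ~ -8.87e-7
\end{verbatim}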
We have also assessed the relevance of the axial-vector $B$ form factor, introduced in Eq.~(\ref{A matrix element}) (see also Eq.~(\ref{B})). We find it important,
as the $(AA)+(IB-A)$ contributions drop to $33\%$ and $25\%$ of the values shown in Table \ref{Tab:1} if this form factor is neglected. This, in turn, results in a decrease
of the branching ratio of $5\%$ for $\ell=e$ and $44\%$ for $\ell=\mu$. It is therefore essential to include this contribution in the muon decay channel. This also explains why
the $AA$ normalized invariant-mass distribution is peaked in the $\rho(770)$ mass region for either channel, since the $B$ form factor is proportional to the isovector
component of the electromagnetic di-pion form factor.
A future study of the data corresponding to the $SD$-dominated part of the spectrum will also allow testing the hadronization proposed in Ref.~\cite{Guo:2010dv} for the
$\tau^-\to\pi^-\gamma\nu_\tau$ decays. In particular, in that reference it was found that
\begin{equation}
\Gamma\left(\tau^-\to\pi^-(\gamma)\nu_\tau\right)=\Gamma\left(\tau^-\to\pi^-\nu_\tau\right)(1+\delta_\gamma)\,,
\end{equation}
with $\delta_\gamma\sim1.460\cdot10^{-2}$ for a photon energy threshold of $50$ MeV. The $SD$ part, whose contribution was found to be $\delta_\gamma\sim0.138\cdot10^{-2}$,
could be tested through the $\tau^-\to\pi^-\nu_\tau\ell^+\ell^-$ ($\ell=e,\,\mu$) decays considered in this paper. This knowledge can also be extended to the computation of
the radiative corrections to the ratio $R_{\tau/\pi}:=\Gamma\left(\tau^-\to\pi^-\nu_\tau\right)/\Gamma\left(\pi^-\to\mu^-\bar{\nu}_\mu\right)$ \cite{radtaudec}, relevant
for lepton universality tests \cite{Pich:2013kg}.
Finally, the study of radiative tau decays is also important for a faithful modelling of backgrounds in lepton flavour violation searches, as was noted for the
$\tau^-\to\pi^-\gamma\nu_\tau$ decays in the case where the pion is misidentified as a muon and resembles the $\tau^-\to\mu^-\gamma$ \cite{Guo:2010ny} signal. The standard
simulation of the radiative decay is performed with PHOTOS \cite{PHOTOS}, which only includes the scalar $QED$ contribution, neglecting the $SD$ parts. Analogously, the
$\tau^-\to\pi^-\ell^+\ell^-\nu_\tau$ ($\ell=e,\,\mu$) decays under consideration might also mimic the $\tau^-\to\mu^-\ell^+\ell^-$ processes. Although it seems that
the inclusion of $QCD$ contributions for the $\ell=\mu$ case will be important (as the $SD$ part gives the bulk of the branching ratio), a dedicated study is needed to confirm
this, because the processes involved are three- and four-body decays, which complicates matters with respect to the study in Ref.~\cite{Guo:2010ny}, where the kinematics of
$\tau^-\to\mu^-\gamma$ is completely fixed by selecting the photons with almost maximal energy in $\tau^-\to\pi^-\gamma\nu_\tau$ decays as the relevant background.
\section{Conclusions}\label{Concl}
We have studied for the first time the $\tau^-\to\pi^-\nu_\tau\ell^+\ell^-$ ($\ell=e,\,\mu$) decays. We have evaluated the model-independent contributions by using $QED$
and have obtained the structure-dependent part ($W^* \to \pi^-\gamma^*$ vertex) using $R\chi T$. This approach ensures the low-energy limit of $\chi PT$ and includes the
lightest resonances as active degrees of freedom worked out within the convenient antisymmetric tensor formalism. We have been able to predict all the couplings involved in the
relevant Lagrangian term using short-distance QCD constraints (in the $N_C\to\infty$ limit and restricting the spectrum to the lowest-lying spin-one resonances) on the
related Green functions and form factors and considered the error stemming from this procedure in a conservative way.
Within this framework we predict
$BR\left(\tau^-\to\pi^-\nu_\tau e^+e^-\right)=\left(1.7^{+1.1}_{-0.3}\right)\cdot 10^{-5}$ and \break $BR\left(\tau^-\to\pi^-\nu_\tau \mu^+\mu^-\right)\in \left[3\cdot10^{-7},1\cdot10^{-5}\right]$.
We find that while the $\ell=e$ decays should be within discovery reach at the future super-flavour facilities, this will only be possible for the $\ell=\mu$ decays if they
happen to be close to the upper limit of the range we have given.
The studied hadronic currents are ready for installation in the $R\chi T$ based version \cite{TAUOLA2, Shekhovtsova:2012ra} of TAUOLA, the standard Monte Carlo generator for
tau lepton decays.
\section*{Acknowledgements} This work has been partially supported by the Spanish grant FPA2011-25948 and Conacyt (M\'exico). P.R. acknowledges the hospitality of
Departamento de F\'{\i}sica at CINVESTAV, where part of this work was done.
\section*{Appendix} \label{LongExpressions}
In this appendix we collect the different contributions to the squared matrix element, summed over polarizations and averaged over that of the tau. We refrain
from writing the lengthy outcome of the index contractions, which was used in our programs.
\begin{eqnarray}\label{averaged IB}
& & \overline{\Big|\mathcal{M}_{IB}\Big|^2} = 16 G_F^2 |V_{ud}|^2 \frac{e^4}{k^4}F_\pi^2 M_\tau^2 \ell_{\mu\nu} \left[\frac{-\tau^{\mu\nu} k^2}
{\left(k^2-2 k\cdot p_\tau\right)^2}+\frac{4 p^{\mu} q^{\nu} k\cdot p_\tau}{\left(k^2+2 k\cdot p\right) \left(k^2-2 k\cdot p_\tau\right)}\right.\nonumber\\
& & \left. +\frac{4 p_\tau^{\mu} q^\nu k\cdot p_\tau}{\left(k^2-2 k\cdot p_\tau\right)^2}-\frac{2 g^{\mu \nu} k\cdot p_\tau k\cdot q}{\left(k^2-2 k\cdot p_\tau\right)^2}-\frac{4 p^{\mu} p_\tau^{\nu} k\cdot q}{\left(k^2+2 k\cdot p\right) \left(k^2-2 k\cdot p_\tau\right)}\right.\nonumber\\
& & \left. -\frac{4 p_\tau^{\mu} p_\tau^{\nu} k\cdot q}{\left(k^2-2 k\cdot p_\tau\right)^2}+\frac{8 p^{\mu} p_\tau^{\nu} p_\tau\cdot q}{\left(k^2+2 k\cdot p\right) \left(k^2-2 k\cdot p_\tau\right)}\right.\nonumber\\
& & \left. +\frac{4 p^{\mu} p^{\nu} p_\tau\cdot q}{\left(k^2+2 k\cdot p\right)^2}\,+\frac{4 p_\tau^{\mu} p_\tau^{\nu} p_\tau\cdot q}{\left(k^2-2 k\cdot p_\tau\right)^2}\right]\,,\nonumber\\
\end{eqnarray}
\begin{eqnarray}\label{averaged IBV}
& & \overline{2 \Re e\left[\mathcal{M}_{IB}\mathcal{M}_V^*\right]} = - 32 G_F^2|V_{ud}|^2 \frac{e^4}{k^4}F_\pi M_\tau^2 \Im m \left\lbrace F_V^*(p\cdot k,k^2)\ell^\mu_{\nu^\prime} \epsilon^{\mu^\prime \nu^\prime \rho^\prime \sigma^\prime}k_{\rho^\prime}p_{\sigma^\prime} \mathcal{V}_{\mu\mu^\prime}\right\rbrace\,,\nonumber
\end{eqnarray}
\begin{eqnarray}\label{averaged IBA}
& & \overline{2 \Re e\left[\mathcal{M}_{IB}\mathcal{M}_A^*\right]} = -64 G_F^2 |V_{ud}|^2 \frac{e^4}{k^4}F_\pi M_\tau^2 \ell_\mu^{\nu^\prime} \Re e\left[ \mathcal{A}^*_{\mu^\prime\nu^\prime} \mathcal{V}^{\mu\mu^\prime}\right]\,,\nonumber
\end{eqnarray}
\begin{eqnarray}\label{averaged V}
& & \overline{ \Big|\mathcal{M}_{V}\Big|^2} = 16 G_F^2 |V_{ud}|^2 \frac{e^4}{k^4} \Big|F_V(p\cdot k,k^2)\Big|^2\epsilon_{\mu^\prime\nu^\prime\rho^\prime\sigma^\prime}\epsilon_{\mu\nu\rho\sigma}
k^\rho p^\sigma k^{\rho^\prime} p^{\sigma^\prime} \ell^{\nu{\nu^\prime}}\tau^{\mu{\mu^\prime}}\,,\nonumber
\end{eqnarray}
\begin{eqnarray}\label{averaged A}
& & \overline{\Big|\mathcal{M}_{A}\Big|^2} = 64 G_F^2 |V_{ud}|^2 \frac{e^4}{k^4}\ell_{\nu{\nu^\prime}}\tau_{\mu{\mu^\prime}}\mathcal{A}^{\mu\nu}{\mathcal{A}^{\mu^\prime\nu^\prime}}^*\,,\nonumber
\end{eqnarray}
\begin{eqnarray}\label{averaged VA}
& & \overline{2 \Re e\left[\mathcal{M}_{V}\mathcal{M}_A^*\right]} = -64 G_F^2 |V_{ud}|^2 \frac{e^4}{k^4}\Im m\left[ F_V(p\cdot k,k^2)\epsilon_{\mu\nu\rho\sigma}k^\rho p^\sigma \ell_{\nu^\prime}^\mu \tau^{\mu\mu^\prime}{\mathcal{A}_{\mu^\prime}^{\nu^\prime}}^*\right]\nonumber\,,
\end{eqnarray}
where we have defined
\begin{eqnarray}
\ell^{\mu\nu} & = & p_-^\mu p_+^\nu+p_-^\nu p_+^\mu-g^{\mu\nu}(m_\ell^2+p_-\cdot p_+)\, \nonumber \\ \tau^{\mu\nu}& =& p_\tau^\mu q^\nu+p_\tau^\nu q^\mu-g^{\mu\nu}p_\tau \cdot q\,,\\
\mathcal{A}^{\mu\nu}& = & F_A(p\cdot k,k^2)\left[(k^2+p\cdot k)g^{\mu\nu}-k^\mu p^\nu\right]+B(k^2) k^2 \left[g^{\mu\nu}-\frac{(p+k)^\mu p^\nu}{k^2+2p\cdot k}\right]\,,\nonumber\\
\mathcal{V}_{\mu\nu} & = & \frac{2p_\mu q_{\nu}}{2k\cdot p+k^2}+\frac{-g_{\mu\nu}k\cdot q+2q_{\nu}p_{\tau\,\mu} -i\epsilon_{\mu\nu\rho\sigma}k^\rho q^\sigma+k_{\nu}q_\mu}{k^2-2k\cdot p_\tau}\, ,\nonumber
\end{eqnarray}\label{definitions}
and have used conservation of the electromagnetic current, which implies $k_\mu\ell^{\mu\nu}=0=\ell^{\mu\nu}k_\nu$.
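As a simple cross-check of this identity, $k_\mu\ell^{\mu\nu}=0$ can be verified numerically for random on-shell lepton momenta. A minimal sketch (in Python, with illustrative values) follows.
\begin{verbatim}
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric (+,-,-,-)
m_l = 0.000511                         # lepton mass in GeV (illustrative)

def onshell(m, p3):
    # Four-momentum (E, px, py, pz) satisfying p^2 = m^2.
    p3 = np.asarray(p3, float)
    return np.concatenate(([np.sqrt(m**2 + p3 @ p3)], p3))

rng = np.random.default_rng(0)
p_m = onshell(m_l, rng.normal(size=3))  # p_-
p_p = onshell(m_l, rng.normal(size=3))  # p_+
k = p_m + p_p                           # photon momentum k = p_- + p_+

dot = lambda a, b: a @ g @ b
# Leptonic tensor l^{mu nu} as defined above.
ell = (np.outer(p_m, p_p) + np.outer(p_p, p_m)
       - g * (m_l**2 + dot(p_m, p_p)))

# k_mu l^{mu nu}: lower the index on k, then contract; the result
# vanishes up to floating-point rounding.
print((g @ k) @ ell)
\end{verbatim}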
\label{sec:intro} Blogs have become an important medium of communication
and information on the World Wide Web. Due to their accessible and
timely nature, they are also an intuitive source for data involving the
spread of information and ideas. By examining linking propagation
patterns from one blog post to another, we can infer answers to some
important questions about the way information spreads through a social
network over the Web. For instance, does traffic in the network exhibit
bursty, and/or periodic behavior? After a topic becomes popular, how
does interest die off -- linearly, or exponentially?
In addition to temporal aspects, we would also like to discover
topological patterns in information propagation graphs (cascades). We
explore questions like: do graphs of information cascades have common
shapes? What are their properties? What are characteristic in-link
patterns for different nodes in a cascade? What can we say about the
size distribution of cascades?
Finally, how can we build models that generate realistic cascades?
\subsection{Summary of findings and contributions}
{\em Temporal patterns:} For the two months of observation, we found
that blog posts do {\em not} exhibit bursty behavior; they show only a
weekly periodicity. Most surprisingly, the popularity of posts drops off
with a {\em power law}, rather than exponentially as one might have
expected. Moreover, the exponent of the power law is $\approx$-1.5,
agreeing very well with Barabasi's theory of heavy tails in human
behavior~\cite{barabasi05human}.
{\em Patterns in the shapes and sizes of cascades and blogs:} Almost
every metric we measured followed a power law. The most striking result
is that the size distribution of cascades (= number of involved posts)
follows a perfect Zipfian distribution, that is, a power law with slope
$-2$. The other striking discovery concerns the shape of cascades. The most
popular shapes were ``stars'', that is, a single post with several
in-links, but with none of the citing posts themselves being cited.
{\em Generating Model:} Finally, we design a flu-like epidemiological
model. Despite its simplicity, it generates cascades that match several
of the above power-law properties of real cascades. This model could be
useful for link prediction, link-spam detection, and ``what-if''
scenarios.
\subsection{Paper organization}
In section~\ref{sec:related} we briefly survey related work. We
introduce basic concepts and terminology in section~\ref{sec:prelim}.
Next, we describe the blog dataset, and discuss the data cleaning steps.
We describe temporal link patterns in section~\ref{sec:observ}, and
continue with exploring the characteristics of the information cascades.
We develop and evaluate the {Cascade generation model}\ in section~\ref{sec:models}. We
discuss implications of our findings in section~\ref{sec:discussion},
and conclude in section~\ref{sec:conclusion}.
\section{Related work}
\label{sec:related} To our knowledge this work presents the first
analysis of the temporal aspects of blog link patterns, and gives a detailed
analysis of cascades and information propagation on the blogosphere.
As we explore the methods for modeling such patterns, we will refer to
concepts involving power laws and burstiness, social networks in the
blog domain, and information cascades.
\subsection{Burstiness and power laws}
How often do people create blog posts and links? Extensive work has been
published on patterns relating to human behavior, which often generates
bursty traffic. Disk accesses, network traffic, web-server traffic all
exhibit burstiness. Wang et al in~\cite{Wang02Data} provide fast
algorithms for modeling such burstiness. Burstiness is often related to
self-similarity, which was studied in the context of World Wide Web
traffic~\cite{Crovella96Self}. Vazquez et al \cite{Vazquez:2006}
demonstrate the bursty behavior in web page visits and corresponding
response times.
Self-similarity is often a result of heavy-tailed dynamics. Human
interactions may be modeled with networks, and attributes of these
networks often follow \emph{power law}
distributions~\cite{faloutsos99powerlaw}. Such distributions have a PDF
(probability density function) of the form $p(x) \propto x^\gamma$,
where $p(x)$ is the probability to encounter value $x$ and $\gamma$ is
the exponent of the power law. In log-log scales, such a PDF gives a
straight line with slope $\gamma$. For $\gamma < -1$, we can show that
the Complementary Cumulative Distribution Function (CCDF) is also a
power law with slope $\gamma + 1$, and so is the rank-frequency plot
pioneered by Zipf~\cite{Zipf49Human}, with slope $1/(1+\gamma)$. For
$\gamma = -2$ we have the standard Zipf distribution, and for other
values of $\gamma$ we have the generalized Zipf distribution.
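For instance, for $p(x) = C\,x^{\gamma}$ with $\gamma < -1$, the CCDF statement follows by direct integration:
\[
P(X > x) = \int_x^\infty C\, t^{\gamma}\, dt = \frac{C}{-(\gamma+1)}\, x^{\gamma+1} \propto x^{\gamma+1}\,.
\]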
Human activity also follows periodicities, like daily, weekly and yearly
periodicities, often in combination with the burstiness.
\subsection{Blogs}
Most work on modeling link behavior in large-scale on-line data has been
done in the blog domain~\cite{Lada05election,Adar:2005,Kumar:2003}. The
authors note that, while information propagates between blogs, examples
of genuine cascading behavior appear to be relatively rare. This may,
however, be due in part to the Web-crawling and text analysis techniques
used to infer relationships among posts~\cite{Adar:2005,Gruhl:2004a}.
Our work differs in that we concentrate solely on the propagation of
links, and do not infer additional links from the text of posts, which
gives us more accurate information.
There are several potential models to capture the structure of the
blogosphere. Work on information diffusion based on
topics~\cite{Gruhl:2004a} showed that for some topics, their popularity
remains constant in time (``chatter'') while for other topics the
popularity is more volatile (``spikes''). Authors in~\cite{Kumar:2003}
analyze community-level behavior as inferred from blog-rolls --
permanent links between ``friend'' blogs. Analysis based on thresholding
as well as alternative probabilistic models of node activation is
considered in the context of finding the most influential nodes in a
network~\cite{Kempe:2003}, and for viral
marketing~\cite{Richardson:2002}. Such analytical work posits a known
network, and uses the model to find the most influential nodes; in the
current work we observe real cascades, characterize them, and build
generative models for them.
\subsection{Information cascades and epidemiology}
Information cascades are phenomena in which an action or idea becomes
widely adopted due to the influence of others, typically, neighbors in
some network~\cite{Bikchandani:1992,Goldenberg:2001,Granovetter:1978}.
Cascades on random graphs using a threshold model have been
theoretically analyzed~\cite{Watts:2002}. Empirical analysis of the
topological patterns of cascades in the context of a large product
recommendation network is in~\cite{jurij05patterns}
and~\cite{jure06viral}.
The study of epidemics offers powerful models for analyzing the spread
of viruses. Our topic propagation model is based on the \emph{SIS}
(Susceptible-Infected-Susceptible) model of
epidemics~\cite{Bailey1975Diseases}. This models flu-like viruses,
where an entity begins as ``susceptible'', may become ``infected'' and
infectious, and then heals to become susceptible again. A key parameter
is the infection probability $\beta$, that is, the probability of a
disease transmission in a single contact. Of high interest is the {\em
epidemic threshold}, that is, the critical value of $\beta$, above which
the virus will spread and create an epidemic, as opposed to becoming
extinct. There is a huge literature on the study of epidemics on full
cliques, homogeneous graphs, infinite graphs (see~\cite{Hethcote:2000}
for a survey), with recent studies on power-law
networks~\cite{Equiluz02Epidemic} and arbitrary
networks~\cite{WangCWF03}.
\section{Preliminaries}
\label{sec:prelim}
\begin{figure*}
\begin{center}
\begin{tabular}{c c c}
\epsfig{file=FIG/blogosphere.eps, width=0.30\textwidth} &
\epsfig{file=FIG/blognetwork.eps, width=0.35\textwidth} &
\epsfig{file=FIG/postnetwork.eps, width=0.27\textwidth} \\
(a) Blogosphere & (b) {Blog network}\ & (c) {Post network}\ \\
\end{tabular}
\end{center}
\caption{The model of the blogosphere (a). Squares represent blogs and
circles blog-posts. Each post belongs to a blog, and can contain
hyper-links to other posts and resources on the web. We create two
networks: a weighted blog network (b) and a post network (c). Nodes
$a, b, c, d$ are {\em cascade initiators}, and node $e$ is a {\em
connector}.}
\label{fig:blogosphere}
\end{figure*}
In this section we introduce terminology and basic concepts regarding
the blogosphere and information cascades.
Blogs (weblogs) are web sites that are updated on a regular basis. Blogs
have the advantage of being easy to access and update, and have come to
serve a variety of purposes. Oftentimes individuals use them as online
diaries and for social networking; at other times, news sites use blogs for
timely stories. Blogs are composed of posts that typically have room for
comments by readers -- this gives rise to discussion and opinion forums
that are not possible in the mass media. Also, blogs and posts
typically link to each other, as well as to other resources on the Web. Thus,
blogs have become an important means of transmitting information. The
influence of blogs was particularly relevant in the 2004 U.S. election,
as they became sources for campaign fundraising as well as an important
supplement to the mainstream media~\cite{Lada05election}. The
blogosphere has continued to expand its influence, so understanding the
ways in which information is transmitted among blogs is important to
developing concepts of present-day communication.
We model two graph structures emergent from links in the blogosphere,
which we call the \emph{{Blog network}} and the \emph{{Post network}}.
Figure~\ref{fig:blogosphere} illustrates these structures. Blogosphere
is composed of blogs, which are further composed of posts. Posts then
contain links to other posts and resources on the web.
From Blogosphere (a), we obtain the {Blog network}\ (b) by collapsing all
links between blog posts into weighted edges between blogs. A directed
blog-to-blog edge is weighted with the total number of links occurring
between posts in source blog pointing to posts in destination blog.
From the {Blog network}\ we can infer a social network structure, under the
assumption that blogs that are ``friends'' link to each other often.
In contrast, to obtain the {Post network}\ (c), we ignore the posts' parent
blogs and focus on the link structure. Associated with each post is
also the time of the post, so we label the edges in {Post network}\ with the
time difference $\Delta$ between the source and the destination posts.
Let $t_u$ and $t_v$ denote post times of posts $u$ and $v$, where $u$
links to $v$, then the link time $\Delta = t_u - t_v$. Note that $\Delta>0$,
since a post cannot link into the future, and there are no self-edges.
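As an illustration of these two constructions, the following is a minimal sketch (in Python); the record format \texttt{(src\_post, dst\_post, src\_blog, dst\_blog, delta)} is a hypothetical representation of the extracted links, not our actual data format.
\begin{verbatim}
from collections import defaultdict

# Minimal sketch: build the Blog network and the Post network from a
# stream of hyper-link records. The record format
# (src_post, dst_post, src_blog, dst_blog, delta) is hypothetical.
def build_networks(links):
    blog_net = defaultdict(int)  # (src_blog, dst_blog) -> weight
    post_net = {}                # (src_post, dst_post) -> link time
    for src_post, dst_post, src_blog, dst_blog, delta in links:
        blog_net[(src_blog, dst_blog)] += 1   # collapse post links
        post_net[(src_post, dst_post)] = delta
    return blog_net, post_net
\end{verbatim}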
From the {Post network}, we extract information cascades, which are
subgraphs induced by edges representing the flow of information. A cascade (also
known as a conversation tree) has a single starting post, called the {\em
cascade initiator}, with no out-links to other posts (e.g.\ nodes
$a,b,c,d$ in Figure~\ref{fig:blogosphere}(c)). Posts then join the
cascade by linking to the initiator, and subsequently new posts join by
linking to members within the cascade, where the links obey time order
($\Delta>0$). Figure~\ref{fig:cascade} gives a list of cascades
extracted from {Post network}\ in Figure~\ref{fig:blogosphere}(c). Since a
link points from the follow-up post to the existing (older) post,
influence propagates following the reverse direction of the edges.
\begin{figure}
\centering
\epsfig{file=FIG/cascades.eps, width=0.45\textwidth}
\caption{Cascades extracted from Figure~\ref{fig:blogosphere}.
Cascades represent the flow of information through nodes in the
network. To extract a cascade we begin with an initiator with no
out-links to other posts, then add nodes with edges linking to the
initiator, and subsequently nodes that link to any other nodes in the
cascade.}
\label{fig:cascade}
\end{figure}
We also define a \emph{non-trivial} cascade to be a cascade containing
at least two posts, and therefore a \emph{trivial cascade} is an
isolated post. Figure~\ref{fig:cascade} shows all non-trivial cascades
in Figure~\ref{fig:blogosphere}(c), but not the two trivial cascades.
Cascades form two main shapes, which we will refer to as \emph{stars}
and \emph{chains}. A star occurs when a single center post is linked
by several other posts, but the links do not propagate further. This
produces a wide, shallow tree. Conversely, a chain occurs when a root is
linked by a single post, which in turn is linked by another post. This
creates a deep tree that has little breadth. As we will later see most
cascades are somewhere between these two extreme points. Occasionally
separate cascades might be joined by a single post -- for instance, a
post may summarize a set of topics, or focus on a certain topic and
provide links to different sources that are members of independent
cascades. The post merging the cascades is called a \emph{connector
node}. Node $e$ in Figure~\ref{fig:cascade} is a connector node. It
appears in two cascades by connecting cascades starting at nodes $b$ and
$c$.
\section{Experimental setup}
\label{sec:data}
\subsection{Dataset description}
We extracted our dataset from a larger set which contains 21.3 million
posts from 2.5 million blogs from August and September
2005~\cite{GlanceHNSST05}. Our goal here is to study temporal and
topological characteristics of information propagation on the
blogosphere. This means we are interested in blogs and posts that
actively participate in discussions, so we biased our dataset towards
the more active part of the blogosphere.
We collected our dataset using the following procedure. We started with
a list of the most-cited blog posts in August 2005. For all posts we
traversed the full conversation tree forward and backward following
post's in- and out-links. For practical reasons we limited the depth of
such conversation trees to 100 and the maximum number of links followed
from a single post to 500. This process gave us a set of posts
participating in conversations. From the posts we extracted a list of
all blogs. This gave us a set of about $45,000$ active blogs. Now, we
went back to the original dataset and extracted all posts coming from
this set of active blogs.
This process produced a dataset of $2,422,704$ posts from $44,362$ blogs
gathered over a two-month period from beginning of August to end of
September 2005. There are the total of $4,970,687$ links in the dataset
out of which $245,404$ are among the posts of our dataset and the rest
point to other resources (e.g. images, press, news, web-pages). For each
post in the dataset we have the following information: unique Post ID,
the URL of the parent blog, Permalink of the post, Date of the post,
post content (html), and a list of all links that occur in the post's
content. Notice these posts are not a random sample of all posts over
the two-month period, but rather a set of posts biased towards active
blogs participating in conversations (by linking to other posts/blogs).
In Figure~\ref{fig:postovertime} we plot the number of posts per day
over the span of our dataset. The periodicities in traffic on a weekly
basis will be discussed in section~\ref{sec:observ}. Notice that our
dataset has no ``missing past'' problem, i.e. the starting points of
conversation are not missing due to the beginning of data collection,
since we followed the conversation all the way to its starting point and
thus obtained complete conversations.
The posts span the period from July to September 2005 (90 days), while
the majority of the data comes from August and September. The July posts
in the dataset are parts of conversations that were still active in
August and September.
\begin{figure}
\centering
\epsfig{file=FIG/postperday.eps, width=0.70\textwidth}
\caption{Number of posts by day over the three-month period.}
\label{fig:postovertime}
\end{figure}
\subsection{Data preparation and cleaning}
We represent the data as a cluster graph where clusters correspond to
blogs, nodes in the cluster are posts from the blog, and hyper-links
between posts in the dataset are represented as directed edges. Before
analysis, we cleaned the data to most clearly represent the structures
of interest.
\textbf{Only consider out-links to posts in the dataset.} We removed
links that point to posts outside our dataset or other resources on the
web (images, movies, other web-pages). The major reason for this is that
we only have time-stamps for the posts in the dataset, while we know
nothing about the creation time of URLs outside the dataset, and thus we
cannot consider these links in our temporal analysis.
\textbf{Use time resolution of one day.} While posts in blogspace are
often labeled with complete time-stamps, many posts in our dataset do
not have a specific time stamp but only the date is known. Additionally,
there are challenges in using time stamps to analyze emergent behaviors
on an hourly basis, because posts are written in different time zones,
and we do not normalize for this. Using a coarser resolution of one day
serves to reduce the time zone effects. Thus, in our analysis the time
differences are aggregated into 24-hour bins.
\textbf{Remove edges pointing into the future.} Since a post cannot
link to another post that has not yet been written, we remove all edges
pointing into the future. The cause may be human error, post update, an
intentional back-post, or time zone effects; in any case, such links do
not represent information diffusion.
\textbf{Remove self edges.} Again, self edges do not represent
information diffusion. However, we do allow a post to link to another
post in the same blog.
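The cleaning steps above amount to a simple filter over the raw links. A minimal sketch (in Python) follows; the names \texttt{raw\_links} and \texttt{post\_date} are illustrative, not from our actual pipeline.
\begin{verbatim}
def clean_links(raw_links, post_date):
    # raw_links: iterable of (src_post, dst_post) pairs;
    # post_date: maps a post to its date in whole days (illustrative).
    cleaned = []
    for src, dst in raw_links:
        if dst not in post_date:   # out-link leaving the dataset
            continue
        if src == dst:             # self edge
            continue
        delta = post_date[src] - post_date[dst]  # one-day resolution
        if delta < 0:              # edge pointing into the future
            continue
        cleaned.append((src, dst, delta))
    return cleaned
\end{verbatim}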
\section{Observations, patterns and laws}
\label{sec:observ}
\subsection{Temporal dynamics of posts and links}
Traffic in the blogosphere is not uniform; therefore, we consider traffic
patterns when analyzing influence in the temporal sense. As
Figure~\ref{fig:postovertime} illustrates, there is a seven-day
periodicity. Further exploring the weekly patterns,
Figure~\ref{fig:weekaggregated} shows the number of posts and the number
of blog-to-blog links for different days of the week, aggregated over
the entire dataset. Posting and blog-to-blog linking patterns tend to
have a \emph{weekend effect} of sharply dropping off at weekends.
\begin{figure}[tb]
\begin{center}
\begin{tabular}{c c}
\epsfig{file=FIG/PostsPerDayOfWeek.eps, width=0.45\textwidth} &
\epsfig{file=FIG/LinksPerDayOfWeek.eps, width=0.45\textwidth} \\
(a) Posts & (b) Blog-to-Blog links\\
\end{tabular}
\end{center}
\caption{Activity counts (number of posts and number of links)
per day of the week, from Monday to Sunday, summed over the entire dataset.}
\label{fig:weekaggregated}
\end{figure}
Next, we examine how a post's popularity grows and declines over time.
We collect all in-links to a post and plot the number of links occurring
on each day following the post. This creates a curve that indicates
the rise and fall of popularity. By aggregating over a large set of
posts we obtain a more general pattern.
The top left plot of Figure~\ref{fig:popularity} shows the number of in-links
for each day following a post, over all posts in the dataset, while the top
right plot shows the in-link patterns for Monday posts only (in order to
isolate the weekly periodicity). It is clear that most links occur
in the first 24 hours after the post; after that, the popularity
generally declines. However, in the top right plot, we note that there
are ``spikes'' occurring every seven days, on each following Monday. It
almost appears as if there is compensatory behavior for the sparse
weekend links. However, this is not the case. Mondays do not have an
unusual number of links; Monday only appears to spike on these graphs
because the natural drop-off of popularity in the following days allows
Monday to tower above its followers.
\begin{figure}[tb]
\begin{center}
\begin{tabular}{c c}
\epsfig{file=FIG/all-uw.eps, width=0.45\textwidth} &
\epsfig{file=FIG/mon-uw.eps, width=0.45\textwidth} \\
\epsfig{file=FIG/all-w.eps, width=0.45\textwidth} &
\epsfig{file=FIG/mon-w.eps, width=0.45\textwidth} \\
\epsfig{file=FIG/PL-alldays.eps, width=0.45\textwidth} &
\epsfig{file=FIG/PL-mondays.eps, width=0.45\textwidth} \\
All posts & Only Monday posts \\
\end{tabular}
\end{center}
\caption{Number of in-links vs.\ the days after the post, in
log-linear scale, when considering all posts (top left) and only
Monday posts (top right); after removing the day-of-the-week
effects (middle row); power-law fit to the data with exponents
$-1.6$ and $-1.46$ (bottom row).}
\label{fig:popularity}
\end{figure}
Thus, fitting a general model to the drop-off graphs may be problematic,
since we might obtain vastly different parameters across posts simply
because they occur at different times during the week. Therefore, we
smooth the in-link plots by applying a weighting parameter to the plots
separated by day of week. For each delay $\Delta$ on the horizontal
axis, we estimate the corresponding day of week $d$, and we prorate the
count for $\Delta$ by dividing it by $l(d)$, where $l(d)$ is the percent
of blog links occurring on day of week $d$.
This weighting scheme normalizes the curve such that days of the week
with less traffic are bumped up further to meet high traffic days,
simulating a popularity drop-off that might occur if posting and linking
behavior were uniform throughout the week. A smoothed version of the
post drop-offs is shown in the middle row of
Figure~\ref{fig:popularity}.
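Concretely, the smoothing amounts to the following sketch (in Python); \texttt{counts}, \texttt{post\_dow} and \texttt{link\_share} are illustrative names for the drop-off counts, the post's day of week, and the weekly link fractions $l(d)$.
\begin{verbatim}
import numpy as np

def smooth_dropoff(counts, post_dow, link_share):
    # counts[delta]: in-links observed delta days after posting;
    # post_dow: day of week of the post (0 = Monday);
    # link_share[d]: fraction l(d) of all blog links on day of week d.
    smoothed = np.empty(len(counts), dtype=float)
    for delta, c in enumerate(counts):
        d = (post_dow + delta) % 7  # day of week at delay delta
        smoothed[delta] = c / link_share[d]
    return smoothed
\end{verbatim}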
We fit the power-law distribution with a cut-off in the tail (bottom
row). We fit on 30 days of data, since most posts in the graph have
complete in-link patterns for the 30 days following publication. We
performed the fitting over all posts and for all days of the week
separately, and found a stable power-law exponent of around $-1.5$,
which is exactly the value predicted by the model in which the bursty
nature of human behavior is a consequence of a decision-based queuing
process~\cite{barabasi05human} -- when individuals execute tasks based
on some perceived priority, the timing of the tasks is heavy tailed,
with most tasks being rapidly executed, whereas a few experience very
long waiting times.
\begin{observation}
The probability that a post written at time $t_p$ acquires a link at
time $t_p + \Delta$ is:
\[
p(t_p + \Delta) \propto \Delta^{-1.5}
\]
\end{observation}
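The fit itself is an ordinary least-squares line on log-log scales. A minimal sketch (in Python), with toy drop-off counts drawn from the fitted law itself rather than from our data:
\begin{verbatim}
import numpy as np

delta = np.arange(1, 31)           # first 30 days after the post
links = 1.0e4 * delta ** -1.5      # placeholder drop-off counts
slope, intercept = np.polyfit(np.log(delta), np.log(links), 1)
print(slope)                       # recovers approximately -1.5
\end{verbatim}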
\begin{figure}[tb]
\begin{center}
\begin{tabular}{c c c}
\epsfig{file=FIG/BlogNet-inDeg.eps, width=0.32\textwidth} &
\epsfig{file=FIG/BlogNet-outDeg.eps, width=0.32\textwidth} &
\epsfig{file=FIG/Blogs-OutLinks_InLinks.eps, width=0.32\textwidth} \\
(a) & (b) & (c) \\
\end{tabular}
\end{center}
\caption{(a, b) In- and out-degree distributions of the {Blog network}; (c) the
scatter plot of the number of in- and out-links of the blogs.}
\label{fig:indegree}
\end{figure}
\subsection{{Blog network}\ topology}
The first graph we consider is the {Blog network}. As illustrated in
Figure~\ref{fig:blogosphere}(b), every node represents a blog, and there
is a weighted directed edge between blogs $u$ and $v$, where the weight
of the edge corresponds to the number of posts from blog $u$ linking to
posts at blog $v$. The network contains $44,356$ nodes and $122,153$
edges. The sum of all edge weights is the number of all post to post
links ($245,404$). Connectivity-wise, half of the blogs belong to the
largest connected component and the other half are isolated blogs.
We show the in- and out-degree distribution in
Figure~\ref{fig:indegree}. Notice they both follow a heavy-tailed
distribution. The in-degree distribution has a very shallow power-law
exponent of $-1.7$, which suggests strong rich-get-richer phenomena. One
would expect that popular active blogs that receive lots of in-links
also sprout many out-links. Intuitively, the attention (number of
in-links) a blog gets should be correlated with its activity (number of
out-links). This does not seem to be the case. The correlation
coefficient between a blog's number of in- and out-links is only $0.16$,
and the scatter plot in Figure~\ref{fig:indegree}(c) suggests the same.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c c}
\epsfig{file=FIG/PostPerBlog-BlogPosts1.eps, width=0.45\textwidth} &
\epsfig{file=FIG/BlogNet-wgts.eps, width=0.45\textwidth} \\
\end{tabular}
\end{center}
\caption{Distribution of the number of posts per blog (a);
Distribution of the number of blog-to-blog links, i.e. the
distribution over the {Blog network}\ edge weights (b).}
\label{fig:postsDistr}
\end{figure}
The number of posts per blog, as shown in
Figure~\ref{fig:postsDistr}(a), follows a heavy-tailed distribution. The
deficit of blogs with a low number of posts and the knee at around 40
posts per blog can be explained by the fact that we are using a dataset
biased towards active blogs. However, our biased sample of the blogs
still maintains the power law in the number of blog-to-blog links (edge
weights of the {Blog network}), as shown in Figure~\ref{fig:postsDistr}(b). The
power-law exponent is $-2.7$.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c c}
\epsfig{file=FIG/PostNet-inDeg.eps, width=0.45\textwidth} &
\epsfig{file=FIG/PostNet-outDeg.eps, width=0.45\textwidth} \\
\end{tabular}
\end{center}
\caption{{Post network}\ in- and out-degree distribution.}
\label{fig:postDeg}
\end{figure}
\begin{figure*}[t]
\centering
\begin{tabular}{ccccccccccc}
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-002.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-003.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-004.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-005.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-006.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-007.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-008.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-009.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-010.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-011.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-012.ps} \\
$G_{2}$ & $G_{3}$ & $G_{4}$ & $G_{5}$ & $G_{6}$ & $G_{7}$ & $G_{8}$ &
$G_{9}$ & $G_{10}$ & $G_{11}$ & $G_{12}$ \\
\end{tabular}
\begin{tabular}{ccccccccccc}
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-014.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-015.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-016.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-018.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-029.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-034.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-083.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-100.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-107.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-117.ps} &
\includegraphics[scale=0.20, angle=180]{FIG/cascades/casc-124.ps} \\
$G_{14}$ & $G_{15}$ & $G_{16}$ & $G_{18}$ & $G_{29}$ & $G_{34}$ &
$G_{83}$ & $G_{100}$ & $G_{107}$ & $G_{117}$ & $G_{124}$ \\
\end{tabular}
\caption{Common cascade shapes, ordered by frequency.
A cascade with label $G_r$ has frequency rank $r$.}
\label{fig:shapes}
\end{figure*}
\subsection{{Post network}\ topology}
In contrast to the {Blog network}, the {Post network}\ is very sparsely connected. It
contains 2.2 million nodes and only $205,000$ edges. $98\%$ of the posts
are isolated, and the largest connected component accounts for $106,000$
nodes, while the second largest has only 153 nodes.
Figure~\ref{fig:postDeg} shows the in- and out-degree distributions of
the {Post network}\ which follow a power law with exponents $-2.1$ and $-2.9$,
respectively.
\subsection{Patterns in the cascades}
\label{sec:cascades}
We continue with the analysis of the topological aspects of the
information cascades formed when certain posts become popular and are
linked by other posts. We are especially interested in how this
process propagates, how large the cascades it forms are, and, as will
be shown later, what models mimic cascading behavior and
produce realistic cascades.
Cascades are subgraphs of the {Post network}\ that have a single root post,
are time increasing (source links an existing post), and present the
propagation of the information from the root to the rest of the cascade.
Given the {Post network}\ we extracted all information cascades using the
following procedure. We found all cascade initiator nodes, i.e. nodes
that have zero out-degree, and started following their in-links. This
process gives us a directed acyclic graph with a single root node. As
illustrated in Figure~\ref{fig:cascade}, it can happen that two cascades
merge, e.g.\ when a post gives a summary of multiple conversations
(cascades). For cascades that overlap, our cascade extraction procedure
will extract the nodes below the connector node multiple times (since
they belong to multiple cascades). To obtain examples of the common
shapes and count their frequency we used the algorithms described
in~\cite{jurij05patterns}.
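A minimal sketch of the extraction procedure (in Python) is given below, assuming the {Post network}\ is given as a list of time-ordered edges \texttt{(src, dst)}, where the newer post \texttt{src} links the older post \texttt{dst}; nodes below a connector are extracted once per cascade they belong to, as described above.
\begin{verbatim}
from collections import defaultdict, deque

def extract_cascades(edges):
    in_links = defaultdict(list)  # older post -> posts linking to it
    has_out = set()
    for src, dst in edges:
        in_links[dst].append(src)
        has_out.add(src)
    # Cascade initiators: posts with in-links but zero out-degree
    # (isolated posts form trivial cascades and carry no edges).
    initiators = set(in_links) - has_out
    cascades = []
    for root in initiators:       # breadth-first walk along in-links
        nodes, frontier = {root}, deque([root])
        while frontier:
            u = frontier.popleft()
            for v in in_links[u]:
                if v not in nodes:
                    nodes.add(v)
                    frontier.append(v)
        cascades.append(nodes)
    return cascades
\end{verbatim}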
\subsubsection{Common cascade shapes}
First, we give examples of common {Post network}\ cascade shapes in
Figure~\ref{fig:shapes}. A node represents a post and influence
flows from the top to the bottom. The top post was written first, other
posts then linked to it, and the process propagated. Graphs are ordered by
frequency, and the subscript of the label gives the frequency rank. Thus,
$G_{124}$ is the $124^{th}$ most frequent cascade, with 11 occurrences.
We find a total of $2,092,418$ cascades; 97\% of them are trivial
cascades (isolated posts), 1.8\% are the smallest non-trivial cascades
($G_2$), and the remaining 1.2\% of the cascades are topologically more
complex.
Most cascades can essentially be constructed from instances of stars and
trees, which can model more complicated behavior like that shown in
Figure~\ref{fig:shapes}. Cascades tend to be wide, and not too deep.
Structure $G_{107}$, which we call a \emph{cite-all chain}, is
especially interesting. Each post in a chain refers to every post before
it in the chain.
We also find that the cascades in the graph tend to take certain
shapes preferentially. Notice also that cascade frequency rank does not
simply decrease as a function of cascade size. For example, as shown
on Figure~\ref{fig:shapes}, a 4-star ($G_4$) is more common than a chain
of 3 nodes ($G_5$). In general stars and shallow bursty cascades are the
most common type of cascades.
\subsubsection{Cascade topological properties}
What is the common topological pattern in the cascades? We next examine
the general cascade behavior by measuring and characterizing the
properties of real cascades.
\begin{figure*}[t]
\begin{center}
\begin{tabular}{c c c}
\epsfig{file=FIG/Casc-OutDegD.eps, width=0.335\textwidth} &
\epsfig{file=FIG/Casc-InDegD.eps, width=0.335\textwidth} &
\epsfig{file=FIG/CascInDegAtL.eps, width=0.28\textwidth} \\
(a) Out-degree & (b) In-degree & (c) In-degree at level $L$\\
\end{tabular}
\end{center}
\caption{Out- and in-degree distributions over all cascades
extracted from the {Post network}\ (a, b), and the in-degree distribution
at level $L$ of the cascade (c). Note all distributions are
heavy tailed and the in-degree distribution is remarkably stable across
the levels.}
\label{fig:cascInOutDeg}
\end{figure*}
First we observe the degree distributions of the cascades. This means
that from the {Post network}\ we extract all the cascades and measure the
overall degree distribution. Essentially we work with a {\em bag of
cascades}, where we treat each cascade as a separate disconnected sub-graph
in a large network.
Figure~\ref{fig:cascInOutDeg}(a) plots the out-degree distribution of
the bag of cascades. Notice the cascade out-degree distribution is
truncated, which is the result of the imperfect link extraction algorithm
and the upper bound on the post out-degree (500).
Figure~\ref{fig:cascInOutDeg}(b) shows the in-degree distribution of the
bag of cascades, and (c) plots the in-degree distribution of nodes at
level $L$ of the cascade. A node is at level $L$ if it is $L$ hops away
from the root (cascade initiator) node. Notice that the in-degree
exponent is stable and does not change much given the level in the
cascade. This means that posts still attract attention (get linked) even
if they are somewhat late in the cascade and appear towards the bottom
of it.
\begin{figure*}[t]
\begin{center}
\begin{tabular}{c c c}
\epsfig{file=FIG/RootedTr-All.eps, width=0.33\textwidth} &
\epsfig{file=FIG/cnt-vsStarNodes.eps, width=0.33\textwidth} &
\epsfig{file=FIG/cnt-vsChainNodes.eps, width=0.33\textwidth} \\
(a) All cascades & (b) Star cascade & (c) Chain cascade \\
\end{tabular}
\end{center}
\caption{Size distribution over all cascades (a), only stars (b), and chains
(c). They all follow heavy tailed distributions with increasingly
steeper slopes.
}
\label{fig:cascSzDist}
\end{figure*}
Next, we ask what distribution cascade sizes follow. Does the
probability of observing a cascade on $n$ nodes decrease exponentially
with $n$? We examine the {\em Cascade Size Distributions} over the bag
of cascades extracted from the {Post network}. We consider three different
distributions: the overall cascade size distribution, and separate size
distributions of star and chain cascades. We chose stars and chains
since they are well defined and, given the number of nodes in the
cascade, there is no ambiguity in the topology of a star or a chain.
Figure~\ref{fig:cascSzDist} gives the Cascade Size Distribution plots.
Notice all follow a heavy-tailed distribution. We fit a power-law
distribution and observe that overall cascade size distribution has
power-law exponent of $\approx -2$ (Figure~\ref{fig:cascSzDist}(a)),
stars have $\approx -3.1$ (Figure~\ref{fig:cascSzDist}(b)), and chains
are small and rare and decay with exponent $\approx -8.5$
(Fig.~\ref{fig:cascSzDist}(c)). Also notice there are outlier chains
(Fig.~\ref{fig:cascSzDist}(c)) that are longer than expected. We
attribute this to possible flame wars between blogs, where authors
publish posts that always refer to the other author's latest post,
creating unusually long chains.
\begin{observation}
Probability of observing a cascade on $n$ nodes follows a Zipf
distribution:
\[
p(n) \propto n^{-2}
\]
\end{observation}
As suggested by Figure~\ref{fig:shapes}, most cascades follow tree-like
shapes. To further verify this, we examine in
Figure~\ref{fig:cascDplDiam} how the diameter, defined as
the length of the longest undirected path in the cascade, and the
relation between the number of nodes and the number of edges in the
cascade change with the cascade size.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c c}
\epsfig{file=FIG/casc-dpl.eps, width=0.45\textwidth} &
\epsfig{file=FIG/casc-diam.eps, width=0.45\textwidth} \\
(a) Edges vs. nodes & (b) Diameter \\
\end{tabular}
\end{center}
\caption{Diameter and the number of edges vs. the cascade size.
Notice that diameter increases logarithmically with the cascade
size, while the number of edges basically grows linearly with the
cascade size. This suggests cascades are mostly tree-like structures.}
\label{fig:cascDplDiam}
\end{figure}
This gives further evidence that the cascades are mostly tree-like. We
plot the number of nodes in the cascade vs.\ the number of edges in the
cascade in Figure~\ref{fig:cascDplDiam}(a). Notice the number of edges
$e$ in the cascade increases almost linearly with the number of nodes
$n$ ($e \propto n^{1.03}$). This suggests that the average degree in the
cascade remains constant as the cascade grows, which is a property of
trees and stars. Next, we also measure cascade diameter vs.\ cascade
size (Figure~\ref{fig:cascDplDiam}(b)). We plot on linear-log scales and
fit a logarithmic function. Notice the diameter increases
logarithmically with the size of the cascade, which means the cascade
needs to grow exponentially to gain a linear increase in diameter; this
is again a property of balanced trees and very sparse graphs.
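Since the cascades are mostly tree-like, the diameter of each cascade can be computed exactly with two breadth-first searches, a standard trick for trees. A minimal sketch (in Python), assuming an undirected adjacency dictionary:
\begin{verbatim}
from collections import deque

def bfs_farthest(adj, start):
    # Return the node farthest from start and its distance.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    far = max(dist, key=dist.get)
    return far, dist[far]

def tree_diameter(adj):
    # For a tree: BFS from any node to find one endpoint of the
    # longest path, then BFS again from that endpoint.
    u, _ = bfs_farthest(adj, next(iter(adj)))
    _, d = bfs_farthest(adj, u)
    return d
\end{verbatim}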
\subsubsection{Collisions of cascades}
By the definition we adopt in this paper, a cascade has a single
initiator node, but in real life one would also expect cascades to
collide and merge. There are connector nodes which are the first to
bring together separate cascades. As the cascades merge, all the nodes
below the connector node belong to multiple cascades. We measure
the distribution over the connector nodes and over the nodes that belong to
multiple cascades.
First, we consider only the connector nodes and plot the distribution
over how many cascades a connector joins
(Figure~\ref{fig:connectors}(a)). We only consider nodes with out-degree
greater than 1, since nodes with out-degree 1 are trivial connectors --
they are connecting the cascade they belong to. But there are still
posts that have out-degree greater than 1, and connect only one cascade.
These are the posts that point multiple out-links inside the same
cascade (e.g. $G_{12}$ and $G_{107}$ of Figure~\ref{fig:shapes}). The
dip the at the number of joined cascades equal to 1 in
Figure~\ref{fig:connectors}(a) gives the number of such nodes.
As cascades merge, all the nodes that follow belong to multiple
cascades. Figure~\ref{fig:connectors}(b) gives the distribution over the
number of cascades a node belongs to. Here we consider all the nodes and
find that $98\%$ of all nodes belong to a single cascade, while the rest
of the distribution follows a power law with exponent $-2.2$.
\begin{figure}[h]
\begin{center}
\begin{tabular}{c c}
\epsfig{file=FIG/connectors-all.eps, width=0.45\textwidth} &
\epsfig{file=FIG/CascPerNode.eps, width=0.45\textwidth} \\
\end{tabular}
\end{center}
\caption{Distribution of joined cascades by the connector nodes
(a). We only consider nodes with out-degree greater than 1.
Distribution of a number of cascades a post belongs to (b);
$98\%$ of posts belong to a single cascade.}
\label{fig:connectors}
\end{figure}
\section{Proposed model and insights}
\label{sec:models}
What is the underlying hidden process that generates cascades? Our goal
here is to find a generative model that generates cascades with
properties observed in section~\ref{sec:cascades}
(Figures~\ref{fig:cascInOutDeg} and~\ref{fig:cascSzDist}). We aim for
a simple and intuitive model with the smallest possible number of parameters.
\subsection{{Cascade generation model}}
We present a conceptual model for generating information cascades that
produces cascade graphs matching several properties of real cascades.
Our model is intuitive and requires only a single parameter, which
corresponds to how interesting (easily spreading) conversations on the
blogosphere are in general.
Intuitively, cascades are generated by the following principle. A post
is published on some blog, other bloggers read the post, some create new
posts, and link to the source post. This process continues and creates a
cascade. One can think of a cascade as the graph created by the spread
of a virus over the {Blog network}. This means that the initial post
corresponds to infecting a blog. As the cascade unfolds, the virus
(information) spreads over the network and leaves a trail. To model this
process we use a single parameter $\B$ that measures how infectious
posts on the blogosphere are. Our model is very similar to the SIS
(susceptible -- infected -- susceptible) model from
epidemiology~\cite{Hethcote:2000}.
Next, we describe the model. Each blog is in one of two states: {\em
infected} or {\em susceptible}. If a blog is in the infected state this
means that the blogger just posted a post, and the blog now has a chance
to spread its influence. Only blogs in the susceptible (not infected)
state can get infected. When a blog successfully infects another blog, a
new node is added to the cascade, and an edge is created between the
node and the source of infection. The source immediately recovers, i.e.
a node remains in the infected state for only one time step. This gives
the model the ability to infect a blog multiple times, which corresponds to
multiple posts from the blog participating in the same cascade.
More precisely, a single cascade of the {\em {Cascade generation model}} is generated by the
following process (a code sketch follows the list).
\begin{enumerate}
\item[(i)] Uniformly at random pick blog $u$ in the {Blog network}\ as a
starting point of the cascade, set its state to {\em
infected}, and add a new node $u$ to the cascade graph.
\item[(ii)] Blog $u$ that is now in infected state, infects each
of its uninfected directed neighbors in the {Blog network}\
independently with probability $\B$. Let $\{v_1,\dots, v_n\}$
denote the set of infected neighbors.
\item[(iii)] Add new nodes $\{v_1,\dots, v_n\}$ to the cascade and
link them to node $u$ in the cascade.
\item[(iv)] Set state of node $u$ to not infected. Continue
recursively with step (ii) until no nodes are infected.
\end{enumerate}
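For concreteness, the following is a minimal sketch of this process (in Python). It takes the {Blog network}\ as adjacency lists and glosses over the bookkeeping of simultaneous infections within a single time step, so it illustrates the mechanism rather than reproducing our exact simulation code.
\begin{verbatim}
import random

def generate_cascade(adjacency, beta, rng=random):
    # adjacency: blog -> list of blogs it links to (the Blog network).
    root = rng.choice(list(adjacency))        # step (i)
    cascade_edges = []
    infected = [root]
    while infected:                            # steps (ii)-(iv) loop
        next_infected = []
        for u in infected:
            for v in adjacency.get(u, ()):
                if rng.random() < beta:        # independent infection
                    cascade_edges.append((u, v))  # step (iii)
                    next_infected.append(v)
        infected = next_infected               # step (iv): u recovers
    return cascade_edges
\end{verbatim}
Repeatedly drawing cascades in this way and tallying their sizes is then all that is needed to mirror the simulations described next.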
We make a few observations about the proposed model. First, note that
the blog immediately recovers and thus can get infected multiple times.
Every time a blog gets infected a new node is added to the cascade. This
accounts for multiple posts from the blog participating in the same
cascade. Second, we note that in this version of the model we do not try
to account for topics or model the influence of particular blogs. We
assume that all blogs and all conversations have the same value of the
parameter $\B$. Third, the process as described above generates cascades
that are trees. This is not a big limitation since we observed that most
of the cascades are trees or tree-like. In the spirit of our notion of
a cascade we assume that cascades have a single starting point, and do not
model collisions of cascades.
\subsection{Validation of the model}
We validate our model by extensive numerical simulations. We compare the
obtained cascades against the real cascades extracted from the {Post network}.
We find that the model matches the cascade size and degree
distributions.
We use the real {Blog network}\ over which we propagate the cascades. Using
the {Cascade generation model}\ we also generate the same number of cascades as we found in
{{Post network}} ($\approx 2$ million). We tried several values of the $\B$
parameter, and in the end decided to use $\B = 0.025$. This means that
the probability of the cascade spreading from an infected to an uninfected
blog is $2.5\%$. We simulated our model 10 times, each time with a
different random seed, and report the average.
\begin{figure}[t]
\centering
\begin{tabular}{ccccccccccc}
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-002.ps} &
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-003.ps} &
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-004.ps} &
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-005.ps} &
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-006.ps} &
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-007.ps} &
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-008.ps} &
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-009.ps} &
\includegraphics[scale=0.30, angle=180]{FIG/cascades/cgm025-010.ps} \\
\end{tabular}
\caption{Top 10 most frequent cascades as generated by the {Cascade generation model}.
Notice similar shapes and frequency ranks as in Figure~\ref{fig:shapes}.}
\label{fig:cgmShapes}
\end{figure}
First, we show the top 10 most frequent cascades (ordered by frequency
rank) as generated by the {Cascade generation model}\ in Figure~\ref{fig:cgmShapes}. Comparing
them to the most frequent cascades from Figure~\ref{fig:shapes}, we notice
that the top 7 cascades are matched exactly (with the exception of the ranks of
$G_4$ and $G_5$ being swapped), and the rest of the cascades can also be found in the real
data.
Next, we show the results on matching the cascade size and degree
distributions in Figure~\ref{fig:modelRes}. We plot the true
distributions of the cascades extracted from the {Post network}\ with dots,
and the results of our model are plotted with a dashed line. We compare
four properties of cascades: (a) overall cascade size distribution, (b)
size distribution of chain cascades, (c) size distribution of stars, and
(d) in-degree distribution over all cascades.
Notice the very good agreement between the real and simulated cascades
in all plots. The distribution of cascade sizes is matched best.
Chains and stars are slightly under-represented, especially in the tail
of the distribution where the variance is high. The in-degree
distribution is also matched nicely, with the exception of a spike that
can be attributed to a set of outlier blogs, all with in-degree 52. Note
that cascades generated by the {Cascade generation model}\ are all trees, and thus the
out-degree of every node is 1.
\begin{figure*}[thb]
\begin{center}
\begin{tabular}{c c}
\epsfig{file=FIG/All-s0w0f1_0-50.eps, width=0.45\textwidth} &
\epsfig{file=FIG/Chain-s0w0f1_0-50.eps, width=0.45\textwidth} \\
(a) All cascades & (b) Chain cascades \\
\epsfig{file=FIG/Star-s0w0f1_0-50.eps, width=0.45\textwidth} &
\epsfig{file=FIG/InDeg-s0w0f1_0-50.eps, width=0.45\textwidth} \\
(c) Star cascades & (d) In-degree distribution\\
\end{tabular}
\end{center}
\caption{Comparison of the true data and the model. We plot the
distribution of the true cascades with circles and the estimate of
our model with a dashed line. Notice the remarkable agreement between the
data and the prediction of our simple model.}
\label{fig:modelRes}
\end{figure*}
\subsection{Variations of the model}
We also experimented with other, more sophisticated versions of the
model. Namely, we investigated various strategies of selecting a
starting point of the cascade, and using edge weights (number of
blog-to-blog links) to further boost cascades.
We considered selecting the cascade starting blog based on the blog's
in-degree, in-weight, or number of posts. We experimented with variants
where the probability $\B$ of propagating via a link is not constant but
depends on the weight of the link (the number of references between the
blogs). We also considered versions of the model where the probability
$\B$ decays exponentially as the cascade spreads away from the origin.
We found that choosing a cascade starting blog in a biased way
results in cascades that are too large and in distributions of
cascade sizes that are not heavy-tailed. Intuitively, this can be
explained by the fact that popular blogs are in the core of the
{Blog network}, and it is very easy to create large cascades when starting in
the core. A similar problem arises when scaling $\B$ with the edge
weight. This can also be explained by the fact that we do not consider
specific topics or associate each edge with a topic (some blog-to-blog
edges may be very topic-specific), and thus we allow the cascade to
spread over all edges regardless of the particular reason (the topic)
for which the edge between the blogs exists. This is especially true for
blogs like BoingBoing\footnote{\url{www.boingboing.net}} that are very
general and just a collection of ``wonderful things''.
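For concreteness, the following Python sketch implements the baseline process as we have described it: a cascade starts at a blog chosen uniformly at random and spreads along each out-going link of the {Blog network}\ independently with probability $\B$, infecting every blog at most once (so that the generated cascade is a tree). All function and variable names here are ours, chosen for illustration only.
\begin{verbatim}
import random
from collections import deque

def generate_cascade(out_links, beta, rng=random):
    # out_links: dict mapping a blog to the list of blogs it links to
    # beta: probability of propagating across a single edge
    start = rng.choice(list(out_links))   # unbiased choice of the start
    infected = {start}
    tree = {start: []}                    # parent blog -> children
    frontier = deque([start])
    while frontier:
        blog = frontier.popleft()
        for neighbor in out_links.get(blog, []):
            # each edge is tried independently with probability beta;
            # a blog is infected at most once, so the cascade is a tree
            if neighbor not in infected and rng.random() < beta:
                infected.add(neighbor)
                tree[blog].append(neighbor)
                tree[neighbor] = []
                frontier.append(neighbor)
    return tree
\end{verbatim}
The biased variants discussed above amount to replacing the uniform choice of \texttt{start}, or to letting \texttt{beta} depend on the edge weight or on the distance from the origin.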
\section{Discussion}
\label{sec:discussion} Our finding that the popularity of posts
drops off with a power law distribution is interesting since intuition
might lead one to believe that people would ``forget'' a post topic in
an exponential pattern. However, since linking patterns are based on the
behaviors of individuals over several instances, much like other
real-world patterns that follow power laws such as traffic to Web pages
and scientists' response times to letters~\cite{Vazquez:2006}, it is
reasonable to believe that a high number of individuals link posts
quickly, and later linkers fall off with a heavy-tailed pattern.
Our findings have potential applications in many areas. One could argue
that the conversation mass metric, defined as the total number of posts
in all conversation trees below the point in which the blogger
contributed, summed over all conversation trees in which the blogger
appears, is a better proxy for measuring influence. This metric captures
the mass of the total conversation generated by a blogger, while number
of in-links captures only direct responses to the blogger's posts.
For example, we found that BoingBoing, which is a very popular blog about
amusing things, is engaged in many cascades. In fact, 85\% of all
BoingBoing posts were cascade initiators. The cascades generally did not
spread very far but were wide (e.g., $G_{10}$ and $G_{14}$ in
Fig.~\ref{fig:shapes}). On the other hand, $53\%$ of posts from the
political blog MichelleMalkin\footnote{\url{www.michellemalkin.com}}
were cascade initiators, but the cascades here were deeper and generally
larger (e.g., $G_{117}$ in Fig.~\ref{fig:shapes}) than those of
BoingBoing.
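Returning to the conversation mass metric defined above, a short Python sketch of how it could be computed from a collection of cascade trees is given below; the data representation and all names are ours, for illustration only.
\begin{verbatim}
def conversation_mass(trees, author_of, blog):
    # trees: list of cascades, each a dict: parent post -> list of children
    # author_of: dict mapping a post to the blog that wrote it
    def posts_below(tree, node):
        # number of posts strictly below `node` in the conversation tree
        return sum(1 + posts_below(tree, child)
                   for child in tree.get(node, []))

    mass = 0
    for tree in trees:
        posts = set(tree) | {c for kids in tree.values() for c in kids}
        mass += sum(posts_below(tree, p)
                    for p in posts if author_of.get(p) == blog)
    return mass
\end{verbatim}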
\section{Conclusion}
\label{sec:conclusion}
We analyzed one of the largest available collections of blog
information, trying to find how blogs behave and how information
propagates through the blogosphere. We studied two structures, the
``{Blog network}'' and the ``{Post network}''. Our contributions are two-fold: (a)
The discovery of a wealth of temporal and topological patterns and (b)
the development of a generative model that mimics the behavior of real
cascades. In more detail, our findings are summarized as follows:
\begin{itemize}
\item {\em Temporal Patterns:} The decline of a post's popularity
      follows a power law. The slope is $\approx -1.5$, the slope
      predicted by a very recent theory of heavy tails in human
      behavior~\cite{barabasi05human}.
\item {\em Topological Patterns:} Almost any metric we examined
follows a power law: size of cascades, size of blogs, in- and
out-degrees. To our surprise, the number of in- and out-links of
a blog are not correlated. Finally, stars and chains are basic
components of cascades, with stars being more common.
\item {\em Generative model:} Our idea is to reverse-engineer the
underlying social network of blog-owners, and to treat the
influence propagation between blog-posts as a flu-like virus,
that is, the SIS model in epidemiology. Despite its simplicity,
our model generates cascades that match very well the real
cascades with respect to in-degree distribution, cascade size
distribution, and popular cascade shapes.
\end{itemize}
Future research could try to include the content of the posts, to help
us find even more accurate patterns of influence propagation. Another
direction is to spot anomalies and link-spam attempts, by noticing
deviations from our patterns.
\section*{Acknowledgements} This material is based upon work
supported by the National Science Foundation under Grants No.
SENSOR-0329549 and medium ITR IIS-0534205, and under the auspices of the U.S.
Department of Energy by University of California Lawrence Livermore
National Laboratory under contract No.W-7405-ENG-48. This work is also
partially supported by the Pennsylvania Infrastructure Technology
Alliance (PITA), an IBM Faculty Award, a Yahoo Research Alliance Gift,
with additional funding from Intel and NTT. Additional funding was
provided by a generous gift from Hewlett-Packard. Jure Leskovec was
partially supported by a Microsoft Research Graduate Fellowship, and
Mary McGlohon by a National Science Foundation Graduate Fellowship.
Any opinions, findings, and conclusions or recommendations expressed in
this material are those of the author(s) and do not necessarily reflect
the views of the National Science Foundation, or other funding parties.
\section{Introduction}
Markov Chain Monte Carlo (MCMC) methods have been widely applied in many areas for more than 40 years \citep{hastings70monte}. In particular, they are successful when the target distribution $\pi$ is available only up to a normalizing constant. To sample from such a distribution, various MCMC algorithms have been developed. A typical MCMC algorithm is designed using a transition kernel $P$ that has $\pi$ as stationary distribution.
See for example \citet{meyn93markov,robert04monte,roberts04general}, and the references therein.
Constructing a transition kernel to sample efficiently from a given distribution, although conceptually easy, is rather difficult in practice. The difficulty is that generic algorithms such as the Metropolis--Hastings algorithm require a careful choice and tuning of the proposal kernel.
The development of adaptive MCMC (AMCMC) methods stems partly from these difficulties. Instead of having a fixed Markov kernel $P$, at each round $n$ an AMCMC algorithm selects a kernel $P_{\what\theta_n}$ from a family of Markov kernels $\{P_\theta\}_{\theta\in\Theta}$, where the value (parameter) $\what\theta_n$ is computed based on possibly all the samples generated up to time $n$, so that the transition kernel is automatically self-adapted. This approach is very appealing in practice, as it frees the users from parameter tuning, and provides a better exploration-exploitation balance in the performance.
As a consequence, AMCMC algorithms often yield smaller asymptotic errors in Monte Carlo estimations. The theoretical and methodological study of AMCMC has drawn the attention of many researchers lately. See for example the survey by \citet{atchade11adaptive} and the references therein.
In this paper, we investigate the convergence rates of two AMCMC algorithms: the {\it Importance Resampling MCMC (IRMCMC)} algorithm introduced by \citet{atchade09resampling}, and the {\it Equi-Energy (EE) sampler} by \citet{kou06equi}. The IRMCMC algorithm is also referred to as the {\it interacting annealing} algorithm \citep{bercu12fluctuations}. For the EE sampler, we actually focus on a simplified version, which is sometimes referred to as the {\it interacting tempering} algorithm \citep{fort11central}.
Throughout the paper we denote by $\{X_n\}_{n\in{\mathbb N}}$ the random process generated by either of these algorithms. A common feature is that in either case, the dynamics of $\{X_n\}_{n\in{\mathbb N}}$ is driven by a sequence of random measures $\what\theta_n$ computed from an auxiliary chain $\{Y_n\}_{n\in{\mathbb N}}$. Most of the theoretical results available so far focused on the convergence of marginal distributions
\[
{\cal L}_{X_n}\Rightarrow\pi,
\] and on the law of large numbers:
\[
\lim_{n\to\infty}\frac1n \summ i1n f(X_i) = \pi(f) \mbox{ almost surely.}
\]
See for example \citet{andrieu08note,andrieu11nonlinear}, \citet{atchade09resampling,atchade10cautionary} and \citet{fort11convergence} (there is a mistake in the proof of \citep{atchade10cautionary}, pointed out in \citep{fort11convergence}).
Central limit theorems for such AMCMC algorithms have only been considered recently by \citet{fort11central} and \citet{bercu09functional,bercu12fluctuations}. In short, introducing the auxiliary chain makes the stochastic process no longer Markov, which raises considerable technical difficulties. We point out that there are other classes of AMCMC algorithms, for which the parameters take values in finite-dimensional spaces (e.g.~the adaptive Metropolis algorithm introduced by~\citet{haario99adaptive}). The analysis of such algorithms is outside the scope of this paper.
In this paper, we study the {\it convergence rate} (or mixing time) of the IRMCMC and EE algorithms. That is, we provide upper bounds on the distances between ${\cal L}_{X_n}$ (the distribution of $X_n$) and the target distribution. This type of rate differs from, and complements, the rates of convergence obtained in central limit theorems. Mixing time results provide information on the burn-in time of the algorithm, whereas central limit theorems (such as those mentioned above) deal with the rate of convergence and fluctuations of Monte Carlo averages.
Apart from the work of \citet{andrieu07efficiency}, who considered AMCMC with a finite-dimensional parameter, the mixing time of AMCMC has, to the best of our knowledge, not been investigated so far.
We show that the IRMCMC algorithm has a convergence rate of order $O(n^{-1})$. In particular, we provide a simple example for which the convergence rate has lower bound $1/n$. That is, one should not expect a rate better than $O(n^{-1})$ in general. We further show that for the $m$-tuple IRMCMC (to be defined in Section~\ref{sec:mIRMCMC}), the convergence rate is also within $O(n^{-1})$.
For the EE sampler, under some regularity conditions, we show that the rate of convergence is $O(n^{-1/2})$ in terms of a slightly weaker norm than the total variation distance. Our results are qualitative, in that the constants in the rates are not explicit. But they clarify what is known about these algorithms, and suggest in particular that longer burn-in should be selected for the EE sampler.
The rest of the paper is organized as follows. The remainder of the introduction gives a general description of the algorithms considered in the paper and introduces some notation. Section~\ref{sec:IRMCMC} is devoted to the IRMCMC algorithm. The convergence rate is established in Section~\ref{sec:IRMCMCrate}, and for the multiple IRMCMC in Section~\ref{sec:mIRMCMC}. Section~\ref{sec:EE} is devoted to the EE sampler. The convergence rate is established in Section~\ref{sec:EErate}. A remark on the connection to parallel tempering is provided in Section~\ref{sec:mEE}.
\subsection{A class of AMCMC algorithms}
We describe the general framework of AMCMC considered in this paper.
Let ${\cal X}$ denote a general state space. An AMCMC algorithm is a stochastic process $\{(X_n,Y_n)\}_{n\geq 0}$ in ${\cal X}\times{\cal X}$, designed such that the main chain $X_n$ converges to the target distribution $\pi$ in a certain sense to be described precisely later. Let ${\cal P}$ denote the set of all probability measures on ${\cal X}$, and $\{K_\theta,\;\theta\in{\cal P}\}$ a set of transition kernels on ${\cal X}$. Let $P$ be a transition kernel on ${\cal X}$ with invariant probability measure $\pi$. Starting from $P$ and $\{K_\theta,\theta\in{\cal P}\}$, we consider the family of transition kernels $P_\theta$ given by
\[
P_\theta(x,\cdot) = (1-\epsilon)P(x,\cdot) + \epsilon K_{\theta}(x,\cdot),\;\;\theta\in{\cal P},\;x\in{\cal X}.
\]
The dynamics of the AMCMC algorithms considered in this paper can be unified as follows: given ${\cal F}_n = \sigma(X_0,\dots,X_n,Y_0,\dots,Y_n)$, $X_{n+1}$ and $Y_{n+1}$ are conditionally independent, and for all bounded measurable functions $h:{\cal X}\to{\mathbb R}$,
\[{\mathbb E}_x \left(h(X_{n+1})\mid {\cal F}_n,\,\{Y_k\}_{k\geq n+1}\right)={\mathbb E}_x \left(h(X_{n+1})\mid {\cal F}_n\right)= P_{\wt\theta_n}h(X_n), \mbox{ almost surely}
\]
where $\wt \theta_n(\cdot)\ensuremath{\stackrel{\mathrm{def}}{=}} n^{-1}\sum_{j=1}^n \delta_{Y_j}(\cdot)$ denotes the empirical measure associated to the auxiliary chain $\{Y_n\}_{n\geq 0}$. Each algorithm is determined by the choice of the kernels $K_\theta$. For the IRMCMC,
\[
K_\theta(x,A)=\frac{\int_A w(z)\theta({\rm d} z)}{\int_{{\cal X}}w(z){\rm d} z},
\]
where $w(z)={{\rm d}\pi}/{{\rm d}\pi_Y}(z)$ (see Section~\ref{sec:notation} for our convention on $\pi$ and ${\rm d}\pi$), while for the EE, the following choice is made
\[
K_\theta(x,A)=\textbf{1}_A(x) + \int_{{\cal X}}\left(1\wedge\frac{\pi(z)\pi_Y(x)}{\pi(x)\pi_Y(z)}\right)\left(\textbf{1}_A(z)-\textbf{1}_A(x)\right)\theta({\rm d} z).
\]
In both cases, $\pi_Y$ is an auxiliary distribution chosen such that it is relatively close to $\pi$, and easy to sample from (at least easier than $\pi$). We assume that the evolution of the auxiliary chain is independent of the main chain in the sense that for all bounded measurable functions $h:{\cal X}\to{\mathbb R}$,
\[
{\mathbb E}(h(Y_{n+1})\mid {\cal F}_n) = {\mathbb E}(h(Y_{n+1})\mid Y_0,\dots,Y_n), \mbox{ almost surely.}
\]
The description of the dynamics of the joint process $\{(X_n,Y_n)\}_{n\geq 0}$ is completed by specifying the dynamics of $Y_{n+1}\mid \F_n$, which is either a Markov chain with target distribution $\pi_Y$, or the main chain of another adaptive MCMC with target distribution $\pi_Y$, not necessarily Markov.
The rationale of these algorithms can be viewed as follows. For $\theta=\theta_\star=\pi_Y$, the Markov chain with transition kernel $P_{\theta_\star}$ has nice mixing properties, due to the choice of $\pi_Y$. Unfortunately, however, it is not possible to implement the kernel $P_{\theta_\star}$ directly. The idea here therefore is (i) to run an auxiliary chain $\{Y_n\}_{n\geq 0}$ with limiting distribution $\pi_Y$, so that the empirical measure $\wt \theta_n$ converges to $\theta_\star$, and (ii) to sample $X_{n+1}$ from
\[
P_{\wt\theta_n}(X_n,\cdot) = (1-\epsilon)P(X_n,\cdot) + \epsilon K_{\wt\theta_n}(X_n,\cdot)\,,
\]
which approximates $P_{\theta_\star}(X_n,\cdot)$ as $n\to\infty$.
\subsection{Notation}\label{sec:notation}
We assume that the state space ${\cal X}$ is a Polish space equipped with a metric $\textsf{d}$, and ${\cal B}$ is the associated Borel $\sigma$-algebra. In addition, $({\cal X},{\cal B})$ is a measure space with a reference $\sigma$-finite measure, which we denote for short by ${\rm d} x$. Let $\pi$ and $\pi_Y$ be probability measures on $({\cal X},{\cal B})$.
We assume that $\pi$ and $\pi_Y$ are both absolutely continuous with respect to ${\rm d} x$ and with a little abuse of notation, we also use $\pi$ and $\pi_Y$ to denote the density respectively. That is, we write $\pi({\rm d} x) = \pi(x){\rm d} x$ and similarly for $\pi_Y$. For a transition kernel $Q$, a measure $\nu$ and a function $h$, we shall write $\nu Q(\cdot)\ensuremath{\stackrel{\mathrm{def}}{=}} \int\nu({\rm d} z)Q(z,\cdot)$, and $Qh(\cdot)\ensuremath{\stackrel{\mathrm{def}}{=}} \int Q(\cdot,{\rm d} z)h(z)$. We denote $\what\pi_{Y,n}$ the empirical measure associated to the auxiliary chain $\indn Y$ defined by
\[
\what \pi_{Y,n}(\cdot) \ensuremath{\stackrel{\mathrm{def}}{=}} \frac1n\summ i1n\delta_{Y_i}(\cdot)\,.
\]
At times, we also use the notation $\wt \theta_n(\cdot)$ to denote $\what \pi_{Y,n}(\cdot)$. For functions $f:{\cal X}\to{\mathbb R}$, we write
\[
\what\pi_{Y,n}(\wb f) \ensuremath{\stackrel{\mathrm{def}}{=}} \what\pi_{Y,n}(f) - \pi_Y(f).
\]
We let $C$ denote general constants that do not depend on $n$, but may change from line to line.
\newpage
\section{Importance Resampling MCMC}\label{sec:IRMCMC}
We consider the {importance-resampling Markov Chain Monte Carlo} method described in \citet{atchade09resampling}.
\begin{Algo}[IRMCMC]\label{algo:IRMCMC}
Fix $\epsilon\in(0,1)$. Pick arbitrary $X_0 = x_0$ and $Y_0 = y_0$. Let $P$ be an arbitrary Markov kernel with stationary distribution $\pi$. At each round $n$, $X_n$ and $Y_n$ are conditionally independent given $\F_{n-1}$, and
\[
X_{n}\mid \F_{n-1} \sim \left\{
\begin{array}{l@{\mbox{ w.p.~}}l}
P(X_{n-1},\cdot) & 1-\epsilon\,,\\
\what\theta_{n-1}(\cdot) & \epsilon\,,
\end{array}\right.
\]
where $\what\theta_n$ is the (randomly) weighted empirical distribution defined by
\begin{equation}\label{eq:theta}
\what\theta_n(\cdot) = \summ i1{n}\frac{\wt w(Y_i)}{\summ j1{n}\wt w(Y_j)}\delta_{Y_i}(\cdot)=\frac{\int_\cdot \wt w(z)\what\pi_{Y,n}({\rm d} z)}{\int_{{\cal X}} \wt w(z)\what\pi_{Y,n}({\rm d} z)},
\end{equation}
with $\wt w(y) \propto \pi(y)/\pi_Y(y)=: w(y)$, and $\what\theta_0 = \delta_{y_0}$. We assume $|w|_\infty\ensuremath{\stackrel{\mathrm{def}}{=}}\sup_{x\in{\cal X}}|w(x)|<\infty$.
\end{Algo}
\begin{Rem}\label{rem:w}
The assumption on the boundedness of $w$ is not too restrictive. Indeed, very often in practice the un-normalized density function $\wt\pi$ of $\pi$ is bounded, and the auxiliary chain is set up with stationary distribution $\pi_Y\propto\wt\pi_Y$, where $\wt\pi_Y = \wt\pi^T$ with $T\in(0,1)$. In this case, $\wt w = \wt \pi/\wt \pi_Y$ is bounded and thus so is $w$.
\end{Rem}
Equivalently, for any bounded function $f:{\cal X}\to{\mathbb R}$,
\[
{\mathbb E} (f(X_{n+1})\mid {\cal F}_n) = P_{\what\theta_n}f(X_n) \mbox{ almost surely, }
\]
where for all probability measures $\theta$ on ${\cal X}$, $P_\theta(x,\cdot)$ is defined by
\begin{equation}\label{eq:Ptheta}
P_\theta(x,\cdot) = (1-\epsilon)P(x,\cdot) + \epsilon \theta(\cdot)\,.
\end{equation}
For the time being, we make no particular assumption on the dynamics of the auxiliary chain $\{Y_n\}_{n\geq 0}$.
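For concreteness, the following Python sketch implements one possible reading of Algorithm~\ref{algo:IRMCMC}. The base kernel \texttt{P\_step}, the auxiliary sampler \texttt{Y\_step} and the un-normalized weight \texttt{w} are user-supplied placeholders, and all names are ours; for simplicity the initial point $y_0$ is kept in the resampling pool.
\begin{verbatim}
import numpy as np

def irmcmc(P_step, Y_step, w, x0, y0, eps, n_iter, seed=0):
    # P_step(x, rng): one draw from a pi-invariant Markov kernel P
    # Y_step(y, rng): one step of the auxiliary chain targeting pi_Y
    # w(y): un-normalized importance weight, proportional to pi(y)/pi_Y(y)
    rng = np.random.default_rng(seed)
    X, Y, weights = [x0], [y0], [w(y0)]
    for n in range(n_iter):
        if rng.random() < eps:
            # resample X from the weighted empirical measure hat(theta)_n
            p = np.asarray(weights, dtype=float)
            x_new = Y[rng.choice(len(Y), p=p / p.sum())]
        else:
            x_new = P_step(X[-1], rng)
        y_new = Y_step(Y[-1], rng)  # auxiliary chain evolves on its own
        X.append(x_new)
        Y.append(y_new)
        weights.append(w(y_new))
    return X, Y
\end{verbatim}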
\subsection{Convergence rate of IRMCMC}\label{sec:IRMCMCrate}
The following equivalent representation of Algorithm~\ref{algo:IRMCMC} is useful. Let the auxiliary chain be as above. Furthermore, let $\{Z_n\}_{n\geq 0}$ be a sequence of independent and identically distributed random variables with $\mathbb P(Z_1 = 1) = 1-\mathbb P(Z_1 = 0) = \epsilon$. Moreover, we assume that $\{Z_n\}_{n\geq 0}$ and $\{Y_n\}_{n\geq 0}$ are independent, and that for each $n\geq 1$, $Z_n$ and $\F_{n-1}$ are independent. Then, at round $n$, we can introduce $Z_n$, and write the conditional distribution of $X_n$ given $Z_n,\F_{n-1}$ as
\[
X_n\mid \F_{n-1},Z_n\sim\left\{
\begin{array}{l@{\mbox{ if }}l}
P(X_{n-1},\cdot) & Z_n = 0\\
\what\theta_{n-1}(\cdot) & Z_n = 1\,.
\end{array}
\right.
\]
Define
\begin{equation}\label{eq:tau}
\tau_0 =0, \tau_{i+1} = \min\{k>\tau_i: Z_k = 1\} \mbox{ and } n^* = \max\{k: \tau_k\leq n\}\,.
\end{equation}
Observe that at each time $\tau_k>0$, conditionally on $Y_0, Y_1,\dots,Y_{\tau_k-1}$, $X_{\tau_k}$ is sampled from $\what\theta_{\tau_k-1}$, independently of $X_0,\dots,X_{\tau_k-1}$. Furthermore, $Y_0,\dots,Y_n$ are independent of $\tau_1,\dots,\tau_{n^*}$. Therefore, we first focus on
\begin{equation}\label{eq:eta}
\eta_n\ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb P(X_{n+1}\in\cdot\mid Z_{n+1}=1) = {\mathbb E}\what\theta_{n}(\cdot)\,,n\in{\mathbb N}\,.
\end{equation}
We first obtain a bound on the total variation distance $\nnTV{\eta_n-\pi}$.
Recall that, given two probability distributions $\mu$ and $\nu$, the total variation distance $\nnTV{\mu-\nu}$ is defined by:
\begin{equation}\label{eq:W}
\nnTV{\mu-\nu} = \frac12\sup_{|f|\leq 1}|\mu(f)-\nu(f)|\,.
\end{equation}
For convenience, write
\begin{equation}\label{eq:Bn}
B_n\ensuremath{\stackrel{\mathrm{def}}{=}} |w|_\infty\sup_{|f|\leq 1}{\mathbb E}\what\pi_{Y,n}(\wb f) + 2|w|_\infty^2\sup_{|f|\leq 1}{\mathbb E}\bbpp{\what\pi_{Y,n}(\wb f)}^2, n\in{\mathbb N}.
\end{equation}
Recall that throughout we assume $|w|_\infty<\infty$.
\begin{Lem}\label{lem:etak} For all $n\in{\mathbb N}$,
\begin{equation}\label{eq:etanTV}
\nnTV{\eta_n-\pi}
\leq B_n.
\end{equation}
\end{Lem}
\begin{Rem}
A special case of $\nnTV{\eta_n-\pi}$, when $w\equiv 1$ (equal weights), is the so-called {\it Ces\`aro mixing time} of $\{Y_n\}_{n\geq 0}$. See for example~\citet[Chapter 6.6]{levin09markov}.
\end{Rem}
The proof of Lemma~\ref{lem:etak} is postponed to next subsection.
We have no explicit requirement on $\pi_Y$, except that $w= \pi/\pi_Y$ is bounded. However, the convergence of $Y_1,Y_2,\dots$ to $\pi_Y$ is implicitly ensured when we require further $\sup_{|f|\leq 1}{\mathbb E}(\what\pi_{Y,n}(f) - \pi_Y(f)) + \sup_{|f|\leq 1}{\mathbb E}(\what\pi_{Y,n}(f) - \pi_Y(f))^2\to 0$ as $n\to\infty$.
Indeed, these rates yield an upper bound on the convergence rate of ${\cal L}_{X_n}\Rightarrow \pi$, as shown in the following theorem.
We set $B_0 = B_{-1} = 1$.
\begin{Thm}\label{thm:IRMCMC}
Consider $\indn X$ generated from Algorithm~\ref{algo:IRMCMC}.
Then,
\begin{equation}\label{eq:boundXn}
\nnTV{{\cal L}_{X_n}-\pi}
\leq \sum_{\ell=0}^n (1-\epsilon)^{n-\ell}B_{\ell-1}.
\end{equation}
Furthermore, for any bounded measurable function $f$,
\begin{multline}\label{eq:L2fX}
\PE\bb{\frac1{\sqrt n}\summ i1n(f(X_i)-\pi(f))}^2\\
\leq \frac{80\epsilon^{-2}|f|_\infty^2}n + 64\epsilon^{-2}|f|_\infty^2 + |f|_\infty^2\bbpp{\frac1{\sqrt n}\summ k0{n-1}\sqrt{B_k}}^2, n\in{\mathbb N}.
\end{multline}
\end{Thm}
The proof of Theorem~\ref{thm:IRMCMC} is postponed to next subsection.
\begin{Rem}
The control of the total variation distance depends on the initial position of the auxiliary chain, as in general, $B_n$ depends on the initial position $Y_0 = y_0$. We omit this dependence throughout this paper. At the same time, it is obvious that the initial position $X_0 = x_0$ is irrelevant.
\end{Rem}
\begin{Rem}
In Theorem~\ref{thm:IRMCMC}, we do not make any ergodicity assumption on the kernel $P$. In the case where $P$ is, say, geometrically ergodic, one can improve (\ref{eq:boundXn}) quantitatively by bounding the term $\nnTV{\eta_k P^{n-k}-\pi}$ more effectively. For example, if $P$ is uniformly ergodic with rate $\rho$, then (\ref{eq:boundXn}) would become
\[\nnTV{{\cal L}_{X_n}-\pi}
\leq \sum_{\ell=0}^n \left[\rho(1-\epsilon)\right]^{n-\ell}B_{\ell-1}.\]
A similar improvement can be formulated for (\ref{eq:L2fX}). However, these improvements do not change the rate but only the constant in the corollary below. Besides, such improvements will not be easily available if $P$ is sub-geometrically ergodic.
\end{Rem}
Now, as a corollary we obtain an upper bound on the convergence rate of IRMCMC algorithm, under the following assumption.
\assumpH
\item \label{A:Y}
There exists a finite constant $C$ such that for all measurable functions $f:\;{\cal X}\to\mathbb R$ with $|f|_\infty\leq 1$,
\begin{equation}\label{eq:AY}
{\mathbb E} \what\pi_{Y,n}(\wb f)\leq \frac {C}n \quad\mbox{ and }\quad \PE\bbpp{\what\pi_{Y,n}(\wb f)}^2 \leq \frac{C}n .
\end{equation}
\assumpE
\begin{Rem}
For example, if $\indn Y$ is a Markov chain with transition kernel $P_Y$ and stationary distribution $\pi_Y$, then the first part of~\eqref{eq:AY} holds if $P_Y$ is geometrically ergodic \citep{roberts04general}. The second part of~\eqref{eq:AY} essentially assumes that the sample variance of $\{f(Y_n)\}_{n\in{\mathbb N}}$ is bounded, which also holds for geometrically ergodic Markov chains under an appropriate moment condition on $f$ (e.g. $\pi(|f|^{2+\epsilon})<\infty$ with $\epsilon>0$). Occasionally this condition can fail, as the sample variance might become infinite in the limit. See for example~\citet{haggstrom05central} and~\citet{haggstrom07variance}.
\end{Rem}
\begin{Coro}\label{coro:1}
Consider the importance resampling MCMC (Algorithm~\ref{algo:IRMCMC}). If Assumption~\ref{A:Y} holds, then there exists a finite constant $C$ such that
\[
\nnTV{{\cal L}_{X_n}-\pi} \leq \frac {C}n\,.
\]
Furthermore for any bounded measurable function $f$,
\[\PE\bb{\frac1{\sqrt n}\summ i1n(f(X_i)-\pi(f))}^2 \leq C |f|_\infty^2\,, n\in{\mathbb N}\,.\]
\end{Coro}
\subsection{Proofs of Lemma~\ref{lem:etak} and Theorem~\ref{thm:IRMCMC}}
\begin{proof}[Proof of Lemma~\ref{lem:etak}]
Fix $n\geq 1$ and write $\what\pi_Y \equiv\what\pi_{Y,n}$.
Rewrite $\eta_n(f)$ as,
\begin{eqnarray*}
\eta_n(f) & = & {\mathbb E}\bbpp{\summ j1{n}\frac{w(Y_j)}{\summ l1{n}w(Y_l)}f(Y_j)} \nonumber\\
& = & {\mathbb E}\bb{\frac1n\summ j1{n}w(Y_j)f(Y_j)+ \bbpp{1-\frac1n{\summ j1{n}w(Y_j)}}\summ j1{n}\frac{w(Y_j)f(Y_j)}{\summ l1{n}w(Y_l)}}\nonumber\\
& = & {\mathbb E}\bb{\what\pi_{Y,n}(wf) + (\pi_{Y}(w)-\what\pi_{Y,n}(w))\what\theta_n(f)}\,,
\end{eqnarray*}
where in the second term above we used the fact that $\pi_Y(w)=1$.
Since $\pi(f) = \pi_Y(wf)$,
\begin{multline}
\nnTV{\eta_n-\pi} = \sup_{|f|\leq 1}\frac12\bbpp{\eta_n(f)-\pi(f)}\\
\leq \frac12\sup_{|f|\leq 1}{{\mathbb E}\what\pi_{Y,n}(\wb{wf})} + \frac12\sup_{|f|\leq 1}{{\mathbb E}\bbpp{\what\pi_{Y,n}(\wb w)\what\theta_n(f)}}\\
\leq |w|_\infty{\sup_{|f|\leq 1}{{\mathbb E}\what\pi_{Y,n}(\wb f)} + \frac12\sup_{|f|\leq 1}{{\mathbb E}\bb{\what\pi_{Y,n}(\wb w)\bbpp{\what\theta_n(f)-\pi_Y(wf)}}}}.\label{eq:etan}
\end{multline}
By Cauchy--Schwarz inequality,
\begin{multline}
\sup_{|f|\leq 1}{\mathbb E}\bb{{\what \pi_{Y,n}(\wb w)}\bbpp{\what\theta_n(f)-\pi_Y(wf)}}\\
\leq \bb{{\mathbb E}\bbpp{\what \pi_{Y,n}(\wb w)}^2}^{1/2}\times \sup_{|f|\leq 1}\bb{{\mathbb E}\bbpp{\what\theta_n(f)-\pi_Y(wf)}^2}^{1/2}\,.\label{eq:etan2}
\end{multline}
The first term is bounded by $|w|_\infty\sup_{|f|\leq 1}\sbb{{\mathbb E}\spp{\what \pi_{Y,n}(\wb f)}^2}^{1/2}$. For the second term, observe that
\begin{multline}\label{eq:etan3}
{\mathbb E}\bbpp{\what\theta_n(f)-\pi_Y(wf)}^2 \\
\leq 2{\mathbb E} \bbpp{\what\theta_n(f)-\what\pi_{Y,n}(wf)}^2 + 2 {\mathbb E} \bbpp{\what\pi_{Y,n}(wf) - \pi_Y(wf)}^2,
\end{multline}
and
\begin{multline}
{\mathbb E} \bbpp{\what\theta_n(f)-\what\pi_{Y,n}(wf)}^2 = {\mathbb E} \bbpp{\summ j1{n}\frac{w(Y_j)f(Y_j)}{\summ l1{n}w(Y_l)} - \frac1n\summ j1{n}w(Y_j)f(Y_j)}^2\\
= {\mathbb E}\bb{\bbpp{1-\what\pi_{Y,n}(w)}^2\what\theta_n^2(f)}
\leq {\mathbb E}\bbpp{\pi_Y(w)-\what\pi_{Y,n}(w)}^2\\
\leq |w|_\infty^2\sup_{|g|\leq 1}{\mathbb E}\bbpp{\what \pi_{Y,n}(\wb g)}^2,\label{eq:etan4}
\end{multline}
and the above calculation holds for all $f: |f|\leq 1$. Combining~\eqref{eq:etan},~\eqref{eq:etan2},~\eqref{eq:etan3} and~\eqref{eq:etan4} yields the desired result.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:IRMCMC}]
We recall that $\tau_{n^*}$ is the last time $k$ before $n$ that the main chain is sampled from $\what\theta_{k-1}$. Now, we can write
\begin{eqnarray}
\nnTV{{\cal L}_{X_n}-\pi} & = & \sup_{|f|\leq 1}\frac12|{\mathbb E} f(X_n) - \pi (f)|\nonumber\\
& = & \sup_{|f|\leq 1}\frac12\abs{\summ k0n{\mathbb E}(f(X_n), \tau_{n^*} = k) - \pi(f)}\nonumber\\
& = & \sup_{|f|\leq 1}\frac12\abs{\summ k0n\mathbb P(\tau_{n^*} = k)\sbb{{\mathbb E}(f(X_n)\mid \tau_{n^*} = k) - \pi(f)}}\nonumber\\
& \leq & \summ k0n\mathbb P(\tau_{n^*} = k)\sup_{|f|\leq 1}\frac12|{\mathbb E}(f(X_n)\mid \tau_{n^*} = k) - \pi(f)|.\label{eq:bound}
\end{eqnarray}
Observe that the conditional distribution of $X_n$ given that $\tau_{n^*} = k\geq 1$, is $\eta_{k-1}P^{n-k}$ (set $\eta_0=\delta_{Y_0}$). Then,
\begin{eqnarray*}
\sup_{|f|\leq 1}\frac12|{\mathbb E}(f(X_n)\mid\tau_{n^*} = k) - \pi(f)| & = &
\sup_{|f|\leq 1}\frac12|\eta_{k-1}P^{n-k}(f) - \pi(f)| \\
& = & \nnTV{\eta_{k-1}P^{n-k}-\pi}\,.
\end{eqnarray*}
By the fact that $\pi P = \pi$, we have $\nnTV{\eta_{k-1} P^{n-k}-\pi} \leq \nnTV{\eta_{k-1}-\pi}\leq B_{k-1}$, by Lemma \ref{lem:etak}. Also $\mathbb P(\tau_{n^*} = k)=\epsilon(1-\epsilon)^{n-k}$ for $k=1,\dots,n$ and $\mathbb P(\tau_{n^*} = 0) = (1-\epsilon)^n$. Thus,~\eqref{eq:bound} becomes \eqref{eq:boundXn}.
To establish (\ref{eq:L2fX}), we show that the partial sum $\sum_{k=1}^n \left(f(X_k)-\pi(f)\right)$ admits a well-behaved martingale approximation. For a probability measure $\theta$ on ${\cal X}$, define
\[\pi_\theta(A)=\epsilon\sum_{j=0}^\infty(1-\epsilon)^j(\theta P^j)(A),\;\;A\in{\cal B}.\]
Clearly, $\pi_\theta$ is a probability measure on $({\cal X},{\cal B})$, and in fact we have for all $A\in{\cal B}$,
\begin{multline*}
\pi_\theta P_\theta(A)=\epsilon\sum_{j=0}^\infty (1-\epsilon)^j \int(\theta P^j)({\rm d} z)\left((1-\epsilon)P(z,A)+\epsilon\theta(A)\right)\\
=\epsilon\sum_{j=0}^\infty (1-\epsilon)^{j+1}(\theta P^{j+1})(A)+\epsilon\theta(A)=\pi_\theta(A).\end{multline*}
This means that the kernel $P_\theta$ is invariant by $\pi_\theta$. It is also easy to check by induction that for any bounded measurable function $f$, and $n\geq 1$,
\begin{equation}\label{eq:ratePtheta1}
P_\theta^nf(x)-\pi_\theta(f)=(1-\epsilon)^nP^nf(x)-\epsilon\sum_{j=n}^\infty(1-\epsilon)^j(\theta P^j)f.\end{equation}
It then follows that
\begin{equation}\label{eq:ratePtheta2}
\nnTV{P_\theta^n(x,\cdot)-\pi_\theta}\leq 2(1-\epsilon)^n.
\end{equation}
As a result of (\ref{eq:ratePtheta2}), the function
\[g_\theta(x)=\sum_{j=0}^\infty \left(P_\theta^jf(x)-\pi_\theta(f)\right),\]
is well-defined with $|g_\theta|_\infty\leq 2\epsilon^{-1}|f|_\infty$, and satisfies Poisson's equation
\begin{equation}\label{eq:PoissonPtheta}
g_\theta(x)-P_\theta g_\theta(x)=f(x)-\pi_\theta(f),\;\;\;x\in{\cal X}.\end{equation}
In particular, we have $f(X_k)-\pi_{\what\theta_{k-1}}(f)=g_{\what\theta_{k-1}}(X_k)-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_k)$, almost surely. Using this, we write:
\begin{multline*}
\sum_{k=1}^n \left(f(X_k)-\pi(f)\right)=\sum_{k=1}^n \left(\pi_{\what\theta_{k-1}}(f)-\pi(f)\right)+ \sum_{k=1}^n \left(f(X_k)-\pi_{\what\theta_{k-1}}(f)\right)\\
=\sum_{k=1}^n \left(\pi_{\what\theta_{k-1}}(f)-\pi(f)\right)+ \sum_{k=1}^n \left(g_{\what\theta_{k-1}}(X_k)-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})\right)\\
+\sum_{k=1}^n \left(P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})-P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})\right) + \sum_{k=1}^n \left(P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k})\right).
\end{multline*}
Using (\ref{eq:PoissonPtheta}), the definition of $g_\theta$, and (\ref{eq:ratePtheta1}), it is easy to check that for any probability measures $\theta,\theta'$ and $x\in{\cal X}$,
\[P_\theta g_\theta(x)-P_{\theta'}g_{\theta'}(x)=\int(\theta'-\theta)({\rm d} z)\left(\epsilon\sum_{j=0}^\infty j(1-\epsilon)^j P^jf(z)\right).\]
This implies that the term $\sum_{k=1}^n \left(P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k})\right)$ is a telescoping sum, and we have
\begin{multline*}
\left|\sum_{k=1}^n \left(P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k})\right)\right|\\
\leq
\abs{\bbpp{\what\theta_n-\what\theta_0}\bbpp{\epsilon\sif j0j(1-\epsilon)^jP^jf}}
\leq \frac{2(1-\epsilon)}\epsilon|f|_\infty.
\end{multline*}
The term $\sum_{k=1}^n \left(P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})-P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})\right)$ is also a telescoping sum and we have
\[
\left|\sum_{k=1}^n \left(P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})-P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})\right)\right|\leq 4\epsilon^{-1}|f|_\infty.
\]
From the definition of $\pi_\theta$, notice that we can write
\[\sum_{k=1}^n \left(\pi_{\what\theta_{k-1}}(f)-\pi(f)\right)=\sum_{k=1}^n \what\theta_{k-1}(f_\epsilon-\pi(f_\epsilon)),\]
where $f_\epsilon(x)=\epsilon\sum_{j=0}^\infty(1-\epsilon)^j P^j f(x)$.
Thus,
\begin{multline*}
{\mathbb E}\bb{\summ k1n\bbpp{\pi_{\what\theta_{k-1}}(f)-\pi(f)}}^2 \\
\leq
\bbpp{\summ k1n{\mathbb E}^{1/2}\what\theta_{k-1}^2(f_\epsilon - \pi(f_\epsilon))}^2
\leq |f|_\infty^2\bbpp{\summ k0{n-1}\sqrt{B_k}}^2,
\end{multline*}
where in the last inequality we use the fact that $\sup_{|f|_\infty\leq 1}\PE\what\theta_k^2(f-\pi(f))\leq B_k$, which follows from (\ref{eq:etan3}) and is proved as part of Lemma~\ref{lem:etak}.
Finally we also notice that $\sum_{k=1}^n \left(g_{\what\theta_{k-1}}(X_k)-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})\right) =:\summ k1n D_k$ is a martingale with respect to $\{\F_n\}$, whence
\[
{\mathbb E}\bbpp{\summ k1nD_k}^2 = \summ k1n{\mathbb E} D_k^2\leq 4n\sup_{\theta}|g_\theta|_\infty^2\leq 16\epsilon^{-2}|f|_\infty^2n .
\]
Using all the above, we obtain~\eqref{eq:L2fX}.
\end{proof}
\subsection{An example on the lower bound}
We provide an example in which $C/n$ is also a lower bound on the rate for both $\nnTV{\eta_n-\pi}$ and $\nnTV{{\cal L}_{X_n}-\pi}$. This shows that the rate in our upper bound in Corollary~\ref{coro:1} is optimal.
\begin{Example}\label{rem:2state}
Consider the simple case when ${\cal X} = \{\pm1\}$, and $\pi = \pi_Y$. In this case, the weight function is uniform ($w \equiv 1$). Suppose the auxiliary chain $\{Y_n\}_{n\geq 0}$ has transition matrix
\[
P_Y = \bbpp{
\begin{array}{cc}
1-a & a\\
b & 1-b
\end{array}
}, \mbox{ with } a,b\in(0,1)\,.
\]
The corresponding Markov chain has stationary distribution $\pi_Y = (a+b)^{-1}(b,a)$ and eigenvalues $\lambda_1 = 1,\lambda_2 = 1-a-b$. Suppose $a+b\neq 1$ and the chain starts at $Y_0 = -1$. By a straightforward calculation, $\mathbb P(Y_n = -1) = a/(a+b) + b/(a+b)\lambda_2^n$. Then,
\begin{multline*}
{\mathbb E}\what\pi_{Y,n}(\{-1\}) - \pi_Y(\{-1\}) \\
\equiv \frac1n\summ i1{n}(\mathbb P(Y_i = -1) - \pi_Y(\{-1\})) = \frac b{a+b}\frac1n\frac{\lambda_2-\lambda_2^{n+1}}{1-\lambda_2}\,.
\end{multline*}
It then follows from the definition that $\nnTV{\eta_n-\pi}\geq C/n$.
Furthermore, in~\eqref{eq:Ptheta} set $P(x,\cdot) = \pi(\cdot)$. That is, $P$ is the {\it best} kernel we can put into the algorithm, in the sense that it takes one step to arrive at the stationary distribution (although this is too ideal to be practical). Now,
\begin{eqnarray*}
\mathbb P(X_n = -1) - \pi(\{-1\}) & = & (1-\epsilon)\pi(\{-1\}) + \epsilon{\mathbb E}\what\pi_{Y,n}(\{-1\}) - \pi(\{-1\})\\
& = & \epsilon\bbpp{{\mathbb E}\what\pi_{Y,n}(\{-1\}) - \pi_Y(\{-1\})}\,.
\end{eqnarray*}
It then follows that $\nnTV{{\cal L}_{X_n}-\pi}\geq C/n$.
\end{Example}
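The computation in Example~\ref{rem:2state} is easy to check numerically. The following sketch (Python; the particular values of $a$, $b$ and $n$ are ours) computes the exact bias of the empirical measure for the two-state chain and compares it with the closed-form expression:
\begin{verbatim}
import numpy as np

a, b, n = 0.2, 0.5, 10_000
lam2 = 1 - a - b
P_Y = np.array([[1 - a, a], [b, 1 - b]])  # states ordered (+1, -1)
pi_minus = a / (a + b)                    # pi_Y({-1})

dist, total = np.array([0.0, 1.0]), 0.0   # start at Y_0 = -1
for _ in range(n):
    dist = dist @ P_Y
    total += dist[1] - pi_minus           # P(Y_i = -1) - pi_Y({-1})

empirical = total / n
closed_form = b / (a + b) * (lam2 - lam2 ** (n + 1)) / ((1 - lam2) * n)
print(empirical, closed_form)             # agree, and both decay like C/n
\end{verbatim}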
\subsection{Multiple IRMCMC}\label{sec:mIRMCMC}
We discuss importance-resampling MCMC algorithm in forms of multiple chains and establish a similar convergence rate as in Section~\ref{sec:IRMCMCrate}.
\begin{Algo}[Multiple IRMCMC]\label{algo:mIRMCMC}
We construct iteratively $m$ discrete-time stochastic processes $X\topp \ell \equiv \{X_n\topp \ell\}_{n\geq 0}, \ell=0,\dots, m$ as follows. Fix $\epsilon\in(0,1)$.
Let $X\topp 0$ be a Markov chain with target distribution $\pi_0$ starting at $x_0$. Then iteratively, for each $\ell=1,\dots,m$ with $X\topp {\ell-1}$ constructed, design $X\topp{\ell}$ starting from $x_\ell$, so that $X\topp\ell$ and $X\topp{\ell-1}$ interact as the main chain and the auxiliary chain respectively in Algorithm~\ref{algo:IRMCMC}. Namely, let $P_\ell$ be a Markov kernel with stationary distribution $\pi_\ell$, and sample $X_{n+1}\topp\ell$ from $P_{\ell,\what\theta\topp{\ell-1}_{n}}(X_n\topp \ell,\cdot)$ with
\[
P_{\ell,\theta}(x,\cdot) = (1-\epsilon) P_\ell(x,\cdot) + \epsilon\theta(\cdot)
\]
and
\[
\what\theta\topp{\ell-1}_n(\cdot) = \summ i1{n}\frac{w_\ell(X_i\topp{\ell-1})}{\summ j1{n}w_\ell(X_j\topp{\ell-1})}\delta_{X_i\topp{\ell-1}}(\cdot),
\]
with $w_\ell(x) = {\pi_\ell(x)}/{\pi_{\ell-1}(x)}, x\in{\cal X}$.
Note that the $\ell$-th chain $X\topp \ell$ at time $n$ depends on $\{X_k\topp{j}\}_{k=0,\dots,n-1,\; j=0,\dots,\ell-1}$.
We assume that $\max_{\ell=1,\dots,m}|w_\ell|_\infty<\infty$.
\end{Algo}
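A Python sketch of the chaining is given below, with the same placeholder conventions as before: \texttt{P\_steps[ell]} draws from a $\pi_\ell$-invariant kernel and \texttt{w\_funcs[ell]} evaluates $w_\ell$; all names are ours.
\begin{verbatim}
import numpy as np

def multi_irmcmc(P_steps, w_funcs, x_inits, eps, n_iter, seed=0):
    rng = np.random.default_rng(seed)
    # level 0: a plain Markov chain targeting pi_0
    chain = [x_inits[0]]
    for _ in range(n_iter):
        chain.append(P_steps[0](chain[-1], rng))
    chains = [chain]
    for ell in range(1, len(P_steps)):
        Y, X = chains[-1], [x_inits[ell]]
        weights = []            # w_ell evaluated along level ell - 1
        for n in range(1, n_iter + 1):
            weights.append(w_funcs[ell](Y[n]))
            if rng.random() < eps:
                # resample from the weighted empirical measure of level ell-1
                p = np.asarray(weights, dtype=float)
                X.append(Y[1 + rng.choice(n, p=p / p.sum())])
            else:
                X.append(P_steps[ell](X[-1], rng))
        chains.append(X)
    return chains
\end{verbatim}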
In view of Theorem~\ref{thm:IRMCMC}, it suffices to control
\begin{equation}\label{eq:Bnl}
B_n\topp\ell \ensuremath{\stackrel{\mathrm{def}}{=}} \sup_{|f|\leq 1}{\mathbb E}\what\pi_{X\topp{\ell},n}(\wb f) + \sup_{|f|\leq 1}{\mathbb E}\bbpp{\what\pi_{X\topp{\ell},n}(\wb f)}^2, n\in{\mathbb N},
\end{equation}
where this time $\what\pi_{X\topp\ell,n}(\wb f) \ensuremath{\stackrel{\mathrm{def}}{=}} \what\pi_{X\topp\ell,n}(f) - \pi_{\ell}(f)$.
In fact, it suffices to control $B_n^{(0)}$, which is the purpose of the following assumption.
\assumpH
\item \label{A:mIRMCMC}
The initial Markov chain $\{X_n\topp 0\}_{n\geq 0}$ satisfies $B_n^{(0)}\leq C/n$ for all $n\in{\mathbb N}$.
\assumpE
\begin{Thm}\label{thm:mIRMCMC}
Consider the multiple IRMCMC (Algorithm~\ref{algo:mIRMCMC}) for which Assumption~\ref{A:mIRMCMC} holds and $\max_{\ell=1,\dots,m}|w_\ell|_\infty<\infty$. Then for $\ell = 1,\dots,m$, there exists a finite constant $C$ such that
\begin{equation}\label{eq:lognm}
\nnTV{{\cal L}_{X_n\topp \ell} - \pi_\ell} \leq \frac{C}n\,,
\end{equation}
and for any bounded measurable function $f$,
\begin{equation}\label{eq:varm}
{\mathbb E}\bb{\frac1{\sqrt n}\summ i1n\bbpp{f(X_i\topp{\ell})-\pi_\ell(f)}}^2 \leq C.\end{equation}
\end{Thm}
\begin{proof} Simply observe that (\ref{eq:lognm}) and (\ref{eq:varm}) imply that $B_n^{(\ell)}\leq C/n$. By Theorem \ref{thm:IRMCMC}, this implies in turn that (\ref{eq:lognm}) and (\ref{eq:varm}) hold with $\ell$ replaced by $\ell+1$. Given Assumption~\ref{A:mIRMCMC}, the result follows by induction.
\end{proof}
\newpage
\section{Equi-Energy Sampler}\label{sec:EE}
In this section, we consider the simplified EE sampler as follows. Recall that the auxiliary chain $\{Y_n\}_{n\geq 0}$ evolves independently from the main chain $\{X_n\}_{n\geq 0}$.
\begin{Algo}[Equi-Energy sampler]\label{algo:EE}
Fix $\epsilon\in(0,1)$. Start $X_0 = x_0$ and $Y_0 = y_0$. At each round $n$, generate
\[
X_n \sim\left\{
\begin{array}{ll}
P(X_{n-1},\cdot) & \mbox{ w.p.~} 1-\epsilon\\
K_{\what\theta_{n-1}}(X_{n-1},\cdot) & \mbox{ w.p.~} \epsilon
\end{array}
\right.\,,
\]
where $\what\theta_n = \what\pi_{Y,n}$ is the empirical measure associated to $\{Y_n\}_{n\geq 0}$ and $K_\theta$ is defined by
\[
K_\theta(x,A)=\textbf{1}_A(x) + \int_{{\cal X}}\left(1\wedge\frac{\pi(z)\pi_Y(x)}{\pi(x)\pi_Y(z)}\right)\left(\textbf{1}_A(z)-\textbf{1}_A(x)\right)\theta({\rm d} z).
\]
In other words,
for all non-negative functions $h:{\cal X}\to{\mathbb R}$ and $n\in{\mathbb N}$,
\begin{equation}\label{dynEE}
\PE_{x}\left(h(X_{n+1})\mid \F_{n}\right)=P_{\what\theta_{n}}h(X_{n}) \;\;\mbox{ almost surely,}
\end{equation}
where for any probability measure $\theta$ on ${\cal X}$, $P_\theta$ is defined as
\begin{equation}\label{eq:EE}
P_\theta(x,A)=(1-\epsilon)P(x,A)+\epsilon K_\theta(x,A).
\end{equation}
Recall that we write $\pi({\rm d} x) \equiv \pi(x){\rm d} x$ and similarly for $\pi_Y$ with a little abuse of language, and $w(x) = \pi(x)/\pi_Y(x)$. We assume $|w|_\infty<\infty$.
\end{Algo}
The kernel $K_{\theta_\star}$ is the Independent Metropolis kernel with target $\pi$ and proposal $\theta_\star = \pi_Y$. It is well known that under the assumption $|w|_\infty<\infty$ (recall Remark~\ref{rem:w}), the kernel $K_{\theta_\star}$ is uniformly ergodic \citep{mengersen96rates}, and this property is inherited by $P_{\theta_\star}$. That is, there exists $\rho\in (0,1)$, such that
\begin{equation}\label{eq:rateconv}
\nnTV{P^n_{\theta_\star}(x,\cdot) - \pi(\cdot)}\leq C \rho^n,\;\;\;n\geq 0.
\end{equation}
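For concreteness, one transition of the sampler can be sketched as follows (Python; \texttt{pi} and \texttt{pi\_Y} may be un-normalized densities, \texttt{P\_step} is a placeholder for the base kernel, and all names are ours):
\begin{verbatim}
import numpy as np

def ee_step(x, Y_hist, pi, pi_Y, P_step, eps, rng):
    # with probability 1 - eps, move with the base kernel P
    if rng.random() >= eps:
        return P_step(x, rng)
    # otherwise: independence Metropolis move with proposal hat(pi)_{Y,n},
    # i.e. a state drawn uniformly from the auxiliary history Y_1,...,Y_n
    z = Y_hist[rng.integers(len(Y_hist))]
    accept = min(1.0, (pi(z) * pi_Y(x)) / (pi(x) * pi_Y(z)))
    return z if rng.random() < accept else x
\end{verbatim}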
\subsection{Convergence rate of EE sampler}\label{sec:EErate}
We make the following assumptions.
\assumpH
\item \label{A:subGaussian}
There exists a finite universal constant $C$ such that for any measurable function $f:\;{\cal X}\to\mathbb R$ with $|f|_\infty\leq 1$,
\[
\sup_{n}\PP\left(\left|\frac{1}{\sqrt{n}}\summ j1n \bbpp{f(Y_j) - \pi_Y(f)}\right|>x\right) \leq C\exp\left(-\frac{x^2}{C\sigma^2(f)}\right),
\]
where $\sigma^2(f)\ensuremath{\stackrel{\mathrm{def}}{=}} \int_{\cal X} f^2(x)\pi_Y({\rm d} x)$.
\assumpE
\assumpH
\item \label{A:w} The function $w:\;{\cal X}\to\mathbb R$ is continuous (with respect to the metric on ${\cal X}$), and
\begin{equation}\label{eq:w}
\sup_{x\in{\cal X}}\frac{\phi(x)}{w^2(x)}<\infty,
\end{equation}
where $\phi(x)\ensuremath{\stackrel{\mathrm{def}}{=}} \pi_Y\left(\{z:\; w(z)\leq w(x)\}\right)$.
\assumpE
\assumpH
\item \label{A:continuous} The kernel $P$ is such that if $f:{\cal X}\to\mathbb R$ is continuous, then $Pf$ is also continuous.
\assumpE
\begin{Rem}Deviation bounds as in~\ref{A:subGaussian} are available for various conditions on transition kernels. See for example \citet[Proposition 1.2]{cattiaux08deviation}.
\end{Rem}
\begin{Rem}
Assumption~\ref{A:w} is not restrictive. For example, consider ${\cal X} = {\mathbb R}$ and $\pi_Y = \pi^T$ with some $T\in(0,1)$. For the sake of simplicity, we focus on $x\in{\mathbb R}_+$ and define $\phi_+(x) \ensuremath{\stackrel{\mathrm{def}}{=}} \pi_Y(\{z>0:w(z)\leq w(x)\})$. Suppose the density $\pi(x)$ decays asymptotically as $x^{-\alpha}$ for $\alpha>1$ as $x\to\infty$. Then, $\pi_Y(x)\sim x^{-T\alpha}$ and $w(x)\sim x^{(T-1)\alpha}$. Here and below, we write $a(x)\sim b(x)$ if $\lim_{x\to\infty}a(x)/b(x) = 1$. Assume further that $T\alpha>1$. Then, $\phi_+(x)\sim (T\alpha-1)^{-1} x^{1-T\alpha}$ and
\[
\frac{\phi_+(x)}{w^2(x)} \sim\frac1{T\alpha-1}x^{1+2\alpha-3T\alpha}.
\]
Therefore,~\eqref{eq:w} holds, if $T>(1+2\alpha)/(3\alpha)$.
\end{Rem}
\begin{Thm}\label{thm:EE}
Consider the Equi-Energy sampler described as above and suppose that Assumptions \ref{A:subGaussian}--\ref{A:continuous} hold. Then, there exists a constant $C$, such that for all continuous functions $f:{\cal X}\to{\mathbb R}$ and $n\in{\mathbb N}$,
\begin{equation}\label{eq:thm:EE}
\left|{\mathbb E}\left(f(X_n)-\pi(f)\right)\right|\leq \frac{C|f|_\infty}{\sqrt{n}}.
\end{equation}
\end{Thm}
\begin{proof}
Fix $n\geq 2$ and $1\leq q\leq n$. Fix a continuous $f:\;{\cal X}\to\mathbb R$ with $|f|_\infty= 1$. Then write
\[
\PE_xf(X_n)-P_{\theta_\star}^nf(x)=\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-P_{\theta_\star}^nf(x)\right)-\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-f(X_n)\right).
\]
For the first term we can use \eqref{eq:rateconv} to get:
\[\left|\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-P_{\theta_\star}^nf(x)\right)\right|\leq C \rho^{n-q},\]
for some finite constant $C$ that does not depend on $f$. For the second term, we write:
\begin{eqnarray}
& & \PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-f(X_n)\right)\nonumber\\
& = & \PE_x\left[\sum_{j=q}^{n-1}\left(P_{\theta_\star}^{n-j}f(X_j)-P_{\theta_\star}^{n-j-1}f(X_{j+1})\right)\right]\nonumber\\
& = & \sum_{j=q}^{n-1}\PE_x\left[P_{\theta_\star}^{n-j}f(X_j)-\PE_x\left(P_{\theta_\star}^{n-j-1}f(X_{j+1})\mid \F_j\right)\right]\nonumber\\
& = & \sum_{j=q}^{n-1}\PE_x\left[P_{\theta_\star}^{n-j}f(X_j)-P_{\what\theta_j}P_{\theta_\star}^{n-j-1}f(X_{j})\right]\nonumber\\
& = &\sum_{j=q}^{n-1}C_0 \rho^{n-j-1}\PE_x\left[\left(P_{\theta_\star}-P_{\what\theta_j}\right)\zeta_{n,j}(X_j)\right],\label{eq:Ex}
\end{eqnarray}
where in the last line we write
\[
\zeta_{n,j}(x)=\frac{P_{\theta_\star}^{n-j-1}f(x)-\pi(f)}{C_0\rho^{n-j-1}},\;\;x\in{\cal X}\,,
\]
with $C_0$ and $\rho$ chosen as in~\eqref{eq:rateconv}.
As a consequence of~\eqref{eq:rateconv}, $|\zeta_{n,j}|_\infty\leq 1$. It is also continuous by the continuity of $f$ and Assumption~\ref{A:continuous}.
To simplify the notation, for any function $g:\;{\cal X}\to \mathbb R$, define
\begin{equation}\label{eq:Hg}
H_g(x,z)\ensuremath{\stackrel{\mathrm{def}}{=}}\alpha(x,z)\left(g(z)-g(x)\right),\quad x,z\in{\cal X},\end{equation}
where
\begin{equation}\label{eq:alpha}
\alpha(x,z)\ensuremath{\stackrel{\mathrm{def}}{=}} 1\wedge \frac{w(z)}{w(x)}.
\end{equation}
Thus, we can write
\[
P_\theta g(x)-P_{\theta_\star}g(x)=\epsilon\int H_g(x,z)(\theta({\rm d} z)-\theta_\star({\rm d} z)).
\]
Based on $g:{\cal X}\to{\mathbb R}$, we also introduce the class of functions
\[
{\cal F}_g\ensuremath{\stackrel{\mathrm{def}}{=}}\ccbb{z\mapsto H_g(x,z):\;x\in{\cal X}},
\]
and the empirical process
\[{\mathbb G}_n(h)\ensuremath{\stackrel{\mathrm{def}}{=}} \frac{1}{\sqrt{n}}\sum_{j=1}^n \left(h(Y_j)-\pi_Y(h)\right),\;\;\; h\in{\cal F}_g.
\]
Therefore, the expectation term in~\eqref{eq:Ex} becomes
\begin{multline*}
{\mathbb E}_x\bb{\bbpp{P_{\theta_\star}-P_{\what\theta_j}}\zeta_{n,j}(X_j)}
= \epsilon{\mathbb E}_x\bb{\int H_{\zeta_{n,j}}(X_j,z)(\theta_\star({\rm d} z)-\what\theta_j({\rm d} z))}\\
= -\epsilon{\mathbb E}_x\bb{\frac1j\summ \ell1j H_{\zeta_{n,j}}(X_j,Y_\ell) - \int_{\cal X} H_{\zeta_{n,j}}(X_j,z)\theta_\star({\rm d} z)}\\
= -\frac{\epsilon}{\sqrt{j}}{\mathbb E}_x\bb{{\mathbb G}_j\bbpp{H_{\zeta_{n,j}}(X_j,\cdot)}}\,,
\end{multline*}
whence
\begin{eqnarray*}
\left|\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-f(X_n)\right)\right| & = & \abs{\epsilon\sum_{j=q}^{n-1}\frac{C_0\rho^{n-j-1}}{\sqrt{j}}{\mathbb E}_x\bb{{\mathbb G}_j\bbpp{H_{\zeta_{n,j}}(X_j,\cdot)}}}\\
& \leq & C_0\sum_{j=q}^{n-1}\frac{\rho^{n-j-1}}{\sqrt{j}}\PE_x\left(\sup_{h\in \F_{\zeta_{n,j}}}\left|{\mathbb G}_j(h)\right|\right).
\end{eqnarray*}
We prove in Lemma \ref{lem2} below that for any continuous function $g:{\cal X}\to\mathbb R$ such that $|g|_\infty\leq 1$,
\[
\PE_x\left(\sup_{h\in \F_{g}}\left|{\mathbb G}_n(h)\right|\right)\leq C,
\]
for some constant $C$ that depends neither on $n$ nor on $g$. We conclude that
\[
\left|\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-f(X_n)\right)\right|\leq C\sum_{j=q}^{n-1}\frac{1}{\sqrt{j}}\rho^{n-j-1}.
\]
Thus for any $1\leq q\leq n$,
\[
\left|\PE_x\left(f(X_n)\right)-\pi(f)\right|\leq C \left\{\rho^n + \rho^{n-q} + \epsilon \sum_{j=q}^{n-1}\frac{\rho^{n-j-1}}{\sqrt{j}}\right\}\leq Cn^{-1/2},
\]
by choosing $q=n-\lfloor\frac{-\log n}{2\log\rho}\rfloor$.
\end{proof}
We rely on the following technical result on the auxiliary chain $\{Y_n\}_{n\geq 0}$.
\begin{Lem}\label{lem2}Suppose that Assumptions \ref{A:subGaussian} and \ref{A:w} hold, and let $g:{\cal X}\to\mathbb R$ be continuous such that $|g|_\infty\leq 1$. Then
\[
\PE_x\left(\sup_{h\in \F_g}\left|{\mathbb G}_n(h)\right|\right)\leq C,\]
for a constant $C$ that does not depend on $n$.
\end{Lem}
\begin{proof}
Throughout the proof $n\geq 1$ is fixed. Assumption \ref{A:subGaussian} suggests the following metric on $\F_g$:
\[\textsf{d}(h_1,h_2)=\sigma(h_1-h_2)=\left(\int_{\cal X}\left(h_1(x)-h_2(x)\right)^2\pi_Y({\rm d} x)\right)^{1/2},\]
which has the following properties.
For $x_1,x_2\in{\cal X}$, it is easy to check that
\begin{equation}\label{dist1}
\left |H_g(x_1,z)-H_g(x_2,z)\right|\leq 2\left|\alpha(x_1,z)-\alpha(x_2,z)\right| + \left|g(x_1)-g(x_2)\right|.\end{equation}
It follows that
\begin{multline}\label{dist2}
\textsf{d}\left(H_g(x_1,\cdot),H_g(x_2,\cdot)\right)\\
\leq \sqrt 2\left|g(x_1)-g(x_2)\right|
+ 2\sqrt2\sqrt{\int\left|\alpha(x_1,z)-\alpha(x_2,z)\right|^2\pi_Y({\rm d} z)}.\end{multline}
This implies that the diameter of $\F_g$ is bounded by $\delta(\F_g)=4\sqrt{2}$. It also implies that with respect to $\textsf{d}$, the empirical process $\{{\mathbb G}_n(h),\;h\in \F_g\}$ is separable. Indeed, for $x\in{\cal X}$ arbitrary and $h=H_g(x,\cdot)$, using the Polish assumption, we can find a sequence $x_m\in{\cal X}$ ($x_m$ belongs to a countable subset of ${\cal X}$) such that $x_m\to x$, as $m\to\infty$. Setting $h_m=H_g(x_m,\cdot)$, it follows from (\ref{dist2}) and the continuity of $g$ and $w$ that $h_m\to h$ in $(\F_g,\textsf{d})$, and (\ref{dist1}) easily shows that ${\mathbb G}_n(h_m)-{\mathbb G}_n(h)=n^{-1/2}\sum_{\ell=1}^n\left(H_g(x_m,Y_\ell)-H_g(x,Y_\ell)\right)+\sqrt{n}\pi_Y\left(H_g(x,\cdot)-H_g(x_m,\cdot)\right)\to 0$ as $m\to\infty$ for all realizations of $\{Y_1,\ldots,Y_n\}$.
For any $h_1,h_2\in\F_{g}$, Assumption \ref{A:subGaussian} implies that for any $t>0$
\[
\PP_x\left(\left|{\mathbb G}_n(h_1)-{\mathbb G}_n(h_2)\right|>t\right)\leq C\exp\left(-\frac{t^2}{C\textsf{d}^2(h_1,h_2)}\right).
\]
Then we apply \citet[Corollary 2.2.8]{vandervaart96weak} to conclude that for $h_0\in\F_g$, there exists a constant $C$ independent of $g$, such that
\[
\PE_x\left(\sup_{h\in \F_{g}}\left|{\mathbb G}_n(h)\right|\right)\leq \PE_x|{\mathbb G}_n(h_0)| + C\int_0^{\delta(\F_g)} \sqrt{1+\log\textsf{D}(\epsilon,\F_g,\textsf{d})}\;{\rm d}\epsilon<\infty,\]
where $\textsf{D}(\epsilon,\F_g,\textsf{d})$ is the packing number of $\F_g$ with respect to $\textsf{d}$. Assumption \ref{A:subGaussian} shows that $\PE_x|{\mathbb G}_n(h_0)|<\infty$. To control the entropy number, we further bound the right-hand side of (\ref{dist2}).
Without loss of generality, assume $x_1,x_2\in{\cal X}$ and $w(x_1)<w(x_2)$. If $w(x_1)\vee w(x_2)\leq w(z)$, then $\alpha(x_1,z)-\alpha(x_2,z)=0$. If $w(z)\leq w(x_1)$, then
\[
\left|\alpha(x_1,z)-\alpha(x_2,z)\right|^2=\left|\frac{w(z)}{w(x_1)} - \frac{w(z)}{w(x_2)}\right|^2\leq \frac{1}{w(x_1)^2}\left(w(x_2)-w(x_1)\right)^2.
\]
If $w(x_1)\leq w(z)\leq w(x_2)$, then
\[\left|\alpha(x_1,z)-\alpha(x_2,z)\right|^2=\left|1- \frac{w(z)}{w(x_2)}\right|^2
\leq \frac{1}{w(x_2)^2}\left(w(x_2)-w(x_1)\right)^2.\]
Thus
\begin{multline*}
\int\left|\alpha(x_1,z)-\alpha(x_2,z)\right|^2\pi_Y({\rm d} z)\\
\leq \bbpp{\frac{\phi(x_1)}{w(x_1)^2}
+ \frac{\phi(x_2)}{w(x_2)^2}}\left(w(x_2)-w(x_1)\right)^2\leq C\left(w(x_2)-w(x_1)\right)^2,
\end{multline*}
where $\phi(x)\ensuremath{\stackrel{\mathrm{def}}{=}} \pi_Y\left(\{z:\; w(z)\leq w(x)\}\right)$ is as in Assumption~\ref{A:w}, and the last inequality follows from \eqref{eq:w}.
Together with (\ref{dist2}), we conclude from this bound that
\begin{equation}\label{lip}
\textsf{d}\left(H_g(x_1,\cdot),H_g(x_2,\cdot)\right)
\leq C (\left|g(x_1)-g(x_2)\right|+ \left|w(x_2)-w(x_1)\right|).
\end{equation}
Since $|g|_\infty\leq 1$ and $w(x)\in [0,|w|_\infty]$, this implies that the $\epsilon$-packing number of $(\F_g,\textsf{d})$ is at most of order $\epsilon^{-2}$, so that $\int_0^{\delta(\F_g)} \sqrt{1+\log\textsf{D}(\epsilon,\F_g,\textsf{d})}\;{\rm d}\epsilon\leq C \int_0^{\delta(\F_g)}\sqrt{1+\log(1/\epsilon)}\;{\rm d}\epsilon<\infty$.
This proves the lemma.
\end{proof}
\subsection{Connection with Parallel Tempering}\label{sec:mEE}
Our results suggest that the EE sampler mixes relatively slowly. A plausible reason for this slow mixing is the dependence on the entire sample path $\{Y_k\}_{0\leq k\leq n}$. The EE sampler is closely related to the Parallel Tempering (PT) algorithm of \citet{geyer91markov}, which suggests that it might be possible to exploit this connection by deriving versions of the EE sampler with better mixing properties. Like the EE sampler, a 2-chain PT generates a stochastic process $\{(X_n,Y_n)\}_{n\geq 0}$ where with probability $1-\epsilon$, $X_n$ is generated from $P(X_{n-1},\cdot)$ and with probability $\epsilon$, one proposes to swap the two chains. Thus PT is closely related to an EE-type algorithm where the empirical measure $\what\pi_{Y,n}$ would be replaced by the Dirac measure $\delta_{Y_n}$. However, we show that in general, this new algorithm does not maintain the correct stationary distribution. We hope that the discussion in this section will be helpful in conceptualizing new adaptive algorithms in the future.
The modified EE sampler is as follows. Let $\{Y_n\}_{n\geq 0}$ be the auxiliary chain with transition kernel $P_Y$ and stationary distribution $\pi_Y$. Let $\{X_n\}_{n\geq 0}$ be a chain satisfying the following assumption: for all continuous and bounded functions $f$,
\[
{\mathbb E}[f(X_{n+1})\mid X_0,\dots,X_n,Y_0,\dots,Y_n] = P_{\delta_{Y_n}}f(X_n), n\in{\mathbb N},
\]
where $P_\theta$ is as in~\eqref{eq:EE}, and denote the stationary distribution of $P$ by $\pi_X$.
\begin{Rem}
The difference from the EE sampler is that we replace $\what\pi_{Y,n}$ by $\delta_{Y_n}$. If, when $X_{n+1}$ is moving to $Y_n$, we also make $Y_{n+1}$ move to $X_n$, then we are allowing {\it swaps} between the two chains. Such swaps are in the spirit of parallel tempering algorithms (see e.g.~\citet{geyer91markov}).
\end{Rem}
A nice property of this algorithm is that $\{(X_n,Y_n)\}_{n\geq 0}$ is a Markov chain. Indeed, it has transition kernel
\begin{equation}\label{eq:MEE}
P_{X,Y}(x,y,{\rm d} z,{\rm d} w) = P_{\delta_y}(x,{\rm d} z)P_Y(y,{\rm d} w)\,.
\end{equation}
This Markov chain may not have the desired stationary distribution. Let $\pi_{X,Y}$ denote the stationary distribution and let $\pi_{X,Y}\topp i, i=1,2$ denote its two marginal distributions. Naturally, we wish $\pi_{X,Y}\topp 1 = \pi_X$ and $\pi_{X,Y}\topp 2 = \pi_Y$.
By construction, the latter identity is always true.
However, the former does not always hold.
Since $P_\theta = (1-\epsilon)P + \epsilon K_\theta$
and $P(x,{\rm d} z)P_Y(y,{\rm d} w)$ has stationary distribution $\pi_X({\rm d} z)\pi_Y({\rm d} w)$, instead of~\eqref{eq:MEE} it suffices to focus on the transition kernel
\begin{equation}\label{eq:MEE1}
P_{X,Y}(x,y,{\rm d} z,{\rm d} w) = K_{\delta_y}(x,{\rm d} z)P_Y(y,{\rm d} w)\,.
\end{equation}
Consider the simple case when both chains take values from $\{\pm1\}$.
Let the auxiliary chain have the following transition matrix and stationary distribution:
\[
P_Y = \left(
\begin{array}{cc}
1- a & a\\
b & 1-b
\end{array}
\right)\quad\mbox{ and }\quad \pi_Y = \bbpp{\frac b{a+b}, \frac a{a+b}}\,.
\]
Recall that in this case,
\[
K_{\delta_y}(x,z) = \alpha(x,y)\indd{z=y} + (1-\alpha(x,y))\indd {z=x}\,,
\]
with
\[
\alpha(x,y) = 1\wedge \frac{\pi_X(y)}{\pi_X(x)}\frac{\pi_Y(x)}{\pi_Y(y)}\,.
\]
Write $c = \alpha(1,-1)$ and $d= \alpha(-1,1)$. Then, one can write the transition matrix of $P_{X,Y}$ as follows:
\[
\begin{array}{l|llll}
& (1,1) & (1,-1) & (-1,1)& (-1,-1)\\
\hline
(1,1) & 1-a & a & 0 & 0\\
(1,-1) & (1-c)b & (1-c)(1-b) & cb & c(1-b)\\
(-1,1) & d(1-a) & da & (1-d)(1-a) & (1-d)a\\
(-1,-1) & 0 & 0 & b & 1-b
\end{array}
\]
For example, the table reads as
\[
P_{X,Y}(1,-1;1,-1) = K_{\delta_{-1}}(1,1)P_Y(-1,-1) = (1-c)(1-b)\,.
\]
We solve $\pi_{X,Y}P_{X,Y} = \pi_{X,Y}$ and obtain
\[
\pi_{X,Y} = \frac1{(a+b)((1-a-b)cd + ac + bd)}
\left(
\begin{array}{c}
bd(b+(1-a-b)c)
\\
abd\\
abc\\
ac(a+(1-a-b)d)
\end{array}
\right)^\top\,.
\]
To see that $\pi_{X,Y}\topp1$ does not always equal $\pi_X$, take for example $a=b=1/3$ and $\pi_X = (2/3,1/3)$. In this case, $c = 1/2$, $d=1$, and $\pi_{X,Y} = (3/8,1/4,1/8,1/4)$, whence $\pi_{X,Y}\topp1 = (5/8,3/8)\neq \pi_X$.
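This computation is easily double-checked numerically. The following sketch (Python; names ours) assembles the transition matrix exactly as in the table above and extracts its left eigenvector for eigenvalue $1$:
\begin{verbatim}
import numpy as np

a = b = 1/3
pi_X, pi_Y = np.array([2/3, 1/3]), np.array([1/2, 1/2])
c = min(1, pi_X[1] / pi_X[0] * pi_Y[0] / pi_Y[1])  # alpha(1,-1) = 1/2
d = min(1, pi_X[0] / pi_X[1] * pi_Y[1] / pi_Y[0])  # alpha(-1,1) = 1

# rows/columns ordered (1,1), (1,-1), (-1,1), (-1,-1), as in the table
P = np.array([
    [1 - a,       a,                 0,                 0          ],
    [(1 - c) * b, (1 - c) * (1 - b), c * b,             c * (1 - b)],
    [d * (1 - a), d * a,             (1 - d) * (1 - a), (1 - d) * a],
    [0,           0,                 b,                 1 - b      ],
])

# stationary distribution = left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi)             # [0.375, 0.25, 0.125, 0.25] = (3/8, 1/4, 1/8, 1/4)
print(pi[0] + pi[1])  # P(X = 1) = 0.625 = 5/8, not pi_X(1) = 2/3
\end{verbatim}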
\def\cprime{$'$} \def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth
\lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
\section{Appendix}
The three-loop 1PI diagrams of the fermion propagator.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0]{Fig1}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0]{Fig2}
\end{center}
\end{figure}
\newpage
The contributions to $Z_2^{-1}$ from different sets of the diagrams ($KR'$):\\
Fig.1:
\begin{eqnarray*}
& &-\frac{C_F}{\varepsilon^3}\left[-\frac74C^2-\frac43CC_F+\frac13Ctf\right]\\
& &-\frac{C_F}{\varepsilon^2}\left[\frac{219}{24}C^2+\frac{35}6CC_F-\frac43C_F
-\frac{19}6Ctf-\frac43C_Ftf\right]\\
& &-\frac{C_F}{\varepsilon}\left[-\frac{233}{12}C^2+\frac{17}2CC_F-\frac7{12}
C^2_F+\frac{43}6Ctf-\frac73C_Ftf\right]
-\frac{C_F}{\varepsilon}C\zeta(3)\left(\frac52C-2C_F\right).
\end{eqnarray*}
Fig.2a:
$$
\frac{C_F^2}{\varepsilon^2}\left(\frac5{12}C-\frac13tf\right)
+\frac{C_F^2}{\varepsilon}\left(\frac38C-\frac16tf\right).
$$
Fig.2b:
\begin{eqnarray*}
- \frac{C_F}{\varepsilon^2} \Bigl[\frac{175}{72}C^2-\frac{55}{18}Ctf
+ \frac89t^2f^2 \Bigr]
- \frac{C_F}{\varepsilon} \left[-\frac{2171}{432}C^2+\frac{527}{108}Ctf+
2C_Ftf-\frac{20}{27}t^2f^2 \right] .
\end{eqnarray*}
Fig.2c:
\begin{eqnarray*}
-\frac{C_F^2}{\varepsilon^3} \left[ \frac13C-\frac16C_F\right]
+\frac{C_F^2}
{\varepsilon^2} \Bigl[3C-\frac7{12}C_F &-& \frac23tf
\Bigr]\\
&-& \frac{C_F^2}{\varepsilon}\left[\frac{91}{24}C-2\zeta(3)C+\frac1{12}C_F-
\frac56tf\right].
\end{eqnarray*}
Fig.2d: $~~~~~~~~-\frac2{\varepsilon^3}C_F^2C$
\\
The contributions to $Z_{\overline{\psi}\psi}$ from the diagrams with the
insertion
$-\!\!\bigotimes\!\!-$ were calculated simultaneously with the diagrams
shown in Fig.1 and Fig.2. In fact, for the fermion propagator we used
$(\hat{p}+m)/p^2$ instead of $\hat{p}/p^2$, and after multiplying all the
propagators we kept only the part without the mass term and the
part linear in $m$.
The coefficient in front of $m$ determines the contribution of the diagrams with the mass insertion. For example, from the diagram
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0]{figure5}
\end{center}
\end{figure}
\noindent
\newpage
we get the following diagrams with the insertion:
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0]{figure6}
\end{center}
\end{figure}
The
contributions to $Z_{\overline{\psi} \psi}$ from different sets of diagrams ($KR'$):\\
\vspace{0.5cm}
Fig.1:
\vspace{-0.3cm}
\begin{eqnarray*}
&-&\frac{C_F}{\varepsilon^3} \left[\frac{31}3C^2+\frac{50}3CC_F+8C_F^2-4Ctf-
\frac83C_Ftf \right] \\
&-&\frac{C_F}{\varepsilon^2}\left[-\frac{181}6C^2-\frac{80}3C_FC+8C_F^2
+\frac{34}3Ctf+\frac83C_Ftf\right]\\
&-&\frac{C_F}{\varepsilon}\left[\frac{181}4C^2-\frac{145}3CC_F+17C_F^2
-13Ctf+\frac{20}3C_Ftf\right]\\
&-&\frac{C_F}{\varepsilon}\zeta(3)\left[-\frac32C^2+20C_FC-16C_F^2-8Ctf\right].
\end{eqnarray*}
Fig.2a:
$$
-\frac{C_F^2}{\varepsilon^3}\left[\frac{10}3-\frac83tf\right]-\frac{C_F^2}
{\varepsilon^2}\left[-\frac{17}3C+4tf\right]-\frac{C_F^2}{\varepsilon}
\left[-C+\frac43tf \right].
$$
Fig.2b:
\begin{eqnarray*}
&-&\frac{C_F}{\varepsilon^3}\left[\frac{175}{36}C^2-\frac{55}9Ctf+
\frac{16}9t^2f^2\right] \\
&-&\frac{C_F}{\varepsilon^2}\left[-\frac{337}{27}C^2+\frac{346}{27}Ctf+
4C_Ftf-\frac{64}{27}t^2f^2\right]\\
&-&\frac{C_F}{\varepsilon}\left[\frac{18685}{1296}C^2-\frac{1915}{324}Ctf
-17C_Ftf-\frac{80}{81}t^2f^2\right]\\
&-&\frac{C_F}{\varepsilon}\zeta(3)\left[-C^2-8Ctf+16C_Ftf\right].
\end{eqnarray*}
Fig.2c:
\begin{eqnarray*}
&-&\frac{C_F^2}{\varepsilon^3}\left[6C+\frac83C_F-\frac83tf\right]
-\frac{C_F^2}{\varepsilon^2}\left[-17C-10C_F+4tf\right]\\
&-&\frac{C_F^2}{\varepsilon}\left[\frac{80}3C+5C_F-\frac{16}3tf
-16\zeta(3)C+16\zeta(3)C_F\right].
\end{eqnarray*}
Fig.2d: 0.
\vspace{1cm}
\section*{Acknowledgements}
The authors would like to especially thank Damien Woods and Pierre-Etienne Meunier for valuable discussions, guidance, and suggestions.
\bibliographystyle{abbrv} %
\section{A Nondeterministic CA Which Can Simulate Any aTAM System}\label{CAIU4aTAM}
In Theorem~\ref{thm:caiu}, we show that there is a single synchronous nondeterministic CA such that, for any aTAM system, an appropriate choice of finite initial configuration causes this CA to simulate that aTAM system. This gives some sense in which a synchronous nondeterministic CA is {\em intrinsically universal for} the aTAM.
\begin{theorem}\label{thm:caiu}
There exists a synchronous nondeterministic CA $\mathcal{A} = (\mathbb{Z}^2, S, N, \delta)$ such that for any aTAM system $\mathcal{T} = (T, \sigma, \tau)$ there exists a finite initial configuration
$c_0$ of $\mathcal{A}$ so that $\left(\mathcal{A}, c_0\right)$ simulates $\mathcal{T}$.
\end{theorem}
To prove this theorem, we appeal to the following lemma, which is proven by the construction in Section~\ref{sec:caconstruction}.
\begin{lemma}\label{lem:casim}
For any aTAM system $\mathcal{T} = (T, \sigma, \tau)$, there exists a synchronous nondeterministic CA $\mathcal{A} = (\mathbb{Z}^2, S, N, \delta)$ and an initial configuration $c_0$ such that $\left(\mathcal{A}, c_0\right)$ simulates $\mathcal{T}$.
\end{lemma}
Then Theorem~\ref{thm:caiu} is proven as follows. First, there is a tile set $U$ \cite{IUSA} which is intrinsically universal for the aTAM and can be used at temperature $\tau=2$ to simulate any aTAM system. Therefore we let $\mathcal{U}$ be an aTAM system that uses $U$ at $\tau=2$.
By Lemma~\ref{lem:casim}, we can
then give a CA that suffices for Theorem~\ref{thm:caiu} by constructing a CA that simulates an arbitrary $\mathcal{U}$ (i.e. one with an arbitrary seed). See Section~\ref{sec:pocCAsimTAS} for more details.
\subsection{CA Construction}\label{sec:caconstruction}
The goal of this construction is to give a synchronous nondeterministic CA $\mathcal{B} = (\mathbb{Z}^2, S, N, \delta)$ and initial configuration that simulates an arbitrary aTAM system $\mathcal{T} = (T,\sigma, \tau)$. The neighborhood of $\mathcal{B}$ is the Moore neighborhood and
the states and local rules for $\mathcal{B}$ are obtained as follows. First, we add a state to $S$ for each tile type of $\mathcal{T}$; we call these states $\mathtt{tile\_states}$. We also add states $\mathtt{token\_state\_up}$,
$\mathtt{token\_state\_left}$, $\mathtt{token\_state\_down}$ and $\mathtt{token\_state\_right}$ and we use $\mathtt{token\_states}$ to refer to any of these $4$ states.
We refer to any cell in a $\mathtt{token\_state}$ as the {\em token}. This token moves one cell counterclockwise at each time step and only one cell is in a $\mathtt{token\_state}$ at any given time.
At each time step, the cell in a $\mathtt{token\_state}$ moves one cell either up, left, down or right in an effort to traverse the {\em surfaces} of an existing
configuration, where a surface of a configuration is a maximal connected set of quiescent cells
that neighbor (using the Moore neighborhood) at least one non-quiescent state. (See Figure~\ref{fig:walkingToken}.) Note that a configuration may have many disjoint surfaces.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.2in]{images/walkingToken}
\caption{A token traversing the surface of a configuration. The surface of the configuration is denoted by light grey tiles. The cell labeled $T$ is in $\mathtt{token\_state\_left}$ as indicated by the arrow depicted on the cell.}
\label{fig:walkingToken}
\end{center}
\end{figure}
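For concreteness, here is a small Python sketch (our own illustration, not part of the construction) which computes the surfaces of a finite configuration: the quiescent cells Moore-adjacent to a non-quiescent cell, grouped into Moore-connected components:
\begin{verbatim}
from itertools import product

MOORE = [v for v in product((-1, 0, 1), repeat=2) if v != (0, 0)]

def surface_cells(nonquiescent):
    """Quiescent cells Moore-adjacent to some non-quiescent cell."""
    nq = set(nonquiescent)
    return {(x + dx, y + dy) for (x, y) in nq for (dx, dy) in MOORE} - nq

def surfaces(nonquiescent):
    """Partition the surface cells into Moore-connected components."""
    cells, components = surface_cells(nonquiescent), []
    while cells:
        stack, comp = [cells.pop()], set()
        while stack:
            (x, y) = stack.pop()
            comp.add((x, y))
            for (dx, dy) in MOORE:
                n = (x + dx, y + dy)
                if n in cells:
                    cells.discard(n)
                    stack.append(n)
        components.append(comp)
    return components

# a 1x2 bar of non-quiescent cells has a single surface of 10 cells
print(len(surfaces({(0, 0), (1, 0)})[0]))  # 10
\end{verbatim}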
At each time step, a cell in a $\mathtt{token\_state}$ can nondeterministically transition to a $\mathtt{tile\_state}$ if and only if the tile corresponding to this state could bind in the simulated aTAM system. This ensures that at any given time step, at most one cell transitions from a quiescent state to a $\mathtt{tile\_state}$. Figure~\ref{fig:ruleFromTileSet} shows an example of a local rule obtained from a tile set.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3in]{images/ruleFromTileSet}
\caption{{\bf (a)} A tile set consisting of $5$ tile types. Glues $b$ and $c$ have strength $2$ and $a$ glues have strength $1$. {\bf (b)} A local rule corresponding to the tile set in {\bf(a)}. Cells in the Moore neighborhood that are in $\mathtt{tile\_states}$ are labeled with the label of the corresponding tile type in {\bf (a)}. The cell labeled $T$ is in a $\mathtt{token\_state}$. Blank cells are quiescent.}
\label{fig:ruleFromTileSet}
\end{center}
\end{figure}
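A hedged sketch of the binding condition underlying such local rules, in Python (the glue encoding, tile names, and helper names are ours, not the figure's): a token cell may transition to the state for tile type $t$ only if the matching glue strengths of the lattice neighbors sum to at least $\tau$.
\begin{verbatim}
# glues[tile_type][direction] = (label, strength); (None, 0) means no glue
glues = {
    'A': {'N': ('b', 2), 'E': ('a', 1), 'S': (None, 0), 'W': (None, 0)},
    'B': {'N': (None, 0), 'E': (None, 0), 'S': ('b', 2), 'W': ('a', 1)},
}
OPP = {'N': 'S', 'E': 'W', 'S': 'N', 'W': 'E'}

def can_bind(t, neighbors, tau):
    """neighbors: dict direction -> tile type, or missing if quiescent."""
    total = 0
    for d in 'NESW':
        n = neighbors.get(d)
        if n is None:
            continue
        label, strength = glues[t][d]
        # glues bind only when label and strength both match
        if label is not None and glues[n][OPP[d]] == (label, strength):
            total += strength
    return total >= tau

print(can_bind('A', {'N': 'B'}, 2))  # True: strength-2 glue b matches
\end{verbatim}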
The idea is that the token can put cells in a $\mathtt{tile\_state}$ on the surface of a configuration. However, a configuration may have many disjoint surfaces.
Non-quiescent states of a configuration can break the $\mathbb{Z}^2$ lattice into disjoint regions of cells in quiescent states. For example, this can occur when the CA is simulating a tile set that assembles a frame, i.e. tiles around some square of empty tiles.
This leads to disjoint surfaces that the token must traverse. Therefore, care must be taken in order to allow the token to traverse surfaces separated by non-quiescent states. This is accomplished by adding a $\mathtt{bridge\_tile\_state}$ to $S$ for each tile type of the simulated aTAM system. The token is allowed to ``pass over'' these $\mathtt{bridge\_tile\_states}$. Passing over a cell in $\mathtt{bridge\_tile\_state}$ is done by adding a $\mathtt{bridge\_tile\_token\_state}$ to $S$ for each tile type in $T$ and each direction $\mathtt{up}$, $\mathtt{left}$, $\mathtt{down}$ and $\mathtt{right}$.
Figure~\ref{fig:tokenTracing} shows the token and its path as it traverses two surfaces of the configuration by crossing $\mathtt{bridge\_tile\_states}$. When transitioning to a $\mathtt{tile\_state}$ or $\mathtt{bridge\_tile\_state}$, we can determine which state to transition to by using Moore neighborhoods. For more details on how $\mathtt{token\_states}$ or $\mathtt{bridge\_tile\_states}$ work, see Section~\ref{sec:tokenBridgeDets}.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.2in]{images/tokenTracing}
\caption{Traversing two disjoint surfaces using a single token and $\mathtt{bridge\_tile\_states}$.}
\label{fig:tokenTracing}
\end{center}
\end{figure}
\later{
\section{Construction details CA simulation of aTAM system}\label{sec:casimtasdets}
\subsection{Local rules involving the token and bridge tile states}\label{sec:tokenBridgeDets}
Here we describe the local rules that allow the token to traverse the surface of a configuration and how cells may transition to a $\mathtt{bridge\_tile\_state}$.
The direction of each $\mathtt{token\_state}$ determines its future direction of ``movement'' so that a Moore neighborhood can be used to determine the direction of travel. Specifically, the direction of the state
refers to the relative position of the quiescent cell that will change to a $\mathtt{token\_state}$. Figure~\ref{fig:tokenRuleExample} shows a local rule for a neighborhood with a token.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3in]{images/tokenRuleExample}
\caption{In the Moore neighborhood, we can determine the direction of the next $\mathtt{token\_state}$.}
\label{fig:tokenRuleExample}
\end{center}
\end{figure}
To understand the need for $\mathtt{bridge\_tile\_states}$, notice that with our CA, the set of all points in $\mathbb{Z}^2$ that map to non-quiescent states forms a connected subgraph of the lattice.
This follows from the fact that only the token can transition to a $\mathtt{tile\_state}$.
In this case, we say that the configurations of the CA are {\em connected}. Figure~\ref{fig:mooreNeighborhood} shows a Moore neighborhood of a cell in a $\mathtt{token\_state}$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.2in]{images/mooreNeigborhood}
\caption{Transitioning the center cell to a $\mathtt{tile\_state}$ would divide the neighborhood's quiescent states. Therefore the center cell transitions to a $\mathtt{bridge\_tile\_state}$.}
\label{fig:mooreNeighborhood}
\end{center}
\end{figure}
Under the condition
that the configuration is connected we examine the $8$ cells of the Moore neighborhood around the center cell. If transitioning to a $\mathtt{tile\_state}$ results in dividing the quiescent points of the lattice restricted to the neighborhood into two disjoint subsets,
the center cell transitions to a $\mathtt{bridge\_tile\_state}$, otherwise it transitions to a $\mathtt{tile\_state}$. The algorithm to do this treats a cell already in $\mathtt{bridge\_tile\_state}$ like one in the quiescent state. Figure~\ref{fig:bridgeTileState} depicts a
time step where a cell in a $\mathtt{token\_state}$ transitions to a cell in a $\mathtt{bridge\_tile\_state}$. Note that transitioning to a $\mathtt{tile\_state}$ would ``trap'' the token, but transitioning to a $\mathtt{bridge\_tile\_state}$ allows the token to traverse multiple surfaces.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3in]{images/bridgeTileState}
\caption{{\bf(a)} A $\mathtt{token\_state}$ prior to transitioning to a $\mathtt{bridge\_tile\_state}$. {\bf (b)} The surface is split into two surfaces. The token must pass over a cell in $\mathtt{bridge\_tile\_state}$ to completely traverse both surfaces.}
\label{fig:bridgeTileState}
\end{center}
\end{figure}
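The decision just described can be phrased as counting maximal runs of quiescent-like cells (cells that are quiescent or already in a $\mathtt{bridge\_tile\_state}$) around the $8$-cell ring of the Moore neighborhood. A minimal Python sketch, under our own encoding assumptions (the {\tt RING} ordering and the names are hypothetical):
\begin{verbatim}
RING = [(-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0)]

def ring_runs(quiescent_like):
    """Count maximal runs of quiescent-like cells around the 8-cell ring."""
    flags = [cell in quiescent_like for cell in RING]
    if all(flags):
        return 1
    # number of False -> True transitions around the cycle
    return sum(1 for i in range(8) if flags[i] and not flags[i - 1])

def token_transition(quiescent_like):
    # a bridge state if tiling the center would split the surrounding
    # quiescent cells into more than one piece; else a plain tile state
    return ('bridge_tile_state' if ring_runs(quiescent_like) > 1
            else 'tile_state')

# two opposite quiescent cells -> two pieces -> bridge
print(token_transition({(0, -1), (0, 1)}))  # bridge_tile_state
\end{verbatim}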
}
If and when a path of cells in a $\mathtt{bridge\_tile\_state}$ no longer leads to quiescent states and the final quiescent state transitions to a $\mathtt{tile\_state}$, the token traverses the cells in $\mathtt{bridge\_tile\_states}$ as it continues its counterclockwise traversal of a configuration. Since the path of cells in $\mathtt{bridge\_tile\_states}$ no longer leads to any quiescent states, as the token traverses cells in $\mathtt{bridge\_tile\_states}$, these cells transition to $\mathtt{tile\_states}$ that
correspond to their $\mathtt{bridge\_tile\_state}$ counterparts. As a result, the token no longer unnecessarily traverses a path of cells in $\mathtt{bridge\_tile\_states}$ that would only lead to other cells in $\mathtt{bridge\_tile\_states}$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3in]{images/collapsingBridges}
\caption{{\bf (a)} The token prior to transitioning to a $\mathtt{tile\_state}$. {\bf(b)} Two time steps later, a cell in $\mathtt{bridge\_tile\_state}$ has transitioned to just a $\mathtt{tile\_state}$. {\bf(c)} The entire path of cells in $\mathtt{bridge\_tile\_states}$ has transitioned to the corresponding $\mathtt{tile\_states}$.}
\label{fig:collapsingBridges}
\end{center}
\end{figure}
An example of a CA simulating an aTAM system can be found at {\url{http://self-assembly.net/CASimTAS}}. There are also instructions for creating a CA based on an aTAM system.
\later{
\subsection{Proof of Correctness}\label{sec:pocCAsimTAS}
Let $\mathcal{T} = (T,\sigma, \tau)$ be an aTAM system. First we show that given an aTAM system, the construction in Section~\ref{sec:caconstruction} can be used to give a CA that simulates
$\mathcal{T}$. Let $\mathcal{A}$ be the cellular automaton obtained by the construction. In other words, let $\mathcal{A}$ be the CA with states $\mathtt{tile\_states}$, $\mathtt{bridge\_tile\_states}$ and $\mathtt{bridge\_tile\_token\_states}$ corresponding to tile types of $\mathcal{T}$, as well as the $4$ $\mathtt{token\_states}$.
We can take the rescaling $\mathcal{A^{\prime}}$ to be
the trivial rescaling of $\mathcal{A}$. In other words, we take $\mathcal{A^{\prime}}$ to just be $\mathcal{A}$.
Then we take the representation function $R$ to be the partial function that maps a cell with state $\mathtt{tile\_states}$, $\mathtt{bridge\_tile\_states}$ or $\mathtt{bridge\_tile\_token\_states}$ to a tile with
tile type that corresponds to the state representing this particular tile type.
The initial configuration $c_0$ of $\mathcal{A}$ can be obtained from $\sigma$ by first
mapping each point in ${\rm dom} \;{\sigma}$ to the corresponding $\mathtt{tile\_state}$. Then, since in general non-quiescent cells must be connected, but could divide quiescent cells into disjoint regions of the lattice,
we connect any disjoint regions of connected quiescent cells by paths of non-quiescent cells. (Diagonally adjacent quiescent cells are not considered connected.) Then we change the states of the cells along each such path to the corresponding $\mathtt{bridge\_tile\_states}$. Figure~\ref{fig:initConfig} gives an example of changing states along paths connecting disjoint regions of quiescent cells to $\mathtt{bridge\_tile\_states}$. Finally, we put a
cell just above a cell in a $\mathtt{tile\_state}$ in $\mathtt{token\_state\_left}$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=4.5in]{images/initConfig}
\caption{An initial configuration obtained from an initial assembly before {\bf(a)} and after {\bf(b)} changing states along paths connecting disjoint regions of quiescent cells to $\mathtt{bridge\_tile\_states}$. Grey cells denote cells in any $\mathtt{tile\_state}$ while cells labeled $B$ denote cells in $\mathtt{bridge\_tile\_states}$. The cell labeled $T$ is in $\mathtt{token\_state\_left}$.}
\label{fig:initConfig}
\end{center}
\end{figure}
Now, to see that $\left(\mathcal{A}, c_0\right)$ simulates $\mathcal{T}$, notice that the token enforces that at most a single cell of a configuration of $\mathcal{A}$ transitions to
$\mathtt{tile\_state}$, $\mathtt{bridge\_tile\_state}$ or $\mathtt{bridge\_tile\_token\_state}$. Therefore applying the global rule to a configuration of $\mathcal{A}$ results in a configuration where either the token has moved, or the token has moved and a quiescent state has transitioned to a state representing a tile. In the former case, the configurations before and after application of the global rule represent the same assembly. In the latter case, letting $\alpha$ be the assembly represented by the configuration $c$ prior to applying the global rule, we see that any configuration in $G(c)$ represents an assembly obtained by adding a single tile of the necessary type to $\alpha$.
In other words, single state changes from the quiescent state to a state representing a tile type correspond to additions of single tiles in the aTAM system. Hence, $\mathcal{T}$ {\em follows} $\left(\mathcal{A}, c_0\right)$. Likewise, any possible single tile binding in $\mathcal{T}$ corresponds to some possible transition of a
quiescent state to a state that represents the tile type of the binding tile in the corresponding location (with perhaps several transitions which only move the token into the correctly corresponding location). Therefore, $\left(\mathcal{A}, c_0\right)$ {\em models} $\mathcal{T}$. This shows Lemma~\ref{lem:casim}.
To show Theorem~\ref{thm:caiu}, let $\mathcal{T}$ be an arbitrary aTAM system and let $\mathcal{U} = (U,\sigma_{\mathcal{T}},2)$ be the aTAM system that uses the tile set of \cite{IUSA}, which is intrinsically universal for the aTAM, to simulate $\mathcal{T}$ under $R_{t}$. Now let $\mathcal{A}$ be the CA which simulates $\mathcal{U}$ under $R_{a}$ as in the proof of Lemma~\ref{lem:casim}.
Then note that $\sigma_{\mathcal{T}}$ gives rise to an initial configuration $c_0$ of $\mathcal{A}$. With this initial configuration, $R_{a}^*$ followed by $R_{t}^*$ maps configurations of $\mathcal{A}$ with initial configuration $c_0$ to assemblies of $\mathcal{T}$. This composition of maps gives a representation function that shows that $\left(\mathcal{A}, c_0\right)$ simulates $\mathcal{T}$.
}
\section{An aTAM Tile Set Which Can Simulate Any Nondeterministic CA}\label{sec:TAsimCA}
\begin{theorem}\label{thm:TAMsimIU}
There exists an aTAM tile set $U$ which is able to simulate the entire class of nondeterministic CA systems with finite initial configurations. %
\end{theorem}
Theorem~\ref{thm:TAMsimIU} states that there is a single tile set $U$ in the aTAM such that, for any arbitrary nondeterministic CA $\mathcal{C}$ with a finite initial configuration, $U$ can be used to form a TAS $\mathcal{U} = (U,\sigma_{\mathcal{C}},2)$ which is dependent upon $\mathcal{C}$, where the seed $\sigma_{\mathcal{C}}$ encodes information about $\mathcal{C}$ and its initial configuration, so that $\mathcal{U}$ simulates $\mathcal{C}$. In order to prove Theorem~\ref{thm:TAMsimIU}, we will progress in two steps, first proving the following Lemma.
\begin{lemma}\label{lem:TAMsimGOL}
Let CA $\mathcal{A}$ be Conway's Game of Life. There exists an aTAM tile set $U$ and a scalable representation function $R$ such that, given $c_0$ as an arbitrary but finite initial configuration of $\mathcal{A}$, there exists an aTAM TAS $\mathcal{T} = (U, \sigma_{c_0}, 2)$ such that $\mathcal{T}$ simulates $\left(\mathcal{A}, c_0\right)$.
\end{lemma}
Lemma~\ref{lem:TAMsimGOL} states that there exists a single tile set $U$ in the aTAM which can be used to simulate the Game of Life CA given any finite initial configuration. We now present a construction to prove this.
\subsection{Overview of construction to prove Lemma~\ref{lem:TAMsimGOL}}
The system $\mathcal{T} = (U,\sigma_{c_0},2)$ will be designed so that the seed is a single line of tiles which encodes the initial configuration, $c_0$, of $\mathcal{A}$. Assume that all non-quiescent cells within $c_0$ can fit into an $n \times n$ square. (Throughout this discussion, we will refer to a \emph{cell} as exactly one of the cells of $\mathcal{A}$ and the \emph{grid} as the full set of cells being simulated at a given time. A \emph{step} refers to a single time step of $\mathcal{A}$ and a \emph{stage} refers to the assembly representing the entire grid at a particular step.) The encoding of the initial configuration consists of a listing of the states of each of the $n^2$ cells within that box. Since it is possible that, at each time step $0<t<\infty$, a cell which was previously quiescent and which was just outside the boundary of the currently simulated grid switches its state to a non-quiescent value, to accurately simulate the full behavior of $\mathcal{A}$ we must simulate an increasingly larger grid at each time step. In order to assure that no (non-quiescent) behavior of $\mathcal{A}$ could occur beyond the bounds of our simulation, at each stage we increase the dimensions of the grid by $2$, adding a row of cells to each of the top and bottom, and a column to each of the left and right. We say that we perform a recursive, ``in-place'' simulation of $\mathcal{A}$, namely one in which every subassembly which maps to a single cell at some time step $t$ contains within it, at smaller scale factors, the entire configuration of $\mathcal{A}$ at \emph{every} time step $t'<t$ (recursive), and also that the subassembly mapping to any single cell at any time step $t$ is contained within an infinite hierarchy of subassemblies which each map to a unique cell at some time step $t'$ where $t < t' < \infty$, i.e. each simulated cell and grid is fully encapsulated within the simulation of a single cell at the next greater time step (in-place).
See Figures~\ref{fig:TASsimCA-stage1done} and \ref{fig:TASsimCA-stage2done} for high-level depictions of the simulation of time steps $0$ (the initial configuration of $\mathcal{A}$) and $1$ (the first transition of $\mathcal{A}$). Details of the construction can be found in Section~\ref{sec:TASsimCA-append}.
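For reference, the cell-level dynamics being simulated are just the standard Game of Life rule applied to a grid that gains a ring of quiescent cells at each step. A minimal Python sketch of one such step (ours, independent of the tile-level machinery):
\begin{verbatim}
def gol_step(grid):
    """One synchronous GoL step on an n x n 0/1 grid, returning an
    (n+2) x (n+2) grid padded with a ring of quiescent cells."""
    n = len(grid)
    padded = [[0] * (n + 2) for _ in range(n + 2)]
    for y in range(n):
        for x in range(n):
            padded[y + 1][x + 1] = grid[y][x]
    new = [[0] * (n + 2) for _ in range(n + 2)]
    for y in range(n + 2):
        for x in range(n + 2):
            live = sum(padded[y + dy][x + dx]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dx, dy) != (0, 0)
                       and 0 <= y + dy < n + 2 and 0 <= x + dx < n + 2)
            # birth on 3 live neighbors, survival on 2 or 3
            new[y][x] = 1 if live == 3 or (padded[y][x] and live == 2) else 0
    return new

# a blinker oscillates with period 2 while the grid grows by 2 each step
g = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
print(len(gol_step(g)))  # 5
\end{verbatim}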
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5in]{images/TASsimCA-stage1done}
\caption{Completed formation of the representation of time step $0$.}
\label{fig:TASsimCA-stage1done}
\end{center}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=5.0in]{images/TASsimCA-stage2done}
\caption{Completed growth of the second stage. Blue squares depict the representations of the cells of that stage.}
\label{fig:TASsimCA-stage2done}
\end{center}
\end{figure}
\vspace{-10pt}
\later{
\section{Construction details for aTAM simulation of CA}\label{sec:TASsimCA-append}
To show how our construction works, we will break it down into a series of modules, describing how each works independently, and then describe how they are all combined for the full construction. (Note that these modules consist mainly of tiles which assemble to perform well-known primitives, such as counting and rotating values, used in various aTAM constructions. Therefore, many of the details of these modules are omitted except where relevant modifications are made to the commonly used versions. The reader is encouraged to see \cite{IUSA} for additional details about many such primitives.)
Note that $\mathcal{A} = (\mathbb{Z}^2, S, N, \delta)$ uses the Moore neighborhood, i.e. $\{(x,y)|x,y\in\{-1,0,1\}\}$, and that $S = \{0,1\}$. Let $n \in \mathbb{N}$ be the dimension of a square bounding box which completely encloses all cells in $c_0$ which are not quiescent, along with at least one ring of quiescent cells around the perimeter,
then let $c'_0$ be the $n \times n$ square of cells contained within that box. From here onward, we will refer to $c'_0$ as the initial configuration for the CA system being simulated.
\subsection{Seed configuration}
We construct $\sigma_{c_0}$, the seed assembly for $\mathcal{T}$, as a single column of tiles. Recall that $n$ is the dimension of $c'_0$, and note that it requires $n^2$ tiles to encode $c'_0$. The eastern glues of $\sigma_{c_0}$, starting from the bottom and moving up, encode $c'_0$ by encoding the states of the bottom row of cells in $c'_0$ from left to right, followed by the row in $c'_0$ directly above that, etc., fully encoding the $n \times n$ block of cells in $c'_0$ in a line. For each position representing a cell which is the leftmost cell in its row of $c'_0$, a special `$*$' marker is added, and for each border of the $n\times n$ block which the cell is on, a corresponding arrow marker is added. An example can be seen in Figure~\ref{fig:TASsimCA-seed}.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.5in]{images/TASsimCA-seed}
\caption{(Left) An example $3\times3$, or $n=3$, initial configuration $c'_0$ with each cell given a uniquely identifying letter for identification purposes, (Right) The row of tiles encoding $c'_0$, showing the ordering of the representation of cells in relation to $c'_0$.}
\label{fig:TASsimCA-seed}
\end{center}
\end{figure}
Below the encoding of $c'_0$ are the portions of the seed assembly labeled ``counter'' and ``spacer'' in Figure~\ref{fig:TASsimCA-seed}. The ``counter'' portion contains the number $n-1$ encoded in binary which will be used as the maximum value for assembling counters, and the ``spacer'' portion simply contains some spacer tiles which are used to ensure that the full height of the ``counter'' + ``spacer'' portions is equal to $n + 2 + \lambda$, where $\lambda$ is equal to 2 for this version of the construction.
\subsection{Initial seed growth}\label{sec:seedGrowth}
The growth from the seed column $\sigma_{c_0}$ occurs in three ways (which are all represented in Figure~\ref{fig:TASsimCA-seed-to-block}). The portion which grows to the east along the bottom (beginning from the ``counter'' section) contains a set of $3$ nested binary counters, $ctr_0$, $ctr_1$, and $ctr_2$, which each start at $0$ and count to a maximum value of $n-1$ as follows: $ctr_0$ increments at every column, $ctr_1$ increments every time $ctr_0$ reaches the value $n-1$ (at which point $ctr_0$ resets to $0$ and continues counting), and $ctr_2$ increments every time $ctr_1$ reaches the value $n-1$. In this way, once $ctr_1$ has counted to $n-1$, the full length traveled is $n^2$. This type of growth is possible since the counter grows in a standard zig-zag manner, and it passes forward the encoded value of $n-1$ which it uses to compare against current counter values. Additionally, while this segment is growing to the east, it also rotates downward the information encoded in its initial row (i.e. the maximum counter value and spacing). The north surface of the counter provides a base across which the representation of $c'_0$ grows, to the east. The west side of the seed column initiates growth which rotates the counter and spacer values clockwise, while forming a square with a specially marked corner (shown as grey in Figure~\ref{fig:TASsimCA-seed-to-block}) for reasons to be discussed later. The north facing counter then grows in the same way as the east facing counter. The southern counter grows at a width of $n+2+\lambda$, and the western counter's width is $n+1+\lambda$, which are both much wider than necessary for the counters themselves, for reasons which will be discussed later. Once they have reached the distance $n^2$, the counters ``pause'' counting while a square of dimension $n+2+\lambda$ forms.
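A small Python sketch (ours) of the bookkeeping performed by the three nested counters; it only tallies columns, but makes the $n^2$-per-$ctr_2$-value accounting explicit:
\begin{verbatim}
def nested_counter_columns(n):
    """Total columns swept by the three nested counters, each 0..n-1."""
    cols = 0
    for ctr2 in range(n):
        for ctr1 in range(n):
            for ctr0 in range(n):
                cols += 1      # ctr0 increments at every column
        # here ctr1 has reached n-1: n*n columns traveled for this ctr2
        # value, and the counters "pause" while a square forms
    return cols

print(nested_counter_columns(3))   # 27: n^2 columns per ctr2 value, n values
\end{verbatim}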
\begin{figure}[htp]
\begin{center}
%
{\subfloat[{\scriptsize Counters grow along the south and west, while the values of $c'_0$ move east.}]
{\label{fig:TASsimCA-seed-to-block}\includegraphics[height=2.3in]{images/TASsimCA-seed-to-block}}}
\quad\quad
{\subfloat[{\scriptsize Subsequent growth which ``selects'' the values from the first column of $c'_0$.}]
{\label{fig:TASsimCA-seed-to-block2}\includegraphics[height=2.3in]{images/TASsimCA-seed-to-block2}}}
\caption{The initial growth of the seed assembly shown in Figure~\ref{fig:TASsimCA-seed}.}
%
%
\end{center}
\end{figure}
\subsection{Completion of initial stage}
The box on the bottom right of Figure~\ref{fig:TASsimCA-seed-to-block} initiates the growth of a series of zig-zag, upward and downward growing columns which 1) propagate the encoding of $c'_0$ to the right while shifting the ``$*$'' marks up by one position (thus marking the locations of the cells in the second column), and 2) copy all values with ``$*$'' marks (before the shift) to the top row. Note that these are guaranteed to fit with no more than one value per column since the width of the rectangle is that of the outer counters which is $n+2+\lambda$ and there are only a total of $n$ values for any row. The square on top of the western counter initiates the growth of a rectangle which reaches the location where the remaining values for the first column of $c'_0$ (i.e. $d$ and $g$ in the example) are located. This allows a square to form which selects the first value (in the example, $a$) as the value representing this simulated cell.
At this point, the southern and western counters are both able to continue growing, with their $ctr_0$ and $ctr_1$ values reset to $0$ while their $ctr_2$ values are both incremented to $1$. Further growth similar to the pattern of growth up to this point continues until the $ctr_2$ counters have each reached their maximum value, resulting in the full formation of the assembly representing the initial configuration of $\mathcal{A}$, as shown in Figure~\ref{fig:TASsimCA-stage1done}. It can be seen how, at this point, the values for each of the grid locations in $c'_0$ have been selected in regions corresponding to their locations in $c'_0$ (highlighted in green) by performing simple shifting of markers across the values propagated through the \emph{fibers}.
We refer to the rectangular portions and the smaller squares as \emph{fiber}. The fibers on the western and southern sides of the assembly of any given stage are called \emph{boundary fibers}, while the rest are called \emph{stage fibers}. The bottom row of upward growing stage fibers perform the column selection and shifting of the ``$*$'' marks as previously discussed. The upper rows simply select the leftmost marked values which are passed to them and propagate all remaining values upward while shifting the mark by one position. Each horizontal fiber collects all values for its row, picking them up one at a time from left to right, getting one from each square where it crosses with a vertical fiber. See Figure~\ref{fig:TASsimCA-horiz-fiber} for an example. It does this by designating each of its rows to carry the value of one cell in that row, with the top and bottom hardcoded to each represent a cell in the quiescent state (and note that when these are added they are initialized to contain the necessary markers denoting whether they are at the left or right side of the grid - see Figure~\ref{fig:TASsimCA-seed} for an example of the arrow markers). Since it will gather the $n$ cell values for the current stage, the addition of a quiescent value to the bottom and top simulates the addition of an extra cell on the left and right side of this row in the grid for the next stage (since this listing of cell values will be used to compute the values for the next stage), and the height of this fiber is $n+2+\lambda$, ensuring that all values will fit with one per row. The horizontal fiber grows in a zig-zag manner, using the square below it to guide it until it reaches an intersection with a vertical fiber. Through the square of the intersection, the columns grow just in the upward direction, propagating all of the information about the cell values for that column upward while using cooperation from the west to propagate the ``$*$'' marker one position to the right, and to also collect that value all the way to the top right of the square. After leaving the intersection, it resumes its zig-zag growth, which allows it to shift all collected cell values for that row down by one (other than the top and bottom values, which stay in fixed positions).
The horizontal fiber provides for a portion of the addition of a ring of quiescent cells for the next stage. To handle the addition of the new bottom row, the boundary fiber creates a similar list of values, with all set to quiescent and containing the necessary markings for the edges of the grid that they are on. To handle the addition of another row on the top, the next stage will insert quiescent values during the distribution of cell values. The squares at the intersections of stage fibers all grow using cooperation between the two fibers, in order to preserve the information from both fibers and position it appropriately. The result of the growth of all of the stage fiber is that each simulated cell has the representation of the correct value, plus the horizontal fibers at their eastern edges contain a complete representation of all cell values at this stage of the simulation. Note that the shaded grey squares are simply filled by generic ``filler tiles'' which carry no specific information.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5in]{images/TASsimCA-horiz-fiber}
\caption{Growth of a horizontal fiber, beginning from its initiation and growing through two intersections with vertical fibers.}
\label{fig:TASsimCA-horiz-fiber}
\end{center}
\end{figure}
The entire top row of fiber of a stage is specially designated and grows to a height of $1$ less than the others as it grows from left to right. The completion of the first stage occurs when the top rightmost square (that representing cell $i$ in Figure~\ref{fig:TASsimCA-stage1done}) has its top rightmost corner completed. Once this tile is placed, it allows for the attachment of the tile shown in black, which initiates a row of tiles which grow to the left edge of the assembly, and then down the entire left side. Note that since both the top and left fibers were narrower than the other fibers by a single tile, they are now the same widths, creating a perfectly square assembly for the representation of the initial stage.
\subsection{Growth of subsequent stages}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.5in]{images/TASsimCA-stage2}
\caption{Beginning growth of the second stage.}
\label{fig:TASsimCA-stage2}
\end{center}
\end{figure}
To begin growth of the next stage, upon reaching the southwest corner of the assembly representing the first stage, growth is initiated along both the south and west sides which copies the counter and spacer information from the counters there, and then adds $2$ to the maximum counter value (since the dimensions of the cells simulated by this stage will increase by $2$ to add a new perimeter of quiescent cells to handle any growth of the active configuration). This results in the new boundary fiber for the next stage, and these counters increment each time they reach a square representing the maximum length of the boundary fiber for the previous stage. Upon reaching that distance, the bottom boundary fiber creates a square which initiates upward growth of a special stage fiber. While quite similar to the corresponding stage fiber of the previous stage, rather than just extracting the necessary values for the first column of this stage, this fiber embeds the functionality of the ``transition computation gadget.'' This functionality is described in detail in Section~\ref{sec:TASsimCA-comp-gadg}, but essentially it is used to compute the new values of the cells in the current column. Other than the fact that the first row of vertical stage fibers for each stage uses the transition computation gadget rather than just cell value selection as for the first stage, the rest of the information propagation through the fibers is the same.
The vertical stage fiber is able to use the cell values exposed by the horizontal fibers of the last stage to compute the new cell values, with each such horizontal fiber exposing the values of one row of cells. The leftmost vertical stage fiber places the ``$*$'' marker on the bottom value of each, which represents the rightmost cell of each row. It then performs the computation of new values and passes them upward by rotating them up and right, with the ``$*$'' remaining on the leftmost. Note that there is room to accommodate all $n$ values after performing the computation because the computation requires two columns and the width of the fiber is $n+2$. While doing this, the vertical fiber also passes all of the values from the horizontal fibers of the previous stage through from left to right, while moving the ``$*$'' marker for each up by one position. This makes the values available for the computation of the next column's values by the next vertical fiber, with the ``$*$'' markers in the correct positions.
When a vertical fiber reaches the topmost row of simulated cells for the current stage, it will insert a new row of quiescent-valued cells at the top of the grid for this step, and mark them as the top cells of the grid. The fact that the values of these cells were not included in the computation of cell values for this stage cannot result in incorrect computation due to the fact that the initial configuration $c'_0$ was created with a perimeter of quiescent cells, and growth at every step has ensured that there is always a buffer of quiescent cells which can be assumed during the computation.
The vertical fiber is able to determine how high to grow by detecting the marker from the top right corner of the previous stage. It is able to determine which information is from the immediately previous stage (in the case of advanced stages where there are stage fibers from multiple previous stages available) because, as a vertical fiber for one stage copies across the values for the fibers of previous stages, if it is the last vertical fiber for its stage (which can easily be determined by the boundary fiber which initiates its growth) it marks the information it is copying across from all other fibers as ``copy-only'', meaning that that information no longer participates in the computation of new cell values. The vertical fibers within the square representing the computation of values for a given stage copy all information from previous fibers upward and to the right. As the next stage forms, all such information is copied into and through the neighboring cells. If it is received from below, it is copied up and to the right, and if it is received from the right it is only copied to the right.
This system allows fibers of each stage to pass along the fibers and values from all previous stages, and only when they are vertical fibers of the initial formation of a stage do they perform computation of new cell values. Otherwise, fibers are simply nested copies carrying the initial computation of the cell values of their stage, which are passed around through the cells of later stages. This ensures that the value for each cell at a particular time step is only computed once (which is necessary in the case of the simulation of a nondeterministic CA, to be discussed), even though it is represented (for that cell and time step) in an eventually infinite number of cells (as recursive copies of all previous configurations leading up to each cell). Thus, each cell of the computation at time step $n$ contains the full configuration history of $\mathcal{A}$ for steps $0$ through $n-1$.
\subsection{Transition computation gadget}\label{sec:TASsimCA-comp-gadg}
We now define the \emph{transition computation gadget}, which is used to execute the transition function which computes the new value of a column of cells given the current values of all cells in the grid. This consists of a pair of vertical columns which grow up then down along the length of a column of tiles which expose the values of the simulated cells from the previous stage. These will be contained in the ends of the fibers of the immediately previous stage, with each fiber containing the values of exactly one row, in row order of left to right. Further, the values of any cells which are on the boundary of the grid will be marked accordingly. For the first (leftmost) transition computation gadget of each stage, the bottom values of each row will be assumed to implicitly have a ``$*$'' mark, and this mark will be shifted upward by one position for use by each subsequent transition computation gadget. The purpose of the first, upward growing column is to gather the values of the following cells of the neighborhood of each cell marked with a ``$*$'': $\{(-1,-1),(0,-1),(1,-1),(-1,0)\}$. (Recall that the locations marked with a ``$*$'' represent the values for the cells in the current column.) For clarity, we will now explain how it does this for a single ``double marked'' location (i.e. just for the case of this explanation, one of the locations marked with a ``$*$'' is also given another mark, say ``$+$''), as the process can easily be extended to simultaneously handle all of the ``$*$'' marked tiles but the explanation is a bit clearer when focusing on one of them. Computation for the single position is accomplished by the upward growing column ``remembering'' the value of the last cell encountered until it encounters a `$*$', then remembering the last cell, the cell with the `$*$', and the next cell. It then continues by remembering those three and the last encountered cell until it either encounters another `$*$', at which point it forgets the last group of 3 and starts over, or it encounters the specially ``double marked'' cell. Once it reaches that cell, following the scheme outlined, it will have arrived carrying the necessary information for the four neighbors. (Note that when tiles record the fact that a cell is on the border of the grid, then the quiescent value can be substituted for the missing neighbor cell or cells.) The upward growing column completes, then initiates the downward growing column which similarly gathers the value of the remaining four neighbors. The correct location of the `$*$' markings is crucial for this to occur correctly. All neighborhood information can be gathered since only a constant amount of information must be contained in any given tile. See Figure~\ref{fig:TASsimCA-transition-gadget} for two examples of the values for a cell's neighborhood being gathered. Again, note that the process can be easily extended so that all cells marked with ``$*$'' are computed in the same two rows by just gathering and ``dropping off'' the information at all relevant locations, while still retaining the need for only a bounded number of cell states to be remembered by any single tile, regardless of the size of the grid and thus the number of marked tiles at any time.
Once the downward growing column reaches the height of the cell to transition, the glues adjacent to that location contain the information about the entire neighborhood. By having one tile type for each possible set of neighborhood values, namely all $2^9$ combinations of $0$'s and $1$'s for the $9$ cells in a neighborhood, the tile set is designed to allow for the placement of exactly one tile type, which correctly represents the value of that cell if it executed its transition function.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2.5in]{images/TASsimCA-transition-gadget}
\caption{The functioning of the transition computation gadget. (Left) The example grid being considered, (Middle-left) The neighborhood surrounding a cell and for each neighbor, an arrow indicating whether the upward or downward growing column is used to collect its value. (Middle-right) Gathering the neighborhood values for cell $i$. Note that since $i$'s column is on the left side of the grid, the markings on the cells of that column (previously discussed but not shown here) allow the quiescent value to be used for the cells currently off of the simulated grid. (Right) Gathering neighborhood values for cell $j$.}
\label{fig:TASsimCA-transition-gadget}
\end{center}
\end{figure}
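The essence of the gathering pass can be distilled into a short Python sketch (ours; it handles all ``$*$''-marked positions in one pass, as in the extended scheme mentioned above, and keeps only $O(1)$ state per step, mirroring the constant amount of information carried by any given tile):
\begin{verbatim}
def gather_triples(values, starred):
    """One upward pass over a flat list of cell values (bottom to top).
    For each index in `starred`, collect (previous, starred, next) using
    only O(1) carried state per step, as the growing column does."""
    triples, carry, prev = {}, None, None
    for i, v in enumerate(values):
        if carry is not None:
            idx, partial = carry
            triples[idx] = (*partial, v)   # the cell after the starred one
            carry = None
        if i in starred:
            carry = (i, (prev, v))
        prev = v
    if carry is not None:                  # starred cell at the very top
        idx, partial = carry
        triples[idx] = (*partial, None)
    return triples

vals = ['a', 'b', 'c', 'd', 'e', 'f']
print(gather_triples(vals, {1, 4}))  # {1: ('a','b','c'), 4: ('d','e','f')}
\end{verbatim}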
\subsection{Correct simulation}
The scalable representation function $R$ which testifies to the simulation of $\left(\mathcal{A}, c_0\right)$ by $\mathcal{T}$ works as follows. Given a time step $t \in \mathbb{N}$, it is able to find the scale factor at which the configuration $c \in G^t(c_0)$ is represented by first finding the scale factor of the first stage ($t=0$), which is $n^3 + (n+1)(n+2+\lambda)$ where $n$ is the dimensions of $c'_0$. Let $w_t = n + (2+\lambda)(t+1)$ be the width of the fiber at step $t$, and we can recursively define the dimensions at step $t$ as $d_t = (n+1)w_t + nd_{t-1}$. After computing $d_t$, $R_{d_t}$ can simply inspect the given assembly $\alpha$ to determine if the completed square of the necessary dimensions exists. If so, it can use the computed dimensions to find the locations of the square intersections of stage fibers where the cell values for each simulated cell are marked to determine the states of all cells in the grid being simulated for step $t$. All other cells of $\mathcal{A}$ must be in the quiescent state. If not, $R_{d_t}$ is undefined, which ensures that the representation of the CA proceeds in a synchronous manner, with each time step being defined only once all cells states have been computed. For $t$ in $\mathbb{N}$, scalable representation function $R$ and function $f(t) = d_t$, we can see that the conditions of simulation given in Definition~\ref{def:tassim} hold. As an interesting side effect, all simulated cells for time step $t$ contain the entire computational history of $\mathcal{A}$ for time steps $0$ through $t-1$.
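A small Python sketch (ours) of the scale-factor recursion just given, useful for sanity-checking the dimensions at which $R_{d_t}$ looks for a completed square:
\begin{verbatim}
def dims(n, t, lam=2):
    """Scale factor d_t of the stage-t square, per the recursion above."""
    d = n**3 + (n + 1) * (n + 2 + lam)   # d_0, the first stage
    for s in range(1, t + 1):
        w = n + (2 + lam) * (s + 1)      # fiber width w_s
        d = (n + 1) * w + n * d          # d_s = (n+1) w_s + n d_{s-1}
    return d

print(dims(3, 0), dims(3, 1))            # 55 209 (for n = 3, lambda = 2)
\end{verbatim}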
} %
\subsection{Overview of construction to prove Theorem~\ref{thm:TAMsimIU}}
Now that we have defined the above construction for a tile set which can simulate the Game of Life CA given an arbitrary finite initial configuration, we sketch the necessary extensions to provide a tile set which can simulate \emph{any} synchronous nondeterministic CA.
Let $\mathcal{C} = (\mathbb{Z}^2, S, N, \delta)$ be an arbitrary synchronous nondeterministic CA, $d$ be the maximum unit distance of any element of $N$ from the center position (i.e. the distance of a cell's furthest neighbor in its neighborhood), $b$ be the number of bits required to represent $S$, and $c_0$ be the initial configuration of $\mathcal{C}$. Now, define $M$ as a nondeterministic Turing machine which takes as input the encoding of a synchronous nondeterministic CA (in some standard encoding) and the representation of a grid of cells in the same format they are represented in the previous construction (i.e. as they appear in the seed assembly or along the western edges of the stage fibers of a given stage), with the only difference being that the states are not now restricted to only single bits, but instead may be series of bits (with delimiters between the bits representing the states of different cells), and outputs the new cell state for the one cell marked with ``$*$'' and ``$+$''. Note that $M$ must be nondeterministic to simulate $\mathcal{C}$, and in order to randomly select from a set of $s$ possible states it simply chooses the bits of a binary number $i$ between $0$ and $s-1$ (the random choice of bits will be performed by the nondeterministic attachment of one of two tiles in each bit position) and then selects the $i$th of the possible states. For details on how to approach uniform distribution across the selection of possible choices, various gadgets of increasing but bounded space complexity can be used (see \cite{RNSSA}). Define $r$ to be the longest running time of $M$ when given any single neighborhood $S^{|N|}$ for $\mathcal{C}$, and let $m$ be the maximum amount of tape space used. Note that both $r$ and $m$ can be easily (if exhaustively) determined by simply running $M$ for each of the $S^{|N|}$ possible neighborhoods.
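A Python sketch (ours) of the bit-guessing selection performed by $M$; here out-of-range guesses are simply rejected and retried, whereas the gadgets cited above instead use bounded extra space to approach a uniform distribution:
\begin{verbatim}
import random

def pick_state(possible):
    """Nondeterministically select one of the possible next states by
    guessing the bits of an index i in [0, s-1]; each bit guess models
    the nondeterministic attachment of one of two tiles."""
    s = len(possible)
    k = max(1, (s - 1).bit_length())     # bits needed to cover 0..s-1
    while True:
        i = sum(random.getrandbits(1) << j for j in range(k))
        if i < s:                        # reject and retry if out of range
            return possible[i]

print(pick_state(['s0', 's1', 's2']))
\end{verbatim}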
Now we adjust the previous construction as follows. To create $c'_0$ from $c_0$, we do as before, but we add an additional $d$ rings of quiescent cells around $c'_0$ to account for the fact that $\mathcal{C}$ may have an arbitrarily large neighborhood and we want to simulate enough quiescent cells around the border of our grid at all times to ensure that we are simulating all non-quiescent cells. In the seed assembly, in place of the ``spacer'', encode the definition of $\mathcal{C}$ (in the encoding used by $M$) plus $r+m$ spacer tiles to provide enough space for $M$'s tape and running time (the space provided here will ensure that rotations of these values provide the necessary space throughout the assembly). In place of the transition computation gadget, $M$ will be run. In order to do this, whenever the boundary fiber reaches a point at which it would have formerly grown a square which initiates the transition computation gadget, instead of growing the square, $M$ is simulated in a standard zig-zag manner. Additionally, $M$ only computes the transition for a single cell (beginning with the bottom one marked with ``$*$'') and then passes the newly computed cell value along with the rest of the (unchanged) cell values for that column upward. As before, all needed information is also passed through the simulation of $M$ to the right. Now, rather than just receiving the new cell values and passing them along, the squares at the intersections of vertical and horizontal stage fibers also execute $M$, whose definition is passed in from the west via the western boundary fiber. This allows all of the same information flow, but splits up the computations of new cell values in such a way that each simulated cell computes at most a single new cell value, which can be done within the time and space bounds, $r$ and $m$. In order to provide consistency of scale, a counter is embedded within the running of $M$ so that, in the case of computations which require variable amounts of time relative to each other, the counter ensures that $r+m$ space is always used. Again, as in the previous construction, once the value for a cell at a particular time step is computed, it is continually passed along into all other representations of the same cell within other larger cells, but never recomputed.
Finally, the representation function $R$ now must be adjusted to take into account the fact that cell states are now binary strings, and also take into account the new scaling factor due to the padding provided to run $M$. Nonetheless, this construction retains the same properties as the previous one, in terms of completing each stage before beginning the next.
\section{Introduction}\label{sec:intro}
Mathematical models of systems composed of large, distributed collections of simple components which are guided by only local interactions between neighboring elements have demonstrated the rise of emergent complexity, and provided enormous insight into the behaviors of many naturally occurring systems, while also guiding the modeling and development of complex artificial systems. Two such notable models are cellular automata (CA) and the abstract Tile Assembly Model (aTAM). In this paper, we seek to explore the relationship between these two models.
Introduced by Stanislaw Ulam and John von Neumann in the 1940's, CA consist of an infinite grid of cells which can each sense their immediate neighborhoods and then all independently but synchronously update their states based on a finite set of rules and the state of their neighborhoods. Since their introduction, CA have provided a rich theoretical framework for studying the power of systems governed by local interactions. Much of that study has included classifications of the relative powers of various CA systems with differing neighborhoods and rules governing their state changes, and a large amount of this classification has been the result of various demonstrations of the ability of one system to simulate another, including very importantly the definitions of what it means for one system to simulate another. A key notion developed during this study was that of intrinsic universality \cite{AlbertCulik87,DurandRoka89,DelormeMOT11,DelormeMOT11a,Ollinger-CSP08,Ollinger-STACS03,Goles-etal-2011,ArrigSchabThey}, %
which was designed to capture a strong notion of simulation, in which one particular automaton is capable of simulating the \emph{behavior} of any automaton within a class of automata. Furthermore, to simulate the behavior of another automaton, the simulating automaton must evolve in such a way that a translated rescaling (rescaled not only with respect to rectangular blocks of cells, but also with respect to time) of the simulator can be mapped to a configuration of the simulated automaton. The specific rescaling depends on the simulated automaton and gives rise to a global rule such that each step of the simulated automaton's evolution is mirrored by the simulating automaton, and vice versa via the inverse of the rule. In this way, it is said that the simulator captures the dynamics of the simulated system, acting exactly like it, modulo rescaling. This is in contrast to a computational simulation, for example when a general purpose digital computer runs a program to simulate a cellular automaton while the processor's components don't actually arrange themselves as, and behave like, a grid of cellular automata. Such computational simulations of computable systems can be performed by systems in any Turing universal model. However, as we will discuss shortly, Turing universality does not imply the simulation capabilities necessary for intrinsic universality.
Introduced by Erik Winfree in 1998 \cite{Winf98}, the abstract Tile Assembly Model (aTAM) is a mathematical model in which the individual components are square ``tiles'', with ``glues'' on their edges, which are able to autonomously bind together to form structures based only on the amount and strengths of matching glues on edges of adjacent tiles. The aTAM was inspired by Wang tiling \cite{Wang61}, but provides a model for the dynamic growth of tilings. Like various CA, the aTAM has been proven to be computationally universal and capable of quite powerful behavior. Recently, taking example from the work on CA, much work has been done to classify the power of the aTAM and derivative tile assembly models based on their powers of simulation \cite{USA,Versus,GeoTiles,OneTile,2HAMIU}. In fact, \cite{IUSA} showed that the aTAM is intrinsically universal, which means that there is a single tile set $U$ such that, for any aTAM tile assembly system $\mathcal{T}$ (of any temperature), the tiles of $U$ can be arranged into a seed structure dependent upon $\mathcal{T}$ so that the resulting system (at temperature $2$), using only the tiles from $U$, will faithfully simulate the behaviors of $\mathcal{T}$. In contrast, in \cite{2HAMIU} it was shown that no such tile set exists for the 2HAM since, for every temperature, there is a 2HAM system which cannot be simulated by any system operating at a lower temperature. Thus no tile set is sufficient to simulate 2HAM systems of arbitrary temperature, despite the fact that the 2HAM is computationally universal, and can also simulate any arbitrary aTAM system as shown in \cite{Versus}. Furthermore, it was shown in \cite{IUNeedsCoop} that although the aTAM in 3 dimensions is computationally universal at temperature $1$ (see \cite{CooFuSch11}), it is unable to simulate the behavior of the majority of temperature $2$ aTAM systems. These results from \cite{IUNeedsCoop} and \cite{CooFuSch11} prove that Turing universality does not imply the simulation power necessary for intrinsic universality.
As early as the aTAM's initial introduction, its power to simulate CA was explored. Winfree et al. showed that the 2-D aTAM can be used to simulate 1-D CA \cite{Winfree96}, and Winfree \cite{Winf98} showed that the 3-D aTAM can simulate 2-D CA. Furthermore, the aTAM is commonly colloquially referred to as an asynchronous, nondeterministic CA in which quiescent states that change to ``tile'' states never change again (analogous to write-once memory). These comparisons led naturally to our exploration of simulations between the two models using the same dimensions for each, namely 2-D. However, even between systems within the same model, defining a satisfactory notion of simulation, namely one which captures the essence of one system ``behaving'' like the other while also generating analogous results, or output, can be difficult. %
While the definition of a CA system simulating an aTAM system may be in some sense rather natural, the definition of an aTAM system simulating a CA system must take into account the write-once nature of tile assembly systems. To account for this, we modify the standard notions of simulation used in intrinsic universality to allow for an increasing scale factor during simulation. Essentially, such simulation definitions typically make use of a standard block replacement scheme in which, throughout the simulation, each constant sized block of the simulator can be directly mapped to an element of the simulated system. To allow a static model such as the aTAM to simulate a dynamic model such as CA, we allow the scale factor of the simulation to increase after each time step of the simulated system is completed.
For our main results, we present the following. First, a single nondeterministic, synchronous CA which, for any arbitrary aTAM system $\mathcal{T}$, can be given an initial configuration dependent upon $\mathcal{T}$ so that it will exactly simulate $\mathcal{T}$, producing the same output patterns (modulo rescaling) and preserving the dynamics of $\mathcal{T}$. Second, we exhibit a single aTAM tile set which, for any nondeterministic, synchronous CA $\mathcal{C}$ which begins with a finite initial configuration (i.e. all but a finite number of cells begin in a quiescent state), can be given an initial seed configuration dependent upon $\mathcal{C}$ so that it will exactly simulate $\mathcal{C}$, producing the same output patterns (modulo rescaling) and preserving the dynamics of $\mathcal{C}$.
\section{Preliminaries}
Here we define the terms and models used throughout the rest of the paper.
We work in the $2$-dimensional discrete space $\mathbb{Z}^2$. Define the set
$U_2 = \{(0,1), (1,0), (0,-1), (-1,0)\}$ to be the set of all
\emph{unit vectors} in $\mathbb{Z}^2$.
We also sometimes refer to these vectors by their
cardinal directions $N$, $E$, $S$, $W$, respectively.
All \emph{graphs} in this paper are undirected.
A \emph{grid graph} is a graph $G =
(V,E)$ in which $V \subseteq \mathbb{Z}^2$ and every edge
$\{\vec{a},\vec{b}\} \in E$ has the property that $\vec{a} - \vec{b} \in U_2$.
In the subsequent definitions, given two partial functions $f,g$, we write $f(x) = g(x)$ if~$f$ and~$g$ are both defined and equal on~$x$, or if~$f$ and~$g$ are both undefined on $x$.
\subsection{The abstract Tile Assembly Model}
\label{sec:atam-def}
In this section we give an informal description of the abstract Tile Assembly Model (aTAM), which is the theoretical version of the TAM which does not model the kinetics of physical self-assembling systems. The reader is encouraged to see \cite{RotWin00, Winf98, jSSADST} for a formal development of the model.
Intuitively, a tile type $t$ is a unit square that can be
translated, but not rotated, having a well-defined ``side
$\vec{u}$'' for each $\vec{u} \in U_2$. Each side $\vec{u}$ of $t$
has a ``glue'' with ``label'' $\textmd{label}_t(\vec{u})$--a string
over some fixed alphabet--and ``strength''
$\textmd{str}_t(\vec{u})$--a nonnegative integer--specified by its type
$t$. Two tiles $t$ and $t'$ that are placed at the points $\vec{a}$
and $\vec{a}+\vec{u}$ respectively, \emph{bind} with \emph{strength}
$\textmd{str}_t\left(\vec{u}\right)$ if and only if
$\left(\textmd{label}_t\left(\vec{u}\right),\textmd{str}_t\left(\vec{u}\right)\right)
=
\left(\textmd{label}_{t'}\left(-\vec{u}\right),\textmd{str}_{t'}\left(-\vec{u}\right)\right)$.
Here the glue function is assumed to be the usual diagonal glue function. In other words,
only glues with matching labels are allowed to interact.
Fix a finite set $T$ of tile types.
A $T$-\emph{assembly}, sometimes denoted simply as an \emph{assembly} when $T$ is clear from the context, is a partial
function $\pfunc{\alpha}{\mathbb{Z}^2}{T}$ defined on at least one input, with points $\vec{x}\in\mathbb{Z}^2$ at
which $\alpha(\vec{x})$ is undefined interpreted to be empty space,
so that ${\rm dom} \; \alpha$ is the set of points with tiles.
We write $|\alpha|$ to denote $|{\rm dom} \; \alpha|$, and we say $\alpha$ is
\emph{finite} if $|\alpha|$ is finite. For assemblies $\alpha$
and $\alpha'$, we say that $\alpha$ is a \emph{subassembly} of
$\alpha'$, and write $\alpha \sqsubseteq \alpha'$, if ${\rm dom} \; \alpha
\subseteq {\rm dom} \; \alpha'$ and $\alpha(\vec{x}) = \alpha'(\vec{x})$ for
all $\vec{x} \in {\rm dom} \; \alpha$.
For $\tau \in \mathbb{N}$, an assembly is \emph{$\tau$-stable} if every cut of its binding graph has strength at least $\tau$, where the weight of an edge is the strength of the glue it represents.
That is, the assembly is stable if at least energy $\tau$ is required to separate the assembly into two parts.
In the aTAM, self-assembly begins with a \emph{seed assembly} $\sigma$ (typically
assumed to be finite and $\tau$-stable) and
proceeds asynchronously and nondeterministically, with tiles
adsorbing one at a time to the existing assembly in any manner that
preserves stability at all times.
An aTAM \emph{tile assembly system} (\emph{TAS}) is an ordered triple
$\mathcal{T} = (T, \sigma, \tau)$, where $T$ is a finite set of tile
types, $\sigma$ is a seed assembly with finite domain, and $\tau$ is
the temperature. An \emph{assembly sequence} in a TAS $\mathcal{T} = (T, \sigma, \tau)$ is
a (possibly infinite) sequence $\vec{\alpha} = ( \alpha_i \mid 0
\leq i < k )$, with $k \in \mathbb{Z}^+ \cup \{\infty\}$, of assemblies in which $\alpha_0 = \sigma$ and each
$\alpha_{i+1}$ is obtained from $\alpha_i$ by the ``$\tau$-stable''
addition of a single tile. The \emph{result} of an assembly sequence
$\vec{\alpha}$ is the unique assembly $\res{\vec{\alpha}}$
satisfying ${\rm dom} \;{\res{\vec{\alpha}}} = \bigcup_{0 \leq i <
k}{{\rm dom} \;{\alpha_i}}$ and, for each $0 \leq i < k$, $\alpha_i
\sqsubseteq \res{\vec{\alpha}}$.
We write $\prodasm{T}$ for the
\emph{set of all producible assemblies of} $\mathcal{T}$. An
assembly $\alpha$ is \emph{terminal}, and we write $\alpha \in
\termasm{T}$, if no tile can be stably added to it. We
write $\termasm{T}$ for the \emph{set of all terminal assemblies of
} $\mathcal{T}$.
The set $\prodasm{T}$ is partially ordered by the relation $\longrightarrow$ defined by
\begin{eqnarray*}
\alpha \longrightarrow \alpha' & \textmd{iff} & \textmd{there is an assembly sequence } \vec{\alpha} = (\alpha_0,\alpha_1,\ldots) \\
& & \textmd{such that } \alpha_0 = \alpha \textmd{ and } \alpha' = \res{\vec{\alpha}}.
\end{eqnarray*}
A TAS ${\mathcal T}$ is \emph{directed}, or
\emph{produces a unique assembly}, if it has exactly one terminal
assembly, i.e., $|\termasm{T}| = 1$. The reader is cautioned that the
term ``directed'' has also been used for a different, more specialized
notion in self-assembly \cite{AKKRS09}. We interpret ``directed'' to
mean ``deterministic'', though there are multiple senses in which a
TAS may be deterministic or nondeterministic.
\subsection{Cellular Automata}
In our discussion about cellular automata, we will use the following definitions. Most of the conventions used in these definitions come from \cite{DelormeMOT11}.
\begin{definition}\label{CADef}
A {\em $2$-dimensional nondeterministic cellular automaton} $\mathcal{A}$ is a $4$-tuple $(\mathbb{Z}^2, S, N, \delta)$ where
\begin{enumerate}
\item $S$ is a finite set of states.
\item $N\subset \mathbb{Z}^2$ is a finite set defining the neighborhood of $\mathcal{A}$.
\item $\delta : S^{|N|} \to 2^S$ is the local rule of $\mathcal{A}$. For each point of $\mathbb{Z}^2$, usually referred to as a {\em cell}, $\delta$ maps the states of its neighborhood, as defined by $N$, to a set of possible next states.
\end{enumerate}
Note that a {\em deterministic cellular automaton} is simply a special case of a nondeterministic CA in which $\delta: S^{|N|} \to S$, i.e. it maps each neighborhood to a single state.
A {\em configuration} $c$ is a mapping from $\mathbb{Z}^2$ to $S$. Let $C$ be the set of configurations in $\mathcal{A}$.
The {\em global rule} $G$ is obtained as follows.
The map $G:C\to 2^C$ is defined such that $c^{\prime} \in G(c) \iff c^{\prime}(p) \in \delta(c(p+v_1), \dots, c(p+v_k))$ for all $p\in \mathbb{Z}^2$, where $\{v_1, \dots, v_k\} = N$.
\end{definition}
We assume that $S$ contains a unique {\em quiescent state} $q$, i.e. a state such that $\delta$ maps a neighborhood of cells which are all in state $q$ to the singleton set $\{q\}$. In this paper we only consider finite initial configurations (which we will typically denote by $c_0$), in which all but finitely many cells are quiescent. We are concerned with CA-initial configuration pairs $\left(\mathcal{A}, c_0\right)$ and refer to such pairs as CA {\em systems}.
There are many interesting examples of cellular automata. One of particular interest here is John Conway's Game of Life. (See \cite{Gardner70}.)
This is a $2$D cellular automaton where each cell is in one of two states, $\mathtt{alive}$ or $\mathtt{dead}$.
The local rule is given over a $3\times 3$ square of cells according to the following.
\begin{enumerate}
\item An $\mathtt{alive}$ cell with fewer than two $\mathtt{alive}$ neighbors becomes $\mathtt{dead}$.
\item An $\mathtt{alive}$ cell with two or three $\mathtt{alive}$ neighbors stays $\mathtt{alive}$.
\item An $\mathtt{alive}$ cell with more than three $\mathtt{alive}$ neighbors becomes $\mathtt{dead}$.
\item A $\mathtt{dead}$ cell with exactly three $\mathtt{alive}$ neighbors becomes $\mathtt{alive}$.
\end{enumerate}
These simple rules give rise to an amazing amount of complexity and structure. In fact, in~\cite{Rendell11} a universal Turing machine built in Conway's Game of Life is presented that starts from a finite configuration that encodes another Turing machine and its tape and simulates the execution of the encoded Turing machine with the encoded tape as input.
\subsection{CA simulation of a TAS}
For $S$ as in Definition~\ref{CADef} and $k$ a vector of $\mathbb{Z}^2$, let $\psi^k : S^{\mathbb{Z}^2}\to S^{\mathbb{Z}^2}$ be the bijection mapping a configuration $c$ to the configuration
$c^{\prime}$ such that for each cell $i$, $c^{\prime}_{i+k} = c_i$. $\psi^k$ is called the {\em shift} operator.
Now let $m=(m_1, m_2)$ be a pair of strictly positive integers.
$o^m: S^{\mathbb{Z}^2}\to (S^{[0,m_1]\times[0,m_2]})^{\mathbb{Z}^2}$ is the bijection such that for all $c\in S^{\mathbb{Z}^2}$, $z\in \mathbb{Z}^2$ and $p\in [0,m_1]\times[0,m_2]$,
$o^m(c)(z)(p) = c(mz + p)$. $o^m$ is called the {\em packing} map.
Let $\mathcal{A} = (\mathbb{Z}^2, S, N, \delta)$ be a cellular automaton. An {\em $\langle m,n,k\rangle$-rescaling} of $\mathcal{A}$ is a cellular automaton $\mathcal{A}^{\langle m,n,k \rangle}$
with state set $S^{[0,m_1]\times[0,m_2]}$ and global transition function $o^m\circ\psi^k\circ G_{\mathcal{A}}^{n} \circ o^{-m}$, where $G^n_{\mathcal{A}}$ is the composition of the global rule of $\mathcal{A}$ with itself $n$ times.
We now define what it means to say that a synchronous nondeterministic $2$D cellular automaton with an initial configuration {\em simulates} an aTAM system. First we let $R$ be the partial function that maps individual cells in some state to single tiles of some tile type. $R$ is the {\em representation} function.
In the following definitions, $\mathcal{A} = (\mathbb{Z}^2, S, N, \delta)$ is a synchronous nondeterministic CA with $C$ denoting the set of configurations and $\mathcal{T} = (T, \sigma, \tau)$ is an aTAM system.
We denote by $c_0$ a finite initial configuration in $C$ and let $\tilde{C} = \bigcup_{n=0}^{\infty}G^n(c_0)$. In other words, $\tilde{C}$ is the set of all configurations obtained by applying the global rule some number of times to $c_0$.
Let $R^*:\tilde{C} \to \prodasm{T}$ be the canonical extension of $R$. Finally, we let $\left(\mathcal{A}, c_0\right)$ be the pair consisting of the CA $\mathcal{A}$ and the initial configuration $c_0$.
\begin{definition}\label{def:follows}
We say that $\mathcal{T}$ {\em follows} $\left(\mathcal{A}, c_0\right)$ iff
for all $c \in \tilde{C}$, $\alpha, \beta \in \prodasm{T}$ and $n\geq 0$, if $R^*(c) = \alpha$ and $\beta \in R^*[G^n(c)]$ then $\alpha \longrightarrow \beta$.
\end{definition}
Note that $R^*[G^n(c)]$ denotes the image of the set $G^n(c)$ under $R^*$. Informally, Definition~\ref{def:follows} means that if a configuration represents an assembly $\alpha$, then anything this configuration maps to under applications of the global rule represents some assembly that $\alpha$ can grow into.
The following definition captures the idea that for an assembly $\alpha$ represented by a configuration $c$, any assembly that $\alpha$ grows into is represented by a configuration obtained from $c$ by applications of the
global rule.
\begin{definition}
We say that $\left(\mathcal{A}, c_0\right)$ {\em models} $\mathcal{T}$ if
$\forall \alpha \in \prodasm{T}, \exists c \in \tilde{C}$ such that $R^*(c) = \alpha$ and $\forall \beta \in \prodasm{T}$,
if $\alpha \longrightarrow \beta$ then $\exists n\geq 0$ such that $\beta \in R^*[G^n(c)]$.
\end{definition}
Note that a configuration representing some terminal assembly $\alpha$ must transition to configurations that still represent $\alpha$.
Finally, we give the definition of simulation.
\begin{definition}
$\left(\mathcal{A}, c_0\right)$ {\em simulates} $\mathcal{T}$ iff there is an ${\langle m,n,k \rangle}$-rescaling $\mathcal{A^{\prime}}$ of $\mathcal{A}$, with correspondingly rescaled initial configuration $c_0^{\prime}$, such that $\mathcal{T}$ follows $\left(\mathcal{A}^{\prime}, c_0^{\prime}\right)$ and $\left(\mathcal{A}^{\prime}, c_0^{\prime}\right)$ models $\mathcal{T}$.
\end{definition}
\subsection{TAS simulation of a CA}
As in the previous section, $\mathcal{A} = (\mathbb{Z}^2, S, N, \delta)$ denotes a synchronous nondeterministic CA with a finite initial configuration $c_0$, $C$ denotes the set of configurations, and $\mathcal{T} = (T, \sigma, \tau)$ denotes an aTAM system. Again, let $\tilde{C} = \bigcup_{n=0}^{\infty}G^n(c_0)$ where $c_0$ is the finite initial configuration of $\mathcal{A}$ and let $\left(\mathcal{A}, c_0\right)$ be the pair consisting of the CA $\mathcal{A}$ and the initial configuration $c_0$.
Because any aTAM system produces static assemblies and the state of a cell of a CA may change multiple times, it would be impossible to represent a cell of a configuration in $\tilde{C}$ with fixed block assemblies over $T$. Therefore, we
introduce the notion of a {\em scalable representation function}.
For $n\in\mathbb{Z}^+$, an \emph{$n$-block supertile} over $T$ is a partial function $\alpha : \mathbb{Z}_n^2 \dashrightarrow T$, where $\mathbb{Z}_n = \{0,1,\ldots,n-1\}$.
Let $B^T_n$ be the set of all $n$-block supertiles over $T$. The $n$-block with no domain is said to be \emph{empty}. For a general assembly $\alpha:\mathbb{Z}^2 \dashrightarrow T$, define $\alpha^n_{x,y}$ to be the $n$-block supertile defined by $\alpha^n_{x,y}(i,j) = \alpha(nx+i,ny+j)$ for $0 \leq i,j< n$.
Let $R_n$ for $n\in \mathbb{N}$ be a partial function that maps assemblies over $T$ to configurations in $C$ with the following property: if $\alpha \in \prodasm{T}$ and $R_n(\alpha) = c$, then $\alpha$ can be broken into $n$-block supertiles such that $R_n$ maps these supertiles to cells of $c$. In other words, for a given assembly $\alpha$, the partial function $R_n$ either is not defined on $\alpha$ or maps $\alpha$ to $c\in C$ by mapping $n$-block supertiles of $\alpha$ to cells of $c$.
Then the scalable representation function
is defined as $R:\mathbb{N}\times \prodasm{T} \dashrightarrow \tilde{C}$ where $R(n,\beta)=R_n(\beta)$.
Finally, we define simulation of a CA with initial configuration $c_0$ by an aTAM system.
\begin{definition}\label{def:tassim}
$\mathcal{T}$ {\em simulates} $\left(\mathcal{A}, c_0\right)$ (under scalable representation function $R$) iff
there exists a computable function $f:\mathbb{N}\to \mathbb{N}$ such that the following hold.
\begin{enumerate}
\item $\forall n\in \mathbb{N}$, $R_{f(n)}[\prodasm{T}] = G^n(c_0)$.\label{def:statement1}
\item $\forall \alpha\in \prodasm{T}$ such that $R_{f(n)}(\alpha) = c_n\in G^n(c_0)$, for any $\beta \in \prodasm{T}$ in the domain of $R_{f(n+1)}$ such that $\alpha \longrightarrow \beta$ it must be the case that $R_{f(n+1)}(\beta) \in G^{n+1}(c_0)$. \label{def:statement2}
\item $\forall c_n\in G^n(c_0)$ such that $R_{f(n)}(\alpha) = c_n$ for some $\alpha \in \prodasm{T}$, if $\alpha \longrightarrow \beta$ where $\beta$ is in the domain of $R_{f(n+1)}$ then
$R_{f(n+1)}(\beta) \in G^{n+1}(c_0)$.\label{def:statement3}
\end{enumerate}
\end{definition}
In Definition~\ref{def:tassim}, $f$ can be thought of as taking a time step $n$ and determining a block size for the representation.
Then $R$ takes $f(n)$ and an assembly and either returns a configuration in $G^n(c_0)$ or nothing if the assembly has not fully simulated the $n^{th}$ time step. This is necessary to simulate the dynamics of a synchronous CA, in which all cells simultaneously update their states.
Basically, statement~\ref{def:statement1} of Definition~\ref{def:tassim} says that, starting with an initial configuration, every configuration obtained by applying the global rule is represented by some assembly over $T$ and that any step-assembly pair $(n,\alpha)$ in the domain of $R$ represents some configuration.
Moreover, statements \ref{def:statement2} and \ref{def:statement3} of Definition~\ref{def:tassim} imply that these representations follow the action of the global rule.
\section{Introduction}\label{sec:intro}
Sentiment lexicons play a crucial role in many existing and emerging
opinion mining applications. Not only do they serve as a valuable
source of features for supervised classifiers
\cite{Mohammad:13,Zhu:14} but they also achieve competitive results
when used as the main component of a sentiment analysis system
\cite{Taboada:11}. Due to this high impact and tremendous costs of
building such lexicons manually, devising new algorithms for an
automatic generation of polarity lists has always been an area of
active research in the sentiment analysis literature
\cite[pp.~79-91]{Liu:12}. Nevertheless, despite some obvious progress
in this field \cite{Cambria:16b}, the applicability of these
approaches to other languages and text genres still raises questions:
It is, for instance, unclear whether simply translating the existing
English sentiment resources would produce better results than applying
the methods that were initially proposed for their creation directly
to the target language. Furthermore, for automatic systems which draw
their knowledge from lexical taxonomies, such as \textsc{WordNet}
\cite{Miller:95}, it remains unanswered whether these approaches would
also work for languages in which such resources are much smaller in
size, and, even if they would, whether the resulting lexicons would
then be general enough to carry over to more colloquial texts.
Finally, for methods which derive their polarity lists from text
corpora, it is not clear whether these approaches would still yield an
acceptable quality when operating on inherently noisy input data.
In this paper, we try to analyze these and other problems in detail,
using the example of German Twitter. More precisely, given a
collection of German microblogs with manually labeled polar terms and
prior polarities of these expressions, we want to find a sentiment lexicon generation (SLG) method
that can best predict these terms and their semantic orientation. For
this purpose, we compare the existing German sentiment lexicons (most
of which were semi-automatically translated from popular English
resources) with the results of common automatic dictionary- and
corpus-based SLG approaches.
We begin our study by describing the data set which will be used in
our evaluation. Afterwards, in Section~\ref{sec:eval-metrics}, we
introduce the metrics with which we will assess the quality of various
polarity lists. Then, in Section~\ref{sec:semi-automatic}, we
evaluate three most popular existing German sentiment lexicons---the
German Polarity Clues \cite{Waltinger:10}, SentiWS \cite{Remus:10},
and Zurich Polarity List of \newcite{Clematide:10}, subsequently
comparing them with popular automatic SLG approaches in
Section~\ref{sec:automatic}. Finally, after estimating the impact of
different seed sets on the automatic methods and performing a
qualitative analysis of their entries, we draw our conclusions and
outline directions for future research in the final part of this
paper.
To avoid unnecessary repetitions, we deliberately omit a summary of
related work, since most of the popular SLG algorithms will be
referenced in the respective evaluation sections anyway. We should,
however, note that, apart from the research on the automatic lexicon
generation, our study is also closely related to the experiments of
\newcite{Andreevskaia:08} and the ``Sentiment Analysis in Twitter''
track of the SemEval competition
\cite{Nakov:13,Rosenthal:14,Rosenthal:15}. In contrast to the former
work, however, where the authors trained a supervised classifier on
one domain and applied it to another in order to determine the
polarities of the sentences, we \emph{explicitly model a situation
where no annotated training data are available}, thus looking for
the most general unsupervised SLG strategy which performs best
regardless of the target domain, and we also \emph{evaluate these
strategies on the level of lexical phrases only}. Furthermore,
unlike in the SemEval track, where the organizers also provided
participants with sufficient labeled in-domain training sets and then
asked them to predict the contextual polarity of pre-annotated polar
expressions in the test data, we \emph{simultaneously try to predict
polar terms and their prior polarities, learning both of them
without supervision}.
\section{Data}\label{sec:data}
We perform our evaluation on the publicly available Potsdam Twitter
Sentiment corpus (PotTS; Sidarenka, 2016).\footnote{We use version
0.1.0 of this corpus.} This collection comprises 7,992 microblogs
pertaining to the German federal elections, general political life,
papal conclave~2013, as well as casual everyday conversations. Two
human experts annotated these posts with polar terms and their prior
polarities,\footnote{The annotators had been asked to judge the
semantic orientation of a term irrespective of its possible
negations. They could, however, consider the context for
determining whether a particular reading of a polysemous word in the
text was subjective or not.} reaching a substantial agreement of
0.75 binary $\kappa$ \cite{Cohen:60}.\footnote{A detailed
inter-annotator agreement study of this corpus is provided in
\cite{Sidarenka:16}.} We used the complete data set labeled by one
of the annotators as our test corpus, getting a total of 6,040
positive and 3,055 negative terms including multi-word expressions.
However, since many of these expressions were emoticons, which, on the
one hand, were a priori absent in common lexical taxonomies due to
their colloquial nature and therefore not amenable to dictionary-based
SLG systems but, on the other hand, could be easily captured by
regular expressions, we decided to exclude non-alphabetic smileys
altogether from our study. This left us with a set of 3,459 positive
and 2,755 negative labeled terms (1,738 and 1,943 unique expressions
respectively), whose $\kappa$-agreement ran up to 0.59.  Besides the
test set, we selected a small subset of 400 tweets from the other
annotator and used it as development data for tuning the
hyper-parameters of the tested approaches.\footnote{That way, we only
used the labeled corpus for evaluation or parameter optimization,
other resources---\textsc{GermaNet} \cite{Hamp:97} and the German
Twitter Snapshot \cite{Scheffler:14}---were used for training the
methods.}
\section{Evaluation Metrics}\label{sec:eval-metrics}
A central question to our experiments are the evaluation metrics that
we should use for measuring lexi\-con quality. Usually, this quality is
estimated either \textit{intrinsically} (i.e., taking a lexicon in
isolation and immediately assessing its accuracy) or
\textit{extrinsically} (i.e., considering the lexicon within the scope
of a bigger application such as a supervised classifier which utilizes
lexicon's entries as features).
Traditionally, intrinsic evaluation of English sentiment lexicons
amounts to comparing these polarity lists with the General Inquirer
(GI; Stone, 1966)---a manually compiled set of 11,895 words annotated
with their semantic categories---by taking the intersection of the two
resources and estimating the percentage of matches in which
automatically induced polar terms have the same polarity as the GI
entries. This evaluation method, however, is somewhat problematic:
First of all, it is not easily transferable to other languages, since
even a manual translation of the GI lexicon is not guaranteed to cover
all language- and domain-specific polar expressions. Secondly, due to
the intersection, this method does not penalize for a low recall so
that a lexicon consisting of just two terms \textit{good}$^+$ and
\textit{bad}$^-$ will have the highest possible score, often
surpassing polarity lists with a greater number of entries. Finally,
this comparison does not account for polysemy. As a result, an
ambiguous word only one of whose (possibly rare) senses is subjective
will always be ranked the same as a purely polar term.
Unfortunately, an extrinsic evaluation does not always provide a
solution in this case, since, depending on the type of the extrinsic
system (e.g., a document classifier), it might still presuppose a
large data set for training other system components and, furthermore,
might yield overly high scores, which, however, are mainly due to
these extrinsic modules rather than the quality of the lexicons
themselves.
Instead of using these approaches, we opt for a direct comparison of
the induced polarity lists with an existing annotated corpus, since
this type of evaluation allows us to solve at least three of the
previously mentioned issues: It does account for the recall, it does
accommodate polysemous words,\footnote{Recall that the annotators of
the PotTS data set were asked to annotate a polar expression iff its
actual sense in the respective context was polar.} and it does
preclude intermediate components which might artificially boost the
results. In particular, in order to check a lexicon against the PotTS
data set, we construct a case-insensitive trie
\cite[pp. 492--512]{Knuth:98} from the lexicon entries and match this
trie against the contiguously running corpus text,\footnote{In other
words, we successively compare lexicon entries with the occurrences
of corpus tokens in the same linear order as these occurrences
appear in the text.} simultaneously comparing it with the actual
word forms and lemmas of corpus tokens.\footnote{We use the
\textsc{TreeTagger} of \newcite{Schmid:95} for lemmatization.} A
match is considered correct iff the matched entry absolutely
corresponds to the (possibly lemmatized) expert's annotation and has
the same polarity as the one specified by the human coder. That way,
we estimate the precision, recall, and \F{}-score for each particular
polarity class (positive, negative, and neutral), considering all
words absent in the lexicons (not annotated in the corpus) as neutral.
\section{Semi-Automatic Lexicons}\label{sec:semi-automatic}
We first apply the above metric to estimate the quality of the
existing German resources: the German Polarity Clues (GPC; Waltinger,
2010), SentiWS (SWS; Remus, 2010), and the Zurich Polarity List (ZPL)
of \newcite{Clematide:10}.
The GPC set comprises 10,141 subjective entries automatically
translated from the English sentiment lexicons Subjectivity Clues
\cite{Wilson:05} and SentiSpin \cite{Takamura:05}, with a subsequent
manual correction of these translations, and several synonyms and
negated terms added by the authors. The SWS lexicon includes 1,818
positively and 1,650 negatively connoted terms, also providing their
part-of-speech tags and inflections (resulting in a total of 32,734
word forms). Similarly to the GPC, the authors used an English
sentiment resource---the GI lexicon of \newcite{Stone:66}---to
bootstrap their polarity list, manually revising these automatic
translations afterwards. In addition to that, \newcite{Remus:10} also
expanded their set with words and phrases frequently co-occurring with
positive and negative seed lexemes using collocation information
obtained from a corpus of 10,200 customer reviews and the German
Collocation Dictionary \cite{Quasthoff:10}. Finally, the Zurich
Polarity List features 8,000 subjective entries taken from
\textsc{GermaNet} synsets \cite{Hamp:97}. These synsets were manually
annotated with their prior polarities by human experts. Since the
authors, however, found the number of polar adjectives obtained that
way insufficient for running further classification experiments, they
automatically enriched this lexicon with more attributive terms by
analyzing conjoined corpus collocations using the method of
\newcite{Hatzivassi:97}.
\begin{table}[h]
\begin{center}
\bgroup \setlength\tabcolsep{0.1\tabcolsep}\scriptsize
\begin{tabular}{p{0.162\columnwidth}
*{9}{>{\centering\arraybackslash}p{0.074\columnwidth}}
*{2}{>{\centering\arraybackslash}p{0.068\columnwidth}}}
\toprule
\multirow{2}*{\bfseries Lexicon} & %
\multicolumn{3}{c}{\bfseries Positive Expressions} & %
\multicolumn{3}{c}{\bfseries Negative Expressions} & %
\multicolumn{3}{c}{\bfseries Neutral Terms} & %
\multirow{2}{0.068\columnwidth}{\bfseries\centering Macro\newline \F{}} & %
\multirow{2}{0.068\columnwidth}{\bfseries\centering Micro\newline \F{}}\\
\cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10}
& Precision & Recall & \F{} & %
Precision & Recall & \F{} & %
Precision & Recall & \F{} & & \\\midrule
GPC & 0.209 & 0.535 & 0.301 & %
0.195 & 0.466 & 0.275 & %
0.983 & 0.923 & 0.952 & %
0.509 & 0.906 \\
SWS & 0.335 & 0.435 & 0.379 & %
0.484 & 0.344 & \textbf{0.402} & %
0.977 & 0.975 & 0.976 & %
0.586 & 0.952\\
ZPL & 0.411 & 0.424 & 0.417 & %
0.38 & 0.352 & 0.366 & %
0.977 & 0.979 & 0.978 & %
0.587 & 0.955 \\
GPC $\cap$ SWS $\cap$ ZPL & \textbf{0.527} & 0.372 & \textbf{0.436} & %
\textbf{0.618} & 0.244 & 0.35 & %
0.973 & \textbf{0.99} & \textbf{0.982} & %
\textbf{0.589} & \textbf{0.964} \\
GPC $\cup$ SWS $\cup$ ZPL & 0.202 & \textbf{0.562} & 0.297 & %
0.195 & \textbf{0.532} & 0.286 & %
\textbf{0.985} & 0.917 & 0.95 & %
0.51 & 0.901 \\\bottomrule
\end{tabular}
\egroup
\caption{Evaluation of semi-automatic German sentiment lexicons.\\
{\small GPC -- German Polarity Clues \cite{Waltinger:10}, SWS --
SentiWS \cite{Remus:10}, ZPL -- Zurich Polarity Lexicon
\cite{Clematide:10}}}
\label{snt-lex:tbl:gsl-res}
\end{center}
\end{table}
For our evaluation, we tested the three lexicons in isolation and also
built their union and intersection in order to check for ``synergy''
effects. The results are shown in Table~\ref{snt-lex:tbl:gsl-res}.
As can be seen from the statistics, with a few exceptions, the highest
scores for all classes as well as the best macro- and micro-averaged
\F{}-measures are achieved by the intersection of all three lexicons.
On the other hand, as expected, the highest recall of polar
expressions (and consequently the best precision at recognizing
neutral terms) is attained by the union of these resources. The only
case where individual lexicons are able to outperform these
combinations is observed for the \F{}-score of the negative class,
where both SentiWS and ZPL show better results than their
intersection, which is mainly due to the higher recall of these two
polarity lists.
\section{Automatic Methods}\label{sec:automatic}
A natural question which arises upon the evaluation of the existing
semi-automatic resources is how well fully automatic methods can
perform in comparison with these lexicons. Traditionally, automatic
SLG algorithms have been grouped into dictionary- and corpus-based
ones, with their own complementary strengths and weaknesses.
Dictionary-based approaches, for instance, incorporate distilled
linguistic knowledge from a typically manually labeled lexical
database, but lack any domain specificity. Corpus-based methods, on
the other hand, can operate directly on unannotated in-domain data,
but often have to deal with an extreme noisiness of their input.
Since it was unclear which of these properties would have a stronger
impact on the net results, we decided to reimplement the most commonly
used algorithms from both of these paradigms and evaluate them on the
PotTS corpus.
\subsection{Dictionary-Based Approaches}\label{ssec:dba}
For dictionary-based methods, we adopted the systems proposed by
\newcite{Hu:04}, \newcite{Blair-Goldensohn:08}, \newcite{Kim:04},
\newcite{Esuli:06c}, as well as the min-cut and label-propagation
approaches of \newcite{Rao:09}, and the random-walk algorithm
described by \newcite{Awadallah:10}.
The first of these works \cite{Hu:04} expanded a given set of seed
terms with known semantic orientations by propagating polarity values
of these terms to their \textsc{WordNet} synonyms and passing reversed
polarity scores to the antonyms of these words. Later on, this idea
was further refined by \newcite{Blair-Goldensohn:08}, who obtained
polarity labels for new terms by multiplying a score vector $\vec{v}$
containing the orientation scores of the known seed words (-1 for
negative expressions and 1 for positive ones) with an adjacency matrix
$A$ constructed for the \textsc{WordNet} graph.
With various modifications, the core idea of passing the polarity
values through a lexical graph was adopted in almost all of the
following dictionary-based works: \newcite{Kim:04}, for instance,
computed the polarity class for a new word $w$ by multiplying the
prior probability of this class with the likelihood of the word $w$
occurring among the synonyms of the seed terms with the given semantic
orientation, choosing at the end the polarity which maximized this
equation. Other ways of bootstrapping polarity lists were proposed by
\newcite{Esuli:06c}, who created their \textsc{SentiWordNet} resource
using a committee of Rocchio and SVM classifiers trained on
successively expanded sets of polar terms; \newcite{Rao:09}, who
adopted the min-cut approach of \newcite{Blum:04}, also comparing it
with the label-propagation algorithm of \newcite{Zhu:02}; and,
finally, \newcite{Awadallah:10}, who used a random walk method by
estimating the polarity of an unknown word as the difference between
an average number of steps a random walker had to make in order to
reach a term from the positive or negative set.
Since some of these approaches relied on different seed sets or
pursued different objectives (two- versus three-way classification),
we decided to unify their settings and interfaces for the sake of our
experiments. In particular, we were using the same translated seed
list of \newcite{Turney:03} for all methods, expanding this set by 10
neutral terms (``neutral'' \emph{neutral}, ``sachlich''
\emph{objective}, ``technisch'' \emph{technical}, ``finanziell''
\emph{financial} etc.).\footnote{All translated seed sets are provided
along with the source code for this paper.} Additionally, we
enhanced all binary systems to ternary classifiers, so that each
tested method could differentiate between positive, negative, and
neutral terms. In the final step, we applied these methods to
\textsc{GermaNet} \cite{Hamp:97}---a German equivalent of the English
\textsc{WordNet} \cite{Miller:95}, which, however, is much smaller in
size, having 20,792 fewer synsets for the three common parts of speech
(nouns, adjectives, and verbs) than the Princeton resource.
\begin{table}[h]
\begin{center}
\bgroup \setlength\tabcolsep{0.1\tabcolsep}\scriptsize
\begin{tabular}{p{0.142\columnwidth}
>{\centering\arraybackslash}p{0.06\columnwidth}
*{9}{>{\centering\arraybackslash}p{0.072\columnwidth}}
*{2}{>{\centering\arraybackslash}p{0.058\columnwidth}}}
\toprule
\multirow{2}*{\bfseries Lexicon} & %
\multirow{2}*{\bfseries \# of Terms} & %
\multicolumn{3}{c}{\bfseries Positive Expressions} & %
\multicolumn{3}{c}{\bfseries Negative Expressions} & %
\multicolumn{3}{c}{\bfseries Neutral Terms} & %
\multirow{2}{0.068\columnwidth}{\bfseries\centering Macro\newline \F{}} & %
\multirow{2}{0.068\columnwidth}{\bfseries\centering Micro\newline \F{}}\\
\cmidrule(lr){3-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}
& & Precision & Recall & \F{} & %
Precision & Recall & \F{} & %
Precision & Recall & \F{} & & \\\midrule
\textsc{Seed Set} & 20 & \textbf{0.771} & 0.102 & 0.18 & %
0.568 & 0.017 & 0.033 & %
0.963 & \textbf{0.999} & \textbf{0.981} & %
0.398 & \textbf{0.962}\\
HL & 5,745 & 0.161 & 0.266 & 0.2 & %
0.2 & 0.133 & 0.16 & %
0.969 & 0.96 & 0.965 & %
0.442 & 0.93\\
BG & 1,895 & 0.503 & 0.232 & \textbf{0.318} & %
0.285 & 0.093 & 0.14 & %
0.968 & 0.991 & 0.979 & %
\textbf{0.479} & 0.959\\
KH & 356 & 0.716 & 0.159 & 0.261 & %
0.269 & 0.044 & 0.076 & %
0.965 & 0.997 & \textbf{0.981} & %
0.439 & \textbf{0.962}\\
ES & 39,181 & 0.042 & \textbf{0.564} & 0.078 & %
0.033 & \textbf{0.255} & 0.059 & %
\textbf{0.981} & 0.689 & 0.81 & %
0.315 & 0.644\\
RR$_{\textrm{mincut}}$ & 8,060 & 0.07 & 0.422 & 0.12 & %
0.216 & 0.073 & 0.109 & %
0.972 & 0.873 & 0.92 & %
0.383 & 0.849\\
RR$_{\textrm{lbl-prop}}$ & 1,105 & 0.567 & 0.176 & 0.269 & %
\textbf{0.571} & 0.046 & 0.085 & %
0.965 & 0.997 & \textbf{0.981} & %
0.445 & \textbf{0.962}\\
AR & 23 & 0.768 & 0.1 & 0.176 & %
0.568 & 0.017 & 0.033 & %
0.963 & \textbf{0.999} & \textbf{0.981} & %
0.397 & \textbf{0.962}\\
HL $\cap$ BG $\cap$ RR$_{\textrm{lbl-prop}}$ & 752 & 0.601 & 0.165 & 0.259 & %
0.567 & 0.045 & 0.084 & %
0.965 & 0.997 & \textbf{0.981} & %
0.441 & \textbf{0.962}\\
HL $\cup$ BG $\cup$ RR$_{\textrm{lbl-prop}}$ & 6,258 & 0.166 & 0.288 & 0.21 & %
0.191 & 0.146 & \textbf{0.165} & %
0.97 & 0.958 & 0.964 & %
0.446 & 0.929\\\bottomrule
\end{tabular}
\egroup
\caption{Evaluation of dictionary-based approaches.\\ {\small HL
-- \newcite{Hu:04}, BG -- \newcite{Blair-Goldensohn:08}, KH --
\newcite{Kim:04}, ES -- \newcite{Esuli:06c}, RR --
\newcite{Rao:09}, AR -- \newcite{Awadallah:10}}}
\label{snt-lex:tbl:lex-res}
\end{center}
\end{table}
The results of this evaluation are shown in
Table~\ref{snt-lex:tbl:lex-res}.  This time, the situation is much
more varied: different systems achieve the best results on
individual aspects of certain classes, but hardly any attains the
best overall scores in all categories.  This is, for instance, the case for the
positive and negative polarities, where the best precision scores are
reached by the seed set in the first case and the label propagation
algorithm of \newcite{Rao:09} in the second case. However, with
respect to the recall, both of these polarity lists perform notably
worse than the approach of \newcite{Esuli:06c}. Yet other
systems---the matrix-vector method of \newcite{Blair-Goldensohn:08}
and the union of the three overall top-scoring systems
respectively---reach the highest \F{}-scores for these two classes.
Nevertheless, we can still notice three main tendencies in this
evaluation:
\begin{inparaenum}[\itshape i\upshape)]
\item the method of \newcite{Esuli:06c} generally gets the highest
recall of polar terms and, consequently, achieves the best precision
in recognizing neutral words, but suffers from a low precision for
the positive and negative polarities;
\item simultaneously five systems attain the same best \F{}-scores on
recognizing neutral terms, which, in turn, leads to the best
micro-averaged \F{}-results for all polarity classes; and, finally,
\item the system of \newcite{Blair-Goldensohn:08} shows the best
macro-averaged performance. This approach, however, is extremely
susceptible to its hyper-parameter settings (in particular, we
considered the maximum number of times the initial vector $\vec{v}$
was multiplied with the adjacency matrix $A$ as such a parameter and
noticed a dramatic decrease of method's scores after the fifth
iteration).
\end{inparaenum}
\subsection{Corpus-Based Approaches}\label{ssec:cba}
An alternative way to generate polarity lists is to use corpus-based
approaches. In contrast to dictionary-based methods, these systems
typically operate immediately on raw texts and are, therefore,
virtually independent of any manually annotated linguistic resources.
This flexibility, however, might come at the cost of a reduced
accuracy due to an inherent noisiness of the unlabeled data. The most
prominent representatives of this class of algorithms are the
approaches proposed by \newcite{Takamura:05}, \newcite{Velikovich:10},
\newcite{Kiritchenko:14}, and \newcite{Severyn:15}, which we briefly
describe in this section.
Drawing on the pioneering work of \newcite{Hatzivassi:97}, in which
the authors expanded an initial list of polar adjectives by analyzing
coordinately conjoined terms from a text corpus, \newcite{Takamura:05}
enhanced this algorithm, extending it to other parts of speech and
also incorporating semantic links from \textsc{WordNet} in addition to
the co-occurrence statistics extracted from the corpus. After
representing the final set of terms as a lattice of electron spins, whose edge
weights corresponded to the contextual and semantic links between
words, the authors computed the most probable polarity distribution
for this lattice by adopting the Ising spin model from statistical
mechanics.
The approach of \newcite{Velikovich:10} was mainly inspired by the
label-propagation algorithm of \newcite{Rao:09}, with the crucial
difference that, instead of taking an averaged sum of the adjacent
neighbor values when propagating the label scores through the graph,
the authors took the maximum of these scores in order to prune
unreliable, noisy corpus links. Similarly, \newcite{Kiritchenko:14}
built on the method of \newcite{Turney:03} and computed polarity
scores for new words by taking the difference of their PMI
associations with noisy labeled positive and negative classes.
Finally, \newcite{Severyn:15} trained a supervised SVM classifier on a
distantly labeled data set and included the top-ranked unigram and
bigram features in their final lexicon.
For our evaluation, we applied these methods to the German Twitter
Snapshot \cite{Scheffler:14}---a collection of 24~M microblogs
gathered in April, 2013, constructing the collocation graph from the
lemmatized word forms of this corpus and only considering words which
appeared at least four times in the analyzed data.  We again used
the \textsc{TreeTagger} of \newcite{Schmid:95} for lemmatization
and \textsc{GermaNet} \cite{Hamp:97} for deriving semantic links
between word vertices for the method of \newcite{Takamura:05}.
\begin{table}[h]
\begin{center}
\bgroup \setlength\tabcolsep{0.1\tabcolsep}\scriptsize
\begin{tabular}{p{0.142\columnwidth}
>{\centering\arraybackslash}p{0.06\columnwidth}
*{9}{>{\centering\arraybackslash}p{0.072\columnwidth}}
*{2}{>{\centering\arraybackslash}p{0.058\columnwidth}}}
\toprule
\multirow{2}*{\bfseries Lexicon} & %
\multirow{2}*{\bfseries \# of Terms} & %
\multicolumn{3}{c}{\bfseries Positive Expressions} & %
\multicolumn{3}{c}{\bfseries Negative Expressions} & %
\multicolumn{3}{c}{\bfseries Neutral Terms} & %
\multirow{2}{0.068\columnwidth}{\bfseries\centering Macro\newline \F{}} & %
\multirow{2}{0.068\columnwidth}{\bfseries\centering Micro\newline \F{}}\\
\cmidrule(lr){3-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}
& & Precision & Recall & \F{} & %
Precision & Recall & \F{} & %
Precision & Recall & \F{} & & \\\midrule
\textsc{Seed Set} & 20 & \textbf{0.771} & 0.102 & 0.18 & %
\textbf{0.568} & 0.017 & 0.033 & %
0.963 & \textbf{0.999} & \textbf{0.981} & %
0.398 & \textbf{0.962}\\
TKM & 920 & 0.646 & \textbf{0.134} & \textbf{0.221} & %
0.565 & \textbf{0.029} & \textbf{0.055} & %
\textbf{0.964} & 0.998 & \textbf{0.981} & %
\textbf{0.419} & \textbf{0.962}\\
VEL & 60 & 0.764 & 0.102 & 0.18 & %
\textbf{0.568} & 0.017 & 0.033 & %
0.963 & 0.999 & 0.98 & %
0.398 & \textbf{0.962}\\
KIR & 320 & 0.386 & 0.106 & 0.166 & %
\textbf{0.568} & 0.017 & 0.033 & %
0.963 & 0.996 & 0.979 & %
0.393 & 0.959\\
SEV & 60 & 0.68 & 0.102 & 0.177 & %
\textbf{0.568} & 0.017 & 0.033 & %
0.963 & \textbf{0.999} & \textbf{0.981} & %
0.397 & \textbf{0.962}\\
TKM $\cap$ VEL $\cap$ SEV & 20 & \textbf{0.771} & 0.102 & 0.18 & %
\textbf{0.568} & 0.017 & 0.033 & %
0.963 & \textbf{0.999} & \textbf{0.981} & %
0.398 & \textbf{0.962}\\
TKM $\cup$ VEL $\cup$ SEV & 1,020 & 0.593 & \textbf{0.134} & 0.218 & %
0.565 & \textbf{0.029} & \textbf{0.055} & %
\textbf{0.964} & 0.998 & 0.98 & %
0.418 & \textbf{0.962}\\\bottomrule
\end{tabular}
\egroup
\caption{Evaluation of corpus-based approaches.\\ {\small TKM --
\newcite{Takamura:05}, VEL -- \newcite{Velikovich:10}, KIR --
\newcite{Kiritchenko:14}, SEV -- \newcite{Severyn:15}}}
\label{snt-lex:tbl:corp-meth}
\end{center}
\end{table}
The results of these experiments are shown in
Table~\ref{snt-lex:tbl:corp-meth}. This time, we can observe a clear
superiority of Takamura et al.'s method, which not only achieves the
best recall and \F{} in recognizing positive and negative items but
also attains the highest micro- and macro-averaged results for all
three polarity classes.
The cardinality of the other induced lexicons, however, is much
smaller than the size of Takamura et al.'s polarity list. Moreover,
these lexicons also show absolutely identical scores for the negative
expressions as the original seed set. Since these results were
somewhat unexpected, we decided to investigate the reasons for
possible problems. As it turned out, the macro-averaged \F{}-values
of these methods were rapidly going down on the held-out development
set as the number of their induced polar terms increased. Since we
considered the lexicon size as one of the hyper-parameters of the
tested approaches, we immediately stopped populating these lexicons
when we noticed a decrease in their results. As a consequence, only
the highest-ranked terms (all of which had the positive polarity) were
included in the final lists.
One of the reasons for such a rapid quality decrease was the
surprisingly high positive bias of the initial seed set: While
converting the original seed list of \newcite{Turney:03} to German, we
translated the English word ``correct'' as ``richtig''. This German
word, however, also has another reading which means \emph{real} (as in
\emph{a real fact} or \emph{a real sports car}) and which was much
more frequent in the analyzed snapshot, often appearing in an
unequivocally negative context, e.g., ``ein richtiger Bombenanschlag''
(\emph{a real bomb attack}) or ``ein richtiger Terrorist'' (\emph{a
real terrorist}). As a consequence of this, methods relying on
distant supervision had to deal with an extremely unbalanced training
set (the automatically labeled corpus that we distantly obtained for
the approach of \newcite{Kiritchenko:14} using these seeds, for
instance, had 716,210 positive versus 92,592 negative training
instances).
\section{Effect of Seed Sets}\label{sec:seedsets}
Since the set of the initial seed terms appeared to play an important
role for at least three of the tested methods, we decided to analyze
the impact of this factor in more detail by repeating our experiments
with the seed lists proposed by \newcite{Hu:04}, \newcite{Kim:04},
\newcite{Esuli:06c}, and \newcite{Remus:10}. For this purpose, we
manually translated the seed sets of \newcite{Hu:04} and
\newcite{Kim:04} into German. Since the authors, however, only
provided some examples of their seeds without specifying the full
lists, we filled up our translations with additional polar terms to
match the original cardinalities. A different procedure was applied
to obtain the seed set of \newcite{Esuli:06c}---since this resource
comprised a vast number of neutral terms (the authors considered as
neutral all words from the General Inquirer lexicon which were not
marked there as either positive or negative), we automatically
translated the neutral subset of these seeds with the help of a
publicly available translation site
({\small\url{http://www.dict.cc}}), using the first suggestion
returned by this service for each original English term.
\begin{figure}[hbtp!]
\centering
\includegraphics[height=12em,width=0.8\linewidth]{%
img/sentilex-dict-alt-seed-sets.png}
\caption{Macro-averaged \F{}-scores of the dictionary-based approaches
with different seed sets.}\label{snt:fig:sent-dict-lex-alt-seeds}
\end{figure}
The updated results for the dictionary-based approaches with the
alternative seed sets are shown in
Figure~\ref{snt:fig:sent-dict-lex-alt-seeds}. This time, we again can
notice superior scores achieved by the method of
\newcite{Blair-Goldensohn:08}, which not only performs better than the
other systems on average but also seems to be less susceptible to the
varying quality and size of the different seed lists. The remaining
methods typically achieve their best macro-averaged results with
either of the two top-scoring polarity sets---the seed list of
\newcite{Kim:04} or the seed set of \newcite{Esuli:06c}. This is, for
instance, the case for the method of \newcite{Kim:04} and the min-cut
approach of \newcite{Rao:09}, whose performance with the native
Kim-Hovy seed set is on par with their results achieved using the
Turney-Littman seeds. The label-propagation and random walk
algorithms can even strongly benefit from the seeds provided by
\newcite{Kim:04}. The remaining two methods---\newcite{Hu:04} and
\newcite{Esuli:06c}---work best in combination with the initial
polarity set proposed by \newcite{Esuli:06c}.
\begin{figure}[hbtp!]
\centering
\includegraphics[height=12em,width=0.8\linewidth]{%
img/sentilex-crp-alt-seed-sets.png}
\caption{Macro-averaged \F{}-scores of the corpus-based approaches
with different seed sets.}\label{snt:fig:sent-crp-lex-alt-seeds}
\end{figure}
A slightly different situation is observed for the corpus-based
approaches as shown in Figure~\ref{snt:fig:sent-crp-lex-alt-seeds}.
Except for the method of \newcite{Takamura:05}, all three remaining
methods---\newcite{Velikovich:10}, \newcite{Kiritchenko:14}, and
\newcite{Severyn:15}---show very similar (though not identical)
scores. Moreover, these scores are also very close to the results
achieved by the respective seed sets without any expansion. The
primary reasons for this were again the positive bias of the distantly
labeled tweets and the consequently premature stopping of the
expansion.
Following the suggestion of one of the reviewers, we additionally
included two more seed sets in our evaluation: gold precision and
emoticons. The former list contained just two polar terms---``gut''
(\textit{good}$^+$) and ``schlecht'' (\textit{bad}$^-$)---which showed
an almost perfect precision on the PotTS data
set.\footnote{Unfortunately, we could not include more terms in this
seed set due to a high lexical ambiguity of other polar words. Even
in our proposed prototypical seed list, one of the terms---``gut''
(\textit{good})---could have another rather rare reading
(\textit{manor}) when used as a noun.} The latter seed set
consisted of two regular expressions: one for capturing positive
smileys and another one for matching negative emoticons. As can be
seen from the figure, these lists, however, could hardly outperform
any of our initially used seed sets.
\section{Analysis of Entries}\label{sec:analysis}
Besides investigating the effects of different hyper-parameters and
seeds, we also decided to have a closer look at the actual results
produced by the tested methods. For this purpose, we extracted ten
highest scored entries (not counting the seed terms) from each
automatic lexicon and summarized them in
Table~\ref{snt-lex:tbl:top-10}.
\begin{table}[*hbt]
\begin{center}
\bgroup \setlength\tabcolsep{0.03\tabcolsep}\scriptsize
\begin{tabular}{%
>{\centering\arraybackslash}p{0.02\columnwidth}
*{10}{>{\centering\arraybackslash}p{0.095\columnwidth}}}
\toprule
\textbf{Rank} & %
\textbf{HL} & \textbf{BG} & \textbf{KH} & %
\textbf{ES} & \textbf{RR}$^{**}_{\textrm{mincut}}$ & %
\textbf{RR}$_{\textrm{lbl-prop}}$ & %
\textbf{TKM} & \textbf{VEL} & \textbf{KIR} & %
\textbf{SEV} \\\midrule
1 & \ttranslate{perfekt}{perfect} & %
\ttranslate{flei\ss{}ig}{diligent} &%
\ttranslate{anr\"uchig}{indecent} &%
\ttranslate{namenlos}{nameless} &%
\ttranslate{planieren}{to plane} &%
\ttranslate{prunkvoll}{splendid} &%
\ttranslate{Stockfotos}{stock photos} &%
\ttranslate{Wahl\-kampf\-ge\-schenk}{election gift} &%
\ttranslate{Suchmaschinen}{search engines} &%
\ttranslate{Scherwey}{Scherwey}\\
2 & \ttranslate{musterg\"ultig}{immaculate} & %
\ttranslate{b\"ose}{evil} &%
\ttranslate{unecht}{artificial} &%
\ttranslate{ruhelos}{restless} &%
\ttranslate{Erdschicht}{stratum} &%
\ttranslate{sinnlich}{sensual} &%
\ttranslate{BMKS65}{BMKS65} &%
\ttranslate{Or\-dens\-ge\-schich\-te}{order history} &%
\ttranslate{\#gameinsight}{\#gameinsight} &%
\ttranslate{krebsen}{to crawl}\\
3 & \ttranslate{vorbildlich}{commendable} & %
\ttranslate{beispielhaft}{exemplary} &%
\ttranslate{irregul\"ar}{irregular} &%
\ttranslate{unbewaffnet}{unarmed} &%
\ttranslate{gefallen}{please} &%
\ttranslate{pomp\"os}{ostentatious} &%
\ttranslate{Ziya}{Ziya} &%
\ttranslate{Indologica}{Indologica} &%
\ttranslate{\#androidgames}{\#androidgames} &%
\ttranslate{kaschieren}{to conceal}\\
4 & \ttranslate{beispielhaft}{exemplary} & %
\ttranslate{edel}{noble} &%
\ttranslate{drittklassig}{third-class} &%
\ttranslate{interesselos}{indifferent} &%
\ttranslate{Zeiteinheit}{time unit} &%
\ttranslate{unappetitlich}{unsavory} &%
\ttranslate{Shoafoundation}{shoah found.} &%
\ttranslate{Indologie}{Indology} &%
\ttranslate{Selamat}{selamat} &%
\ttranslate{Davis}{Davis}\\
5 & \ttranslate{exzellent}{excellent} & %
\ttranslate{t\"uchtig}{proficient} &%
\ttranslate{sinnlich}{sensual} &%
\ttranslate{reizlos}{unattractive} &%
\ttranslate{Derivat}{derivate} &%
\ttranslate{befehlsgem\"a\ss{}}{as ordered} &%
\ttranslate{T1199}{T1199} &%
\ttranslate{Energieverbrauch}{energy consumption} &%
\ttranslate{Pagi}{Pagi} &%
\ttranslate{\#Klassiker}{\#classics}\\
6 & \ttranslate{exzeptionell}{exceptional} & %
\ttranslate{emsig}{busy} &%
\ttranslate{unprofessionell}{unprofessional} &%
\ttranslate{w\"urdelos}{undignified} &%
\ttranslate{Oberfl\"ache}{surface} &%
\ttranslate{vierschr\"otig}{beefy} &%
\ttranslate{Emilay55}{Emilay55} &%
\ttranslate{Schimmelbildung}{mold formation} &%
\ttranslate{\#Sparwelt}{\#savingsworld} &%
\ttranslate{Nationalismus}{nationalism}\\
7 & \ttranslate{au\ss{}ergew\"ohnlich}{extraordinary} & %
\ttranslate{eifrig}{eager} &%
\ttranslate{abgeschlagen}{exhausted} &%
\ttranslate{absichtslos}{unintentional} &%
\ttranslate{Essbesteck}{cutlery} &%
\ttranslate{regelgem\"a\ss}{regularly} &%
\ttranslate{Eneramo}{Eneramo} &%
\ttranslate{Hygiene}{hygiene} &%
\ttranslate{\#Seittest}{\#Seittest} &%
\ttranslate{Kraftstoff}{fuel}\\
8 & \ttranslate{au\ss{}erordentlich}{exceptionally} & %
\ttranslate{arbeitsam}{hardworking} &%
\ttranslate{gef\"allig}{pleasing} &%
\ttranslate{ereignislos}{uneventful} &%
\ttranslate{abl\"osen}{to displace} &%
\ttranslate{wahrheitsgem\"a\ss}{true} &%
\ttranslate{GotzeID}{GotzeID} &%
\ttranslate{wasserd}{waterp} &%
\ttranslate{Gameinsight}{Gameinsight} &%
\ttranslate{inaktiv}{idle}\\
9 & \ttranslate{viertklassig}{fourth-class} & %
\ttranslate{musterg\"ultig}{exemplary} &%
\ttranslate{musterg\"ultig}{exemplary} &%
\ttranslate{regellos}{irregular} &%
\ttranslate{Musikveranstaltung}{music event} &%
\ttranslate{fettig}{greasy} &%
\ttranslate{BSH65}{BSH65} &%
\ttranslate{heizkostensparen}{saving heating costs} &%
\ttranslate{\#ipadgames}{\#ipadgames} &%
\ttranslate{8DD}{8DD}\\
10 & \ttranslate{sinnreich}{ingenious} & %
\ttranslate{vorbildlich}{commendable} &%
\ttranslate{unrecht}{wrong} &%
\ttranslate{fehlerfrei}{accurate} &%
\ttranslate{Gebrechen}{afflictions} &%
\ttranslate{lumpig}{shabby} &%
\ttranslate{Saymak.}{Saymak.} &%
\ttranslate{Re\-fe\-renz\-ar\-chi\-tek\-tu\-ren}{reference architectures} &%
\ttranslate{Fitnesstraining}{fitness training} &%
\ttranslate{Mailadresse}{mail address}\\\bottomrule
\end{tabular}
\egroup
\caption{Top ten polar terms produced by the automatic methods.\\
{\small ** -- the min-cut method of \newcite{Rao:09} returns an
unsorted set}}
\label{snt-lex:tbl:top-10}
\end{center}
\end{table}
As can be seen from the table, the approaches of \newcite{Hu:04},
\newcite{Blair-Goldensohn:08}, \newcite{Kim:04}, as well as the
label-propagation algorithm of \newcite{Rao:09} produce almost perfect
polarity lists. The \textsc{SentiWordNet} approach of
\newcite{Esuli:06c}, however, already features some spurious terms
(e.g., ``absichtslos'' \emph{unintentional}) among its top-scored
entries. Finally, the min-cut approach of \newcite{Rao:09} returns a
set of mainly objective terms, which, however, is rather due to the
fact that this method performs a cluster-like partitioning of the
lexical graph without ranking the words assigned to a cluster.
An opposite situation is observed for the corpus-based systems: The
top-scoring polarity lists returned by these approaches not only
include many apparently objective terms but are also difficult to
interpret in general, as they contain a substantial number of slang
and advertising terms (e.g., ``BMKS65'', ``\#gameinsight'',
``\#androidgames'' etc.). This again supports the hypothesis that an
extreme content noisiness of the input domain might pose considerable
difficulties to sentiment lexicon generation methods.
\section{Conclusions and Future Work}\label{sec:concl}
Based on the above observations and our experiments, we can formulate
the main conclusions of this paper as follows:
\begin{itemize}
\item semi-automatic translations of common English polarity lists
notably outperform automatic SLG approaches that are applied
directly to non-English data;
\item despite their allegedly worse ability to accommodate new
domains, dictionary-based methods are still superior to corpus-based
systems (at least in terms of the proposed intrinsic evaluation),
provided that a sufficiently big lexical taxonomy exists for the
target language;
\item a potential weakness of the dictionary-based algorithms,
however, is their susceptibility to different hyper-parameter
settings and the size and composition of the initial seed sets;
\item nevertheless, the effect of the seed sets might be even stronger
for the corpus-based approaches which rely on distant supervision,
if the resulting noisy labeled training set becomes highly
unbalanced.
\end{itemize}
In this respect, there appears to be a great need for a corpus-based
method which can both benefit from in-domain data and be resistant to
unbalanced training sets; and we are, in fact, currently working on
such an algorithm. By taking advantage of the recent advances in deep
learning and distributional semantics, we aim to show an efficient way
of getting suitable vector representations for polar terms and
generating high-quality sentiment lexicons from these automatically
learned vectors.
\section*{Acknowledgments}\label{sec:ackn}
We thank the anonymous reviewers for their suggestions and comments.
\section{Introduction}
Exactly solvable quantum mechanical models have proved to be invaluable tools in
the understanding of many fundamental quantum mechanical concepts. In particular,
they give insight into complex phenomena, like the symmetries of quantum
mechanical systems, and they also allow the investigation of transitions through critical parameter domains \cite{[Le15]}.
Our goal in this paper is simpler: the analytical solution serves as a firm basis for the numerical integration of the radial Schroedinger equation.
In this work we consider a very simple case in which a neutral particle
is scattered on a spherically symmetric target nucleus represented
by a real Woods-Saxon (WS) type \cite{[Wo54]} nuclear potential $v(r)$.
The WS potential is the most common phenomenological nuclear potential used
in the description of nuclear reactions in the Gamow shell model (GSM) description \cite{[Mi09]} of drip line nuclei. A recent analysis of this type is the description of the $^7$Be(p,$\gamma$)$^8$B and the $^7$Li(n,$\gamma$)$^8$Li reactions in Ref. \cite{[Fo15]}.
The key elements of GSM are the Berggren-ensembles of single particle
states. The Berggren-ensemble might include resonant and sometimes anti-bound
states besides the bound states and the scattering states along a complex path.
The shape of the path determines the set of $S$-matrix pole states to be included in the ensemble. In order to have a smooth contribution from the scattering states, the path should stay reasonably far from the poles. Knowing where the poles are located is therefore of crucial importance.
The WS potential was introduced a long time ago as a smoothed replacement of the square well (SQ)
potential, because the sudden jump of the constant potential depth to zero was considered unrealistic in the SQ potential. The introduction of a diffuseness parameter simulates the gradual decrease of the potential value
in the surface region of the nucleus. The only disadvantage of the diffuse potential is that the closed analytical form of the solution of the radial Schroedinger
equation has to be sacrificed. More precisely, for zero angular momentum the
solution can still be calculated analytically, but for higher partial waves the equation can be solved without approximation \cite{[Ku15]} only by numerical integration.
In this work we use the opportunity of having both analytic and numeric solutions for the $l=0$ case and compare the
distribution of the
poles of the scattering matrix $S$ for the two types
of the potentials.
In section 2 of the paper we define the poles of the S-matrix on the complex $k$-plane for zero angular
momentum.
In section 3 we give analytical formulae of the $S$-matrix for WS and GWS potentials. In section 4 and 5 we
present the cut-off versions of the potentials (CWS and CGWS potentials) and the numerical methods for
calculating the positions of the resonant poles and the normalization of the resonant wave functions. A simple perturbation correction of the cut-off of these
potentials is also given in the section 5. Section 6 contains the results of the numerical
examples for heavy and light nuclear systems. The summary of the results is given in section 7.
\section{Poles of the $S$-matrix on the complex $k$ plane}
The nuclear potential $v(r)$ determines the so-called {\it squared local wave number}
\begin{equation}
\label{locwn}
k^2(r)=[k^2 -v(r)]~,
\end{equation}
and the radial Schroedinger equation to be solved is
\begin{equation}
\label{radial}
u^{\prime\prime}(r,k)+k^2(r) u(r,k)=0~.
\end{equation}
Here prime denotes the derivative with respect to (wrt) the radial distance $r$.
The first boundary condition (BC) for the solution $u(r,k)$ is its regularity at $r=0$:
\begin{equation}
\label{regular}
u(0,k)=0~.
\end{equation}
The other BC is specified
in the asymptotic region, where the nuclear potential becomes zero, and
the solution $u(r,k)$
is proportional to a combination of the incoming $e^{-ikr}$ and outgoing $e^{ikr}$
free spherical waves:
\begin{equation}
\label{scattbc}
u(r,k)=C [e^{-ikr}-S_{l=0}(k)e^{ikr}]~.
\end{equation}
The ratio of the two types of solutions is fixed by
the element of the scattering matrix $S_0(k)$. For scattering solutions
the BC in Eqs. (\ref{regular}) and (\ref{scattbc}) can be satisfied for any value of $k$.
For the poles of $S(k)$
the BC should be of purely outgoing type. For a decaying resonance the solution in the asymptotic region is
proportional to
$e^{ikr}$.
The poles of $S_0(k)$ for a real potential are either real energy poles (bound and anti-bound poles) or complex energy poles. The real energy poles lie on the imaginary $k$-axis. The complex energy poles are the resonances.
The decaying resonances lie in the fourth quadrant of the $k$ plane. The mirror images
of the decaying resonances
are the capturing resonances in the third quadrant.
The complex wave number of a decaying (Gamow or Gamow-Siegert) resonance is denoted by $k=k^R+ i k^I$ with $k^I<0$ and $k^R>0$, while $k^R<0$ for a capturing resonance. Since the energy is proportional to $k^2$, the unbound poles (i.e. the anti-bound poles and the resonances) lie on the second energy sheet.
\section{WS and GWS potentials}
We want to study the dependence of the resonant pole positions on the tail of the nuclear potential. We compare the pole structure of the potentials without any cut
and that of the cut-off potentials. A SQ potential is zero beyond its radius
but the WS potential becomes zero only at infinity. It is convenient to write
the WS potential as a product of its strength $V_1$
\begin{equation}
\label{WS}
V^{WS}(r)=V_1f^{WS}(r)~,
\end{equation}
and
its radial shape:
\begin{equation}
\label{WSRS}
f^{WS}(r)=-
\frac{1}{1+e^{\frac{r-R}{a}}}~.
\end{equation}
This shape has two parameters, the radius $R$ and the diffuseness $a$. The exponential tail in Eq. (\ref{WSRS})
tends to zero when
the radial distance $r$ goes to infinity. To calculate scattering cross-sections
we need the matrix element of the scattering matrix $S$. The value of $S_0$ can be calculated by matching
the solution of the radial Schroedinger equation to the asymptotic solution.
For a WS potential the matching to the asymptotic solutions can be
done only at infinite distance.
Sometimes we can complement the WS potential with a surface type term with radial shape
\begin{equation}
\label{surfaceform}
f^{SWS} (r)=-\frac{e^{\frac{r-R}{a}}}{(1+e^{\frac{r-R}{a}} )^2}~,
\end{equation}
and strength $V_2$ and consider the
so called generalized WS potential (GWS)
\begin{equation}
\label{general}
V^{GWS}(r)=V_1f^{WS} (r)+V_2f^{SWS} (r)~.
\end{equation}
For
$l=0$ the solution is given
analytically by Gy. Bencze \cite{[Be66]} and very recently by O. Bayrak and E. Aciksoz \cite{[Ba15]}. Thanks to the analytical solution the matching to
the free solution
can be performed at infinity.
For the GWS potential the $l=0$ scattering matrix element $S_0(k)$ at the complex wave number $k$ is given in Ref. \cite{[Be66]} as follows:
\begin{equation}
\label{sform}
S_0(k)=\exp(-2ikR){\Gamma(2ika)\over \Gamma(-2ika)}\cdot
{A{\Gamma(1-2\lambda)\over \Gamma(1-\lambda-\mu+ika)\Gamma(-\lambda+\mu+ika)}-
{\Gamma(1+2\lambda)\over \Gamma(\lambda+\mu+ika)\Gamma(1+\lambda-\mu+ika)}
\over
{\Gamma(1+2\lambda)\over \Gamma(1+\lambda-\mu-ika)\Gamma(\lambda+\mu-ika)}-A{\Gamma(1-2\lambda)\over \Gamma(-\lambda+\mu-ika)\Gamma(1-\lambda-\mu-ika)}}~~,
\end{equation}
where $\Gamma(z)$ stands for the Gamma function with complex argument $z$, see e.g. \cite{[Ab65]}.
\begin{equation}
\label{A}
A=
\Big(\frac{b}{1+b}\Big)^{2\lambda}(1+b)^{-2ika}
\frac{_2F_1\big(\lambda+\mu+ika,1+\lambda-\mu+ika,1+2\lambda;\frac{b}{1+b}\big)}
{_2F_1\big(-\lambda+\mu-ika,1-\lambda-\mu-ika,1-2\lambda;\frac{b}{1+b}\big)}~,
\end{equation}
where $b=e^{-R/a}$ and
$_2F_1(a,b,c;z)$ stands for the hypergeometric function \cite{[Ab65]} with complex argument $z$.
The parameters have the following values:
\begin{equation}
\label{lam}
\lambda=ika\sqrt{1+\frac{V_1}{E}}~, \mu={1\over2}+{1\over2}\sqrt{1+4k^2a^2\frac{V_2}{E}}~,
{\rm and}~E={\hbar^2 k^2\over 2M}~.
\end{equation}
The complex energy $E$ is calculated from the complex wave number $k$ by using the reduced mass $M$ of the target nucleus plus neutron system.
In order to find the poles of $S_0(k)$ in a domain of the complex $k$-plane we search for the zeros of $D(k)$, which is the denominator in Eq. (\ref{sform}):
\begin{equation}
\label{denom}
D(k)=
\frac{\Gamma(1+2\lambda)}{\Gamma(1+\lambda-\mu-ika)\Gamma(\lambda+\mu-ika)}-A\frac{\Gamma(1-2\lambda)}{\Gamma(-\lambda+\mu-ika)\Gamma(1-\lambda-\mu-ika)}~.
\end{equation}
The zeros of $D(k)$ are found by the program BENCZE, written in Wolfram Mathematica.
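For illustration, the search for the zeros of $D(k)$ in Eq. (\ref{denom}) can be sketched in a few lines of Python with the {\tt mpmath} library, which provides the Gamma and hypergeometric functions for complex arguments. The snippet below is only a schematic reimplementation, not the BENCZE program itself; the starting guess and the value used for $\hbar^2/2M$ are illustrative assumptions.
\begin{verbatim}
# Schematic zero search for D(k) of Eq. (denom); not the BENCZE program.
# The WS parameters are those of Sec. 6; hb2m = hbar^2/(2M) ~ 20.7 MeV fm^2
# (approximate neutron value) and the starting guess are assumptions.
import mpmath as mp

V1, V2 = 44.4, 0.0        # MeV
R, a = 7.52, 0.7          # fm
hb2m = 20.7               # MeV fm^2

def D(k):
    E = hb2m * k**2                     # complex energy E = hbar^2 k^2 / (2M)
    lam = 1j * k * a * mp.sqrt(1 + V1 / E)
    mu = 0.5 + 0.5 * mp.sqrt(1 + 4 * k**2 * a**2 * V2 / E)
    b = mp.exp(-R / a)
    z = b / (1 + b)
    A = (z**(2*lam) * (1 + b)**(-2j*k*a)
         * mp.hyp2f1(lam + mu + 1j*k*a, 1 + lam - mu + 1j*k*a, 1 + 2*lam, z)
         / mp.hyp2f1(-lam + mu - 1j*k*a, 1 - lam - mu - 1j*k*a, 1 - 2*lam, z))
    return (mp.gamma(1 + 2*lam)
            / (mp.gamma(1 + lam - mu - 1j*k*a) * mp.gamma(lam + mu - 1j*k*a))
            - A * mp.gamma(1 - 2*lam)
            / (mp.gamma(-lam + mu - 1j*k*a) * mp.gamma(1 - lam - mu - 1j*k*a)))

k_pole = mp.findroot(D, mp.mpc(0.5, -0.3))  # secant iteration, complex guess
\end{verbatim}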
Having the discrete set of zeros $k_m$ in the fourth $k$-quadrant
\begin{equation}
\label{zeros}
D(k_m)=0~,
\end{equation}
we can order them as the $k^R>0$ values increase.
The bound and the anti-bound poles along the imaginary $k$-axis are ordered differently, according to the number of the nodes $n$ in the radial wave function excluding the origin.
The $a\to 0$ limit of the WS potential in Eq. (\ref{WS}) corresponds to the SQ potential. Poles in the square well potential were studied extensively by Nussenzveig \cite{[Nu59]}. He showed that for $l=0$ the pole trajectories converge to asymptotes with $k_m^R=\frac{m\pi}{R}$ as the depth $V_1\rightarrow 0$.
The distance of the consecutive poles is determined basically by
the radius of the square well $R$, where the wave function is reflected.
For large $m$ values the distance between two consecutive resonant poles,
$|k_{m+1}-k_m|$, approaches the value $\frac{\pi}{R}$, so we can approximate
the $m$ dependence of the pole position $k_m$ for large $m$ by
\begin{equation}
\label{rekn}
k^R_{m}= \frac{m\pi}{R} + O(1)~.
\end{equation}
In the book of R. Newton \cite{[Ne82]} a similar expression is given for the real parts of the starting values
of the $k$-trajectories.
The values there were given for so-called strictly finite range (SFR) potentials vanishing at and beyond $R_{max}$ \cite{[Da12]}. For a SQ potential the radius $R$ and the finite range $R_{max}$ are the same distance. The WS or the GWS potentials become zero only at infinite distance, therefore they are not SFR-type potentials.
In Ref. \cite{[Sa14]} it was shown that for some types of SFR potentials the starting points of the pole trajectories (the starting point of a trajectory is the $k$ value belonging to a very small potential strength $V_1$) can be described by the relation:
\begin{equation}
\label{modkn}
|k_{m}|= \frac{m\pi}{R} + O(1)~.
\end{equation}
Although these findings were observed for very small $V_1$ values, we
speculate that Eq. (\ref{modkn}) might be valid approximately for realistic values of $V_1$, too.
Therefore after calculating the complex $k_m$ eigenvalues with realistic values of $V_1$ we tried to fit the $|k_m|$ values
by a first order polynomial:
\begin{equation}
\label{line}
p(m)=a_0+a_1m~.
\end{equation}
The best fit first order polynomial minimizes the sum of the
squares of the differences:
\begin{equation}
\label{deltamod}
\Delta(a_0,a_1)=\sum_{m=m_s}^{m_u} [|k_{m}|-p(m)]^2 \to {\rm min}~.
\end{equation}
From the slope $a_1$ of the best fit polynomial
we can deduce a distance $\cal{R}$
based on the relation:
\begin{equation}
\label{range}
{\cal{R}}=\frac{\pi}{a_1}~.
\end{equation}
Since the relation in Eq. (\ref{modkn}) is expected to be valid for large $m$ values, we apply a lower cut $m_s$ on the $m$ values and check the validity of the linear behavior as $m_s$ increases. The upper value of the index, $m_u$, is fixed at a large value.
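For illustration, the fit of Eqs. (\ref{line})--(\ref{deltamod}) and the deduced range of Eq. (\ref{range}) can be written in a few lines of Python; the array of pole wave numbers is assumed to be available from a previous pole search.
\begin{verbatim}
# Least-squares line through |k_m| for m = m_s ... m_u, Eqs. (line)-(range).
# k_poles is assumed to hold the complex eigenvalues ordered by m = 1, 2, ...
import numpy as np

def deduced_range(k_poles, m_s, m_u):
    m = np.arange(m_s, m_u + 1)
    y = np.abs(k_poles[m_s - 1:m_u])   # the |k_m| values
    a1, a0 = np.polyfit(m, y, 1)       # slope a_1 and intercept a_0
    return np.pi / a1                  # the distance of Eq. (range)
\end{verbatim}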
\section{Cut-off WS and GWS potentials}
By cutting off the WS potential or the GWS potential to zero at a finite distance $R_{max}$
we can convert them to SFR potentials and solve
the radial equation by numerical integration.
In the direct numerical integration we proceed from the origin $r=0$ step by step to $R_{max}$, where the nuclear potential becomes zero. At or beyond this distance, i.e. at $R_a\ge R_{max}$, we match the numerical solution to the free solution and calculate $S_0(k)$.
The cut-off Woods-Saxon
potential (CWS) has the form:
\begin{equation}
\label{WSpot}
V^{CWS}(r)=V_1f^{CWS}(r)~,
\end{equation}
where the radial shape is
\begin{equation}
\label{vagottWS}
f^{CWS}(r)=-\left\{
\begin{array}{rl}
\frac{1}{1+e^{\frac{r-R}{a}}}
&\textrm{, if } r~<~R_{max}\\
0~~~~&\textrm{, if } r~\geq~ R_{max}~.
\end{array}
\right.
\end{equation}
In the cut-off GWS (CGWS) form the surface potential term in Eq. (\ref{surfaceform})
is cut to zero at the same $R_{max}$ distance.
Although the method of the numerical calculation of the $k_m$ eigenvalues is given in several places, see e.g. Refs. \cite{[Ve82]},\cite{[Ix95]},
\cite{[Bar15]},
let us briefly sketch how the pole solutions of the radial equation are calculated numerically.
We introduce left and right solutions of the radial equation in
Eq. (\ref{radial}) and an intermediate distance $R_{id}$, which separates the left and
right regions.
A left solution satisfies the initial values:
\begin{equation}
\label{leftzero}
u_{left}(0,k)=0,{\rm ~and} \quad\quad u_{left}^\prime(0,k)=1~,
\end{equation}
and it is defined in the $r\in [0,R_{id}]$ interval. We get it by integrating Eq. (\ref{radial}) numerically from the origin to $R_{id}$, where we calculate the logarithmic
derivative of the left solution:
\begin{equation}
\label{leftlgder}
L_{left}(k,R_{id})=\frac{u_{left}^\prime(R_{id},k)}{u_{left}(R_{id},k)}~.
\end{equation}
The right solution satisfies outgoing boundary condition at the distance $R_a\ge R_{max}$, where the nuclear potential is zero, therefore the initial values for the right
solution are outgoing waves:
\begin{equation}
\label{righbc}
u_{right}(R_a,k)=e^{ikR_a},{\rm ~and} \quad\quad u_{right}^\prime(R_a,k)=ik e^{ikR_a}~.
\end{equation}
The right solution is defined in the $r\in [R_{id},R_a]$ interval. We integrate the radial equation in Eq. (\ref{radial}) numerically in the backward direction, starting from $R_a$ down to $r=R_{id}$, where we calculate the logarithmic
derivative of the right solution:
\begin{equation}
\label{rightlgder}
L_{right}(k,R_{id})=\frac{u_{right}^\prime(R_{id},k)}{u_{right}(R_{id},k)}~.
\end{equation}
The eigenvalue $k_m$ of the pole state belongs to the
zero of the difference of the left and right logarithmic derivatives:
\begin{equation}
\label{logder}
G(k_m,R_{id})=L_{left}(k_m,R_{id})-L_{right}(k_m,R_{id})=0~.
\end{equation}
The computer programs GAMOW \cite{[Ve82]} and ANTI \cite{[Ix95]} find the zeros of $G(k_m,R_{id})$ at a certain matching distance $0<R_{id}<R_{max}\le R_{a}$.
The program GAMOW \cite{[Ve82]} uses the Fox-Goodwin method with fixed mesh size, while the program ANTI \cite{[Ix95]}
uses the more powerful method of Ixaru \cite{[Ix84]} for the numerical integration.
For a broad resonance the proper choice of the matching distance $R_{id}$ is difficult. The zero is searched for by Newton iterations, and the iteration
process often converges poorly or fails. Therefore we developed a new method in which we compare the logarithmic derivatives not at a fixed distance $R_{id}$ but in a wide region of $r$. This method is built into the program JOZSO\footnote{The program name is chosen to honor the late J\'ozsef Zim\'anyi to whom one of the authors (T. Vertse) is grateful for starting his career.} \cite{[No15]}.
We calculate $G(k_m,r)$ in Eq. (\ref{logder}) at equidistant mesh-points with mesh size $h$
at $r_j=j h$, $j\in [i_1,i_2]$. The mesh points are taken from a region where the nuclear potential falls most rapidly.
Then
we search for the absolute minimum of the function of two real variables
$k^R$, and $k^I$:
\begin{equation}
\label{minima1}
F(k^R,k^I)=\log [\sum_{j=i_1}^{i_2} | G(k,r_j)|]~.
\end{equation}
Absolute minima of the function ${F}(k^R,k^I)$ in Eq. (\ref{minima1})
should have a large negative value. The position of the absolute minimum is the pole position of $S(k)$. The minimum of the function is found by using Powell's method from Ref. \cite{[numrec]}. The function $-F(k)$ shows peaks
at the minima of $F(k)$.
To find the minima of the function $F(k^R,k^I)$ in Eq. (\ref{minima1}), we first explore the landscape of the function in the complex $k$ domain of interest and then search for its minima.
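The essence of this procedure can be summarized by the following schematic Python code. It is not the JOZSO program (which is written in FORTRAN); the step numbers, the mesh region, and the starting point of the minimization are illustrative assumptions.
\begin{verbatim}
# Schematic pole search for a CWS potential: RK4 integration of Eq. (radial)
# from both sides and Powell minimization of F(k^R,k^I), Eq. (minima1).
# Not the JOZSO code; grid sizes and the starting point are assumptions.
import numpy as np
from scipy.optimize import minimize

hb2m = 20.7                              # MeV fm^2, approx. hbar^2/(2M)
V1, R, a, Rmax = 44.4, 7.52, 0.7, 12.0   # CWS parameters used in Sec. 6
Ra = Rmax                                # outgoing-wave condition at R_a

def v(r):                                # CWS potential in units of fm^-2
    return -(V1 / hb2m) / (1.0 + np.exp((r - R) / a)) if r < Rmax else 0.0

def integrate(k2, r0, r1, y0, n=2000):
    # fixed-step RK4 for u'' = -(k^2 - v(r)) u; returns (u, u') at r1
    rs = np.linspace(r0, r1, n + 1)
    h = rs[1] - rs[0]
    y = np.asarray(y0, dtype=complex)
    f = lambda r, w: np.array([w[1], -(k2 - v(r)) * w[0]])
    for r in rs[:-1]:
        s1 = f(r, y); s2 = f(r + h/2, y + h*s1/2)
        s3 = f(r + h/2, y + h*s2/2); s4 = f(r + h, y + h*s3)
        y = y + h * (s1 + 2*s2 + 2*s3 + s4) / 6
    return y

def F(kRI):                              # Eq. (minima1) on a mesh in [6,10] fm
    k = kRI[0] + 1j * kRI[1]
    total = 0.0
    for rj in np.linspace(6.0, 10.0, 9):
        yl = integrate(k**2, 1e-6, rj, [0.0, 1.0])              # Eq. (leftzero)
        yr = integrate(k**2, Ra, rj,
                       [np.exp(1j*k*Ra), 1j*k*np.exp(1j*k*Ra)]) # Eq. (righbc)
        total += abs(yl[1]/yl[0] - yr[1]/yr[0])                 # |G(k, r_j)|
    return np.log(total)

res = minimize(F, x0=[1.0, -0.5], method="Powell")  # pole at res.x = (kR, kI)
\end{verbatim}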
\section{Normalized resonant solution}
At the pole $k_m$ the left and the right solutions can be matched smoothly,
because their logarithmic derivatives are equal, so we can
take the left solution $u_{left}(r,k_m)$ in the interval $r\in [0,R_a]$ as a non-normalized solution of the radial equation in Eq. (\ref{radial}).
Sometimes we need the normalized solution, e.g. in the Berggren-ensemble \cite{[Be68]},
in which
all pole solutions are normalized to unity.
The square of the norm is composed from the sum of the contributions of the internal and external regions:
\begin{equation}
\label{norm2}
N^2=N_{int}^2+N_{ext}^2~.
\end{equation}
The first one we calculate numerically, by quadrature
\begin{equation}
\label{normi}
N_{int}^2=\int_0^{R_a} u_{left}^2(r) dr~,
\end{equation}
while the second one is given analytically as in Ref. \cite{[Ve87]}:
\begin{equation}
\label{normext}
N_{ext}^2=-\frac{u_{left}(R_a,k_m)^2}{2ik_m}~.
\end{equation}
Then the normalized solution is simply:
\begin{equation}
\label{normsol}
u(r,k_m)=\frac{1}{N} u_{left}(r,k_m)~.
\end{equation}
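A direct transcription of Eqs. (\ref{norm2})--(\ref{normsol}) into Python reads as follows; note that the Gamow norm involves $u^2$ rather than $|u|^2$. The arrays rs and u are assumed to hold the left solution on a radial grid up to $R_a$.
\begin{verbatim}
# Normalization of a resonant solution, Eqs. (norm2)-(normsol).
# rs, u: radial grid and left solution up to R_a; km: pole wave number.
import numpy as np

def normalize(rs, u, km):
    w = np.diff(rs)                                    # trapezoidal weights
    N2_int = np.sum((u[1:]**2 + u[:-1]**2) / 2 * w)    # Eq. (normi), note u^2
    N2_ext = -u[-1]**2 / (2j * km)                     # Eq. (normext)
    return u / np.sqrt(N2_int + N2_ext)                # Eq. (normsol)
\end{verbatim}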
Using the normalized solution we can estimate
the energy shift $\Delta\epsilon_m$ of the pole energy $E$ of the CGWS potential
relative to the potential without cut.
The tail of the resonant normalized solution beyond $R_a$ is an outgoing wave given as
\begin{equation}
\label{asol}
Ae^{ik_mr}=\frac{u(R_a,k_m)}{e^{ik_mR_a}}e^{ik_mr}~.
\end{equation}
We can try to correct the effect of the cut-off of the tail of the GWS potential on the energy of the resonance using a first-order perturbation approach:
\begin{equation}
\label{epert}
\Delta \epsilon_m=\int_{R_{max}}^\infty V^{GWS}(r) u^2(r,k_m) dr~,
\end{equation}
and compare it to the energy difference
\begin{equation}
\label{ediff}
E({\rm BENCZE})-E({\rm JOZSO})~.
\end{equation}
If in the volume and in the surface terms of $V^{GWS}(r)$ in Eq. (\ref{general}) we approximate the fall of the tails of the potentials by $e^{\frac{R-r}{a}}$ in the integration region of Eq. (\ref{epert}), we can approximate the energy difference by the analytic expression
\begin{equation}
\label{apepert}
\Delta\epsilon_m=\frac{a(V_1+V_2)A^2}{(1-2ik_ma)}~e^{\frac{R-R_{max}}{a}+2ik_mR_{max}}~.
\end{equation}
A correction $\Delta k_m$ to the wave number can be calculated from the energy shift $\Delta\epsilon_m$ by solving a second-order algebraic equation, which yields
\begin{equation}
\label{kshift}
\Delta k_m=-k_m+\sqrt{k_m^2+c_1 \Delta\epsilon_m}~,
\end{equation}
where $c_1$ denotes the factor between $k^2$ in fm$^{-2}$ and the energy $E$ in MeV ($k^2=c_1E$).
The corrected resonance energy can be written as
\begin{equation}
\label{ecorr}
E^{corr}_m=E_m+\Delta\epsilon_m~,
\end{equation}
while the corrected wavenumber of the resonance has the form
\begin{equation}
\label{kcorr}
k^{corr}_m=k_m+\Delta k_m~.
\end{equation}
The accuracies of these corrections are checked by comparing the corrected wave numbers to the values calculated by the program BENCZE, see Table \ref{sp56fe} in the next section.
If the perturbative correction worked well for all poles at $l=0$, we could later use it for $l>0$, where we are unable to handle the problem analytically.
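For reference, the corrections of Eqs. (\ref{apepert})--(\ref{kcorr}) amount to the following short Python function; the conversion factor $c_1\approx 0.047$ fm$^{-2}$/MeV quoted below corresponds approximately to the $^{56}$Fe+$n$ reduced mass and is given only for illustration.
\begin{verbatim}
# Perturbative cut-off correction, Eqs. (apepert), (kshift) and (kcorr).
# A is the asymptotic amplitude of Eq. (asol); c1 ~ 0.047 fm^-2/MeV is an
# approximate value for the 56Fe+n reduced mass (illustrative only).
import numpy as np

def corrected_pole(km, A, V1, V2, R, a, Rmax, c1=0.047):
    d_eps = (a * (V1 + V2) * A**2 / (1 - 2j*km*a)
             * np.exp((R - Rmax)/a + 2j*km*Rmax))  # Eq. (apepert)
    dk = -km + np.sqrt(km**2 + c1 * d_eps)         # Eq. (kshift), principal root
    return km + dk                                 # Eq. (kcorr)
\end{verbatim}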
\section{Numerical examples}
We applied our formalism to two systems in which neutrons are scattered on a heavy target nucleus and on a lighter nucleus. For the first one we chose the $^{208}$Pb+n system,
with potential parameters $V_1=44.4$ MeV, $V_2=0$, $r_0=1.27$ fm, $a=0.7$ fm.
The radius of the potential is $R=r_0~208^{1/3}\approx 7.52$ fm.
For the second example we considered the lighter system studied in Ref. \cite{[Ba15]}. In that work only bound states of the $^{56}$Fe+n system were calculated; here we extend the study to resonances.
\subsection{WS results for a heavy system}
In Fig. \ref{abssws} we show the $|S_0(k)|$ on the domain
$k^R\in [-0.1,5]$ fm$^{-1}$ and $k^I\in [-4,-0.1]$ fm$^{-1}$ calculated for the WS
potential with the parameters listed above.
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{fig1.eps}
\caption{ $|S(k)|$ on the domain
$k^R\in [-0.1,5]$ fm$^{-1}$ and $k^I\in [-4,-0.1]$ fm$^{-1}$ calculated for a WS
potential with parameters $V_1=44.4$ MeV, $V_2=0$ MeV, $a=0.7$ fm,
$R=7.52$ fm. }
\label{abssws}
\end{figure}
It can be seen that the poles form a mountain with peaks
being almost equidistant in $k^R$. If we assign a sequence number $m$ to each
peak, we can fit the first-order polynomial in Eq. (\ref{line}) to either the
$k_m^R$ values as a function of $m$, or to the $|k_m|$ values.
We observed that the fits to the
$k_m^R$ values and to the $|k_m|$ values produce very similar results.
Therefore in the remaining part of this paper we use only the fits to the $|k_m|$ values.
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{fig2.eps}
\caption{ $-F(k)$ on the domain
$k^R\in [-0.1,5]$ fm$^{-1}$ and $k^I\in [-4,-0.1]$ fm$^{-1}$ calculated for a CWS
potential with parameters $V_1=44.4$ MeV, $V_2=0$ MeV, $a=0.7$ fm,
$R=7.52$ fm, $R_{max}=12$ fm. }
\label{absscws}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{fig3.eps}
\caption{Dependence of the slopes of the fitted lines on the lower cut value of the sequence number of the pole $m_s$ for the WS and CWS potentials, for $l=0$ and with
parameters $V_1=44.4$ MeV, $R=7.52$ fm,
$a=0.7$ fm (red squares), $a=0.05$ fm (black circles). $R_{max}=12$ fm for CWS potential (green diamonds). The first order polynomials were fitted to the $|k_m|$ values. The horizontal lines correspond to the values $\pi/R$ (blue line) and $\pi/R_{max}$ (violet line). }
\label{varaslope}
\end{figure}
In Fig. \ref{varaslope} we show the $a$ dependence of the slope of the first order polynomial fitted to the $|k_m|$ values calculated for the WS
potential with two different values of the diffuseness.
The very small value $a=0.05$ fm simulates the SQ potential; the other case, $a=0.7$ fm, is the standard diffuseness value for $^{208}$Pb.
The horizontal blue line corresponds to the value $\frac{\pi}{R}$, where $R$ is the common
radius of the square well and the WS potential. The distance of the asymptotes
of the poles in the SQ well is $\frac{\pi}{R}$.
As $m_s$ increases, the slope values $a_1$ soon become independent of $m_s$, and they are close to the value $\frac{\pi}{R}$.
This simple property of the SQ potential seems to be inherited by the WS potential. The deviations from this simple rule increase as the value of the diffuseness parameter
increases, but the deviations remain within $5\%$ of the $\frac{\pi}{R}$ value for $m_s>20$ even for $a=0.7$ fm (red squares). Therefore the estimated range in Eq. (\ref{range}) is close to the radius parameter of the WS potential.
\subsection{CWS results for a heavy system}
\label{heavy}
For comparison with the cut-off potentials,
we calculate the positions of the poles in a CWS potential with the same
parameters as the WS potential above, but with a cut-off radius $R_{max}=12$ fm.
The distributions of the complex poles can be visualized if we plot the landscape of the function $-F(k)$
defined in Ref. \cite{[Bar15]} on the same domain of the complex $k$-plane as we considered in Fig. \ref{abssws}. The results are displayed in Fig. \ref{absscws}.
The peaks of the function $-F(k)$ are at the same $k$-values where the poles
of $S(k)$ are.
One can see in Fig. \ref{absscws} that the peaks form a single group of poles (mountain), but there are more peaks for the CWS potential than in Fig. \ref{abssws} for the WS potential.
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{fig4.eps}
\caption{Dependence of the slopes of the fitted lines on the lower cut value of the sequence number of the pole $m_s$ for the CWS potentials, for $l=0$ and with
parameters $V_1=44.4$ MeV, $R=7.52$ fm,
$a=0.7$ fm for different $R_{max}$ values. Squares for $R_{max}=12$, stars for
$R_{max}=15$, circles for $R_{max}=18$. The first order polynomials were fitted to the $|k_m|$ values. The horizontal lines correspond to the values $\pi/R$ (magenta) and $\pi/R_{max}$, with $R_{max}=12$ (red line), $R_{max}=15$ (green line), $R_{max}=18$ (blue line). }
\label{varcutoffslope}
\end{figure}
It is interesting to study how the pole positions change as the diffuseness parameter $a$ of the CWS potential approaches zero to simulate a SQ potential.
As the value of the diffuseness is reduced another mountain starts to develop at low
$k^R$ values. A similar phenomenon can be observed in the case of $^{56}$Fe in Fig. \ref{absscws2}. The other mountain joins the first mountain as $m$ increases (at higher energy). Reducing the diffuseness further, the second mountain moves away from the first one.
It was observed in Ref. \cite{[Bar15]} that close-lying resonances interact with each other; therefore
we analyze the first mountain only when the other one is far enough away not to interact with its resonances.
For a smoother CWS potential with $a=0.7$ fm the reflection at $R$ is negligible and
the radial wave function is reflected only at $R_{max}$. The poles of the CWS potential in this case form a single mountain with slopes close to the value $\pi/R_{max}$, see Fig. \ref{varaslope}.
The $R_{max}$ dependence of the slope of the fitted first-order polynomial is shown in Fig. \ref{varcutoffslope}.
This dependence on the unphysical parameter $R_{max}$ is an inconvenient feature of the CWS potential \cite{[ra11]}.
\subsection{CWS and CGWS potentials for a lighter system}
The lighter system is the $^{56}$Fe+n system studied by Bayrak and Aciksoz \cite{[Ba15]}. They used GWS and CGWS potentials for calculating bound state
energies in that system and found reasonably good agreement between the
bound state energies calculated by the analytical method and by the numerical one
with the cut-off potential. Now we extend the studies to resonant states and
investigate the effect of the cut-off on the positions of the resonant poles.
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{fig5.eps}
\caption{ Radial shapes of the GWS potentials combined from
attractive volume and repulsive surface terms with parameters: $V_1=47.78$ MeV and $V_2=-200$ MeV,
$R=4.92$ fm, $a=0.6$ fm. }
\label{wsgws}
\end{figure}
In Fig. \ref{wsgws} one can see the radial shapes of
the CGWS potential with strengths $V_1=47.78$ MeV and $V_2=-200$ MeV,
$R=4.92$ fm, $a=0.6$ fm. The parameters were taken from Ref. \cite{[Ba15]}; the dashed red curve here corresponds to the dot-dashed curve in Fig. 1 of that paper.
One can see in the figure that the surface term with negative $V_2$ strength produces a barrier in
the GWS potential. The barrier results in the appearance of a few narrow resonances in the potential; the higher the barrier, the more narrow resonances there are. If we cut the tail of the GWS potential we introduce extra reflections
at the cut-off radius besides the ones at the nuclear radius $R$.
In Fig. \ref{absscws2} we plot the landscape of the function $-F(k)$ in the domain
$k^R\in [-0.1,5]$ fm$^{-1}$, $k^I\in [-1.2,0]$ fm$^{-1}$ calculated by using the CGWS potential. The positions of the poles for the GWS and CGWS potentials are shown in Fig. \ref{gwscgwsv2200}. Positions of the poles in the GWS potential are denoted by black circles.
The poles of the GWS without cut are
distributed quite regularly on the $k$-plane. If we fit the $m$-dependence of their $|k_m|$ values, the slope $a_1$ of the first-order polynomial gives
a range value $\cal{R}$ that is very close to $R=4.92$ fm. Therefore the reflection of the
wave function happens at the radius of the GWS potential.
The poles for the CGWS potential are denoted by red squares in Fig. \ref{gwscgwsv2200}. These poles were calculated by the program JOZSO. The results of the perturbation correction to the cut-off are denoted by green stars. Numerical values of the resonant pole positions are listed in Table \ref{sp56fe}.
\begin{table}[h!]
\begin{center}
\caption{Dependence of the complex $k_m$ wave numbers of the $l=0$ poles of the $S$-matrix on the $V_2$ strength of the surface term for $^{56}$Fe+$n$ in the GWS potential. Analytical results were calculated by the code BENCZE, the numerical results were calculated by the program {\rm JOZSO} \cite{[No15]}. The corrected $k_m^{corr}$ values were calculated using Eq. (\ref{kcorr}). The cut-off radius value was $R_{max}=12$ fm. The
strength of the volume term was kept fixed at $V_1=47.78$ MeV. $V_2$ is given in MeV, and all wave numbers are in fm$^{-1}$ units.}
\label{sp56fe}
\begin{tabular}{ccccccc}
\hline\hline
$V_2$ & $k^R_m$(BENCZE) & $k^I_m$(BENCZE) & $k^R_m$(JOZSO) & $k^I_m$ (JOZSO) & ${\rm Re}(k_m^{corr})$ & ${\rm Im}(k_m^{corr})$\\
\hline\hline
$0$&$1.18982$&$-0.73207$&$1.15854$&$-0.61148$&$1.16310$&$-0.65090$\\
\hline
$-50$&$0.62716$&$-0.41298$&$0.62731$&$-0.41300$ &$0.62716 $&$ -0.41297$\\
$-50$&$1.56608$&$-0.62284$&$1.57202$&$-0.62082$&$1.56615$&$-0.62244$\\
$-50$&$2.26765$&$-0.83134$&$2.43836$&$-0.75586$&$2.42596$&$-0.78478$\\
\hline
$-100$&$0.94090$&$-0.21067$&$0.94088$&$-0.21071$&$ 0.94090$&$-0.21067$\\
$-100$&$1.70535$&$-0.47048$&$1.70194$& $-0.46605$& $ 1.70482$&$ -0.47034 $\\
$-100$&$2.37343$&$-0.69445$&$2.40312$& $-0.63339$& $2.39838$&$-0.64374$\\
\hline
$-200$&$1.24873$&$-0.07323$&$ 1.24873$& $-0.07323$&$1.24873$&$-0.07323$\\
$-200$&$1.89306$&$-0.29392$&$ 1.89311$& $-0.29429$&$ 1.89306$&$-0.29392 $\\
$-200$&$2.51849$&$-0.52297$&$ 2.53211$& $-0.49802$&$2.52623$&$-0.51614$\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
The first line in Table \ref{sp56fe} shows one pole in the WS and CWS potentials
without barrier. There is no narrow resonance in this case. The resonance
in the CWS potential was selected as the pole closest to that of the WS potential shown in the first row. The correction of $k$ brings the pole in the CWS potential
somewhat closer to the one in the WS potential, but the agreement remains poor.
The surface term with $V_2<0$ produces
a potential barrier. As the height of the barrier increases, the differences between the wave numbers calculated by the programs BENCZE and
JOZSO become much smaller than the difference for $V_2=0$ in the first row of the table.
The first and second lines at each $V_2$ value show narrow resonances, for which the CGWS results and their corrected values approach the GWS results in the second and third columns of the table reasonably accurately.
The agreement for the third resonances is less convincing; therefore we
suspect that the given form of the first-order perturbation correction in Eq. (\ref{epert}) does not work so well for
broader resonances.
For the rest of the resonances not shown in the table, the differences of the
$k$ values in the potentials without cut and with cut-off are considerably larger.
The correction term in Eq. (\ref{epert}) is clearly unable to correct the increasingly large differences originating from the cut-off of the potential for broader resonances.
Three groups of poles can be observed in Fig. \ref{gwscgwsv2200}. Group A consists of the first three poles lying closest to
the real axis. They are the narrow resonances listed in the last three
rows, in the fourth and fifth columns, of Table \ref{sp56fe}.
The positions of these narrow resonances approximate well the corresponding poles of the GWS potential calculated by the code BENCZE. The correction improves the agreement even further. The general behaviour of this
group of resonances is similar to that of the resonances in the GWS potential, namely the distance of the resonances in group A is determined by the
radius $R$ of the CGWS potential. They are caused by the reflection of the radial wave function at the nuclear radius $R$. Group A is followed by another group (B), in which
the distance of the poles is much smaller and is determined by the cut-off radius $R_{max}=12$ fm. The poles in group B
are due to the reflection of the wave function at the cut-off radius.
The third group of poles
(C) is composed of the five remaining broadest resonances in Fig. \ref{gwscgwsv2200}.
We suspect that they are most probably due to double reflections at $R$ and $R_{max}$. Their distance is determined approximately
by the difference $R_{max}-R$, as one can see in Table \ref{range56fe}, where we varied the value of $R_{max}$. The largest difference between $R_{max}-R$ and ${\cal R}$ occurs for $R_{max}=21$ fm.
We suspect that it is due to the combined effect of the accumulation of numerical errors over the largest distance and the interaction
with the closest poles in the other groups.
\begin{table}[h!]
\begin{center}
\caption{Comparison of the ranges calculated from the best-fit first-order polynomial in Eq. (\ref{range}) for the groups of resonances B, A, and C of the $^{56}$Fe+$n$ system
in the CGWS potential with parameters: $V_1=47.78$ MeV, $V_2=-200$ MeV, $R=4.92$ fm, $a=0.6$ fm. Ranges ${\cal R}_A$, ${\cal R}_B$, ${\cal R}_C$ are
the ranges corresponding to the groups A, B and C.}
\label{range56fe}
\begin{tabular}{cccccc}
\hline\hline
$R_{max}$&${\cal R}_B~~~$&$R$&${\cal R}_A~~~$& $R_{max}-R$&${\cal R}_C$\\
\hline\hline
15 & 14.9999~~~~~& 4.92&4.8356~~~~~& 10.08 & 10.1911 \\
18 & 18.0360~~~~~& 4.92&4.7776~~~~~& 13.08 & 13.3406 \\
21 & 21.0829~~~~~& 4.92&4.7864~~~~~& 16.08 & 17.4930 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
The perturbation corrections in Eq. (\ref{kcorr}) are not large enough for the poles in groups B and C to bring these poles to the vicinity of the poles calculated by the code BENCZE. Therefore the correction works well only for the narrow resonances in group A.
The resonances in group A can be considered physical resonances, since their
positions depend on the physical parameters of the GWS potential and are
practically independent of the cut-off radius. This (in)dependence of the positions is shown in Table \ref{sprmax56fe}. The position of the $m=1$ resonance does not change when the value of $R_{max}$ is increased from $12$ fm to $21$ fm. A similar increase of the $R_{max}$ value changes the positions
of $k_2$ and $k_3$ only in the last three decimal digits. So the dependence on the unphysical parameter can be neglected for this group of resonances.
\begin{table}[h!]
\begin{center}
\caption{Dependence of the complex $k_m$ wave numbers of the $l=0$ poles of the $S$-matrix on the value of $R_{max}$
for the three narrow resonances of the $^{56}$Fe+$n$ system in the CGWS potential with parameters: $V_1=47.78$ MeV, $V_2=-200$ MeV, $R=4.92$ fm, $a=0.6$ fm. }
\label{sprmax56fe}
\begin{tabular}{cccc}
\hline\hline
$m$& $R_{max}$& $k^R_m$& $k^I_m$\\
\hline\hline
$1$&$15.0$&$ 1.24873$& $-0.07323$\\
$1$&$18.0$&$ 1.24873$& $-0.07323$\\
$1$&$21.0$&$ 1.24873$& $-0.07323$\\
\hline
$2$&$15.0$&$ 1.89305$& $-0.29393$\\
$2$&$18.0$&$ 1.89306$& $-0.29392$\\
$2$&$21.0$&$ 1.89306$& $-0.29392$\\
\hline
$3$&$15.0$&$ 2.51152$& $-0.52311$\\
$3$&$18.0$&$ 2.51952$& $-0.52337$\\
$3$&$21.0$&$ 2.51840$& $-0.52283$\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
In Fig. \ref{gwscgwsv2200} the positions of the first three
resonances in group A lie on top of the resonances of the GWS potential. The
differences are small and cannot be seen on the scale of the figure.
The corrected eigenvalues of these three resonances also lie on top
of the resonances of the GWS potential, because the corrections are
small for these narrow resonances.
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{fig6.eps}
\caption{ $-F(k)$ on the domain
$k^R\in [-0.1,5]$ fm$^{-1}$ and $k^I\in [-1.2,0.0]$ fm$^{-1}$ calculated for a CGWS
potential with parameters $V_1=47.78$ MeV, $V_2=-200$ MeV, $a=0.6$ fm,
$R=4.92$ fm, $R_{max}=12$ fm for the $^{56}$Fe+n system.}
\label{absscws2}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{fig7.eps}
\caption{ Positions of the poles in GWS and in CGWS potentials with
parameters $V_1=47.78$ MeV and $V_2=-200$ MeV,
$R=4.92$ fm, $a=0.6$ fm and $R_{max}=12$ fm without and with correction. }
\label{gwscgwsv2200}
\end{figure}
For the resonances in group A the pole position is
practically independent of the cut-off radius
(Table \ref{sprmax56fe}), and the results of the programs
BENCZE and JOZSO are almost the same, as one can see in Table \ref{sp56fe}.
The number of the narrow resonances increases with the height of the barrier.
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{fig8.eps}
\caption{ Radial shapes of the $m=2$ normalized resonant wave function with $k=(1.893,-0.294)$ fm$^{-1}$
in CGWS potential with parameters: $V_1=47.78$ MeV and $V_2=-200$ MeV,
$R=4.92$ fm, $a=0.6$ fm, $R_{max}=12$ fm. }
\label{wfn3}
\end{figure}
Fig. \ref{wfn3} shows the normalized radial wave function for one of the narrow resonances in the CGWS potential with parameters $V_1=47.78$ MeV, $V_2=-200$ MeV,
$R=4.92$ fm, $a=0.6$ fm, and $R_{max}=12$ fm. It is the $m=2$ resonance in Table
\ref{sprmax56fe}.
In the figure the real and the imaginary parts of the
resonant wave function oscillate around the zero axis. In the external region
($r>R$) the magnitudes of the oscillations increase as $r$ increases. In the internal region of the potential, the number of zeros of the
real part of the wave function is $3$. If we make a similar plot for the $m=1$ resonant wave function the number of zeros of the
real part of the wave function is $2$. This is in agreement with the finding
in Ref. \cite{[Ba15]} in which the 0s and 1s states of the $^{56}$Fe are bound states.
For narrow resonances the number of zeros of the real parts can be considered as the generalizations of the node number $n$
of the bound states. The imaginary part of the wave function also oscillates around the zero axis with exponentially growing amplitudes in the external region. Here the phase of the oscillations is
shifted wrt that of the real part.
For this narrow resonance the magnitude of the internal oscillations of the imaginary part is smaller
than that of the real part, and its crossing points of the zero axis are not far from
the crossing points of the real part. We observed that these features are general characteristics of the Gamow wave functions of narrow resonances.
In certain respects the narrow resonances resemble bound states, and they are sometimes
called {\it quasi-stationary states}.
For the quasi-stationary states the corrections due to the cut-off of the potential tail are small, and their $k$ eigenvalues are not very sensitive to the
cut-off distance, as we demonstrated in Table \ref{sprmax56fe}.
As the imaginary part of $k$ becomes larger, the tail region moves inward and the imaginary part of the wave function competes with the real part. At the same time the value of the
cut-off distance becomes more important and the $k$ pole position of the CGWS potential
moves away from the $k$ of the GWS potential, as one can see in Fig. \ref{gwscgwsv2200}. The strong dependence on the cut-off distance was noticed
earlier by two of us in Ref. \cite{[Sa08]} for resonances with $l=5$.
\section{Summary}
We have investigated the effect of cutting the tails of the WS and GWS potentials
on the resonant poles of the $S$-matrix by calculating the pole positions.
For zero angular momentum the radial Schroedinger equation is solved by using the analytical formula of Gy. Bencze \cite{[Be66]}. The positions of the poles
were calculated with high precision by the program BENCZE written in
Wolfram Mathematica.
The cut-off versions of the same potentials, the CWS and CGWS potentials were
studied by solving the radial equation numerically with the FORTRAN program JOZSO \cite{[No15]}. Pole distributions were studied by exploring the landscape
of the function $-F(k)$, which has maxima at the same
$k$ positions as the poles of the $S(k)$-matrix. The maxima of the function supplied us with
starting values for finding the accurate positions of the poles.
Presenting the landscapes of the $|S(k)|$ and $-F(k)$ functions provides an
excellent tool to demonstrate the differences between the pole distributions of the
potentials without cut and with a cut at finite distance.
In the absence of the surface term of the GWS potential we can study the normal WS and CWS potentials and the effect of cutting their tails.
The pole structures of the WS and the CWS potentials are basically different, as was pointed out by R. Newton \cite{[Ne82]}. The WS and the GWS potentials produce one
group of poles (mountain) similar to that of the square well potential with the same
nuclear radius. From the slopes of the first-order polynomials best fitted to the moduli of the $k$ eigenvalues a range $\cal{R}$ can be deduced that is very close to the nuclear radius parameter $R$. For these potentials the reflection
of the radial wave function takes place at $R$.
The CWS and the CGWS potentials produce two or three groups of poles (mountains). They are more visible if in the CGWS potential a repulsive surface term is present and we have a potential barrier producing a few narrow resonances.
The positions of the narrow resonances are similar to those of the GWS potential,
they form a first group of poles in which the distance of the poles are similar to that of the square well potential with radius $R$.
The appearance of the second and sometimes the third group of poles is apparently due to the cut of the CWS potential. The poles in the two different mountains are labelled
separately by indexing the order of their $k^R$ values. The moduli $|k_m|$ in both mountains can be approximated by first-order polynomials
as functions of the $m$ values. From the slopes $a_1$ of these polynomials one can
derive a distance at which the reflection of the solution takes place. We assume that the appearance of the poles for broad resonances is due to reflections at these distances. Reflections take place when the derivative of the potential
has a sudden change. In a square well this evidently happens at its radius $R$.
For GWS potentials for small values of the diffuseness the derivatives are still large and the wave functions are reflected at the nuclear radius $R$.
For a CGWS potential the potential has a jump at the cut-off radius $R_{max}$
where the derivative does not exist. This causes a reflection of the wave function
at this distance, and the distance of the poles in the first mountain of the
diffuse well is influenced by this reflection. The $\cal{R}$ distances calculated from
the slope of the best fit linear polynomials therefore are very close to the
value of $R_{max}$.
For small diffuseness and not very large energy the radial wave function
can be reflected at $R$ and at $R_{max}$ as well, and oscillations between these
two distances produce the third group of poles (mountain).
The effect of cutting off the tail of the potential is too large for the second
and the third groups of poles, and the first-order perturbation correction is
unable to compensate for the cut.
The numerical calculations were performed for two different nuclear systems: the heavy-target case of the $^{208}$Pb+n system and the lighter
$^{56}$Fe+n system studied recently by Bayrak and Aciksoz \cite{[Ba15]}.
We observed similar results for the
light and the heavy target systems as far as the reflections of the wave function
are concerned, and we conclude that our results are typical
for the nuclear potentials studied.
\section*{Acknowledgement}
Authors are grateful to L. Gr. Ixaru and A. T. Kruppa for valuable discussions.
This work was supported by the Hungarian Scientific Research -- OTKA Fund No. K112962.
\bibliographystyle{elsarticle-num}
\section{Nanofiber Modes}
\label{sec: fiber eigenmodes}
It is useful to quantize both the displacement field $\op{\ufield}(\vec{\positionsymbol})$ and the electric field $\op{\Efield}(\vec{\positionsymbol})$ in terms of eigenmodes of the nanofiber, modeled as a cylinder of radius $R$.
\subsection{Flexural Phonons}
\label{sec: nanofiber phonons}
The thermal vibrations of a nanofiber can be described using linear elasticity theory. The dynamical quantity of linear elasticity theory is the displacement field $\vec{\ufieldcomp}(\pos,t)$ that indicates how far and in which direction each point $\vec{\positionsymbol}$ of a body is displaced from its equilibrium position~\cite{S_achenbach_wave_1973,gurtin_linear_1984}. Canonical quantization of linear elasticity theory in terms of a set of vibrational eigenmodes can be performed in the usual way~\cite{S_cohen-tannoudji_photons_2004}. The resulting displacement field operator in the Schrödinger picture is
\begin{equation}
\op{\ufield}(\vec{\positionsymbol})=\sum_\mu U_\mu \spare{\vec{\wmodecomp}_\mu(\vec{\positionsymbol}) \op{b}_\mu +\text{H.c.}}.
\end{equation}
Here, $\vec{\wmodecomp}_\mu(\vec{\positionsymbol})$ are the mode fields associated with the phonon modes, $\mu$ is a multi-index suitable for labeling the modes, $\op{b}_\mu$ are the corresponding bosonic ladder operators, and $\text{H.c.}$ indicates the Hermitian conjugate. The mode density is $U_\mu \equiv \sqrt{\hbar/2\rho\omega_\mu}$, where $\rho$ denotes the mass density of the nanofiber and $\omega_\mu$ are the phonon frequencies. The phonon Hamiltonian takes the form $\op{H}_\text{phn} = \hbar \sum_\mu \omega_\mu \hconj{\op{b}}_\mu \op{b}_\mu$. The eigenmodes of a nanofiber (modeled as a homogeneous and isotropic cylinder) are well known~\cite{S_achenbach_wave_1973,meeker_guided_1964,armenakas_free_1969}. In cylindrical coordinates $(r,\varphi,z)$, the mode fields factorize into partial waves
\begin{align}\label{eqn-S:displacement radial partial wave decomposition}
\vec{\wmodecomp}_\mu(\vec{\positionsymbol}) &= \frac{\boldsymbol{\wmodercomp}_\mu(r)}{2\pi} e^{i (j\varphi + p z)}
&\text{or}&&
\vec{\wmodecomp}_\mu(\vec{\positionsymbol}) &= \frac{\boldsymbol{\wmodercomp}_\mu(r)}{\sqrt{\pi L}}e^{i j \varphi} \sin(p z),
\end{align}
where $p$ is the propagation constant along the nanofiber axis and $j\in\mathds{Z}$. The left expression corresponds to the mode fields of an infinitely long nanofiber. It models traveling phonons on a long nanofiber that are not reflected at its tapered ends. In this case, $p\in\mathds{R}$. The right expression models the standing waves of a finite nanofiber (a phonon cavity) located at $z \in [0,L]$ with fixed ends that reflect phonons. Such a cavity supports phonons with $p = \pi m/L$, where $m=1,2,\dots$. Transitions between motional states in a nanofiber-based two-color trap are dominated by flexural phonon modes with $j=\pm1$~\cite{S_hummer_heating_2019}. The continuum of traveling flexural phonons can be labeled by $\mu = (p,j)$, and the discrete set of cavity modes by $\mu = (m,j)$. Flexural phonons with $\si{\kilo\hertz}$ to $\si{\mega\hertz}$ frequencies that are relevant here have wavelengths much larger than the radius of the nanofiber. In this limit, the radial partial waves $\boldsymbol{\wmodercomp}_\mu(\vec{\positionsymbol})$ have vector components
\begin{align}\label{eqn-S:displacement F modes low-frequency limit}
\mathcal{W}^r_{\mu}(r) &= \frac{1}{R}, &
\mathcal{W}^\varphi_{\mu}(r) &= \frac{i j}{R}, &
\mathcal{W}^z_{\mu}(r) &= -\frac{ip}{R}r,
\end{align}
which are normalized according to $\int_0^R r|\boldsymbol{\wmodercomp}_\mu(r)|^2 \ddr=1$ to leading order in $p R$. These flexural modes form a single band in the $(\omega_\mu,p)$ plane with a dispersion relation $\omega_\mu = v R p^2/2$ that is quadratic in the low frequency limit~\cite{S_hummer_heating_2019}. In the case of a flexural mode cavity, the phonon spectrum is hence $\omega_\mu = m^2 \pi^2 R\sqrt{E/\rho} / (2 L^2) $. The effective speed of sound is $v = \sqrt{E/\rho}$, where $E$ is the Young modulus of the nanofiber material. For fused silica, $E = \SI{72.6}{\giga\pascal}$ and $\rho = \SI{2.20}{\gram/\centi\meter^3}$ such that $v=\SI{5.74e3}{\meter/\second}$~\cite{S_bass_handbook_2001}.
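As a quick numerical check of the cavity spectrum, the first few flexural mode frequencies of a fused-silica nanofiber can be evaluated as follows; the nanofiber length $L=\SI{1}{\milli\meter}$ used here is an illustrative assumption.
\begin{verbatim}
# Flexural cavity spectrum omega_m = m^2 pi^2 R sqrt(E/rho) / (2 L^2) for a
# fused-silica nanofiber; L = 1 mm is an illustrative assumption.
import numpy as np

E, rho = 72.6e9, 2.20e3       # Pa, kg/m^3
R, L = 305e-9, 1e-3           # m
v = np.sqrt(E / rho)          # effective speed of sound, ~5.74e3 m/s
m = np.arange(1, 11)
omega = m**2 * np.pi**2 * R * v / (2 * L**2)
print(omega / (2 * np.pi))    # mode frequencies in Hz (kHz range here)
\end{verbatim}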
\subsection{Nanofiber-guided Photons}
\label{sec: nanofiber photons}
In the paper, we propose to perform fluorescence spectroscopy of surface-bound states using a nanofiber-guided probe laser. We need to describe nanofiber-guided photons to model this spectroscopy scheme. The electromagnetic field in the presence of the nanofiber can be quantized based on the photonic eigenmodes of the system~\cite{S_glauber_quantum_1991,cohen-tannoudji_photons_2004}. The photonic eigenmodes of a nanofiber (modeled as a cylindrical step-index waveguide with relative electric permittivity $\permitt$) are well known~\cite{S_marcuse_light_1982,snyder_optical_2012}. The resulting Hamiltonian is $\op{H}_\text{pht} = \hbar \sum_\eta \omega_\eta \hconj{\op{a}}_\eta \op{a}_\eta$, where $\eta$ is a multi-index suitable for labeling the eigenmodes, $\omega_\eta$ is the frequency of each eigenmode, and $\op{a}_\eta$ is the corresponding bosonic ladder operator. The electric field operator in the Schrödinger picture is
\begin{equation}
\op{\Efield}(\vec{\positionsymbol}) = \sum_\eta E_\eta \spare{\op{a}_\eta \,\vec{\emodecomp}_\eta(\vec{\positionsymbol}) + \text{H.c.} },
\end{equation}
where we define the mode density $E_\eta \equiv \sqrt{ \hbar \permitt_0 \omega_\eta/2}$ and $\permitt_0$ is the vacuum permittivity. The electric mode fields are of the form
\begin{equation}
\vec{\emodecomp}_\eta(\vec{\positionsymbol}) = \frac{\boldsymbol{\emodercomp}_\eta(r)}{2\pi} e^{i(m\varphi + k z)},
\end{equation}
with propagation constant $k \in \mathds{R}$ and azimuthal order $m\in\mathds{Z}$. These modes are quasi-circular polarized~\cite{S_le_kien_field_2004}. We are interested in photons in the single-mode regime of the nanofiber, that is, with frequencies below the cutoff frequency $\photfreq_c \simeq 2.405 \, c / ( R \sqrt{\permitt-1})$~\cite{S_marcuse_light_1982}. Here, $c$ is the vacuum speed of light. For fused silica, $\permitt = 2.1$~\cite{S_bass_handbook_2001} such that the silica nanofiber with a radius of $R = \SI{305}{\nano\meter}$ considered in our case study has a cutoff frequency corresponding to a free-space wavelength of $\lambda_c = \SI{835.7}{\nano\meter}$. In the single-mode regime, only modes on the $\text{HE}_{11}$ band with azimuthal order $m = \pm1$ are nanofiber-guided. For the setup considered in the paper, the fluorescence spectrum is independent of the sign of $m$ and we may choose $m=1$ without loss of generality. In this case, the radial partial waves of the electric mode field have vector components
\begin{align}
r &< R \text{ :} & r &> R \text{ :} \nonumber \\
\mathcal{E}^{r}_{\eta}(r) &= \frac{i A_\eta}{a^2} \spare{ k a \besselJ{1}'(a r) - \frac{\omega_\eta}{c} \beta \frac{\besselJ{1}(a r)}{r} }, &
\mathcal{E}^{r}_{\eta}(r) &= - \alpha \frac{i A_\eta}{b^{2}}\spare{ k b \besselK{1}'(b r) - \beta \frac{\omega_\eta}{c} \frac{\besselK{1}(b r)}{r} }, \nonumber \\
\mathcal{E}^{\varphi}_{\eta}(r) &= \frac{A_\eta}{a^2} \spare{ \beta \frac{\omega_\eta}{c} a \besselJ{1}'(a r) - k \frac{\besselJ{1}(a r)}{r} }, &
\mathcal{E}^{\varphi}_{\eta}(r) &= - \alpha \frac{A_\eta}{b^{2} }\spare{ \beta \frac{\omega_\eta}{c} b \besselK{1}'(b r) - k \frac{\besselK{1}(b r)}{r} }, \\
\mathcal{E}^{z}_{\eta}(r) &= A_\eta\besselJ{1}(a r), &
\mathcal{E}^{z}_{\eta}(r) &= \alpha A_\eta \besselK{1}(b r), \nonumber
\end{align}
where $a \equiv \sqrt{\omega_\eta^2/v^2 - k^2}$, $b \equiv \sqrt{k^2 - \omega_\eta^2/c^2 }$ and $v =c/\sqrt{\permitt}$ is the speed of light inside the nanofiber. The functions $\besselJ{m}$ and $\besselK{m}$ are Bessel functions and modified Bessel functions, respectively. The prime indicates the first derivative. We define
\begin{align}
\alpha &\equiv \frac{\besselJ{1}(a R)}{\besselK{1}(b R)}, &
\beta &\equiv \frac{(\permitt-1)}{R c} \frac{ k \omega_\eta}{a b} \frac{\besselJ{1}(a R) \besselK{1}(b R)}{ a \besselJ{1}(a R) \besselK{1}'(b R) + b \besselJ{1}'(a R) \besselK{1}(b R) }.
\end{align}
The amplitude $A_\eta$ is determined by the normalization condition $\permitt_0^2\int_0^\infty r\permitt(r) \, \cconj{\boldsymbol{\emodercomp}}_\eta(r) \cdot \boldsymbol{\emodercomp}_\eta(r)\, \ddr = 1$. Here, $\permitt(r)$ is the relative permittivity as a function of the radial position. The dispersion relation $\omega_\eta(k)$ is implicitly given by the frequency equation
\begin{equation}\label{eqn-S:guided mode frequency equation}
\spare{ a \besselJ{1}(a R) \besselK{1}'(b R) + b \besselK{1}(b R) \besselJ{1}'(a R) }
\spare{ a \besselJ{1}(a R) \besselK{1}'(b R) + \permitt b \besselK{1}(b R) \besselJ{1}'(a R) }
= \spare{ \frac{(\permitt-1)}{R c} \frac{k \omega_\eta}{ a b}\besselJ{1}(a R) \besselK{1}(b R) }^2.
\end{equation}
The frequency equation has only one zero $\omega_\eta(k)$ in the single-mode regime.
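Numerically, the dispersion relation can be obtained by bracketing this zero for each propagation constant $k$, since guided modes satisfy $\omega_\eta/c < k < \omega_\eta/v$, that is, $\omega_\eta \in (v k, c k)$. A possible sketch in Python, with an illustrative value of $k$ and a coarse scan to bracket the root before bisection, is:
\begin{verbatim}
# Sketch: solve the frequency equation for the HE11 frequency at fixed kz.
# The chosen kz is illustrative; a coarse scan brackets the root for brentq.
import numpy as np
from scipy.special import jv, jvp, kv, kvp
from scipy.optimize import brentq

c, eps, R = 2.998e8, 2.1, 305e-9
vl = c / np.sqrt(eps)                      # speed of light inside the fiber

def freq_eq(w, kz):
    a = np.sqrt(w**2 / vl**2 - kz**2)
    b = np.sqrt(kz**2 - w**2 / c**2)
    lhs = ((a*jv(1, a*R)*kvp(1, b*R) + b*kv(1, b*R)*jvp(1, a*R))
           * (a*jv(1, a*R)*kvp(1, b*R) + eps*b*kv(1, b*R)*jvp(1, a*R)))
    rhs = ((eps - 1)/(R*c) * kz*w/(a*b) * jv(1, a*R)*kv(1, b*R))**2
    return lhs - rhs

kz = 1.2 * 2*np.pi / 852e-9                # illustrative propagation constant
ws = np.linspace(1.001*vl*kz, 0.999*c*kz, 2000)
fs = np.array([freq_eq(w, kz) for w in ws])
i = np.flatnonzero(np.sign(fs[:-1]) != np.sign(fs[1:]))[0]
w_guided = brentq(freq_eq, ws[i], ws[i+1], args=(kz,))
\end{verbatim}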
\section{Linewidths for Adsorbed and Surface-Bound Atoms}
\label{sec: linewidths surface-bound and adsorbed}
We provide details on the calculation of the motional states of adsorbed and surface-bound atoms shown in Fig.~2 of the paper. We also summarize how to calculate the linewidths of transition between the motional states due to the interaction with flexural cavity phonons. These linewidths are used to plot the spectra in Fig.~3 of the paper.
\subsection{Motional States}
The potentials considered in the paper are cylindrically symmetric, that is, $V(\vec{\positionsymbol}) = V(r)$. The motional states $\ket{\xi} \equiv \ket{\nu,l,q}$ of an atom in these potentials, therefore, have wavefunctions of the form
\begin{equation}\label{eqn-S:adsorbed eigenstates}
\Psi_\xi(\vec{\positionsymbol}) = \braket{\vec{\positionsymbol}|\nu,l,q} = \frac{\psi_{l\nu}(r)}{2\pi\sqrt{r}} e^{i(l\varphi + q z)}.
\end{equation}
The Hamiltonian describing the motion of the atom is $\op{H}_\text{ext} = \hbar \sum_\xi \omega_\xi \ketbra{\xi}{\xi}$. The corresponding frequencies are $ \omega_\xi = \omega_{l\nu} + \hbarq^2/2M$ for an atom of mass $M$. Here, the quantum numbers $\nu\in\mathds{N}$, $l\in\mathds{Z}$, and $q\in\mathds{R}$ label the excitations in radial, azimuthal, and axial direction, respectively. The radial partial waves $\psi_{l\nu}(r)$ are obtained by solving the one-dimensional Schrödinger equation with the effective potential $V_l(r)$~\cite{S_messiah_quantum_2014}:
\begin{align}\label{eqn-S:effective Schroedinger equation}
&\spare{-\frac{\hbar^2}{2M}\partial_r^2 + V_l(r)} \psi_{l\nu}(r) = \hbar\omega_{l\nu}\psi_{l\nu}(r), &
V_l(r) &\equiv V(r) + \frac{\hbar^2}{2M r^2} \pare{l^2 - \frac{1}{4}}.
\end{align}
The second term in the above potential is an angular momentum barrier. It can be neglected for azimuthal orders $l$ up to a few hundred for the adsorbed cesium atoms in the weakly bound states considered in this paper. In that case, there is no coupling between the atomic motion in the radial and azimuthal directions and $\psi_{l\nu}(r) = \psi_{\nu}(r)$. \Cref{eqn-S:effective Schroedinger equation} then reduces to the Schrödinger equation
\begin{equation}\label{eqn-S:Schroedinger equation}
\spare{ - \frac{\hbar^2}{2M} \partial_r^2 + V(r)} \psi_\nu(r) = \hbar \omega_\nu \psi_\nu(r)
\end{equation}
that we solve to calculate the states shown in the paper.
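For completeness, we note that \cref{eqn-S:Schroedinger equation} can be solved with a standard finite-difference discretization. In the following minimal sketch, the potential function and the grid parameters are placeholders rather than the actual surface-induced potential used in the paper.
\begin{verbatim}
# Minimal finite-difference solver for the radial Schroedinger equation.
# V is a placeholder potential (callable of r); grid parameters and the
# number of returned states are illustrative. hb2m stands for hbar^2/(2M).
import numpy as np
from scipy.linalg import eigh_tridiagonal

def radial_states(V, r_min, r_max, hb2m, n=4000, n_states=10):
    r = np.linspace(r_min, r_max, n)
    h = r[1] - r[0]
    diag = 2 * hb2m / h**2 + V(r)           # kinetic + potential (main diagonal)
    off = -hb2m / h**2 * np.ones(n - 1)     # kinetic term (off-diagonals)
    w, psi = eigh_tridiagonal(diag, off, select="i",
                              select_range=(0, n_states - 1))
    return r, w, psi / np.sqrt(h)           # energies and normalized psi_nu(r)
\end{verbatim}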
The perfect cylindrical symmetry of the nanofiber is an idealization. In practice, the surface of a nanofiber is not perfectly smooth and may feature local imperfections. Moreover, the nanofiber cross section is not perfectly circular and varies both in size and exact shape over the length of the nanofiber. In consequence, the bound motional states of the surface-induced potential do not exhibit perfect cylindrical symmetry, either. However, the interaction between phonons and photons on the one side and atoms on the other is not significantly altered by such imperfections. In particular, they do not significantly affect the atoms' motion in the radial direction, especially for the weakly bound states considered in our manuscript, where the probability amplitude close to the nanofiber surface is low. Since the spectroscopy scheme we propose is only sensitive to the radial motion of the atoms and does not rely on a particular symmetry of the atom states, deviations from a perfect cylindrical symmetry in the atoms' motional state will not influence the predicted spectra in Fig.~3 of the paper.
\subsection{Atom-Phonon Interaction}
The coupling between atom motion and phonons arises because the phonons displace the potential, $V[\op\atrpos - \op{\ufieldcomp}^r(R,\op\atphipos,\op\atzpos)]$. The interaction Hamiltonian is obtained by expanding the shifted potential to second order around $\vec{\ufieldcomp}=\boldsymbol{0}$ and can be cast into the form $\op{H}_{\motional\text{-}\vibrational} = \op{H}^{(1)}_{\motional\text{-}\vibrational} + \op{H}^{(2)}_{\motional\text{-}\vibrational}$ where
\begin{equation}
\begin{split}
\op{H}^{(1)}_{\motional\text{-}\vibrational} &= \hbar \sum_{\mu {\atindex'} \xi} \pare{ g_{\mu {\atindex'} \xi} \op{b}_\mu \ketbra{{\atindex'}}{\xi} + \text{H.c.} }, \\
\op{H}^{(2)}_{\motional\text{-}\vibrational} &= \hbar \sum_{{\phonindex'} \mu {\atindex'} \xi} \pare{ \frac{K_{{\phonindex'} \mu {\atindex'} \xi}}{2} \op{b}_{\phonindex'} \op{b}_\mu \ketbra{{\atindex'}}{\xi} + \text{H.c.} }
+ \hbar \sum_{{\phonindex'} \mu {\atindex'} \xi} G_{{\phonindex'} \mu {\atindex'} \xi} \hconj\op{b}_{\phonindex'} \op{b}_\mu \ketbra{{\atindex'}}{\xi}.
\end{split}
\end{equation}
The coupling rates between atoms and cavity phonons are, at first order,
\begin{align}
\label{eqn-S:def 1-phonon coupling constant cavity phonons}
g_{ \mu {\atindex'} \xi} &= g_{ \mu {\atn'} \nu} \delta_{(l+j),{\atl'}} ~ \frac{1}{2}\cpare{ \delta\spare{{\atk'} - (q+p)} - \delta\spare{ {\atk'} - (q-p) } }, &
g_{ \mu {\atn'} \nu} &= \frac{ i }{\sqrt{2\pi}} \frac{\mathcal{A}^{(1)}_{{\atn'}\nu}}{\sqrt{\hbar\rho \omega_\mu L}R},
\end{align}
and, at second order,
\begin{align}
\label{eqn-S:def 2-phonon coupling constant cavity phonons}
K_{{\phonindex'} \mu {\atindex'} \xi} &= G_{{\phonindex'} \mu {\atn'} \nu} \delta_{{\atl'},(l+j+{\phonl'})} [\delta], &
G_{{\phonindex'} \mu {\atindex'} \xi} &= G_{{\phonindex'} \mu {\atn'} \nu} \delta_{({\atl'}+{\phonl'}),(l+j)} [\delta], &
G_{{\phonindex'} \mu {\atn'} \nu} &= \frac{1}{2\pi} \frac{\mathcal{A}^{(2)}_{{\atn'}\nu}}{\rho\sqrt{\omega_{\phonindex'} \omega_\mu} L R^2},
\end{align}
\begin{equation}
[\delta] \equiv \frac{1}{4} \big\{ \delta\spare{({\atk'}+{\phonk'}) - (q+p)} + \delta\spare{({\atk'}-{\phonk'}) - (q-p)} - \delta\spare{({\atk'}-{\phonk'}) - (q+p)} - \delta\spare{({\atk'}+{\phonk'}) - (q-p)} \big\}.
\end{equation}
The wavefunction overlaps $\mathcal{A}^{(1)}_{{\atn'}\nu}$ and $\mathcal{A}^{(2)}_{{\atn'}\nu}$ are defined in the paper.
We focus on the radial motion of the atoms. Since phonons carry little momentum, we neglect changes in the momentum of the atomic motion in the axial and azimuthal direction. To infer how the presence of thermal phonons affects the radial atomic motion, let us first select two states $\ket{\nu_1}$ and $\ket{\nu_2}=\ket{\nu_1+1}$ that are neighbors in frequency. For the time being, we neglect all other atom states. The dynamics of this simplified model can be described by
\begin{align}\label{eqn-S:minimal model}
\op{H}_\text{ext} &= \hbar \frac{\freq_0}{2} {\op{\sigma}^z}, &
\op{H}_{\motional\text{-}\vibrational} &= \hbar \sum_\mu \spare{ \pare{ g_\mu {\op{\sigma}^+} - \cconjg_\mu {\op{\sigma}^-} } \op{b}_\mu+ \text{H.c.} } + \hbar \sum_\mu G_\mu ( \hconj\op{b}_\mu\op{b}_\mu - \bar{n}_\mu) {\op{\sigma}^z}.
\end{align}
We use Pauli matrices ${\op{\sigma}^+} = \ketbra{\nu_2}{\nu_1}$, ${\op{\sigma}^-} = \ketbra{\nu_1}{\nu_2}$, and ${\op{\sigma}^z} = \ketbra{\nu_2}{\nu_2} - \ketbra{\nu_1}{\nu_1}$. The coupling rates are $g_\mu \equiv g_{\mu\nu_2\nu_1}$ and $G_\mu \equiv (G_{\mu\phonindex\nu_2\nu_2} - G_{\mu\phonindex\nu_1\nu_1} )/2 \in \mathds{R}$. In deriving \cref{eqn-S:minimal model}, we have redefined $\op{H}_\text{ext}$ to include a correction $\Delta \freq_0 \equiv \sum_\mu G_\mu \bar{n}_\mu$ to the transition frequency $\freq_0 \equiv \omega_{\nu_2}-\omega_{\nu_1} + \Delta \freq_0$. The correction arises from $\op{H}_{\motional\text{-}\vibrational}^{(2)}$ due to the finite thermal population of the phonon modes. It can be neglected for the parameters used in the case study in the paper. We also neglect nonresonant terms (i.e., terms that are not energy conserving) in $\op{H}_{\motional\text{-}\vibrational}^{(2)}$, since all phonon scattering, absorption, and emission processes are dominated by resonant terms. At this point, there are still terms proportional to ${\op{\sigma}^+}$ and ${\op{\sigma}^-}$ remaining, which lead to transitions between the two atom states through two-phonon absorption, emission, or inelastic scattering at first order in $\op{H}_{\motional\text{-}\vibrational}^{(2)}$. These processes contribute to the broadening of the resonance when the transition $\nu_1 \leftrightarrow \nu_2$ is externally driven. However, the coupling constants are much smaller than for the elastic two-phonon scattering processes generated by the terms $\hconj\op{b}_\mu \op{b}_\mu {\op{\sigma}^z}$, which cause dephasing. As a result, the linewidth induced by $\op{H}_{\motional\text{-}\vibrational}^{(2)}$ is dominated by dephasing due to the resonant ${\op{\sigma}^z}$ terms retained in \cref{eqn-S:minimal model}.
\subsection{Effective Evolution of the Atomic Motion}
In practice, the phonon modes have a thermal population and nonzero decay rates $\kappa_\mu$ due to internal losses and their interaction with the environment (e.g., through the absorption of guided laser light and the clamping of the nanofiber). We model the dynamics of the joint atom-phonon state operator $\op{\rho}$ using the Liouvillian $\mathcal{L} = \mathcal{L}_\text{ext} + \mathcal{L}_\text{phn} + \mathcal{L}_{\motional\text{-}\vibrational}$, where
\begin{align}
\mathcal{L}_\text{ext}\op{\rho} &= -\frac{i}{\hbar}[\op{H}_\text{ext}, \op{\rho} ], &
\mathcal{L}_\text{phn}\op{\rho} &= -\frac{i}{\hbar}[\op{H}_\text{phn}, \op{\rho} ] + \sum_\mu \kappa_\mu (\bar{n}_\mu+1) \mathcal{D}_{\op{b}_\mu}\op{\rho} + \kappa_\mu \bar{n}_\mu \mathcal{D}_{\hconj\op{b}_\mu}\op{\rho}, &
\mathcal{L}_{\motional\text{-}\vibrational}\op{\rho} &= -\frac{i}{\hbar}[\op{H}_{\motional\text{-}\vibrational}, \op{\rho} ],
\end{align}
and the dissipator is $\mathcal{D}_{\op{b}_\mu}\op{\rho} = \op{b}_\mu \op{\rho} \hconj\op{b}_\mu - \{\hconj\op{b}_\mu \op{b}_\mu,\op{\rho}\}/2$. The steady state of the phonon bath according to $\mathcal{L}_\text{phn}$ is the thermal state $\densopbath_\text{ss} = e^{-\op{H}_\text{phn}/(k_B T)} / \tr[e^{-\op{H}_\text{phn}/(k_B T)}]$ with thermal populations $\bar{n}_\mu$ determined by the Bose-Einstein distribution. Here, $T$ is the temperature of the nanofiber. Since the transition frequency is large compared to the coupling rates, $\freq_0 \gg |g_\mu|, |G_\mu|$, it is possible to obtain an effective description of the atom motion alone. If we further assume $\kappa_\mu \gg |g_\mu|, |G_\mu|$, we can use adiabatic elimination to trace out the phonon modes~\cite{S_cirac_laser_1992,S_breuer_theory_2002}. The dynamics of the state operator $\op{\densopsyscomp}$ of the atomic motion is then described by the Liouville–von Neumann equation $\partial_t \op{\densopsyscomp}(t) = \mathcal{L}_\text{eff}\op{\densopsyscomp}(t)$ with the effective Liouvillian
\begin{align}\label{eqn-S:effective Liouvillian}
\mathcal{L}_\text{eff}\op{\densopsyscomp} &= - \frac{i}{\hbar} \com{\op{H}_\text{eff} }{ \op{\densopsyscomp} } + \Gamma^- \mathcal{D}_{{\op{\sigma}^-}}\op{\densopsyscomp} + \Gamma^+ \mathcal{D}_{{\op{\sigma}^+}}\op{\densopsyscomp} + \Gamma^z \mathcal{D}_{{\op{\sigma}^z}}\op{\densopsyscomp}, &
\op{H}_\text{eff} &= \hbar\frac{\freq_\effective}{2} {\op{\sigma}^z} .
\end{align}
Here, $\Gamma^+$ and $\Gamma^-$ are the phonon-induced depopulation rates of the states $\nu_1$ and $\nu_2$, respectively, and $\Gamma^z$ is the rate of phonon-induced dephasing between the two states:
\begin{align}
\label{eqn-S:TLS depopulation}
\Gamma^+ &= 2 \sum_\mu |g_\mu|^2 \Re \spare{ \bar{n}_\mu K_\mu^- + (\bar{n}_\mu+1) K_\mu^+ }, &
\Gamma^- &= 2 \sum_\mu |g_\mu|^2 \Re \spare{ (\bar{n}_\mu+1) K_\mu^- + \bar{n}_\mu K_\mu^+ }, \\
\label{eqn-S:TLS dephasing and correlator}
\Gamma^z &= 2 \sum_\mu \bar{n}_\mu (\bar{n}_\mu+1) \frac{G_\mu^2}{\kappa_\mu}, &
K_\mu^\pm &\equiv \frac{\kappa_\mu/2}{(\kappa_\mu/2)^2 + ( \freq_0 \pm \omega_\mu )^2} + i \frac{\freq_0 \pm \omega_\mu }{(\kappa_\mu/2)^2 + ( \freq_0 \pm \omega_\mu )^2}.
\end{align}
The transition frequency $\freq_\effective \equiv \freq_0 + \Delta_\Lamb$ is subject to the Lamb shift $\Delta_\Lamb \equiv \sum_\mu (2\bar{n}_\mu+1) |g_\mu|^2 \Im \spare{ K_\mu^- + K_\mu^+ }$, which can be neglected in our case study.
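As an illustration of how these expressions are evaluated in practice, the following Python sketch computes $\Gamma^\pm$, $\Gamma^z$, and the Lamb shift from a list of phonon-mode parameters. The mode frequencies and coupling rates below are hypothetical placeholders; the thermal populations follow from the temperature in the high-temperature limit.
\begin{verbatim}
# Sketch: evaluate the effective two-level rates of Eqs. (S TLS
# depopulation) and (S TLS dephasing and correlator) for a small set of
# phonon modes with placeholder frequencies and couplings.
import numpy as np
from scipy.constants import hbar, k as kB

T = 420.0                                  # nanofiber temperature (K)
w0 = 2 * np.pi * 327e3                     # transition frequency (rad/s)
omega = 2 * np.pi * np.array([0.2e6, 0.8e6, 1.8e6])  # mode freqs (rad/s)
kappa = omega / 100.0                      # decay rates for Q = 100
g = np.array([3e3, 1e3, 3e2])              # one-phonon couplings (rad/s)
G = np.array([5e-2, 1e-2, 3e-3])           # two-phonon couplings (rad/s)
nbar = kB * T / (hbar * omega)             # thermal populations (nbar >> 1)

def K(sign):
    d = (kappa / 2)**2 + (w0 + sign * omega)**2
    return (kappa / 2) / d + 1j * (w0 + sign * omega) / d

Km, Kp = K(-1), K(+1)
Gamma_plus  = 2 * np.sum(np.abs(g)**2 * (nbar * Km + (nbar + 1) * Kp).real)
Gamma_minus = 2 * np.sum(np.abs(g)**2 * ((nbar + 1) * Km + nbar * Kp).real)
Gamma_z     = 2 * np.sum(nbar * (nbar + 1) * G**2 / kappa)
lamb_shift  = np.sum((2 * nbar + 1) * np.abs(g)**2 * (Km + Kp).imag)
print(Gamma_plus, Gamma_minus, Gamma_z, lamb_shift)
\end{verbatim}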
\subsection{Linewidth of Transitions}
To determine the phonon-induced linewidth of the transition $\nu_1 \leftrightarrow \nu_2$, we can, for instance, add a driving term $\op{H}_d(t) = \hbar \Omega \spare{ {\op{\sigma}^-} e^{i \omega_d t} + \text{H.c.} }/2$ to \cref{eqn-S:effective Liouvillian}. In the limit of a driving that is weak compared to the influence of the bath, $\Omega \ll (\Gamma^\pm, \Gamma^z)$, the steady-state population of the state $\ket{\nu_2}$ is
\begin{equation}
\braket{\nu_2|\op{\densopsyscomp}_\text{ss}|\nu_2} \simeq \frac{\Omega^2}{2 (\Gamma^- + \Gamma^+)} \frac{\Gamma/2}{\Delta^2 + (\Gamma/2)^2} + \frac{\Gamma^+}{\Gamma^- + \Gamma^+},
\end{equation}
where $\Delta \equiv \omega_d - \freq_\effective$ is the detuning of the drive. The resonance in the population as a function of the detuning has a Lorentzian shape with linewidth (full width at half maximum) of
\begin{equation}\label{eqn-S:linewidth TLS}
\Gamma = \Gamma^- + \Gamma^+ + 4 \Gamma^z.
\end{equation}
The linewidth has two distinct contributions: $\Gamma^{(1)} \equiv \Gamma^- + \Gamma^+$ due to the depopulation of the two involved states, and $\Gamma^{(2)} \equiv 4 \Gamma^z$ due to the dephasing of the two states. By construction of the model \cref{eqn-S:minimal model}, we neglect depopulation induced by $\op{H}^{(2)}_{\motional\text{-}\vibrational}$ since it leads to a broadening that is smaller than $\Gamma^{(2)}$.
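The linewidth formula \cref{eqn-S:linewidth TLS} can be checked numerically by constructing the Liouvillian of the driven two-level system, computing its steady state as a function of the drive detuning, and reading off the full width at half maximum of the resonance. A minimal sketch with arbitrary test rates:
\begin{verbatim}
# Numerical check of the weak-drive lineshape: build the Liouvillian in
# the frame rotating at the drive frequency, find the steady state, and
# compare the extracted FWHM with Gamma^- + Gamma^+ + 4 Gamma^z.
import numpy as np

Gm, Gp, Gz, Omega = 1.0, 0.3, 2.0, 0.05    # test rates and weak drive

# Basis ordering (|nu_2>, |nu_1>)
sm = np.array([[0, 0], [1, 0]], complex)   # sigma^- = |nu_1><nu_2|
sp = sm.conj().T
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def dissipator(c, rate):
    # Row-major vectorization: vec(A rho B) = kron(A, B.T) vec(rho)
    cdc = c.conj().T @ c
    return rate * (np.kron(c, c.conj())
                   - 0.5 * (np.kron(cdc, I2) + np.kron(I2, cdc.T)))

def excited_population(Delta):
    H = -Delta / 2 * sz + Omega / 2 * (sp + sm)
    L = -1j * (np.kron(H, I2) - np.kron(I2, H.T))
    L += dissipator(sm, Gm) + dissipator(sp, Gp) + dissipator(sz, Gz)
    w, v = np.linalg.eig(L)
    rho = v[:, np.argmin(np.abs(w))].reshape(2, 2)  # steady state
    rho /= np.trace(rho)
    return rho[0, 0].real                  # population of |nu_2>

deltas = np.linspace(-30, 30, 2001)
pop = np.array([excited_population(d) for d in deltas])
half = deltas[pop >= pop.min() + (pop.max() - pop.min()) / 2]
print("numerical FWHM:", half[-1] - half[0])
print("Gm + Gp + 4 Gz:", Gm + Gp + 4 * Gz)
\end{verbatim}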
It is straightforward to generalize to transitions between any of the radial motional states $\ket{\nu}$. In analogy to \cref{eqn-S:linewidth TLS}, we model the linewidth of the transition $\nu \leftrightarrow {\atn'}$ between any two states as
\begin{align}
\Gamma_{{\atn'}\nu} &\equiv \Gamma^{(1)}_{{\atn'}\nu} + \Gamma^{(2)}_{{\atn'}\nu}.
\end{align}
Here,
\begin{align}\label{eqn-S:dephasing broadening general}
\Gamma^{(2)}_{{\atn'}\nu} &\equiv 8 \sum_\mu \bar{n}_\mu^2 \frac{ G^2_{\mu{\atn'}\nu} }{\kappa_\mu}, &
G_{\mu{\atn'}\nu} &\equiv \frac{1}{4\pi} \frac{\mathcal{A}^{(2)}_{{\atn'}\atnb}-\mathcal{A}^{(2)}_{\nu\atn}}{ \rho \omega_\mu L R^2}
\end{align}
in analogy to \cref{eqn-S:TLS dephasing and correlator}. Note that $G_{\mu{\atn'}\nu} \in \mathds{R}$. The rate $\Gamma^{(2)}_{{\atn'}\nu}$ is dominated by the fundamental cavity mode $\mu_1$, since the coupling rates drop as $\omega_\mu^{-2}$ with the phonon frequency. Hence,
\begin{equation}\label{eqn-S:dephasing broadening simplified}
\Gamma^{(2)}_{{\atn'}\nu}
\simeq 16 \bar{n}^2 \frac{G^2_{\mu_1{\atn'}\nu} }{\phonfreq_1} Q
= \frac{32}{\pi^{12}} \frac{k_B^2 T^2 L^8 Q}{\hbar^2 R^9} \sqrt{\frac{\rho}{E^5}} \spare{\mathcal{A}^{(2)}_{{\atn'}\atnb}-\mathcal{A}^{(2)}_{\nu\atn}}^2,
\end{equation}
where $\bar{n}$ is the thermal population, $\phonfreq_1$ the frequency, and $Q = \phonfreq_1/\kappa_1$ the quality factor of the fundamental cavity mode.
The broadening $\Gamma^{(1)}_{{\atn'}\nu}$ is the sum of the depopulation rates of both states. In general, transitions to any other state contribute to the depopulation rates. In the limit of large thermal populations $\bar{n}_\mu \gg 1$, we obtain
\begin{align}\label{eqn-S:depopulation broadening general}
\Gamma^{(1)}_{{\atn'}\nu} &\equiv \Gamma^d_{{\atn'}} + \Gamma^d_{\nu}, &
\Gamma^d_{\nu} & \equiv 2 \sum_{\nu''\neq\nu} \sum_\mu \bar{n}_\mu |g_{\mu\nu''\nu}|^2 \Re \spare{ K^-_{\mu\nu''\nu} + K^+_{\mu\nu''\nu} }, &
\Re K_{\mu{\atn'}\nu}^\pm &\equiv \frac{\kappa_\mu/2}{(\kappa_\mu/2)^2 + ( |\atfreq_{\atnb\atn}| \pm \omega_\mu )^2}
\end{align}
in analogy to \cref{eqn-S:TLS depopulation,eqn-S:TLS dephasing and correlator}. Here, $\atfreq_{\atnb\atn} \equiv \omega_{\atn'} - \omega_\nu$ is the transition frequency and $g_{\mu{\atn'}\nu}$ is defined in \cref{eqn-S:def 1-phonon coupling constant cavity phonons}. The state overlaps $\mathcal{A}^{(1)}_{{\atn'}\nu}$ quickly decay with increasing distance
$|{\atn'}-\nu|$. As a result, it is often sufficient to include transitions to the states $\nu''=\nu\pm1$ closest in frequency when calculating $\Gamma^d_{\nu}$. If the cavity is sufficiently small such that the fundamental cavity mode has a frequency $\phonfreq_1$ larger than the relevant transition frequencies, $\Gamma^{(1)}_{{\atn'}\nu}$ is dominated by the fundamental mode and we can approximate
\begin{align}\label{eqn-S:depopulation broadening small cavity}
\Gamma^{(1)}_{{\atn'}\nu} &\simeq \Gamma^-_{\nu} + \Gamma^+_{\nu} + \Gamma^-_{{\atn'}} + \Gamma^+_{{\atn'}}, &
\Gamma^{\pm}_{\nu} &
\equiv 4 \bar{n} \frac{|g_{\mu_1(\nu\pm1)\nu}|^2}{\phonfreq_1} \frac{1}{Q}
= \frac{16}{\pi^7} \frac{k_B T L^5}{\hbar^2 R^5 Q } \sqrt{\frac{\rho}{E^3}} | \mathcal{A}^{(1)}_{(\nu\pm1)\nu} |^2,
\end{align}
which corresponds to Eq.~(9) in the paper. We use \cref{eqn-S:depopulation broadening general,eqn-S:dephasing broadening simplified} to calculate the linewidths that appear in Fig.~3 of the paper, with relevant contributions only stemming from $\Gamma^{(2)}_{{\atn'}\nu}$.
In the heterodyne fluorescence spectroscopy scheme we propose in the paper, transitions between all motional states are driven simultaneously. Transitions between states $\nu$ and ${\atn'}=\nu+1$ that are nearest neighbors in frequency are most likely and lead to resonances of the largest power, see Fig.~3 in the paper. Therefore, it is useful to focus on nearest-neighbor transitions to determine for which parameters the motional quantization can be resolved. For nearest-neighbor transitions, \cref{eqn-S:depopulation broadening general} simplifies to
\begin{equation}
\label{eqn-S:depopulation broadening nearest neighbor}
\Gamma^{(1)}_{(\nu+1)\nu} \simeq 16 \sum_\mu \bar{n}_\mu |g_{\mu(\nu+1)\nu}|^2 \Re \spare{ K_{\mu(\nu+1)\nu}^- + K_{\mu(\nu+1)\nu}^+ }.
\end{equation}
In deriving \cref{eqn-S:depopulation broadening nearest neighbor}, we approximate the upward and downward depopulation rates of each state as equal. In this case, \cref{eqn-S:depopulation broadening small cavity} further simplifies to
\begin{equation}
\label{eqn-S:depopulation broadening nearest neighbor small cavity}
\Gamma^{(1)}_{(\nu+1)\nu}
\simeq 16 \bar{n} \frac{|g_{\mu_1{\atn'}\nu}|^2}{\phonfreq_1} \frac{1}{Q}
= \frac{64}{\pi^7} \frac{k_B T L^5}{\hbar^2 R^5 Q } \sqrt{\frac{\rho}{E^3}} | \mathcal{A}^{(1)}_{{\atn'}\nu} |^2.
\end{equation}
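To make the parameter scalings explicit, the sketch below evaluates \cref{eqn-S:dephasing broadening simplified,eqn-S:depopulation broadening nearest neighbor small cavity} as a function of the cavity length. The material constants are typical (assumed) values for fused silica, and the overlaps $\mathcal{A}^{(1)}$ and $\mathcal{A}^{(2)}$ are placeholders that must in practice be computed from the motional wavefunctions as defined in the paper.
\begin{verbatim}
# Sketch: scaling of the two linewidth contributions with cavity length
# L. Material constants (fused silica), fiber radius, and the overlaps
# A1, A2 are assumed placeholder values, not the case-study parameters.
import numpy as np
from scipy.constants import hbar, k as kB

rho, E = 2200.0, 73e9           # silica density (kg/m^3), Young's modulus (Pa)
R, Q, T = 250e-9, 100.0, 420.0  # fiber radius, quality factor, temperature
A1, A2 = 1e-3, 1e-3             # placeholder overlaps (units per definition)

L = np.linspace(1e-6, 10e-6, 50)
gamma2 = 32 / np.pi**12 * kB**2 * T**2 * L**8 * Q / (hbar**2 * R**9) \
         * np.sqrt(rho / E**5) * A2**2
gamma1 = 64 / np.pi**7 * kB * T * L**5 / (hbar**2 * R**5 * Q) \
         * np.sqrt(rho / E**3) * A1**2

# Gamma^(2) grows as L^8 while Gamma^(1) grows as L^5, so dephasing
# dominates for long cavities; shorter fibers (or lower T) narrow the lines.
print(gamma2[-1] / gamma2[0], 10.0**8)   # confirm the L^8 scaling
\end{verbatim}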
In \cref{fig-S: depopulation vs dephasing broadening}, we plot the contributions $\Gamma^{(1)}_{{\atn'}\nu}$ and $\Gamma^{(2)}_{{\atn'}\nu}$ to the linewidth as a function of the cavity length $L$ using \cref{eqn-S:depopulation broadening nearest neighbor,eqn-S:depopulation broadening nearest neighbor small cavity}. We select the transition between the states $\nu=261$ and ${\atn'}=262$ shown in Fig.~2b of the paper. Below the horizontal dashed line, the linewidth $\Gamma_{{\atn'}\nu}$ is smaller than the separation $\Delta \omega$ to the next nearest-neighbor transition. In the regime $\Gamma_{{\atn'}\nu}/\Delta \omega \ll 1$, transitions between motional states can be resolved. This regime can be realized either by choosing a sufficiently small cavity, or by working at sufficiently low nanofiber temperatures. For the parameters chosen in \cref{fig-S: depopulation vs dephasing broadening}, the contribution $\Gamma^{(1)}_{{\atn'}\nu}$ can be neglected compared to $\Gamma^{(2)}_{{\atn'}\nu}$. Note that, for simplicity, we assume a constant quality factor $Q = \omega_\mu/\kappa_\mu = 100$ for all modes (in particular the fundamental mode decisive for the linewidth). This assumption cannot hold for arbitrarily large cavities: It is to be expected that the quality factor is reduced for modes with longer wavelengths, which in turn lowers $\Gamma^{(2)}_{{\atn'}\nu}$ compared to a simple extrapolation of \cref{fig-S: depopulation vs dephasing broadening}.
The ideal length $L$ balances the absolute strength of the spectroscopy signal against its signal-to-noise ratio. Our analysis predicts that shorter nanofibers lead to a better signal-to-noise ratio; see \cref{fig-S: depopulation vs dephasing broadening}. However, the number of atoms close to the nanofiber is proportional to the nanofiber length. Shorter nanofibers will therefore reduce the absolute signal strength and require, for instance, longer measurement times. The length of $\SI{5}{\micro\meter}$ chosen in our case study represents the longest nanofiber compatible with resolving weakly bound atoms, assuming that the nanofiber is heated to a temperature of $\SI{420}{\kelvin}$ by the transmitted laser beam of power $P_r=\SI{1}{\milli\watt}$~\cite{S_wuttke_thermalization_2013}. Achieving lower nanofiber temperatures is difficult since the thermal coupling of the nanofiber to its environment is very low \cite{S_wuttke_thermalization_2013}, but would allow working with longer nanofibers.
\begin{figure}
\centering
\includegraphics[width=264.43122pt]{figureS1.pdf}
\caption{Contributions $\Gamma^{(1)}_{{\atn'}\nu}$ and $\Gamma^{(2)}_{{\atn'}\nu}$ to the transition linewidth as a function of the cavity length $L$ and for two different nanofiber temperatures. As an example, we select the transition between the states $\nu=261 \leftrightarrow {\atn'}=262$ shown in Fig.~2b of the paper. The transition frequency is $\atfreq_{\atnb\atn} = 2\pi \times \SI{327}{\kilo\hertz}$. The separation to the neighboring transition $\nu=262 \leftrightarrow {\atn'}=263$ is $\Delta \omega = 2\pi \times \SI{39}{\kilo\hertz}$. We assume a quality factor of $\omega_\mu/\kappa_\mu=100$ for all phonon modes. The solid lines represent $\Gamma^{(2)}_{{\atn'}\nu}$, calculated from \cref{eqn-S:dephasing broadening simplified}. The dashed-dotted lines represent $\Gamma^{(1)}_{{\atn'}\nu}$, calculated from \cref{eqn-S:depopulation broadening nearest neighbor}. We also plot the asymptote \cref{eqn-S:depopulation broadening nearest neighbor small cavity} for the limit $\phonfreq_1 \gg \atfreq_{\atnb\atn}$. The resonances visible in $\Gamma^{(1)}_{{\atn'}\nu}$ occur whenever a cavity mode is resonant with the transition. The star indicates the parameters we use to plot the spectra in Fig.~3 of the paper. Below the horizontal dashed line, $\Gamma_{{\atn'}\nu}/\Delta \omega < 1$, indicating that transitions between motional states can be resolved.}
\label{fig-S: depopulation vs dephasing broadening}
\end{figure}
\section{Linewidths for Optically Trapped Atoms}
\label{sec: linewidths optical trap}
We derive the phonon-induced depopulation rate of radial motional states of atoms that are trapped in a two-color trap and interact with the traveling flexural phonons of a long nanofiber. This model is able to explain the heating rates observed in existing nanofiber-based atom trap setups~\cite{S_hummer_heating_2019}. We calculate the depopulation rates using the numerical methods also applied to the adsorbed and surface-bound states. We use these results to verify our numerical calculations by comparing them with analytical results obtained in the limit of a harmonic trap.
\begin{figure}
\centering
\includegraphics[width=225.3267pt]{figureS2.pdf}
\caption{Radial states and their phonon-induced linewidths of a cesium atom in a nanofiber-based two-color optical trap. The states are obtained by solving \cref{eqn-S:Schroedinger equation}. We neglect the coupling between the motion in radial, azimuthal, and axial direction. On the left-hand side, we plot the corresponding potential $V$ (yellow), the spectrum $\omega_\nu/2\pi$ of motional states (dark blue), and two examples of the atom wavefunction (red) in arbitrary units. The gray area at $r-R<0$ marks the position of the nanofiber. On the right-hand side, we plot the phonon-induced linewidths $\Gamma_\nu$ of the motional states, assuming a temperature of $T=\SI{600}{\kelvin}$.}
\label{fig-S: optical trap linewidths}
\end{figure}
Fig.~1 of the paper shows a typical two-color trap potential. It is realized by launching two counterpropagating beams with a free-space wavelength of $\SI{1064}{\nano\meter}$ (red detuned with respect to the cesium $D_2$ line) and a combined power of $2\times\SI{2}{\milli\watt}$ into the nanofiber, as well as a running-wave light field with a wavelength of $\SI{840}{\nano\meter}$ (blue detuned) and a power of $\SI{4.5}{\milli\watt}$. All beams are linearly polarized, with a $\pi/2$ angle between the polarization planes of the blue- and red-detuned light fields. All other parameters are as in the case study presented in the paper. The trap minima are located in the polarization plane of the red-detuned light field. Close to the ground state of the trap, the radial motion of the atom decouples from its motion in the axial and azimuthal direction.
The radial motional states $\ket{\nu}$ can be obtained by solving \cref{eqn-S:Schroedinger equation}. We plot two examples of the corresponding wavefunctions in \cref{fig-S: optical trap linewidths}. To leading order in the phonon degrees of freedom, these states couple to flexural phonons through the interaction Hamiltonian
\begin{align}
\op{H}_{\motional\text{-}\vibrational} &= \hbar \sum_{\mu {\atn'} \nu} \spare{ g_{\mu {\atn'} \nu} \op{b}_\mu \ket{{\atn'}}\bra{\nu} + \text{H.c.} }, &
g_{ \mu {\atn'} \nu} &= -\frac{1}{2\sqrt{2}\pi} \frac{\mathcal{A}^{(1)}_{{\atn'}\nu}}{\sqrt{\hbar\rho \omega_\mu}R}.
\end{align}
The resulting depopulation rates can be calculated at first order in perturbation theory:
\begin{align}\label{eqn-S:linewidths optical trap}
\Gamma^d_\nu &= \frac{1}{\sqrt{2}\pi} \frac{k_B T}{\hbar^2\sqrt{R^5 \sqrt{E \rho^3}}} \sum_{{\atn'} \neq \nu} \frac{|\mathcal{A}^{(1)}_{{\atn'}\nu}|^2}{\sqrt{|\atfreq_{\atnb\atn}|^5}}.
\end{align}
In deriving \cref{eqn-S:linewidths optical trap}, we assume a high thermal occupation $\bar{n}_\mu \gg 1$. We plot the depopulation rates for each state on the right-hand side of \cref{fig-S: optical trap linewidths}.
The potential is approximately harmonic for states close to the ground state of the optical trap at $\atpos_0=(\atrpos_0,\atphipos_0,\atzpos_0)$. The atom Hamiltonian can then be written as $\op{H}_\text{ext} = \sum_i \hbar \trapfreq_{ i} \hconj\hat{a}_i \hat{a}_i$ where we introduce bosonic creation and annihilation operators $\hconj\hat{a}_i$ and $\hat{a}_i$ for the harmonic motion of the atom in direction $i=r,\varphi,z$. The trap frequencies are $\trapfreq_{ i} = \sqrt{\partial_i^2 V_0(\atpos_0)/M}$. The interaction between the phonons and the atomic motion is of the form
\begin{equation}\label{eqn-S:interaction Hamiltonian optical trap}
\op{H}_{\motional\text{-}\vibrational} \simeq \sum_{\mu i} \hbar (\hat{a}_i + \hconj\hat{a}_i)( g_{\mu i} \op{b}_\mu + \cconjg_{\mu i} \hconj\op{b}_\mu ).
\end{equation}
The coupling constant between the radial motion and the flexural nanofiber phonons is~\cite{S_hummer_heating_2019}
\begin{equation}
g_{\mu r} = - \frac{1}{4\pi} \frac{1}{R} \sqrt{\frac{M \trapfreq_{\rpos}^3}{\rho \omega_\mu}} e^{+i (j \varphi_0 + p z_0)}.
\end{equation}
We again denote the radial motional states by $\ket{\nu}$, where $\nu\in\mathds{N}$ is the number of motional quanta. For each state $\ket{\nu}$, the spontaneous decay rate into phonons is $\Gamma_{0} \equiv 2\pi \sum_{\mu} \rho_{\mu} |g_{\mu r}|^2$. Here, the sum runs over the phonon modes $\mu$ resonant with the trap and $\rho_{\mu} = \left|d\omega_\mu/dp\right|^{-1}$ is the phonon density of states. The depopulation rate is $\Gamma_{\nu} \simeq (2 \nu + 1) \bar{n}_\mu \Gamma_{0}$ if the thermal occupation $\bar{n}_\mu \gg 1$ of the resonant phonon modes is large. Hence, we obtain the following analytical expression for the phonon-induced depopulation rates of the radial motional states of an atom close to the ground state $\ket{0}$ of a nanofiber-based optical trap:
\begin{equation}\label{eqn-S:linewidths harmonic trap}
\Gamma^d_\nu = \frac{(2 \nu + 1)}{2\sqrt{2}\pi} \frac{k_B T M}{\hbar} \sqrt{\frac{\trapfreq_{\rpos}}{R^5 \sqrt{E \rho^3}}}.
\end{equation}
We use this expression to verify our numerical methods: The numerical result $\Gamma^d_\nu = \SI{214}{\hertz}$ for the ground state $\ket{0}$ obtained using \cref{eqn-S:linewidths optical trap} and presented in \cref{fig-S: optical trap linewidths} agrees well with the rate $\Gamma^d_\nu = \SI{216}{\hertz}$ obtained analytically using \cref{eqn-S:linewidths harmonic trap}. These results are compatible with experimentally observed linewidths~\cite{S_reitz_coherence_2013,S_hummer_heating_2019,S_albrecht_fictitious_2016}.
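A minimal sketch evaluating \cref{eqn-S:linewidths harmonic trap} follows; the fiber radius and the radial trap frequency below are assumed illustrative values rather than the exact parameters of the case study, so the printed rate is indicative only.
\begin{verbatim}
# Sketch: analytic harmonic-trap depopulation rate, Eq. (S linewidths
# harmonic trap). Fiber radius and radial trap frequency are assumed
# illustrative values; material constants are for fused silica.
import numpy as np
from scipy.constants import hbar, k as kB, atomic_mass

M = 133 * atomic_mass          # cesium mass (kg)
T = 600.0                      # nanofiber temperature (K), as in Fig. S2
rho, E = 2200.0, 73e9          # fused silica (assumed)
R = 250e-9                     # fiber radius (assumed)
w_r = 2 * np.pi * 120e3        # radial trap frequency (assumed)

def gamma_d(nu):
    return (2 * nu + 1) / (2 * np.sqrt(2) * np.pi) * kB * T * M / hbar \
           * np.sqrt(w_r / (R**5 * np.sqrt(E * rho**3)))

print("ground-state depopulation rate (1/s):", gamma_d(0))
\end{verbatim}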
\section{Heterodyne Fluorescence Spectroscopy}
\label{sec: spectroscopy}
In the paper, we propose heterodyne fluorescence spectroscopy to probe the quantized spectrum of surface-bound motional states. Under suitable conditions~\cite{S_cirac_spectrum_1993}, the resulting signal reveals Raman-type transitions between different states of the radial center-of-mass motion of atoms in their electronic ground state. This approach has advantages compared to the transmission~\cite{S_sague_cold-atom_2007,patterson_spectral_2018} or fluorescence excitation spectroscopy~\cite{S_nayak_optical_2007} used in previous experimental studies of surface-induced effects on atoms near optical nanofibers. These latter techniques probe surface-induced shifts between the ground state and a given excited electronic state of the atoms. In consequence, their resolution is limited by the natural linewidth of the excited electronic state. For the Raman spectroscopy technique proposed here, the surface-induced shifts only change the overall strength of the signal but not its shape. In consequence, the Raman spectroscopy is not limited by the spectral width of the optically excited state and can provide access to the closely spaced energy levels shown in Fig.~2 of the paper.
To probe the radial motional states of atoms bound directly to the nanofiber surface, a circularly polarized probe laser with a frequency $\photfreq_p$ detuned from resonance with the atom is coupled into the fiber as a traveling wave. The resulting polarization in the nanofiber region is quasi-circularly polarized, with azimuthal order $m=\pm1$; see \cref{sec: nanofiber photons}. The probe beam has a wavelength in the single-mode regime of the nanofiber, such that probe photons are guided on the $\text{HE}_{11}$ band in the nanofiber region. We assume that the probe laser is sufficiently far detuned from resonance with transitions between the $6\text{S}$ and $6\text{P}$ manifolds of the cesium atom to treat the atom as an effective two-level system with ground state $\ket{g}$, excited state $\ket{e}$, and transition frequency $\omega_0$. Those photons that are scattered by the atom back into the nanofiber in the forward direction are recombined with the local oscillator on a beam splitter. The frequency of a scattered photon is changed from $\photfreq_p$ to $\photfreq_s$ when the atom simultaneously changes its motional state, leading to motional sidebands in the spectrum of the probe beam. The frequency difference between the probe beam and the local oscillator results in a beat that can be observed with a photodetector. The local oscillator is shifted by an offset $\Delta\photfreq$ such that the spectrum of the photocurrent contains sidebands at $\photfreq_p+\Delta\photfreq - \photfreq_s$. This shift separates the Stokes and anti-Stokes sidebands in the final signal and makes it possible to choose the optimal working point for the photodetector. Moreover, the polarization of the local oscillator is matched to the polarization of the probe beam. In consequence, the beat signal is predominantly due to photons that are scattered without changing their polarization. This specific choice of polarizations eliminates the contribution of changes of the atoms' azimuthal motional state to the spectroscopy signal, while the detection of light scattered in the forward direction minimizes the recoil in the axial motion of the atoms. As a result, the proposed spectroscopy configuration is only sensitive to the radial motion of the atoms, and the motional sidebands correspond to transitions $\nu \to {\atn'}$ between different radial motional states.
The atom-phonon-photon system can then be described by the Hamiltonian $\op{H}' = \op{H}_\text{ext} + \op{H}_\text{phn} + \op{H}_{\motional\text{-}\vibrational} + \op{H}_\text{int} + \op{H}_\text{pht} + \op{H}_{\electronic\text{-}\photonic}$ where the electronic structure of the atom is governed by $\op{H}_\text{int} = \hbar \omega_0 \ketbra{e}{e}$ and the atom interacts with the electric field through the dipole coupling $\op{H}_{\electronic\text{-}\photonic} = - \op{\dipolevec} \cdot \op{\Efield}(\op\atpos)$. Here, $\op{\dipolevec}$ is the dipole moment of the atom. This model assumes that the probe laser is weak such that multiple scattering of a photon by several atoms can be neglected, and it is sufficient to treat every atom individually. To predict the spectral distribution of the power $P(\omega)$ of the scattered light as a function of the frequency difference $\omega \equiv \photfreq_s - \photfreq_p$, one can calculate the steady state of the system in the presence of a coherently driven laser mode and a thermal nanofiber phonon bath using a master equation approach~\cite{S_lindberg_resonance_1986,S_cirac_spectrum_1993}. There is, however, an alternative way to approximate the resulting spectrum that is sufficient for the purpose of this paper: The motional states we consider have lifetimes corresponding to $2\pi/\Gamma_\nu \sim \SI{1}{\milli\second}$ that are much longer than the time of $2\pi/\atLinewidth_0 \sim \SI{100}{\nano\second}$ it takes a probe photon to be absorbed and re-emitted by the atom. Here, $\atLinewidth_0$ is the decay rate of states in the $6\text{P}$ manifold of cesium. We can, therefore, treat the motional states as eigenstates for the duration of the scattering process and neglect their coupling to the nanofiber phonons. This approximation allows us to employ scattering theory to obtain the position and relative weight of the motional sidebands in the spectrum $P(\omega)$. In a second step, we then account for the finite linewidth of transitions between the motional states.
We assume that the probe laser has a sufficiently low power such that the atom only interacts with one photon at a time. The relevant transitions are, therefore, between states where the atom starts in its internal ground state $\ket{g}$ and the motional state $\ket{\xi}=\ket{\nu,l,q}$, and ends again in its ground state but with a different motional state $\ket{{\atindex'}}=\ket{{\atn'},{\atl'},{\atk'}}$. Simultaneously, a photon is scattered from the mode ${\photindex_p}$ to the mode ${\photindex_s}$. Since we detect only scattered photons that are still nanofiber-guided, propagate in the same direction, and have the same polarization, the modes ${\photindex_p}$ and ${\photindex_s}$ can only differ in their frequencies. Conservation of angular momentum then implies that $\photl'=m$. Moreover, we can neglect the change in kinetic energy of the atom due to recoil along the nanofiber axis, so ${\atk'} \simeq q$. Energy conservation hence requires the detected photon to have a frequency shifted by $\omega = \omega_\nu - \omega_{\atn'}$. One can show using the resolvent formalism~\cite{S_cohen-tannoudji_atom-photon_1998} that the scattering matrix element for transitions $\nu \to {\atn'}$ while changing the frequency of the photon by $\omega$ is
\begin{equation}\label{eqn-S:scattering matrix element}
S_{{\atn'}\nu}(\omega) \simeq \frac{2\pi i}{\hbar^2} \delta\pare{ \omega_{{\atn'}\nu} - \omega } \frac{ (d/3)^2 \mathcal{F}_{{\atn'}\nu} }{ \Delta + i \atLinewidth_0/2}.
\end{equation}
Here, $\omega_{{\atn'}\nu} \equiv \omega_{{\atn'}} - \omega_\nu$ is the frequency difference between the initial and the final radial motional state of the atom and $\Delta\equiv \photfreq_p - \omega_0$ is the detuning of the probe laser from resonance with the atom. Note that $\omega_0$ and $\atLinewidth_0$ are modified by the presence of the nanofiber compared to a cesium atom in free space. They depend on the distance between the atom and the nanofiber and hence on the radial motional state $\nu$. In the following, we assume that differences in the transition frequency and decay rate can be neglected over the limited range of motional states we consider. The relative weights of the sidebands in \cref{eqn-S:scattering matrix element} are determined by the Franck-Condon factors
\begin{equation}\label{eqn-S:Franck Condon factors}
\mathcal{F}_{{\atn'}\nu} \equiv \frac{E_{\photindex_s} E_{\photindex_p}}{(2\pi)^2} \int_0^\infty \cconj\psi_{{\atn'}}(r) \, \cconj{\boldsymbol{\emodercomp}}_{\photindex_s}(r) \cdot \boldsymbol{\emodercomp}_{\photindex_p}(r) \, \psi_{\nu}(r) \mathop{}\!\mathrm{d} r.
\end{equation}
In deriving \cref{eqn-S:scattering matrix element}, we (i) exploit that the scattering of a probe photon by the atom is sufficiently fast such that the motional state of the atom does not decay in the meantime; (ii) assume that $|\Delta| \gg |\omega_{{\atn'}\nu}|$, which is the case for the weakly bound states considered in the paper if the probe laser is detuned by a few $\si{\nano\meter}$; (iii) assume that the detuning is sufficiently large for the response of the atom to be isotropic, that is, $\braket{g|\op{\dipolecomp}^i \op{\dipolecomp}^j|g}= (d/3)^2 \delta^{ij}$ where $d\in\mathds{R}$ and $\op{\dipolecomp}^i$ are components of the dipole moment $\op{\dipolevec}$ of the atom.
The power of the scattered light is $P(\omega) \propto \sum_{\nu,{\atn'}\neq\nu}n(\nu) |S_{{\atn'}\nu}(\omega)|^2$ where $n(\nu)$ is the number of atoms initially in the motional state $\nu$. In practice, the sharp sidebands in \cref{eqn-S:scattering matrix element} are broadened due to sources of noise and decoherence affecting either the laser or the motion of the atom. If the same laser source is used for both the probe beam and the reference beam, the frequency drift of the laser has no effect and the linewidths of the sidebands are determined by the decoherence of the motional atomic states. We can model the phonon-induced linewidths of the motional states by replacing the sharp sidebands in \cref{eqn-S:scattering matrix element} with Lorentzian resonances of the appropriate width $\Gamma_{{\atn'}\nu}$ and the same total power:
\begin{equation}
\delta\pare{ \omega_{{\atn'}\nu} - \omega } \to \frac{1}{\pi} \frac{ \Gamma_{{\atn'}\nu}/2 }{ \pare{ \omega_{{\atn'}\nu} - \omega }^2 + \pare{\Gamma_{{\atn'}\nu}/2}^2 }.
\end{equation}
The motional states considered in the paper fall into a frequency interval that is small compared to the depth of the potential $V(r)$. In consequence, we can approximate the occupation $n(\nu)$ of these states as constant. The power of the light scattered by the atom is therefore
\begin{equation}
P(\omega) \propto \sum_{\nu,{\atn'}\neq\nu} \frac{ \Gamma_{{\atn'}\nu}/2 }{ \pare{ \omega_{{\atn'}\nu} - \omega }^2 + \pare{\Gamma_{{\atn'}\nu}/2}^2 } \left| \mathcal{F}_{{\atn'}\nu} \right|^2
\end{equation}
as a function of the frequency difference between probe photons and detected photons.
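Given tables of transition frequencies $\omega_{{\atn'}\nu}$, linewidths $\Gamma_{{\atn'}\nu}$, and Franck-Condon weights $|\mathcal{F}_{{\atn'}\nu}|^2$, the spectrum is assembled as a simple sum of Lorentzians. A sketch with placeholder values (in practice, the entries follow from the motional wavefunctions and guided-mode profiles):
\begin{verbatim}
# Sketch: assemble the fluorescence spectrum P(omega) as a sum of
# Lorentzian sidebands, one per transition nu -> nu'. The transition
# frequencies, linewidths, and weights below are placeholders.
import numpy as np

w_t = 2 * np.pi * np.array([327e3, -327e3, 366e3, -366e3])  # rad/s
gam = 2 * np.pi * np.array([10e3, 10e3, 12e3, 12e3])        # rad/s
Fsq = np.array([1.0, 1.0, 0.8, 0.8])                        # |F|^2 weights

def P(omega):
    lor = (gam / 2) / ((w_t - omega[:, None])**2 + (gam / 2)**2)
    return np.sum(Fsq * lor, axis=1)

omega = 2 * np.pi * np.linspace(-600e3, 600e3, 4001)
spectrum = P(omega)   # arbitrary units; normalize to a reference sideband
\end{verbatim}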
\begin{figure}
\centering
\includegraphics[height=158.26pt]{figureS3a.pdf}
\hspace*{.6cm}
\includegraphics[height=158.26pt]{figureS3b.pdf}
\caption{Sidebands in the fluorescence spectrum of an atom bound to an optical nanofiber. Panel (a) shows the spectrum for atoms in a two-color trap. The sidebands are due to transitions between the radial motional states shown in \cref{fig-S: optical trap linewidths}. The motion in azimuthal and axial direction leads to additional sidebands that are not represented here. We neglect the coupling between the motion in radial, azimuthal, and axial direction. Panel (b) corresponds to Fig.~3b in the paper and shows a larger interval of the spectrum for adsorbed atoms. The indicated transitions involve the states $\nu=(253,249)$, which have frequencies $\omega_\nu = -2\pi \times (\num{8.9},\num{20})\,\si{\mega\hertz}$ and lie deeper than the states shown in Fig.~2a of the paper.}
\label{fig-S: fluorescence spectra}
\end{figure}
In Fig.~3 of the paper, we show fluorescence spectra for adsorbed atoms and atoms in the hybrid light- and surface-induced potential. In \csubref{fig-S: fluorescence spectra}{a}, we plot the spectrum due to transitions between the optically trapped states shown in \cref{fig-S: optical trap linewidths}. We use \cref{eqn-S:linewidths optical trap} to calculate the linewidths, assuming that the linewidths of atoms trapped in two-color traps around a long nanofiber are limited by depopulation. We further approximate the population of the motional states as equal. In practice, the spectrum features additional sidebands from the motion in axial and azimuthal direction since the two-color trap confines the atom in all three spatial directions. These sidebands are omitted in \cref{fig-S: fluorescence spectra}. We use the power $P_0$ of the sideband corresponding to transitions between the radial ground state $\nu=0$ and first excited state $\nu=1$ as a reference and plot all spectra in units of $P_0$.
\Csubref{fig-S: fluorescence spectra}{b} shows the fluorescence spectrum for adsorbed atoms in a larger frequency interval than in Fig.~3a in the paper, involving states with larger binding energies. The corresponding wave functions have a much smaller spatial extent, which results in smaller Franck-Condon factors. Atoms in these states are, therefore, much less likely to scatter a nanofiber-guided photon and are more difficult to probe. Moreover, transitions with larger frequencies can no longer be resolved due to their increasing linewidths. For these reasons, we focus on states with binding energies of a few $\si{\mega\hertz}$ and transition frequencies of a few hundred $\si{\kilo\hertz}$ in the paper.
\section{Introduction} \label{s:intros}
The Crab Nebula, the relic of a stellar explosion recorded by Chinese astronomers in 1054, has a special place in the history of astronomy.
It is our most frequently observed laboratory for high-energy astrophysics.
Located at a distance of $\approx2$ kpc, the system is energized by a pulsar of spindown luminosity $L_{plsr}\approx5\times 10^{38}$ erg/s and current spin period $P\approx34$~ms.
The history and general properties of the system are nicely summarized in the review by Hester (2008).
Optical and X-ray images (Hester et al.\ 1995; Weisskopf et al.\ 2000; Hester et al.\ 2002) of the inner nebula show features such as an inner ring, toroidal structure, knots, and two opposing jets originating from the pulsar --- these latter presumably aligned with its rotation axis and proper motion vector (Caraveo \& Mignani 1999; Ng \& Romani 2007; Kaplan et al.\ 2008 and references therein).
The ``inner-ring", prominent in X-rays, is commonly accepted as being the termination shock produced by the relativistic wind of particles accelerated by the pulsar.
Many of the optical and X-ray features brighten and fade and/or move over weeks or months (e.g., Hester et al.\ 1995; Hester et al.\ 2002).
The quiescent or average spectral energy distribution (SED) of the Crab Nebula has a characteristic two-humped form (see, e.g., Figure~\ref{f:sed}; Atoyan \& Aharonian 1996; Bucciantini, Arons, \& Amato 2011 and references therein).
The synchrotron spectrum extends from $\approx30$~MHz to $\approx 1.2\times 10^{22}$~Hz (50~${\rm Me\!V}$).
Most of the power is radiated in the near UV, at $\approx10$~${\rm e\!V}$, by $\approx$~${\rm Te\!V}$\ electrons, with an associated luminosity of $\approx 1.3 \times 10^{38}$ {\rm ergs/s} (Hester 2008).
This roughly matches the loss of rotational energy by the pulsar, which releases its energy electromagnetically, generating a current $\approx200$~TA and inducing an electro-motive force (EMF) $\approx50$~PV.
However, the nebula is currently varying on a few-year timescale (Wilson-Hodge et al.\ 2011).
At higher energies, inverse-Compton scattering has a luminosity of $\approx10^{36}$~{\rm erg/s}, peaking around 60~${\rm Ge\!V}$\ (Albert et al.\ 2008) and measured up to $\approx80$~${\rm Te\!V}$\ (e.g., Aharonian et al.\ 2004; Abdo et al.\ 2010).
Since 2007, the AGILE and {\sl Fermi}~ satellites have detected several $\gamma$-ray flares from the Crab Nebula (Tavani et al.\ 2011; Abdo et al.\ 2011; Striani et al.\ 2011a; Buehler et al.\ 2012) in the $0.1-1$~${\rm Ge\!V}$\ range.
The most dramatic flares exhibit variability on timescales as short as a few hours, although it is unclear whether they are distinct events or just the largest variations from a stationary power spectrum of fluctuations.
Prior to the 2011-April event, the only Crab $\gamma$-ray flare covered by a multi-wavelength observing program was the 2010-September flare, which triggered observations in radio, optical (using both ground-based telescopes and HST), and X-ray bands.
Despite the $\gamma$-ray brightness of the flares, there has been no evidence for correlated variations in radio (Lobanov, Horns, \& Muxlow 2011; this paper), near infrared (Kanbach, et al.\ 2010, this paper), optical (Caraveo et al.\ 2010), or X-ray bands (Evangelista et al.\ 2010; Shaposhnikov et al.\ 2010; Tennant et al.\ 2010; Ferrigno et al.\ 2010; Horns et al.\ 2010; Cusumano et al.\ 2011; Tennant et al.\ 2011; Tavani et al.\ 2011; Striani et al.\ 2011b; this paper).
Here we focus on the {\sl Fermi}-LAT results for the 2011-April flare (Buehler et al.\ 2012), which allow us to assess the source behavior in detail.
The source doubled its $\gamma$-ray flux within eight hours and reached a peak flux 30 times its average.
The isotropic luminosity increased to $\approx2\times10^{37}$~{\rm erg/s} in $\approx10$~hr and the spectrum peaked at $\approx400$~${\rm Me\!V}$.
Table~\ref{t:r1} gives the $\gamma$-ray powerlaw photon spectral index, the integrated photon flux above 100~${\rm Me\!V}$, and the photon spectral flux at 100~${\rm Me\!V}$, as measured during the 10-ks time intervals when X-ray data (\S\ref{s:xray_obs}) were taken.
Notification as to the level of flaring prompted us to trigger pre-approved Target of Opportunity observations with {\sl Chandra}~ and with the NRAO\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} Karl G.~Jansky Very Large Array (VLA).
We were also fortunate to obtain a Keck image in the near infrared, albeit not under ideal conditions.
Figure~\ref{f:r0} shows the {\sl Fermi}-LAT $\gamma$-ray counting rate as a function of time and also indicates the times of the {\sl Chandra}, Keck (\S\ref{s:Keck}) and some of the VLA (\S\ref{s:vla}) observations.
\section{X-ray Observations and Data Analysis} \label{s:xray_obs}
With the back-illuminated ACIS S3 CCD on the {\sl Chandra}~ X-ray Observatory approximately centered on the Crab pulsar, we obtained five observations (Table~\ref{t:r1}) during and somewhat after the 2011-April $\gamma$-ray flare.
For these observations, the spacecraft dithered with an amplitude set to $1\arcsec$.
Although standard processing typically produces an aspect solution better than $0.5\arcsec$, even this small uncertainty can introduce noticeable shifts when comparing different data sets.
Thus, we re-registered images for our analysis using the read-out streak and the pulsar as guides.
As each of these 5 images was placed at approximately the same CCD location, spatial non-uniformity in the ACIS response (e.g., due to contamination) does not introduce spurious temporal variability.
Owing to the Crab's high flux, the ACIS observations employed a special mode with 0.2-s frame time, which limits the CCD read-out to a $300\times 300$~ACIS-pixel ($\approx 150\arcsec\times 150\arcsec$) subarray.
Although each observation lasted about 10 ks, telemetry saturation reduced the effective integration time to approximately 1200 s per observation.
Despite the short frame time of the special ACIS mode, regions of high surface brightness suffer somewhat from pile-up effects.
We consider only data in the range 0.5--8.0~${\rm ke\!V}$\ because of severe interstellar absorption at low energies and declining flux at high energies.
Using these data, we then search for X-ray variations approximately contemporaneous with the 2011-April $\gamma$-ray flare.
\subsection{X-ray Image Analysis \label{ss:x_analysis}}
Figure~\ref{f:r1} shows an image of the number of counts per ACIS pixel, summed over the 5 observations.
For each observation, we re-binned a $120\times 120$~ACIS-pixel image centered on the pulsar into a $60\times 60$~array of $2\times 2$~ACIS pixels.
Each of these $I = 3600$ ``analysis pixels" is sufficiently large (about 1 square arcsec) to enclose most of the {\sl Chandra}~ point spread function anywhere in the field of view.
Note that we also performed an analysis using a circular bin of radius $1$~ACIS pixel on an oversampled grid of cadence $0.1\times$~ACIS pixel --- i.e., a circular top-hat smoothing of the events.
As each method gave similar results, we here report the results for the ``analysis pixel" binning, for which each pixel is statistically independent.
For each analysis pixel $i$, we calculate the mean count rate $r_i$ averaged over the $J = 5$ observations, weighted\footnote{\ $r_i = \sum_{j=1}^{J} \{r_{ij}/\sigma_{ij}^{2}\} / \sum_{j=1}^{J} \{1/\sigma_{ij}^{2}\}$} by the respective (counting-rate) statistical error $\sigma_{ij}$ for each analysis pixel and observation.
For evaluating statistical significance of temporal variations over the $J = 5$ observations, we compute\footnote{\ $\chi_{i}^{2} = \sum_{j=1}^{J} \{(r_{ij}-r_{i})^{2}/\sigma_{ij}^{2}\}$ and $S_{i} \equiv (\chi_{i}^{2}-\nu_{i})/\sqrt{2 \nu_{i}}$ where $\nu_{i} = (J-1)$.} $\chi_{i}^{2}$ along with a derived significance measure $S_{i}$.
For purposes of discussion, we also compute the appropriately weighted\footnote{\ $\sigma_{i}^{2} = J / \sum_{j=1}^{J} \{1/\sigma_{ij}^{2}\}$ and $s_{i}^2 = \frac{J}{(J-1)} \sum_{j=1}^{J} \{(r_{ij}-r_{i})^{2}/\sigma_{ij}^{2}\} / \sum_{j=1}^{J} \{1/\sigma_{ij}^{2}\} = \frac{\chi_{i}^{2}}{(J-1)} \sigma_{i}^{2}$} statistical error $\sigma_{i}$ and sample standard deviation $s_{i}$ for each pixel $i$.
As properly weighted, $\chi_{i}^{2} = (J-1)\, s_{i}^{2} / \sigma_{i}^{2}$.
While we have rigorously calculated $r_{i}$, $\chi_{i}^{2}$, $\sigma_{i}$, and $s_{i}$ for each pixel $i$ using appropriate weightings, we note that the weightings are nearly uniform as the effective duration of each of the $J=5$ observations was nearly the same---about 1200 s.
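The per-pixel statistics defined above amount to a few vectorized array operations. The following Python sketch illustrates them on placeholder data shaped like our five $60\times 60$ rate maps:
\begin{verbatim}
# Sketch of the per-pixel variability statistics: weighted mean rate,
# chi-square over the J observations, and the significance measure S_i.
# `rates` and `sigmas` stand in for the measured count rates and their
# statistical errors; here they are random placeholder arrays.
import numpy as np

J = 5
rng = np.random.default_rng(0)
rates = rng.gamma(2.0, 0.01, size=(J, 60, 60))   # stand-in rate maps
sigmas = 0.1 * np.sqrt(rates)                    # stand-in errors

w = 1.0 / sigmas**2
r_mean = np.sum(rates * w, axis=0) / np.sum(w, axis=0)  # weighted mean
chi2 = np.sum((rates - r_mean)**2 * w, axis=0)          # per-pixel chi^2
nu = J - 1
S = (chi2 - nu) / np.sqrt(2 * nu)                       # significance

sigma_i = np.sqrt(J / np.sum(w, axis=0))   # weighted statistical error
s_i = np.sqrt(chi2 / nu) * sigma_i         # sample standard deviation
\end{verbatim}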
\subsection{Variability of the X-ray images} \label{ss:x_variability}
The (counting) statistical error $\sigma_{i}$ is the primary noise term and thus governs the sensitivity for detecting temporal variations at the analysis-pixel (square-arcsec) scale.
Figure~\ref{f:r2} shows the image and the corresponding histogram of the distribution of $\sigma_{i}$, which ranges from 0.0025 to 0.024 ct/s per analysis pixel.
Based upon the $\chi^{2}$ probability distribution and the number of ``tries'' ($I=3600$ independent analysis pixels), a 99\%-confidence detection would require a $\chi_{i, 99\%}^{2} > 31.2$ on $(J-1) = 4$~degrees of freedom.
This corresponds to a sample standard deviation $s_{i, 99\%} > 2.80\, \sigma_{i}$, which ranges from 0.0071 to 0.068 ct/s over the field.
We do not here display the analogous image and histogram for the sample standard deviation $s_{i}$, which ranges from 0.0014 to 0.048 ct/s per analysis pixel.
Instead, Figure~\ref{f:r3} shows the image and histogram of the distribution of a calculated (\S \ref{ss:x_analysis}) significance measure $S_{i}$, related to $\chi_{i}^{2} = (J-1)\, s_{i}^{2} / \sigma_{i}^{2}$.
The statistically most significant variation has $\chi_{i}^{2} = 23.5$ on $\nu = (J-1) = 4$ degrees of freedom giving $S_{i} = 6.9$.
Such a fluctuation is expected statistically in at least 1 of 3600 pixels in 31\% of realizations.
Table~\ref{t:r2} gives the sample standard deviation and 99\%-confidence upper limit to the count-rate variation, for the analysis pixel with the statistically most significant X-ray variation.
While we detect no variations statistically significant at 99\% confidence, it is curious that the 3 most significant variations occur at locations on the inner ring.
Note that if a feature, such as one of the knots, does not change in intensity but moves from one analysis pixel to another, then our $\chi_{i}^{2}$ test would detect this as a variation.
We know that features in the inner ring of the Nebula do move and expect to detect some variability due to this motion.
However, as our 5 observations span only 14 days and $1\arcsec$ corresponds to 11.5 light days, only relativistic motion would be detectable.
Other effects, such as changes in the roll angle of the read-out streak, can also lead to spurious variability.
Indeed, this may play a role for the analysis pixel with the most significant variation, which lies adjacent to the average read-out streak (Figure~\ref{f:r1}).
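The significance bookkeeping of this subsection (the trials-corrected 99\%-confidence threshold and the chance probability of the largest observed fluctuation) can be reproduced with a few lines of Python:
\begin{verbatim}
# Sketch: 99%-confidence chi-square threshold after accounting for
# I = 3600 independent tries, and the chance probability of the largest
# observed per-pixel fluctuation (chi^2 = 23.5 on 4 degrees of freedom).
from scipy.stats import chi2

I, nu = 3600, 4
p_per_pixel = 1 - 0.99**(1.0 / I)       # per-pixel tail probability
print(chi2.isf(p_per_pixel, nu))        # ~31.2

p_obs = chi2.sf(23.5, nu)               # tail probability of chi^2 = 23.5
print(1 - (1 - p_obs)**I)               # ~0.31, i.e., 31% of realizations
\end{verbatim}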
\subsection{Limits to the X-ray flux} \label{ss:x_limits}
Thus far, we have described the X-ray data for each analysis pixel in units of ACIS count rate.
Neglecting for the moment pile-up effects, the photon spectral flux (or other related radiation quantity) is proportional to the count rate for an assumed spectral shape.
Consequently, any change in count rate corresponds to a proportionate change in photon spectral flux (for an assumed spectral shape).
Using the {\sl Chandra}~ PIMMS\footnote{http://asc.harvard.edu/toolkit/pimms.jsp} for the ACIS-S detector and an absorption column $N_{\rm H} = 3.1\times 10^{21}\ {\rm cm}^{-2}$,
we determine (ignoring pile-up) this constant of proportionality for an X-ray power-law photon index $\Gamma_{x} = \frac{2}{3}$, 1, and 2:
At $E_{x} = 1$~${\rm ke\!V}$, $N_{E}(E_{x})/r =$\ 0.99, 1.26, and 2.46 $\times 10^{-3}$~ph/(cm$^2$ s ${\rm ke\!V}$) per ct/s, respectively.
Correcting for pile-up has little effect in low-count-rate regions, but would raise these flux upper limits by $\approx10$\% or so for high-count-rate regions.
Table~\ref{t:r2} calculates the photon spectral flux $N_{E}(E_{x})$, the energy spectral flux $F_E(E_{x})$, and the indicative (isotropic) luminosity $E L_{E}(E_{x}) = 4 \pi D^{2} E F_{E}(E_{x})$ at $D = 2$~kpc, corresponding to the sample standard deviation and 99\%-confidence upper limit for the count-rate variation in the analysis pixel with the most significant X-ray variation.
Figure~\ref{f:r5} displays an image of the energy spectral flux $F_{E}(E_{x})$ at $E_{x} = 1$~${\rm ke\!V}$\ for $\Gamma_{x} = 1$, based upon the sample standard deviation $s_{i}$ of the count rate in each analysis pixel.
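The chain from count rate to flux and indicative luminosity is a simple unit conversion; the sketch below uses the $\Gamma_{x} = 1$ proportionality constant quoted above (no pile-up correction) for an example count-rate variation:
\begin{verbatim}
# Sketch: convert an ACIS count-rate variation into a photon spectral
# flux, energy spectral flux, and indicative isotropic luminosity at
# D = 2 kpc. The count rate below is an arbitrary example value.
import numpy as np

KPC_CM = 3.0857e21
KEV_ERG = 1.602e-9

rate = 0.01     # example count-rate variation (ct/s)
conv = 1.26e-3  # ph/(cm^2 s keV) per ct/s at 1 keV for Gamma_x = 1
E_x = 1.0       # keV

N_E = conv * rate                               # photon spectral flux
F_E = E_x * N_E                                 # energy spectral flux
D = 2.0 * KPC_CM
EL_E = 4 * np.pi * D**2 * E_x * F_E * KEV_ERG   # erg/s
print(N_E, F_E, EL_E)
\end{verbatim}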
\subsection{Constraints on the X-ray to $\gamma$-ray Spectral Index} \label{ss:x_index}
We now compare the X-ray data with the $\gamma$-ray data to quantify the implications of our lack of detection of time variations in the X-ray data.
Our approach compares a variability measure for the X-ray (1-${\rm ke\!V}$) photon spectral flux $\Delta N_{E}(E_{x})$ in each analysis pixel with the analogous variability measure for the $\gamma$-ray (100-${\rm Me\!V}$) photon spectral flux $\Delta N_{E}(E_{\gamma})$.
In particular, we calculate the sample standard deviation of the $\gamma$-ray spectral flux at 100~${\rm Me\!V}$, using power-law fits to the 5 {\sl Fermi}-LAT measurements that were simultaneous with the 5 {\sl Chandra}~ observations.
Table~\ref{t:r1} lists the 5 {\sl Chandra}~ ObsIDs, their dates, along with the $\gamma$-ray photon index $\Gamma_{\gamma}$, integrated photon flux $N(>$100~${\rm Me\!V}$), and photon spectral flux $N_{E}$(100~${\rm Me\!V}$).
For the 5 {\sl Fermi}-LAT observations, the mean and sample standard deviation of the photon spectral flux at 100~${\rm Me\!V}$\ are $1.21\times 10^{-10}$ and $5.77\times 10^{-11}$ ph/(cm$^2$ s ${\rm ke\!V}$), respectively.
Based upon the sample standard deviation ($s_{i}$) of photon spectral flux at $E_{x} = 1$~${\rm ke\!V}$\ for each X-ray analysis pixel and the measured standard deviation ($5.77\times 10^{-11}$ ph/(cm$^2$ s ${\rm ke\!V}$)) at $E_{\gamma} = 100$~${\rm Me\!V}$, we constrain the effective X-ray to $\gamma$-ray photon index of the flaring component: $\Gamma_{x\gamma} \equiv -\log[\Delta N_{E}(E_{\gamma})/\Delta N_{E}(E_{x})] / \log[E_{\gamma}/E_{x}]$.
Figure~\ref{f:r6} shows the image and corresponding histogram of the distribution of upper limits to $\Gamma_{x\gamma}$ based upon the sample standard deviation of the X-ray measurements and assuming $\Gamma_{x} = 1$.
In that the $\gamma$-ray variations are statistically significant and the X-ray variations are not, we compute 99\%-confidence upper limits to $\Gamma_{x\gamma}$ (Table~\ref{t:r2} last row).
Note that the upper limits to $\Gamma_{x\gamma}$ are marginally consistent with the low-energy extrapolation of the $\gamma$-ray spectrum ($\Gamma_{\gamma} = 1.27 \pm 0.12$) of the flaring component (Buehler et al.\ 2012).
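For reference, the definition of $\Gamma_{x\gamma}$ translates directly into code. In the sketch below, the $\gamma$-ray value is the measured sample standard deviation quoted above, while the X-ray variability amplitude is an illustrative example value for a single analysis pixel:
\begin{verbatim}
# Sketch: effective X-ray-to-gamma-ray photon index from the variability
# amplitudes at 1 keV and 100 MeV. dN_x is an example pixel value.
import numpy as np

dN_gamma = 5.77e-11        # ph/(cm^2 s keV) at 100 MeV (measured std. dev.)
dN_x = 1.0e-5              # ph/(cm^2 s keV) at 1 keV (example value)
E_gamma, E_x = 1.0e5, 1.0  # energies in keV

Gamma_xg = -np.log(dN_gamma / dN_x) / np.log(E_gamma / E_x)
print(Gamma_xg)            # ~1.05 for these example numbers
\end{verbatim}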
\subsection{Variability within an X-ray Image} \label{ss:x_short}
In sections~\ref{ss:x_analysis}--\ref{ss:x_variability}, the search for variability focused on sensitivity to flux changes amongst the five pointings with a minimum cadence of 0.6~days.
Here, we search for variability on shorter time-scales---namely within each pointing.
As described in Section~\ref{s:xray_obs}, the ACIS S3 CCD was read at most roughly 6000 times in each pointing due to telemetry saturation (deadtime).
In this study, rather than using the $2\times2$ ACIS analysis pixel, we
employed a circular search bin with a radius of $1$~ACIS-pixel
($0.49\arcsec$) on a grid with $0.1$~ACIS pixel spacing.
Note that this oversampling implies that the results of the test in adjacent
pixels are not statistically independent.
(We also analyzed these data using the statistically independent analysis
pixels of \S\ref{ss:x_analysis} with similar results as below.)
Using the frame number of each detected photon, we derive the empirical cumulative distribution function (ECDF) of the frames in which a photon arrived in a given search bin.
This ECDF is then compared with the corresponding ECDF of the exposure given
by the sequence of frames actually read.
Finally, we compare the two ECDFs using a Kolmogorov-Smirnov test, resulting
in a probability estimate $Q$ that the two ECDFs are derived from the same
parent distribution.
A low value of $Q$ would indicate possible variability.
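A sketch of this test on placeholder frame lists follows; the real analysis uses the recorded frame numbers and read-out sequence of each observation:
\begin{verbatim}
# Sketch: compare the ECDF of frames with a detected photon against the
# ECDF of all frames actually read out, via a two-sample KS test.
# `read_frames` and `photon_frames` are placeholder arrays.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
read_frames = np.sort(rng.choice(50000, size=6000, replace=False))
photon_frames = rng.choice(read_frames, size=300)   # detected events

Q = ks_2samp(photon_frames, read_frames).pvalue
print("per-bin KS probability Q:", Q)

# Global significance over the statistically independent bins:
N = int(120 * 120 / np.pi * 5)
print("chance of at least one such Q:", 1 - (1 - Q)**N)
\end{verbatim}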
The results of the test for the five pointings were very similar.
The smallest value, $Q_\mathrm{min}= 6.7\times10^{-7}$, was obtained in observation $13151$.
Note that selecting this point represents a tuning bias, as the noise in neighboring points is highly correlated due to the oversampling.
The probability of finding at least one pixel with $Q_\mathrm{min}$
considering that there are $120\times120/(\pi\times1^2)\times5$ statistically independent trials is 0.015, which we regard as a {\em lower} limit due to the tuning bias.
A 0.015 probability is tantalizing but not compellingly significant:
Hence, we do {\em not} claim detection of short-time-scale variability.
The fact that the location of the point with minimum Q is very close to the pulsar, a region in which pileup plays a strong role in blotting out the image, bolsters our somewhat conservative conclusion.
\section{Near-Infrared Image of the Inner Knot}\label{s:Keck}
The extreme saturation of the pulsar in the X-ray images means that we cannot easily study the central 2$^{\prime\prime}$ in X-rays.
However, this region does contain a nebular structure of particular interest: the ``inner knot'' whose peak is $0.65^{\prime\prime}$ southeast of the pulsar at position angle $118^\circ$ East from North (Hester 2008).
This structure, an oval shape extending $\approx 0.75^{\prime\prime}$, is well measured in HST and ground-based near-IR images.
Given its relatively red spectrum (energy spectral index $\alpha_\nu = -1.3 \pm 0.1$ versus $\alpha_\nu = 0.27 \pm 0.03$ for the pulsar; Sandberg and Sollerman 2009), it is one of the near-IR brightest structures in the Nebula.
Sandberg and Sollerman (2009) note that the knot varies by a factor of 2; we confirm typical variability of $20-30\%$ in archival {\it HST} images.
Komissarov \& Lyutikov (2011) have proposed that this structure represents radiation from an oblique termination shock in the pulsar wind nebula.
In this picture, the Earth line-of-sight is tangent to the flow at the inner knot position, and thus the intensity experiences substantial Doppler boosting for synchrotron emission in the mildly relativistic post-shock flow.
Indeed, in relativistic MHD simulations they find that this bright spot is highly variable and can dominate the $\gamma$-ray synchrotron emission.
Alternatively, the knot could be a time-varying standing shock in the polar jet flow itself, a flow known to be highly variable from HST imaging (Hester 1995, 2002, 2008).
It is thus of interest to check the status of the knot during the
2011-April $\gamma$-ray flare.
Unlike the sequence of multiwavelength observations performed after the 2010 September flare, it was impossible to trigger an allocated HST Target of Opportunity observation owing to solar constraints in April.
Happily we were able to obtain a Keck Near Infrared Camera (NIRC2) $K'$ exposure (Figure~\ref{f:k1} left image) on MJD 55667.250, almost precisely at the peak of the $\gamma$-ray flux and 2.5\,h before the ACIS image ObsID 13152 (Figure~\ref{f:r0}).
Unhappily, the observations occurred during twilight and only one $20\times4$-s integration without dithering was obtained.
Under these conditions the adaptive-optics (AO) loop did not close, leaving an undithered image with native 0.46$^{\prime\prime}$ full width at half maximum (FWHM) seeing.
This frame was dark subtracted and an approximate background was removed using an immediately subsequent image.
Despite the modest image quality, the inner knot was well detected.
After subtracting the pulsar with a scaled image of the comparably bright companion star $4^{\prime\prime}$ northeast, we measured the knot flux and position.
We find a magnitude $K'= 15.60\pm 0.03$ and an offset $0.64\pm0.04^{\prime\prime}$ from the pulsar.
For comparison we measured a high-quality NIRC2 $K'$ image (Figure~\ref{f:k1} right image) obtained 2005 Nov 10.
Here the knot is $K'= 15.94\pm 0.02$ at offset $0.58\pm0.02^{\prime\prime}$.
We also note that Sandberg and Sollerman (2009) measured $K_s = 15.80\pm0.03$ on 2003 Oct 18.
We conclude that the knot was in a relatively bright state during the flare ($\approx 35\%$ brighter than in 2005), but well within the normal range of flux (and position) variation.
Thus, there is no dramatic change in the inner knot in the near-IR band.
We use the amplitude of the measured variation as an upper limit to any variation in the inner knot associated with the $\gamma$-ray flare (Figure~\ref{f:sed}).
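The $\approx35\%$ figure follows directly from the two $K'$ magnitudes via Pogson's relation; as a one-line check:
\begin{verbatim}
flux_ratio = 10**((15.94 - 15.60) / 2.5)  # ~1.37, consistent with ~35%
\end{verbatim}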
\section{Radio Observations}\label{s:vla}
On 2011 April 14, we triggered a prompt radio follow-up program with the VLA.
The VLA observations occurred in 8 epochs starting April 15 and ending July 10.
These observations detected the pulsar at 2 epochs, but found no other point sources in the field.
Observations were predominantly in the range $4-8$~GHz, with additional observations at 1.4~GHz (not reported; see below) and at 22~GHz (for some later epochs).
Unless otherwise noted, all observations used two sub-bands, each with a 128-MHz bandwidth.
In each run, observations of the target were bracketed with scans of a phase calibrator (J0559+2353, except where noted) and a flux calibrator (3C~147).
The fields of view are limited to the primary beam response of the antennas, with full width at half power of 9$'$/$\nu_{\rm 5}$ with $\nu_{\rm 5} \equiv \nu$/(5~GHz).
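For reference, this scaling gives a beam of $\approx9'$ at 5~GHz and $\approx2'$ at K band; a one-line sketch:
\begin{verbatim}
def primary_beam_fwhm_arcmin(nu_GHz):
    # Full width at half power, per the 9'/nu_5 scaling quoted above.
    return 9.0 / (nu_GHz / 5.0)

primary_beam_fwhm_arcmin(22.3)   # ~2.0 arcmin at K band
\end{verbatim}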
Table~\ref{t:vla1} provides a summary of the observational parameters and results.
Our initial observations were obtained through a {\sl Fermi}~ guest-investigator cycle-3 program (S3184) approved for four 1-hour runs in the L-band and C-band ($\approx\, 1.4$ and $5$ GHz, respectively).
The VLA was in its B-array configuration during these observations, resulting in images with angular resolution $\approx\,$1\arcsec/$\nu_{5}$.
We found that the L-band data for the Crab were highly confused due to the brightness and complexity of the steep-spectrum nebular emission in the first (April 15) and second (April 19) epochs.
Consequently, we modified our strategy for subsequent observations.
After the first epoch, we split the C-band observations into two widely spaced side-bands centered at 4.2 and 7.8 GHz, aiming better to constrain the spectrum of any detected source.
We also began scheduling observations only at frequencies greater than 4~GHz after the second epoch.
In these B-array data, our point-source limits at the lower frequency are $\approx 3-4\times$ larger than at the higher frequency, again due to the Crab Nebula's steep-spectrum radio emission.
After non-detection of any significant radio point-source emission down to $\approx\,$1-7 mJy (3-sigma) sensitivities in the initial
three VLA observations 1--7 days after the $\gamma$-ray peak (Hays et al.\ 2011), we purposely delayed the fourth observation until 9 days after the previous observation to probe longer time scales.
Only in this last observation (April 30) did we obtain a significant point-source detection, which was coincident with the Crab pulsar position, but only in the upper side-band centered at 7.8~GHz.
The detection is a factor of 5 greater than the 3-sigma limit from 9 days prior. The source was not detected in the lower-frequency side-band (4.2~GHz), with a limit indicating a flat radio spectrum.
Following the point-source detection on April 30, we became aware of VLA TEST observations of the Crab obtained on April 22 (program TDEM0007, PI: D.~Frail).
These data were obtained with wide bandwidth (16 $\times$ 128 MHz wide sidebands), so were more sensitive than those from our observing runs.
The flux densities were scaled to 3C~147; J0534+1927 was utilized for phase calibration.
The flat-spectrum radio source coincident with the Crab pulsar detected in our Apr-30 observation was confirmed in the Apr-22 data in two bands, but with a much lower ($10\times$) flux.
Also, the source spectrum was rather steep, with an energy spectral index $\alpha = 2.21\pm 0.34$ ($S_{\nu}\propto\nu^{-\alpha}$) between 5~GHz and 8.6~GHz.
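For readers wishing to reproduce such estimates, the spectral index follows from any two flux-density measurements; a sketch with purely illustrative flux values:
\begin{verbatim}
import numpy as np

def spectral_index(S1, nu1, S2, nu2):
    # alpha in the convention S_nu ~ nu^(-alpha), for nu1 < nu2.
    return -np.log(S2 / S1) / np.log(nu2 / nu1)

spectral_index(1.00, 5.0, 0.30, 8.6)   # ~2.2 for these hypothetical fluxes
\end{verbatim}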
Following the radio detections of the pulsar, we requested further VLA monitoring of the Crab through Director's Discretionary Time (program 11A-268 = AC1052).
In addition to the C-band observations, we obtained exposures in the K~band (centered at 22.396 and 22.254 GHz) aiming to constrain further the spectrum of any detected radio source.
Through this program, we obtained 2-hour runs on May 12/13 (while the VLA was in its hybrid BnA array) and on July 10/11 (in A array), and an additional 1-hour run on May 28, using one of the early (April 19, from program S3184) frequency setups.
An angular resolution of $\approx$(0.3\arcsec/$\nu_{\rm 5}$) is typically achieved in A-array VLA observations.
With the higher resolution, we obtained systematically 4$\times$ lower flux limits than in the lower resolution B-array data, presumably due to a smaller contribution from the extended nebular emission.
In none of these later-epoch follow-up observations did we detect a point source, to typical limits of $1-2$ mJy at each of the three frequencies (see Table~\ref{t:vla1}).
\subsection{Discussion of the Radio Data} \label{ss:vladiscussion}
Previously, it was argued that the $\gamma$-ray flaring possibly originates in a knot 5.7\arcsec\ east of the pulsar (Tavani et al.\ 2011).
Indeed, this knot is the site of the most significant X-ray variability we observe (\S~\ref{ss:x_variability}) during the 2011-April flaring episode.
Variable radio emission was detected around the time of the previous Crab $\gamma$-ray flaring episode (Lobanov, Horns, \& Muxlow 2011), at flux densities fainter than those probed by our VLA observations.
However, we found no significant radio point-source counterpart to this knot in any of our 8 epoch VLA observations following the 2011-April $\gamma$-ray flare.
Rather, we detected a variable continuum radio source with the VLA, coincident within $0.2\arcsec$ of the Crab pulsar position in 2 of 8 epochs.
However, the flux level and cadence of the radio detections are consistent with previous observations by Moffett \& Hankins (1996), who detected the pulsar in $20-40\%$ of their observations.
Moreover, the dates of the radio point-source detections coincident with the pulsar do not coincide with any feature in the $\gamma$-ray lightcurve (Figure~\ref{f:r0}), having occurred 8-16 days after the brightest $\gamma$-ray peak.
Consequently, our VLA follow-up observations provide no conclusive evidence for the site of the $\gamma$-ray flares.
\section{Discussion} \label{s:consequences}
Here we discuss possible explanations for the absence, in non-$\gamma$-ray bands, of variability obviously correlated with the $\gamma$-ray flare. In addition, we present a conceptual model for the production of the $\gamma$-ray flares.
\subsection{$\gamma$-ray Emission}
As was recognized immediately, the SEDs of the $\gamma$-ray flares peak near a characteristic energy about 5 times the energy $\alpha^{-1}m_ec^2\approx 70$~${\rm Me\!V}$, which is identified with radiation-reaction-limited synchrotron (magneto-bremsstrahlung) emission (e.g., Landau \& Lifshitz 1959).
Subjected to comparable parallel and perpendicular electromagnetic acceleration, an electron radiates at this energy, independent of the strength of the acceleration.
An electron emitting synchrotron radiation in a magnetostatic field with peak emission at several $100$~${\rm Me\!V}$\ would cool in turning through $\approx0.2$~radian and would thus require a parallel electric field $E\approx5 cB$ to compensate the radiative loss.
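The characteristic energy quoted above is simple arithmetic (here $\alpha$ is the fine-structure constant):
\begin{verbatim}
alpha_fs = 1.0 / 137.036    # fine-structure constant
m_e_c2 = 0.511              # electron rest energy, MeV
E_char = m_e_c2 / alpha_fs  # ~70 MeV; the flare SED peaks ~5x higher
\end{verbatim}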
The Crab pulsar releases energy in an essentially electromagnetic form.
Poynting flux flows radially outward from the pulsar through the light cylinder at
$c P/(2\pi) \approx 1500$~km and into an outflowing wind, where at least some of the electromagnetic-energy flux may transform into a plasma-energy flux.
How, where, and to what extent this happens has long been a matter of debate (e.g., Arons 2010; Kirk et al.\ 2009).
Furthermore, the electromagnetic component has a DC toroidal part with an associated quadrupolar current distribution, and an AC, ``striped'' part containing current sheets separated by $\frac{1}{2} c P \approx5000$~km.
The transformation from electromagnetic to plasma energy might be non-dissipative---through the action of a Lorentz force (e.g., Bogovalov 1997; Bogovalov 2001)---or dissipative---through particle heating and acceleration (e.g., Coroniti 1990; Lyubarsky \& Kirk 2001; Sironi \& Spitkovsky 2011).
However, it must occur somewhere as magnetic flux would otherwise accumulate in the nebula, ultimately reacting back on the pulsar.
Some of this transformation from electromagnetic to plasma energy may occur at a shock (P{\'e}tri \& Lyubarsky 2007; Sironi \& Spitkovsky 2011) with radius $\approx10^{17}$~cm, where the wind momentum flux balances the ambient nebular pressure (Rees \& Gunn 1974; Kennel \& Coroniti 1984).
It has also been proposed that the toroidal field loops contract to form an axial pinch (identified with the X-ray jet) and reconnect at an equatorial current sheet (the torus) (Komissarov \& Lyubarsky 2003; Del Zanna, Amato, \& Bucciantini 2004; Camus et al.\ 2009).
In many respects the pulsar is a current generator.
The supersonic wind contains outflowing fluxes of electrons and positrons.
(Any ions that are present behave like positrons of similar rigidity but do not radiate.)
The difference in their fluxes determines the current density.
This current may concentrate into sheets and filaments, where strong dissipation can occur---as happens in heliospheric and laboratory plasmas (Gosling et al.\ 2005; Sui \& Holman 2003; Sergeev et al.\ 1993).
In particular, the inner wind, the shock, the jet, and the torus are all natural sites of rapid dissipation and $\gamma$-ray emission.
If we consider this dissipation more generally under electromagnetic conditions, a current $I$ may be associated with a potential difference $V\approx IZ_0$ where $Z_0=\mu_0 c=377\, \Omega$ is the impedance of free space and we drop model-dependent constants of order unity.
(This result can be anticipated on the basis of dimensional analysis or exhibited in particular simple cases.)
The maximum energy to which an electron or positron can be accelerated is $\gamma_{\rm max} m_e c^2 \approx eV \approx e I Z_0$, and the expected power is then $L\approx I V\approx I^2 Z_0$.
On this basis, a spectrum of currents extending up to $\approx30$~TA should suffice to account for the $\gamma$-ray variations.
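The 30-TA scale can be checked numerically; a sketch of the order-of-magnitude arithmetic, where the comparison flare power of a few $\times10^{36}$~erg~s$^{-1}$ is our assumption:
\begin{verbatim}
Z0 = 376.73              # impedance of free space, ohms
I = 30e12                # current of 30 TA
V = I * Z0               # ~1.1e16 V
gamma_max = V / 0.511e6  # eV/(m_e c^2), ~2e10
L = I**2 * Z0 * 1e7      # I^2 Z0 in erg/s, ~3e36: comparable to the flare
\end{verbatim}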
However, currents do not automatically dissipate as just described.
Large electric fields are normally discharged in a few plasma periods.
The best way to create them here is transiently over a few Larmor periods and radii.
This, in turn, requires local charge separation of the plasma in the emission site.
To be more precise, the density $n=n_-+n_+$ of electrons plus positrons/ions will combine to create a local current density $j\approx nec$.
For example, in the case of a pinch, the gradient, polarization and curvature drifts automatically produce the axial current.
However, a supporting local electric field of strength $E\approx cB$ also requires that $|n_--n_+|\approx n$.
Put another way, the charge and current are mostly in an emission site that is a few Larmor radii in size and survives for a few Larmor periods of the $\gamma$-ray emitting particles.
When this happens, the ``Ohmic'' dissipation is radiative, not collisional as is normally the case.
For this to occur, the particles must be sufficiently energetic to radiate efficiently.
This requires that most of the particles are concentrated in an emission site that is as small as $\approx\gamma_{{\rm max}}^3r_e\approx(eIZ_0 /m_ec^2)^3r_e\approx10^{16}$~cm.
The key point is that there should be extensive and sustained radiation-reaction-limited emission at the peak $\gamma$-ray-flare energy, even though the particle energy and magnetic field might be changing.
Note that when this condition is unsatisfied, efficient particle acceleration to lower energy should still result: Most of the magnetic dissipation and particle acceleration in the nebula might occur in this fashion.
Detailed modeling is necessary to determine whether or not such a scheme can reproduce the powerful, narrow-band $\gamma$-ray variation that is observed (Uzdensky et al.\ 2011; Cerutti et al.\ 2012; Cerutti, Uzdensky, \& Begelman 2012; Lyutikov, Balsara, \& Matthews 2012; Bykov et al.\ 2012; Sturrock \& Aschwanden 2012; Blandford \& Yuan 2012) and to see if unstable magnetized plasmas, carrying large currents, evolve to satisfy these conditions.
\subsection{Associated Emission}
Whether we interpret the $\gamma$-rays as coming from radiation-reaction-limited synchrotron emission or simply extrapolate the observed $\gamma$-ray spectra to lower energy, it should not be surprising that direct, associated emission has not yet been observed in the X-ray, optical, or radio bands:
In these bands the contrast with the steady emission is too small to be easily noticed.
However, the indirect effects could be larger and detectable.
For example, the large 2011-April flare produced a radiant energy of $6\times10^{40}$~ergs if isotropic, equivalent to the energy contained within a region of size $\approx2\times10^{16}$~cm subtending an angle $\approx0.3$~arcsec.
It seems unlikely that the dynamical aftermath of a major flare would not alter the ambient emission---either through compression or rarefaction that would cause the magnetic field strength
and the electron distribution function to change significantly.
The associated surface brightness change should be several percent, assuming a total emission region of size $\approx0.3$~arcsec, consistent with our upper limits.
Even if future observations fail to exhibit associated emission, they may still rule out specific detailed mechanisms in local sites.
Understanding the emission mechanism could have a significance beyond pulsar wind nebulae.
In particular, it could provide a clue to the surprisingly rapidly variable emission seen in relativistic jets in the radio, optical, X-ray, and ${\rm Te\!V}$\ bands.
If so, the Crab Nebula would once again be the source of fresh and important astrophysical insight.
\section{Summary}\label{s:summary}
Using the {\sl Chandra}, Keck, and VLA Observatories, we acquired X-ray, near-IR and radio images of the Crab Nebula, contemporaneous with the 2011-April $\gamma$-ray flare.
We searched for variability in the X-ray data over two time-scale ranges:
First, we tested for pointing-to-pointing variations amongst the 5 pointings, each with an effective exposure time $\approx 1200$~s and a minimum separation of 0.6~days.
Second, we tested for variations within each of the 5 observations.
In neither case did we detect statistically significant X-ray variations; thus we can set only upper limits to any X-ray variations associated with the $\gamma$-ray flare.
As the {\sl Chandra}~ ACIS images suffer severe pile-up near the Crab pulsar, our search for variability in the X-ray images was not sensitive to variations within the central $\approx 1.5\arcsec$.
Comparing the upper limits to X-ray variations with the {\sl Fermi}-LAT-measured $\gamma$-ray variations, we set upper limits at $99$\%-confidence to the effective X-ray--$\gamma$-ray photon power-law index $\Gamma_{x\gamma} \leq 1.20$ to $ \leq 1.27$, dependent upon assumptions about the X-ray index $\Gamma_{x}$.
As {\sl Fermi}-LAT measures a $\gamma$-ray index $\Gamma_{\gamma} = 1.27\pm 0.12$ for the flaring component, it is statistically possible that the flaring component's spectrum extends as a simple power-law from $\gamma$-rays to X-rays.
Further, we note that our upper limit to $\Gamma_{x\gamma}$ is consistent with transparent synchrotron emission, whose photon index must be $>\frac{2}{3}$.
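For completeness, we note where the $\frac{2}{3}$ bound comes from: the single-particle synchrotron spectrum rises at low frequencies as $F_\nu \propto \nu^{1/3}$, and since the photon index is defined through $N(E)\propto E^{-\Gamma}$, so that $F_\nu \propto \nu^{1-\Gamma}$, the hardest attainable slope for optically thin emission corresponds to $1-\Gamma = \frac{1}{3}$, i.e., $\Gamma = \frac{2}{3}$.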
Comparison of two Keck near-IR observations found that the inner knot ($\approx 0.65\arcsec$ from the pulsar) was somewhat brighter than average during the $\gamma$-ray flare, but well within the normal range of brightness fluctuations typically observed.
We used the measured ($\approx35\%$) change in the near-IR flux from this knot as an upper limit to near-IR variations associated with the $\gamma$-ray flare.
We also performed a number of VLA observations searching for a point source appearing either at an unusual location and/or contemporaneous with the $\gamma$-ray flare.
Other than the pulsar itself, no such source was detected.
Figure~\ref{f:sed} shows the spectral energy distribution (SED) of the Crab Nebula over the observed electromagnetic spectrum.
The plot also shows the SED of the 2011-April $\gamma$-ray flare and the various limits determined here on variable radio, near-infrared, and X-ray emission possibly associated with the $\gamma$-ray flare.
Finally, we reviewed and discussed potential implications of $\gamma$-ray flares and theoretical issues to be addressed.
We concluded that, apart from lower-energy emission directly associated with the $\gamma$-ray flare itself, the dynamical aftermath of a major flare could alter the ambient emission --- e.g., through compression of the magnetic field.
The associated surface brightness change would likely be only several percent, assuming a total emission region of size $\approx0.3\arcsec$, consistent with our upper limits.
Although no ``smoking gun" has been identified, one should be encouraged that we have identified a number of regions in the X-ray images that are possible candidates.
We have also established further Target of Opportunity observations with {\sl Chandra}~ and HST that will be triggered at the onset of the next $\gamma$-ray flare.
The X-ray observations will also probe the region very close to the pulsar using the {\sl Chandra}~ High-Resolution Camera (HRC).
\section{Acknowledgments}
The work of MCW, SLO, and AFT is supported by the {\sl Chandra}~ Program.
The {\sl Chandra}~ data were obtained in response to a pre-approved target of opportunity request granted under {\sl Chandra}~ Director's Discretionary Time.
The {\sl Fermi}~ LAT Collaboration acknowledges generous ongoing support
from a number of agencies and institutes that have supported both the
development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the
Department of Energy in the United States, the Commissariat \`a l'Energie Atomique
and the Centre National de la Recherche Scientifique / Institut National de Physique
Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana
and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research
Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and
the K.~A.~Wallenberg Foundation, the Swedish Research Council and the
Swedish National Space Board in Sweden.
Additional support for science analysis during the operations phase is gratefully
acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France.
CCC, GBT, and JBL thank Tim Hankins for useful discussions, and the NRAO scheduling
committee for alerting us to the VLA TEST data and for their prompt consideration of our Director's Discretionary Time observations.
CCC, GBT, and JBL were supported in part by NASA through a {\sl Fermi}~ cycle-3 guest investigator grant.
In addition, work by CCC~at NRL is supported in part by NASA DPR S-15633-Y.
Our analyses utilized software tools provided by the {\sl Chandra}~ X-ray Center (CXC) in the application package CIAO and from the High-Energy Astrophysics Science Archive Research Center (HEASARC, operated by the NASA Goddard Space Flight Center, Greenbelt MD, and by the Smithsonian Astrophysical Observatory, Cambridge MA).
{\it Facilities:} \facility{CXO}, \facility{Fermi}, \facility{VLA}
In recent years, modern engineering systems have become larger and more complex than ever, as large-scale systems and networked control systems have become much more common, and the ``system-of-systems'' philosophy has become the dominant design methodology. At the same time, specifications regarding these systems have grown more intricate, and asserting that they are met is of utmost importance. Recently, several attempts have been made to adapt contract theory, which is a modular approach for software verification, to dynamical control systems. In this paper, we present a framework for assume/guarantee contracts for discrete-time dynamical control systems, and present computational tools for verifying these contracts for perturbed and unperturbed linear time-invariant (LTI) systems using linear programming (LP).
\subsection{Background and Related Work}
The verification problem asks one to find a proof that a certain model satisfies given specifications.
This problem is referred to as model checking in the fields of computer science and software engineering, where it has been studied extensively over the last few decades \cite{Wallace1989,Baier2008}. There, the software package under test is usually converted to or abstracted by a finite transition system, on which specifications are usually put in the form of linear temporal logic formulae. It is well-known that any linear temporal logic formula can be transformed to an equivalent automaton \cite{Vardi1996}, meaning that standard procedures from automata theory can be used to verify that the given finite transition system satisfies the specifications. Namely, one checks whether the language accepted by the negation of the formula and the set of traces of the finite transition system have a non-empty intersection \cite{Baier2008}. In practice, this check is done by finding a path with certain desired properties in the graph describing the product automaton, implying it is tractable even for systems with thousands or millions of states.
Over the years, various attempts were made to apply the framework of model checking for finite transition systems to verify specifications for control systems with continuous (and infinite) state space. The main tools used in all of them are \emph{abstraction} and \emph{simulation}, which are notions connecting control systems with continuous state-spaces and finite transition systems \cite{Tabuada2009,Belta2017}. Namely, verification for a continuous control system is achieved by (1) abstracting it by a finite transition system, and (2) applying the model checking framework to this finite abstraction. The correctness of the verification process stems from the fact that the finite transition system approximately (bi-)simulates the continuous control system \cite{Girard2007,Girard2009,Wongpiromsarn2010}. Unfortunately, the abstraction of continuous control systems relies on discretization of the state-space. Thus, these methods cannot handle systems with high-dimensional dynamics, due to the curse of dimensionality, as this collection of methods treats the system-under-test as a single monolithic entity. In particular, even minute changes to the system (e.g., replacing one of the actuators with a comparable alternative) would require executing a completely new verification process.
As noted in the literature, scalable development of large-scale systems with intricate specifications requires a modular approach, i.e., a design methodology allowing different components or subsystems to be developed independently of one another \cite{Baldwin2006,Huang1998}. This philosophy can be achieved by verifying or designing each component on its own, while treating all other components as part of the (unknown) environment. In software engineering, design and verification are often modular by design; requirements for the software package are almost always defined in terms of modules, or even individual functions and methods. Moreover, each function or module can be verified on its own, independently of the other parts of the software \cite{Grumberg1994}.
Perhaps the best example of the modular design philosophy in software engineering is contract theory \cite{Meyer1992,Benveniste2018}. Contract theory is a modular approach for software engineering, which explicitly defines assumptions on the input and guarantees on the output of each software component. It can be used to design and verify software components, and even automatically fix bugs in the code \cite{Pei2014}.
In contrast, the situation is significantly different for control systems. Control design is often non-modular, as it requires the designer to know an exact (or approximate) model for each component in the system. For example, even the most scalable distributed and decentralized control methods, such as \cite{Siljak2005,Rantzer2015}, require a single authority with complete knowledge of the system model in order to design the decentralized or distributed controllers, i.e., they do not follow this modular design philosophy. Recently, several attempts have been made to derive modular design procedures for control systems. Some methods try to ``modularize'' the previous procedure, which treated the system as a single monolithic entity, by considering composition-compatible notions of abstraction and simulation \cite{Meyer2017,Hussien2017, Saoud2018b, Zamani2018}. Another approach, which is geared toward safety specifications, is to search for a composition-compatible method to calculate invariant sets \cite{Smith2016, Nilsson2016,Chen2018}.
In recent years, several attempts have been made to adapt contract theory to a modular design and verification framework for control systems. It has been successfully applied to the design of the ``cyber" aspects of cyber-physical systems, see \cite{Nuzzo2014,Nuzzo2015} and references therein. More recently, several frameworks have been proposed for contract theory for dynamical control systems, see e.g., \cite{Besselink2019,Shali2021,Saoud2018,Saoud2019,Eqtami2019,Ghasemi2020,Saoud2021}.
In \cite{Besselink2019,Shali2021}, the authors propose methods for prescribing contracts on continuous-time systems, and verify these contracts either using geometric control theory methods, or using behavioural systems theory, respectively. Discrete-time systems are considered in \cite{Saoud2018,Saoud2019,Eqtami2019, Ghasemi2020}, where assumptions are put on the input signal to the system, and guarantees are put on the state and the output of the system. However, prescribing guarantees on the state of the system goes against the spirit of contract theory, as the state of the system is an internal variable. Thus, we aim at presenting a contract-based framework for discrete-time dynamical control systems which does not refer to the state of the system, and present efficient computational tools for their verification.
\subsection{Contributions}
In this paper, we propose a novel framework for assume/guarantee contracts on discrete-time dynamical control systems. These contracts prescribe assumptions on the input to a system and guarantees on its output, relative to its input. First, we present LP-based computational tools for verifying contracts defined by time-invariant linear inequalities for unperturbed LTI systems, explicitly stated by Algorithm \ref{alg.VerifyCertainIota}. Second, and more importantly, we extend the verification framework to perturbed LTI systems, culminating in Algorithm \ref{alg.VerifyUncertain}.
To the best of the authors' knowledge, no works presenting a contract-theoretic framework for perturbed systems currently exist. We also note that standard formal methods usually require special treatment when applied to perturbed or uncertain systems \cite{Sadigh2016,Shen2019,ApazaPerez2021}.
We first tackle the verification problem for unperturbed LTI systems. We use strong induction to show that the system satisfies the contract if and only if an infinite number of implications between inequalities hold (Theorem \ref{thm.Inductive}). These implications are then recast as linear programs, and we use $k$-induction \cite{Donaldson2011} to achieve verification by solving finitely-many linear programs, culminating in Algorithm \ref{alg.VerifyCertainIota}, for which we prove correctness and analyze its complexity (Theorems \ref{thm.CertainAlgCorrectness} and \ref{thm.ThetaInduciveLTI}). We then consider the problem of verifying that a perturbed system $\Sigma$ satisfies a contract $\mathcal{C}$ defined by time-invariant linear inequalities. We first show that $\Sigma$ satisfies the contract $\mathcal{C}$ if and only if the nominal counterpart $\hat{\Sigma}$ of $\Sigma$ satisfies a robustified contract $\mathcal{C}^\prime$ (Theorem \ref{thm.UncertainEquiv}). Ideally, we could then achieve a comparison-based procedure for verification, verifying that $\Sigma$ satisfies $\mathcal{C}$ by showing that the unperturbed system $\hat{\Sigma}$ satisfies $\mathcal{C}^\prime$, as we already have tools for the latter task.
Unfortunately, the contract $\mathcal{C}^\prime$ is defined by \emph{time-varying} linear inequalities, as the robustification of the guarantees at time $k$ corresponds to the worst-case behaviour of the perturbation up to time $k$. To alleviate this problem, we consider the most lenient time-invariant contract $\hat{\mathcal{C}}$ refining $\mathcal{C}^\prime$. Unfortunately, as $\hat{\mathcal{C}}$ depends on the perturbation for the entire time horizon, infinitely many robustification terms are necessary, rendering this approach intractable. To address this, we \emph{approximate} $\hat{\mathcal{C}}$ by a tractable under-approximation $\hat{\mathcal{C}}_\epsilon$ of arbitrary precision $\epsilon > 0$. As a result, we can verify that $\Sigma$ satisfies $\mathcal{C}$ by verifying that the unperturbed LTI system $\hat{\Sigma}$ satisfies the contract $\hat{\mathcal{C}}_\epsilon$, which is defined by time-invariant linear inequalities (Proposition \ref{prop.Taus}). Thus, verification can be achieved using the LP-based tools presented earlier in the paper, resulting in Algorithm \ref{alg.VerifyUncertain}. The computational complexity of the algorithm scales as $\log(1/\epsilon)$, meaning that even extremely small values of $\epsilon$ are tractable. We also study the assumptions and $\epsilon$-optimality of the suggested verification algorithm, see Section \ref{subsec.AnalysisUncertain}. These tools present a significant extension of our preliminary results, presented in the conference paper \cite{SharfADHS2020}, which only considered a significantly restricted class of contracts, and only unperturbed LTI systems.
The rest of the paper is structured as follows. Section \ref{sec.Background} presents the basics of the assume/guarantee framework, and also gives some background on polyhedral sets. Section \ref{sec.Cert} presents more general LP-based tools for verification for unperturbed LTI systems. Section \ref{sec.Uncertain} presents LP-based tools for verification for perturbed LTI systems. Finally, Section \ref{sec.CaseStudy} exemplifies the achieved tools for verification through case studies.
\paragraph*{Notation}
We denote the collection of natural numbers by $\mathbb{N} = \{0,1,2,\ldots\}$. For two sets $X,Y$, we denote their Cartesian product by $X\times Y$.
For a positive integer $n$, we denote the collection of all signals $\mathbb{N} \to \mathbb{R}^n$ by $\mathcal{S}^n$. For vectors $v,u \in \mathbb{R}^n$, we understand $v \le u$ as an entry-wise inequality. Moreover, we denote the Euclidean norm of a vector $v\in \mathbb{R}^n$ as $\|v\|$, and the operator norm of a matrix $P$ as $\|P\| = \sup_{v \neq 0} \frac{\|Pv\|}{\|v\|}$. The all-one vector is denoted by $\mathds{1}$, and the Minkowski sum of two sets $X,Y \subseteq \mathbb{R}^d$ is defined as $X+Y = \{x+y: x\in X,y\in Y\}$.
Given a state-space system $(A,B,C,D)$, the observability matrix of depth $m$ is denoted by $\mathcal{O}_m = [C^\top,(CA)^\top,\ldots,(CA^m)^\top]^\top$. The observability index $\nu$ is the minimal integer such that ${\rm rank}~\mathcal{O}_\nu = {\rm rank}~\mathcal{O}_{\nu+1}$. Finally, given a state $x$ for the system, we let $p_\mathcal{O}(x)$ be the projection of $x$ on the observable subspace of the system.
\section{Background} \label{sec.Background}
In this section, we present some basic notions about assume/guarantee contracts, as well as some basic facts about polyhedral sets.
\subsection{Assume/Guarantee Contracts}
We present several basic notions in the theory of abstract assume/guarantee contracts for dynamical closed-loop control systems. These have been previously presented in the preliminary work \cite{SharfADHS2020}, and are derived from \cite{Nuzzo2015,Benveniste2018}. Computational tools for these contracts will be given in the upcoming sections.
\begin{defn} \label{def.Systems}
A system $\Sigma$ {has an input $d\in \mathcal{S}^{n_d}$, output $y \in \mathcal{S}^{n_y}$, and state $x\in \mathcal{S}^{n_x}$. It is defined by a set $\mathcal{X}_0 \subseteq \mathbb{R}^{n_x}$ of initial conditions, matrices $A,B,C,D,E,F$ of appropriate dimensions, and two bounded sets $\mathcal{P} \subseteq \mathbb{R}^{n_p}$, $\mathcal{R} \subseteq \mathbb{R}^{n_r}$.} The evolution and observation are given by the following equations, which hold for any $k\in \mathbb{N}$:
\begin{align} \label{eq.GoverningEquations}
x(0)&\in \mathcal{X}_0,\\\nonumber
x(k+1) &= Ax(k) + Bd(k) + E\omega(k),~ \omega(k)\in \mathcal{P}\\ \nonumber
y(k) &= Cx(k) + Dd(k) + F\zeta(k),~\zeta(k)\in \mathcal{R}
\end{align}
For signals $d\in \mathcal{S}^{n_d}$ and $y\in \mathcal{S}^{n_y}$, we write $y\in \Sigma(d)$ if there exists a signal $x\in \mathcal{S}^{n_x}$ such that $d(\cdot),x(\cdot),y(\cdot)$ satisfy \eqref{eq.GoverningEquations}.
\end{defn}
We include the set of allowable initial states $\mathcal{X}_0$ in the definition of a system, as otherwise we cannot discuss several important specifications. For example, asking whether the output of the system always lies inside a safe set is meaningless if we make no assumptions on the initial state, e.g., it is meaningless if the initial state lies outside the safe set.
\begin{rem} \label{rem.InitDepend}
Definition \ref{def.Systems} can be extended by allowing $\mathcal{X}_0$ to be dependent of $d(0)$. This is reasonable if the system tries to avoid an obstacle whose position is defined by $d(\cdot)$, assuming that the system does not start on top of the obstacle. This is also reasonable for systems trying to track $d(\cdot)$, assuming their initial tracking error is not too large. The methods presented in this paper work under this more general assumption. However, we consider the restricted definition to enhance readability.
\end{rem}
\begin{rem}
Definition \ref{def.Systems} can also cover non-linear systems, as the sets $\mathcal{P},\mathcal{R}$ can include unmodeled non-linear terms.
\end{rem}
We consider specifications on dynamical control systems in the form of assume/guarantee contracts, which prescribe assumptions on the input signal $d(\cdot) \in \mathcal{S}^{n_d}$ and issue guarantees on the output signal $y(\cdot) \in \mathcal{S}^{n_y}$, relative to the input signal:
\begin{defn} \label{defn.AG}
An assume/guarantee contract is a pair $(\mathcal{D},\Omega)$ where $\mathcal{D} \subseteq \mathcal{S}^{n_d}$ are the assumptions and $\Omega \subseteq \mathcal{S}^{n_d} \times \mathcal{S}^{n_y}$ are the guarantees.
\end{defn}
In other words, we put assumptions on the input $d(\cdot)$ and demand guarantees on the input-output pair $(d(\cdot),y(\cdot))$.
Assume/guarantee contracts prescribe specifications on systems through the notion of satisfaction:
\begin{defn}
We say that a system $\Sigma$ satisfies $\mathcal{C} = (\mathcal{D},\Omega)$ (or implements $\mathcal{C}$), and write $\Sigma \vDash \mathcal{C}$, if for any $d\in \mathcal{D}$ and any $y\in \Sigma(d)$, we have $(d,y)\in \Omega$.
\end{defn}
Another notion that will be of use to us is the notion of refinement. It considers two contracts defined on the same system, and compares them to one another:
\begin{defn} \label{def.refine}
Let $\mathcal{C}_i = (\mathcal{D}_i,\Omega_i)$ be contracts for $i=1,2$, with the same input $d(\cdot) \in \mathcal{S}^{n_d}$ and the same output $y(\cdot)\in \mathcal{S}^{n_y}$. We say $\mathcal{C}_1$ \emph{refines} $\mathcal{C}_2$ (and write $\mathcal{C}_1 \preccurlyeq \mathcal{C}_2$) if $\mathcal{D}_1 \supseteq \mathcal{D}_2$ and $\Omega_1 \cap (\mathcal{D}_2 \times \mathcal{S}^{n_y}) \subseteq \Omega_2 \cap(\mathcal{D}_2 \times \mathcal{S}^{n_y})$.
\end{defn}
Colloquially, $\mathcal{C}_1 \preccurlyeq \mathcal{C}_2$ if $\mathcal{C}_1$ assumes less than $\mathcal{C}_2$, but guarantees more given the assumptions.
The framework of assume/guarantee contracts supports modularity in design using the notions of refinement and composition. These allow one to dissect contracts on composite systems into contracts on subsystems or on the individual components. The reader is referred to \cite{SharfADHS2020,Nuzzo2015} for more information about these notions. Moreover, \cite{SharfADHS2020} presents preliminary results for computational tools verifying them. Due to space limitations, we focus on prescribing tools for verifying that a given system $\Sigma$ satisfies a given contract $\mathcal{C}$, without assuming that the system can be separated into smaller subsystems.
\subsection{Polyhedral Sets} \label{subsec.Polyhedral}
In this paper, we focus on specifications defined by linear inequalities, i.e., specifications defined using polyhedral sets:
\begin{defn}
A set $S \subseteq \mathbb{R}^{d}$ is called polyhedral if it is defined by the intersection of finitely many half-spaces. Equivalently, there exist a matrix $A$ and a vector $b$ such that the set $S$ is defined by $S = \{z\in \mathbb{R}^d: Az \le b\}$.
\end{defn}
Polyhedral sets are known to be convex. Moreover, optimizing linear cost functions {over} them corresponds to solving a linear program, which can be done quickly using off-the-shelf tools, e.g., Yalmip \cite{Lofberg2004}. Any polyhedral set has an equivalent representation, known as the vertex representation:
\begin{lem}[\hspace{0.1pt}\cite{Schrijver1998}]
The set $S \subseteq \mathbb{R}^d$ is polyhedral if and only if there exist matrices $F,G$ such that $S = \{F\lambda + G\theta : \mathds{1}^\top \lambda = 1,~\lambda,\theta \ge 0\}$.
\end{lem}
Both representations of the polyhedral set can be useful for different reasons. The subspace representation $\{Az \le b\}$ is usually easier to define, and can be used to easily calculate the pre-image of a polyhedral set under a linear transformation. The vertex representation is useful for computing the Minkowski sum of two polyhedral sets, and for computing the image of a polyhedral set under a linear transformation.
In this paper, we will often encounter a situation in which we would like to verify that one polyhedral set is a subset of another polyhedral set. This inclusion can be easily verified:
\begin{lem} \label{lem.Inclusion}
Let $S_1,S_2$ be polyhedral sets.
\begin{itemize}
\item If the sets are given in subspace representation, $S_i = \{z \in \mathbb{R}^d: A_i z \le b_i\}$, then $S_1 \subseteq S_2$ if and only if $\varrho_j \le 0$ for any $j$, where $\varrho_j$ is given as the value of the following linear program, and ${\rm e}_j$ is the $j$-th standard basis vector:
\begin{align*}
\varrho_j = \max\{{\rm e}^\top_j (A_2 z - b_2) : A_1 z \le b_1\}.
\end{align*}
\item If the sets are given in vertex representation, $S_i = \{F_i \lambda + G_i \theta : \mathds{1}^\top \lambda = 1,~\lambda,\theta \ge 0\}$, then $S_1 \subseteq S_2$ if and only if there exist matrices $\Lambda,\Theta_F,\Theta_G$ {with nonnegative entries} such that the following relations hold:
\begin{align} \label{eq.InclusionVertex}
{G_1 = G_2\Theta_G,~F_1 = F_2\Lambda + G_2\Theta_F,~\Lambda^\top \mathds{1} = \mathds{1}.}
\end{align}
\end{itemize}
\end{lem}
\begin{proof}
The first claim follows immediately, as $\varrho_j \le 0$ if and only if $\{z:A_1z \le b_1\} \subseteq\{z:{\rm e}_j^\top A_2 z \le {\rm e}_j^\top b_2\}$. As for the second claim, \eqref{eq.InclusionVertex} holds if and only if:
\begin{itemize}
\item The columns of $F_1$ belong to $S_2$.
\item The columns of $G_1$ belong to $\{G_2\theta : \theta \ge 0\}$.
\end{itemize}
It is clear that if these conditions hold, then $S_1 \subseteq S_2$. As for the other direction, the first condition obviously holds. For the second condition, if we take some column $g$ of $G_1$, then $F_1{\rm e}_1 + tg \in S_1$ for any $t>0$. Thus, there exist some $\lambda_t,\theta_t\ge 0$ such that $\lambda_t^\top \mathds{1} = 1$ and $F_1{\rm e}_1 + tg = F_2 \lambda_t + G_2 \theta_t$. Thus:
\begin{align*}
\frac{1}{t} G_2 \theta_t = \frac{1}{t} F_1{\rm e_1} - \frac{1}{t} F_2 \lambda_t + g
\end{align*}
As $t \to \infty$, the right hand side tends to $g$ (as the elements of $\lambda_t$ are bounded between $0$ and $1$). This means that $g$ lies in the closure of the closed set $\{G_2\theta : \theta \ge 0\}$. As $g$ was an arbitrary column of $G_1$, the proof is complete.
\end{proof}
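The first part of Lemma \ref{lem.Inclusion} translates directly into code; a minimal sketch using scipy's linear-programming routine, assuming $S_1$ is non-empty and each maximum below is attained:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def polyhedron_subset(A1, b1, A2, b2, tol=1e-9):
    # Test {z : A1 z <= b1} subset of {z : A2 z <= b2} by solving
    # one LP per row of A2, as in the subspace-representation test.
    for j in range(A2.shape[0]):
        # varrho_j = max e_j^T (A2 z - b2)  s.t.  A1 z <= b1
        res = linprog(-A2[j], A_ub=A1, b_ub=b1, bounds=(None, None))
        if res.status != 0:         # LP unbounded (or S1 empty)
            return False
        if -res.fun - b2[j] > tol:  # varrho_j > 0: inclusion fails
            return False
    return True
\end{verbatim}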
\subsection{Linear Time-Invariant Contracts}
We aim to present efficient LP-based methods for verifying that $\Sigma \vDash \mathcal{C}$, and we do so for contracts defined by linear inequalities:
\begin{defn}\label{def.LinCon}
A \emph{linear time-invariant} (LTI) contract $\mathcal{C} = (\mathcal{D},\Omega)$ of depth $m\in \mathbb{N}$ with input $d(\cdot) \in \mathcal{S}^{n_d}$ and output $y(\cdot) \in \mathcal{S}^{n_y}$ is given by matrices $\mathfrak A^r\in \mathbb{R}^{n_a \times n_d},\mathfrak G^r \in \mathbb{R}^{n_g\times (n_d+n_y)}$ for $r=0,\ldots,m$ and vectors $\mathfrak a^0 \in \mathbb{R}^{n_a},\mathfrak g^0 \in \mathbb{R}^{n_g}$ such that:
\begin{align} \label{eq.LinCon}
\mathcal{D} &= \left\{d(\cdot): \sum_{r=0}^m\mathfrak A^r d(k-m+r) \le \mathfrak a^0,~\forall k\ge m\right\},\\ \nonumber
\Omega &= \left\{(d(\cdot),y(\cdot)): \sum_{r=0}^m\mathfrak G^r \begin{bmatrix} d(k-m+r) \\ y(k-m+r) \end{bmatrix} \le \mathfrak g^0,~ \forall k\ge m\right\}
\end{align}
\end{defn}
\begin{rem}
Definition \ref{def.LinCon} generalizes the contracts considered in \cite{SharfADHS2020}, which considered LTI contracts of depth $m=1$. It is no restriction to assume $m\ge 1$, as any contract of depth $m=0$ is also a contract of depth $m=1$ with $\mathfrak A^1, \mathfrak G^1 = 0$.
\end{rem}
Linear time-invariant contracts are defined using polyhedral sets for the stacked input vector $[d(k)^\top,\ldots,d(k-m)^\top]^\top$ and the similarly defined stacked output vector. We assume that the inequalities defining the assumptions are self-consistent, in the sense that if a signal satisfies them for some interval of length $m$, it can be extended for all future time.
\begin{defn}
Given matrices $\{\mathfrak V^r\}_{r=0}^m$ and a vector $\mathfrak v^0$, we say $(\{\mathfrak V^r\}_{r=0}^m,\mathfrak v^0)$ is \emph{extendable} if for any vectors $u_0,u_1,\ldots,u_m$ such that $\sum_{r=0}^m \mathfrak V^ru_r\le \mathfrak v^0$, there exists some vector $u_{m+1}$ such that $\sum_{r=0}^m \mathfrak V^r u_{r+1} \le \mathfrak v^0$.
\end{defn}
\begin{prop}
Let $\{\mathfrak V^r\}_{r=0}^m$ be matrices and $\mathfrak v^0$ be a vector. Write $\mathfrak{V}_- = \left[\mathfrak V^0,\ldots,\mathfrak V^m \right]$ and consider the polyhedral set $S_- = \{z: \mathfrak V_- z \le \mathfrak v^0\}$. We define the shift operators as:
\begin{align*}
T = \left[\begin{smallmatrix} 0 & I & \cdots & 0 & 0 \\ 0 & 0 & \ddots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & I \\ 0 & 0 & \cdots & 0 & 0 \end{smallmatrix}\right], ~ K = \left[\begin{smallmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ I \end{smallmatrix}\right],
\end{align*}
where $I$ is the identity matrix. The tuple $(\{\mathfrak V^r\}_{r=0}^m,\mathfrak v^0)$ is extendable if and only if the polyhedral set $TS_-=\{Tz: z\in S_-\}$ is contained in the polyhedral set $S_- + {\rm Im} K $.
\end{prop}
In particular, extendibility can be tested using the tools presented in the previous subsection.
\begin{proof}
By writing $z = [u_0^\top,\ldots,u_{m}^\top]^\top$, extendibility is equivalent to the following implication: whenever $z \in S_-$, there exists some $u_{m+1}$ such that $Tz + Ku_{m+1} \in S_-$. In other words, if $z\in S_-$, then there exists some $u_{m+1}$ such that $Tz \in S_- + \{-Ku_{m+1}\}$. This corresponds to $z\in S_-$ implying that $Tz \in S_- + {\rm Im}K$, proving the proposition.
\end{proof}
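When $S_-$ is a bounded polytope with a known vertex list, the criterion above reduces to one LP feasibility problem per vertex, since $S_- + {\rm Im}\,K$ is convex and $TS_-$ is the convex hull of the vertex images. A sketch, under these assumptions:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def shift_operators(n_d, m):
    # Block shift T and injection K acting on stacked vectors
    # z = [u_0; ...; u_m], each block of dimension n_d.
    N = (m + 1) * n_d
    T = np.zeros((N, N)); T[:m * n_d, n_d:] = np.eye(m * n_d)
    K = np.zeros((N, n_d)); K[m * n_d:, :] = np.eye(n_d)
    return T, K

def is_extendable(V_minus, v0, vertices, T, K):
    # T S_- lies in S_- + Im K iff for every vertex z of S_- some u
    # solves V_-(T z + K u) <= v0 -- an LP feasibility check per vertex.
    for z in vertices:
        res = linprog(np.zeros(K.shape[1]), A_ub=V_minus @ K,
                      b_ub=v0 - V_minus @ (T @ z), bounds=(None, None))
        if res.status != 0:
            return False
    return True
\end{verbatim}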
\section{Verification for Unperturbed Systems} \label{sec.Cert}
In this section, we consider the verification problem for unperturbed systems. These are closed-loop systems $\Sigma$ of the form \eqref{eq.GoverningEquations} for which the sets $\mathcal{P},\mathcal{R}$ consist of a single element. Equivalently, these are affine dynamical control systems governed by the following equations
\begin{align} \label{eq.GoverningEquations2}
x(0)&\in \mathcal{X}_0,\\\nonumber
x(k+1) &= Ax(k) + Bd(k) + w,~\forall k \in \mathbb{N}\\
y(k) &= Cx(k) + Dd(k) + v,~\forall k \in \mathbb{N}.\nonumber
\end{align}
where the vectors $w,v$ depend on the matrices $E,F$ and the sets $\mathcal{P},\mathcal{R}$, each containing a single element. Throughout this section, we fix such a system, governed by \eqref{eq.GoverningEquations2}. Moreover, we fix matrices $\{\mathfrak A^r,\mathfrak G^r\}_{r=0}^m$ and vectors $\mathfrak a^0, \mathfrak g^0$ defining an LTI contract $\mathcal{C}$ of depth $m$ via \eqref{eq.LinCon}. Our goal is to find a computationally tractable method for verifying that $\Sigma \vDash \mathcal{C}$.
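For concreteness, simulating such a system for a given input sequence is straightforward; a minimal sketch, with all arguments assumed to be NumPy arrays of compatible dimensions:
\begin{verbatim}
import numpy as np

def rollout(A, B, C, D, w, v, x0, d_seq):
    # Roll out the unperturbed affine dynamics for inputs d(0), d(1), ...
    # and return the corresponding outputs y(0), y(1), ...
    x, ys = np.array(x0, dtype=float), []
    for d in d_seq:
        ys.append(C @ x + D @ d + v)
        x = A @ x + B @ d + w
    return np.array(ys)
\end{verbatim}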
This section is split into two parts. First, we present an exact reachability-based procedure for verifying whether $\Sigma \vDash \mathcal{C}$ holds. The procedure will require us to solve infinitely many linear programs, meaning it is intractable. The second part of this section will use induction (or more precisely, $k$-induction \cite{Donaldson2011}) to augment the verification procedure to be tractable, at the cost of making it conservative.
\subsection{Reachability-based Verification}
By definition, a control system $\Sigma$ satisfies a contract $\mathcal{C} = (\mathcal{D},\Omega)$ if for any admissible input $d(\cdot)\in \mathcal{D}$, and any trajectory $(d(\cdot),x(\cdot),y(\cdot))$ of $\Sigma$, we have $(d(\cdot),y(\cdot))\in \Omega$. If the guarantees $\Omega$ were independent of the input $d(\cdot)$, then this property could be understood in terms of reachability analysis: $\Sigma \vDash \mathcal{C}$ if and only if the output of any trajectory of the system, with inputs taken from $\mathcal{D}$, lies in the set $\Omega$. For LTI contracts, the assumptions and guarantees are stated as a collection of requirements, one corresponding to each time $k$. The theorem below uses extendibility to convert this reachability-based criterion into a collection of implications that must be verified. This is a generalization of Theorem 3 in \cite{SharfADHS2020}.
\begin{thm}
\label{thm.Inductive}
Let $\mathcal{C} = (\mathcal{D},\Omega)$ be an LTI contract of depth $m\ge 1$ of the form \eqref{eq.LinCon}, and let $\Sigma$ be a system of the form \eqref{eq.GoverningEquations2}. Assume that $(\{\mathfrak A^r\}_{r=0}^m,\mathfrak a^0)$ is extendable. Then $\Sigma \vDash \mathcal{C}$ if and only if for any $n\in \mathbb{N}, n\ge m-1$, the following implication holds: for any $d_0,d_1,\ldots,d_{n+1} \in \mathbb{R}^{n_d}$, any $x_0,x_1,\ldots,x_{n+1} \in \mathbb{R}^{n_x}$ and any $y_0,y_1,\ldots,y_{n+1}\in \mathbb{R}^{n_y}$, the condition:
\begin{align} \label{eq.FullFormDemand}
\begin{cases}
x_0 \in \mathcal{X}_0,\\
\sum_{r=0}^m \mathfrak G^r \left[\begin{smallmatrix} d_{k-m+r} \\ y_{k-m+r} \end{smallmatrix}\right] \le \mathfrak g^0,&\forall k=m,\ldots,n,\\
\sum_{r=0}^m \mathfrak A^r d_{k-m+r} \le \mathfrak a^0,&\forall k=m,\ldots,n+1, \\
x_{k+1} = Ax_k + Bd_k + w,&\forall k=0,\ldots,n, \\
y_k = Cx_k + Dd_k + v,&\forall k=0,\ldots,n+1,
\end{cases}
\end{align}
implies:
\begin{align}\label{eq.FullFormResult}
\sum_{r=0}^m \mathfrak G^r \begin{bmatrix} d_{n+1-m+r} \\ y_{n+1-m+r} \end{bmatrix} \le \mathfrak g^0.
\end{align}
\end{thm}
In other words, satisfaction is equivalent to the following collection of statements, defined for all $n\in \mathbb{N}$: if the initial conditions hold, the guarantees hold up to time $n$, and the assumptions and the dynamics hold up to time $n+1$, then the guarantees hold at time $n+1$. We now prove the theorem:
\begin{proof}
Suppose first that whenever \eqref{eq.FullFormDemand} holds, so does \eqref{eq.FullFormResult}, and take any $d\in \mathcal{D}$ and $y\in \Sigma(d)$. Our goal is to show that $(d,y)\in \Omega$. As $d\in \mathcal{D}$, the following inequality holds for all $k\in \mathbb{N}, k\ge m$:
\begin{align*}
\sum_{r=0}^m \mathfrak A^r d(k-m+r) \le \mathfrak a^0.
\end{align*}
Moreover, as $y\in \Sigma(d)$, there exists some signal $x(\cdot)$ so that \eqref{eq.GoverningEquations2} holds for all $k\in \mathbb{N}$. Thus, if we choose $d_k = d(k), x_k = x(k)$ and $y_k = y(k)$ for all $k=0,1,\ldots,n+1$ and use the implication $\eqref{eq.FullFormDemand}\implies\eqref{eq.FullFormResult}$, we conclude that \eqref{eq.FullFormResult} holds for any $n\ge m-1$ by induction on $n$. Thus, $(d,y)\in\Omega$, and hence $\Sigma \vDash \mathcal{C}$.
Conversely, suppose that $\Sigma \vDash \mathcal{C}$, and we wish to prove that \eqref{eq.FullFormDemand} implies \eqref{eq.FullFormResult}. We take $n\in \mathbb{N}, n \ge m-1$ and some $d_0,d_1,\ldots,d_{n+1}\in \mathbb{R}^{n_d}$, $x_0,x_1,\ldots,x_{n+1}\in \mathbb{R}^{n_x}$ and $y_0,y_1,\ldots,y_{n+1}\in \mathbb{R}^{n_y}$ such that \eqref{eq.FullFormDemand} holds, and show that \eqref{eq.FullFormResult} also holds. Suppose that we show that there exist signals $ d(\cdot)$ and $ y(\cdot)$ such that ${y} \in \Sigma({d})$, ${d} \in \mathcal{D}$ both hold, and $ d(k) = d_k, y(k) = y_k$ also hold for all $k=0,1,\ldots,n+1$. In that case, we have that $( d, y)\in \Omega$ as $\Sigma \vDash \mathcal{C}$, which would imply the desired inequality at time $k=n+1$. Thus, it suffices to prove that such signals $ d, y$ exist.
Recall that $(\{\mathfrak A^r\}_{r=0}^m,\mathfrak a^0)$ was assumed to be extendable. As the inequality $\sum_{r=0}^m \mathfrak A^r d(k-m+r) \le \mathfrak a^0$ holds for all $k=m,\ldots,n+1$, we conclude that there exists a signal ${d} \in \mathcal{D}$ such that $ d(k) = d_k$ holds for $k=0,1,\ldots,n+1$. We define signals ${x},{y}$ as follows - for $k=0,1,\ldots,n+1$, we define ${x}(k) = x_k$ and ${y}(k) = y_k$. For $k\ge n+2$, we define $ x(k) = A x(k-1) + B d(k-1) + w$ and $ y(k) = C x(k) + D d(k) + v$. As we assumed that \eqref{eq.FullFormDemand} holds, we conclude that $ y\in \Sigma( d)$. We thus proved the existence of signals ${d},{y}$ satisfying ${y} \in \Sigma({d})$, ${d} \in \mathcal{D}$, and $ d(k) = d_k, y(k) = y_k$ for $k=0,1,\ldots,n+1$. We deduce the implication holds, concluding the proof.
\end{proof}
Theorem \ref{thm.Inductive} allows one to prove that an unperturbed LTI system $\Sigma$ satisfies an LTI contract $\mathcal{C}$ by proving infinitely-many implications of the form $\eqref{eq.FullFormDemand}\implies\eqref{eq.FullFormResult}$. Moreover, these implications can be seen as one polyhedral set being a subset of another polyhedral set, and can thus be verified using the tools in Section \ref{subsec.Polyhedral}.
These prove that if the system satisfies the contract ``up to time $n$'', then it satisfies it ``up to time $n+1$''. However, using the theorem directly to verify satisfaction is infeasible, as there are infinitely many implications to prove. Section \ref{subsec.kInduction} below will show that it suffices to prove finitely many implications of the form $\eqref{eq.FullFormDemand}\implies\eqref{eq.FullFormResult}$ in order to verify satisfaction. Moreover, we can test the validity of these implications by recasting them as optimization problems, similarly to Lemma \ref{lem.Inclusion}. For any $n,p\in \mathbb{N}$ such that $n-p\ge m-1$, we consider the following optimization problem:
\begin{align} \label{eq.Prob_np_Lin}
\max_{d_k,x_k,y_k} ~&~ \max_i \left[{\rm e}_i^\top\left( \sum_{r=0}^{m} \mathfrak G^{r} \begin{bmatrix}d_{n+1-m+r} \\ y_{n+1-m+r}\end{bmatrix} - \mathfrak g^0\right)\right]\\ \nonumber
%
{\rm s.t.} ~&~ \sum_{r=0}^{m} \mathfrak G^r \begin{bmatrix}d_{k-m+r} \\ y_{k-m+r}\end{bmatrix} \le \mathfrak g^0\hspace{2pt}~~,\forall k=m+p,\ldots,n,\\ \nonumber
%
~&~\sum_{r=0}^{m} \mathfrak A^r d_{k-m+r} \le \mathfrak a^0~~~~~~,\forall k=m+p,\ldots,n+1, \\ \nonumber
%
~&~x_{k+1} = Ax_k + Bd_k + w\hspace{2pt}~,\forall k=p,\ldots,n, \\ \nonumber
~&~y_k = Cx_k + Dd_k + v~~~~~,\forall k=p,\ldots,n+1, \\ \nonumber
%
~&~ x_p \in \mathcal{X}_p, \nonumber
\end{align}
where $\mathcal{X}_p,~p=1,2,\ldots,n$ are sets to be defined later, and $\rm e_i$ are the standard basis vectors. We denote this problem by $V_{n,n-p}$ and its optimal value as $\theta_{n,n-p}$. Here, $p$ represents the first time we consider, $n$ represents the last time at which we know the guarantee holds, and $\ell = n-p$ is the length of history we consider. For $p=0$, the problem $V_{n,n}$ computes the ``worst-case violation'' of the guarantee at time $n+1$, given that the assumptions and dynamics hold at times $0\ldots,n+1$ and that the guarantees hold at times $0\ldots,n$. Thus, Theorem~\ref{thm.Inductive} can be restated in the following form:
\begin{cor} \label{cor.Thetann}
Under the assumptions of Theorem~\ref{thm.Inductive}, $\Sigma \vDash \mathcal{C}$ if and only if $\theta_{n,n} \le 0$ for all $n \in \mathbb{N}$, $n\ge m-1$.
\end{cor}
\begin{proof}
For any $n\in \mathbb{N}$, $\theta_{n,n}\le 0$ if and only if \eqref{eq.FullFormResult} holds whenever \eqref{eq.FullFormDemand} holds. The result now follows by applying Theorem \ref{thm.Inductive}.
\end{proof}
\subsection{Tractable Verification using $k$-induction} \label{subsec.kInduction}
Corollary \ref{cor.Thetann} proves it suffices to compute $\theta_{n,n}$ for all $n\in\mathbb{N}, n\ge m-1$ in order to verify whether $\Sigma\vDash \mathcal{C}$. As this requires solving infinitely many linear programs, the method is intractable. Moreover, we prefer to compute $\theta_{n,\ell}$ for small $\ell = n - p$, as this is a simpler problem with fewer variables.
The main difficulty in using $V_{n,\ell}$ for small $\ell$ is that it requires knowledge of the state trajectory $x(\cdot)$ at time $p = n - \ell$, captured in \eqref{eq.Prob_np_Lin} via the constraint $x_p \in \mathcal{X}_p$. This is simply reduced to the initial value $x_0 \in \mathcal{X}_0$ for the problems $V_{n,n}$.
An efficient solution of $V_{n,\ell}$ for small $\ell$ requires a characterization of $\mathcal{X}_p$ satisfying the following criteria. First, we would like $\mathcal{X}_p$ to be \emph{independent} of $p$, as this will imply that verification can be done by solving a finite number of optimization problems (thus not requiring the computation of all $\theta_{n,n}$ as in Corollary \ref{cor.Thetann}).
Second, $V_{n,\ell}$ is equivalent to $V_{n+1,\ell}$ where $\mathcal{X}_{p+1}$ is the image of $\mathcal{X}_p$ under the dynamics $x_{p+1} = Ax_p + Bd_p + w$. Thus, we search for $\mathcal{X}_p$ which is a robust invariant set, and specifically the smallest robust invariant set containing $\mathcal{X}_0$.
However, the smallest robust invariant set containing $\mathcal{X}_0$ might be fairly complex to state explicitly, implying that the optimization problem $V_{n,\ell}$ cannot be explicitly defined, let alone solved. For example, \cite{Fisher1988} studies the minimal robust invariant set containing $\mathcal{X}_0 = \{0\}$ for two-dimensional linear time-invariant systems, given in state-space form via $x^+ = Ax + Bd$. It shows that this minimal robust invariant set is polyhedral if and only if all of the eigenvalues of the system matrix $A$ are rational. We note that the problem $V_{n,n-p}$ is a linear program if and only if $\mathcal{X}_p$ is a polyhedral set, meaning that if we take $\mathcal{X}_p$ as the minimal robust invariant set, the rationality of the eigenvalues of $A$ determines whether $V_{n,n-p}$ is a linear program or not. This problem is exacerbated further if the eigenvalues of $A$ are computed numerically, as we then cannot determine their rationality. We can also try to find some robust invariant set containing $\mathcal{X}_0$, not necessarily the smallest one, but an explicit form is still hard to find. For example, \cite{Rakovic2005} seeks a polyhedral robust invariant set containing $\mathcal{X}_0$, offering a very partial solution for $\mathcal{X}_0 = \{0\}$. A general solution to this problem is not known to the authors.
We sidestep the tractability problem posed by the robust invariant set by choosing $\mathcal{X}_p = \mathbb{R}^{n_x}$ for $p \ge 1$. This results in a more conservative test, in the sense that $\mathcal{X}_p$ is larger than necessary, and the demand $\theta_{n,\ell} \le 0$ becomes stricter. However, the resulting problems $V_{n,n-p}$ are linear programs. Moreover, this choice allows us to verify contract satisfaction by solving finitely many linear programs, as suggested by Algorithm \ref{alg.VerifyCertainNoIota} below. The algorithm chooses which problems $V_{n,\ell}$ to solve based on an input parameter $\iota$, which essentially acts as a truncation parameter. Indeed, it defines the maximal history depth for the problems $V_{n,\ell}$ solved by the algorithm, and also the highest number $n$ for which $\theta_{n,n}$ is computed. Theorem \ref{thm.CertainAlgCorrectness} below studies Algorithm \ref{alg.VerifyCertainNoIota}, proving its correctness and analyzing its complexity. Later, Theorem \ref{thm.ThetaInduciveLTI} will suggest a choice of the parameter $\iota$.
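To make this concrete, the following minimal Python sketch (assuming NumPy and SciPy; the function name, variable layout, and the relabeling of time $p \to 0$ are ours) assembles and solves $V_{n,\ell}$ with a free initial state. It therefore computes $\theta_{n,\ell}$ for $n > \ell$; for the problems $V_{k,k}$ one would additionally impose $x_0 \in \mathcal{X}_0$. The inner maximum over $i$ in the cost of \eqref{eq.Prob_np_Lin} is handled by solving one linear program per guarantee row and taking the largest value; an unbounded program corresponds to $\theta_{n,\ell} = \infty$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def theta(A, B, C, D, w, Gs, g0, As, a0, ell):
    """Sketch: optimal value of V_{n,ell} with a free initial state.
    Gs = [G^0,...,G^m], shape (q_g, n_d+n_y); As = [A^0,...,A^m]."""
    m = len(Gs) - 1
    nx, nd = B.shape
    ny = C.shape[0]
    T = ell + 2              # time instants 0,...,ell+1 after relabeling p -> 0
    nz = T * (nd + nx + ny)  # z = (d_0..d_{T-1}, x_0..x_{T-1}, y_0..y_{T-1})
    d_of = lambda k: k * nd
    x_of = lambda k: T * nd + k * nx
    y_of = lambda k: T * (nd + nx) + k * ny
    rows_ub, rhs_ub = [], []
    def window(k0, Ms, rhs):   # encodes sum_r M^r (d;y)_{k0+r} <= rhs
        for i in range(len(rhs)):
            row = np.zeros(nz)
            for r, Mr in enumerate(Ms):
                k = k0 + r
                row[d_of(k):d_of(k) + nd] += Mr[i, :nd]
                if Mr.shape[1] > nd:   # guarantee rows also involve y
                    row[y_of(k):y_of(k) + ny] += Mr[i, nd:]
            rows_ub.append(row); rhs_ub.append(rhs[i])
    for k in range(m, ell + 1):    # guarantees at relabeled times m..ell
        window(k - m, Gs, g0)
    for k in range(m, ell + 2):    # assumptions at relabeled times m..ell+1
        window(k - m, As, a0)
    rows_eq, rhs_eq = [], []
    for k in range(T - 1):         # dynamics x_{k+1} = A x_k + B d_k + w
        for i in range(nx):
            row = np.zeros(nz); row[x_of(k + 1) + i] = 1.0
            row[x_of(k):x_of(k) + nx] -= A[i]
            row[d_of(k):d_of(k) + nd] -= B[i]
            rows_eq.append(row); rhs_eq.append(w[i])
    for k in range(T):             # outputs y_k = C x_k + D d_k
        for i in range(ny):
            row = np.zeros(nz); row[y_of(k) + i] = 1.0
            row[x_of(k):x_of(k) + nx] -= C[i]
            row[d_of(k):d_of(k) + nd] -= D[i]
            rows_eq.append(row); rhs_eq.append(0.0)
    best = -np.inf
    for i in range(len(g0)):       # one LP per guarantee row
        c = np.zeros(nz)
        for r in range(m + 1):
            k = ell + 1 - m + r
            c[d_of(k):d_of(k) + nd] += Gs[r][i, :nd]
            c[y_of(k):y_of(k) + ny] += Gs[r][i, nd:]
        res = linprog(-c, A_ub=np.array(rows_ub), b_ub=np.array(rhs_ub),
                      A_eq=np.array(rows_eq), b_eq=np.array(rhs_eq),
                      bounds=(None, None))
        if res.status == 3:        # unbounded LP: theta is +infinity
            return np.inf
        if res.status == 0:
            best = max(best, -res.fun - g0[i])
    return best   # -infinity if all programs are infeasible
\end{verbatim}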
\begin{algorithm} [t]
\caption{Verification for Unperturbed Systems, Ver. 1}
\label{alg.VerifyCertainNoIota}
{\bf Input:} An LTI contract $\mathcal{C} = (\mathcal{D},\Omega)$ of the form \eqref{eq.LinCon}, of depth $m \ge 1$, an LTI system $\Sigma$ of the form \eqref{eq.GoverningEquations2}, and a number $\iota \in \mathbb{N}$ satisfying $\iota \ge m-1$.\\
{\bf Output:} A boolean variable $\mathfrak{b}_{\mathcal{C},\Sigma,\iota}$.
\begin{algorithmic}[1]
\State Consider $V_{n,\ell}$ defined in \eqref{eq.Prob_np_Lin}, with $\mathcal{X}_p = \mathbb{R}^{n_x}$ for $p\ge 1$. Solve them for $(n,\ell) \in\{(k,k): m-1\le k\le \iota\} \cup \{(\iota+1,\iota)\}$, and let $\theta_{n,\ell}$ be their optimal values.
\If{All computed values $\theta_{n,\ell}$ are non-positive}
\State {\bf Return} $\mathfrak{b}_{\mathcal{C},\Sigma,\iota} = $ true.
\Else
\State {\bf Return} $\mathfrak{b}_{\mathcal{C},\Sigma,\iota} = $ false.
\EndIf
\end{algorithmic}
\end{algorithm}
\begin{thm} \label{thm.CertainAlgCorrectness}
Let $\mathcal{C} = (\mathcal{D},\Omega)$ be an LTI contract of depth $m\ge 1$ of the form \eqref{eq.LinCon}, let $\Sigma$ be a system of the form \eqref{eq.GoverningEquations2}, and let $\iota$ be a natural number satisfying $\iota \ge m-1$.
\begin{enumerate}
\item If Algorithm \ref{alg.VerifyCertainNoIota} outputs $\mathfrak{b}_{\mathcal{C},\Sigma,\iota} = $ true, then $\Sigma \vDash \mathcal{C}$.
\item The algorithm solves $\iota-m+3$ linear programs.
\end{enumerate}
\end{thm}
\begin{proof}
The second part is obvious, so we focus on the first. We use two claims to prove the first part of the theorem:
\begin{itemize}
\item[i)] For any $n\in \mathbb{N}$, we have $\theta_{n,n}\le\theta_{n,n-1}\le\cdots\le \theta_{n,m-1}$.
\item[ii)] For any $\ell \ge m-1$, we have $\theta_{\ell,\ell} \le \theta_{\ell+1,\ell} = \theta_{\ell+2,\ell} = \cdots$.
\end{itemize}
We first explain why these claims imply the first part of the theorem, and then prove that the claims hold.
Assume both claims hold and that $\mathfrak{b}_{\mathcal{C},\Sigma,\iota} =$ true, i.e., that $\theta_{n,n} \le 0$ for $n=m-1,\ldots,\iota$ and that $\theta_{\iota+1,\iota} \le 0$, and we prove that $\Sigma \vDash \mathcal{C}$. By Corollary \ref{cor.Thetann}, it suffices to show that $\theta_{n,n} \le 0$ for $n\ge \iota + 1$. If $n \ge \iota+1$, then $\theta_{n,n} \le \theta_{n,\iota}$ (by the first claim), and $\theta_{n,\iota} = \theta_{\iota+1,\iota}$ (by the second claim). As $\theta_{\iota+1,\iota} \le 0$ by assumption, we conclude that $\theta_{n,n} \le 0$. As $n \ge \iota+1$ was arbitrary, we conclude that $\Sigma \vDash \mathcal{C}$.
We now prove claim i). Fix some $p$ such that $1\le p \le n-m+1$, so that $\ell = n-p$ satisfies $n-1\ge \ell \ge m-1$. We can relate the problem $V_{n,n-p+1}$ to the problem $V_{n,n-p}$ by altering some of its constraints. We first remove the constraints that the guarantees, assumptions and dynamics hold at time $p-1$. We also note that while the problem $V_{n,n-p}$ restricts $x_p \in \mathbb{R}^{n_x}$, $V_{n,n-p+1}$ restricts $x_p$ to be achieved from the dynamics (via $x_{p-1} \in \mathbb{R}^{n_x}$ and $x_p = Ax_{p-1} + Bd_{p-1}+w$). Thus, $V_{n,n-p+1}$ has the same cost function as $V_{n,n-p}$, but stricter constraints. In particular, as both are maximization problems, we conclude that $\theta_{n,n-p+1} \le \theta_{n,n-p}$, as desired.
As for claim ii), we similarly relate $V_{n+1,\ell}$ and $V_{n,\ell}$ by renaming the variables $d_k,x_k,y_k$ to $d_{k+1},x_{k+1},y_{k+1}$, and changing the set of initial conditions from $\mathcal{X}_0$ to $\mathbb{R}^{n_x}$ (only if $n=\ell$; for $n>\ell$ both problems already constrain $x_p \in \mathbb{R}^{n_x}$, so they are identical). This change can only relax the constraints, and only in the case $n=\ell$.
Thus, $\theta_{\ell,\ell} \le \theta_{\ell+1,\ell} = \theta_{\ell+2,\ell} = \cdots$.
\end{proof}
Theorem \ref{thm.CertainAlgCorrectness} shows that Algorithm \ref{alg.VerifyCertainNoIota} can be used to verify that an unperturbed system $\Sigma$ satisfies an LTI contract $\mathcal{C}$. The algorithm uses a parameter $\iota \ge m-1$ dictating the number of linear programs solved by the algorithm. Namely, the first $\iota-m+2$ programs deal with the initial conditions of the system, and the last program deals with the long-term behaviour of the system.
As $\iota$ becomes larger, the algorithm becomes less (over-)conservative: if $\iota_1 \le \iota_2$ then $\theta_{n,\iota_2} \le \theta_{n,\iota_1}$, so $\theta_{n,\iota_1} \le 0$ implies $\theta_{n,\iota_2} \le 0$. However, larger values of $\iota$ result in a larger number of linear programs, which are also more complex as they have more variables.
We must find a systematic way to choose the parameter $\iota$ effectively. As the following theorem shows, $\theta_{\iota+1,\iota}$ can be infinite for one value of $\iota$, but finite (and non-positive) for other values of $\iota$:
\begin{thm} \label{thm.ThetaInduciveLTI}
Let $\mathcal{C} = (\mathcal{D},\Omega)$ and $\Sigma$ be as in Theorem \ref{thm.CertainAlgCorrectness}. Let $\nu$ be the observability index of $\Sigma$, and define $\mathcal{X}_p = \mathbb{R}^{n_x}$ for all $p\neq 0$.
Define the sets:
\begin{align*}
\mathcal{D}_\star &= \left\{(d_0,d_1,\ldots,d_m): \sum_{r=0}^m \mathfrak A^r d_r \le \mathfrak a^0\right\},
\end{align*}
\begin{align*}
\Omega_\star &= \left\{(d_0,y_0,\ldots,d_m,y_m) : \sum_{r=0}^m \mathfrak G^r \begin{bmatrix} d_r \\ y_r \end{bmatrix} \le \mathfrak g^0\right\}
\end{align*}
Assume $\mathcal{D}_\star$ is bounded, and that for any bounded set $E \subseteq \mathbb{R}^{(m+1)n_d}$, the intersection $\Omega_\star \cap (E\times \mathbb{R}^{(m+1)n_y})$ is bounded. Then $\theta_{n,\ell} < \infty$ for $n \ge \ell \ge \max\{m,\nu\}-1$, and $\theta_{n,\ell} = \infty$ if $n,\nu-1 > \ell \ge m-1$.
\end{thm}
\begin{proof}
Define $\mu = \max\{m,\nu\}$. We first show that $\theta_{n,\mu-1} < \infty$, implying that $\theta_{n,\ell} < \infty$ for all $\ell \ge \mu - 1$ by claim i) in the proof of Theorem \ref{thm.CertainAlgCorrectness}.
Consider a feasible solution
$\{d_k,x_k,y_k\}_{k=n-\mu+1}^{n+1}$ of $V_{n,\mu-1}$. Because $\mathcal{D}_\star$ is bounded, $(\mathcal{D}_\star \times \mathbb{R}^{(m+1)n_y}) \cap \Omega_\star$ is bounded. Thus, for some constant $M_0>0$, we have $\|d_k\|,\|y_k\|,\|d_{n+1}\| \le M_0$ for $k=n-\mu+1,\ldots,n$ (as $\mu \ge m$).
However, as $\mu \ge \nu$, $p_\mathcal{O}(x_{n-\mu+1})$ can be recovered as a linear combination of $y_{n-\mu+1},\ldots,y_n$ and $d_{n-\mu+1},\ldots,d_n$ using $\mathcal{O}_\nu$. We thus find some $M_1>0$, depending on $M_0$ and $\mathcal{O}_\nu$, such that $\|p_\mathcal{O}(x_{n-\mu+1})\| \le M_1$.
As $\|d_{n-\mu+k}\| \le M_0$ for all $k$, we obtain $\|p_\mathcal{O}(x_{n-\mu+k})\| \le M_k$ for $k=1,2,\ldots,\mu+1$, where $M_k = \|A\| M_{k-1} + \|B\|M_0 + \|w\|$. Thus $\|y_{n+1}\| \le \|C\| M_{\mu+1} + \|D\|M_0$, implying that the cost function of $V_{n,\mu-1}$ is bounded over the feasible set, and therefore $\theta_{n,\mu-1} < \infty$.
For the second part, we note that if $\nu-1 > \ell \ge m-1$, then $\nu > m$. In particular, claims i) and ii) in the proof of Theorem \ref{thm.CertainAlgCorrectness} imply it suffices to show that $\theta_{n,\nu-2} = \infty$.
By definition of the observability index, ${\rm rank}~\mathcal{O}_{\nu} > {\rm rank}~\mathcal{O}_{\nu-1}$, implying there exists a non-zero vector $\xi \in \ker(\mathcal{O}_\nu)^\perp \cap \ker(\mathcal{O}_{\nu-1})$, so $CA^k \xi = 0$ for $k\le \nu-2$, but $CA^{\nu-1}\xi \neq 0$. Take any feasible solution $\{d_k,x_k,y_k\}_{k=n-\nu+2}^{n+1}$ and some $\alpha \in \mathbb{R}$ to be chosen later. Define a new solution
$\{\check d_k,\check x_k,\check y_k\}_k$ by $\check d_k = d_k,$
\begin{align*}
\check x_k = \begin{cases} x_{n-\nu+2} + \alpha\xi & k=n-\nu+2, \\ A\check x_{k-1} + B\check d_{k-1} & {\rm else} \end{cases},
\end{align*}
and $\check y_k = C\check x_k + D\check d_k$. We have that $\check d_k = d_k$ for all $k$, and $\check y_k = y_k$ for any $k\le n$, as $\check y_k - y_k = \alpha CA^{k-(n-\nu+2)}\xi = 0$ whenever $k-(n-\nu+2) \le \nu-2$. Thus $\{\check d_k,\check x_k,\check y_k\}_{k=n-\nu+2}^{n+1}$ forms a feasible solution of $V_{n,\nu-2}$.
Moreover, $\check y_{n+1} = y_{n+1} + \alpha CA^{\nu-1}\xi$. We claim that for any $M>0$, there exists some $\alpha$ such that the value of the cost function of $V_{n,\nu-2}$ for the feasible solution $\{\check d_k,\check x_k,\check y_k\}_{k=n-\nu+2}^{n+1}$ is at least $M$.
Consider the set $Q = \Omega_\star\cap(\mathcal{D}_\star \times \mathbb{R}^{(m+1)n_y})$. By assumption, $Q$ is bounded, hence $(\check d_{n+1-m},\check y_{n+1-m},\ldots,\check d_{n+1},\check y_{n+1})\not\in Q$ for any $\alpha\in \mathbb{R}$ such that $|\alpha|$ is large enough. This is only possible if there exists some $i$ such that the $i$-th row of $\mathfrak G^m$, denoted $\mathfrak G_i^m$, satisfies $(\mathfrak G_i^m)^\top \left[\begin{smallmatrix} 0 \\ CA^{\nu-1}\xi \end{smallmatrix}\right] \neq 0$. If we denote the sign of $(\mathfrak G_i^m)^\top \left[\begin{smallmatrix} 0 \\ CA^{\nu-1}\xi \end{smallmatrix}\right]$ by $\lambda$ and choose $\alpha = \lambda t$ for $t$ arbitrarily large, the value of the cost function grows arbitrarily large. Thus $\theta_{n,\nu-2} = \infty$.
\end{proof}
Theorem \ref{thm.ThetaInduciveLTI} suggests a value for the parameter $\iota$ when running Algorithm \ref{alg.VerifyCertainNoIota}. Indeed, it shows that for guarantees defined by compact sets, the algorithm always declares ``false" if $\iota < \max\{m,\nu\}-1$, regardless of whether $\Sigma \vDash \mathcal{C}$ or $\Sigma \not\vDash \mathcal{C}$. As stated above, larger values of $\iota$ result in a less (over-)conservative algorithm, but also in a more complex and slower algorithm. For that reason, we run Algorithm \ref{alg.VerifyCertainNoIota} with $\iota = \max\{m,\nu\}-1$. We explicitly state this in Algorithm \ref{alg.VerifyCertainIota}.
\begin{algorithm} [h]
\caption{Verification for Unperturbed Systems, Ver. 2}
\label{alg.VerifyCertainIota}
{\bf Input:} An LTI contract $\mathcal{C} = (\mathcal{D},\Omega)$ of the form \eqref{eq.LinCon}, of depth $m \ge 1$, an LTI system $\Sigma$ of the form \eqref{eq.GoverningEquations2}.\\
{\bf Output:} A boolean variable $\mathfrak{b}_{\mathcal{C},\Sigma}$
\begin{algorithmic}[1]
\State Compute the observability index $\nu$ of $\Sigma$.
\State Run Algorithm \ref{alg.VerifyCertainNoIota} with $\iota = \max\{m,\nu\}-1$, outputting the answer $\mathfrak{b}_{\mathcal{C},\Sigma,\iota}$.
\State {\bf Return} $\mathfrak{b}_{\mathcal{C},\Sigma} = \mathfrak{b}_{\mathcal{C},\Sigma,\iota}$
\end{algorithmic}
\end{algorithm}
The correctness of Algorithm \ref{alg.VerifyCertainIota}, as well as an estimate on its complexity, follow from Theorem \ref{thm.CertainAlgCorrectness}.
\section{Verification for Perturbed Systems} \label{sec.Uncertain}
The previous section provides an efficient method for verifying that a given unperturbed LTI system satisfies a given contract. We now extend our results to dynamical control systems with perturbations in the form of process and measurement noise, prescribing LP-based methods for verifying satisfaction.
For this section, we fix an LTI contract $\mathcal{C} = (\mathcal{D},\Omega)$ of the form \eqref{eq.LinCon}. We also fix a system $\Sigma$ as in \eqref{eq.GoverningEquations}, where the sets $\mathcal{P},\mathcal{R}$ correspond to process noise and measurement noise.
\begin{rem}
Suppose we want to verify that $\Pi \vDash \mathcal{C}$ for some \emph{nonlinear} system $\Pi$. In many cases, it suffices to show that $\Sigma \vDash \mathcal{C}$ for some LTI system $\Sigma$ with appropriately chosen process and measurement noise. For example, if $\Pi$ is governed by the equations $x(k+1) = x(k) + \sin(x(k))$ and $y(k) = x(k)$, we can consider the perturbed LTI system $\Sigma$ governed by the equations $x(k+1) = x(k) + \omega(k)$ and $y(k) = x(k)$, where $|\omega(k)|\le 1$. Trajectories of $\Pi$ are also trajectories of $\Sigma$, so verifying that $\Sigma \vDash \mathcal{C}$ is sufficient to prove that $\Pi \vDash \mathcal{C}$.
\end{rem}
We can consider an analogue of $V_{n,\ell}$ for the perturbed system $\Sigma$ and the contract $\mathcal{C}$, which would be of the form:
\begin{align} \label{eq.Vnp_noise}
\max_{d_k,x_k,y_k} ~&~ \max_i \left[{\rm e}_i^\top\left( \sum_{r=0}^{m} \mathfrak G^{r} \begin{bmatrix}d_{n+1-m+r} \\ y_{n+1-m+r}\end{bmatrix} - \mathfrak g^0\right)\right]\\ \nonumber
%
{\rm s.t.} ~&~ \sum_{r=0}^{m} \mathfrak G^r \begin{bmatrix}d_{k-m+r} \\ y_{k-m+r}\end{bmatrix} \le \mathfrak g^0~,\forall k=m+p,\ldots,n,\\ \nonumber
%
~&~\sum_{r=0}^{m} \mathfrak A^r d_{k-m+r} \le \mathfrak a^0~,\forall k=m+p,\ldots,n+1, \\ \nonumber
%
~&~x_{k+1} = Ax_k + Bd_k+E\omega_k~,\forall k=p,\ldots,n, \\ \nonumber
~&~y_k = Cx_k + Dd_k+F\zeta_k~,\forall k=p,\ldots,n+1, \\ \nonumber
%
~&~x_p \in \mathcal{X}_p,\\ \nonumber
%
~&~\omega_k\in\mathcal{P},\zeta_k\in\mathcal{R}~,\forall k=p,\ldots,n+1.
\end{align}
As before, the computational tractability of the problem depends on the set $\mathcal{X}_p$, as well as on the sets $\mathcal{R}$ and $\mathcal{P}$. If $\mathcal{X}_p$, $\mathcal{P}$ and $\mathcal{R}$ are all defined by linear inequalities, we get a linear program. However, if the sets $\mathcal{P},\mathcal{R}$ are not defined by linear inequalities, we might get a nonlinear, or even a non-convex, problem. For example, an ellipsoidal bound on the process noise, $\mathcal{P} = \{\omega : \omega^\top P \omega \le \gamma^2\}$, yields a quadratic optimization problem with $n-p+1 = \ell+1$ quadratic constraints. Another case is when the perturbation stems from sensor noise, and the sensor provides an estimate of its magnitude. In that case, we can write $\omega = (\delta,\Delta)$ where $\delta \in \mathbb{R}^{n_\delta}$ is the sensor noise, $\Delta\in \mathbb{R}$ is the estimate of its size satisfying $0 \le \Delta \le 1$, and $\|\delta\| \le \Delta$ holds. In this case, the optimization problem turns out to be non-convex.
\subsection{Comparison-based Verification}
To avoid nonlinear (or non-convex) problems, we take a different approach. Intuitively, the perturbed system $\Sigma$ satisfies the contract $\mathcal{C}$ if and only if the nominal version of $\Sigma$, with no process or measurement noise, satisfies a robustified version of $\mathcal{C}$. The goal of this section is to make this claim precise. The nominal version of $\Sigma$, denoted $\hat{\Sigma}$, is governed by:
\begin{align} \label{eq.Noiseless}
&x(0) \in \mathcal{X}_0,\\ \nonumber
&x(k+1) = Ax(k) + Bd(k),~\forall k\in \mathbb{N},\\ \nonumber
&y(k) = Cx(k) + Dd(k),~\forall k\in \mathbb{N}.
\end{align}
The system $\hat{\Sigma}$ is an unperturbed LTI system, so checking whether it satisfies some LTI contract is possible using Algorithm \ref{alg.VerifyCertainIota}.
The following theorem precisely defines the robustified version of $\mathcal{C}$:
\begin{thm} \label{thm.UncertainEquiv}
Let $\Sigma$ be a perturbed LTI system governed by \eqref{eq.GoverningEquations}, and let $\mathcal{C}$ be an LTI contract of the form \eqref{eq.LinCon}, where $\mathfrak G^i = [\mathfrak G^i_d,\mathfrak G^i_y]$ for $i=0,\ldots,m$. Define the auxiliary system $\hat \Sigma$ as \eqref{eq.Noiseless}, and let $T = \sum_{r=0}^m \mathfrak G_y^r C A^r$.
The system $\Sigma$ satisfies $\mathcal{C}$ if and only if $\hat{\Sigma}$ satisfies the contract $\mathcal{C}^\prime = (\mathcal{D},\Omega^\prime)$, where:\small
\begin{align} \label{eq.Cprime}
\Omega^\prime = \{&(d(\cdot),y(\cdot))\in \mathcal{S}^{n_d}\times \mathcal{S}^{n_y}: \\&\sum_{r=0}^m \mathfrak G^r\begin{bmatrix}d(k-m+r) \\ y(k-m+r)\end{bmatrix} \le \mathfrak g^0 - \tau^k, ~\forall k\ge m\},\normalsize \nonumber
\end{align}\normalsize
and the $i$-th entry of the vector $\tau^k$ is given by $\tau^{\mathcal R}_i + \tau^{\mathcal P,{\rm e}}_i + \sum_{\varsigma=0}^{k-m-1} \tau^{\mathcal{P},{\rm m},\varsigma}_i$, where:
\vspace{-5pt}
\small
\begin{align} \label{eq.taus}
&\tau^{\mathcal R}_i = \sum_{\ell=0}^m \max \left\{{\rm e}_i^\top \mathfrak G_y^\ell F \zeta:~\zeta \in \mathcal{R}\right\},\\ \nonumber
&\tau^{\mathcal P,{\rm e}}_i = \sum_{\ell = 0}^{m-1} \max\left\{\left({\rm e}_i^\top\sum_{r=\ell+1}^m\mathfrak G_y^r CA^{r-1-\ell}E\right)\omega :~\omega \in \mathcal{P} \right\},\\ \nonumber
&\tau^{\mathcal P,{\rm m},\varsigma}_i = \max \left\{{\rm e}_i^\top T A^\varsigma E \omega: ~\omega \in \mathcal{P}\right\},~\forall \varsigma\in\mathbb{N}.
\end{align} \normalsize
\end{thm}
\begin{proof}
We fix some $d(\cdot)\in \mathcal{D}$ and consider a trajectory $(d(\cdot),x(\cdot),y(\cdot))$ of $\Sigma$. By definition, there exist some signals $\omega(\cdot),\zeta(\cdot)$ such that for any $k\in \mathbb{N}$, we have
\begin{align*}
\begin{cases}
\omega(k)\in \mathcal{P},~\zeta(k) \in \mathcal{R},~x(0) \in \mathcal{X}_0,\\
x(k+1) = Ax(k) + Bd(k) + E\omega(k),\\
y(k) = Cx(k) + Dd(k) + F\zeta(k).
\end{cases}
\end{align*}
We consider the corresponding trajectory $(d(\cdot),\hat x(\cdot),\hat y(\cdot))$ of $\hat{\Sigma}$ with neither process nor measurement noise, i.e., we define
\begin{align*}
\begin{cases}
\hat x(0) = x(0),\\
\hat x(k+1) = A\hat x(k) + Bd(k),~\forall k\in \mathbb{N}.\\
\hat y(k) = C\hat x(k) + Dd(k),~\forall k\in \mathbb{N}.
\end{cases}
\end{align*}
It is clear that for any time $t\in \mathbb{N}$, we have $y(t) = \hat y(t) + \tilde y(t)$ where $\tilde y(t) = F\zeta(t) + \sum_{s=0}^{t-1}CA^{t-s-1}E\omega(s)$. Fixing a time $k\ge m$, the guarantee of the contract $\mathcal{C}$ can be written as
\begin{align*}
\sum_{r=0}^m \mathfrak G^r \begin{bmatrix}d(k-m+r) \\ \hat y(k-m+r)\end{bmatrix} + \sum_{r=0}^m \mathfrak G_y^r \tilde y(k-m+r) \le \mathfrak g^0,
\end{align*}
or equivalently, by plugging the exact form of $\tilde y$, as
\small
\begin{align} \label{eq.Ineq1}
\sum_{r=0}^m &\mathfrak G^r \begin{bmatrix}d(k-m+r) \\ \hat y(k-m+r)\end{bmatrix} \le \mathfrak g^0 - \\& \nonumber \sum_{r=0}^m \mathfrak G_y^r \left(F\zeta(k-m+r) + \sum_{s=0}^{k-m+r-1}CA^{k-m+r-s-1}E\omega(s)\right)
\end{align} \normalsize
By exchanging the order of summation, the double sum on the right-hand side of \eqref{eq.Ineq1} can be written as
\begin{align} \label{eq.UncertainTechnical}
\sum_{r=0}^m &\mathfrak G_y^rF\zeta(k-m+r) +\\& \nonumber \sum_{s=0}^{k-1}\sum_{r=\max\{m+s+1-k,0\}}^m \mathfrak G_y^rCA^{k-m+r-s-1}E\omega(s).
\end{align}
We can break the second sum into two double sums, one from $s=0$ to $s=k-m-1$ (for which the sum on $r$ starts at $0$), and one from $s=k-m$ to $s=k-1$ (for which the sum on $r$ starts at $m+s+1-k$). We thus get
\begin{align*}
\sum_{s=0}^{k-1}&\sum_{r=\max\{m+s+1-k,0\}}^m \left(\mathfrak G_y^rCA^{k-m+r-s-1}E\right)\omega(s) \\= &\sum_{s=0}^{k-m-1}\sum_{r=0}^m \mathfrak G_y^rCA^{k-m+r-s-1}E\omega(s) \\&\hspace{-9pt}+ \sum_{s=k-m}^{k-1}\sum_{r=m+s+1-k}^m \mathfrak G_y^rCA^{k-m+r-s-1}E\omega(s)
\end{align*}
Replacing the summation indices, $\varsigma = k-m-1-s$ in the first double sum and $\sigma = s-k+m$ in the second, we arrive at the following expression:
\begin{align*}
&\sum_{\varsigma=0}^{k-m-1}\left(\sum_{r=0}^m \mathfrak G_y^r CA^r\right) A^\varsigma E \omega(k-m-1-\varsigma) \\ +&\hspace{7pt}\sum_{\sigma = 0}^{m-1} \left(\sum_{r=\sigma+1}^m\mathfrak G_y^r CA^{r-1-\sigma}E\right)\omega(\sigma+k-m).
\end{align*}
We plug this expression in place of the double sum in \eqref{eq.UncertainTechnical}, which is then plugged into \eqref{eq.Ineq1}. Fixing $\hat y(\cdot)$, the inequality must hold for any choice of $\zeta(\cdot),\omega(\cdot)$ (corresponding to different trajectories of $\Sigma$ for the same input $d(\cdot)$). Optimizing over $\omega(t)\in \mathcal{P},\zeta(t)\in\mathcal{R}, \forall t$ gives
\begin{align*}
\sum_{r=0}^m \mathfrak G^r \begin{bmatrix}d(k-m+r) \\ \hat y(k-m+r)\end{bmatrix} \le \mathfrak g^0 - \tau^{\mathcal{R}} - \sum_{\varsigma=0}^{k-m-1} \tau^{\mathcal P,{\rm m},\varsigma} - \tau^{\mathcal{P},{\rm e}},
\end{align*}
concluding the proof.
\end{proof}
Loosely speaking, the result of Theorem \ref{thm.UncertainEquiv} transfers the effect of perturbations from the system to the contract. In \eqref{eq.taus}, $\tau_i^\mathcal{R}$ captures the effect of measurement noise, whereas $\tau_i^{\mathcal{P},\rm e}$ and $\tau_i^{\mathcal{P},{\rm m},\varsigma}$ account for the effect of process noise (whose effect propagates through time). The theorem therefore prescribes a comparison-based method of asserting that a perturbed LTI system $\Sigma$ satisfies a given time-invariant contract $\mathcal{C}$. Namely, we can check that an auxiliary (unperturbed) LTI system $\hat \Sigma$ satisfies another, robustified contract $\mathcal{C}^\prime$. The contract $\mathcal{C}^\prime = (\mathcal{D},\Omega^\prime)$ is defined by \emph{time-dependent} linear inequalities, as the vector $\tau^k$ explicitly depends on $k$. As a result, the methods exhibited in Section \ref{sec.Cert} are ineffective, as they assume the contract is time-invariant. In the next section, we overcome this problem using refinement and approximation.
\subsection{Tractability through Refinement and Approximation}
In order to overcome the problem of time-dependence, and to use the methods of Section \ref{sec.Cert}, we refine $\mathcal{C}^\prime$ by a time-invariant contract $\hat{\mathcal{C}} = (\mathcal{D},\hat \Omega)$, where:
\begin{align} \label{eq.C_hat}
\hat \Omega = \{(&d(\cdot),y(\cdot))\in \mathcal{S}^{n_d}\times \mathcal{S}^{n_y}:\\\nonumber &\sum_{r=0}^m\mathfrak G^r\begin{bmatrix}d(k-m+r) \\ y(k-m+r)\end{bmatrix} \le \mathfrak g^0 - \tau^\infty, ~\forall k\ge m\},
\end{align}
and we define $\tau^\infty_i = \tau^{\mathcal R}_i + \tau^{\mathcal P,{\rm e}}_i + \sum_{\varsigma=0}^{\infty} \tau^{\mathcal{P},{\rm m},\varsigma}_i$. It is obvious that $\hat{\mathcal{C}} \preccurlyeq \mathcal{C}^\prime$ if $\tau^{\mathcal{P},{\rm m},\varsigma}_i \ge 0$, which is guaranteed if $0\in \mathcal{P}$. In fact, $\hat{\mathcal{C}}$ is the ``largest" or ``most lenient" time-invariant contract which refines $\mathcal{C}^\prime$.
However, this raises a new problem: computing the vector $\tau^\infty$ requires computing $\tau^{\mathcal{P},{\rm m},\varsigma}_i$ for all $\varsigma\in \mathbb{N}$ and all $i$, i.e., it requires solving infinitely many optimization problems. We address this issue by truncating the infinite sum and overestimating its tail. This approach is formalized in the following proposition:
\begin{prop} \label{prop.Taus}
Suppose the assumptions of Theorem \ref{thm.UncertainEquiv} hold, and let $\hat{\mathcal{C}} = (\mathcal{D},\hat{\Omega})$ be as in \eqref{eq.C_hat}. Assume that for some $N_0 \in \mathbb{N}$, the matrix $A^{N_0}$ is contracting, i.e., the operator norm $\|A^{N_0}\|$ is strictly smaller than $1$. Moreover, assume that $\mathcal{P},\mathcal{R}$ are bounded sets. Then for any $i$, $\tau^\infty_i < \infty$. Furthermore, define $M_\mathcal{P} = \max_{\omega\in\mathcal{P}}\|\omega\|$ and $K_{A,N_0} = 1+\|A\|+\ldots+\|A^{N_0-1}\|$. Then for any $\epsilon > 0$, if we define a contract $\hat{\mathcal{C}}_\epsilon = (\mathcal{D},\hat{\Omega}_\epsilon)$ by:
\begin{align} \label{eq.C_hat_eps}
\hat \Omega_\epsilon = \{(&d(\cdot),y(\cdot))\in \mathcal{S}^{n_d}\times \mathcal{S}^{n_y}:\\\nonumber &\sum_{r=0}^m\mathfrak G^r\begin{bmatrix}d(k-m+r) \\ y(k-m+r)\end{bmatrix} \le \mathfrak g^0 - \tau^{\epsilon}, ~\forall k\ge m\},
\end{align}
then $\hat{\mathcal{C}}_\epsilon \preccurlyeq \hat{\mathcal{C}}$, where the entries of the vector $\tau^\epsilon$ are defined by:
\begin{align*}
&\tau^{\epsilon}_i = \tau^{\mathcal R}_i + \tau^{\mathcal P,{\rm e}}_i + \sum_{\varsigma=0}^{N(\epsilon,i)-1} \tau^{\mathcal{P},{\rm m},\varsigma}_i + \epsilon,
\end{align*}
and:
\vspace{-15pt}
\small
\begin{align} \label{eq.NeiStableA}
&N(\epsilon,i) = \max\left\{\left\lceil N_0 \log_{\frac{1}{\|A^{N_0}\|}} \left( \frac{\|T^\top e_i\|\|E\|K_{A,N_0} M_\mathcal{P} }{(1-\|A^{N_0}\|)\epsilon}\right)\right\rceil,N_0\right\}
\end{align}
\end{prop}
\begin{proof}
It is enough to show that for any $i$, the inequality $\tau_i^\infty \le \tau_i^{\epsilon}$ holds, or equivalently, that $\sum_{\varsigma = N(\epsilon,i)}^\infty \tau_i^{\mathcal{P},{\rm m},\varsigma} \le \epsilon$ holds.
For any $\varsigma \in \mathbb{N}$, we have:
\begin{align} \label{eq.TauBound}
\tau^{\mathcal P,{\rm m},\varsigma}_i &= \max \left\{e_i^\top T A^\varsigma E \omega: ~\omega \in \mathcal{P}\right\} \\&\le \nonumber \|A^\varsigma\|\|E\|\|T^\top e_i\| M_\mathcal{P}
\end{align}
By using the inequality $\|A^{aN_0+b}\| \le \|A^{N_0}\|^a\|A\|^b$ for $a,b\in\mathbb{N}$, we conclude that:
\begin{align*}
\sum_{\varsigma = N(\epsilon,i)}^{\infty} \|A^\varsigma\| &\le \left(\sum_{t=0}^{N_0-1} \|A\|^t\right) \left(\sum_{\varsigma= N(\epsilon,i)/N_0}^\infty \|A^{N_0}\|^\varsigma\right) \\&\le K_{A,N_0}\frac{\|A^{N_0}\|^{{N(\epsilon,i)}/N_0}}{1-\|A^{N_0}\|},
\end{align*}
where we use $\|A^{N_0}\| < 1$. Thus, we get:
\begin{align*}
\sum_{\varsigma = N(\epsilon,i)}^\infty \tau_i^{\mathcal{P},{\rm m},\varsigma} \le \|T^\top e_i\|\|E\|K_{A,N_0}M_\mathcal{P} \frac{\|A^{N_0}\|^{{N(\epsilon,i)}/N_0}}{1-\|A^{N_0}\|}.
\end{align*}
Plugging in $N(\epsilon,i)$ from \eqref{eq.NeiStableA}, we conclude that the expression on the right-hand side is at most $\epsilon$, as desired.
\end{proof}
Theorem \ref{thm.UncertainEquiv} and Proposition \ref{prop.Taus} suggest the following comparison-based algorithm for verifying perturbed systems, at least when the assumptions of Proposition \ref{prop.Taus} hold:
\begin{algorithm} [h]
\caption{Verification for Perturbed Systems}
\label{alg.VerifyUncertain}
{\bf Input:} An LTI contract $\mathcal{C}$ of the form \eqref{eq.LinCon}, a perturbed system $\Sigma$ of the form \eqref{eq.GoverningEquations}, and a conservatism parameter $\epsilon > 0$.\\
{\bf Output:} A boolean variable $\mathfrak{b}_{\mathcal{C},\Sigma}$.
\begin{algorithmic}[1]
\State Define the auxiliary noiseless system $\hat{\Sigma}$.
\For {each $i$},
\State Compute $N(\epsilon,i)$ as in \eqref{eq.NeiStableA}.
\State Compute $\tau^{\mathcal R}_i,\tau^{\mathcal P,{\rm e}}_i$ and $\tau^{\mathcal P,{\rm m},\varsigma}_i$ according to \eqref{eq.taus} for $\varsigma = 0,1,\ldots,N(\epsilon,i)-1$ .
\State Compute $\tau^{\epsilon}_i = \tau^{\mathcal R}_i + \tau^{\mathcal P,{\rm e}}_i + \sum_{\varsigma=0}^{N(\epsilon,i)-1} \tau^{\mathcal{P},{\rm m},\varsigma}_i + \epsilon$.
\EndFor
\State Define the contract $\hat{\mathcal{C}}_\epsilon = (\mathcal{D},\hat{\Omega}_\epsilon)$ as in \eqref{eq.C_hat_eps}.
\State Run Algorithm \ref{alg.VerifyCertainIota} for the system $\hat{\Sigma}$ and the contract $\hat{\mathcal{C}}_\epsilon$, outputting the answer $\mathfrak{b}_{\hat{\mathcal{C}}_\epsilon,\hat \Sigma}$ .
\State {\bf Return} $\mathfrak{b}_{\mathcal{C},\Sigma} = \mathfrak{b}_{\hat{\mathcal{C}}_\epsilon,\hat \Sigma}$ .
\end{algorithmic}
\end{algorithm}
\subsection{Properties of Algorithm \ref{alg.VerifyUncertain}} \label{subsec.AnalysisUncertain}
The rest of this section is devoted to studying the correctness, the assumptions, the conservatism, and the computational complexity of Algorithm \ref{alg.VerifyUncertain}. First, we claim that the algorithm correctly verifies satisfaction:
\begin{thm}[Correctness]
Suppose the assumptions of Theorem \ref{thm.UncertainEquiv} hold. If Algorithm \ref{alg.VerifyUncertain} outputs $\mathfrak{b}_{\mathcal{C},\Sigma} =$ true, then $\Sigma \vDash \mathcal{C}$.
\end{thm}
\begin{proof}
Algorithm \ref{alg.VerifyUncertain} outputs $\mathfrak{b}_{\mathcal{C},\Sigma} =$ true if and only if Algorithm \ref{alg.VerifyCertainIota}, when applied on the nominal system $\hat{\Sigma}$ and the robustified contract $\hat{\mathcal{C}}_\epsilon$, outputs $\mathfrak{b}_{\hat\mathcal{C}_\epsilon,\hat\Sigma} =$ true. In that case, Theorem \ref{thm.CertainAlgCorrectness} implies that $\hat{\Sigma} \vDash \hat{\mathcal{C}}_\epsilon$, hence $\hat{\Sigma} \vDash \mathcal{C}^\prime$ as $\hat{\mathcal{C}}_\epsilon \preccurlyeq \hat{\mathcal{C}} \preccurlyeq \mathcal{C}^\prime$. Thus, Theorem \ref{thm.UncertainEquiv} implies that $\Sigma \vDash \mathcal{C}$.
\end{proof}
We now study the assumptions of Algorithm \ref{alg.VerifyUncertain}, arguing that they are not too restrictive.
\begin{thm}[Generality of Assumptions]\label{thm.AssumptionTau}
Suppose the assumptions of Theorem \ref{thm.UncertainEquiv} hold. Then:
\begin{itemize}
\item There exists $N_0 \in \mathbb{N}$ such that $\|A^{N_0}\|<1$ if and only if $A$ is a strictly stable matrix, i.e., all of its eigenvalues are inside the open unit disc in the complex plane.
\item Suppose that $A$ is not strictly stable, that $0\in \mathcal{R}$, and that the set $\mathcal{P}$ contains a neighborhood of the origin. Suppose further that $E$ has full row rank and that the image of $T^\top$ is not contained within the stable subspace of $A$. Moreover, assume that for some $d_0,d_1,\ldots,d_m$, the following set is bounded and non-empty:
\begin{align*}
Q = \left\{(y_0,\ldots,y_m) : \sum_{r=0}^m \mathfrak G^r \begin{bmatrix} d_r \\ y_r \end{bmatrix} \le \mathfrak g^0\right\}.
\end{align*}
Then $\Sigma \not\vDash \mathcal{C}$.
\end{itemize}
\end{thm}
The first claim implies the algorithm is applicable for strictly stable systems, and the second shows that systems which are not strictly stable cannot satisfy compact specifications, at least generically (as the matrix $T$ depends on the constraints).
\begin{proof}
We prove the claims in order. First, we denote the spectral radius of the matrix $A$ by $\rho(A) = \max\{|\lambda|: \exists v\neq 0, Av=\lambda v\}$. This is the maximum absolute value of an eigenvalue of $A$. By definition, $A$ is strictly stable if and only if $\rho(A)<1$. Moreover, Gelfand's formula states that $\lim_{n\to \infty} \|A^n\|^{1/n} = \inf_{n\ge 1} \|A^n\|^{1/n}= \rho(A)$ \cite{Lax2002}. Thus, there exists some $N_0 \in \mathbb{N}$ such that $\|A^{N_0}\|<1$ if and only if $\rho(A) < 1$, as claimed.
The proof of the second claim is relegated to the appendix, as it is a bit more involved. We will, however, give a sketch of the proof here. First, we show that most entries of the vector $\tau^k$ grow arbitrarily large as $k$ grows to infinity. Namely, we show that the $i$-th entry grows arbitrarily large if $T^\top {\rm e}_i$ is outside the stable subspace of $A$. In the second stage, we use this to show that the inequality defining the set $\Omega^\prime$ in \eqref{eq.Cprime} defines an empty set if $k$ is large enough. We will then conclude from Theorem \ref{thm.UncertainEquiv} that $\Sigma \not \vDash \mathcal{C}$.
\end{proof}
Next, we study the algorithm's approximation properties:
\begin{thm} \label{thm.epsTau}
Suppose that the assumptions of Theorem \ref{thm.UncertainEquiv} hold. Let $n,\ell\in \mathbb{N}$ such that $n\ge \ell \ge m-1$, and let $\epsilon > 0$. We denote the problems \eqref{eq.Prob_np_Lin} associated with $\hat \Sigma \vDash \hat{\mathcal{C}}$ and $\hat \Sigma \vDash \hat{\mathcal{C}}_\epsilon$ by $V_{n,\ell}$ and $V_{n,\ell|\epsilon}$ respectively, and their values by $\theta_{n,\ell}$ and $\theta_{n,\ell|\epsilon}$. If $\theta_{n,\ell|\epsilon} > \epsilon$, then $\theta_{n,\ell} > 0$. In particular, Algorithm \ref{alg.VerifyCertainIota} would declare that $\hat\Sigma\not\vDash \hat{\mathcal{C}}$.
\end{thm}
In other words, the parameter $\epsilon$ serves as a tunable conservatism parameter for the approximation $\hat{\mathcal{C}}_\epsilon \preccurlyeq \hat{\mathcal{C}}$.
\begin{proof}
We let $\mathbb{X}_{n,\ell}$ denote the feasible set of $V_{n,\ell}$, and $\mathbb{X}_{n,\ell|\epsilon}$ denote the feasible set of $V_{n,\ell|\epsilon}$. By construction, $\tau^{\epsilon}_i - \epsilon\le \tau^\infty_i \le \tau^{\epsilon}_i$ holds for every $i$.
Thus, by definition of the contracts $\hat\mathcal{C},\hat{\mathcal{C}}_\epsilon$, we conclude that $\mathbb{X}_{n,\ell} \supseteq \mathbb{X}_{n,\ell|\epsilon}$, as the constraints corresponding to the assumptions and the dynamics are identical, but the constraints corresponding to the guarantees are stricter.
Moreover, fixing some index $i$ in the cost function, the two problems ${V}_{n,\ell},{V}_{n,\ell|\epsilon}$ have the same cost function up to a constant, equal to $\tau^\infty_i - \tau^{\epsilon}_i$.
Choose an index $i$ such that at the optimal solution of ${V}_{n,\ell|\epsilon}$, the maximum of the cost function is attained at the index $i$. As both ${V}_{n,\ell|\epsilon}$ and $V_{n,\ell}$ are maximization problems, we obtain:
\begin{align*}
{\theta}_{n,\ell} \ge {\theta}_{n,\ell|\epsilon} + \tau^\infty_i - \tau^{\epsilon}_i \ge
{\theta}_{n,\ell|\epsilon} - \epsilon > 0
\end{align*}
as claimed.
\end{proof}
Lastly, we shed light on the computational complexity of the algorithm. As before, we denote the depth of the contract $\mathcal{C}$ as $m$, and the observability index of the noiseless system $\hat{\Sigma}$ by $\nu$. The algorithm revolves around solving optimization problems of three different kinds:
\begin{itemize}
\item[i)] Solving the linear programs determining whether $\hat{\Sigma} \vDash \hat{\mathcal{C}}_\epsilon$. There are a total of $\max\{\nu,m\}-m+2$ linear programs, of dimension at most $(n_d+n_y+n_x)(\max\{\nu,m\}+1)$.
\item[ii)] Computing $M_\mathcal{P} = \max_{\omega\in \mathcal{P}}\|\omega\|$, which is needed to compute $N(\epsilon,i)$.
\item[iii)] Solving the optimization problems in \eqref{eq.taus}. We need to solve a total of $\sum_{i} (N(\epsilon,i)+2m+1)$ problems.
\end{itemize}
Solving the linear programs in (i) can be done very quickly using off-the-shelf optimization software, e.g., Yalmip \cite{Lofberg2004}. The tractability of the problems in (ii) and (iii) depends on the exact form of $\mathcal{P},\mathcal{R}$. However, solving them is much simpler than solving \eqref{eq.Vnp_noise}, for four main reasons:
First, these problems consider a single instance of $\omega$ or $\zeta$ at any given time, meaning that they are of a significantly lower dimension than \eqref{eq.Vnp_noise}, and they include far fewer constraints.
Second, the cost functions of these maximization problems are convex, meaning that the maximum is achieved on an extreme point of the set $\mathcal{P}$ or $\mathcal{R}$ \cite[Theorem 32.2]{Rockafellar1970}. Thus, even if the sets $\mathcal{P},\mathcal{R}$ are not convex, we can replace them by their convex hulls without changing the value of the problem. In other words, the convex relaxations of these optimization problems have the same value as the original problems.
Third, even if the sets $\mathcal{P},\mathcal{R}$ (or their convex hulls) are not defined using linear or quadratic inequalities, so standard LP and quadratic programming methods cannot be used, we can still use gradient-based, duality-based or interior-point-based methods. These methods will converge much faster for the optimization problems (ii) and (iii) than for the problem \eqref{eq.Vnp_noise}, due to the reduced dimension.
Lastly, the simplicity of the optimization problems (ii) and (iii) allows one to give closed-form formulae for the solution if $\mathcal{P},\mathcal{R}$ are described using simple terms, thus eliminating the need for a numerical solution of the problems. Indeed, the following proposition gives closed-form solutions to the optimization problems appearing in \eqref{eq.taus} and in Proposition \ref{prop.Taus}:
\begin{prop} \label{prop.TauExplicit}
Consider a set $\mathcal{H} \subseteq \mathbb{R}^{q}$. We take a vector $b \in \mathbb{R}^{q}$, and define $M_b = \max_{z\in \mathcal{H}} b^\top z$ and $M_{\|} = \max_{z\in \mathcal{H}} \|z\|$.
\begin{itemize}
\item If $\mathcal{H} = \{z: z^\top H z \le \gamma^2\}$ for some positive-definite matrix $H$ and $\gamma > 0$, then $M_b = \gamma \|H^{-1/2}b\|$ and $M_\| = \gamma \|H^{-1/2}\|$.
\item If $\mathcal{H}$ is a bounded polyhedral set given in vertex representation, $\mathcal{H} = \{F\lambda : \mathds{1}^\top \lambda = 1, \lambda \ge 0\}$, then $M_b = \max_i {\rm e}_i^\top F^\top b$ and $M_\| = \max_i \|F{\rm e}_i\|$.
\end{itemize}
\end{prop}
\begin{proof}
For the first case, we note that $z^\top H z \le \gamma^2$ if and only if $\|v\| \le \gamma$, where $z = H^{-1/2}v$. Thus:
\begin{align*}
M_b &= \gamma\max_{\|v\|\le 1} (H^{-1/2}b)^\top v = \gamma \|H^{-1/2}b\|,\\
M_\| &= \gamma\max_{\|v\|\le 1} \|H^{-1/2}v\|= \gamma \|H^{-1/2}\|
\end{align*}
For the second case, the result follows from the fact that the maximum of a convex function on a bounded polyhedral set is attained at one of its vertices \cite[Theorem 32.2]{Rockafellar1970}.
\end{proof}
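These closed forms translate directly into code. The following is a small sketch, assuming NumPy; the helper names are ours:
\begin{verbatim}
import numpy as np

def inv_sqrt(H):
    # Symmetric inverse square root of a positive-definite H
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def ellipsoid_maxima(H, gamma, b):
    # Set {z : z^T H z <= gamma^2}; returns (M_b, M_norm)
    S = inv_sqrt(H)
    return gamma * np.linalg.norm(S @ b), gamma * np.linalg.norm(S, 2)

def polytope_maxima(F, b):
    # Set conv(columns of F); returns (M_b, M_norm)
    return np.max(F.T @ b), np.max(np.linalg.norm(F, axis=0))
\end{verbatim}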
We make one last remark about the number $N(\epsilon,i)$, which dictates the number of problems \eqref{eq.taus} we have to solve.
\begin{rem}
In Algorithm \ref{alg.VerifyUncertain}, we compute $N(\epsilon,i)$ using \eqref{eq.NeiStableA}, which depends on a number $N_0$ such that $\|A^{N_0}\| < 1$. First, the number $N(\epsilon,i)$ depends logarithmically on $1/\epsilon$, meaning that the algorithm is computationally tractable even for extremely small values of $\epsilon$. Second, if $A$ is strictly stable, then there exist infinitely many $N_0$ such that $\|A^{N_0}\| < 1$. Moreover, $N(\epsilon,i) \ge N_0$ holds by definition. Thus, we can iterate over different values of $N_0$ to find the smallest possible value of $N(\epsilon,i)$ for fixed $\epsilon$ and $i$. See Algorithm \ref{alg.ComputeNepsi} for details.
\end{rem}
\begin{algorithm} [h]
\caption{Computing the Optimal Threshold $N(\epsilon,i)$}
\label{alg.ComputeNepsi}
{\bf Input:} A stable matrix $A$, a matrix $C$, matrices $\{\mathfrak G_y^r\}_{r=0}^m$, a perturbation set $\mathcal{P}$, and a parameter $\epsilon > 0$\\
{\bf Output:} An optimal value of $N(\epsilon,i)$.
\begin{algorithmic}[1]
\State Compute $T = \sum_{r=0}^m \mathfrak G_y^r C A^r$ and $M_\mathcal{P} = \max_{\omega \in \mathcal{P}} \|\omega\|$.
\State Put $N_0 = 1$, $N^{\epsilon,i}_{\rm opt} = \infty$, and $K_{A,N_0} = 0$.
\While{$N_0 \le N^{\epsilon,i}_{\rm opt}$}
\State Add $\|A^{N_0 - 1}\|$ to the value of $K_{A,N_0}$.
\If{$\|A^{N_0}\| < 1$}
\State Compute $N(\epsilon,i)$ according to \eqref{eq.NeiStableA}.
\State Assign the value $\min\{N^{\epsilon,i}_{\rm opt},N(\epsilon,i)\}$ to $N^{\epsilon,i}_{\rm opt}$.
\EndIf
\State Assign the value $N_0+1$ to $N_0$.
\EndWhile
\State {\bf return} $N_{\rm opt}^{\epsilon,i}$
\end{algorithmic}
\end{algorithm}
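A possible realization of Algorithm \ref{alg.ComputeNepsi} is sketched below, assuming NumPy. The function name is ours, all matrix norms are spectral norms, and the cap {\tt N\_max} is a safeguard we add so that the loop terminates even if $A$ is erroneously not strictly stable:
\begin{verbatim}
import numpy as np

def N_eps_i(A, T_row_norm, E_norm, M_P, eps, N_max=10**6):
    """Smallest N(eps,i) over admissible N_0, per the algorithm above.
    T_row_norm = ||T^T e_i||, E_norm = ||E||, M_P = max over P of ||w||."""
    N_opt = np.inf
    K = 0.0                       # running K_{A,N_0}
    A_pow = np.eye(A.shape[0])    # holds A^{N_0-1}
    N0 = 1
    while N0 <= min(N_opt, N_max):
        K += np.linalg.norm(A_pow, 2)   # add ||A^{N_0-1}||
        A_pow = A_pow @ A               # now equals A^{N_0}
        a = np.linalg.norm(A_pow, 2)
        if a < 1.0:                     # A^{N_0} is contracting
            arg = T_row_norm * E_norm * K * M_P / ((1.0 - a) * eps)
            if arg <= 1.0:
                N = N0
            else:
                N = max(int(np.ceil(N0 * np.log(arg) / np.log(1.0 / a))), N0)
            N_opt = min(N_opt, N)
        N0 += 1
    return N_opt
\end{verbatim}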
\section{Numerical Examples} \label{sec.CaseStudy}
In this section, we apply the presented verification algorithm in two case studies. The first deals with a two-vehicle autonomous driving scenario, and the second deals with formation control for multi-agent systems.
\subsection{Two-Vehicle Leader-Follower system}
We consider two vehicles driving along a single-lane highway, as in Fig. \ref{fig.LeaderFollower}. We are given a headway $h>0$, and our goal is to verify that the follower keeps at least the given headway from the leader. Denoting the position and velocity of the follower as $p_f(k)$, $v_f(k)$, and the position and velocity of the leader as $p_l(k),v_l(k)$, the follower vehicle keeps the headway if and only if $p_l(k) - p_f(k) - hv_f(k) \ge 0$ holds at any time $k\in \mathbb{N}$. This scenario has been studied in \cite{SharfADHS2020}, where the follower is assumed to have a known and unperturbed model. Here, we instead consider the same scenario for a follower with a perturbed model, affected by process noise.
\begin{figure}[b]
\centering
\includegraphics[width = 0.5\textwidth]{img/LeaderFollower.pdf}
\caption{Two vehicles on a single-lane highway.}
\label{fig.LeaderFollower}
\end{figure}
We start by explicitly stating the contract on the follower. The input to the follower includes the position and velocity of the leader, i.e., $d(k) = [p_l(k),v_l(k)]^\top$. The output from the follower includes its position and velocity, i.e., $y(k) = [p_f(k),v_f(k)]^\top$.
For assumptions on the input, we assume the leader vehicle follows the kinematic laws with a bound on the acceleration, i.e., for any time $k$,
\begin{align*}
&p_l(k+1) = p_l(k) + \Delta t v_l(k),~v_l(k+1) = v_l(k) + \Delta t a_l(k),~\\
&a_l(k) \in [-a_{\rm min},a_{\rm max}],
\end{align*}
where $a_l(k)$ is the acceleration of the leader vehicle at time $k$, and $\Delta t>0$ is the length of the discrete time-step. For guarantees, we specify that the headway is kept, i.e., that $p_l(k) - p_f(k) - hv_f(k) \ge 0$ holds for any $k\in \mathbb{N}$. These specifications define a linear time-invariant contract $\mathcal{C}$ of depth $m=1$, defined using the following matrices and vectors:
\begin{align*}
&\mathfrak A^1 = \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{bmatrix},~
\mathfrak A^0 = \begin{bmatrix} -1 & -\Delta t \\ 1 & \Delta t \\ 0 & -1 \\ 0 & 1 \end{bmatrix},~
\mathfrak a^0 = \begin{bmatrix} 0 \\ 0 \\ \Delta t a_{\rm max} \\ \Delta t a_{\rm min}\end{bmatrix}, \\
&\mathfrak G^1 = \begin{bmatrix} 0 & 0 & 0 & 0\end{bmatrix},~
\mathfrak G^0 = \begin{bmatrix} -1 & 0 & 1 & h\end{bmatrix},~
\mathfrak g^0 = [0].
\end{align*}
We now describe the dynamical control system governing the follower vehicle. The state of the follower includes only the position and the velocity, $x(k) = [p_f(k),v_f(k)]^\top$, meaning that the system has a state-observation, i.e., $y(k)=x(k)$. We assume that the state evolves according to the kinematic laws:
\begin{align*}
p_f(k+1) &= p_f(k) + \Delta t v_f(k),~\\
v_f(k+1) &= v_f(k) + \Delta t a_f(k) + \omega(k),
\end{align*}
where $a_f(k)$ is the acceleration of the follower, and $\omega(k)$ is the process noise, which can be understood as the aggregation of exogenous forces acting upon the vehicle, e.g., wind, drag, and friction. The acceleration of the follower is taken according to the following control law:
\begin{align*}
a_f(k) = \frac{p_l(k)-p_f(k) - hv_f(k)}{h\Delta t} + \frac{v_l(k) - v_f(k)}{h} - 1~{\rm m/s^2},
\end{align*}
in which the acceleration is dictated by the current headway, the difference in speed between the vehicles, and a constant braking term added to enhance robustness.
The closed-loop system is hence governed by:
\begin{align*}
&x(k+1) = Ax(k) + Bd(k) + w + E\omega(k),~ \omega(k)\in \mathcal{P},~\\
&y(k) = x(k),~\mathcal{P} = \{\omega\in \mathbb{R}: |\omega| \le \Phi\}
\end{align*}
where:
\begin{align*}
A = \begin{bmatrix} 1 & \Delta t \\ -\frac{1}{h} & -\frac{\Delta t}{h} \end{bmatrix},~
B = \begin{bmatrix} 0 & 0 \\ \frac{1}{h} & \frac{\Delta t}{h} \end{bmatrix},~E = \begin{bmatrix} 0 \\ 1 \end{bmatrix},~w = \begin{bmatrix}0 \\ -\Delta t\end{bmatrix}
\end{align*}
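For concreteness, these matrices can be assembled and sanity-checked numerically; a minimal sketch, assuming NumPy and the parameter values chosen below ($\Delta t = 0.3{\rm s}$, $h = 2{\rm s}$):
\begin{verbatim}
import numpy as np

dt, h = 0.3, 2.0             # Delta t = 0.3s, headway h = 2s
A = np.array([[1.0,    dt], [-1.0/h, -dt/h]])
B = np.array([[0.0,   0.0], [ 1.0/h,  dt/h]])
E = np.array([[0.0], [1.0]])
w = np.array([0.0, -dt])     # constant offset from the -1 m/s^2 term in a_f
print(np.linalg.eigvals(A))  # eigenvalues 0 and 0.85: strictly stable
\end{verbatim}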
As for initial conditions, we follow Remark \ref{rem.InitDepend} and choose the set of initial conditions depending on $d(0) = [p_l(0),v_l(0)]^\top$. Namely, we assume that the headway at time $k=0$ satisfies $p_l(0) - p_f(0) - hv_f(0) \ge 0.7$.
We want to prove that the follower satisfies the contract with the given assumptions and guarantees for a specific choice of parameters, and we do so by running Algorithm \ref{alg.VerifyUncertain}. We choose the parameters $\Delta t = 0.3{\rm s}$, $h = 2{\rm s}$, $a_{\rm max} = a_{\rm min} = 9.8{\rm m/s^2}$, $\Phi = 29{\rm cm/s}$ and a conservatism parameter $\epsilon = 10^{-12}$.
In order to run Algorithm \ref{alg.VerifyUncertain}, we first verify that $A$ is a strictly stable matrix. The eigenvalues of $A$ can be numerically computed to be $\lambda_1 = 0$ and $\lambda_{2} = 0.85$, both of which lie inside the open unit disc in the complex plane. Thus the assumptions of Algorithm \ref{alg.VerifyUncertain} hold. Running the algorithm, and using Algorithm~\ref{alg.ComputeNepsi} to compute the parameter $N(\epsilon,1)$\footnote{Note that here, the matrices $\mathfrak{G}^0,\mathfrak{G}^1$ only have one row, so we need to compute only a single parameter.} and Proposition \ref{prop.TauExplicit} to compute $\tau^\epsilon$, we find that $N(\epsilon,1) = 183$ and $\tau^\epsilon = 0.58$. As instructed by Algorithm \ref{alg.VerifyUncertain}, we now run Algorithm \ref{alg.VerifyCertainIota} for the system with no perturbation, i.e., the system given by the state-space representation defined by:
\begin{align*}
&x(k+1) = Ax(k) + Bd(k) + w,\\
&y(k) = x(k),\\
&p_l(0) - p_f(0) - hv_f(0) \ge 0.7,
\end{align*}
and the robustified contract $\hat{\mathcal{C}}_{\epsilon} = (\mathcal{D},\hat\Omega_\epsilon)$, where the assumptions are given by $\mathfrak{A}^1,\mathfrak{A}^0,\mathfrak{a}^0$ and the guarantees are given by $\mathfrak{G}^1,\mathfrak{G}^0,\mathfrak{g}^0-\tau^\epsilon$. The observability index $\nu$ is equal to $1$ in this case, and the depth of the LTI contract $\hat{\mathcal{C}}_\epsilon$ is $m=1$. Thus, $\iota = \max\{1,1\}-1 = 0$, and we are required to solve a total of $\iota-m+3 = 2$ optimization problems, $V_{0,0}$ and $V_{1,0}$. We use MATLAB's internal solver, {\tt linprog}, to solve the linear programs, and find that $\theta_{0,0} = -0.12 < 0$ and that $\theta_{1,0} = -0.02<0$. Thus, we conclude using Proposition \ref{prop.Taus} that the perturbed system defining the follower satisfies the contract. We also report that the algorithm was run on a Dell Latitude 7400 computer with an Intel Core i5-8365U processor, and the total runtime was $0.15$ seconds.
We demonstrate the fact that the follower satisfies the contract by simulation. We consider the following trajectory of the leader: its initial speed is about $110{\rm km/h}$, which is roughly kept for 30 seconds. It then starts to sway wildly for 30 seconds between $20-30{\rm km/h}$ and $110{\rm km/h}$, braking and accelerating as hard as possible. Finally, it stops swaying and keeps its velocity for 30 more seconds. The velocity and acceleration of the leader can be seen in Fig. \ref{fig.LeaderSimulation}(a) and \ref{fig.LeaderSimulation}(b). In particular, the leader vehicle satisfies the assumptions of the contract. The follower starts $46{\rm m}$ behind the leader, at a speed of $80{\rm km/h}$, meaning that the requirement on the initial condition is satisfied. We simulate the follower system for two cases: the first is where the noise $\omega(k)$ is adversarial, choosing the worst-case value at each time, and the second is where the noise $\omega(k)$ is distributed uniformly on $\mathcal{P}$. The results of the simulation can be seen in Fig. \ref{fig.LeaderSimulation}(c)-(f). In particular, it can be seen that the headway in both cases is always at least $h=2{\rm s}$, i.e., the guarantees are satisfied.
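The qualitative behaviour is easy to reproduce in a few lines of simulation. The sketch below (NumPy assumed; the leader input is a hypothetical admissible signal, not the exact profile of Fig. \ref{fig.LeaderSimulation}) checks the guarantee along a noisy run:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dt, h, Phi = 0.3, 2.0, 0.29
p_l, v_l = 46.0, 110/3.6       # leader starts 46m ahead at ~110 km/h
p_f, v_f = 0.0, 80/3.6         # follower initial state
for k in range(300):           # 90-second horizon
    a_l = 9.8 * np.sin(0.5*k)  # admissible: |a_l| <= 9.8 m/s^2
    a_f = (p_l - p_f - h*v_f)/(h*dt) + (v_l - v_f)/h - 1.0
    w_k = rng.uniform(-Phi, Phi)          # process noise, uniform on P
    p_l, v_l = p_l + dt*v_l, v_l + dt*a_l
    p_f, v_f = p_f + dt*v_f, v_f + dt*a_f + w_k
    assert p_l - p_f - h*v_f >= 0.0       # guarantee holds along the run
\end{verbatim}
Indeed, a short computation shows that the closed loop enforces $p_l(k+1) - p_f(k+1) - hv_f(k+1) = h\Delta t - h\omega(k) \ge h\Delta t - h\Phi > 0$, so the assertion never fires.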
\begin{figure*}[t]
\centering
\subfigure[Velocity of leader] {\scalebox{.43}{\includegraphics{img_pdf/Velocity_Leader_Paper_TAC.pdf}}} \hspace{.5cm}
%
\subfigure[Acceleration of leader] {\scalebox{.43}{\includegraphics{img_pdf/Acceleration_Leader_Paper_TAC.pdf}}}\hspace{.5cm}
%
\subfigure[Headway] {\scalebox{.43}{\includegraphics{img_pdf/Headway_Paper_TAC.pdf}}}\hspace{2cm}
%
\subfigure[Distance between the vehicles] {\scalebox{.43}{\includegraphics{img_pdf/Distance_Paper_TAC.pdf}}}\hspace{.5cm}
%
\subfigure[Velocity of follower] {\scalebox{.43}{\includegraphics{img_pdf/Velocity_Follower_Paper_TAC.pdf}}}\hspace{.5cm}
%
\subfigure[Acceleration $a_f(k)$ of follower, as dictated by the controller] {\scalebox{.43}{\includegraphics{img_pdf/Acceleration_Follower_Paper_TAC.pdf}}}
\caption{Simulation of the two-vehicle leader-follower system. The black plots correspond to the leader, the blue plots correspond to the follower with worst-case process noise, and the red plots correspond to the follower with random process noise.}
\label{fig.LeaderSimulation}
\end{figure*}
\subsection{Formation Control for Double-Integrator Agents} \label{sec.DoubleInteg}
Formation control is a fundamental problem in the field of cooperative control, in which one tries to coordinate a collection of agents to achieve a certain spatial shape \cite{Oh2015}. This canonical problem has many versions depending on the sensing capabilities of the agents, as well as the desired degrees of freedom for the achieved shape. In all instances of the problem, the desired spatial formation is defined ``locally" by prescribing geometric constraints on each agent and agents adjacent to it, e.g., desired displacement \cite{Oh2015}, distance \cite{Oh2015}, or bearing \cite{Zhao2019}. The agents can then be maneuvered in space either by changing the geometric constraints, e.g., the desired displacement, or by assigning a few of the agents to be ``leaders", and having the other agents follow suit.
In this case study, we focus on displacement-based formation control for a directed network of double integrator agents. Our goal is to verify that a given multi-agent system satisfies a contract, in which the guarantees imply that it approximately reaches the correct spatial formation. Ideally, one would dissect this contract on the multi-agent system into smaller contracts on the individual agents. However, we run the verification process while treating the system as a monolithic entity, as our goal in this case study is to show that the methods we presented can work well even for high-dimensional systems.
We consider a network of $n_V$ $D$-dimensional agents. The system can be described using a directed graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$, where the set of nodes $\mathcal{V}$ corresponds to the agents in the network, and the edges $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ define the sensing relations between the agents. Specifically, for two nodes $i,j\in \mathcal{V}$, the edge $(i,j)$ belongs to $\mathcal{E}$ if and only if agent $i$ can measure the state of agent $j$. We let $n_E = |\mathcal{E}|$ be the number of edges in the graph.
The state of the $i$-th agent is given by $[p_i,v_i]$, where $p_i \in \mathbb{R}^D$ is the position of the agent, and $v_i \in \mathbb{R}^D$ is its velocity. We choose one agent, denoted as $1 \in \mathcal{V}$, to be the leader node, so it will move independently from all other agents, which will follow it in space while trying to keep the desired spatial shape. The input to the system is then given by $d = [a_1,\delta]$ where $a_1 \in \mathbb{R}^D$ is the acceleration of the leader node, and $\delta \in \mathbb{R}^{n_ED}$ is a stacked vector consisting of the desired displacements. More precisely, for each edge $(i,j) \in \mathcal{E}$, the vector $\delta_{ji} \in \mathbb{R}^D$ is the desired relative displacement from the $j$-th agent to the $i$-th agent. The output from the system consists of the positions, relative to the leader, i.e., $y = (p_i - p_1)_{i\in \mathcal{V}, i\neq 1}$\footnote{We choose the output as the relative position to avoid strict stability issues later, as formation control protocols are invariant to translating all of the agents in the same direction and by the same amount.}. The guarantees we want to make are that the agents' displacements are close to the desired ones. Namely, we wish to guarantee that $-(\mu_{\rm err})_{ij} \le p_i(k) - p_j(k) - \delta_{ji}(k) \le (\mu_{\rm err})_{ij}$ holds at any time $k\in \mathbb{N}$, where $\mu_{\rm err} \in \mathbb{R}^{Dn_E}$ is a constant vector defining the allowable error for each pair $(i,j)\in \mathcal E$. The entries $(\mu_{\rm err})_{ij}$ of the vector $\mu_{\rm err}$ can be chosen arbitrarily. However, if the graph $\mathcal{G}$ is a directed acyclic graph with large diameter, it is advisable to take the entries of $\mu_{\rm err}$ as different from one another, due to string-stability-like phenomena \cite{Feng2019}.
As for the assumptions, a reasonable assumption on $a_1 \in \mathbb{R}^D$ can bound the maximum acceleration and deceleration of the leader in each spatial direction, i.e., $a_1(k) \in [-a_{\rm min},a_{\rm max}]^D$. As for the desired displacements $(\delta_{ij})_{(i,j)\in\mathcal{E}}$, we make two assumptions. First, we assume that the desired displacements can only change by a bounded amount between time iterations. Namely, we assume that $\|\delta_{ij}(k+1) - \delta_{ij}(k)\|_\infty \le \mu_{\rm diff}$ for any $(i,j)\in \mathcal{E}$ and any time $k\in \mathbb{N}$, where $\|\cdot\|_\infty$ is the sup-norm. Moreover, we assume that the desired displacements $(\delta_{ij})_{(i,j)\in\mathcal{E}}$ are \emph{consistent} with one another, i.e., that there exists a configuration in space attaining these displacements. If we let $E \in \mathbb{R}^{n_V\times n_E}$ be the incidence matrix of the graph $\mathcal{G}$, this demand is equivalent to $\delta(k) \in {\rm Im}(E^\top \otimes {\rm I}_D)$, where ${\rm I}_D \in \mathbb{R}^{D\times D}$ is the identity matrix and $\otimes$ is the Kronecker product. Using the SVD\footnote{More precisely, if $E^\top = U\Sigma V^\top$ is the SVD, we define $P = \tilde{\Sigma}U^\top$, where $\tilde{\Sigma} \in \mathbb{R}^{n_E \times n_E}$ is a diagonal matrix satisfying $\tilde{\Sigma}_{ii} = 1$ if and only if $\Sigma_{ii} = 0$, and $\tilde{\Sigma}_{ii} = 0$ otherwise.} of $E^\top$, we build a matrix $P \in \mathbb{R}^{n_E\times n_E}$ such that $\ker(P) = {\rm Im}(E^\top)$, and we can restate the consistency assumption as $\left[\begin{smallmatrix} P \otimes {\rm I}_D \\ -P\otimes{\rm I}_D \end{smallmatrix}\right] \delta(k) \le 0$ for any time $k\in \mathbb{N}$. In particular, the contract defined by the assumptions and guarantees is LTI of depth $1$.
As for the system, we assume that the agents are double integrators, where all non-leader agents follow a linear control law. Namely, we assume that the position and velocity of the $i$-th agent evolve according to the following equations
\begin{align*}
p_i(k+1) &= p_i(k) + \Delta t v_i(k),~\\v_i(k+1) &= v_i(k) + \Delta t a_i(k) + \omega_i(k),
\end{align*}
where the noise $\omega_i(k) \in \mathbb{R}^D$ corresponds to unmodeled forces on the agent, and we assume that the Euclidean norm of $\omega_i(k)$ is bounded by a tunable parameter $\omega_{\rm max}$.
Moreover, the control input $a_i(k)$ for $i\neq 1$ is given by the following linear law
\begin{align*}
a_i(k) = \frac{1}{d_{i}^{\rm out}} \sum_{j: (i,j)\in \mathcal{E}}\left(-\frac{p_i-p_j-\delta_{ji}}{\Delta t^2} - 2\frac{v_i-v_j}{\Delta t}\right),
\end{align*}
where $d_i^{\rm out}$ is the out-degree of the node $i$, i.e., the number of agents $j$ such that $(i,j)\in \mathcal{E}$. Unfortunately, the equations above define an LTI system which is not strictly stable, as the system matrix $A$ has $2D$ eigenvectors and generalized eigenvectors corresponding to the eigenvalue $\lambda = 1$, namely $\left[\begin{smallmatrix}\mathds{1}_{n_V} \otimes {\rm e}_i \\ 0\end{smallmatrix}\right]$ and $\left[\begin{smallmatrix} 0\\ \mathds{1}_{n_V} \otimes {\rm e}_i \end{smallmatrix}\right]$ for $i=1,\ldots,D$, where $\mathds{1}_{n_V}$ is the all-ones vector. These correspond to moving all agents in the same direction and by the same amount, and to adding the same vector to all of the agents' velocities, respectively. To overcome this problem and make Algorithm \ref{alg.VerifyUncertain} applicable for this problem, we define $2(n_V-1)$ new coordinates as $q_i = p_i - p_1$ and $u_i = v_i - v_1$ for $1\neq i\in \mathcal{V}$. A simple calculation shows that $q,u$ evolve according to the following equations:
\begin{align*}
q_i(k+1) &= q_i(k) + \Delta t u_i(k),\\
u_i(k+1) &= u_i(k) + \Delta t a_i(k) + \omega_i(k) - \Delta t a_1(k),
\end{align*}
where the control input is given by
\begin{align*}
a_i(k) &= \frac{1}{d_{i}^{\rm out}} \sum_{j: (i,j)\in \mathcal{E}}\left(-\frac{q_i-q_j-\delta_{ji}}{\Delta t^2} - 2\frac{u_i-u_j}{\Delta t}\right),
\end{align*}
where we define $q_1 = u_1 = 0 \in \mathbb{R}^D$, and the output of the system is, as before, given by $y = q$. Thus, this is a perturbed LTI system with observability index equal to $\nu = 2$.
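To make the construction concrete, the following rough sketch (assuming NumPy, $0$-based node indices with node $0$ as the leader, that every non-leader agent measures at least one other agent, and helper names of our choosing) assembles the closed-loop matrix of the relative dynamics; the input and noise matrices are omitted for brevity:
\begin{verbatim}
import numpy as np

def formation_A(edges, n_V, D, dt):
    # edges: list of pairs (i, j), meaning agent i measures agent j;
    # node 0 is the leader, grounded away (q_0 = u_0 = 0).
    n = n_V - 1                  # number of non-leader agents
    L = np.zeros((n, n))         # grounded, out-degree-normalized Laplacian
    for i in range(1, n_V):
        nbrs = [j for (a, j) in edges if a == i]
        L[i-1, i-1] = 1.0        # d_out terms of weight 1/d_out each
        for j in nbrs:
            if j != 0:
                L[i-1, j-1] -= 1.0 / len(nbrs)
    I = np.eye(n)
    # q+ = q + dt*u,  u+ = u - (L/dt) q - 2 L u, per the control law above
    blocks = np.block([[I, dt * I], [-L / dt, I - 2.0 * L]])
    return np.kron(blocks, np.eye(D))   # one copy per spatial dimension
\end{verbatim}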
\begin{table*}[!ht]
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|c|c|c| }
\hline
$n_V$ & $n_E$ & Graph Type & System dim. & Input dim. & Output dim. & Number of Assumptions & Number of Guarantees & Alg. \ref{alg.ComputeNepsi} Time & LP Time & Total Time\\ \hline
5 & 10 & Complete & 16 & 22 & 8 & 84 & 40 & 0.03 & 0.48 & 0.51\\ \hline
10 & 45 & Complete & 36 & 92 & 18 & 364 & 180 & 0.41 & 2.88 & 3.29 \\ \hline
15 & 105 & Complete & 56 & 212 & 28 & 844 & 420 & 1.24 & 13.24 & 14.49 \\ \hline
20 & 190 & Complete & 76 & 382 & 38 & 1524 & 760 & 7.73 & 60.22 & 67.96 \\ \hline
30 & 435 & Complete & 116 & 872 & 58 & 3484 & 1740 & 76.48 & 527.61 & 604.09 \\ \hline
50 & 1225 & Complete & 196 & 2452 & 98 & 9804 & 2900 & 1532.81 & 9740.66 & 11273.47 \\\hline
30 & 30 & Cycle & 116 & 62 & 58 & 244 & 120 & 2.46 & 2.93 & 5.39 \\ \hline
50 & 50 & Cycle & 196 & 102 & 98 & 404 & 200 & 30.29 & 8.48 & 38.78 \\ \hline
\end{tabular}
\end{center}
\caption{An analysis of the runtime (in seconds) of Algorithm \ref{alg.VerifyUncertain} with Algorithm \ref{alg.ComputeNepsi} for the formation control problem, for $D=2$. Here, LP Time refers to the time (in seconds) it took to compute all parameters $\theta_{n,\ell}$ needed by the Algorithm \ref{alg.VerifyUncertain}.}
\label{table.Runtime}
\end{table*}
In order to verify whether the system satisfies the given contract, we choose certain values for the tunable parameters $\Delta t, a_{\rm max}, a_{\rm min}, \mu_{\rm diff},\mu_{\rm err}$ and $\omega_{\rm max}$, and run Algorithm \ref{alg.VerifyUncertain} with Algorithm \ref{alg.ComputeNepsi} and $\epsilon = 10^{-12}$. The algorithms were executed on a Dell Latitude 7400 computer with an Intel Core i5-8365U processor for multiple values of $n_V$ and different graphs $\mathcal{G}$. The runtimes are reported in Table \ref{table.Runtime}. The table concerns two distinct cases. In the first, the graph $\mathcal{G}$ is chosen as a complete graph on $n_V$ nodes. In this case, the runtime of the algorithm is about 10 minutes even for systems of order exceeding 100, with thousands of assumptions and guarantees. Moreover, we can check whether the system satisfies the contract in about three hours even for systems of order roughly equal to $200$, with almost $10000$ assumptions and a few thousand guarantees.
In the second case, we choose the graph $\mathcal{G}$ by taking agents $\mathcal V = \{1,2,\ldots,n_V\}$ and a total of $n_V$ edges defined as follows: we take $(i+1,i)$ for all $i=1,2,\ldots,\lfloor n_V/2 \rfloor$, we take $(i,i+1)$ for $i=\lfloor n_V/2 \rfloor+1,\ldots,n_V-1$, and lastly, we take $(n_V,1)$. One can see the graph $\mathcal{G}$ as a union of two paths, of lengths $\lfloor n_V/2 \rfloor$ and $\lceil n_V/2 \rceil$, which coincide only at the first and the last node. The graph $\mathcal{G}$ can also be seen as a cycle in which the orientation of some of the edges has been changed. In this case, the matrices defining the system are sparse. As expected, the algorithm runs significantly faster in this case, terminating in under a minute even for a system of order roughly equal to $200$.
\subsection{Discussion}
We considered two numerical examples. The first numerical example considered a low-dimensional LTI system with interval uncertainty, whereas the second considered a very high-dimensional system with non-polyhedral constraints on the perturbation. The runtimes reported in Table \ref{table.Runtime} demonstrate the applicability of our approach even for extremely large systems and for specifications with many assumptions and guarantees.
We also compare our approach with other formal verification techniques.
Trying to apply classical model-checking tools would first require us to build an abstraction of the system, which is a finite transition system \cite{Belta2017}. This abstraction is almost always achieved either by discretizing the state space, by defining an equivalence relation using the signs of the values of the functions defining the guarantees, or by further refining either of the two. For the numerical example in Section \ref{sec.DoubleInteg} with $n=50$ vertices and a cycle graph, both approaches result in finite transition systems with roughly $10^{60}$ discrete states, rendering this approach highly impractical.
Other approaches for verification rely on approximate simulation and bi-simulation, see \cite{Girard2007}. These methods first quantify the distance between the system-under-test and a lower-dimensional system, and then solve the verification problem for the latter using other methods, e.g., discretization-based model checking or reachability analysis. However, the standard definition of bi-simulation cannot incorporate assumptions on the input other than $u(k)\in \mathcal{U}, \forall k\in \mathbb{N}$, and thus cannot be used for verifying specifications defined by LTI contracts of depth $m\ge 1$. Once bi-simulation is properly extended to incorporate non-static assumptions on the input, it could be coupled with the theory presented in this work.
\section{Conclusions}
In this paper, we presented a framework for verifying assume/guarantee contracts defined by time-invariant linear inequalities for perturbed LTI systems. First, we defined the notion of LTI contracts of an arbitrary depth $m$. Second, we generalized the results of \cite{SharfADHS2020} and provided an LP-based mechanism for verifying that a given unperturbed LTI system satisfies a general LTI contract of arbitrary depth $m$, namely Algorithm \ref{alg.VerifyCertainIota}. Third, we presented a comparison-based mechanism for verifying that a perturbed LTI system $\Sigma$ satisfies an LTI contract of arbitrary depth. Namely, we showed that a perturbed system satisfies a contract with linear time-invariant guarantees if and only if the nominal version of the system (with no perturbations) satisfies a robustified version of the contract. Unfortunately, this robustified contract is time-varying, so we refined it by a tractable LTI contract, and then applied the LP-based tools for unperturbed systems to check whether the nominal LTI system satisfies it. This discussion resulted in Algorithm \ref{alg.VerifyUncertain}, and we studied the correctness, assumptions, computational complexity, and approximation properties of the algorithm. We demonstrated the developed tools in two case studies, one considering autonomous driving and one considering multi-agent systems.
Future research could derive LP-based verification methods for a wider class of systems, including LTI hybrid systems, perturbed hybrid systems, and uncertain systems.
Another possible avenue for future research is building semi-definite programming-based tools for contracts defined using quadratic or LMI-based inequalities. Lastly, one could try to construct LP-based tools supporting the modular framework of contract theory, namely refinement and composition, extending the tools presented in \cite{SharfADHS2020}.
\bibliographystyle{ieeetr}
\section{Introduction}
Energy storage devices provide flexibility to alter the consumption behavior of an electricity consumer.
Storage owners at the consumer side could participate in demand response, energy arbitrage, peak demand shaving, and power backup, to name a few applications \cite{xi2014stochastic}, \cite{hashmi2019energy}.
These features of storage devices will become more lucrative for storage owners with the growth of intermittent generation sources, which increase volatility on the generation side of the power network \cite{hashmi2018effect}.
Furthermore, battery costs are decreasing, making several applications of storage devices financially viable.
Electric consumer bills vary based on local policies, however, the primary variable component of electricity bills worldwide is the cost of energy consumption. Storage devices can perform arbitrage of energy with time varying consumer load, distributed generation production and electricity price. Furthermore, utilities promote inclusion of distributed generation and storage deployment by introducing net-metering. Net energy metering (NEM) or net-metering refers to the rate consumers receive for feeding power back to the grid. Most NEM policies indicate that consumers receive a rate at best equal to the buying price of electricity \cite{wiki_net}.
Authors in \cite{hashmi2017optimal} consider storage operation under equal buy and sell price case. This framework is generalized in \cite{hashmi2018netmetering}, covering cases where the ratio of buy and sell price could arbitrarily vary between 0 and 1.
For equal buying and selling price, the storage control becomes independent of inelastic load and renewable generation of the consumer \cite{hashmi2017optimal}, \cite{xu2017optimal}.
The cost function considered in this work includes inelastic load, renewable generation and storage charging and discharging efficiency, and ramping and capacity constraints.
We first show that the cost function, based on the selection of the optimization variables, is convex and piecewise linear. Then, we formulate the optimal arbitrage problem for an electricity consumer with renewable generation adopting NEM by using Linear Programming (LP).
Authors in \cite{zidar2016review} provide a summary of storage control methodologies used in power distribution networks among which LP based formulations can be solved efficiently using commercially available solvers.
The complexity of LP based algorithms is polynomial \cite{karmarkar1984new}.
Therefore, these algorithms can be used to efficiently solve the arbitrage problem for the duration of a day divided into smaller time steps ranging from 5 minutes to an hour.
A day is the typical time horizon over which arbitrage is performed \cite{mokrian2006stochastic,hu2010optimal}.
Authors in \cite{cruise2019control} observe that the energy arbitrage problem for storage is convex in nature and under the price-taker assumption the cost function will have a piecewise linear structure \cite{hashmi2017optimal} and hence LP tools could be used.
LP techniques for energy storage arbitrage have been used in several prior works: \cite{park2017linear}, \cite{byrne2015potential}, \cite{chouhan2016optimization}, \cite{thatte2013risk}, \cite{bradbury2014economic}, \cite{nguyen2018maximizing}, \cite{wang2018energy}.
Authors in \cite{bradbury2014economic, byrne2015potential, wang2018energy} consider storage operation in presence of time-varying electricity price.
However, in these formulations no renewable energy source or consumer load is assumed to be present.
Authors in \cite{chouhan2016optimization, thatte2013risk} consider optimal scheduling of storage battery for maximizing energy arbitrage revenue in presence of distributed energy resources and variable electricity price.
Formulations presented in \cite{nguyen2018maximizing, park2017linear} consider storage performing arbitrage in a residential setting with inelastic load and local generation.
Most common LP formulations for energy arbitrage, such as those in \cite{park2017linear}, \cite{wang2018energy}, \cite{thatte2013risk}, \cite{byrne2015potential}, separate the charging and discharging components.
These formulations do not include a constraint enforcing that only one of the charging or discharging components is active at any particular time, as including such a constraint would make the formulations nonlinear. Therefore, in these formulations, optimal results cannot be guaranteed.
Authors in \cite{chouhan2016optimization, bradbury2014economic} do not consider energy storage charging and discharging efficiencies in the cost minimization, making it straightforward to apply LP.
Authors in \cite{nguyen2018maximizing} consider a special case of optimization with zero-sum aggregate storage power output. For such a case LP tools could be used, however, generalizing the formulations needs to be explored further.
The key contributions of this paper are as follows:\\
$\quad \bullet$ \textit{LP formulation for storage control}:
We formulate the LP optimization problem for a piecewise linear convex cost function, for storage with efficiency losses, ramping and capacity constraints, and a consumer with inelastic load and renewable generation. The buying and selling prices of electricity vary over time. The selling price is assumed to be at best equal to the buying price at each time instant; this assumption is in sync with most net-metering policies worldwide.
We describe the LP formulation for a lossy battery with inelastic consumption, renewable generation, and selling price less than or equal to the buying price.
The reduction of this formulation to the cases of
(a) a lossless battery with equal buying and selling prices of electricity and (b) a lossy battery with selling price less than or equal to the buying price is trivial and not included in this paper. Based on the structure of the cost function we apply an epigraph-based minimization described in \cite{boyd2004convex} to the arbitrage problem. \\
$\quad \bullet$ \textit{Real-time implementation:} We implement an autoregressive forecast model along with model predictive control and
numerically analyze their effect on arbitrage gains using
real data from a household in Madeira, Portugal, and electricity prices from the California ISO \cite{ENOnline}. The effect of uncertainty on arbitrage gains is more pronounced for cases where the selling price is high compared to cases where the selling price is close to zero.\\
$\quad \bullet$ \textit{Sensitivity of ratio of selling and buying price}: We numerically analyze the effect of the ratio of buying and selling price of electricity on the value of storage integration with inelastic load and renewable generation. We observe that the value of storage performing arbitrage significantly increases in the presence of load and renewable generation with the increasing disparity of selling and buying price of electricity, compared to only storage performing arbitrage.
Inclusion of storage in the presence of load and renewable generation could be profitable even for cases where the selling price is zero or small compared to the buying price; in the same cases, storage alone performing arbitrage would not be profitable.
The paper is organized as follows. Section~\ref{lpsec2} provides the description of the system.
Section~\ref{lpsec3} presents the linear programming formulation of storage performing arbitrage with inelastic load, renewable generation and net-metering based compensation.
Section~\ref{lpsec4} presents an online algorithm using the proposed optimal arbitrage algorithm along with auto-regressive forecasting in the MPC framework.
Section~\ref{lpsec5} discusses numerical results.
Finally, Section~\ref{lpsec6} concludes the paper.
\section{System Description}
\label{lpsec2}
We consider the operation of a single residential user of electricity over a fixed period of time.
The user is assumed to be equipped with rooftop solar PV and a battery to store excess generation. It is also connected to the electricity grid,
from which it can buy energy and to which it can sell energy.
The objective is to find an efficient algorithm for a user to
make optimal decisions over a period of varying electricity prices considering variations in the solar generation and end user load.
The total duration, $T$,
of operation is divided into $N$ steps indexed by $\{1,...,N\}$.
The duration of step $i \in \{1,...,N\}$ is denoted as $h_i$. Hence, $T=\sum_{i=1}^{N} h_i$.
The price of electricity, $p_{\text{elec}}(i)$, equals the buying price, $p_b(i)$, if the consumption is positive; otherwise $p_{\text{elec}}(i)$ equals the selling price, $p_s(i)$.
\begin{equation}
p_{\text{elec}}(i)=
\begin{cases}
p_b(i) ,& \text{if consumption } \geq 0 ,\\
p_s(i) , & \text{otherwise,}
\end{cases}
\end{equation}
Note $p_{\text{elec}}$ is ex-ante and the consumer is a price taker.
The ratio of selling and buying price at time $i$ is denoted as
\begin{equation}
\kappa_i = \frac{p_s(i)}{p_b(i)}.
\end{equation}
The end user's inelastic consumption is denoted as $d_i$,
and the user generates $r_i$ units of energy through renewable sources in time step $i$.
Net energy consumption without storage is denoted as $z_i = d_i - r_i ~ \in \mathbb{R} $.
Fig.~\ref{systemblock} shows the block diagram of the system considered, i.e., an electricity consumer with renewable generation and energy storage battery.
\vspace{-5pt}
\begin{figure}[!htbp]
\center
\includegraphics[width=3.1in]{sys.pdf}
\caption{\small{Behind-the-meter electricity consumer with inelastic consumption, renewable generation and energy storage battery.}}\label{systemblock}
\end{figure}
The efficiency of charging and discharging of the
battery are denoted by $\eta_{\text{ch}}, \eta_{\text{dis}} \in (0,1]$, respectively.
We denote the change in the energy level of the battery at the $i^{\text{th}}$ instant by $x_i= h_i \delta_i$,
where $\delta_i$ denotes the storage ramp rate at the $i^{\text{th}}$ instant such that $\delta_i \in [\delta_{\min}, \delta_{\max}]$ $\forall i$ and $\delta_{\min} \leq 0,\delta_{\max} \geq 0$ are the minimum and the maximum ramp rates (kW);
$\delta_i > 0$ implies charging and $\delta_i < 0$ implies discharging.
Energy consumed by the storage in the $i^{th}$ instant is given by
$
s_i =f(x_i)= \frac{1}{\eta_{\text{ch}}}[x_i]^+ - \eta_{\text{dis}}[x_i]^-,
$
where $x_i$ must lie in the range from $X_{\min}^i=\delta_{\min}h_i$ to $X_{\max}^i= {\delta_{\max}h_i}$.
Note $[x_i]^+ = \max(0, x_i)$ and $[x_i]^- = \max(0, -x_i)$.
Alternatively, we can write $x_i = \eta_{\text{ch}}[s_i]^+ - \frac{1}{\eta_{\text{dis}}}[s_i]^-$.
The limits on $s_i$ are given as $ s_i \in [S_{\min}^i, S_{\max}^i]$, where $S_{\min}^i=\eta_{\text{dis}}\delta_{\min}h_i$ and $S_{\max}^i= \frac{\delta_{\max}h_i}{\eta_{\text{ch}}}$.
Let $b_i$ denote the energy stored in the battery at the $i^{\textrm{th}}$ step. Then,
$b_i = b_{i-1} + x_i$. The capacity of the battery imposes the constraint $b_i\in [b_{\min},b_{\max}], \forall i$,
where $b_{\min}, b_{\max}$ are the minimum and the maximum battery capacity.
The total energy consumed between time step $i$ and $i+1$ is given as $L_i = z_i+s_i$.
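For reference, the map between the change in stored energy $x_i$ and the grid-side energy $s_i$, together with its inverse, can be written as the following short Python helpers (our own sketch, using the paper's efficiency values as defaults).
\begin{verbatim}
def f(x, eta_ch=0.95, eta_dis=0.95):
    # s = f(x): grid-side energy for a change x in the battery level.
    return x / eta_ch if x >= 0 else eta_dis * x

def f_inv(s, eta_ch=0.95, eta_dis=0.95):
    # x = f^{-1}(s): change in the battery level for grid-side energy s.
    return eta_ch * s if s >= 0 else s / eta_dis
\end{verbatim}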
Energy storage battery operational life is often quantified using cycle and calendar life, which determine the number of cycles a battery should perform over a given time period.
The friction coefficient, denoted as $\eta_{\text{fric}} \in [0,1]$ and introduced in \cite{hashmi2018limiting}, assists in reducing battery cycling by eliminating low-returning charging and discharging transactions, thus increasing the operational life of the battery.
In subsequent work, the authors in \cite{hashmi2018long} propose a framework to tune the value of the friction coefficient to increase the operational life of the battery.
In a prior work \cite{hashmi2018pfcpowertech}, we show that by redefining $\eta_{\text{ch}}$ as $\eta_{\text{ch}}\eta_{\text{fric}}$ and $\eta_{\text{dis}}$ as $\eta_{\text{dis}}\eta_{\text{fric}}$, we can control the cycles of operation by eliminating low-returning transactions through a reduction in the value of $\eta_{\text{fric}}$.
\subsection{Arbitrage under Net-Metering}
The optimal arbitrage problem (denoted as (P)) is defined as the minimization of the
cost of the total consumed energy, $ \min \sum_{i=1}^N L_i p_{\text{elec}}(i)$, subject to the battery constraints.
It is given as follows:
\vspace{-12pt}
\begin{gather*}
\text{(P) }
\min \sum_{i=1}^N C_{nm}^{i}(x_i),\vspace{-12pt}
\end{gather*}
$\text{subject to, }
b_{\min} - b_0\leq \sum_{j=1}^i x_j \leq b_{\max}- b_0 , \forall i \in \{1,..,N\},$ and $ x_i \in \left[X_{\min}^i , X_{\max}^i\right] \forall i \in \{1,..,N\}$.
$C_{\text{nm}}^{i}(x_i)$ denotes the energy consumption cost function at instant $i$ and is equal to $[z_i + f(x_i)]^+ p_b(i) - [z_i + f(x_i)]^- p_s(i)$.
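In code, the per-step cost reads as follows; this sketch reuses the helper \texttt{f} above, and the names are ours.
\begin{verbatim}
def C_nm(x, z, p_b, p_s, eta_ch=0.95, eta_dis=0.95):
    # Net-metering cost [z + f(x)]^+ p_b - [z + f(x)]^- p_s at one step.
    L = z + f(x, eta_ch, eta_dis)
    return p_b * max(L, 0.0) - p_s * max(-L, 0.0)
\end{verbatim}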
We now show that the optimal arbitrage problem is convex in $x=(x_i, i=1:N)$.
For this convexity to hold we require $p_b(i) \geq p_s(i)$ for all $i=1:N$, i.e., $\kappa_i \in [0,1]$.
The proposed framework is applicable to the case where the selling price of electricity for the end user is lower than the buying price. This assumption is quite realistic, as it holds in most practical net metering policies \cite{wiki_net}. \vspace{-2pt}
\begin{theorem}
\label{thm:convexity}
If $p_b(i) \geq p_s(i)$ for all $i=1:N$, then problem (P) is convex in $x$.
\end{theorem}
\begin{proof}
Let $\psi(t) =a[t]^+ - b[t]^-$ with $a\geq b \geq 0$. Using $t=[t]^+- [t]^-$ we have $\psi(t) = (a-b)[t]^+ + bt$.
Since both $[t]^+$ and $t$ are convex in $t$ and $a-b, b\geq 0$ we have that $\psi$ is convex since it is the positive sum of two convex functions.
Now let $f(x)= \frac{1}{\eta_{ch}} [x]^+ - \eta_{dis}[x]^-$ and $G_i(s) =[z_i+s]^+p_b(i) -[z_i+s]^-p_s(i)$.
Then by the above reasoning we have that for $p_b(i) \geq p_s(i) \geq 0$ and $\eta_{ch}, \eta_{dis} \in (0,1]$,
$G_i$ is convex in $s$ and $f$ is convex in $x$. Also, note that $G_i$ is non-decreasing in $s$. Hence, for $\lambda \in [0,1]$ we have
\begin{align}
G_i\big(f(\lambda x +(1-\lambda)y)\big) &\leq G_i\big(\lambda f(x) + (1-\lambda)f(y)\big)\\
&\leq \lambda G_i(f(x)) + (1-\lambda)G_i(f(y))
\end{align}
%
In the above, the first inequality follows from the convexity of $f$ and non-decreasing nature of $G_i$
and the second inequality follows from convexity of $G_i$. Therefore, we have that $G_i\cdot f=G_i(f())$ is
a convex function in $x$. This shows that the objective function of (P) is convex in $x$ since $C_{nm}^i=G_i\cdot f$.
Since the constraints are linear in $x$ thus problem (P) is convex.
\end{proof}
\section{Optimal Arbitrage with Linear Programming}
\label{lpsec3}
The optimal arbitrage problem, (P), can be solved using linear programming as the cost function is (i) convex and (ii) piecewise linear, and (iii) the associated ramping and capacity constraints are linear.
In this section, we provide an LP formulation for the optimal arbitrage of the storage device under net-metering and consumer inelastic load and renewable generation, leveraging the epigraph based minimization presented in \cite{boyd2004convex}.
A summary of the epigraph based formulation for a piecewise linear convex cost function is presented in Appendix~\ref{epigraphsec}.
The optimal arbitrage formulation for storage under net-metering and consumer inelastic load and renewable generation using the epigraph formulation is presented in this section.
Fig.~\ref{costlp} shows the two cost functions depending on the net-load without storage output, i.e. for $z_i>0$ and $z_i<0$. Note that there are 4 unique segments which form the cost function $C_{nm}(i)$. The slope, x-intercept and y-intercept of these linear segments are given in Table~\ref{segproperties}.
\begin{table}[!htbp]
\caption {\small{Cost function for storage with load under NEM}}
\vspace{-5pt}
\label{segproperties}
\begin{center}
\begin{tabular}{| c | c| c|c|}
\hline
Segment& Slope & x-intercept& y-intercept \\
\hline
Segment 1 & $p_b(i)/\eta_{ch}$ &$-z_i \eta_{ch}$ & $z_ip_b(i)$ \\
Segment 2 & $p_s(i)\eta_{dis}$ &$-z_i/ \eta_{dis}$ & $z_ip_s(i)$ \\
Segment 3 & $p_b(i)\eta_{dis}$ &$-z_i/ \eta_{dis}$ & $z_ip_b(i)$ \\
Segment 4 & $p_s(i)/\eta_{ch}$ &$-z_i \eta_{ch}$ & $z_ip_s(i)$ \\
\hline
\end{tabular}
\hfill\
\end{center}
\end{table}
The epigraph based LP formulation is possible because, irrespective of the sign of the load, the cost function is given as
\begin{equation}
\begin{split}
C_{nm}(i) = \max\big(\text{Segment 1, Segment 2},\\ \text{Segment 3, Segment 4}\big).
\end{split}
\label{signindependent}
\end{equation}
Since Eq.~\eqref{signindependent} is independent of the sign of the load and, based on the intercepts, is valid for $p_b(i) \geq p_s(i)$ and $\eta_{ch}, \eta_{dis} \in (0,1]$ (the conditions of convexity), we can formulate this problem as an LP.
Using the epigraph equivalent formulation for piecewise linear convex cost function we formulate the optimal arbitrage problem using linear programming, denoted as $\text{P}_{\text{LP}}$
\begin{gather*}
(\text{P}_{\text{LP}})~~\min \quad \{t_1 + t_2+...+t_N\}, \\
\text{subject to, }~~
\text{(a) Segment 1:~}\frac{p_b^i}{\eta_{ch}} x_i + z_i p_b^i \leq t_i, ~\forall i \\
\text{(b) Segment 2:~} {p_s^i}{\eta_{dis}} x_i + z_i p_s^i \leq t_i, ~\forall i\\
\text{(c) Segment 3:~} {p_b^i}{\eta_{dis}} x_i + z_i p_b^i \leq t_i, ~\forall i\\
\text{(d) Segment 4:~} \frac{p_s^i}{\eta_{ch}} x_i + z_i p_s^i \leq t_i, ~\forall i\\
\text{(e) Ramp constraint:~} x_i \in [X_{\min}^i, X_{\max}^i], ~\forall i\\
\text{(f) Capacity constraint:~} \sum {x_i} \in [b_{\min}-b_0, b_{\max}-b_0],~ \forall i.
\end{gather*}
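For concreteness, $(\text{P}_{\text{LP}})$ can be assembled and solved with an off-the-shelf LP solver as in the following Python sketch. This is our own illustration (the released code referenced below uses MATLAB \texttt{linprog}), and the default parameter values are only illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def arbitrage_lp(z, p_b, p_s, eta_ch=0.95, eta_dis=0.95,
                 x_min=-500.0, x_max=500.0,
                 b_min=200.0, b_max=2000.0, b_0=1000.0):
    # Solve (P_LP): decision variables are (x_1..x_N, t_1..t_N).
    z, p_b, p_s = map(np.asarray, (z, p_b, p_s))
    N = len(z)
    c = np.concatenate([np.zeros(N), np.ones(N)])   # minimize sum_i t_i

    rows, rhs = [], []
    # Epigraph constraints: slope * x_i + z_i * price <= t_i, 4 segments.
    for slope, price in [(p_b / eta_ch, p_b), (p_s * eta_dis, p_s),
                         (p_b * eta_dis, p_b), (p_s / eta_ch, p_s)]:
        for i in range(N):
            row = np.zeros(2 * N)
            row[i], row[N + i] = slope[i], -1.0
            rows.append(row)
            rhs.append(-z[i] * price[i])
    # Capacity: b_min - b_0 <= sum_{j<=i} x_j <= b_max - b_0 for all i.
    L = np.hstack([np.tril(np.ones((N, N))), np.zeros((N, N))])
    rows.extend(L);  rhs.extend([b_max - b_0] * N)
    rows.extend(-L); rhs.extend([b_0 - b_min] * N)

    bounds = [(x_min, x_max)] * N + [(None, None)] * N  # ramp; t_i free
    res = linprog(c, A_ub=np.asarray(rows), b_ub=np.asarray(rhs),
                  bounds=bounds, method="highs")
    return res.x[:N], res.fun
\end{verbatim}
Here the ramp constraint (e) enters through the variable bounds, while the lower-triangular block accumulates $\sum_{j\le i}x_j$ to express the capacity constraint (f).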
\begin{figure}[!htbp]
\center
\includegraphics[width=3.4in]{costlp.pdf}
\caption{\small{The cost function segment wise for positive and negative net load $z$ \cite{hashmi2018netmetering}. The decision variable is storage change in charge level, $x_i$, and cost function, $C_{nm}(i)$ is formed with 4 unique line segments.}}\label{costlp}
\end{figure}
The cost function for lossy storage operation alone under NEM would have two piecewise linear segments, and it would be linear for equal buying and selling prices of electricity with a lossless battery.
Authors in \cite{chouhan2016optimization, bradbury2014economic} present this case in their LP formulation.
This case could be obtained by simplifying the more general case depicted as $\text{P}_{\text{LP}}$ in Fig.~\ref{costlp}.
We make our code for formulating the optimal arbitrage problem using linear programming open source\footnote{{https://github.com/umar-hashmi/linearprogrammingarbitrage}}.
\section{Real-time implementation}
\label{lpsec4}
The previous section discussed optimal storage arbitrage under complete knowledge of future net loads and prices. In this section, we consider the setting where future values may be unknown. To that end, we first develop a forecast model for net load without storage (which includes inelastic consumer load and consumer distributed generation) and electricity price for future times, where the forecast is updated after each time step.
Specifically, we model the net load (including solar generation) using an AutoRegressive Moving Average (ARMA) model and forecast the electricity price using an AutoRegressive Integrated Moving Average (ARIMA) model.
The ARMA and ARIMA forecast models developed in \cite{hashmi2019arbitrage} are used in this work.
The forecast values are fed to a Model Predictive Control (MPC) scheme to identify the optimal modes of operation of storage for the current time instance. Any of the developed schemes from the previous section can be used for the optimization inside MPC. These steps (forecast and MPC) are repeated sequentially and highlighted in the online Algorithm~\ref{algLPuncertainty}: \texttt{ForecastMPClinearProgram}.
\begin{algorithm}
\small{\textbf{Storage Parameters}: {$\eta_{\text{ch}}, \eta_{\text{dis}}, \delta_{\max}, \delta_{\min}, b_{\max}, b_{\min}$, $b_0$}}.\\
\small{\textbf{Inputs}: {$h, N, T,i=0 $, Rolling horizon optimization time period $ N_{\text{opt}}$, ~ Historical inelastic load, renewable generation and electricity price data}}.
\begin{algorithmic}[1]
\State Use historical data to tune ARMA and ARIMA models,
\While{$i < N$}
\State Increment $i=i+1$,
\State Real-time electricity price value $p_{\text{elec}}(i)$ and load $z_i$,
\State Forecast $\hat{z}$ from time step $i+1$ to $i+ N_{\text{opt}}$ using ARMA,
\State Forecast $\hat{p}_{b}$ and $\hat{p}_{s}$ from time $i+1$ to $i+ N_{\text{opt}}$ using ARIMA,
\State Calculate $\hat{\kappa}$ as the ratio of $\hat{p}_{s}$ and $\hat{p}_{b}$,
\State Build LP matrices for time step $i$ to $N$,
\State Solve the Linear Optimization problem for forecast vectors,
\State Calculate ${b_i}^*= b_{i-1}+\hat{x}^*(1)$,
\State Update $b_0={b_i}^*$, the initial capacity of battery is updated.
\State Return ${b_i}^*$, ${x_i}^*$.
\EndWhile
\end{algorithmic}
\caption{\texttt{ForecastMPClinearProgram}}\label{algLPuncertainty}
\end{algorithm}
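For illustration, the receding-horizon loop of Algorithm~\ref{algLPuncertainty} can be sketched as below; \texttt{forecast\_z} and \texttt{forecast\_p} stand in for the fitted ARMA/ARIMA models and \texttt{arbitrage\_lp} for the LP sketch of Section~\ref{lpsec3}. All names here are ours, not from the released code.
\begin{verbatim}
def forecast_mpc(z_obs, p_obs, forecast_z, forecast_p, N, N_opt,
                 kappa=0.5, b_0=1000.0, b_min=200.0, b_max=2000.0):
    # Receding-horizon storage control (a simplified Algorithm 1).
    b, levels = b_0, []
    for i in range(N):
        z_hat = forecast_z(z_obs[: i + 1], N_opt)  # ARMA net-load forecast
        p_hat = forecast_p(p_obs[: i + 1], N_opt)  # ARIMA price forecast
        x_star, _ = arbitrage_lp(z_hat, p_hat, kappa * p_hat,
                                 b_min=b_min, b_max=b_max, b_0=b)
        b += x_star[0]        # apply only the first decision, then re-plan
        levels.append(b)
    return levels
\end{verbatim}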
\section{Numerical Results}
\label{lpsec5}
For the numerical evaluation, we use battery parameters listed in Table~\ref{parametersBatlp}.
The performance indices used for evaluating simulations are:\\
$\quad \bullet$ \textit{Arbitrage Gains:} denotes the gains (made in the absence of load and renewable generation) or the reduction in the cost of consumption (made in their presence) due to storage performing energy arbitrage under time-varying electricity prices,\\
$\quad \bullet$ \textit{Cycles of operation}: In our prior work \cite{hashmi2018long} we develop a mechanism to measure the number of cycles of operation based on depth-of-discharge (DoD) of energy storage operational cycles. Equivalent cycles of 100\% DoD are identified. This index provides information about how much the battery is operated.
We use xC-yC notation to represent the relationship between ramp rate and battery capacity. xC-yC implies battery takes 1/x hours to charge and 1/y hours to discharge completely.
We perform a sensitivity analysis with (a) four battery models with different ramping capabilities, listed in Table~\ref{parametersBatlp}, and (b) 5 levels of the ratio of selling and buying price of electricity, i.e., $\kappa\in\{1, 0.75, 0.5, 0.25, 0\}$. In this work we assume the selling price equals the product of the scalar $\kappa$ and the buying price of electricity.
\begin{table}[!htbp]
\small
\caption {Battery Parameters}
\label{parametersBatlp}
\vspace{-10pt}
\begin{center}
\begin{tabular}{| c | c|}
\hline
$b_{\min}$& 200Wh\\
\hline
$b_{\max}$ & 2000 Wh\\
\hline
$b_{0}$ & 1000 Wh\\
\hline
$\eta_{\text{ch}}=\eta_{\text{dis}}$ & 0.95\\
\hline
$\delta_{\max} = - \delta_{\min}$ & 500 W for 0.25C-0.25C,\\ (4 battery model)&1000 W for 0.5C-0.5C\\
& 2000 W for 1C-1C, \\& 4000 W for 2C-2C\\
\hline
\end{tabular}
\hfill\
\end{center}
\end{table}
The optimization problem, $\text{P}_{\text{LP}}$, is solved using \texttt{linprog} in MATLAB\footnote{{https://www.mathworks.com/help/optim/ug/linprog.html}}. \texttt{linprog} uses dual-simplex \cite{andersen1995presolving} (default) algorithm.
\subsection{Deterministic Simulations}
The price data for our simulations in this subsection is taken from NYISO \cite{nyiso}. The load and generation data are taken from measurements collected in Madeira, Portugal.
Fig.~\ref{determinCase} shows the electricity price and energy consumption (includes inelastic load and rooftop solar generation) data used for deterministic simulations.
\begin{figure}[!htbp]
\center
\includegraphics[width=2.9in]{determinCase.pdf}
\vspace{-5pt}
\caption{\small{Electricity price and consumer net load data used for deterministic simulations.}}\label{determinCase}
\end{figure}
Table~\ref{resultOnlydeterminLP} and Table~\ref{resultLoaddeterminLP} lists the energy storage arbitrage without and with energy consumption load for the electricity price data shown in Fig.~\ref{determinCase}.
The observations are:\\
$\quad \bullet$ The value of storage in the presence of load and renewable generation increases as $\kappa$ decreases. Note that for $\kappa=0$, storage-only operation provides zero gain (see Table~\ref{resultOnlydeterminLP}); however, for the same buying and selling price levels, the consumer would make significant gains when the storage is operated with inelastic load and renewable generation (see Table~\ref{resultLoaddeterminLP}),\\
$\quad \bullet$ The cycles of operation for faster ramping batteries are higher compared to slower ramping batteries. This implies that faster ramping batteries should be compared in terms of gains per cycle with slower ramping batteries. Observing only gains could be misleading.\\
$\quad \bullet$ As $\kappa$ decreases, the cycles of operation decrease; thus decreasing $\kappa$ affects storage operation similarly to $\eta_{\text{fric}}$ in reducing cycles of operation.\\
$\quad \bullet$ Note that for $\kappa=1$, the arbitrage gains with and without load are the same. This observation is in sync with the claims made in \cite{hashmi2017optimal}, where the authors observe that storage operation becomes independent of load and renewable variation for the equal buying and selling price case.
\begin{table}[!htbp]
\small
\caption {Performance indices for only storage}
\label{resultOnlydeterminLP}
\vspace{-10pt}
\begin{center}
\begin{tabular}{| c| c| c|c| c| }
\hline
$\kappa$ & 2C-2C & 1C-1C & 0.5C-0.5C & 0.25C-0.25C \\
\hline
\hline
\multicolumn{5}{|c|}{Arbitrage gains in \$ cents for 1 day} \\
\hline
1 & 44.445 & 33.760 & 25.636 & 17.536 \\
0.75 & 18.842 & 17.668 & 14.077 & 9.921 \\
0.5 & 7.682 & 7.088 & 6.253 & 5.219 \\
0.25 & 2.513 & 2.502 & 2.483 & 2.422 \\
0 & 0 & 0 & 0 & 0 \\
\hline
\multicolumn{5}{|c|}{Cycles of operation for 1 day} \\
\hline
1 & 6.586 & 3.856 & 2.237 & 1.620 \\
0.75 & 2.401 & 1.742 & 1.484 & 0.795 \\
0.5 & 1.539 & 1.099 & 0.714 & 0.386 \\
0.25 & 0.182 & 0.171 & 0.164 & 0.160 \\
0 & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
\hfill\
\end{center}
\end{table}
\begin{table}[!htbp]
\small
\caption {Performance indices for storage + load}
\label{resultLoaddeterminLP}
\vspace{-10pt}
\begin{center}
\begin{tabular}{| c| c| c|c| c| }
\hline
$\kappa$ & 2C-2C & 1C-1C & 0.5C-0.5C & 0.25C-0.25C \\
\hline
\hline
\multicolumn{5}{|c|}{Arbitrage gains in \$ cents for 1 day} \\
\hline
1 & 44.445 & 33.760 & 25.636 & 17.536 \\
0.75 & 37.848 & 33.023 & 26.469 & 18.337 \\
0.5 & 39.045 & 34.105 & 27.696 & 19.344 \\
0.25 & 40.272 & 35.332 & 28.923 & 20.351 \\
0 & 41.500 & 36.560 & 30.150 & 21.358 \\
\hline
\multicolumn{5}{|c|}{Cycles of operation for 1 day} \\
\hline
1 & 6.586 & 3.835 & 2.263 & 1.620 \\
0.75 & 5.986 & 4.039 & 2.338 & 1.652 \\
0.5 & 5.986 & 4.033 & 2.364 & 1.660 \\
0.25 & 5.986 & 4.033 & 2.364 & 1.660 \\
0 & 5.986 & 4.033 & 2.364 & 1.660 \\
\hline
\end{tabular}
\hfill\
\end{center}
\end{table}
\begin{figure}[!htbp]
\center
\includegraphics[width=3.2in]{onlyStore.pdf}
\vspace{-5pt}
\caption{\small{Performance indices for only storage performing arbitrage with varying $\kappa$ for 1 day.}}\label{onlyStore}
\end{figure}
\begin{figure}[!htbp]
\center
\includegraphics[width=3.2in]{loadStore.pdf}
\vspace{-5pt}
\caption{\small{Storage along with inelastic load and renewable generation with varying $\kappa$ for 1 day.}}\label{loadStore}
\end{figure}
Fig.~\ref{onlyStore} and Fig.~\ref{loadStore} show the arbitrage gains, gains per cycle and cycles of operation with varying $\kappa$ for storage performing arbitrage without and with inelastic load and renewable generation. The gains per cycle are nearly flat with varying $\kappa$. Slow ramping batteries, 0.25C-0.25C and 0.5C-0.5C, have significantly higher gains per cycle compared to faster ramping batteries, 1C-1C and 2C-2C.
\subsection{Results with Uncertainty}
The forecast model is generated for load with solar generation and for electricity price.
The ARMA based forecast uses 9 weeks of data (starting from 29th May, 2019) for training and generates forecasts for the next week.
\texttt{ForecastMPClinearProgram} is implemented in a receding-horizon fashion.
The electricity price data used for this numerical experiment is taken from CAISO \cite{enonlinecalifornia} for the same days of load data.
To compare the effect of forecasting net load and electricity prices against perfect information, we present average arbitrage gains and cycles of operation starting from 1st June 2019. The rolling horizon optimization period, $N_{\text{opt}}$, is selected as 1 day. This implies that at 13:00 h today, the storage control decisions are based on forecasts of parameter variations until 13:00 h tomorrow.
\begin{table}[!htbp]
\small
\caption {Deterministic arbitrage gains for only storage}
\label{deterministictOnlyLP}
\vspace{-10pt}
\begin{center}
\begin{tabular}{| c| c| c|c| c| }
\hline
$\kappa$ & 2C-2C & 1C-1C & 0.5C-0.5C & 0.25C-0.25C \\
\hline
\hline
\multicolumn{5}{|c|}{Arbitrage gains in \$ for 1 week} \\
\hline
1 & 9.411 & 7.059 & 4.784 & 3.065 \\
0.75 & 5.729 & 4.491 & 3.168 & 2.082 \\
0.5 & 3.166 & 2.550 & 1.833 & 1.217 \\
0.25 & 1.124 & 0.941 & 0.688 & 0.456 \\
0 & 0 & 0 & 0 & 0 \\
\hline
\multicolumn{5}{|c|}{Cycles of operation for 1 week} \\
\hline
1 & 58.729 & 37.257 & 21.324 & 12.107 \\
0.75 & 23.462 & 16.341 & 10.746 & 7.519 \\
0.5 & 12.689 & 9.770 & 7.579 & 6.174 \\
0.25 & 7.727 & 6.229 & 4.558 & 3.464 \\
0 & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
\hfill\
\end{center}
\end{table}
\begin{table}[!htbp]
\small
\caption {Deterministic arbitrage gains for storage with load}
\label{deterministictLoadLP}
\vspace{-10pt}
\begin{center}
\begin{tabular}{| c| c| c|c| c| }
\hline
$\kappa$ & 2C-2C & 1C-1C & 0.5C-0.5C & 0.25C-0.25C \\
\hline
\hline
\multicolumn{5}{|c|}{Arbitrage gains in \$ for 1 week} \\
\hline
1 & 9.411 & 7.059 & 4.784 & 3.065 \\
0.75 & 7.462 & 6.269 & 4.540 & 3.025 \\
0.5 & 6.641 & 5.987 & 4.468 & 3.019 \\
0.25 & 6.350 & 5.904 & 4.451 & 3.019 \\
0 & 6.313 & 5.902 & 4.451 & 3.019 \\
\hline
\multicolumn{5}{|c|}{Cycles of operation for 1 week} \\
\hline
1 & 58.700 & 37.294 & 21.324 & 12.107 \\
0.75 & 28.583 & 20.809 & 14.382 & 10.229 \\
0.5 & 19.296 & 16.629 & 13.007 & 9.971 \\
0.25 & 16.591 & 15.348 & 12.498 & 9.968 \\
0 & 16.041 & 15.201 & 12.484 & 9.968 \\
\hline
\end{tabular}
\hfill\
\end{center}
\end{table}
\begin{table}[!htbp]
\small
\caption {Stochastic indices for only storage}
\label{StochasticOnlyLP2}
\vspace{-10pt}
\begin{center}
\begin{tabular}{| c| c| c|c| c| }
\hline
$\kappa$ & 2C-2C & 1C-1C & 0.5C-0.5C & 0.25C-0.25C \\
\hline
\hline
\multicolumn{5}{|c|}{Arbitrage gains in \$ for 1 week} \\
\hline
1 & 6.035 & 4.684 & 3.469 & 3.000\\
0.75 & 5.024 & 4.118 & 3.081 & 1.904\\
0.5 & 3.004 & 2.367 & 1.692 & 1.110 \\
0.25 & 1.067 & 0.891 & 0.618 & 0.442 \\
\hline
\multicolumn{5}{|c|}{Cycles of operation for 1 week} \\
\hline
1 & 64.323 & 38.979 & 22.622 & 12.850 \\
0.75 & 24.870 & 16.169 & 10.570 & 7.733 \\
0.5 & 11.393 & 8.891 & 7.013 & 6.099 \\
0.25 & 6.429 & 5.557 & 4.359 & 3.395 \\
\hline
\end{tabular}
\hfill\
\end{center}
\end{table}
\begin{table}[!htbp]
\small
\caption {Stochastic indices for storage with load}
\label{StochasticLoadLP2}
\vspace{-10pt}
\begin{center}
\begin{tabular}{| c| c| c|c| c| }
\hline
$\kappa$ & 2C-2C & 1C-1C & 0.5C-0.5C & 0.25C-0.25C \\
\hline
\hline
\multicolumn{5}{|c|}{Arbitrage gains in \$ for 1 week} \\
\hline
1 & 6.034 & 4.684 & 3.496 & 3.000 \\
0.75 & 4.827 & 4.075 & 3.400 & 2.987 \\
0.5 & 4.168 & 3.711 & 3.292 & 2.975 \\
0.25 & 4.204 & 3.943 & 3.348 & 3.002 \\
0 & 4.427 & 3.896 & 3.396 & 3.009 \\
\hline
\multicolumn{5}{|c|}{Cycles of operation for 1 week} \\
\hline
1 & 64.322 & 38.979 & 22.622 & 12.850 \\
0.75 & 41.613 & 30.322 & 19.948 & 11.980 \\
0.5 & 34.658 & 27.627 & 18.744 & 11.348 \\
0.25 & 31.429 & 26.370 & 18.476 & 11.396 \\
0 & 32.958 & 28.255 & 19.845 & 11.372 \\
\hline
\end{tabular}
\hfill\
\end{center}
\end{table}
The deterministic results for without and with load are presented in Table~\ref{deterministictOnlyLP} and Table~\ref{deterministictLoadLP}. Compare the deterministic results with stochastic results presented in Table~\ref{StochasticOnlyLP2} and Table~\ref{StochasticLoadLP2}.
The primary numerical observations are:\\
$\quad \bullet$ The effect of uncertainty on arbitrage gains is greater for a faster ramping battery than for a slower ramping battery; this observation is in sync with the conclusions drawn in \cite{yize2018stochastic},\\
$\quad \bullet$ Combining storage with inelastic load and renewable generation provides greater gains for decreasing $\kappa$. Furthermore, the effect of uncertainty for lower $\kappa$ is smaller compared to higher values of $\kappa$.\\
$\quad \bullet$ The profitability of operating only storage deteriorates sharply as $\kappa$ decreases. For the storage-only case with zero selling price ($\kappa=0$), no arbitrage is possible and the gain remains zero.
\section{Conclusion}
\label{lpsec6}
We formulate the energy storage arbitrage problem using linear programming.
The linear programming formulation is possible due to the piecewise linear convex cost function.
In this formulation we consider: (a) net-metering compensation (with selling price at best equal to buying price), i.e., $\kappa_i \in [0,1]$, (b) inelastic load, (c) consumer renewable generation, (d) storage charging and discharging losses, (e) a storage ramping constraint, and (f) a storage capacity constraint.
By conducting extensive numerical simulations, we analyze the sensitivity of energy storage batteries to varying ramp rates and a varying ratio of selling and buying price of electricity. We observe that the value of storage in the presence of load and renewable generation increases as the ratio of selling and buying price decreases.
We also perform stochastic simulations for real-time implementation and compare the stochastic results to the deterministic ones. Net load and electricity price are modeled with autoregressive models for model predictive control. The effect of uncertainty on slow ramping batteries is observed to be lower compared to faster ramping batteries. Furthermore, as $\kappa$ decreases, arbitrage gains become more immune to uncertainty.
In future work, we aim to control the cycles of operation of the battery by tuning the friction coefficient for different $\kappa$ values, such that the battery is not over-used, which would otherwise lead to a reduction in battery operational life.
\bibliographystyle{IEEEtran}
\section{Introduction}
As a fundamental problem in statistics, hypothesis testing plays a key role in general scientific discovery areas such as anomaly detection and model criticism.
The goal of hypothesis testing is to determine which one among given hypotheses is true within a certain error probability level.
Unfortunately, the data-generating distributions are usually unknown so that it is difficult to obtain the optimal test leveraging the Neyman-Pearson Lemma~\citep{neyman1933ix}.
Although training samples from target distributions are often available, we cannot obtain reliable estimates of the underlying distributions for small-sample cases.
Therefore, hypothesis testing for small-sample scenarios is a challenging task, and it commonly arises in many practical applications such as health care~\citep{Schober19}, anomaly detection~\citep{Chandola2009, savage2014anomaly, ahmed2016survey}, and change-point detection~\citep{Vincent08,xie2020sequential,Liyanchange_21,xie2022minimaxdec}.
Various robust detectors are developed in existing literature to capture the distributional uncertainty such as distribution mis-specification and adversarial data perturbation.
They are constructed by seeking the worst-case detectors over distributional uncertainty sets that contain candidate distributions under the null and alternative hypotheses.
The earliest work on robust detectors dates back to Huber's masterpiece~\citep{Peter65}, which constructs the uncertainty sets as probability balls centered around nominal distributions using total-variation distance.
However, it is computationally intractable to obtain the corresponding optimal tests, especially for multivariate settings.
Recent works~\citep{Levy09, gul2017minimax} construct the uncertainty sets as balls using KL-divergence centered around empirical distributions such that all distributions within the sets are supported only on training samples.
We remark that for small-sample scenarios, this choice is too restrictive since there is a non-negligible probability that new samples are outside the support of training samples.
We consider a data-driven robust hypothesis testing problem when the sample size is small.
A closely related work~\citep{gao18robust} constructs the distributional uncertainty sets using Wasserstein distance.
The Wasserstein distance takes account into the geometry of sample space and therefore is suitable for comparing distributions with non-overlapping supports, and hedging against data outliers~\citep{gao2016distributionally}.
However, the Wasserstein robust test is not without limitation.
As shown in \citep{xie2021robust}, the induced optimal test is a likelihood ratio test between \emph{least favorable distributions}~(LFDs) supported on training samples, which may not be applicable if testing samples do not have the same support as those training samples.
Although it is possible to extend LFDs into the whole sample space using kernel smoothing~\citep{xie2021robust} or $k$-nearest neighbors~\citep{wang2021classconditioned} algorithms, the corresponding test may not achieve good performances as the distributional estimates are not necessarily reliable.
References~\citep{Zhongchang21,sun2022kernel} address the drawback of Wasserstein distance by constructing uncertainty sets using maximum mean discrepancy~(MMD).
To maintain computational tractability, their goal is to find the optimal detector so that asymptotically the type-II error exponent is maximized and the type-I error is below a threshold.
However, this test may not be optimal in small-sample cases and, as demonstrated in some numerical experiments (see Section~\ref{Sec:application}), the MMD robust test may not achieve the best performance.
In this paper, we develop a new robust testing framework leveraging the idea of distributionally robust optimization~(DRO) with Sinkhorn distance~\citep{wang2021sinkhorn}, which, as a variant of Wasserstein distance with stochastic transport mapping, is defined as the cheapest transport cost between two distributions with entropic regularization~\citep{cuturi2013sinkhorn}.
Specifically, we study the robust hypothesis testing problem by seeking the worst-case detector over ambiguity sets so that the risk is minimized, where the ambiguity sets are constructed using Sinkhorn distance centered around the empirical distributions from samples.
The resulting worst-case detector is well-defined for samples outside the training set, which usually leads to better generalization performance than the previous framework.
Our contributions are summarized as follows.
\begin{enumerate}
\item
We formulate the problem of robust hypothesis testing as an infinite-dimensional optimization that seeks the optimal detector and LFDs jointly, which is challenging to solve in general.
We derive its dual reformulation leveraging tools from distributionally robust optimization, which enables us to derive the optimal detector in two steps:
\begin{enumerate}
\item[(I)]
Given a fixed pair of distributions, we first find the corresponding optimal detector.
\item[(II)]
Then we find the LFDs by solving an infinite-dimensional convex optimization. We leverage the Monte-Carlo approximation idea to solve a finite-dimensional problem instead.
\end{enumerate}
\item
Various numerical experiments using both synthetic and real datasets are conducted to demonstrate the competitive performance of our proposed method.
\end{enumerate}
The rest of this paper is organized as follows.
Section~\ref{Section:setup} describes the main formulation and a brief introduction to Sinkhorn DRO,
Section~\ref{Sec:methodology} develops the methodology for solving the robust hypothesis testing problem,
Section~\ref{Sec:application} reports several numerical results,
and Section~\ref{Sec:conclusion} provides some concluding remarks.
All omitted proofs and other details can be found in the Appendix.
\textit{Notations:}
Denote $\mathbb{F}$ as the set $\{0,1\}$.
The base of the logarithm function $\log$ is $e$.
For any non-negative integer $N$, define $[N] := \{1, \ldots, N\}$.
Given a reference measure $\nu$ supported on $\Omega$ and a function $f:~\Omega\to\mathbb{R}$, define the essential supremum $\text{ess-sup}~f=\inf\{t:~\nu\{f(z)>t\}=0\}$.
We write $\mathbb{P}\ll\nu$ if the distribution $\mathbb{P}$ is absolutely continuous with respect to the measure $\nu$.
Denote by $\text{supp}(\mathbb{P})$ the support of the distribution $\mathbb{P}$.
\section{Problem Setup}\label{Section:setup}
Let $\Omega\subseteq\mathbb{R}^d$ be the sample space in which the observed samples take their values, and $\mathcal{P}(\Omega)$ be the set of all distributions supported on $\Omega$.
Denote by $\mathcal{P}_0, \mathcal{P}_1\subseteq\mathcal{P}(\Omega)$ the uncertainty sets under hypotheses $H_0$ and $H_1$, respectively.
Given two sets of training samples $\{x_1^k,\ldots,x_{n_k}^k\}$ generated from $\mathbb{P}_k\in\mathcal{P}_k$ for $k\in\mathbb{F}$, denote the corresponding empirical distributions as $\hat{\mathbb{P}}_k=\frac{1}{n_k}\sum_{i=1}^{n_k}\delta_{x_i^k}$.
For notational simplicity, assume that $n_0=n_1=n$; our formulation can be naturally extended to unequal sample sizes.
Given a new testing sample $\omega$, the goal of \emph{composite hypothesis testing} is to distinguish between the null hypothesis $H_0:~\omega\sim \mathbb{P}_0$ and the alternative hypothesis $H_1:~\omega\sim \mathbb{P}_1$, where $\mathbb{P}_k\in\mathcal{P}_k$ for $k\in\mathbb{F}$.
For a detector $T:~\Omega\to\mathbb{R}$, it accepts the null hypothesis $H_0$ when $T(\omega)\ge0$ and otherwise it accepts the alternative hypothesis $H_1$.
Under the Bayesian setting, the risk of this detector is quantified as the summation of type-I and type-II error:
\[
\mathcal{R}(T;\mathbb{P}_0,\mathbb{P}_1)=\mathbb{P}_0\{\omega: T(\omega)<0\} + \mathbb{P}_1\{\omega: T(\omega)\ge0\}.
\]
Since the objective function is highly non-convex, we replace it with its tight upper bound via convex approximations of the indicator function as discovered in \citep{nemirovski2007convex,xie2021robust}:
\[
\Phi(T;\mathbb{P}_0,\mathbb{P}_1) = \mathbb{E}_{\mathbb{P}_0}[\ell\circ(-T)(\omega)] + \mathbb{E}_{\mathbb{P}_1}[\ell\circ T(\omega)],
\]
where $\ell$ is a \emph{generating function} (see Definition~\ref{Def:generating}) so that it always holds that
\[
\Phi(T;\mathbb{P}_0,\mathbb{P}_1)\ge \mathcal{R}(T;\mathbb{P}_0,\mathbb{P}_1).
\]
\begin{definition}[Generating Function]\label{Def:generating}
A generating function $\ell:~\mathbb{R}\to\mathbb{R}_+\cup\{\infty\}$ is a non-negative valued, non-decreasing, convex function so that $\ell(0)=1$ and $\lim_{t\to-\infty}\ell(t)=0$.
\end{definition}
Table~\ref{tab:generating:function} lists some common choices of the generating function $\ell$ and the corresponding optimal detector; the first, second, and fourth of these have been considered in the existing literature \citep{goldenshluger2015hypothesis}, \citep{cheng2020classification}, and \citep{xie2021robust}, respectively.
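For reference, the four generating functions of Table~\ref{tab:generating:function} can be written as plain Python callables (a sketch in our own notation).
\begin{verbatim}
import numpy as np

GENERATING = {
    "exponential": lambda t: np.exp(t),
    "logistic":    lambda t: np.log1p(np.exp(t)) / np.log(2.0),
    "sq-hinge":    lambda t: np.maximum(t + 1.0, 0.0) ** 2,
    "hinge":       lambda t: np.maximum(t + 1.0, 0.0),
}
# Each satisfies ell(0) = 1, is non-decreasing and convex, and tends to 0
# as t -> -infinity, as required by Definition 1.
\end{verbatim}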
In this paper, we develop a minimax test that optimizes the worst-case risk function over all distributions within ambiguity sets $\mathcal{P}_0$ and $\mathcal{P}_1$:
\begin{equation}
\inf_{T}\sup_{\mathbb{P}_k\in\mathcal{P}_k, {k\in\mathbb{F}}}~\Phi(T;\mathbb{P}_0,\mathbb{P}_1),\label{Eq:minimax:test}
\end{equation}
where the sets $\mathcal{P}_k, {k\in\mathbb{F}}$ are formulated using Sinkhorn distance:
\begin{equation}\label{Eq:Sinkhorn:ambiguity}
\mathcal{P}_k = \{\mathbb{P}_k\in \mathcal{P}:~\mathcal{W}_{\varepsilon}(\hat{\mathbb{P}}_k, \mathbb{P}_k)\le \rho_k\}.
\end{equation}
The resulting worst-case distributions $\mathbb{P}_k^*, {k\in\mathbb{F}}$ in \eqref{Eq:minimax:test} are called the \emph{least favorable distributions}~(LFDs) in literature.
Leveraging results from \citep[Theorem~1]{xie2021robust}, we can argue that the approximation \eqref{Eq:minimax:test} is near optimal for developing the robust test to optimize $\mathcal{R}(T;\mathbb{P}_0,\mathbb{P}_1)$, the summation of the type-I and type-II errors.
\begin{remark}[Batched Testing]
When given a batch of $n_{\text{Te}}$ testing samples $\omega_1,\ldots,\omega_{n_{\text{Te}}}$ generated from the same distribution and a detector $T:~\Omega\to\mathbb{R}$, the decision is made based on the principle of majority vote, i.e., we accept the null hypothesis $H_0$ if
\[
{T}(\omega_1,\ldots,\omega_{n_{\text{Te}}}):=\frac{1}{n_{\text{Te}}}\sum_{i=1}^{n_{\text{Te}}}{T}(\omega_i)\ge0,
\]
As shown in \citep[Proposition~1]{xie2021robust}, both type-I and type-II error for batched testing procedure decrease exponentially fast to zero as the testing sample size $n_{\text{Te}}$ increases.
\end{remark}
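In code, the batched decision rule is simply the following (a sketch; the detector \texttt{T} is assumed to be given as a callable).
\begin{verbatim}
import numpy as np

def batched_decision(T, batch):
    # Accept H0 (return 0) iff the batch-averaged detector value is >= 0.
    return 0 if np.mean([T(w) for w in batch]) >= 0 else 1
\end{verbatim}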
\subsection{Preliminaries about Sinkhorn DRO}\label{Sec:preliminary:SDRO}
In the following we review some details about Sinkhorn DRO.
The Sinkhorn distance is a variant of the Wasserstein distance based on entropic regularization.
\begin{definition}[Sinkhorn Distance]\label{Def:Sinkhorn}
Consider any two distributions $\mathbb{P},\mathbb{Q}\in \mathcal{P}(\Omega)$ and let $\nu\in\mathcal{M}(\Omega)$ be a reference measure such that $\mathbb{Q}\ll \nu$.
For regularization parameter $\varepsilon>0$, define the Sinkhorn distance between two distributions $\mathbb{P}$ and $\mathbb{Q}$ as
\[
\mathcal{W}_{\varepsilon}(\mathbb{P},\mathbb{Q})=\inf_{\gamma\in\Gamma(\mathbb{P},\mathbb{Q})}~\left\{\mathbb{E}_{(x,y)\sim\gamma}\left[c(x,y)\right] +\varepsilon H(\gamma\|\mathbb{P}\otimes\nu)\right\},
\]
where $\Gamma(\mathbb{P},\mathbb{Q})$ denotes the set of joint distributions whose first and second marginal distributions are $\mathbb{P}$ and $\mathbb{Q}$, respectively, $c(x,y)$ stands for the cost function, and $H(\gamma\|\mathbb{P}\otimes\nu)$ denotes the relative entropy between the distribution $\gamma$ and the measure $\mathbb{P}\otimes\nu$:
\[
H(\gamma\|\mathbb{P}\otimes\nu) = \int \log\left(\frac{\mathrm{d} \gamma(x,y)}{\mathrm{d} \mathbb{P}(x)\mathrm{d}\nu(y)}\right)\mathrm{d}\gamma(x,y).
\]
\end{definition}
With a measurable function $f:~\Omega\to\mathbb{R}$, we associate the value
\begin{equation}\label{Eq:primal:Sinkhorn}
V = \sup_{\mathbb{P}\in\mathcal{P}}~\mathbb{E}_{\mathbb{P}}[f].
\end{equation}
We construct the ambiguity set $\mathcal{P}$ using Sinkhorn distance, i.e., $\mathcal{P}=\{\mathbb{P}\in\mathcal{P}(\Omega):~\mathcal{W}_{\varepsilon}(\hat{\mathbb{P}},\mathbb{P})\le \rho\}$ for some nominal distribution $\hat{\mathbb{P}}$.
For instance, the nominal distribution $\hat{\mathbb{P}}$ can be an empirical distribution from samples.
Define the dual problem of \eqref{Eq:primal:Sinkhorn} as
\begin{equation}\label{Eq:dual:Sinkhorn}
V_D = \inf_{\lambda\ge0}~\lambda\bar{\rho} + \lambda\varepsilon\int \log\left(
\mathbb{E}_{\mathbb{Q}_{x,\varepsilon}}\left[
e^{f(z)/(\lambda\varepsilon)}
\right]
\right)\mathrm{d}\hat{\mathbb{P}}(x),
\end{equation}
where we define the constant
\[
\bar{\rho} = \rho + \varepsilon\int \log\left(\int e^{-c(x,z)/\varepsilon}\mathrm{d}\nu(z)\right)\mathrm{d}\hat{\mathbb{P}}(x)
\]
and the kernel probability distribution $\mathbb{Q}_{x,\varepsilon}$ as
\[
\mathrm{d} \mathbb{Q}_{x,\varepsilon}(z) = \frac{e^{-c(x,z)/\varepsilon}}{\int e^{-c(x,u)/\varepsilon}\mathrm{d}\nu(u)}\mathrm{d}\nu(z).
\]
The distribution $\mathbb{Q}_{x,\varepsilon}$ can be viewed as a posterior distribution of the random variable $Z$ given $X=x$, in which the prior distribution of $Z$ is proportional to $\nu$, and the likelihood model $P(X=x\mid Z=z)\propto e^{-c(x,z)/\varepsilon}$.
A strong duality result for the problem \eqref{Eq:primal:Sinkhorn} is provided in Theorem~\ref{Theorem:Sinkhorn} to obtain a more tractable form.
\begin{theorem}[Reformulation of Sinkhorn DRO]\label{Theorem:Sinkhorn}
Assume that
\begin{enumerate}
\item
$\nu\{z:~0\le c(x,z)<\infty\}=1$ for $\hat{\mathbb{P}}$-almost every $x$;
\item
$\int e^{-c(x,z)/\varepsilon}\mathrm{d}\nu(z)<\infty$ for $\hat{\mathbb{P}}$-almost every $x$;
\item
$\bar{\rho}\ge0$.
\end{enumerate}
Then it holds that $V = V_D$.
Additionally, when
\begin{equation}
\begin{aligned}
&\bar{\rho}':=\bar{\rho} + \varepsilon\int
\log\left(
\mathbb{E}_{\mathbb{Q}_{x,\varepsilon}}[1_A]
\right)\mathrm{d}\hat{\mathbb{P}}(x)<0,
\end{aligned}
\end{equation}
where the set $A:=\{z:~f(z)=\text{ess-sup}_{\nu}~f\}$, the constraint set $\mathcal{P}$ for problem \eqref{Eq:primal:Sinkhorn} is active and the worst case distribution $\mathbb{P}^*$ can be expressed as
\begin{equation}\label{Eq:worst:P:*}
\mathrm{d} \mathbb{P}^*(z) = \int \left(
\frac{e^{f(z)/(\lambda^*\varepsilon)}\mathrm{d}\mathbb{Q}_{x,\varepsilon}(z)}{\mathbb{E}_{\mathbb{Q}_{x,\varepsilon}}[e^{f(z)/(\lambda^*\varepsilon)}]}
\right)\mathrm{d}\hat{\mathbb{P}}(x),
\end{equation}
where $\lambda^*>0$ is the optimal solution for the problem~\eqref{Eq:dual:Sinkhorn}.
\end{theorem}
The finite-dimensional convex problem \eqref{Eq:dual:Sinkhorn} can be efficiently solved based on bisection search with Monte-Carlo sampling on the kernel distribution $\mathbb{Q}_{x,\varepsilon}$.
In particular, the generation of samples from $\mathbb{Q}_{x,\varepsilon}$ is easy for many cases.
For example, when the cost function $c(x,y)=\frac{1}{2}\|x-y\|^2$ and $\nu$ is the Lebesgue measure in $\mathbb{R}^d$, it holds that $\mathbb{Q}_{x,\varepsilon}=\mathcal{N}(x, \varepsilon I_d)$.
When the explicit density form of $\mathbb{Q}_{x,\varepsilon}$ is not available, we can also finish this task using the acceptance-rejection method~\citep{asmussen2007stochastic}.
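As an illustration, for the quadratic cost the inner expectation in \eqref{Eq:dual:Sinkhorn} can be estimated by Monte Carlo as in the following Python sketch; here \texttt{f} is assumed to map an $m\times d$ array of samples to an array of $m$ values, the adjusted radius $\bar{\rho}$ is assumed given, and a log-sum-exp trick is used for numerical stability. Since the dual objective is convex in $\lambda$, the outer minimization can then be carried out by a one-dimensional search such as bisection.
\begin{verbatim}
import numpy as np

def dual_objective(lam, xs, f, eps, rho_bar, m=1000, rng=None):
    # Monte-Carlo estimate of the dual objective at a fixed lambda = lam > 0.
    rng = np.random.default_rng(rng)
    total = 0.0
    for x in xs:                       # points of the empirical distribution
        z = x + np.sqrt(eps) * rng.standard_normal((m, x.size))  # Q_{x,eps}
        vals = f(z) / (lam * eps)
        vmax = vals.max()              # stable log E[exp(f / (lam * eps))]
        total += vmax + np.log(np.mean(np.exp(vals - vmax)))
    return lam * rho_bar + lam * eps * total / len(xs)
\end{verbatim}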
From the expression \eqref{Eq:worst:P:*}, we see that the regularization parameter $\varepsilon$ quantifies the smoothness of the worst-case distribution $\mathbb{P}^*$.
Specifically, when the optimal Lagrangian multiplier $\lambda^*>0$, the worst-case distribution maps each $x\in\text{supp}(\hat{\mathbb{P}})$ to a distribution whose density function with respect to $\nu$ at $z$ is proportional to $\exp\left(\frac{1}{\varepsilon}(f(z) - \lambda^*c(x,z))\right)$.
When $\varepsilon\to0$, the distribution $\mathbb{P}^*$ is discrete, and one recovers the classical Wasserstein DRO formulation.
When $\varepsilon\to\infty$, each sample is spread out uniformly, so that the distribution $\mathbb{P}^*$ becomes a uniform measure with respect to $\nu$.
See \citep{wang2021sinkhorn} for a detailed discussion.
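As an illustration of this procedure, the following Python sketch evaluates a Monte-Carlo estimate of the dual objective \eqref{Eq:dual:Sinkhorn} for the quadratic cost, for which $\mathbb{Q}_{x,\varepsilon}=\mathcal{N}(x,\varepsilon I_d)$, and minimizes it over $\lambda$. It is a minimal sketch: the loss \texttt{f}, the constant \texttt{rho\_bar}, and the nominal samples are placeholders, and a bounded scalar search is used here in place of bisection.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def dual_objective(lam, xs, f, eps, rho_bar, m=500, seed=0):
    """Monte-Carlo estimate of the dual objective (Eq:dual:Sinkhorn):
    lam*rho_bar + lam*eps * mean_x log E_{Q_{x,eps}}[exp(f(Z)/(lam*eps))],
    with c(x,z) = 0.5*||x-z||^2 and nu = Lebesgue, so Q_{x,eps} = N(x, eps*I)."""
    rng = np.random.default_rng(seed)
    logs = []
    for x in xs:
        z = x + np.sqrt(eps) * rng.standard_normal((m, x.size))
        vals = f(z) / (lam * eps)
        # log-sum-exp for a numerically stable log E[exp(.)]
        logs.append(np.logaddexp.reduce(vals) - np.log(m))
    return lam * rho_bar + lam * eps * np.mean(logs)

# toy usage: placeholder loss f and nominal samples xs
f = lambda z: np.sum(z ** 2, axis=-1)
xs = np.random.default_rng(1).standard_normal((20, 2))
res = minimize_scalar(lambda lam: dual_objective(lam, xs, f, 0.1, 0.5),
                      bounds=(1e-3, 1e3), method="bounded")
\end{verbatim}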
\if\submitversion1
\begin{table*}[tb]
\centering
\caption{Common choices of the generating function, together with the corresponding optimal detectors and detector risk functions.}
\label{tab:generating:function}
\rowcolors{2}{white}{gray!30}
\begin{tabular}{p{3cm}p{2.8cm}p{4.2cm}p{2.8cm}}
\toprule
$\ell(t)$ & $T^*$ & $\psi(r)$ & $1 - \Phi^*(\mathbb{P}_0,\mathbb{P}_1)/2$ \\
\midrule
$\exp(t)$ & $\log\sqrt{\mathrm{d}\mathbb{P}_0/\mathrm{d}\mathbb{P}_1}$ & $2\sqrt{r(1-r)}$& $H^2(\mathbb{P}_0,\mathbb{P}_1)$ \\
$\log(1+\exp(t))/\log 2$ & $\log(\mathrm{d}\mathbb{P}_0/\mathrm{d}\mathbb{P}_1)$ & $-(r\log r + (1-r)\log(1-r))/\log 2$& $\text{JS}(\mathbb{P}_0,\mathbb{P}_1)/\log 2$ \\
$(t+1)_+^2$ & $1 - 2(\mathrm{d}\mathbb{P}_0/\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1))$ & $4r(1-r)$ & $\chi^2(\mathbb{P}_0,\mathbb{P}_1)$ \\
$(t+1)_+$ & $\text{sign}(\mathrm{d}\mathbb{P}_0-\mathrm{d}\mathbb{P}_1)$ & $2\min(r,1-r)$& $\text{TV}(\mathbb{P}_0,\mathbb{P}_1)$\\
\bottomrule
\end{tabular}
\end{table*}
\fi
\if\submitversion2
\begin{table*}[tb]
\centering
\caption{Common choices of the generating function, together with the corresponding optimal detectors and detector risk functions.}
\label{tab:generating:function}
\rowcolors{2}{white}{gray!30}
\begin{tabular}{p{3cm}p{3cm}p{3cm}p{3cm}}
\toprule
$\ell(t)$ & $T^*$ & $\psi(r)$ & $1 - \Phi^*(\mathbb{P}_0,\mathbb{P}_1)/2$ \\
\midrule
$\exp(t)$ & $\log\sqrt{\mathrm{d}\mathbb{P}_0/\mathrm{d}\mathbb{P}_1}$ & $2\sqrt{r(1-r)}$& $H^2(\mathbb{P}_0,\mathbb{P}_1)$ \\
$\log(1+\exp(t))/\log 2$ & $\log(\mathrm{d}\mathbb{P}_0/\mathrm{d}\mathbb{P}_1)$ & $H_2(r)/\log 2$& $\text{JS}(\mathbb{P}_0,\mathbb{P}_1)/\log 2$ \\
$(t+1)_+^2$ & $1 - 2(\mathrm{d}\mathbb{P}_0/\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1))$ & $4r(1-r)$ & $\chi^2(\mathbb{P}_0,\mathbb{P}_1)$ \\
$(t+1)_+$ & $\text{sign}(\mathrm{d}\mathbb{P}_0-\mathrm{d}\mathbb{P}_1)$ & $2\min(r,1-r)$& $\text{TV}(\mathbb{P}_0,\mathbb{P}_1)$\\
\bottomrule
\end{tabular}
\end{table*}
\fi
\section{Methodology}\label{Sec:methodology}
\begin{center}
\begin{algorithm}[!t]
\caption{
Algorithm for Sinkhorn Robust Detector
}
\label{Alg:Sinkhorn}
\begin{algorithmic}[1]
\REQUIRE{Training samples $\{x_i^k\}_{i\in[n],k\in\mathbb{F}}$ and a testing sample $\omega$.}
\FOR{$i=1,2,\ldots,n$}
\STATE{Generate $m$ samples from $\mathbb{Q}_{i,\varepsilon}^k, k\in\mathbb{F}$ defined in \eqref{Eq:Q:i:Reg:k} and construct the corresponding empirical distribution $\hat{\mathbb{Q}}_{i,\varepsilon}^k$.}
\STATE{$\hat{\mathbb{G}}_{i,\varepsilon} \leftarrow (\hat{\mathbb{Q}}_{i,\varepsilon}^0 + \hat{\mathbb{Q}}_{i,\varepsilon}^1)/2$.}
\STATE{Calculate weighted importance ratio function $r_{i,\varepsilon}^k$ valued on $\text{supp}(\hat{\mathbb{G}}_{i,\varepsilon})$ for $k\in\mathbb{F}$.}
\ENDFOR
\STATE{Obtain $\{\ell_{i,k}\}_{i\in[n],k\in\mathbb{F}}$ as the optimal solution to problem \eqref{Eq:Psi*:approximated}.}
\STATE{Recover LFDs $\mathbb{P}_k^*, k\in\mathbb{F}$ according to \eqref{Eq:Psi*:approximated:d}.}\\
\textbf{Return} the $k$-NN detector valued on $\omega$ according to Remark~\ref{Remark:kNN:detector}.
\end{algorithmic}
\end{algorithm}
\end{center}
In this section, we first develop a strong duality theorem to reformulate problem \eqref{Eq:minimax:test}; we then leverage Monte-Carlo approximation to solve the reformulated problem, from which we obtain the robust detector.
The overall procedure is summarized in Algorithm~\ref{Alg:Sinkhorn}.
\subsection{Step 1: Exchange of Infimum and Supremum}
Similar to the discussion in Section~\ref{Sec:preliminary:SDRO}, for $k\in\mathbb{F}$, we define the constant
\begin{equation}
\bar{\rho}_k=\rho_k + \varepsilon\int \log\left(\int e^{-c(x,z)/\varepsilon}\mathrm{d}\nu(z)\right)\mathrm{d}\hat{\mathbb{P}}_k(x), \label{Eq:orho:k}
\end{equation}
where $\rho_k$ is introduced in \eqref{Eq:Sinkhorn:ambiguity} to quantify the size of the Sinkhorn ambiguity set. In addition, we define kernel probability distribution $\mathbb{Q}_{i,\varepsilon}^k$ as
\begin{equation}
\mathrm{d}\mathbb{Q}_{i,\varepsilon}^k(z)=\frac{e^{-c(x_i^k,z)/\varepsilon}}{\int e^{-c(x_i^k,u)/\varepsilon}\mathrm{d}\nu(u)}\mathrm{d}\nu(z).\label{Eq:Q:i:Reg:k}
\end{equation}
Proposition~\ref{Strong:Duality:inf:sup} presents our strong duality theorem, which enables us to switch the $\inf$ and $\sup$ operators in \eqref{Eq:minimax:test}.
It reveals that a robust detector can be obtained by finding the optimal detector for fixed distributions $\mathbb{P}_k, k\in\mathbb{F}$, and then finding the LFDs to maximize the risk over those detectors.
An expression of the optimal detector for fixed distributions is provided in Lemma~\ref{Lemma:optimal:detector}.
\begin{proposition}[Strong Duality]\label{Strong:Duality:inf:sup}
Assume that for $x\in\{x_i^k\}_{i\in[n],k\in\mathbb{F}}$, it holds that $\nu\{z:~0\le c(x,z)<\infty\}=1$ and $\int e^{-c(x,z)/\varepsilon}\mathrm{d}\nu(z)<\infty$.
When $\bar{\rho}_k\ge0$ for $k\in\mathbb{F}$, it holds that
\begin{equation}
\inf_{T:~\Omega\to\mathbb{R}}~\sup_{\substack{\mathbb{P}_k\in\mathcal{P}_k,\\{k\in\mathbb{F}}}}~\Phi(T;\mathbb{P}_0,\mathbb{P}_1)
=
\sup_{\substack{\mathbb{P}_k\in\mathcal{P}_k,\\{k\in\mathbb{F}}}}\Phi^*(\mathbb{P}_0,\mathbb{P}_1),\label{Eq:relation:strong:duality}
\end{equation}%
where $\Phi^*(\mathbb{P}_0,\mathbb{P}_1)$ is the infimum of $\Phi(T;\mathbb{P}_0,\mathbb{P}_1)$ over all detectors $T:~\Omega\to\mathbb{R}$.
\end{proposition}
\begin{lemma}[Optimal Detector~{\citep[Theorem 2]{gao18robust}}]\label{Lemma:optimal:detector}
For fixed $\mathbb{P}_k, k\in\mathbb{F}$, it holds that
\[
\Phi^*(\mathbb{P}_0,\mathbb{P}_1) = \int
[\psi\circ r(\omega)]
\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1)(\omega),
\]
where the ratio
\begin{equation}\label{Eq:ratio:r}
r(\omega) = \frac{\mathrm{d}\mathbb{P}_0}{\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1)}(\omega),
\end{equation}
and
\[
\psi(r) := \min_{t\in\mathbb{R}}~\{
\psi_t(r)\triangleq
(1-r)\ell(t) + r\ell(-t)\},\quad r\in[0,1].
\]
An optimal detector for $\Phi^*(\mathbb{P}_0,\mathbb{P}_1)$ is $T^*(\omega)=-t^*(\omega)$, where
\[
t^*(\omega):=\argmin_{t\in\mathbb{R}}~\{(1-r(\omega))\ell(t) + r(\omega)\ell(-t)\}. %
\]
\end{lemma}
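For concreteness, consider the exponential generating function $\ell(t)=\exp(t)$ from the first row of Table~\ref{tab:generating:function}; the inner minimization in Lemma~\ref{Lemma:optimal:detector} can then be carried out in closed form. Setting the derivative of $\psi_t(r)=(1-r)e^t+re^{-t}$ with respect to $t$ to zero gives $e^{2t^*}=r/(1-r)$, hence
\begin{align*}
t^* = \frac12\log\frac{r}{1-r},
\qquad
\psi(r) = (1-r)e^{t^*} + re^{-t^*} = 2\sqrt{r(1-r)}.
\end{align*}
Since $r(1-r)\,[\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1)]^2 = \mathrm{d}\mathbb{P}_0\,\mathrm{d}\mathbb{P}_1$, this yields $\Phi^*(\mathbb{P}_0,\mathbb{P}_1)=2\int\sqrt{\mathrm{d}\mathbb{P}_0\,\mathrm{d}\mathbb{P}_1}$, so that $1-\Phi^*(\mathbb{P}_0,\mathbb{P}_1)/2$ recovers the Hellinger term $H^2(\mathbb{P}_0,\mathbb{P}_1)$ in the table, with the convention $H^2(\mathbb{P}_0,\mathbb{P}_1)=1-\int\sqrt{\mathrm{d}\mathbb{P}_0\,\mathrm{d}\mathbb{P}_1}$.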
\begin{IEEEproof}[Proof Sketch of Proposition~\ref{Strong:Duality:inf:sup}]
The idea to show the strong duality result is as follows.
We first reformulate the infimum of $\Phi$ among all detectors (see Lemma~\ref{Lemma:optimal:detector}), and then give the dual reformulation on the worst-case risk problem $\sup\{\Phi^*(\mathbb{P}_0,\mathbb{P}_1):~\mathbb{P}_k\in\mathcal{P}_k, k\in\mathbb{F}\}$
\if\fullversion2
(see Lemma~\ref{Strong:Duality:Optimal:Detector} in Appendix~\ref{Appendix:proof}).
\fi
\if\fullversion1
We highlight that the reference \citep{gao18robust} has developed a similar result,
\fi
\if\fullversion2
We highlight that the reference \citep{gao18robust} has developed a similar result as in Lemma~\ref{Strong:Duality:Optimal:Detector},
\fi
in which the ambiguity sets are constructed using Wasserstein distance.
However, their results cannot be directly applied because the LFDs of Wasserstein DRO are supported on a finite number of points, so the dual problem is finite-dimensional and the duality of finite-dimensional convex programming applies.
In contrast, our dual problem is infinite-dimensional as the LFDs are absolutely continuous.
We leverage a non-trivial conic duality theorem in \citep[Theorem~2.165]{bonnans2013perturbation} to argue that the strong duality still holds.
Finally, we reformulate the inner supremum problem on the LHS of \eqref{Eq:relation:strong:duality} by applying the strong duality result of Sinkhorn DRO in Theorem~\ref{Theorem:Sinkhorn}, and then construct primal optimal solutions to show the duality gap between LHS and RHS in \eqref{Eq:relation:strong:duality} can be arbitrarily small.
\end{IEEEproof}
\subsection{Step 2: Finding Least Favorable Distributions}
Next, we discuss how to find LFDs by solving the following infinite-dimensional optimization problem
\begin{equation}\label{Eq:Psi:*}
\begin{aligned}
\sup_{\mathbb{P}_k\in\mathcal{P}, {k\in\mathbb{F}}}&\quad \Phi^*(\mathbb{P}_0,\mathbb{P}_1)\\
\mbox{s.t.}&\quad \mathcal{W}_{\varepsilon}(\hat{\mathbb{P}}_k, \mathbb{P}_k)\le \rho_k, {k\in\mathbb{F}}.
\end{aligned}
\end{equation}
The formulation \eqref{Eq:Psi:*} is intractable as stated because the decision variables are infinite-dimensional.
Moreover, it cannot be solved using standard tools from Sinkhorn DRO, because the objective function $\Phi^*(\mathbb{P}_0,\mathbb{P}_1)$ is not linear in $\mathbb{P}_0$ and $\mathbb{P}_1$.
To tackle this challenge, we first identify that this problem can be reformulated as a conic optimization problem with entropic constraints.
\begin{lemma}[Reformulation of \eqref{Eq:Psi:*}]\label{Lemma:reformulate:Phi*}
Under the setting of Proposition~\ref{Strong:Duality:inf:sup}, the problem \eqref{Eq:Psi:*} can be reformulated as
\begin{subequations}\label{Eq:Psi*:reformulated}
\begin{align}
\sup_{\substack{\ell_{i,k}\ge0,\\ i\in[n], k\in\mathbb{F}}}&\quad
\int \psi\left(
\frac{\mathrm{d}\mathbb{P}_0}{\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1)}\right)\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1)\label{Eq:Psi*:reformulated:a}\\
\mbox{s.t.}&\quad \frac{\varepsilon}{n}\sum_{i=1}^n\int \ell_{i,k}(z)\log(\ell_{i,k}(z))\mathrm{d}\mathbb{Q}_{i,\varepsilon}^k(z)\le \bar{\rho}_k,\label{Eq:Psi*:reformulated:b}\\
&\quad \int \ell_{i,k}(z)\mathrm{d}\mathbb{Q}_{i,\varepsilon}^k(z)=1, \label{Eq:Psi*:reformulated:c}\\
&\quad \mathrm{d}\mathbb{P}_k=\frac{1}{n}\sum_{i=1}^n\ell_{i,k}\mathrm{d}\mathbb{Q}_{i,\varepsilon}^k.\label{Eq:Psi*:reformulated:d}
\end{align}
\end{subequations}
\end{lemma}
To derive the reformulation \eqref{Eq:Psi*:reformulated}, we first apply the definition of the Sinkhorn distance, so that the decision variables become the joint distributions of $\hat{\mathbb{P}}_k$ and $\mathbb{P}_k$, denoted $\gamma_k$, $k\in\mathbb{F}$.
By the disintegration theorem, the joint distribution can be represented as $\gamma_k=\frac{1}{n}\sum_{i=1}^n\delta_{x_i^k}\otimes\gamma_{i,k}$,
where $\gamma_{i,k}$ stands for the conditional distribution of $\gamma_k$ given that its first marginal equals $x_i^k$.
Define the importance ratio function $\ell_{i,k}:~\Omega\to\mathbb{R}_+$ as $\ell_{i,k}(z)=\mathrm{d}\gamma_{i,k}(z)/\mathrm{d}\mathbb{Q}_{i,\varepsilon}^k(z)$. Substituting the expressions of $\bar{\rho}_k$ and $\mathbb{Q}_{i,\varepsilon}^k$ implies the desired formulation.
\begin{remark}[Interpretation of Sinkhorn Detector]
The constraint of problem \eqref{Eq:Psi:*} can also be reformulated as
\begin{align*}
\frac{\varepsilon}{n}\sum_{i=1}^nD_{\text{KL}}(\gamma_{i,k}\|\mathbb{Q}_{i,\varepsilon}^k)&\le \bar{\rho}_k,\quad
\mathbb{P}_k = \frac{1}{n}\sum_{i=1}^n\gamma_{i,k},
\end{align*}
where $\gamma_{i,k}$ is the conditional transport mapping given that the first marginal equals $x_i^k$.
In other words, the Sinkhorn DRO formulation \eqref{Eq:Psi:*} can be understood as a \emph{generalized KL-divergence constrained} problem.
When $\bar{\rho}_k=0$ for $k\in\mathbb{F}$, the constraint set only contains one feasible solution
$
\overline{\mathbb{P}}_k = \frac{1}{n}\sum_{i=1}^n\mathbb{Q}_{i,\varepsilon}^k,
$
which can be viewed as a non-parametric smooth density estimate constructed from the samples $\{x_i^{k}\}_i$.
Consequently, the optimal detector is the one based on the estimated densities $\overline{\mathbb{P}}_0$ and $\overline{\mathbb{P}}_1$.
\end{remark}
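For the quadratic cost $c(x,z)=\frac{1}{2}\|x-z\|^2$ with Lebesgue reference measure, $\mathbb{Q}_{i,\varepsilon}^k=\mathcal{N}(x_i^k,\varepsilon I_d)$, so $\overline{\mathbb{P}}_k$ is precisely a Gaussian kernel density estimate with bandwidth $\sqrt{\varepsilon}$; a minimal Python sketch (the function name is illustrative):
\begin{verbatim}
import numpy as np

def pbar_density(z, xs, eps):
    """Density of Pbar = (1/n) * sum_i N(x_i, eps*I_d) at a point z:
    a Gaussian kernel density estimate with bandwidth sqrt(eps)."""
    d = xs.shape[1]
    sq = np.sum((xs - z) ** 2, axis=1)   # ||x_i - z||^2 for each sample x_i
    return np.mean(np.exp(-sq / (2 * eps))) / (2 * np.pi * eps) ** (d / 2)
\end{verbatim}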
The support of decision variables $\ell_{i,k}$ is the same as $\text{supp}(\mathbb{Q}_{i,\varepsilon}^k)$, making the reformulated problem \eqref{Eq:Psi*:reformulated} still infinite-dimensional and therefore intractable.
Instead, we solve a sample-based estimate of this problem, leveraging Monte-Carlo approximation.
For each $i$ and $k$, we sample $m$ points from $\mathbb{Q}_{i,\varepsilon}^k$ and denote the corresponding empirical distribution as $\hat{\mathbb{Q}}_{i,\varepsilon}^k$.
If we directly replace the kernel distribution $\mathbb{Q}_{i,\varepsilon}^k$ with its empirical counterpart in the formulation \eqref{Eq:Psi*:reformulated}, the LFDs $\mathbb{P}_0$ and $\mathbb{P}_1$ will have non-overlapping supports, and consequently the optimal detector is not well-defined.
We leverage the idea of importance sampling to derive the Monte-Carlo approximated problem.
Define the probability measure $\hat{\mathbb{G}}_{i,\varepsilon}$ as $\hat{\mathbb{G}}_{i,\varepsilon} = (\hat{\mathbb{Q}}_{i,\varepsilon}^0 + \hat{\mathbb{Q}}_{i,\varepsilon}^1)/2$, and let $r_{i,\varepsilon}^k:~\Omega\to\mathbb{R}_+$ be the weighted importance ratio function between the kernel distributions:
\[
r_{i,\varepsilon}^k(z):=\frac{2\mathrm{d}\mathbb{Q}_{i,\varepsilon}^k}{\mathrm{d}(\mathbb{Q}_{i,\varepsilon}^0 + \mathbb{Q}_{i,\varepsilon}^1)}(z).
\]
As a consequence, the problem \eqref{Eq:Psi*:reformulated} can be approximated as a finite-dimensional optimization problem:
\begin{subequations}\label{Eq:Psi*:approximated}
\begin{align}
\sup_{\substack{\ell_{i,k}\in\mathbb{R}_+^{2m},\\ i\in[n], k\in\mathbb{F}}}&\quad
\int \psi\left(
\frac{\mathrm{d}\mathbb{P}_0}{\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1)}\right)\mathrm{d}(\mathbb{P}_0+\mathbb{P}_1)\\
\mbox{s.t.}&\quad \frac{\varepsilon}{n}\sum_{i=1}^n\int \ell_{i,k}\log(\ell_{i,k})r_{i,\varepsilon}^k\mathrm{d}\hat{\mathbb{G}}_{i,\varepsilon}\le \bar{\rho}_k,\label{Eq:Psi*:approximated:b}\\
&\quad \int \ell_{i,k}r_{i,\varepsilon}^k\mathrm{d}\hat{\mathbb{G}}_{i,\varepsilon}=\int r_{i,\varepsilon}^k\mathrm{d}\hat{\mathbb{G}}_{i,\varepsilon},\label{Eq:Psi*:approximated:c}\\
&\quad \mathrm{d}\mathbb{P}_k=\frac{\frac{1}{n}\sum_{i=1}^n\ell_{i,k}r_{i,\varepsilon}^k\mathrm{d} \hat{\mathbb{G}}_{i,\varepsilon}}{
\frac{1}{n}\sum_{i=1}^n\int r_{i,\varepsilon}^k\mathrm{d}\hat{\mathbb{G}}_{i,\varepsilon}
}.\label{Eq:Psi*:approximated:d}
\end{align}
\end{subequations}
The approximated problem \eqref{Eq:Psi*:approximated} always contains a feasible solution $\ell_{i,k}=1, i\in[n],k\in\mathbb{F}$.
In addition, constraints \eqref{Eq:Psi*:approximated:b}-\eqref{Eq:Psi*:approximated:d} are consistent estimates of the constraints \eqref{Eq:Psi*:reformulated:b}-\eqref{Eq:Psi*:reformulated:d}, respectively.
It is an open question whether the optimal value of the approximated problem~\eqref{Eq:Psi*:approximated} is a consistent estimate of the optimal value of \eqref{Eq:Psi:*}.
The technical difficulty stems from the infinite-dimensional nature of \eqref{Eq:Psi:*}, so the discussion of properties of sample-approximation estimators in \citep[Section~5.1]{shapiro2021lectures} does not apply.
We hope to address this issue in future work.
Since the importance ratio $\ell_{i,k}$ is supported on $\text{supp}(\hat{\mathbb{G}}_{i,\varepsilon})$, which consists of $2m$ points, the LFDs $\mathbb{P}_0^*$ and $\mathbb{P}_1^*$ from~\eqref{Eq:Psi*:approximated} have a common support consisting of $2mn$ points.
The approximated problem can be efficiently solved using common off-the-shelf software such as CVX~\citep{cvx,gb08}.
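As a concrete illustration, the following sketch solves \eqref{Eq:Psi*:approximated} in CVXPY (a Python analogue of CVX; the problem data are placeholders) for the total-variation generating function $\ell(t)=(t+1)_+$, for which $\psi(r)=2\min(r,1-r)$ and the objective reduces to $2\int\min(\mathrm{d}\mathbb{P}_0,\mathrm{d}\mathbb{P}_1)$, so the program is DCP-representable:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def sinkhorn_lfd_tv(R0, R1, eps, rho0, rho1):
    """Solve (Eq:Psi*:approximated) for l(t) = (t+1)_+, psi(r) = 2 min(r,1-r).
    R0, R1: (n, 2m) arrays of the weighted importance ratios r_{i,eps}^k
    evaluated at the common support points of G_hat_{i,eps}."""
    n, M = R0.shape                              # M = 2m points per group
    L0 = cp.Variable((n, M), nonneg=True)
    L1 = cp.Variable((n, M), nonneg=True)
    cons = []
    for L, R, rho in [(L0, R0, rho0), (L1, R1, rho1)]:
        # entropic constraint (b); cp.entr(x) = -x*log(x)
        cons.append((eps / n) * cp.sum(cp.multiply(R / M, -cp.entr(L))) <= rho)
        # normalization constraints (c), one per sample index i
        cons += [R[i] @ L[i] / M == R[i].sum() / M for i in range(n)]
    Z0, Z1 = R0.sum() / (n * M), R1.sum() / (n * M)
    p = cp.multiply(L0, R0) / (n * M * Z0)       # atoms of P_0, cf. (d)
    q = cp.multiply(L1, R1) / (n * M * Z1)       # atoms of P_1
    prob = cp.Problem(cp.Maximize(2 * cp.sum(cp.minimum(p, q))), cons)
    prob.solve()                                 # needs an exponential-cone solver
    return p.value, q.value
\end{verbatim}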
\if\fullversion2
In addition, we provide visualization of LFDs and impact of regularization parameters using a toy example in Appendix~\ref{Appendix:visual}.
\fi
\begin{remark}[$k$-NN Detector]\label{Remark:kNN:detector}
When making an inference on a target sample $\omega$ outside the support of $\mathbb{P}_0^*$ and $\mathbb{P}_1^*$, the approximated detector is defined using a weighted $k$-NN classifier:
\[
\tilde{T}(\omega) = \frac{1}{K}\sum_{s=1}^{K}~q_sT^*(x_s^*),
\]
where $x_1^*,\ldots,x_K^*$ are the $K$ nearest neighbors of $\omega$ among the support points of $\mathbb{P}^*_k$, $k\in\mathbb{F}$, and $q_s$ is inversely proportional to $\|x_s^*-\omega\|$.
We take $K=5$ during numerical simulations.
\end{remark}
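A minimal Python sketch of this detector (with the weights normalized to sum to one, which is one consistent reading of the remark; names are illustrative):
\begin{verbatim}
import numpy as np

def knn_detector(omega, support, T_vals, K=5):
    """Weighted k-NN extension of the optimal detector: average T* over
    the K support points of the LFDs nearest to omega, with weights
    inversely proportional to the distances."""
    dist = np.linalg.norm(support - omega, axis=1)
    idx = np.argsort(dist)[:K]
    w = 1.0 / np.maximum(dist[idx], 1e-12)   # guard against zero distance
    return float((w / w.sum()) @ T_vals[idx])
\end{verbatim}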
\begin{remark}[Complexity of \eqref{Eq:Psi*:approximated}]
The complexity of solving \eqref{Eq:Psi*:approximated} is independent of the data dimension $d$, as we only require the importance ratio functions evaluated on samples from $\hat{\mathbb{G}}_{i,\varepsilon}, i\in[n]$ as inputs to the convex program.
Moreover, as the constraint set is a ball in a weighted $\ell_1$-norm, convex optimization theory~\citep{nemirovski2001lectures} shows that when the objective is Lipschitz in the $\ell_1$-norm, the computational complexity is $O(\log(mn))$, which is nearly independent of the sample size.
This is true for all except the first case in Table~\ref{tab:generating:function}.
\end{remark}
\section{Applications}\label{Sec:application}
In this section, we apply our proposed method in three applications: composite hypothesis testing, digit classification, and change-point detection.
We take the cost function $c(x,y)=\frac{1}{2}\|x-y\|_2^2$, and the reference measure $\nu$ for Sinkhorn distance is chosen to be the Lebesgue measure.
For benchmark comparison, we also report the performance of other tests, such as the Wasserstein robust test~\citep{xie2021robust}, the MMD robust test~\citep{Zhongchang21}, and the neural-network classification logit test~\citep{cheng2020classification}.
Hyper-parameters such as the radii of uncertainty sets and the entropic regularization parameter are selected using cross validation.
\if\fullversion1
Further experimental details are provided in \citep[Appendix~\ref{Appendix:experiment}]{jie22isit_full}.
\fi
\if\fullversion2
Further experimental details are provided in Appendix~\ref{Appendix:experiment}.
\fi
\if\submitversion1
\begin{figure}[t!]
\centering
\includegraphics[height=0.165\textwidth]{Plot_power_HDGM_NTr.pdf}
\includegraphics[height=0.165\textwidth]{Plot_power_MNIST_NTe.pdf}
\caption{Testing risk for HDGM Data (left) and MNIST Data (right).}
\label{fig:risk:HDGM}
\end{figure}
\fi
\if\submitversion2
\begin{figure}[t!]
\centering
\includegraphics[height=0.3\textwidth]{Plot_power_HDGM_NTr.pdf}
\includegraphics[height=0.3\textwidth]{Plot_power_MNIST_NTe.pdf}
\caption{Testing risk for HDGM Data (left) and MNIST Data (right).}
\label{fig:risk:HDGM}
\end{figure}
\fi
\subsection{Composite Hypothesis Testing}\label{Sub:composite:hypothesis}
Assume samples from the two hypotheses are generated from high-dimensional Gaussian mixture models~(HDGM) following the distributions $\sum_{i=1}^2\frac{1}{2}\mathcal{N}((-1)^ie,I_D)$ and $\sum_{i=1}^2\frac{1}{2}\mathcal{N}((-1)^if,I_D)$, respectively, where $D=100$, $e$ is the unit vector in $\mathbb{R}^D$, and $f$ is a vector whose first half of entries equal $1$ and whose remaining half equal $-1$.
We find the optimal detectors based on $n\in[10]$ training samples from each distribution,
and then evaluate the average mis-classification rate on $1000$ new testing samples from each distribution.
This experiment is repeated for $10$ independent trials.
The results are reported in Fig.~\ref{fig:risk:HDGM}, from which we can see that our proposed method performs best among all baselines, suggesting that it is useful in small-sample scenarios.
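For reproducibility, the training data can be generated as in the following sketch (here we read $e$ as the all-ones vector, which is an assumption on the intended notation):
\begin{verbatim}
import numpy as np

def sample_hdgm(n, center, rng):
    """n samples from the mixture 0.5*N(-center, I) + 0.5*N(center, I)."""
    signs = rng.choice([-1.0, 1.0], size=(n, 1))
    return signs * center + rng.standard_normal((n, center.size))

rng = np.random.default_rng(0)
D = 100
e = np.ones(D)                                            # hypothesis H0
f = np.concatenate([np.ones(D // 2), -np.ones(D // 2)])   # hypothesis H1
x0, x1 = sample_hdgm(10, e, rng), sample_hdgm(10, f, rng)
\end{verbatim}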
\subsection{MNIST Digits Classification}
Next, we examine the performance in the task of digits classification.
We randomly select five images from the MNIST dataset~\citep{lecun10} for digits $1$ and $2$ as training samples.
Then we divide test images from the same class into batches, each consisting of $n_{\text{Te}}\in[10]$ samples.
We compute the mis-classification rates for $1000$ randomly selected batches, and repeat the experiment for $10$ independent trials.
The results are reported in Fig.~\ref{fig:risk:HDGM}, from which we can see that the risk of our proposed method decays quickly to zero as the testing batch size increases, and that it significantly outperforms the others.
\begin{table}[t!]
\centering
\footnotesize
\caption{
Detection power for the task of change-point detection in four synthetic datasets.
For each instance the experiment is repeated for $100$ independent trials.
Thresholds for all methods are calibrated so that the significance level is $\alpha=0.05$.
}
\vspace{1mm}
\rowcolors{2}{white}{gray!30}
\begin{tabular}{p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}}
\toprule
& Case 1 & Case 2 & Case 3 & Case 4\\
\midrule
NN & 0.12 & 0.40 & 0.37 & {\bf 0.58}\\
WDRO& 0.66 & 0.75 & 0.42 & 0.45\\
SDRO& {\bf 0.69} & {\bf 0.82} & {\bf 0.53} &0.56\\
\bottomrule
\end{tabular}%
\label{tab:MSRC:full}%
\vspace{-1em}
\end{table}%
\subsection{Offline Change-point Detection}
Finally, we investigate the performance for offline change-point detection.
Suppose a series of samples is given over a time horizon $T=200$, and we set the change point at $K=100$.
The goal is to detect the change-point based on given samples.
The detection procedure is as follows. Take a sliding window of size $\omega=20$.
For any candidate change time $t$, we treat the samples in $[t-\omega,t-1]$ and $[t,t+\omega-1]$ as two groups of observations and solve for the LFDs $\mathbb{P}_0^*$ and $\mathbb{P}_1^*$, based on which we calculate the detection statistic $D_t=-\tilde{T}(\omega_t)$.
We compute the CUSUM-type~\citep{Liyanchange_21} recursive detection statistic $S_t=\max\{0, S_{t-1} + D_t\}$.
A change is detected if $S_t$ exceeds a pre-specified threshold.
Thresholds for all methods are calibrated so that the false alarm rate is controlled within $\alpha=0.05$.
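The scan can be summarized by the following sketch, where \texttt{detector\_fn} is a placeholder wrapping Algorithm~\ref{Alg:Sinkhorn} (it fits the LFDs on the two windows and evaluates the resulting detector on the current sample) and \texttt{threshold} stands for the calibrated value:
\begin{verbatim}
import numpy as np

def detect_change(samples, detector_fn, w=20, threshold=5.0):
    """Offline sliding-window scan with the CUSUM-type recursion
    S_t = max(0, S_{t-1} + D_t), where D_t = -T_tilde(omega_t)."""
    S, alarms = 0.0, []
    for t in range(w, len(samples) - w):
        pre, post = samples[t - w:t], samples[t:t + w]   # two sample groups
        D_t = -detector_fn(pre, post, samples[t])
        S = max(0.0, S + D_t)
        if S > threshold:
            alarms.append(t)
    return alarms
\end{verbatim}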
\if\fullversion2
We consider four cases of distribution changes using synthetic datasets; the details are deferred to Appendix~\ref{Appendix:experiment}.
\fi
\if\fullversion1
We consider four cases of distribution changes using synthetic datasets; the details are deferred to \citep[Appendix~\ref{Appendix:experiment}]{jie22isit_full}.
\fi
Table~\ref{tab:MSRC:full} reports the testing power, i.e., the probability of successfully detecting a change when one exists, for the various methods, averaged over $100$ independent trials.
It shows that the Sinkhorn robust test captures the difference between the pre- and post-change distributions well, except in the last case, where NN slightly outperforms the Sinkhorn test.
\section{Conclusion}\label{Sec:conclusion}
We developed a data-driven approach to the problem of robust hypothesis testing in the small-sample scenario.
In particular, we proposed a distributionally robust optimization formulation that optimizes the worst-case risk over all distributions within ambiguity sets defined via the Sinkhorn distance.
Generalizing this approach to other settings, such as type-I-error constrained tests or multiple hypothesis tests, is an interesting direction for future research.
\if\submitversion1
\twocolumn
\balance
\clearpage
\bibliographystyle{IEEEtran}
It is expected on general grounds that the classical description
of space-time geometry
is modified at very short length scales through quantum effects.
An interesting approach towards quantum geometry is based on
quantized symplectic spaces, whose structure is similar to quantum mechanical
phase space. Many examples of this type have been studied,
starting with the fuzzy sphere $S^2_N$
\cite{Madore:1991bw,hoppe1982QuaTheMasRelSurTwoBouStaPro},
the fuzzy torus $T^2_N$ and more elaborate 2-dimensional
spaces \cite{Arnlind:2010kw}, self-intersecting spaces such as
squashed $\C P^2$ \cite{Steinacker:2014lma}, and many more.
A general class is provided by quantized
coadjoint orbits of compact semi-simple Lie groups. Many classical features of the underlying symplectic space are encoded
in their quantized version, which is based on the algebra of matrices $End(\cH)$
acting on a finite-dimensional Hilbert space $\cH$.
Of course, the notion of an algebra is not sufficient to define a geometry,
which should also contain a metric structure.
This extra structure arises in the context of Yang-Mills matrix models such as the IIB or IKKT model \cite{Ishibashi:1996xs}, which define a gauge theory on such fuzzy spaces.
In this context, a fuzzy space is specified by a {\em set of hermitian matrices}
$X^a$ for $a=1,...,D$. These matrices not only generate the algebra of ``functions''
$End(\cH)$, but also naturally define a matrix Laplacian $\Box = \delta^{ab} [X_a,[X_b,.]] $,
and a Dirac-type operator $\slashed{D} = \Gamma_a[X^a,.]$ where $\Gamma_a$ are
suitable Clifford or Gamma matrices.
However, rather than focusing on the spectral geometry\footnote{see e.g. \cite{Glaser:2019lcd} for related work in that context.}
as in \cite{connes1995noncommutative},
we will emphasize a more direct approach based on
(quasi-) coherent states defined through the matrices $X^a$,
which provide a direct access to an underlying space $\cM$.
The obvious question is how to recover or extract the classical
geometry underlying these quantized or ``fuzzy'' spaces, defined by the
matrices $X^a$.
For special cases such as quantized
coadjoint orbits, one can construct {\em a sequence} of similar matrices
$X^a_{(N)} \in End(\cH_N)$, and show that the commutative description is recovered in the
limit $N\to\infty$.
This has led to the attitude that the geometrical
content of fuzzy spaces can only be obtained
in some semi-classical limit $N\to\infty$.
However, such a limit is not satisfactory
from a physics point of view, where one would like to
attach geometrical meaning to a given set of matrices $X^a$.
In particular, this is required to interpret numerical simulations of
Yang-Mills matrix models \cite{Nishimura:2019qal,Aoki:2019tby,Kim:2011cr,Anagnostopoulos:2020xai}, which are viewed as candidates for a quantum
theory of space-time and matter.
The purpose of the present paper is to establish a
natural framework of ``quantum geometry'', which can be
associated to {\em any} given set of $D$ hermitian matrices
without requiring any limit, and which may admit a {\em semi-classical}
or {\em almost-local} description
in some {\em regime}.
This is based on the previously introduced concept of quasi-coherent states
\cite{Ishiki:2015saa,Schneiderbauer:2016wub},
which can be associated to any set of hermitian matrices. The concept is
very much string-inspired \cite{Berenstein:2012ts},
and the quantum geometries are naturally viewed as
varieties or ``branes'' embedded in target space. Since the mathematical concepts
are very close to those of quantum mechanics, the name
``quantum geometry'' seems justified, even if that name is perhaps already over-loaded with
different meanings in the literature.
The framework nicely captures the standard examples of fuzzy spaces, but it is completely general.
Moreover, it naturally leads to an intrinsic concept of quantum K\"ahler geometry,
which is a special class of quantum geometries satisfying certain
conditions\footnote{This is distinct from
the approach in \cite{Ishiki:2016yjp}.}; there is no need to add any
structure by hand.
Of course for quantized coadjoint orbits, the coherent states are obtained easily
from the representation theory.
However, the present construction allows one to reconstruct the full K\"ahler structure
of the (quantum) space {\em without resorting to representation theory},
which is not known in general.
In the semi-classical limit, many of the structures and steps
have been considered before,
notably in work by Ishiki et al.\ \cite{Ishiki:2015saa,Ishiki:2016yjp} and in \cite{Schneiderbauer:2016wub,Berenstein:2012ts,deBadyn:2015sca,Karczmarek:2015gda}.
However, the novelty is in introducing a more abstract point of view.
We introduce the concept of an {\em abstract quantum space} $\cM$,
by considering the space of quasi-coherent states as a sub-variety
of $\C P^{N-1}$. This allows us to make concise statements
for finite $N$, and to give a clear conceptual correspondence
between finite matrix configurations and geometry,
based on a space $Loc(\cH)\subset End(\cH)$ of almost-local operators. The semi-classical
description applies in some infrared (IR) regime, while
the UV regime of matrix geometry displays a very different and stringy nature,
which is manifest in string states.
This framework also allows us to establish the existence of a
surjective quantization map for quantum K\"ahler manifolds, and to
make some non-trivial regularity statements about the
abstract quantum space $\cM$.
It is important to note that
the proposed framework is more than just some ad-hoc procedure:
by definition, the quasi-coherent states
provide an optimal basis where all matrices have minimal joint uncertainty,
i.e. they are simultaneously ``almost-diagonal''.
Such almost-commuting configurations are expected to play a dominant role in Yang-Mills matrix models. The approach is well suited for implementation on a
computer \cite{lukas_schneiderbauer_2016_45045,lukas_schneiderbauer_2019_2616687}, and
should provide a powerful tool to
understand and interpret the results of numerical simulations
of Yang-Mills matrix models.
This paper comprises 3 main parts. In section \ref{sec:quasicoherent}
we define the quasi-coherent states $|x\rangle$ for $x\in{\mathds{R}}^D$,
and study their properties as functions of $x\in{\mathds{R}}^D$.
Much of this section is more or less known in some form, but at least the
relation with solutions of the matrix-Yang-Mills equation is new.
In section \ref{sec:abstract-quantum}, we introduce the central concept of
an abstract quantum space $\cM\subset\C P^{N-1}$. This offers
a conceptually clear definition of almost-local operators
and the semi-classical regime. It also leads to a natural
concept of a real and complex quantum tangent space and quantum K\"ahler
manifolds. Some consequences are developed in section \ref{sec:coherent},
notably a quantization map for quantum K\"ahler manifolds.
These concepts are illustrated in a number of
examples in section \ref{sec:examples}, and
the relation with physical Yang-Mills matrix models is briefly discussed in
section \ref{sec:remarks}.
\section{Quasi-coherent states on ${\mathds{R}}^D$}
\label{sec:quasicoherent}
In this paper, a {\bf matrix configuration} will be a collection of $D$ hermitian matrices $X^a \in End(\cH)$
acting on some (separable) Hilbert space $\cH$. To avoid technical complications, we will assume that
$\cH\cong\C^N$ is finite-dimensional, apart from some illustrative infinite-dimensional examples.
Such a matrix configuration will be called {\bf irreducible} if every matrix which commutes
with all $X^a$ is proportional to the unit matrix. Equivalently, the algebra generated by the
$X^a$ is the full matrix algebra $End(\cH)$. This will be assumed throughout.
By definition, such an irreducible matrix configuration does not admit
any common eigenvectors $|\psi\rangle$, since otherwise $|\psi\rangle\langle \psi|$
would commute with all $X^a$. Nevertheless, we are mainly interested in matrix configurations which are ``almost-commuting'', in the sense that the
commutators $[X^a,X^b]$ are ``small''; these are expected
to be the dominant configurations in Yang--Mills matrix models such as the
IIB or IKKT model \cite{Ishibashi:1996xs}. We therefore wish to find
a set of states which are optimally adapted to the matrix configuration,
so that the $X^a$ are ``as diagonal as possible''. This may also be of interest
in different contexts.
With this in mind, we
associate to an irreducible matrix configuration $X^a$ and a point $x\in{\mathds{R}}^D$
the following
{\bf displacement Hamiltonian}\footnote{
As explained in section \ref{sec:eff-metric-MM}, $H_x$
can be interpreted in the IIB model as energy of a point--brane at $x$
on the background defined by the matrix configuration $X^a$.}
(cf. \cite{Ishiki:2015saa,Schneiderbauer:2016wub})
\begin{align}
H_x := \frac 12\sum_{a=1}^D\, (X^a-x^a\mbox{1 \kern-.59em {\rm l}})^2 \ .
\end{align}
This is a positive definite\footnote{To see positive-definiteness, assume that
$H_x|\psi\rangle = 0$; this implies $X^a |\psi\rangle =x^a|\psi\rangle$ for all $a$,
but then $[H_x,|\psi\rangle\langle\psi|] = 0$ in
contradiction with irreducibility.} hermitian operator on $\cH$,
which can be thought of as an analog to the shifted harmonic oscillator.
It allows to find optimally localized approximate eigenstates
for the given matrix configuration as follows.
Let $\lambda(x) > 0$ be the lowest eigenvalue of $H_x$.
A {\bf quasi-coherent state} $|x\rangle$ at $x$ is then defined following \cite{Ishiki:2015saa,Schneiderbauer:2016wub} as a normalized vector
$\langle x|x\rangle = 1$
in the eigenspace $E_x$ of $H_x$ with eigenvalue $\lambda(x)$,
\begin{align}
H_x |x\rangle = \lambda(x) |x\rangle \ .
\label{quasicoh-def}
\end{align}
We will assume for simplicity that $E_x$ is one-dimensional,
except possibly on some singular set $\cK\subset{\mathds{R}}^D$.
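These definitions are directly amenable to numerical evaluation. The following Python sketch computes $\lambda(x)$ and $|x\rangle$ from the lowest eigenpair of $H_x$; as a concrete matrix configuration we use one standard presentation of the fuzzy sphere $S^2_N$, with $X^a = J^a/\sqrt{j(j+1)}$ built from the spin-$j$ generators ($N=2j+1$), so that $\sum_a X^aX^a = \mbox{1 \kern-.59em {\rm l}}$:
\begin{verbatim}
import numpy as np

def su2_generators(N):
    """Spin-j generators, j = (N-1)/2, in the basis |j,m>, m = j,...,-j."""
    j = (N - 1) / 2
    m = j - np.arange(N)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)   # J_+
    return (jp + jp.T) / 2, (jp - jp.T) / (2 * 1j), np.diag(m)      # Jx, Jy, Jz

def quasi_coherent(Xs, x):
    """Lowest eigenpair of H_x = (1/2) sum_a (X^a - x^a)^2."""
    N = Xs[0].shape[0]
    H = sum((X - xa * np.eye(N)) @ (X - xa * np.eye(N))
            for X, xa in zip(Xs, x)) / 2
    vals, vecs = np.linalg.eigh(H)
    return vals[0], vecs[:, 0]        # lambda(x), |x>

N = 8
j = (N - 1) / 2
Xs = [J / np.sqrt(j * (j + 1)) for J in su2_generators(N)]  # fuzzy sphere
lam, psi = quasi_coherent(Xs, np.array([0.0, 0.0, 1.0]))
\end{verbatim}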
Clearly the quasi-coherent states $|x\rangle$ form a $U(1)$ bundle
\begin{align}
\cB \ \to \tilde{\mathds{R}}^D \qquad \mbox{over} \quad \tilde{\mathds{R}}^D := {\mathds{R}}^D \setminus\cK \ .
\label{sing-set-def}
\end{align}
Standard theorems \cite{rellich1969perturbation,kato2013perturbation}
ensure that $\lambda(x)$ and $E_x$ depend smoothly
on $x\in\tilde{\mathds{R}}^D$.
We can then choose some local section of $\cB$ near any given point $\xi\in\tilde{\mathds{R}}^D$,
denoted by $|x\rangle$.
Here $\cK$ is the set of points where the lowest eigenvalue of $H_x$ becomes degenerate.
If $\lambda(x)$ can be extended smoothly at some point $p\in\cK$,
different eigenvalues simply touch without crossing, and
the sections $|x\rangle$ and the bundle
$\cB$ can be extended through $p$;
we can then basically remove $p$ from $\cK$.
Hence we can assume that $\cK$ contains
only points where some eigenvalues cross, i.e. $\lambda(x)$ cannot be
continued.
We denote this $\cK$ as the {\bf singular set}.
The bundle can only be non-trivial if $\cK\neq \emptyset$.
For any operator $\Phi\in End(\cH)$, we can define the {\bf symbol} in $\cC(\tilde{\mathds{R}}^D)$ through the map
\begin{align}
End(\cH) & \to \cC(\tilde{\mathds{R}}^D) \nonumber\\
\Phi &\mapsto \langle x|\Phi|x\rangle =: {\bf\phi}(x) \ .
\label{symbol-RD}
\end{align}
Elements of $End(\cH)$ will be indicated by
upper-case letters, and functions by lower-case letters.
The map \eq{symbol-RD}
should be viewed as a de-quantization map, associating classical functions to
noncommutative ``functions'' (or rather observables) in $End(\cH)$.
In particular, the symbol of the matrices $X^a$ provides a map
\begin{align}
{\bf x^a} :\quad \tilde{\mathds{R}}^D &\to {\mathds{R}}^D \\
x &\mapsto {\bf x^a}(x):= \langle x|X^a|x\rangle \ .
\label{X-ecpect-embedding-symbol}
\end{align}
Generically ${\bf x^a}(x) \neq x^a$, and the deviation is
measured by the {\bf displacement}
\begin{align}
d^2(x) := \sum_a({\bf x^a}(x) - x^a)^2 \ .
\end{align}
The quality of the matrix configuration (or of the underlying quantum space)
is measured by
the {\bf dispersion} or uncertainty
\begin{align}
\delta^2(x) &:= \sum_a (\Delta X^a)^2 \nonumber\\
(\Delta X^a)^2 &:= \langle x|(X^a - {\bf x^a}(x))^2|x\rangle
= \langle x|X^a X^a|x\rangle
- {\bf x^a}(x) {\bf x^a}(x) \ .
\label{dispersion}
\end{align}
If $\delta^2(x)$ is small, then the $X^a$ can be interpreted as operators or observables which approximate
the functions ${\bf x^a}$ on $\tilde{\mathds{R}}^D$, and if $d^2(x)$ is also small then $X^a \approx {\bf x^a} \approx x^a$.
Note that \eq{quasicoh-def} implies
\begin{align}
\lambda(x) = \frac 12\big(\delta^2(x) + d^2(x)\big) \ ,
\label{l-delta-D}
\end{align}
hence a small $\lambda(x)$ implies that both $\delta^2(x)$ and $d^2(x)$ are bounded by $2\lambda(x) > 0$.
$d^2(x)$ will be understood in section \ref{sec:abstract-quantum} as displacement
of $x$ from the underlying quantum space or brane $\cM$.
Hence quasi-coherent states should be viewed as the states
with minimal dispersion and displacement for given $x\in\tilde{\mathds{R}}^D$,
cf. \cite{Schneiderbauer:2016wub} for a more detailed discussion.
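Continuing the numerical sketch above, the symbol, displacement and dispersion are immediate, and the identity \eq{l-delta-D} can be verified directly:
\begin{verbatim}
def diagnostics(Xs, x):
    """Symbol x**a(x), displacement d^2(x) and dispersion delta^2(x);
    by (l-delta-D) these satisfy lambda(x) = (delta^2(x) + d^2(x))/2."""
    lam, psi = quasi_coherent(Xs, x)
    xb = np.array([np.real(psi.conj() @ X @ psi) for X in Xs])  # <x|X^a|x>
    d2 = np.sum((xb - x) ** 2)
    delta2 = sum(np.real(psi.conj() @ X @ X @ psi) for X in Xs) - xb @ xb
    assert np.isclose(lam, (delta2 + d2) / 2)
    return xb, d2, delta2
\end{verbatim}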
\subsection{$U(1)$ connection, would-be symplectic form
and quantum metric}
\label{sec:inner-connect-symp}
Now we associate to any matrix configuration two unique tensors on $\tilde{\mathds{R}}^D$: the {\em would-be symplectic form} $\omega_{ab}$
and {\em quantum metric} $g_{ab}$.
Since $|x\rangle \in\cH$, the bundle $\cB$ over $\tilde{\mathds{R}}^D$ naturally inherits a metric and a connection.
We can define a connection 1-form $A$ via
\begin{align}
P \circ d |x\rangle &= |x\rangle i A , \qquad
iA := \langle x|d|x\rangle \quad \in\Omega^1(\tilde{\mathds{R}}^D)
\label{nabla-def}
\end{align}
where $P= |x\rangle\langle x|$ is the projector on $E_x$. Here $A$ is real because
\begin{align}
(\langle x |d|x\rangle)^* = d(\langle x |) |x\rangle = - \langle x |d|x\rangle \ ,
\end{align}
and transforms like a $U(1)$ gauge field
\begin{align}
|x\rangle \to e^{i\Lambda(x)}|x\rangle,
\qquad A_a \to A_a + \partial_a \Lambda \ .
\end{align}
In particular, we can
parallel transport $|x\rangle$ along a path
$\gamma$ in $\tilde{\mathds{R}}^D$.
This connection is analogous to a Berry connection. It
is encoded in the inner product
\begin{align}
\langle x|y\rangle =: e^{i \varphi(x,y) - D(x,y)} \ ,
\label{coherent-inner}
\end{align}
which defines a distance function $D(x,y)$ and a phase function $\varphi(x,y)$
which satisfy
\begin{align}
D(x,y) &= D(y,x) \geq 0, \qquad D(x,y) = 0 \ \Leftrightarrow \ x=y \nonumber\\
\varphi(x,y) &= - \varphi(y,x) \ .
\end{align}
The phase clearly depends on the particular section $|x\rangle$ of the bundle $\cB$, while $D(x,y)$ does not.
To understand these two functions, we
differentiate \eq{coherent-inner} w.r.t. $y$
\begin{align}
\langle x |d_y|y\rangle|_{y=x} = i d_y \varphi(x,y)|_{y=x} - d_y D(x,y)|_{y=x} \ .
\end{align}
Comparing with \eq{nabla-def} we conclude
\begin{align}
i d_y \varphi(x,y)|_{y=x} &= i A = i A_a dx^a \nonumber\\
d_y D(x,y)|_{y=x} &= 0 \ .
\label{del-D-varphi}
\end{align}
Hence the phase $\varphi(x,y)$ encodes the connection
$A$.
For a contractible closed path $\gamma = \partial\Omega$ in $\tilde{\mathds{R}}^D$,
the change of the phase of $|x\rangle$ along $\gamma$
is hence given by the field strength via Stokes theorem
\begin{align}
\oint_\gamma A = \int_\Omega dA\ .
\label{flux-integral}
\end{align}
If the connection is flat, the phase $\varphi(x,y)$ can be gauged away completely.
To proceed, consider the gauge-invariant hermitian $D\times D$ matrix defined by
\begin{align}
h_{ab} &= \big((\partial_a +i A_a) \langle x|\big)
(\partial_b - i A_b)|x\rangle|_{\xi} = h_{ba}^* \nonumber\\
&= (\partial_{x^a}+i A_a)(\partial_{y^b}- i A_b) e^{i \varphi(x,y) - D(x,y)}|_{\xi} \nonumber\\
&=: \frac 12(g_{ab} + i\omega_{ab})
\label{hab-def}
\end{align}
at some reference point $\xi\in\tilde{\mathds{R}}^D$, which decomposes into
the real symmetric and antisymmetric tensors
$g_{ab}$ and $\omega_{ab}$.
The symmetric part
\begin{align}
g_{ab} &= \big((\partial_a +i A_a) \langle x|\big)
(\partial_b - i A_b)|x\rangle + (a\leftrightarrow b) \nonumber\\
&= (\partial_a\langle x|)\partial_b |x\rangle - A_a A_b + (a\leftrightarrow b)
\label{g-def}
\end{align}
(using \eq{nabla-def})
is the pull-back of the Fubini--Study metric\footnote{Note that $g_{ab}$ is not related to the Euclidean metric $\delta_{ab}$ on
target space ${\mathds{R}}^D$. } on $\C P^{N-1}$ through
the section $|x\rangle$.
The antisymmetric part of $h_{ab}$ encodes a 2-form
\begin{align}
i\omega_{ab} &= i(\partial_a A_b - \partial_b A_a)
= (\partial_{a} \langle x|)\partial_{b}|x\rangle
- (\partial_{b} \langle x|)\partial_{a}|x\rangle \nonumber\\
i\omega = \frac i2\omega_{ab} dx^a \wedge dx^b \
&= d\langle x|\wedge d|x\rangle
= d(\langle x |d|x\rangle) = i dA
\label{omega-def}
\end{align}
which is the $U(1)$ field strength of the connection $A$ and therefore closed,
\begin{align}
\omega = dA , \qquad d\omega = 0 \ .
\label{F-omega-dA}
\end{align}
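These tensors are directly computable: using \eq{nabla-def}, $h_{ab}$ can be written in the manifestly gauge-invariant form $h_{ab}=(\partial_a\langle x|)(\mbox{1 \kern-.59em {\rm l}}-|x\rangle\langle x|)\partial_b|x\rangle$. The following sketch (step size and phase alignment are implementation choices), continuing the code above, evaluates this by finite differences:
\begin{verbatim}
def quantum_tensors(Xs, x, h=1e-4):
    """Finite-difference evaluation of h_ab = <d_a x|(1-P)|d_b x>, whose
    symmetric/antisymmetric parts give g_ab = 2 Re h_ab, om_ab = 2 Im h_ab."""
    D = len(x)
    _, psi0 = quasi_coherent(Xs, x)
    P = np.outer(psi0, psi0.conj())
    dpsi = []
    for a in range(D):
        xp = np.array(x, dtype=float); xp[a] += h
        _, psi = quasi_coherent(Xs, xp)
        ov = psi0.conj() @ psi
        psi = psi * (ov.conj() / abs(ov))   # align the U(1) phase with |x>
        dpsi.append((psi - psi0) / h)
    hmat = np.array([[da.conj() @ (np.eye(len(psi0)) - P) @ db
                      for db in dpsi] for da in dpsi])
    return 2 * hmat.real, 2 * hmat.imag     # g_ab, omega_ab
\end{verbatim}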
It follows that the expansion of $\varphi(x,y)$ to bilinear order in $x$ and $y$ (setting $\xi = 0$) is
\begin{align}
\varphi(x,y) &= A_a (x^a - y^a) +
\frac 12\omega_{ab} x^a y^b + ...
\end{align}
dropping all terms $O(x^2)$ or $O(y^2)$ and higher.
Similarly, the expansion of $D(x,y)$
is given by
\begin{align}
D(x,y) &= \frac 14 (x-y)^a (x-y)^b g_{ab} + ...
\label{D-expand}
\end{align}
since $D(x,x) = 0$ and $D(x,y)\geq 0$.
In fact, viewing $\cB/U(1)$ as a subset of
$\C P^{N-1}$, we can use the well-known formula
\begin{align}
\cos^2(\gamma(x,y)) = e^{-2D(x,y)}
\end{align}
where $\gamma(x,y)$
is the geodesic distance between $|x\rangle$ and $|y\rangle$
in the Fubini--Study metric on $\C P^{N-1}$.
Combining \eq{coherent-inner} and \eq{D-expand},
we learn that the quasi-coherent states
are localized within a region of size
\begin{align}
L_{\rm coh}^2 = \|g_{ab}\|^{-1}
\label{coherence-scale}
\end{align}
denoted as {\bf coherence scale}.
The $|x\rangle$ are approximately constant
below this scale due to \eq{coherent-inner}. The relation with
the uncertainty of $X^a$ will be given in \eq{uncertainty-X}.
Therefore $g_{ab}$ will be denoted as {\bf quantum metric}.
We will see in section \ref{sec:eff-metric-MM}
that there is a different, {\em effective} metric
which governs the low-energy physics on such quantum spaces in
Yang-Mills matrix models.
However, the intrinsic structure of the underlying quantum space
is best understood using a more abstract point of view
developed in section \ref{sec:abstract-quantum}.
We will see that $\omega$ typically
arises from a symplectic form on an underlying space $\cM$.
Therefore $\omega$ will be denoted as {\bf would-be symplectic form}.
Since it is the curvature of a $U(1)$ bundle,
its flux is quantized
for every 2-cycle $S^2$ in $\tilde{\mathds{R}}^D$ as
\begin{align}
\int\limits_{S^2} \frac1{2\pi}\omega = n, \qquad n\in{\mathds{Z}} \ .
\label{quant-cond-S2}
\end{align}
This arises from \eq{flux-integral} as a consistency condition on the $U(1)$ holonomy for parallel transport
along a closed path $\gamma$ on $S^2$.
In more abstract language, $c_1 = -\frac{1}{2\pi} \omega$ is
the first Chern class of $\cB$ viewed as line bundle,
which is the pull-back of the first Chern class (or symplectic form)
of $\C P^{N-1}$
via the section $|x\rangle$.
The bundle $\cB$ is trivial if these
numbers vanish for all cycles $S^2$,
hence if $H^2(\tilde{\mathds{R}}^D)$ vanishes.
\subsection{Differential structure of quasi-coherent states}
\label{sec:derivatives-generators}
Assume that $|x\rangle$ is a local section of the quasi-coherent states, with
\begin{align}
H_x|x\rangle = \lambda(x) |x\rangle \ .
\label{H-EV-2}
\end{align}
Using Cartesian coordinates $x^a$ on ${\mathds{R}}^D$,
we observe that
\begin{align}
\partial_a H_x = -(X_a-x_a \mbox{1 \kern-.59em {\rm l}}) \ .
\label{del-H-X}
\end{align}
Thus differentiating \eq{H-EV-2} gives
\begin{align}
(H_x - \lambda(x))\partial_a|x\rangle &= -\partial_a (H_x - \lambda(x))|x\rangle
= \big(X_a-x_a + \partial_a\lambda\big)|x\rangle \ .
\label{deform-eigenstate}
\end{align}
Since the lhs is orthogonal to $|x\rangle$, it follows that
\begin{align}
0 = \langle x|\big(X_a-x_a + \partial_a\lambda\big)|x\rangle
\label{exp-X-1}
\end{align}
so that the expectation value or symbol of the basic matrices $X_a$ is given by
\begin{align}
\boxed{
{\bf x_a} = \ \langle x| X_a|x\rangle = x_a - \partial_a\lambda \ .
}
\label{expect-X}
\end{align}
Furthermore, \eq{deform-eigenstate} gives
(in the non-degenerate case under consideration)
\begin{align}
\partial_a |x\rangle &= |x\rangle \langle x| \partial_a |x\rangle
+ (H_x - \lambda)^{-1}\big(X_a-x_a + \partial_a\lambda\big)|x\rangle \nonumber\\
(\partial_a - i A_a) |x\rangle &= (H_x - \lambda)^{-1}\big(X_a-x_a + \partial_a\lambda\big)|x\rangle
\label{del-nabla-X-general}
\end{align}
using \eq{del-D-varphi}.
Even though the $(H_x - \lambda)^{-1}$ term is well-defined here,
it is better to replace $(H_x - \lambda)^{-1}$ with
an operator that is well-defined on $\cH$. This is achieved using
the ``reduced resolvent''
\begin{align}
(H_x - \lambda(x))^{'-1} := (\mbox{1 \kern-.59em {\rm l}}-P_x) (H_x - \lambda(x))^{-1}(\mbox{1 \kern-.59em {\rm l}}-P_x),
\qquad \qquad P_x := |x\rangle\langle x|
\end{align}
which satisfies
\begin{align}
(H_x - \lambda) (H_x - \lambda)^{'-1} &=\mbox{1 \kern-.59em {\rm l}}- P_x = (H_x - \lambda)^{'-1}(H_x - \lambda), \nonumber\\
(H_x - \lambda)^{'-1} |x\rangle &= 0 \ .
\label{gen-inv-rel}
\end{align}
Observing $(H_x - \lambda)^{'-1} (x_a - \partial_a\lambda)|x\rangle = 0$ due to \eq{H-EV-2}, we can
write \eq{del-nabla-X-general} as
\begin{align}
\boxed{ \
(\partial_a - i A_a) |x\rangle = \cX_a |x\rangle
\ }
\label{del-nabla-X-general-2}
\end{align}
where
\begin{align}
\cX_a = (H_x - \lambda)^{'-1} X_a \ .
\label{cX-def}
\end{align}
Moreover, we note
\begin{align}
\langle x|\cX_a|x\rangle = 0 \ .
\label{cX-exp-zero}
\end{align}
Hence $\cX_a$ generates the gauge-invariant
tangential variations of $|x\rangle$, which take values in the orthogonal
complement of $|x\rangle$.
This will be the basis for defining the quantum tangent space in section \ref{sec:abstract-quantum}.
The local section $|x\rangle$ over $\tilde{\mathds{R}}^D$ can now be written as
\begin{align}
|x\rangle = P \exp\Big(\int_\xi^x (\cX_a + i A_a) dx^a\Big)|\xi\rangle
\label{parallel-transport-states}
\end{align}
near the reference point $\xi\in \tilde{\mathds{R}}^D$.
Here $P$ indicates path ordering, which is just a formal way of writing the
solution of \eq{del-nabla-X-general-2}.
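Relation \eq{del-nabla-X-general-2} can also be checked numerically; the reduced resolvent is conveniently assembled from the eigendecomposition of $H_x$ (assuming a non-degenerate lowest eigenvalue; the tolerance is heuristic), continuing the sketches above:
\begin{verbatim}
def covariant_derivative_check(Xs, x, a, h=1e-5):
    """Check (d_a - i A_a)|x> = Xcal_a |x> with
    Xcal_a = (H_x - lambda)'^{-1} X_a, cf. (del-nabla-X-general-2)."""
    N = Xs[0].shape[0]
    H = sum((X - xa * np.eye(N)) @ (X - xa * np.eye(N))
            for X, xa in zip(Xs, x)) / 2
    vals, vecs = np.linalg.eigh(H)
    psi0 = vecs[:, 0]
    # reduced resolvent: sum_{n>0} |n><n| / (lambda_n - lambda_0)
    Rp = (vecs[:, 1:] / (vals[1:] - vals[0])) @ vecs[:, 1:].conj().T
    rhs = Rp @ Xs[a] @ psi0                  # Xcal_a |x>
    xp = np.array(x, dtype=float); xp[a] += h
    _, psi = quasi_coherent(Xs, xp)
    ov = psi0.conj() @ psi
    psi = psi * (ov.conj() / abs(ov))        # aligned gauge, so A_a ~ 0
    assert np.allclose((psi - psi0) / h, rhs, atol=1e-3)
\end{verbatim}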
\subsection{Relating the algebraic and geometric structures}
Since the derivatives of $|x\rangle$ are
spanned by the $\cX^a|x\rangle$, the $U(1)$
field strength $\omega_{ab}$ and the
quantum metric $g_{ab}$ should be related to
algebraic properties for the $\cX^a$.
Indeed,
starting from \eq{hab-def}
\begin{align}
h_{ab} &= \langle x|\cX_a^\dagger \cX_b|x\rangle
= \frac 12(g_{ab} + i\omega_{ab}) \ ,
\end{align}
we obtain
\begin{align}
i\omega_{ab} = h_{ab} - h_{ba}
&= \langle x|(\cX_a^\dagger \cX_b - \cX_b^\dagger \cX_a) |x\rangle
\label{om-ab-XX}
\end{align}
and
\begin{align}
g_{ab} &= h_{ab} + h_{ba} =\langle x|(\cX_a^\dagger \cX_b + \cX_b^\dagger \cX_a) |x\rangle \ .
\label{gab-XX}
\end{align}
This provides a first link between the geometric and algebraic
structures under consideration.
Furthermore, it is useful to define the following hermitian
tensor (similar to \cite{Ishiki:2016yjp})
\begin{align}
P_{ab}(x) &:= \langle x| X_a (H_x - \lambda)^{'-1}
X_b|x\rangle = P_{ba}(x) ^* \nonumber\\
&= \langle x| X_a \cX_b|x\rangle \ .
\label{P-explicit-herm}
\end{align}
Its symmetric part is obtained by taking derivatives of
\eq{expect-X}
\begin{align}
\partial_b {\bf x_a}(x) &= \partial_b x_a - \partial_b\partial_a\lambda \nonumber\\
&= \partial_b(\langle x|X_a|x\rangle )
= \langle x|(X_a(\cX_b+iA_b) + (\cX_b+iA_b)^\dagger X_a)|x\rangle \nonumber\\
&= P_{a b} + P_{ba}
\label{projector-embed}
\end{align}
lowering indices with $\delta_{ab}$; for the antisymmetric part see \eq{imaginary-P}.
This will be recognized as projector on the
embedded quantum space in \eq{P-M-proj}, as obtained
in the semi-classical limit in \cite{Ishiki:2016yjp}.
\subsection{Almost-local operators}
\label{sec:almost-local}
We would like to define a class
$Loc(\cH) \subset End(\cH)$ of {\bf almost-local operators}
which satisfy
\begin{align}
\Phi|x\rangle &\approx
|x\rangle \langle x|\Phi|x\rangle = P_x \Phi|x\rangle \
= |x\rangle \phi(x)\qquad \forall x\in\tilde{\mathds{R}}^D
\label{Phi-loc-approx}
\end{align}
where $\phi(x) = \langle x|\Phi|x\rangle$ is the symbol of $\Phi$, and $P_x = |x\rangle\langle x|$ is the
projector on the quasi-coherent state $|x\rangle$.
The question is how to make the meaning of
$\, \approx\, $ precise, without considering some limit
as in \cite{Ishiki:2015saa}. We should certainly require that
$\Phi|x\rangle \approx |x\rangle \phi(x)$ in $\cH$ for every $x$,
but it is not obvious yet how to handle the dependence on $x$,
and how to specify bounds.
The guiding idea is that it should make sense to identify
$\Phi$ with its symbol
\begin{align}
\Phi \ \sim \ \phi(x) = \langle x|\Phi|x\rangle \ ,
\end{align}
indicated by $\sim$ from now on.
This will be made more precise
in section \ref{sec:quatiz-semi} by requiring that $\sim$ is an {\em approximate isometry} from
$Loc(\cH)$ to $\cC_{\rm IR}(\cM)$, where
$\cC_{\rm IR}(\cM)$ is a class of ``infrared'' functions on the abstract
quantum space associated to the matrix configuration.
The essence of almost-locality is then that
the {\em integrated} deviations
from the classical behavior are small compared with the classical
values.
With this in mind, we proceed to elaborate some consequences
of \eq{Phi-loc-approx} for fixed $x$ without specifying bounds.
Since $(\mbox{1 \kern-.59em {\rm l}} - P_x)$ is a projector, we have the estimate
\begin{align}
\langle x|\Phi^\dagger\Phi|x\rangle
= \langle x|\Phi^\dagger P_x\Phi|x\rangle
+ \langle x|\Phi^\dagger(\mbox{1 \kern-.59em {\rm l}} - P_x)\Phi|x\rangle
&\geq \ \langle x|\Phi^\dagger P_x\Phi|x\rangle = |\phi(x)|^2 \ .
\label{norm-loc-estim}
\end{align}
It follows that every hermitian almost-local operator $\Phi=\Phi^\dagger$ satisfies
\begin{align}
\langle x|\Phi\Phi|x\rangle \approx \langle x|\Phi|x\rangle ^2
= |\phi(x)|^2 \qquad \forall x\in\tilde{\mathds{R}}^D\ ,
\label{uncertain-1}
\end{align}
i.e. the uncertainty
of $\Phi$ is negligible,
\begin{align}
\langle x|(\Phi- \langle x|\Phi|x\rangle)^2|x\rangle \approx 0
\qquad \forall x\in\tilde{\mathds{R}}^D\ .
\label{uncertain-Phi}
\end{align}
This means that $(\Phi-\phi(x))|x\rangle$ is
approximately zero, which in turn implies \eq{Phi-loc-approx}.
Therefore almost-locality is essentially equivalent to \eq{uncertain-Phi},
up to global considerations and specific bounds.
A more succinct global version of \eq{uncertain-Phi} is given in
\eq{norm-uncertainty}.
We also note that for two operators $\Phi,\Psi\in Loc(\cH)$
the factorization properties
\begin{align}
\Phi\Psi|x\rangle
&\approx \Phi|x\rangle\langle x|\Psi|x\rangle
\approx |x\rangle \phi(x)\psi(x)\nonumber\\
\langle x|\Phi\Psi|x\rangle
&\approx \phi(x)\psi(x)
\label{semiclass-fact}
\end{align}
follow formally. However this does not mean that $Loc(\cH)$ is an algebra,
since the specific bounds may be violated by the product.
For some given matrix configuration, $Loc(\cH)$ may be empty or very small.
This happens e.g. for the minimal fuzzy spaces as discussed in section \ref{sec:degenerate}, and it is
expected for random matrix configurations.
But even in these cases, the associated
geometrical structures still provide useful insights.
For interesting quantum geometries, we expect that
all the $X^a$ are almost-local, hence
also polynomials $P_n(X)$ up to some maximal degree
$n$ due to \eq{semiclass-fact}. $Loc(\cH)$
can often be characterized by some bound on the
eigenvalue of $\Box$ \eq{Box}, or the uncertainty scale
$L_{NC}$ \eq{L-NC-def}. However,
$Loc(\cH)$ can never be more than a small subset of $End(\cH)$.
\subsection{Almost-local quantum spaces and Poisson tensor}
\label{sec:almost-loc-space}
To see how the Poisson structure
arises, define the real anti-symmetric matrix-valued function
\begin{align}
\theta^{ab} := -i\langle x| [X^a,X^b] |x\rangle = -\theta^{ba}
\label{theta-def}
\end{align}
on $\tilde{\mathds{R}}^D$.
To relate it to the previous structures, we shall loosely follow \cite{Ishiki:2016yjp}, starting from
the identity
\begin{align}
[X^a,X^b](X_b-x_b) + (X_b-x_b)[X^a,X^b]
= 2 [X^a,H_x] \ .
\label{H-x-comm}
\end{align}
Taking the expectation value, we obtain
\begin{align}
\langle x| [X^a,X^b](X_b-x_b)|x\rangle + \langle x| (X_b-x_b)[X^a,X^b] |x\rangle
= 2 \langle x|[X^a,H_x] |x\rangle = 0 \ .
\label{comm-X-rel}
\end{align}
If $X^a$ is almost-local\footnote{This is expected from the
definition of quasi-coherent states, as long as the uncertainty is
sufficiently small.}, then this implies
\begin{align}
0 \approx \langle x| [X^a,X^b] |x\rangle \langle x| (X_b-x_b)|x\rangle
= - i\theta^{ab} \partial_b\lambda
\label{theta-del-l}
\end{align}
using \eq{expect-X}. In section \ref{sec:quatiz-semi} we will see that
this implies $\lambda \sim {\rm const}$ on the embedded quantum space $\tilde\cM$,
and $P_{ac} + P_{ca} \sim \partial_c x_a$ is its tangential projector.
We now define an {\bf almost-local quantum space} to be a matrix configuration where all $X^a$ as well as all $[X^a,X^b]$
are almost-local operators.
Then products of such operators approximately factorize according to \eq{semiclass-fact}, and we can proceed following \cite{Ishiki:2016yjp}
\begin{align}
-2 (H_x - \lambda) (X^a -x^a + \partial^a\lambda) |x\rangle
&= 2 [X^a,H_x] |x\rangle
\approx 2 (X_b-x_b)[X^a,X^b] |x\rangle \nonumber\\
&\approx 2(X_b-x_b) |x\rangle\langle x|[X^a,X^b]|x\rangle
= 2i (X_b-x_b) |x\rangle\theta^{ab}\nonumber\\
&\approx 2i (X_b-x_b + \partial_b\lambda) |x\rangle\theta^{ab}
\label{XX-asympt}
\end{align}
using the factorization property, \eq{theta-del-l} and \eq{H-EV-2}.
However the first approximation is subtle, since $(X_b-x_b) |x\rangle \approx 0$.
This can be justified if $X^a$ is a solution of the
{\bf Yang-Mills equations}\footnote{This argument also goes through for
the generalized Yang-Mills equation $\Box X^a \equiv [X_b,[X^b,X^a]] = m\, X^a$
as long as $m$ is sufficiently small, where $\Box$ is defined in \eq{Box}.}
\begin{align}
[X_b,[X^b,X^a]] = 0 \
\end{align}
which are indeed the equations of motion for Yang-Mills matrix models \cite{Ishibashi:1996xs}.
Then \eq{H-x-comm} implies
\begin{align}
[X^a,H_x]
&= (X_b-x_b)[X^a,X^b]
\end{align}
and the above steps become
\begin{align}
-2 (H_x - \lambda) X^a |x\rangle
&= 2 [X^a,H_x] |x\rangle
= 2 (X_b-x_b)[X^a,X^b] |x\rangle \nonumber\\
&\approx 2i (X_b-x_b) |x\rangle\theta^{ab} \nonumber\\
&\approx 2i (X_b-x_b + \partial_b\lambda) |x\rangle\theta^{ab} \ .
\label{XX-asympt-2}
\end{align}
The rhs is indeed orthogonal to $|x\rangle$ due to \eq{expect-X}, and we can conclude
\begin{align}
- (X^a -x^a + \partial^a\lambda) |x\rangle
&\approx i(H_x - \lambda)^{'-1} (X_b-x_b+ \partial_b\lambda) |x\rangle\theta^{ab} \nonumber\\
&= i\theta^{ab}\cX_b |x\rangle
= i \theta^{ab} (\partial_b -iA_b)|x\rangle
\end{align}
hence
\begin{align}
(X^a -x^a + \partial^a\lambda)|x\rangle \approx - i \theta^{ab} (\partial_b -iA_b)|x\rangle
\label{Xtheta-1}
\end{align}
and by conjugating
\begin{align}
\langle x| (X^d -x^d + \partial^d\lambda) \approx
i \theta^{dc} (\partial_c + iA_c)\langle x| \ .
\label{Xtheta-2}
\end{align}
These relations are very useful.
First, they imply the important relation
\begin{align}
\boxed {\
[X^a,|x\rangle\langle x|] \approx - i \theta^{ab} \partial_b(|x\rangle\langle x|)
\ . \ }
\label{X-comm-theta}
\end{align}
Furthermore, multiplying \eq{Xtheta-1}
with $(\partial_c+iA_c)\langle x|$ gives
\begin{align}
- i \theta^{ab} \big((\partial_c+iA_c)\langle x|\big)(\partial_b -iA_b)|x\rangle
&\approx \langle x|\cX_c^\dagger (X^a -x^a + \partial^a\lambda) |x\rangle
= \langle x|\cX_c^\dagger X^a |x\rangle = P_{c}^{\ a} \ ,
\label{theta-P-1}
\end{align}
and similarly from \eq{Xtheta-2}
\begin{align}
i \theta^{ac} ((\partial_c + iA_c)\langle x|)(\partial_b-iA_b)|x\rangle
&\approx \langle x|X^a \cX_b|x\rangle
= P^{a}_{\ b} \ .
\label{theta-P-2}
\end{align}
Adding these and using \eq{projector-embed} and \eq{hab-def} gives
\begin{align}
\boxed{\
- \theta^{ab}\omega_{bc}
\approx \partial_c {\bf x^a} = \partial_c (x^a -\partial^a\lambda)
\ }
\label{theta-omega}
\end{align}
in the semi-classical regime, as in \cite{Ishiki:2016yjp}. The rhs will be recognized as
the tangential projector on the embedded quantum space
$\tilde\cM \subset {\mathds{R}}^D$.
Therefore the above relation states that
$\theta^{ac}$ is tangential to $\tilde\cM$,
and the inverse of the would-be
symplectic form $\omega_{ab}$ on $\tilde\cM$.
This implies that
$\omega|_{\tilde\cM}$ is indeed non-degenerate i.e. symplectic, and
$\theta^{ac}$ is its associated Poisson structure\footnote{Recall that the Jacobi identity is a consequence of $d\omega = 0$.}.
Together with \eq{theta-def} we obtain
\begin{align}
[X^a,X^b]|x\rangle \approx i\{x^a,x^b\}|x\rangle
= i \theta^{ab}|x\rangle \
\end{align}
which can be written in the notation of section \ref{sec:quatiz-semi} as
the semi-classical relation
\begin{align}
\boxed{\
[X^a,X^b] \sim i\{x^a,x^b\} = i \theta^{ab} \ .
\ }
\label{XX-theta-rel}
\end{align}
Moreover, this means that {\bf almost-local quantum spaces $\cM$
can be locally approximated by some Moyal-Weyl quantum plane} ${\mathds{R}}^{2n}_\theta$.
In particular, this implies
that {\bf the almost-K\"ahler condition} \eq{almost-Kahler} holds at least approximately.
Furthermore, taking the inner product of
\eq{Xtheta-1} and \eq{Xtheta-2}
we obtain
\begin{align}
(\Delta X^a)^2 =
\langle x| (X^a -x^a + \partial^a\lambda)(X^a -x^a + \partial^a\lambda)|x\rangle
&= \theta^{ab}\theta^{ac} g_{bc}
\label{uncertainty-X}
\end{align}
(no sum over $a$), where $g_{bc}$ is the
quantum metric \eq{g-def}.
Hence the uncertainty of $X^a$ is
characterized by the {\bf uncertainty length}\footnote{On
quantum K\"ahler manifolds, this reduces to the
well-known form $L_{\rm NC}^2 = \|\theta^{ab}\|$.}
\begin{align}
L_{\rm NC}^2 := \|\theta^{ab}\|^2 L_{\rm coh}^{-2} \ .
\label{L-NC-def}
\end{align}
We also note the relation \cite{Ishiki:2016yjp}
\begin{align}
i \theta^{ac} g_{cb} = \delta^{aa'}(P_{a'b} - P_{ba'})
= 2i \delta^{aa'}{\rm Im}(P_{a'b})
\label{imaginary-P}
\end{align}
which is obtained by subtracting \eq{theta-P-1} and \eq{theta-P-2};
in particular, $\theta^{ac} g_{cb}$ is antisymmetric.
Finally, by comparing \eq{Xtheta-1} with \eq{cX-def} we obtain
\begin{align}
\theta^{ab} (\partial_b -iA_b)|x\rangle \approx
i(X^a -x^a + \partial^a\lambda)|x\rangle
= (H_x-\lambda)i(\partial^a-i A^a)|x\rangle \ ,
\label{almost-Kahler}
\end{align}
which relates
$i(\partial^a -iA^a)|x\rangle$ and $\theta^{ab}(\partial_b -iA_b)|x\rangle$,
up to the action of $H_x-\lambda$.
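All of the above is straightforward to evaluate numerically for a concrete matrix configuration.
The following minimal sketch (our own illustration, assuming only \texttt{numpy}; the function names are ad hoc)
obtains the quasi-coherent state as the lowest eigenvector of the displacement Hamiltonian,
together with its symbol and the would-be Poisson tensor $\theta^{ab} = -i\langle x|[X^a,X^b]|x\rangle$;
these helper functions are reused in the numerical checks below.
\begin{verbatim}
# Minimal numerical sketch (illustration only): quasi-coherent state,
# symbol and Poisson tensor of a matrix configuration X = [X^1,...,X^D],
# given as a list of hermitian numpy arrays.
import numpy as np

def quasi_coherent(X, x):
    """Lowest eigenvalue lambda(x) and ground state |x> of
       H_x = 1/2 sum_a (X^a - x^a)^2."""
    N = X[0].shape[0]
    H = sum(0.5 * (Xa - xa * np.eye(N)) @ (Xa - xa * np.eye(N))
            for Xa, xa in zip(X, x))
    evals, evecs = np.linalg.eigh(H)
    return evals[0], evecs[:, 0]

def symbol(X, v):
    """Embedding map <x|X^a|x> for a normalized state v."""
    return np.real(np.array([np.conj(v) @ Xa @ v for Xa in X]))

def theta(X, v):
    """theta^{ab} = -i <x|[X^a,X^b]|x>, real and antisymmetric."""
    D = len(X)
    th = np.zeros((D, D))
    for a in range(D):
        for b in range(D):
            comm = X[a] @ X[b] - X[b] @ X[a]
            th[a, b] = np.real(-1j * (np.conj(v) @ comm @ v))
    return th
\end{verbatim}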
\section{The abstract quantum space $\cM$}
\label{sec:abstract-quantum}
In the previous section we considered the bundle $\cB$
of quasi-coherent states $|x\rangle$ over $\tilde{\mathds{R}}^D$.
However, these states
often coincide for different $x$.
In this section we develop a general concept of quantum geometry which naturally
captures such situations, and leads to a variety $\cM \subset \C P^{N-1}$,
which is naturally embedded in ${\mathds{R}}^D$.
Consider the union of the normalized quasi-coherent
states for all $x\in\tilde{\mathds{R}}^D$
\begin{align}
\cB &:= \bigcup_{x\in\tilde{\mathds{R}}^D} U(1)|x\rangle \ \subset \ \cH \cong \C^N
\end{align}
as a subset of $\cH$; here the union need not be disjoint.
$\cB$ can be viewed as a $U(1)$ bundle\footnote{in slight abuse of notation we use the same letter $\cB$
as in section \ref{sec:quasicoherent}, hoping that no confusion arises.}
\begin{align}
\cB \to \cM , \qquad \cM := \cB/_{U(1)} \ \hookrightarrow \C P^{N-1} \
\label{M-sub-CP}
\end{align}
over $\cM$.
We denote $\cM$ as the {\bf abstract quantum space associated to $X^a$}.
Thus $\cM$ inherits the induced
(subset) topology and metric from $\C P^{N-1}$.
A matrix configuration will be called a {\bf quantum manifold}
if $\cM\subset\C P^{N-1}$ is a regular (real) submanifold.
This is not far-fetched, since
standard theorems \cite{rellich1969perturbation,kato2013perturbation} ensure
the existence of (local) smooth maps
\begin{align}
{\bf q}: \quad U\subset\tilde{\mathds{R}}^D &\to \cM\subset \C P^{N-1} \nonumber\\
x &\mapsto |x\rangle \ .
\label{q-map}
\end{align}
However, ${\bf q}$ need not be injective.
To understand this better, we note that
\begin{align}
\cM \ \cong \ \tilde{\mathds{R}}^D/ _\sim \
\label{cM-equivalence-def}
\end{align}
where the equivalence relation $\sim$ on
$\tilde{\mathds{R}}^D$ is defined by identifying points $x\in\tilde{\mathds{R}}^D$
with identical eigenspace $E_x$.
Denote the equivalence class through a point $x\in\tilde{\mathds{R}}^D$ by $\cN_x$.
Due to the identity
\begin{align}
H_x = H_y + \frac 12(x^ax_a -y^a y_a)\mbox{1 \kern-.59em {\rm l}} - (x^a-y^a)X_a\ ,
\end{align}
$x\sim y$ implies that $|x\rangle$ is an eigenvector of $(x^a-y^a)X_a$,
\begin{align}
(x^a-y^a)X_a |x\rangle \propto |x\rangle \ .
\label{x-y-Nx-eq}
\end{align}
But this means that the {\em equivalence classes $\cN_x$ are always (segments of) straight lines or higher-dimensional planes}\footnote{The $\cN_x$ either extend to infinity or
end up at the singular set $\cK$, where the $|x\rangle$ may turn into higher eigenstates.}, and
it follows using \eq{deform-eigenstate} that
\begin{align}
w_a\cX^a|x\rangle = 0 = w_a(X^a - x^a + \partial^a\lambda)|x\rangle, \qquad w\in T\cN_x
\label{V-anihil}
\end{align}
along such directions. This implies via \eq{gab-XX} that $\cN_x$
is a null space w.r.t. the quantum metric $g_{ab}$ induced from $\C P^{N-1}$.
The quantum metric hence characterizes the dependence of the coherent
states along the non-trivial directions of $\cM$.
Moreover, the kernel of $d{\bf q}$ at $x$ is given by $T\cN_x$.
The above observations provide a remarkable link between local and global
properties of ${\bf q}$: {\em whenever ${\bf q}(x) = {\bf q}(y)$ for $x\neq y$,
a linear kernel $T\cN_x \ni (x-y)$ of $d{\bf q}|_x$ arises}.
In particular, if ${\rm rank}\, d{\bf q}=D$, i.e. ${\bf q}$ is an immersion,
${\bf q}$ must be injective globally, since otherwise $d{\bf q}$
would have some non-trivial kernel.
This implies that ${\bf q}$ can be extended to all of $\tilde{\mathds{R}}^D$, and
\begin{thm}
If ${\bf q}$ \eq{q-map} is an immersion, then ${\bf q}:\, \tilde{\mathds{R}}^D \to \cM$ is bijective, and
$\cM$ is a $D$-dimensional quantum manifold. Moreover, $x^a$ provide global coordinates.
\end{thm}
An infinite-dimensional example is given by the Moyal-Weyl quantum plane, and the fuzzy disk \cite{Lizzi:2003ru} is expected to
provide a finite-dimensional example.
However, there are many interesting examples (such as the fuzzy sphere, see section \ref{sec:fuzzy-S2})
where the rank of $d{\bf q}$ is reduced. We can still make non-trivial statements with some extra assumption:
A quantum space $\cM$
will be called {\bf regular}
if ${\rm rank}\, d{\bf q}=m$ is constant on $\tilde{\mathds{R}}^D$. Then
the fibration $\tilde{\mathds{R}}^D/_\sim$ is locally trivial, and
according to the rank theorem \cite{lee2013smooth}
we can choose functions $y^\mu, \ \mu=1,...,m$ on a neighborhood of $\xi\in U \subset\tilde{\mathds{R}}^D$
such that the image ${\bf q}|_U \subset \cM\subset \C P^{N-1}$ is a submanifold of $\C P^{N-1}$.
Since the only possible degeneracies of ${\bf q}$ are the linear fibers $\cN$, it follows that
\begin{thm}
For regular quantum spaces, i.e.
for ${\rm rank}\, d{\bf q}=m$ constant,
$\cM$ is an $m$-dimensional quantum manifold.
\end{thm}
In particular, there are no self-intersections of $\cM$, and $\tilde{\mathds{R}}^D$ has the structure of a bundle over $\cM$.
Clearly local versions of this statement can also be formulated;
e.g. if the rank of $d{\bf q}$ is reduced at some point, $\cM$ may be ``pinched''.
Furthermore, it may seem natural to conjecture that $\cM$
is compact, since $\cH$ is finite-dimensional; however, the
proper statement should be that $\cM$ has a natural compactification:
since $H_x \sim \frac 12 |x|^2 - x_a X^a$ for $|x|\to\infty$, the state $|x\rangle$
approaches the lowest eigenspace of $-e_a X^a$ for $e = \frac{x}{|x|}\in S^{D-1}$.
Hence if $\cM$ does not already contain these
states, then $\cM$ could be compactified by adding them (and possibly other states).
Now consider the following natural {\em embedding map} provided by the symbol of $X^a$:
\begin{align}
\boxed{
\begin{aligned}
{\bf x^a} :\quad \cM &\to {\mathds{R}}^D \\
|x\rangle &\mapsto {\bf x^a}:= \langle x|X^a|x\rangle
= x^a - \partial^a\lambda
\end{aligned}
}
\label{M-embedding-symbol}
\end{align}
using \eq{expect-X}. This is the quotient of the
previously defined function ${\bf x^a}$ \eq{X-ecpect-embedding-symbol} on $\tilde{\mathds{R}}^D$, which
is constant on the fibers $\cN_x$.
The image
\begin{align}
\boxed{\
\tilde\cM := {\bf x}(\cM) \quad \subset {\mathds{R}}^D
\ }
\label{embedded-M-def}
\end{align}
defines some variety in target space ${\mathds{R}}^D$.
In this way, we can associate to the abstract space $\cM$
a subset $\tilde\cM \subset{\mathds{R}}^D$, and
$\cB$ can be considered as a $U(1)$ bundle over $\tilde\cM$.
This structure defines the
{\bf embedded quantum space} or {\bf brane} associated to
the matrix configuration.
The concept is very reminiscent of noncommutative branes in string theory, which is borne out in the context of Yang-Mills matrix models,
cf. \cite{Seiberg:1999vs,Aoki:1999vr,Steinacker:2008ri}. However the embedding might be degenerate, and the
abstract quantum space is clearly a more fundamental concept.
If the equivalence class $\cN_x$ of $x$ is non-trivial, further interesting statements can be made.
Observe that $\lambda(x) = \delta^2(x) + d^2(x)$ reduces on $\cN_x$ to
the displacement $d^2(x)$ plus a constant shift
$c = \delta^2(x)$. Therefore there is a unique $x_0\in \cN_x$ in each
equivalence class where $\lambda$ assumes its minimum.
This provides a natural representative of $\cM\cong \tilde{\mathds{R}}^D/_\sim$,
and another embedding function
\begin{align}
x_0^a: \quad \tilde{\mathds{R}}^D \to \cM \hookrightarrow{\mathds{R}}^D
\end{align}
which is constant on the fibers $\cN$ and faithfully represents\footnote{This also provides the natural adapted coordinates
implied by the constant rank theorem \cite{lee2013smooth}.} $\cM$.
It satisfies
\begin{align}
w_a({\bf x^a}(x_0) - x_0^a) = -w_a\partial^a\lambda|_{x_0} = 0
\qquad \forall \ w\in T\cN_{x_0}
\label{l-min-N}
\end{align}
using \eq{expect-X},
because $\lambda$ assumes its minimum on $\cN_{x_0}$ at $x_0$.
Therefore ${\bf x^a}(x) = {\bf x^a}(x_0)$ provides the optimal estimator for $x_0$
in $\cN_x$, in the sense that
\begin{align}
x_0^a = P_x^\perp{\bf x^a}(x)
\label{X-X0-proj}
\end{align}
where
$P_x^\perp$ is the orthogonal projector on $\cN_x$ w.r.t. the Euclidean metric on ${\mathds{R}}^D$.
This provides justification for the numerical ``measuring algorithm'' in
\cite{Schneiderbauer:2016wub,lukas_schneiderbauer_2016_45045}, and suggests further refinements.
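This also suggests the following simple numerical scheme, sketched here as a hypothetical variant
(our own illustration; we do not claim that it coincides with the algorithm of
\cite{Schneiderbauer:2016wub,lukas_schneiderbauer_2016_45045}): iterate the symbol map
$x\mapsto {\bf x}(x)$, which is constant on the fibers $\cN_x$, assuming that the iteration
settles near the embedded brane where $\lambda$ is minimal.
\begin{verbatim}
# Hypothetical brane-"measuring" iteration (illustration only; assumes
# that x -> <x|X^a|x> converges, which is not guaranteed in general).
# Reuses quasi_coherent() and symbol() from the sketch above.
import numpy as np

def measure_brane(X, x0, steps=200, tol=1e-12):
    x = np.asarray(x0, dtype=float)
    lam = None
    for _ in range(steps):
        lam, v = quasi_coherent(X, x)
        x_new = symbol(X, v)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x, lam    # candidate point on the brane, and lambda(x) there
\end{verbatim}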
\paragraph{Quantum tangent space.}
From now on, we will assume that $\cM$ is a quantum manifold.
Since $\cM\subset\C P^{N-1}$ is a (sub)manifold, we can determine its tangent space.
Choose some point $\xi\in\cM$.
The results of section \ref{sec:derivatives-generators} notably \eq{del-nabla-X-general-2} imply that $T_\xi\cM$ is spanned by
the $D$ vectors
\begin{align}
(\partial_a- i A_a)|x\rangle = \cX_a|x\rangle \ \in T_\xi\C P^{N-1} \ ;
\end{align}
note that $\langle x|(\partial_a - i A_a)|x\rangle=0$,
hence $\cX_a|x\rangle$ is indeed a tangent vector\footnote{since $A_\mu$ can be gauged away at any given point, these are
derivatives of sections of the respective $U(1)$ bundles over $\cM$ and $\C P^{N-1}$, which can be taken as representatives
of tangent vectors on $\cM$ and $\C P^{N-1}$, respectively.
Although the $\cX_a$ depend implicitly on $x$,
the result is independent of the point $x\in\cN_x$ because $\cM$ is a manifold.}
of $\cM \subset \C P^{N-1}$,
and perpendicular to the ``would-be vertical vector'' $i|x\rangle$.
According to \eq{V-anihil}, any $w\in T\cN_x$ provides a non-trivial relation $w^a\cX_a|x\rangle=0$.
Hence after a suitable $SO(D)$ rotation, we can choose among the Cartesian coordinates on ${\mathds{R}}^D$
$m$ local coordinates $x^\mu$ which are perpendicular\footnote{Since $\cN_x$ is in one-to-one correspondence with
$\xi\in\C P^{N-1}$, we shall use this notation if appropriate.}
to $\cN_\xi$, and can serve as local coordinates of $\cM$
near $\xi$. We denote these as local {\em ``normal embedding'' coordinates} on $\cM$.
It follows that an explicit basis of the tangent vectors in $T_\xi\cM$ is given by
$(\partial_\mu- i A_\mu)|x\rangle = \cX_\mu|x\rangle$ for $\mu=1,...,m$.
This provides a natural definition of
the {\bf(real) quantum tangent space} of $\cM$:
\begin{align}
T_\xi\cM = \Big\langle\cX_\mu|x\rangle \Big\rangle_{\mathds{R}}
= \Big\langle\cX_a|x\rangle \Big\rangle_{\mathds{R}} \quad \subset \ \ T_\xi\C P^{N-1}
\label{tangent-space-R}
\end{align}
with basis
$\cX_\mu|x\rangle, \ \mu = 1,...,m$, so that $\dim T_\xi\cM = m = \dim\cM$.
One can now repeat the considerations in section \ref{sec:inner-connect-symp},
in terms of local coordinates
$x^\mu,\ \mu=1,...,m$ on $\cM$.
Thus $\cM$ is equipped with a $U(1)$ connection
\begin{align}
iA = \langle x|d|x\rangle
\end{align}
and a closed 2-form \eq{F-omega-dA}
\begin{align}
i\omega_\cM = d\langle x|d|x\rangle = \frac i2\omega_{\mu\nu} dx^\mu \wedge dx^\nu = i dA, \qquad d\omega_\cM = 0
\end{align}
as well as a quantum metric $g_{\mu\nu}$, which are simply the
pull-back of the symplectic structure and the
Fubini--Study metric on $\C P^{N-1}$.
These structures are intrinsic, and have nothing to do with target space ${\mathds{R}}^D$.
Given the basis $\cX_\mu|x\rangle$ of tangent vectors, we can evaluate the
symplectic form and the quantum metric in local embedding coordinates as
\begin{align}
i\omega_{\mu\nu}
&= \langle x|(\cX_\mu^\dagger \cX_\nu - \cX_\nu^\dagger \cX_\mu) |x\rangle \nonumber\\
g_{\mu\nu} &= \langle x|(\cX_\mu^\dagger \cX_\nu + \cX_\nu^\dagger \cX_\mu) |x\rangle \ .
\end{align}
It should be noted that the quantum tangent space $T_x\cM$
of the abstract quantum space is a subspace of $T_x\C P^{N-1}$, and
has a priori nothing to do with the embedding in target space ${\mathds{R}}^D$.
This is indicated by the attribute ``quantum''.
The embedding \eq{M-embedding-symbol} in target space induces another
metric on $\cM$, which in turn is distinct from the effective
metric discussed in section \ref{sec:eff-metric-MM}.
It is tempting to conjecture that
for irreducible matrix configurations, $\omega_\cM$ is always non-degenerate,
and thus defines a symplectic form on $\cM$.
However this is not true, as demonstrated by the minimal fuzzy torus or minimal fuzzy $H^4$
where $\omega_\cM$ vanishes, cf. section \ref{sec:examples}.
But if there is a semi-classical regime,
$\omega_\cM$ is indeed non-degenerate, so that $\cM$ is a symplectic manifold, as discussed in the next section\footnote{
For reducible matrix configurations $\omega_\cM$ may be degenerate even in the semi-classical regime.}.
From now on we will mostly drop the subscript from $\omega_\cM = \omega$.
\paragraph{Embedded quantum space for almost-local quantum spaces.}
Now consider the tangent space $T\tilde\cM$ of the embedded brane
$\tilde\cM \subset{\mathds{R}}^D$ \eq{embedded-M-def}, which is spanned by $\partial_\mu {\bf x^a}$
for any local coordinates on $\cM$.
This can be understood for almost-local quantum spaces,
following the semi-classical analysis of \cite{Ishiki:2016yjp}.
Recall the relation \eq{theta-omega}
\begin{align}
\frac{\partial}{\partial x^c} {\bf x^a} \ \approx - \theta^{ab}\omega_{bc}
\end{align}
as tensors on $\tilde{\mathds{R}}^D$. It follows that $\theta^{ab}$ is non-degenerate on
$\tilde\cM$. Then $0 \approx i\theta^{ab} \partial_b\lambda$ \eq{theta-del-l}
implies that $\lambda$ is approximately constant on
$\tilde\cM$,
and the derivative of $\lambda$ along the transversal fiber $\cN$ (approximately) vanishes
on $\tilde\cM$ due to \eq{l-min-N}. Then \eq{expect-X} implies
\begin{align}
{\bf x^a}(x) \approx x^a, \qquad
\ \partial_\mu {\bf x^a} \approx \partial_\mu x^a
\label{embed-x-approx}
\end{align}
so that both tensors $\theta^{ab}$ and $\omega_{bc}$ are approximately tangential to $\tilde\cM$,
and mutually inverse on $\tilde\cM$. This is particularly transparent in normal embedding coordinates.
In particular, $\tilde\cM$ is the location where $\lambda$ assumes its ``approximate'' minimum,
which was used in \cite{Schneiderbauer:2016wub,lukas_schneiderbauer_2016_45045} to numerically measure and picture such branes.
Then the embedding map \eq{M-embedding-symbol} is an immersion,
but (the closure of) $\tilde\cM \subset{\mathds{R}}^D$
may have self-intersections,
as in the example of squashed $\C P^2$ \cite{Steinacker:2014lma}.
Both $\omega_{ab}$ and $g_{ab}$ vanish along the directions $w^a$ along the fiber $\cN$,
\begin{align}
w^a\omega_{ab} = 0 = w^a g_{ab} \ , \qquad w\in T \cN \ .
\end{align}
Finally, we can recognize \eq{projector-embed}
\begin{align}
\partial^b {\bf x^a} = P^{ab} + P^{ba}
\label{P-M-proj}
\end{align}
as the tangential projector on $\tilde\cM\subset{\mathds{R}}^D$, since
the rhs vanishes along the fibers $\cN$.
This was obtained in \cite{Ishiki:2016yjp} in the semi-classical limit,
but that relation holds in fact exactly.
\subsection{Quantization map, symbol and semi-classical regime}
\label{sec:quatiz-semi}
Given the quasi-coherent states, we can define a {\bf quantization map}
\begin{align}
\cQ: \quad \cC(\cM) &\to End(\cH) \nonumber\\
\phi(x) &\mapsto \int_\cM \phi(x) \,|x\rangle \langle x|
\label{Q-map}
\end{align}
which associates to every classical
function on $\cM$ an operator or observable in $End(\cH)$.
The integral on the rhs is defined\footnote{This is well-defined if
(the closure of) $\cM$ is a compact sub-manifold of $\C P^{N-1}$, which we shall assume.
It is essential to use the abstract quantum space $\cM$ here,
otherwise the integral would typically not make sense.} naturally via the symplectic volume form
\begin{align}
\int_\cM \phi(x) := \frac{1}{(2\pi\alpha)^n} \int\limits_\cM \Omega \, \phi(x) \ ,
\qquad \Omega := \frac{1}{n!}\omega^{\wedge n}
\label{int-def}
\end{align}
(assuming $\dim\cM = m=2n$),
where the normalization factor $\alpha$ is defined by
\begin{align}
N = Tr(\mbox{1 \kern-.59em {\rm l}}) = \int_\cM 1 \ .
\label{symp-vol}
\end{align}
Semi-classical considerations suggest that $\alpha\approx 1$;
however, this cannot hold in full generality, since the symplectic form is degenerate for the minimal fuzzy torus, so that the integral vanishes.
It would be desirable to find sufficient conditions for $\alpha\approx 1$,
and a precise statement in particular for
the quantum K\"ahler manifolds discussed below. In any case, the trace is related to the integral via
\begin{align}
Tr\cQ(\phi) = \int_\cM \phi(x) \ .
\label{Tr-Q}
\end{align}
The map $\cQ$ cannot be injective, since $End(\cH)$ is finite-dimensional;
the kernel is typically given by functions with high ``energy''.
It is not evident in general if this map is surjective, which
will be established below for the case of quantum K\"ahler manifolds.
We can now re-define the {\bf symbol map} \eq{symbol-RD} more succinctly as
\begin{align}
End(\cH) &\to \cC(\cM) \nonumber\\
\Phi &\mapsto \langle x|\Phi|x\rangle =: \phi(x) \ .
\label{symbol}
\end{align}
Both sides have a natural norm and inner product, given by
\begin{align}
\langle \Phi,\Psi\rangle = Tr(\Phi^\dagger\Psi)\quad \ \mbox{and}\quad \
\langle\phi,\psi\rangle = \int_\cM \phi(x)^*\psi(x)
\end{align}
leading to the Hilbert-Schmidt norm $\|\Phi\|_{HS}$ and the $L^2$ norm $\|\phi\|_2$, respectively.
The symbol map can be viewed as de-quantization map, which makes sense for any quantum space
in the present framework.
The concept of almost-local operators discussed in section \ref{sec:almost-local} can now also be refined.
We re-define $Loc(\cH)\subset End(\cH)$ as a maximal (vector) space of
operators such that the restricted symbol map
\begin{align}
Loc(\cH) &\to\cC_{\rm IR}(\cM) \nonumber\\
\Phi \ &\mapsto \langle x|\Phi|x\rangle =: \phi(x)
\label{symbol-M}
\end{align}
is an ``approximate isometry''
with respect to the Hilbert-Schmidt norm on $Loc(\cH) \subset End(\cH)$
and the $L^2$-norm on $\cC_{\rm IR}(\cM)\subset L^2(\cM)$.
We will then identify $\Phi \sim \phi$.
Approximate isometry means that $|\|\phi\|_2 - 1| < \varepsilon$ whenever $\|\Phi\|_{\rm HS}=1$
for some given $0 < \varepsilon < \frac 12$, depending on the context.
Then the polarization identity implies
\begin{align}
\langle\Phi,\Psi\rangle_{\rm HS} \approx \langle\phi,\psi\rangle_2 \ ,
\end{align}
hence an ON basis of $Loc(\cH)$ is mapped to a basis of $\cC_{\rm IR}(\cM)$
which is almost ON.
This defines the {\bf semi-classical regime}, which
can be made more precise in some given situation by specifying
some $\varepsilon$.
Accordingly, {\bf almost-local quantum spaces} are (re)defined as matrix configurations
where all $X^a$ and $[X^a,X^b]$ are in $Loc(\cH)$.
Of course some given matrix configuration may be far from any semi-classical space,
in which case $Loc(\cH)$ is trivial.
However we will see that for almost-local quantum spaces, $Loc(\cH)$
typically includes the almost-local operators
in the sense of \eq{Phi-loc-approx} up to some bound, and in particular polynomials in $X^a$
up to some order. Moreover, $\cQ$ is an approximate inverse of
the symbol map \eq{symbol-M} on $Loc(\cH)$. Then the semi-classical regime
should contain a sufficiently large class
of functions and operators to characterize the geometry to a satisfactory precision.
Let us try to justify these claims.
The first observation is that $\mbox{1 \kern-.59em {\rm l}} \in Loc(\cH)$, because its symbol is the constant function
$1_\cM$, and the norm is preserved due to \eq{symp-vol}.
Conversely, we should show the {\em completeness relation}
\begin{align}
\cQ(1_\cM) = \int_\cM |x\rangle\langle x| \
\stackrel{!}{\approx} \ \mbox{1 \kern-.59em {\rm l}} \
\label{one-approx}
\end{align}
which is equivalent\footnote{The following considerations would also go through if these relations
hold with some non-trivial density.} to the trace identity
\begin{align}
Tr\Phi = \int_\cM \langle x|\Phi|x\rangle \qquad \forall \Phi\in End(\cH) \ .
\end{align}
This is not automatic, since the integral vanishes e.g. on minimal $T^2_2$.
We can establish the completeness relation
at least formally\footnote{A more precise statement \eq{one-H0-coherent}
will be shown for
quantum K\"ahler manifold.}
(or rather approximately) for almost-local quantum spaces.
Indeed then \eq{X-comm-theta} implies
\begin{align}
[X^a,\cQ(\phi)] &\approx -i\int_\cM \phi(x) \theta^{ab} \partial_b(|x\rangle\langle x|) \nonumber\\
&= i\int_\cM \theta^{ab} \partial_b\phi(x) |x\rangle\langle x| \nonumber\\
&= \cQ(i\theta^{ab} \partial_b\phi)
\label{X-comm-Q}
\end{align}
because the integration measure $\Omega$ \eq{int-def} is invariant under
Hamiltonian vector fields.
In particular, $\cQ(1_\cM)$ (approximately) commutes with all $X^a$, which by irreducibility
implies $\cQ(1_\cM) \propto \mbox{1 \kern-.59em {\rm l}}$, and \eq{one-approx} follows using
the trace \eq{Tr-Q}.
Now assume that the completeness relation holds to a sufficient precision.
Let $\Phi$ be an almost-local hermitian operator as defined in section \ref{sec:almost-local},
with symbol $\phi$.
Then the trace relation gives
\begin{align}
\|\Phi\|_{\rm HS}^2 &\approx \int_\cM\langle x|\Phi\Phi|x\rangle \approx
\int_\cM \phi(x)^2 = \|\phi\|_2^2
\end{align}
using \eq{Phi-loc-approx}.
Therefore almost-local
operators in the sense of \eq{Phi-loc-approx} are indeed contained in $Loc(\cH)$,
up to the specific bounds.
Conversely, assume that $\|\Phi\|_{\rm HS} \approx \|\phi\|_2$ for hermitian $\Phi$.
Then the completeness relation implies
\begin{align}
\|\Phi\|_{\rm HS}^2 \approx \int_\cM \langle x|\Phi\Phi|x\rangle
&\approx \int_\cM \phi(x)^2 = \|\phi\|_2^2 \nonumber\\
\int_\cM \langle x|(\Phi - \phi(x))(\Phi - \phi(x))|x\rangle
&\approx 0
\label{norm-uncertainty}
\end{align}
which implies that $(\Phi-\phi(x))|x\rangle \approx 0$
$\forall x\in\cM$. Hence they are approximately local in the sense of \eq{Phi-loc-approx}. In particular they
approximately commute due to \eq{semiclass-fact},
\begin{align}
\Phi \Psi \approx \Psi \Phi, \qquad \Phi, \Psi \in Loc(\cH) \ .
\end{align}
Hence the above definition of $Loc(\cH)$ is a refinement of the
definitions in section \ref{sec:almost-local},
turning the local statements into global ones.
The image $\cC_{\rm IR}(\cM)$ is typically given by functions which are slowly varying on the
length scale $L_{\rm coh}$, corresponding to the
semi-classical or infrared regime.
To see that $\cQ$ is approximately inverse to the symbol map,
we note that the completeness relation implies
\begin{align}
|y\rangle &\approx \int_\cM |x\rangle\langle x|y\rangle \ .
\end{align}
This means that
\begin{align}
\langle x|y\rangle \approx \delta_y(x)
\end{align}
for any $y\in\cM$ w.r.t. the measure \eq{int-def},
consistent with
$|\langle x|y\rangle| \sim e^{-\frac 12(x-y)_g^2}$ \eq{coherent-inner} \eq{D-expand}. Then
\begin{align}
\cQ(\phi)|y\rangle &\approx \int_\cM \phi(x)|x\rangle\langle x|y\rangle
\approx \phi(y)|y\rangle
\label{Q-phi-loc}
\end{align}
for functions $\phi(x)$
which are slowly varying on $L_{\rm coh}$.
Therefore $\cQ(\phi)$ is almost-local and hence
$\cQ(\phi)\in Loc(\cH)$ for slowly varying $\phi$,
and moreover
$\cQ$ is approximately the inverse of the symbol map on $Loc(\cH)$, since
\eq{Q-phi-loc} gives
\begin{align}
\langle y|\cQ(\phi)|y\rangle &\approx \phi(y) \ .
\end{align}
For almost-local quantum spaces, $Loc(\cH)$ contains in particular
the basic matrices
\begin{align}
X^a \approx \int_{\cM} {\bf x^a} |x\rangle\langle x| \ .
\end{align}
The approximation is good as long as the classical function
${\bf x^a}$ is approximately constant on $L_{\rm coh}$.
Moreover,
\eq{XX-theta-rel} gives the approximate commutation relations on $\cM$
\begin{align}
[X^a,X^b] \sim i \theta^{ab} = i\{x^a,x^b\} \ .
\end{align}
We have seen that $\theta^{ab}$ is tangential to
$\cM$ and the inverse of the symplectic form $\omega$ on $\cM$, hence
$\{x^a,x^b\}$ are Poisson brackets on $\cM$.
In this sense, the semi-classical geometry is encoded in the matrix configuration $X^a$.
These observations are summarized in table \ref{tab:correspondence}.
\begin{table}[h]
\begin{center}
\begin{tabular}{c c c}
$Loc(\cH) \subset End(\cH)$ & $\sim$ & $\cC_{\rm IR}(\cM) \subset L^2(\cM)$ \\ \hline
$ \Phi $ & $\sim$ & $ \phi(x) = \langle x|\Phi|x\rangle$ \\[1ex]
$ X^a $ & $\sim$ & $ {\bf x^a}(x) $ \\[1ex]
$ [.,.] $ & $\sim$ & $ i\{.,.\} $ \\[1ex]
$ Tr $ & $\sim$ & $ \int_\cM $ \\[1ex]
$\Box$ & $\sim$ & $e^{\sigma}\Box_G$ \\
\end{tabular}
\caption{Correspondence between almost-local operators
and infrared functions on $\cM$ for almost-local quantum spaces. The metric structure is encoded
in the Laplacian $\Box$ \eq{Box-G}.}
\label{tab:correspondence}
\end{center}
\end{table}
This provides the starting point of the emergent geometry and gravity
considerations in \cite{Steinacker:2010rh,Steinacker:2019fcb},
which will be briefly
discussed in section \ref{sec:eff-metric-MM}.
The above Poisson structure extends trivially to
$\tilde {\mathds{R}}^D$, which for $D>\dim\cM$
decomposes into symplectic leaves of $\omega_{ab}$ that are preserved by the Poisson structure. Functions which are constant
on these leaves then have vanishing Poisson brackets, which leads to a degenerate effective metric
as discussed in section \ref{sec:eff-metric-MM}.
In the UV or deep quantum regime,
the above semi-classical picture is no longer justified, and in fact
it is very misleading.
In particular, consider {\em string states} which are defined as
rank one operators built out of quasi-coherent states
\cite{Steinacker:2016nsc,Iso:2000ew}
\begin{align}
\psi_{x,y}
&:= |x\rangle\langle y | \qquad \in End(\cH) \ .
\label{string-states}
\end{align}
They are highly non-local for $x\neq y$, and should not be interpreted as functions
but rather as open strings (or dipoles) linking $|y\rangle$ to $|x\rangle$
on the embedded brane $\tilde\cM$.
These states provide a complete and more adequate picture of $End(\cH)$,
and exhibit the stringy nature of noncommutative field theory
and Yang-Mills matrix models \cite{Steinacker:2016nsc}.
This means that the physical content of Yang-Mills matrix models,
and more generally of noncommutative field theory,
is much richer than suggested by the semi-classical limit.
In particular, string states arise as high-energy excitation modes, leading to
UV/IR mixing in noncommutative field theory \cite{Minwalla:1999px}.
This is a phenomenon which has no counterpart in
conventional (quantum) field theory.
\subsection{Complex tangent space and quantum K\"ahler manifolds}
\label{sec:Kahler}
Now we return to the exact analysis. For any quantum manifold
$\cM$, the embedding $\cM \to \C P^{N-1}$
induces the tangential map
\begin{align}
T_\xi\cM &\to T_\xi\C P^{N-1} \ .
\label{M-embed-CP-d}
\end{align}
Now we take into account that $\C P^{N-1}$ carries an intrinsic complex structure
\begin{align}
\cJ:\quad T_\xi\C P^{N-1} &\to T_\xi\C P^{N-1}, \qquad \cJ v = i v
\label{complex-intrinsic}
\end{align}
for any $v\in T_\xi\C P^{N-1}$. Accordingly,
$T\C P^{N-1} \cong T^{(1,0)}\C P^{N-1}$ can be viewed as holomorphic
tangent bundle, thus bypassing an explicit complexification of its real tangent space.
With this in mind, we define the {\bf complex quantum tangent space} of $\cM$ as
\begin{align}
T_{\xi,\C}\cM := \Big\langle\cX_a|x\rangle \Big\rangle_\C \quad \subset \ \ T_\xi\C P^{N-1} \cong T_{\xi,\C}\C P^{N-1} \ ,
\label{tangent-space-C}
\end{align}
which also carries the complex structure
\begin{align}
\cJ \cX_a|x\rangle := i\cX_a|x\rangle \quad \in T_{\xi,\C}\cM \ , \qquad
\cJ^2 = -\mbox{1 \kern-.59em {\rm l}} \ .
\end{align}
Again, this complex tangent space
is not necessarily the complexification of the real one.
Using the basis $\cX_\mu|x\rangle, \ \mu = 1,...,m$ of $T_{\xi}\cM$
which arises in normal embedding coordinates, there may be relations
of the form
\begin{align}
(i c^\mu \cX_\mu - J^\nu\cX_\nu)|x\rangle = 0 \quad \mbox{for} \quad
J^{\nu}, c^\mu \in{\mathds{R}} \ ,
\label{complex-tangent-rel}
\end{align}
so that $T_{\xi,\C}\cM$ has reduced dimension over $\C$.
We will see that for quantum K\"ahler manifolds as defined below,
the complex dimension is half of the real one.
\paragraph{Quantum K\"ahler manifolds.}
Consider the maximally degenerate case where the complex dimension of
$T_{\xi,\C}\cM$ is given by $n= \frac m2 \in\N$ where $m= \dim_{\mathds{R}}\cM$.
Then $T_{\xi}\cM$ is stable under the complex
structure operator $\cJ$
\begin{align}
T_{\xi,\C}\cM = T_{\xi}\cM
\label{Kahler-cond}
\end{align}
so that $T_\xi\cM$ should be viewed as holomorphic tangent space of $\cM$.
But this implies that $\cM\subset \C P^{N-1}$ is a complex sub-manifold (i.e. defined by holomorphic equations),
cf. \cite{voisin2003hodge} or Proposition 1.3.14 in \cite{Baouendi:1999uya}.
Such quantum manifolds $\cM$ will be called {\bf quantum K\"ahler manifolds},
for reasons explained below.
Indeed, all complex sub-manifolds of $\C P^{N-1}$ are known to be K\"ahler.
Note that this is
an intrinsic property of a quantum space $\cM$, and no extra structure is introduced here:
$\cM$ either is or is not of this type\footnote{
It is interesting to note that due to \eq{almost-Kahler},
$H_x$ preserves the complex tangent space $T_{\xi,\C}\cM$,
at least in the semi-classical regime. However, \eq{almost-Kahler} is still weaker than the K\"ahler condition.}.
We will see that this includes the well-known quantized or ``fuzzy'' spaces
arising from quantized coadjoint orbits\footnote{It is worth pointing out that $\C P^{N-1}$ is itself a quantum K\"ahler manifold,
as minimal fuzzy $\C P^{N-1}_N$.}.
Consider the quantum K\"ahler case in more detail.
We can introduce a local holomorphic parametrization of $\cM\subset\C P^{N-1}$ near $\xi$
in terms of $z^k\in\C^n$.
Then any local (!) holomorphic section of the tautological line bundle over
$\C P^{N-1}$ defines via pull-back a local holomorphic section
of the line bundle
\begin{align}
\tilde\cB := \bigcup_{x\in\tilde{\mathds{R}}^D} E_x \to \cM \ \hookrightarrow \C P^{N-1}
\end{align}
over $\cM$, denoted by $\|z\rangle$.
This $\|z\rangle$ can be viewed as holomorphic $\C^N$-valued function
on $\cM$, which satisfies
\begin{align}
\frac{\partial}{\partial \bar z^k} \|z\rangle = 0, \qquad \|z\rangle\big|_\xi = |\xi\rangle
\label{holo-anihil-coh}
\end{align}
where $\bar z^k$ denotes the complex conjugate of $z^k$.
Hence $\|z\rangle$ arises from $|x\rangle$
through a re-parametrization and gauge transformation along with a
non-trivial normalization\footnote{$\|z\rangle$
cannot be normalized, since e.g. $\langle y\|z\rangle$ must be holomorphic in $z$.
Apart from that, $\tilde\cB$ is equivalent to $\cB$.} factor;
this is indicated by the double line in $\|z\rangle$.
In other words, the differential of the section
\begin{align}
d\|z\rangle = dz^k \frac{\partial}{\partial z^k}\|z\rangle \qquad \in\ \Omega^{(1,0)}_{z}\cM
\label{holo-diff-sect}
\end{align}
is a $(1,0)$ one-form.
Given this holomorphic one-form $d\|z\rangle$ and the hermitian inner product on $\cH$,
we naturally obtain a $(1,1)$ form
\begin{align}
i\omega := (d\|z\rangle)^\dagger \wedge d\|z\rangle
&= i\omega_{\bar k l} d\bar z^k \wedge dz^l \qquad \in \ \Omega^{(1,1)}_z\cM \nonumber\\
i\omega_{\bar k l} &= (d_k\|z\rangle)^\dagger d_l\|z\rangle
\end{align}
which is closed,
\begin{align}
d\omega = -(d\|z\rangle)^\dagger \wedge d d\|z\rangle + (d d\|z\rangle)^\dagger \wedge d\|z\rangle = 0
\end{align}
using holomorphicity of $\|z\rangle$.
This is the K\"ahler form, which encodes the
$\omega_{ab}$ in \eq{omega-def}.
As in \eq{hab-def},
we can then define the hermitian metric
\begin{align}
h(X,Y) &= \big((d\|z\rangle)^\dagger \otimes d\|z\rangle \big)(X,Y)
\qquad \in T^{(1,1)}
\label{h-def-general}
\end{align}
whose imaginary and real part define the symplectic form and the quantum metric via
\begin{align}
\omega(X,Y) &= -i(h(X,Y) - h(Y,X)^*) = - \omega(Y,X) \nonumber\\
g(X,Y) &= h(X,Y) + h(X,Y)^* = g(Y,X) \ .
\end{align}
Since $h\in T^{(1,1)}$, they satisfy the compatibility condition
\begin{align}
\omega(X,\cJ Y) &= -i(h(X,\cJ Y) - h(\cJ Y,X)^*) \nonumber\\
&= -i(ih(X,Y) +i h(Y,X)^*) \nonumber\\
&= g(X,Y)
\label{Kahler-condition}
\end{align}
(recall that $\cJ=-i$ on anti-holomorphic $(0,1)$ forms).
This means that $\cM$ is a K\"ahler manifold,
and the name ``quantum K\"ahler manifold''
indicates its origin from the matrices $X^a$.
In particular, the coherence length $L_{\rm coh}$ and the uncertainty scale
$L_{NC}$ coincide.
Now we relate this to the local generators $\cX_\mu$ \eq{del-nabla-X-general-2}, \eq{tangent-space-R}.
Introducing real coordinates $z^k = z^k(x^\mu)$ where
$x^\mu$ are the local (Cartesian) embedding coordinates introduced above,
the holomorphicity relation \eq{holo-anihil-coh} can be expressed using \eq{del-nabla-X-general-2} as
\begin{align}
0 = \frac{\partial}{\partial \bar z^k} \|z\rangle
= \frac{\partial x^\mu}{\partial \bar z^k} \frac{\partial}{\partial x^\mu} \|z\rangle
= \frac{\partial x^\mu}{\partial \bar z^k}\, (\cX_\mu+iA_\mu) \|z\rangle \ .
\end{align}
Similarly,
\begin{align}
\frac{\partial}{\partial z^k} \|z\rangle
= \frac{\partial x^\mu}{\partial z^k} \frac{\partial}{\partial x^\mu} \|z\rangle
= \frac{\partial x^\mu}{\partial z^k}(\cX_\mu+iA_\mu) \|z\rangle \ .
\end{align}
We can now introduce new generators\footnote{The $\cA^k, \bar \cA_l$ are matrix-valued functions on $\cM$
just like the $\cX_\mu$, while the $X_a$ are ``constant'' matrices.}
$\cA^k, \bar \cA_l$ via
\begin{align}
\cA^k &= \frac{\partial x^\mu}{\partial \bar z^k}\,(\cX_\mu+iA_\mu) \nonumber\\
\bar\cA_k &= \frac{\partial x^\mu}{\partial z^k}\, (\cX_\mu+iA_\mu)
\end{align}
so that
\begin{align}
\cA^k \|z\rangle &= 0, \qquad
\bar\cA_k \|z\rangle = \frac{\partial}{\partial z^k} \|z\rangle \ .
\end{align}
These are clearly the analogs of the standard annihilation properties
of coherent states.
It is hence appropriate to denote the $\|z\rangle$ on
quantum K\"ahler manifolds as {\bf coherent states}.
Then
\begin{align}
T_{\xi,\C}\cM = \Big\langle \bar\cA_k \|z\rangle \Big\rangle_{\C} \ \cong \C^n \, \qquad k=1,...,n .
\end{align}
The metric tensor and the symplectic form are then determined as usual by the K\"ahler form
\begin{align}
i\omega_{\bar k l} &= (d_k\|z\rangle)^\dagger d_l\|z\rangle
= \langle z\| \bar\cA_k^\dagger \bar\cA_l\|z\rangle
\end{align}
which arises from a local K\"ahler potential,
\begin{align}
\omega_{\bar k l} &= -\frac 12 \bar\partial_k\partial_l \rho
\end{align}
given by the restriction of the
(Fubini--Study) K\"ahler potential on $\C P^N$.
This provides a rather satisfactory concept of quantum K\"ahler geometry, which arises in a natural way from the complex structure in the
Hilbert space.
There is no need to invoke any semi-classical or large $N$ limit.
Not all quantum spaces are of this type,
a counterexample being the minimal fuzzy torus $T^2_2$
as discussed in section \ref{sec:min-fuzzy-T2}.
In \cite{Ishiki:2016yjp}, it is claimed that all quantum manifolds are K\"ahler in the
semi-classical limit, based on \eq{imaginary-P}. However this refers to a
different almost-complex structure and metric which is not intrinsic.
From the present analysis, there is no obvious reason why all
quantum manifolds should be K\"ahler, even in the semi-classical limit.
Since for non-K\"ahler manifolds the
complex tangent space $T_{\C}\cM$ is higher-dimensional,
quantum effects due to loops in Yang-Mills matrix models
may be more significant, and the geometric trace formula
(2.38) in \cite{Steinacker:2016nsc} for string states would need to be replaced with some
higher-dimensional analog.
This suggests that quantum K\"ahler manifolds may be protected
by some sort of non-renormalization theorems.
\section{Coherent states and quantization map for quantum K\"ahler manifolds}
\label{sec:coherent}
We can establish the following lemma, which is well-known for
standard coherent states:
\begin{lem}
\label{lemma-diagonal}
Let $|x\rangle$ be the coherent states of a quantum K\"ahler manifold $\cM$,
and $\cH_0\subset\cH$ their linear span.
Assume $A\in End(\cH_0)$ satisfies $\langle x|A|x\rangle = 0$ for all $x\in\cM$.
Then $A=0$.
\end{lem}
\begin{proof}
Consider the function
\begin{align}
A(\bar y,z) := \langle y\|A\|z\rangle
\end{align}
where $\|z\rangle, \|y\rangle$ are local holomorphic sections of the coherent states in a neighborhood of $\xi\in\cM$.
Clearly this function is holomorphic in $z$ and in $\bar y$.
By assumption, the restriction of $A(\bar y,z)$ to the diagonal
$A(\bar z,z) = \langle z\|A\|z\rangle$ vanishes
identically. But then the standard properties of holomorphic
functions imply (cf. \cite{Perelomov:1986tf}) that $A(\bar y,z)\equiv 0$ identically.
This argument applies
near any given point $\xi\in\cM$, which implies that $A =0$.
\end{proof}
Using this lemma, we can establish the diagonal realization of
operators via coherent states:
\begin{thm}
\label{thm:diag-coh}
Let $|x\rangle$ be the (normalized)
coherent states of a quantum K\"ahler manifold $\cM$,
and $\cH_0\subset\cH$ their linear span.
Then all operators $A\in End(\cH_0)$ can be written as
\begin{align}
A = \int_\cM A(x) \,|x\rangle \langle x|
\label{A-diag-rep}
\end{align}
for some suitable complex-valued function $A(x)$ on $\cM$.
\end{thm}
Note that if the holomorphic coherent states $\|x\rangle$ are used instead of the normalized $|x\rangle$,
then $A(x)$ might have some singularities.
\begin{proof}
Assume that the subspace in $End(\cH_0)$ spanned by the rhs of \eq{A-diag-rep}
is smaller than $End(\cH_0)$. Let $B\in End(\cH_0)$ be in its orthogonal
complement w.r.t. the Hilbert-Schmidt metric.
Then
\begin{align}
0 = Tr( A B) = \int_\cM A(x) \langle x|B|x\rangle \qquad \forall A(x)\in\cC(\cM).
\end{align}
But this implies $\langle x|B|x\rangle = 0 \ \forall x\in\cM$, and then by Lemma
\ref{lemma-diagonal} it follows that $B=0$.
\end{proof}
Consider again the span
$\cH_0\subset \cH$ of all quasi-coherent states $|x\rangle$.
It is natural to conjecture
\begin{conj}
For every irreducible matrix configuration, $\cM$ is connected, and
the quasi-coherent states
are over-complete, i.e.
\begin{align}
\cH_0 = \Big\langle |x\rangle ; x\in\tilde{\mathds{R}}^D\Big\rangle_\C = \cH \ .
\end{align}
\end{conj}
In the semi-classical regime this follows from \eq{one-approx}
and \eq{X-comm-Q}, which would give a central element for every connected
component of $\cM$.
A viable strategy to show this in general might be to demonstrate that the
continuation of the $|x\rangle$ through the singular set $\cK$
provides all eigenstates of $H_x$.
However, this is left as a conjecture.
In any case, we can consider the following restricted form of the
quantization map \eq{Q-map}
\begin{align}
\cQ: \quad \cC(\cM) &\to End(\cH_0) \nonumber\\
\phi(x) &\mapsto \int_\cM \phi(x) \,|x\rangle \langle x|
\end{align}
associating to every classical
function on $\cM$ an operator or observable in $End(\cH_0)$.
The above theorem states that $\cQ$ is surjective for quantum K\"ahler manifolds.
This means that any given operator $A\in End(\cH_0)$ has a representation
of that form, and in fact many. The kernel of $\cQ$ is typically given by functions above some ``energy cutoff''.
Furthermore, it follows that the operators of the form \eq{A-diag-rep} form an algebra,
and every operator can be viewed as quantized function on $\cM$.
Even though this is a very nice result,
surjectivity of $\cQ$ is rather surprising in light of the string states
\eq{string-states}, which are highly non-local.
Nevertheless, even such string states can be represented in the
above diagonal form \eq{A-diag-rep},
but $A(x)$ is then rapidly oscillating and in the UV or deep quantum regime.
Therefore this diagonal representation should be used with caution,
and a representation in terms of non-local
string states is more appropriate in the UV regime.
These can naturally be interpreted as open strings on the embedded
quantum space or brane $\tilde\cM$.
\paragraph{Completeness relation.}
In particular, Theorem \ref{thm:diag-coh} implies that at least for
quantum K\"ahler manifolds, the
identity operator $\mbox{1 \kern-.59em {\rm l}}_{\cH_0}$ can be written in terms of
coherent states:
\begin{align}
\mbox{1 \kern-.59em {\rm l}}_{\cH_0} = \int_\cM \mbox{1 \kern-.59em {\rm l}}(x) |x\rangle \langle x|,
\label{one-H0-coherent}
\end{align}
where the integral is defined as in
\eq{int-def}, and $\mbox{1 \kern-.59em {\rm l}}(x)$ is some function on $\cM$.
This gives
\begin{align}
Tr A &= \int_\cM \mbox{1 \kern-.59em {\rm l}}(x)\langle x|A|x\rangle , \nonumber\\
Tr(\cQ(\phi(x))) &= \int_\cM \mbox{1 \kern-.59em {\rm l}}(x)\phi(x) \ .
\end{align}
The natural guess is
\begin{align}
\mbox{1 \kern-.59em {\rm l}}_{\cH} = \int_\cM |x\rangle \langle x| \ .
\label{complete-simple}
\end{align}
This is well-known e.g. for the quantum spaces given by quantized coadjoint orbits of compact
semi-simple Lie groups, where it follows immediately from Schur's Lemma.
It follows more generally from \eq{X-comm-Q} at least in the semi-classical regime,
but is not evident if $\mbox{1 \kern-.59em {\rm l}}(x)\propto 1_\cM$ for all quantum K\"ahler manifolds.
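For the fuzzy sphere of section \ref{sec:fuzzy-S2}, where the quasi-coherent states are the
standard spin coherent states, \eq{complete-simple} is easily tested numerically. The following
sketch (our own illustration; it assumes the normalization
$\frac{N}{4\pi}\int_{S^2} d\Omega\, |x\rangle\langle x| = \mbox{1 \kern-.59em {\rm l}}$
familiar from spin coherent states) estimates the integral by Monte Carlo sampling,
reusing \texttt{quasi\_coherent()} from the earlier sketch:
\begin{verbatim}
# Monte Carlo check of the completeness relation on the fuzzy sphere.
import numpy as np

def su2_generators(N):
    """Hermitian su(2) generators J^1, J^2, J^3 in the N-dim irrep."""
    j = (N - 1) / 2
    m = np.arange(j, -j - 1, -1)                     # j, j-1, ..., -j
    Jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), k=1)
    return [(Jp + Jp.T) / 2, (Jp - Jp.T) / (2 * 1j), np.diag(m)]

N, samples = 5, 20000
X = [J / np.sqrt((N**2 - 1) / 4) for J in su2_generators(N)]
rng = np.random.default_rng(0)
acc = np.zeros((N, N), dtype=complex)
for _ in range(samples):
    n = rng.normal(size=3); n /= np.linalg.norm(n)   # uniform on S^2
    lam, v = quasi_coherent(X, n)
    acc += np.outer(v, np.conj(v))
acc *= N / samples
print(np.linalg.norm(acc - np.eye(N)))               # -> 0 as samples grow
\end{verbatim}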
\section{Remarks and discussion}
\label{sec:remarks}
The results and concepts discussed in this paper call for a number of remarks.
First, we only considered the case where the lowest eigenspace
$E_x$ of $H_x$ is non-degenerate. This excludes many interesting examples
such as fuzzy $S^4_N$ and fuzzy $H^4_n$ as discussed in section \ref{sec:degenerate}. If $E_x$ is a $k$-dimensional (complex) vector space, then
much of the above analysis would go through, replacing
$\cB$ by a $U(k)$ bundle and $\omega$ by the field strength of its
natural (Berry) connection.
Sometimes the degeneracy may also be resolved by adding extra matrices
$X^i$. For example, the abstract quantum space of $S^4_N$ is then recognized as $\C P^{3}$, and similarly in other examples, cf. section \ref{sec:degenerate}. In other words, such degenerate
quantum spaces can be recognized as projections of non-degenerate ones,
by dropping some $X^a$.
There are a number of issues which ask for a better understanding.
One of them is the relation between the symplectic volume of $\cM$
and the dimension of the Hilbert space \eq{Tr-Q}. Even though equality
holds in the standard examples, it is violated for
the minimal fuzzy torus.
Results from geometric quantization suggest a more complicated
relation, and it would be desirable to have quantitative
results for a large class of quantum spaces.
Furthermore, it would be very important to have a more general derivation
or qualification of the completeness relation \eq{one-approx}.
Another open issue is the compactness of $\cM\subset\C P^{N-1}$ for finite-dimensional $\cH$.
It may be tempting to conjecture that all $\cM$ are compact, but
the fuzzy disk \cite{Lizzi:2003ru} is a candidate for a non-compact
quantum space, which remains to be elaborated. However, the closure of
$\cM$ in $\C P^{N-1}$ is clearly compact, and it would be nice to understand
this in more detail.
Small deformations of the basic quantum K\"ahler manifolds $\cM_0$ of dimension $m<D$ typically lead to an ``oxidation'' $\cM$
corresponding to some tubular neighborhood of
$\cM_0$. This leads to the idea of fuzzy extra dimensions
\cite{Aschieri:2006uw,Aoki:2014cya}.
On the other hand, it is well-known that
adding a small perturbation to some quantum manifold $\cM$
can be viewed
as a gauge field on $\cM$, which becomes dynamical
in Yang-Mills matrix models. Relating
this field-theoretic point of view with the above geometric point
of view provides useful insights,
and one may hope to find further statements on stability and/or
non-renormalization in this way.
Similar considerations lead to the emergent gravity
approach based on Yang-Mills matrix models
\cite{Yang:2006dk,Steinacker:2010rh}.
Finally, the present analysis is restricted to the case of
irreducible matrix configurations.
If the matrix configuration is reducible, $\cH = \oplus \cH_i$ decomposes into the
orthogonal sum of irreducible subspaces, and the above considerations apply
to all $\cH_i$. This could be viewed as a stack of branes.
In particular, commuting matrix configurations (cf. \cite{Aoki:1998bq})
have a large stabilizer $U(1)^N$ under the adjoint action of $U(N)$,
so that their
$U(N)$ gauge orbit in Yang-Mills matrix models has smaller dimension
than that of irreducible (noncommutative) matrix configurations. But then their contribution in the ``path'' integral over all matrices, which defines the quantum theory, is negligible.
Therefore irreducible
matrix configurations as considered here are expected to play the central
role in these models.
\subsection{Dirac operator}
The present framework has a natural extension to spinors
and Dirac-type operators. Namely, for any matrix configuration $X^a,
a=1,...,D$ we can consider \cite{Berenstein:2012ts,deBadyn:2015sca,Karczmarek:2015gda,Schneiderbauer:2016wub}
\begin{align}
\slashed{D}_x = \Gamma_a(X^a - x^a), \qquad x^a \in {\mathds{R}}^D
\label{Dirac-x}
\end{align}
acting on $\cH \otimes \C^s$.
Here $\Gamma_a$ are the gamma matrices generating the Clifford algebra of $SO(D)$ on the irreducible representation $\C^s$.
$\slashed{D}_x$ arises as the off-diagonal part of the matrix Dirac operator\footnote{
A chirality operator for $\slashed{D}$ is typically only recovered in the semi-classical regime.}
$\slashed{D}=\Gamma_a[X^a,.]$ in Yang-Mills matrix models such as the IIB or IKKT model,
for the matrix configuration extended by a point brane $X^a \oplus x^a$.
It describes a fermionic string stretched between the
brane and the point $x^a$.
Quite remarkably, numerical investigations \cite{Schneiderbauer:2016wub} strongly suggest that
the Dirac operator $\slashed{D}_x$ always has exact zero modes
\begin{align}
\slashed{D}_x|x,s\rangle = 0 \
\end{align}
on the brane $\tilde\cM$, so that there is no need to introduce
the lowest eigenvalue function $\lambda(x)$. This can be justified rigorously for 2-dimensional branes \cite{Berenstein:2012ts},
and some heuristic reasons can be given also in more general cases;
see \cite{Berenstein:2012ts,deBadyn:2015sca,Karczmarek:2015gda} for further work.
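For the fuzzy sphere this is easy to probe numerically: for $D=3$ the $\Gamma_a$ are simply the
Pauli matrices. The following sketch (our own illustration) scans the radial line through the
north pole for the closing of the spectral gap of $\slashed{D}_x$; one finds that the gap closes
at the brane radius $\sqrt{\frac{N-1}{N+1}}$, consistent with the exact zero mode
$|\!\uparrow\rangle\otimes|j,j\rangle$ at that point.
\begin{verbatim}
# Scan for zero modes of the point-probe Dirac operator on the fuzzy
# sphere (illustration only). Reuses su2_generators() from above.
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def dirac(X, x):
    """D_x = sum_a sigma_a kron (X^a - x^a)."""
    N = X[0].shape[0]
    return sum(np.kron(s, Xa - xa * np.eye(N))
               for s, Xa, xa in zip(sig, X, x))

N = 5
X = [J / np.sqrt((N**2 - 1) / 4) for J in su2_generators(N)]
radii = np.linspace(0.5, 1.5, 201)
gaps = [np.min(np.abs(np.linalg.eigvalsh(dirac(X, [0, 0, r]))))
        for r in radii]
print(radii[int(np.argmin(gaps))], np.sqrt((N - 1) / (N + 1)))
\end{verbatim}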
However, the presence of extra structure
due to the spinors obscures the relation with the quasi-coherent states and $\cM$ as
introduced here.
This is certainly an interesting topic for further research.
\subsection{Effective metric and relation with matrix models}
\label{sec:eff-metric-MM}
The considerations in this paper are motivated by Yang-Mills matrix models, whose
solutions are precisely matrix configurations as considered here.
Fluctuations in these models are governed by the
{\em matrix Laplacian}
\begin{align}
\Box := \delta_{ab} [X^a,[X^b,.]]: \quad End(\cH) \to End(\cH) \ .
\label{Box}
\end{align}
The displacement Hamiltonian arises as the off-diagonal part of the matrix Laplacian for a point or probe brane
added to the matrix configuration
\cite{Schneiderbauer:2016wub}, i.e. for $X^a \oplus x^a$
acting on $\cH\oplus\C$.
It describes a string stretched between the
brane and the point $x^a$.
This can also be viewed as a special case of
intersecting branes \cite{Chatzistavrakidis:2011gs}, one brane being the
point probe.
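The matrix Laplacian is easily realized numerically as an $N^2\times N^2$ matrix acting on
$\mathrm{vec}(\Phi)$, using $\mathrm{vec}([X,\Phi]) = (X\otimes \mbox{1 \kern-.59em {\rm l}} - \mbox{1 \kern-.59em {\rm l}}\otimes X^T)\,\mathrm{vec}(\Phi)$.
The following sketch (our own illustration) does this and, for the fuzzy sphere of section
\ref{sec:fuzzy-S2}, recovers the expected angular momentum spectrum $\propto l(l+1)$,
$l=0,\ldots,N-1$, with multiplicity $2l+1$:
\begin{verbatim}
# The matrix Laplacian Box as an N^2 x N^2 matrix on vec(Phi)
# (illustration only). Reuses su2_generators() from above.
import numpy as np

def box_operator(X):
    N = X[0].shape[0]
    I = np.eye(N)
    ads = [np.kron(Xa, I) - np.kron(I, Xa.T) for Xa in X]
    return sum(ad @ ad for ad in ads)

N = 4
C = (N**2 - 1) / 4
X = [J / np.sqrt(C) for J in su2_generators(N)]
evals = np.linalg.eigvalsh(box_operator(X))
print(np.round(evals * C, 6))    # l(l+1) = 0, 2, 6, 12 for N = 4
\end{verbatim}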
To understand the effective metric in matrix models,
consider the inner derivations
\begin{align}
[X_a,.] \sim i\theta^{a\mu} \partial_\mu
\end{align}
acting on $End(\cH)$ resp. $\cC_{\rm IR}(\cM)$, which
are (quantizations of)
Hamiltonian vector fields on $\cM$ for almost-local quantum spaces.
By considering the inner product
$\langle \Phi,\Psi\rangle := Tr([X^a,\Phi^\dagger][X_a,\Psi])$ on $Loc(\cH)$,
one can then show \cite{Steinacker:2010rh} that
\begin{align}
\Box \sim e^{\sigma}\Box_G
\label{Box-G}
\end{align}
where $G$ is the {\bf effective metric} on $\cM$ given by
\begin{align}
G^{\mu\nu} &= e^{-\sigma}\,\theta^{\mu\mu'} \theta^{\nu\nu'}
g_{\mu'\nu'} , \qquad
e^{-\sigma} = \frac {|G^{\mu\nu}|^{1/2}}{|\theta^{\mu\nu}|^{1/2}} \
\end{align}
for $\dim\cM >2$. This can be viewed as open-string metric, and it
provides the starting point of the emergent geometry and gravity
considerations in \cite{Steinacker:2010rh,Steinacker:2019fcb}.
In the two-dimensional case, the underlying Weyl invariance leads to
a different interpretation of $\Box$, which is discussed in
\cite{Arnlind:2012cx}.
In the reducible case, $\cM$ decomposes into a foliation of
symplectic leaves. Then the effective metric is non-vanishing only
along this foliation, i.e. it vanishes along the transversal directions.
In the context of Yang-Mills matrix models, this means that fluctuation
modes on such backgrounds only propagate along the symplectic leaves,
so that the resulting gauge theory is lower-dimensional.
This happens on any superficially odd-dimensional quantum space, or
e.g. on $\kappa$ Minkowski space \cite{Lukierski:1993wx} in dimensions larger than 2.
\section{Examples}
\label{sec:examples}
\subsection{The fuzzy sphere}
\label{sec:fuzzy-S2}
The fuzzy sphere
$S^2_{N}$ \cite{hoppe1982QuaTheMasRelSurTwoBouStaPro,Madore:1991bw} is a
quantum space defined in terms of three $N \times N$ hermitian matrices
\begin{align}
X^a = \frac{1}{\sqrt{C_N}}\, J^a_{(N)}, \qquad \ a=1,2,3
\label{fuzzy-S2def}
\end{align}
where $J^a_{(N)}$ are the generators of the $N$-dimensional irrep
of $\mathfrak{su}(2)$ on $\cH = \C^N$, and
$C_N= \frac 14(N^2-1)$ is the value of the quadratic Casimir.
They satisfy the relations
\begin{equation}
[ X^{{a}}, X^{{b}} ] = \frac{i}{\sqrt{C_N}}\varepsilon^{abc}\, X^{{c}}~ ,
\qquad \sum_{{a}=1}^{3} X^{{a}} X^{{a}} = \mbox{1 \kern-.59em {\rm l}}
\end{equation}
choosing the normalization \eq{fuzzy-S2def} such that the radius is one.
The displacement Hamiltonian is
\begin{equation}
H_x=\frac 12\sum_{a=1}^{3}\left(X^{a}-x^{a}\right)^{2}
=\frac 12(\mbox{1 \kern-.59em {\rm l}}+|x|^2) -\sum_{a=1}^{3}x^{a}X^{a}
\label{Hx-S2-fuzzy}
\end{equation}
where $|x|^2 = \sum_a x_a^2$.
Using $SO(3)$ invariance, it suffices
to consider the north pole $x=(0,0,x^{3})=:n$
where
\begin{equation}
H_x =\frac 12 (\mathds{1}+|x|^{2}) - |x|\,X^{3}
\label{eq:rotated_lapl}
\end{equation}
assuming $x^{3}>0$ to be specific.
Hence the ground state of $H_x$ is given by the highest weight vector
$|n\rangle := \ket{\frac{N-1}{2},\frac{N-1}{2}}$ of the $\mathfrak{su}(2)$ irrep $\cH$,
and the eigenvalue is easily found to be \cite{Schneiderbauer:2016wub}
\begin{equation}
\lambda(x)=\frac 12(1+|x|^{2})-|x|\sqrt{\frac{N-1}{N+1}} \ .
\end{equation}
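This is easily confirmed numerically (our own illustration, reusing the helper functions from
the sketches above); the minimum of $\lambda$ along the radial direction is attained at
$|x| = \sqrt{\frac{N-1}{N+1}}$, as discussed below:
\begin{verbatim}
# Check of lambda(x) along the north pole axis of the fuzzy sphere.
import numpy as np

N = 7
X = [J / np.sqrt((N**2 - 1) / 4) for J in su2_generators(N)]
r = np.linspace(0.1, 2.0, 400)
lam = np.array([quasi_coherent(X, [0, 0, ri])[0] for ri in r])
pred = 0.5 * (1 + r**2) - r * np.sqrt((N - 1) / (N + 1))
print(np.max(np.abs(lam - pred)))    # ~ 1e-14: formula confirmed
print(r[np.argmin(lam)])             # ~ sqrt((N-1)/(N+1)) = 0.866...
\end{verbatim}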
All other quasi-coherent states are obtained by $SO(3)$ acting on $|n\rangle$, hence
the abstract quantum space $\cM$ is given by the group orbit
\begin{align}
\cM = SO(3)\cdot |n\rangle = SO(3)/_{U(1)} \cong S^2\subset \C P^{N-1} \ .
\end{align}
Note that the quasi-coherent states are constant along the radial lines
in agreement with \eq{V-anihil},
\begin{align}
|x\rangle = |\alpha x\rangle\qquad \mbox{for} \quad \alpha > 0 \ .
\end{align}
The equivalence classes $\cN$ consist of the radial lines emanating
from the origin, and the would-be symplectic form $\omega_{ab}$ and the quantum metric $g_{ab}$ vanish if
any one component is radial.
The minima of $\lambda(x)$ on $\cN_x$ describe a sphere with radius
$|x_{0}|=\sqrt{\frac{N-1}{N+1}}=1+\mathcal{O}(\frac{1}{N})$.
This coincides precisely with the embedded quantum space \eq{embedded-M-def}
\begin{equation}
\tilde\cM = \{\langle x|X^a|x\rangle\}
= \big\{x\in\mathds{R}^{3}:\,|x|=\sqrt{\frac{N-1}{N+1}} \big\} \cong S^2 \
\end{equation}
defined by the expectation value ${\bf x^a}$ \eq{M-embedding-symbol}, in accordance with \eq{X-X0-proj}.
At the singular set $\cK=\{0\}$ the Hamiltonian is
$H_0 = \frac 12 \mbox{1 \kern-.59em {\rm l}}$, so that all energy levels become degenerate and cross. Following
$|x\rangle$ along the radial direction through the origin, it
turns into the highest energy level.
It is easy to see that the would-be symplectic form $\omega$ is the unique $SO(3)$-invariant 2-form on $\cM$
which satisfies the quantization condition \eq{quant-cond-S2} with $n=N$.
Moreover, the abstract quantum space $\cM\cong S^2\subset\C P^{N-1}$
is a quantum K\"ahler manifold, since the complex tangent space \eq{tangent-space-C} is
one-dimensional, spanned by
\begin{align}
T_{n,\C}\cM = \big\langle J^-|n\rangle \big\rangle_\C \
\end{align}
(at $|n\rangle\in\cM$).
This holds because $|n\rangle$ is the highest weight state, so that
\begin{align}
J^+|n\rangle = 0 \ ;
\end{align}
therefore the two tangent vectors $\cX^1|n\rangle, \cX^2|n\rangle \in T_{n}\cM$ \eq{tangent-space-R}
are related by $i$, while $\cX^3|n\rangle$ vanishes at $n$.
Indeed, it is well-known that the coherent states on $S^2_N$ form a Riemann sphere,
and the (quasi-) coherent states coincide with the coherent states
introduced in \cite{Perelomov:1986tf}.
All this holds for any $N\geq 2$. The coherence length is of order
\begin{align}
L_{\rm coh} \approx L_{NC} \sim \frac{1}{\sqrt{N}} \
\end{align}
in the given normalization. Hence
for sufficiently large $N$, the almost-local operators
comprise all polynomials in $X^a$ up to order $O(\sqrt{N})$
(depending on some specific bound), so that $S^2_N$
is an almost-local quantum space.
In contrast, for the {\bf minimal fuzzy sphere} $S^2_2$
with $N=2$, the generators reduce to the rescaled Pauli matrices $X^a = \frac{1}{\sqrt 3}\,\sigma^a$,
and the (quasi-)coherent states form the well-known Bloch sphere
$\cM = S^2\cong\C P^1$.
This is still a quantum K\"ahler manifold even though the semi-classical regime
is trivial, with $Loc(\cH) = \C\,\mbox{1 \kern-.59em {\rm l}}$ containing only the constant functions,
since the coherence length is of the same order
as the entire space $\cM$.
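These statements are easily verified numerically. The following minimal sketch (in Python; the matrix size $N=8$ and the test radii are arbitrary choices, and the normalization $\sum_a (X^a)^2 = \mbox{1 \kern-.59em {\rm l}}$ is the one used above) diagonalizes the displacement Hamiltonian at $x = (0,0,r)$ and reproduces $\lambda(x)$:
\begin{verbatim}
import numpy as np

def su2(N):
    # spin s = (N-1)/2 generators on C^N, basis |s,s>, ..., |s,-s>
    s = (N - 1) / 2.0
    m = s - np.arange(N)
    Jp = np.zeros((N, N))
    for k in range(N - 1):        # <m[k]| J+ |m[k+1]>
        Jp[k, k+1] = np.sqrt(s*(s+1) - m[k+1]*(m[k+1] + 1))
    return 0.5*(Jp + Jp.T), -0.5j*(Jp - Jp.T), np.diag(m)

N = 8
c = 2.0 / np.sqrt(N**2 - 1.0)     # then sum_a (X^a)^2 = 1
X = [c * J for J in su2(N)]

def H(x):                         # displacement Hamiltonian H_x
    return sum(0.5*(Xa - xa*np.eye(N)) @ (Xa - xa*np.eye(N))
               for Xa, xa in zip(X, x))

for r in [0.5, np.sqrt((N - 1.0)/(N + 1.0)), 1.5]:
    lam = np.linalg.eigvalsh(H([0.0, 0.0, r]))[0]
    print(lam, 0.5*(1 + r**2) - r*np.sqrt((N - 1.0)/(N + 1.0)))
\end{verbatim}
The two printed numbers agree, and the minimum over $r$ is attained at $r = \sqrt{\frac{N-1}{N+1}}$, the radius of the embedded quantum space $\tilde\cM$.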
\subsection{Quantized coadjoint orbits for compact semi-simple Lie groups}
\label{sec:QCO}
The above construction generalizes naturally to quantized coadjoint orbits for
any compact semi-simple Lie group $G$ with Lie algebra $\mathfrak{g}$.
For any irreducible representation $\cH_\Lambda$
with highest weight $\Lambda = (n_1,...,n_k)$ labeled by Dynkin indices $n_j$,
the matrix configuration
\begin{align}
X^a = c\, T^a, \qquad a=1,...,D
\label{qco-config}
\end{align}
defines a quantum K\"ahler manifold $\cM \cong G/K$.
Here $T^a$ are orthogonal generators of $\mathfrak{g} \cong \mathds{R}^D$ acting on $\cH_\Lambda$,
$K$ is the stability group of the highest weight $\Lambda$, and
$c$ is some normalization constant.
Then the displacement Hamiltonian is
\begin{align}
H_x = C^2(\mathfrak{g}) + \frac 12 x_a x^a - x_a T^a\
\end{align}
where $C^2(\mathfrak{g}) \propto \mbox{1 \kern-.59em {\rm l}}$ is the quadratic Casimir.
Using $G$-invariance, we can assume that $x$ lies in
(the dual of) the Cartan subalgebra and is dominant. Then $|x\rangle = |\Lambda\rangle$
is the highest weight state, so that
the quasi-coherent states are the group orbit
$\cM = G\cdot |\Lambda\rangle \cong G/K$
of the highest weight state with stabilizer $K$. This is a quantum K\"ahler manifold
due to the highest weight property,
and the quantum metric $g_{ab}$ \eq{g-def} and the
symplectic form $\omega$ \eq{omega-def}
are the canonical group-invariant structures on the K\"ahler manifold $\cM$.
For large Dynkin indices $n_j\geq n\gg 1$,
the almost-local operators
comprise all polynomials in $X^a$ up to some order $O(\sqrt{n})$,
so that $\cM$ is an almost-local quantum space.
This is essentially the well-known story of quantized
coadjoint orbits, and the (quasi-) coherent states coincide with the coherent states
introduced in \cite{Perelomov:1986tf}, cf. \cite{Grosse:1993uq}. Perhaps less known is the fact that
if some of the $n_j$ are small,
$\cM$ can be viewed as ``oxidation'' of some lower-dimensional brane,
more precisely as a bundle over $\cM_0$ whose fiber is
very ``fuzzy''. For an application of such a structure
see e.g. section 4.2 in \cite{Sperling:2018hys}.
This construction generalizes further to highest weight (discrete series)
unitary irreducible representations of non-compact semi-simple Lie groups.
A particularly interesting example is given by the ``short'' series of unitary irreps of $SO(4,2)$
known as singletons, which lead to the fuzzy 4-hyperboloids $H^4_n$ discussed below, and
to quantum spaces which can be viewed as cosmological space-time \cite{Sperling:2019xar}.
\paragraph{(Minimal) fuzzy $\C P^{N-1}_N$.}
As an example we consider minimal fuzzy $\C P^{N-1}_N$,
which is obtained using the above general construction for
$G=SU(N)$ and its fundamental representation $\cH = (1,0,...,0)$, so that
$G/K \cong \C P^{N-1}$.
This is the quantum K\"ahler manifold obtained from the matrix configuration
\begin{align}
X^a = \lambda^a \quad \in End(\cH), \qquad \cH = \C^N
\end{align}
for $a=1,...,N^2-1$,
where the $\lambda^a$ form an orthonormal (Gell-Mann) basis of $\mathfrak{su}(N)$
in the fundamental representation.
Then
$End(\cH)\cong (0,...,0) \oplus (1,0,...,0,1)$ can be viewed as a minimal
quantization of functions on $\C P^{N-1}$.
The quantization map
\begin{align}
\cQ(\phi) = \int_{\C P^{N-1}}|x\rangle\langle x| \phi(x)
\end{align}
is then the partial
inverse of the symbol map, apart from the constant function:
\begin{align}
\cQ(\langle x|\Phi|x\rangle) = c \Phi\qquad \mbox{if} \ \ Tr(\Phi) = 0 \
\end{align}
for some $c>0$.
Near $|\Lambda\rangle$,
the quasi-coherent states $|x\rangle$ can be organized as holomorphic sections
\begin{align}
\|z\rangle = \exp(z^k T^+_k)|\Lambda\rangle \ ,
\end{align}
where
the $T^+_k, \ k=1,...,N-1$ are the raising operators of a Chevalley basis of
$\mathfrak{su}(N)$. Hence fuzzy $\C P^{N-1}_N$ is a quantum K\"ahler manifold
which coincides with $\C P^{N-1}$, with
K\"ahler form
\begin{align}
\omega_{\bar k l} = \frac{\partial}{\partial \bar z^k}\langle z\|\frac{\partial}{\partial z^l}\|z\rangle \ .
\end{align}
\paragraph{Squashed $\C P^2_N$.}
Further quantum spaces can be obtained by projections of
quantized coadjoint orbits. For example, starting from
fuzzy $\C P^2_N$ with $\cH = (N,0)$, consider the following
matrix configuration
\begin{align}
X^a = T^a, \qquad a=1,2,4,5,6,7
\end{align}
dropping the Cartan generators $T_3$ and $T_8$ from the (Gell-Mann) basis
of $\mathfrak{su}(3)$. Then the displacement Hamiltonian can be written
as
\begin{align}
H_x = \bar H_x - \frac 12(X_3-x_3)^2 - \frac 12(X_8-x_8)^2
\label{disp-H-squashed}
\end{align}
where $\bar H_x$ is the displacement Hamiltonian for $\C P_N^2$.
Although the quasi-coherent states $|x\rangle$ are not known in this case,
they are close to those of $\C P^2_N$ in the large $N$ limit,
cf. \cite{Schneiderbauer:2016wub}.
Indeed, the last two terms in \eq{disp-H-squashed} are then small, and
$0 < \lambda(x) \leq \bar \lambda(x)$, so that $\bar\lambda(x)$ provides an upper bound for $\lambda$.
This implies that the displacement is small, and
\begin{align}
\cM \approx \C P^2 \subset \C P^{N(N+3)/2} \ .
\end{align}
Again, the concept of
the abstract quantum space is superior to the notion of an embedded brane, which is a complicated self-intersecting variety in $\mathds{R}^6$
related to the Roman surface \cite{Steinacker:2014lma}.
\subsection{Degenerate cases}
\label{sec:degenerate}
\paragraph{The fuzzy 4-sphere $S^4_N$.}
Now consider again the quantized coadjoint orbit of $SU(4)\cong SO(6)$
acting on the highest weight irrep $\cH_\Lambda$ with $\Lambda=(N,0,0)$. We have just seen
that the matrix configuration using
all $\mathfrak{so}(6)$ generators $\cM^{ab} = - \cM^{ba}$ as in \eq{qco-config}
would give fuzzy
$\C P^3_N$, with coherent states obtained by acting on the highest weight
state $|\Lambda\rangle$. Now instead of using all $\cM^{ab}$,
consider the matrix configuration defined by the following 5 hermitian matrices
\begin{align}
X^a = \cM^{a6} \qquad \in End(\cH_\Lambda), \qquad a = 1,...,5 \ .
\end{align}
Using $SO(5)$ invariance, it suffices to consider the
displacement Hamiltonian at $x=(0,0,0,0,x_5)$,
\begin{align}
H_x = \frac 12\sum_{i=1}^4 X_i^2 + \frac 12 (X_5 - x_5)^2
= \frac 12 (R^2 + x_5^2) \mbox{1 \kern-.59em {\rm l}} - x_5 X^5
\end{align}
since $ \sum_a X_a^2 = R^2\mbox{1 \kern-.59em {\rm l}}$ for $R^2 = \frac 14 N(N+4)$, cf. \cite{Grosse:1996mz,Medina:2012cs}.
Now $|\Lambda\rangle$ is by construction an eigenstate of $X^5$
which commutes with $SO(4)$, with maximal eigenvalue.
Therefore the lowest eigenspace $E_x$ of $H_x$ is spanned by the
orbit $SO(4)\cdot |\Lambda\rangle \cong S^2$, which spans
an $(N+1)$-dimensional complex vector space. This provides an example of
a degenerate quantum space. The abstract quantum space
$\cM$ is obtained by acting with $SO(5)$ on this $S^2$, which is easily
seen to recover
\begin{align}
\cM \cong \C P^3 \ \subset \C P^{\dim\cH -1}
\end{align}
which is an equivariant $S^2$ bundle over $S^4$.
The $E_x$ naturally form an $SU(N+1)$ bundle $\cB$ over $S^4$,
and $\omega$ is replaced by an $SU(N+1)$ connection.
Again the concept of an abstract quantum space greatly helps
to understand the structure, as it resolves the degeneracy of the
quasi-coherent states. Moreover $\cM$ is clearly a K\"ahler manifold,
and theorem \ref{thm:diag-coh} holds.
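The degeneracy can also be made explicit, assuming the standard realization $\cH_\Lambda \cong {\rm Sym}^N(\C^4)$ for $\Lambda = (N,0,0)$: on the fundamental representation, $X^5 = \cM^{56}$ splits
\begin{align}
\C^4 = \C^2_+ \oplus \C^2_- \ , \qquad X^5\big|_{\C^2_\pm} = \pm\frac 12 \ ,
\end{align}
so that the top eigenspace of $X^5$ in ${\rm Sym}^N(\C^4)$ is ${\rm Sym}^N(\C^2_+) \cong \C^{N+1}$. This is precisely the span of the $SO(4)$ orbit of $|\Lambda\rangle$, i.e. $E_x \cong \C^{N+1}$ for $x = (0,0,0,0,x_5)$ with $x_5>0$.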
\paragraph{The fuzzy 4-hyperboloid $H^4_n$.}
Using an analogous construction for $SO(4,2)$ and its singleton
irreps $\cH_n$ labeled by $n\in\N$, one obtains
fuzzy $H^4_n$ \cite{Sperling:2018xrm,Hasebe:2012mz}. The corresponding matrix
configuration is given by the following 5 hermitian operators
\begin{align}
X^a = \cM^{a5} \qquad \in End(\cH_n), \qquad a = 0,...,4 \ .
\end{align}
However, it is more appropriate here to define the displacement Hamiltonian
using $\eta_{ab}$, so that $SO(4,1)$ is preserved. Then
we can assume that $x = (x_0,0,0,0,0)$, so that
\begin{align}
H_x = \frac 12\sum_{i=1}^4 X_i^2 - \frac 12 (X_0 - x_0)^2
= \frac 12 (R^2 - x_0^2) \mbox{1 \kern-.59em {\rm l}} + x_0 X^0 \ .
\end{align}
Then the resulting quasi-coherent states form an abstract quantum space
$\cM \cong\C P^{1,2}$, which is an $S^2$ bundle over $H^4$.
It is a K\"ahler manifold, and theorem \ref{thm:diag-coh}
still holds in a weaker sense \cite{Sperling:2018xrm}.
This in turn is the basis of the cosmological space-time solution
$\cM^{3,1}_n$ with an effective metric of FLRW type, as
discussed in \cite{Sperling:2019xar,Steinacker:2019awe}.
\paragraph{Minimal fuzzy $H^4_0$.}
A particularly interesting example
is obtained from $H^4_n$ for $n=0$, which is not a
quantized coadjoint orbit and not even symplectic.
In that case $E_x$ is one-dimensional, and one can check that
$\langle x|\partial_a|x\rangle = 0 = i A_a$ and
$\langle x|[X^a,X^b]|x\rangle = 0$. Therefore
the would-be symplectic form $\omega$ vanishes. The abstract quantum space
is then
\begin{align}
\cM = H^4
\end{align}
but it carries a trivial line bundle $\tilde\cB$. It still satisfies the quantum K\"ahler\footnote{Note that $\dim\cH = \infty$ here,
so that we cannot conclude that $\cM$ is K\"ahler in the usual sense.}
condition \eq{Kahler-cond} and theorem \ref{thm:diag-coh}
should hold (using the $SO(4,1)$-invariant integral) in a weaker sense.
However this is not an almost-local quantum space,
and there is no semi-classical regime.
\subsection{The minimal fuzzy torus}
\label{sec:min-fuzzy-T2}
The minimal fuzzy torus $T^2_2$ turns out
to be a quantum manifold which is not K\"ahler, and not even symplectic.
It is defined in terms of
\begin{align}
U = \begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} = X_1 + i X_2, \qquad
V = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix} = X_3 + i X_4
\end{align}
which defines 4 hermitian matrices $X_i = X_i^\dagger \in End(\C^2)$.
Noting that $[U,U^\dagger] = 0 = [V,V^\dagger]$ and
\begin{align}
(U-z)(U-z)^\dagger
&= \begin{pmatrix}
1 + |z|^2 & -z - z^* \\
-z - z^* & 1 + |z|^2
\end{pmatrix} \nonumber\\
(V-w)(V-w)^\dagger
&=\begin{pmatrix}
|1-w|^2 & 0 \\
0 & |1+w|^2
\end{pmatrix}
\end{align}
where $z = x_1 + i x_2$ and $w = x_3 + i x_4$,
the displacement Hamiltonian is
\begin{align}
H_y &= \sum(X_i - x_i)^2
= \begin{pmatrix}
1 + |z|^2 + |1-w|^2 & -z - z^* \\
-z - z^* & 1 + |z|^2 + |1+w|^2
\end{pmatrix} \ .
\end{align}
The lowest eigenvalue is
\begin{align}
\lambda = 2+|z|^2+|w|^2 - \sqrt{|z+z^*|^2 + |w+w^*|^2}
\end{align}
and the corresponding quasi-coherent states are
\begin{align}
|x\rangle \propto \begin{pmatrix}
\sqrt{|z+z^*|^2 + |w+w^*|^2} +w^*+w \\
z^*+z
\end{pmatrix} \quad \in \mathds{R}_+ \times \mathds{R} \subset \C^2 \ .
\label{state-T2-2}
\end{align}
These clearly depend only on the real parts of $z, w$, and
the normalized states
describe a half circle in the upper half plane.
However
the two endpoints of this half-circle corresponding to $(z=1,w=-\infty)$ and $(z=-1,w=-\infty)$
describe the same state $|x\rangle = \begin{pmatrix}
0 \\ \pm 1
\end{pmatrix}$, and should hence
be identified.
Thus $\cM = S^1$, which
is clearly not a K\"ahler manifold and not even symplectic.
Now consider the equivalence classes $\sim$ \eq{cM-equivalence-def}
on $\mathds{R}^4 \cong \C^2$.
All points $(z,w)\sim (z',w')\in\C^2$ with the same real parts
are identified,
and also all real $(z,w) \sim r(z,w) \in \mathds{R}^2$ for $r>0$.
Among these, $\lambda$ assumes its minimum $\lambda=1$ for real $(z,w) = (x,y) \in S^1 \subset \C^2$,
so that again\footnote{
It may seem that the state
corresponding to the point $(z=0,w=-1)$ vanishes, but this is just
an artefact of the improper normalization. It is easy to see that
in that case $H_y$ has indeed an eigenstate $(0,1)$ for $\lambda=1$.}
$\cM \cong \C^2/_\sim \cong S^1$.
Therefore the minimal fuzzy torus $T^2_2$ should really be considered
as a fuzzy circle.
This shows the existence of
``exotic'' quantum spaces which are not quantized symplectic spaces,
and do not have a semi-classical regime.
There are also higher-dimensional spaces of this type, as shown next,
in addition to the above example of minimal $H^4_0$.
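The statements for $T^2_2$ are easily checked with a few lines of code. The following sketch (the test points are arbitrary; recall that $X_2 = X_4 = 0$ for $N=2$, since $U$ and $V$ are hermitian) verifies that $\lambda = 1$ on the unit circle of real $(z,w)$, and that the quasi-coherent state is insensitive to the imaginary parts:
\begin{verbatim}
import numpy as np

U = np.array([[0., 1.], [1., 0.]])        # U = X_1 + i X_2, here X_2 = 0
V = np.array([[1., 0.], [0., -1.]])       # V = X_3 + i X_4, here X_4 = 0
X = [U, np.zeros((2, 2)), V, np.zeros((2, 2))]

def ground(x):
    Hx = sum((Xa - xa*np.eye(2)) @ (Xa - xa*np.eye(2))
             for Xa, xa in zip(X, x))
    w, v = np.linalg.eigh(Hx)
    return w[0], v[:, 0]

lam, psi = ground([0.6, 0.0, 0.8, 0.0])    # real (z,w) on the unit circle
lam2, psi2 = ground([0.6, 1.3, 0.8, -0.5]) # same real parts, Im shifted
print(lam)                                 # 1.0, the minimum of lambda
print(abs(psi @ psi2))                     # 1.0: the same quasi-coherent state
\end{verbatim}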
\paragraph{Non-K\"ahler quantum space from $T_2^2 \times T_2^2$.}
Now consider the Cartesian product $T^2_2 \times T^2_2$,
realized through $8$ hermitian matrices $X^a_{(1)}, X^a_{(2)}$ acting on $\C^4 = \C^2 \otimes \C^2$.
All eigenstates of $H_x = H_x^{(1)} + H_x^{(2)}$ are products of
eigenstates of the two $T^2_2$ factors, so that the ground states or quasi-coherent states \eq{state-T2-2} are given by
\begin{align}
|x_{(1)},x_{(2)}\rangle = |x_{(1)}\rangle \otimes|x_{(2)}\rangle
\end{align}
over $\mathds{R}^8$. They are again degenerate, and inequivalent states are parametrized by
$(x_{(1)},x_{(2)}) \in S^1 \times S^1$.
Hence the abstract quantum space is a torus $\cM \cong S^1 \times S^1$.
The quantum tangent space is spanned by two vectors
\begin{align}
T_\xi\cM = \Big\langle (\partial_1 |x_{(1)}\rangle) \otimes|x_{(2)}\rangle,
|x_{(1)}\rangle \otimes (\partial_2|x_{(2)}\rangle) \Big\rangle \ \cong \ \mathds{R}^2
\end{align}
which are linearly independent from the two complexified vectors
$i(\partial_1 |x_{(1)}\rangle) \otimes|x_{(2)}\rangle$ and $ i|x_{(1)}\rangle \otimes (\partial_2|x_{(2)}\rangle)$.
Therefore $T_{\xi,\C}\cM \cong \C^2 \cong \mathds{R}^4$, and
$\cM$ is not a quantum K\"ahler manifold.
\subsection{The Moyal-Weyl quantum plane}
\label{sec:Moyal-Weyl}
The Moyal-Weyl quantum plane is obtained for $X_1 = X$ and $X_2 = Y$
with $[X,Y] = i\mbox{1 \kern-.59em {\rm l}}$. Then $\dim\cH=\infty$,
but all considerations can be carried over easily.
The displacement Hamiltonian
\begin{align}
2 H_x = (X-x)^2 + (Y-y)^2
\end{align}
is nothing but the shifted harmonic oscillator, with ground state
\begin{align}
H_z |z\rangle = \frac 12 |z\rangle \
\end{align}
given by the standard coherent states
\begin{align}
|z\rangle &= U(z)|0\rangle, \qquad z = \frac{1}{\sqrt{2}}(x+iy)
\end{align}
using the identification of $\mathds{R}^2 \cong\C$. The translation operator is given
as usual by
\begin{align}
U(z) &= \exp(i(yX - xY)) = \exp(z a^\dagger - \bar z a) , \nonumber\\
a &= \frac{1}{\sqrt{2}}(X+iY), \qquad a^\dagger = \frac{1}{\sqrt{2}}(X-iY) \ .
\end{align}
$|0\rangle$ is the ground state of the harmonic oscillator $a|0\rangle = 0$,
and more generally
\begin{align}
(a-z)|z\rangle &= 0
\end{align}
implies
\begin{align}
\langle z |(X+iY)|z\rangle &= x+iy \ .
\end{align}
The derivatives \eq{del-nabla-X-general} are found to be
\begin{align}
(\partial_x -iA_1) |z\rangle &= -i(Y - y) |z\rangle = \cX_1|z\rangle \nonumber\\
(\partial_y-iA_2) |z\rangle &= i(X - x) |z\rangle = \cX_2|z\rangle
\label{transl-Moyal}
\end{align}
where the second expressions arise from \eq{del-nabla-X-general-2},
which are given explicitly by
\begin{align}
\cX_1 &= \big(H_z-\frac 12\big)^{'-1}(X-x) \nonumber\\
\cX_2 &= \big(H_z-\frac 12\big)^{'-1}(Y-y) \ .
\end{align}
The $U(1)$ connection is found to be
\begin{align}
iA_1 = \langle z| \partial_x |z\rangle &= -i \langle z| (Y-\frac 12 y) |z\rangle = -\frac i2 y \nonumber\\
iA_2 = \langle z| \partial_y |z\rangle &= i \langle z| (X - \frac 12 x) |z\rangle = \frac i2 x
\end{align}
with field strength
\begin{align}
F_{12} = \partial_1 A_2 - \partial_2 A_1 = 1 \ .
\end{align}
Therefore \eq{parallel-transport-states} becomes
\begin{align}
|z\rangle = P\exp\Big(\int_0^{z} (\cX_1-iy) dx + (\cX_2+ix) dy\Big)|0\rangle \ .
\end{align}
$\cM\cong\C$ satisfies the quantum K\"ahler condition due to
the constraint $(X+iY)|0\rangle = 0$, which states that
$iY|0\rangle = - X|0\rangle$, so that the complex tangent space
$T_{0,\C}\cM = T_0\cM$ coincides with the real one.
The holomorphic coherent states
are given by
\begin{align}
\|z\rangle = e^{z a^\dagger}|0\rangle
= e^{z a^\dagger}e^{-\bar z a}|0\rangle
= e^{\frac 12|z|^2} |z\rangle \ .
\end{align}
They cannot be normalized, since
the map $z\mapsto\langle w\|z\rangle$ must be holomorphic and hence
unbounded. Thus $\|z\rangle$ should be viewed as holomorphic section of
the line bundle $\tilde\cB$.
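These relations can be checked numerically in a truncated Fock basis. The following sketch (the truncation $M = 60$ and the displacement $(x,y)$ are arbitrary choices; truncation errors are negligible for moderate $|z|$) verifies $\langle z|(X+iY)|z\rangle = x+iy$ and $H_z|z\rangle = \frac 12|z\rangle$ in expectation:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

M = 60                                        # Fock space truncation
a = np.diag(np.sqrt(np.arange(1.0, M)), k=1)  # annihilation operator
ad = a.T
X = (a + ad)/np.sqrt(2)
Y = (a - ad)/(1j*np.sqrt(2))

x, y = 0.7, -0.4
z = (x + 1j*y)/np.sqrt(2)
e0 = np.zeros(M); e0[0] = 1.0
psi = expm(z*ad - np.conj(z)*a) @ e0          # |z> = U(z)|0>

print(psi.conj() @ (X + 1j*Y) @ psi)          # ~ x + i y
Hz = 0.5*((X - x*np.eye(M)) @ (X - x*np.eye(M))
          + (Y - y*np.eye(M)) @ (Y - y*np.eye(M)))
print((psi.conj() @ Hz @ psi).real)           # ~ 1/2
\end{verbatim}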
\subsection{Commutative quantum spaces}
\label{sec:comm}
In the infinite-dimensional case, one can also consider
matrix configurations associated to commutative manifolds.
The simplest example is the circle $S^1$, which arises from the single
operator
\begin{align}
X = -i\partial_\varphi
\end{align}
acting on $\cC^\infty(S^1) \subset L^2(S^1) = \cH$.
The displacement Hamiltonian is
\begin{align}
H_x = \frac 12 (-i\partial_\varphi - x)^2 , \qquad x \in \mathds{R} \ .
\end{align}
The quasi-coherent states for $x=n\in{\mathds{Z}}$ are clearly
\begin{align}
|n\rangle = e^{i n \varphi}, \qquad H_n |n\rangle = 0 , \qquad n\in{\mathds{Z}}
\label{quasicoh-del}
\end{align}
so that $\lambda({\mathds{Z}})=0$. For any $x\not\in{\mathds{Z}}$,
all solutions of the eigenvalue equation $H_x|\psi\rangle = E|\psi\rangle$
are given by the above states $|n\rangle$, with eigenvalues
\begin{align}
H_x |n\rangle = \frac 12 (-i\partial_\varphi - x)^2 e^{i n \varphi}
= \frac 12 (n - x)^2 e^{i n \varphi} \ .
\end{align}
Therefore
\begin{align}
|x\rangle = |n\rangle , \qquad |n-x| < \frac 12 , \ \ n\in{\mathds{Z}}
\end{align}
while for $x\in{\mathds{Z}}+\frac 12$ the space $E_x$ is two-dimensional, containing
both states $|x\pm\frac 12\rangle$.
Thus the abstract quantum space is the discrete lattice
\begin{align}
\cM = {\mathds{Z}} \subset \mathds{R}
\end{align}
and the quantum tangent space vanishes.
This can be generalized to the higher-dimensional
commutative torus $T^n$ with commutative and reducible
matrix configuration $X_\mu = -i\partial_\mu$,
which also leads to a discrete quantum space
without further structure.
Thus classical manifolds are not well captured in the
present framework. This can of course be treated by adding extra structure
as in \cite{connes1995noncommutative}, but such a description is not well suited for
Yang-Mills matrix models.
\paragraph{Acknowledgments}
I would like to thank Bernhard Lamel and Thorsten Schimannek for useful discussions and pointing me to the appropriate literature.
Related collaborations and discussions with
Timon Gutleb, Joanna Karczmarek, Jun Nishimura,
Lukas Schneiderbauer and Juraj Tekel are gratefully acknowledged.
Finally, John Madore's role in shaping the underlying ideas is greatly appreciated.
This work was supported by the Austrian Science Fund (FWF) grant P32086.
\section{Conclusion}
A general framework for quantum geometry was developed,
based on general matrix configurations given in terms of $D$ hermitian
matrices $X^a$.
We have seen that a remarkably rich array of structures can be extracted from
such a matrix configuration, which provide a semi-classical picture and geometric insights. Quasi-coherent states are an optimal set of states where
the matrices are simultaneously ``almost-diagonal''.
They form an abstract quantum space
$\cM\subset\C P^{N-1}$, which
makes it possible to use geometric tools and even complex analysis.
A class of almost-local operators $Loc(\cH)$ is characterized,
which can be understood as quantized functions on $\cM$ in some IR regime.
Moreover, a natural sub-class of matrix configurations is
identified as quantum K\"ahler manifolds.
Although the present analysis is restricted to the case of
finite-dimensional matrices, the concepts
generalize to the case of self-adjoint operators on
separable Hilbert spaces.
This is illustrated for the Moyal-Weyl quantum plane
and for the fuzzy hyperboloid. In these cases,
the framework exhibits the
finite number of degrees of freedom per unit volume, as well as the stringy nature in the deep quantum regime. It should also be useful
to better understand other quantum spaces such as $\kappa$ Minkowski space \cite{Lukierski:1993wx}, and to resolve a hidden internal structure
in other spaces such as \cite{Fiore:2019rgy}
and in compact quantum spaces with infinite-dimensional $\cH$.
This framework for quantum geometry is
particularly suited for Yang-Mills-type matrix models.
Their description in terms of quantized symplectic spaces
is now understood to be generic, rather than just an ad-hoc choice.
This vindicates describing the low-energy regime of such matrix models
via noncommutative field theory on the embedded
quantum space or brane $\tilde\cM$, leading to dynamical emergent geometry
and possibly gravity, cf. \cite{Steinacker:2010rh,Steinacker:2019awe}. However,
it is important to keep in mind that the semi-classical picture
breaks down in the UV or deep quantum regime,
where non-local string states become dominant.
These are naturally interpreted as open strings on the brane $\tilde\cM$.
In particular, the new insights on the structure of $\cM$
should be very useful to interpret the results of numerical simulations
of Yang-Mills matrix models
\cite{Nishimura:2019qal,Aoki:2019tby,Kim:2011cr,Anagnostopoulos:2020xai}. By definition,
the quasi-coherent states
provide an optimal basis where the matrices are ``almost-diagonal'',
which should improve upon simpler approaches based on
block-matrices. They can be obtained numerically
along the lines proposed in \cite{Schneiderbauer:2016wub,lukas_schneiderbauer_2016_45045}, which can now be
refined, notably using the abstract
point of view as $\cM\subset\C P^{N-1}$. It should then be
easier to disentangle the underlying geometry
from the random noise.
The framework should also be useful
for analytical
computations in the context of noncommutative field theory.
Given the natural role of quantum K\"ahler manifolds in this setting,
one may hope that quantum K\"ahler manifolds
play a special and preferred role not only from an analytical point of view,
but also as preferred solutions or configurations in a matrix ``path integral''. For example, loop integrals analogous to \eq{complete-simple} can be formulated in terms of
the completeness relation for string states
\cite{Steinacker:2016nsc}.
In particular, one may hope that some sort of non-renormalization statement
can be made on such spaces.
Finally, it would be desirable to improve some of the technical
results in this paper, notably related to the completeness relation and the
regularity of $\cM$.
In particular, one would like to know to what extent the results on
quantum K\"ahler manifolds can be generalized to generic quantum manifolds
with symplectic structure and a metric.
It would also be interesting to develop an analogous approach
based on the
matrix Dirac operator as sketched in section \ref{sec:remarks},
and to relate it to the present approach.
All these are interesting directions for future work.
|
2,869,038,155,875 | arxiv | \section{Introduction}
After the first detections of planets outside our solar system
\citep{1992Natur.355..145W,1995Natur.378..355M}, an intensive search
with various methods began \citep[see][for an
overview]{2002EuRv...10..185S} resulting in currently more than 200
planets (http://exoplanet.eu/). Most of these exoplanet detections
have been performed via the radial velocity (RV) method where the
``wobble'' of the parent star caused by the planet is measured by
spectral line shifts. Since these effects are very small for low-mass
planets in orbits of tens to hundreds of days, the determination of
orbital period, phase, inclination, eccentricity, and RV amplitude
demands RV accuracies of a few meters per second
\citep{2000ApJ...536L..43M}.
Meanwhile, alternative methods for planet detections have also been
successfully applied. The first four microlensing planets have been
detected
\citep{2004ApJ...606L.155B,2005ApJ...628L.109U,2006Natur.439..437B,2006ApJ...644L..37G},
possible first direct images of extra-solar planets were published
\citep{2004A&A...425L..29C,2005A&A...438L..25C,2005A&A...438L..29C,2005A&A...435L..13N,2006ApJ...641L.141B},
and the number of detections due to transit searches is steadily
increasing
\citep{2006ApJ...648.1228M,2006ApJ...acc..B,2006ApJ...651L..61O,2007MNRAS...acc..C}.
The transit method is of special interest, since it permits the
derivation of additional physical parameters of the planet, e.~g. the
radius can be measured either indirectly via the radius of the host
star or directly via detection of the secondary eclipse as observed
with the Spitzer Space Telescope
\citep{2005ApJ...626..523C,2005Natur.434..740D}. If combined with a
radial velocity variation measurement, the mass and mean density can
be determined, revealing constraints for the planetary
structure. Furthermore, transiting systems allow us to investigate the
atmospheres of the planets
\citep{2002ApJ...568..377C,2004ApJ...604L..69V} as well as the
spin-orbit-alignment between the rotational axis of the host star and
the planetary orbit
\citep{2006ApJ...acc..W,2006ApJ...acc..G,2006ApJ...653L..69W}.
\citet[hereafter DC]{2004ApJ...604..379D} published a list of nine
restrictively selected, transiting planet candidates from the MACHO
project \citep{1992ASPC...34..193A}. Only transit light curves with no
indication of gravitational distortion and only those with clear
U-shaped transit events were considered. De-reddened colours as well
as light curve fitting provide a good estimate of the companion
radius. Only companions below 3 Jupiter radii were selected.
Based on high-resolution spectra, the orbital velocities of five
potential host stars of exoplanet candidates have been measured. We
analysed the RV measurements together with MACHO transit light curves
in order to determine the system parameters complemented by a spectral
analysis.
The paper is organized as follows: In the next section, we briefly
describe the spectroscopic observations and the spectral analysis as
well as the Doppler-measurements. In section 3, we describe the
results of the individual systems and summarise in section 4.
\section{Observations and analyses}
In period 75 we secured three spectra for each of the five brightest
candidates. We used ESO's Fibre-fed Extended Range \'Echelle
Spectrograph (FEROS) mounted on the
\begin{table*}[ht!]
\caption{Orbital elements, rotation velocities, and stellar
parameters for all five analysed systems. Components {\it c} and {\it
d} of MACHO 118.18272.189 and component {\it b} of MACHO
120.22041.3265 are not visible in the spectra. $P_{\rm MACHO}$ is
taken from \citet{2004ApJ...604..379D} and $P$ denotes the period
derived using the light curves and RV measurements. $K$ is the
semi-amplitude of the RV variations, $V_0$ the system velocity, and
$i$ the orbital inclination. In case of systems with elliptical
orbits, $e$ is the eccentricity and $\omega$ the longitude of the
periastron. Furthermore, the mass $M^{\rm RV}$ is given in cases where
the RV amplitude of two components is known. Then $T_{\rm eff}^{\rm
RV}$ and $R$ are calculated for zero- and terminal-age main sequence
models. $T_{\rm eff}^{\rm SA}$ is the effective temperature derived
from the spectral analyses. In cases where just $T_{\rm eff}^{\rm SA}$
from the spectral analyses is known, $M^{\rm SA}$ and $R$ are derived
masses and radii from evolution models. All values in this table
relate to the assumption of zero-age main sequence stars.}
\label{table:1}
\centering
\begin{tabular}{l@{\,}l c c r@{\,$\pm$\,}l r@{\,$\pm$\,}l c c c c c c c c}
\hline\hline\noalign{\smallskip}
MACHO ID & & $P$ & $P_{\rm MACHO}$ & \multicolumn{2}{c}{$K$}
& \multicolumn{2}{c}{$V_0$} & $i$ & $e$ & $\omega$ & $M^{\rm RV}$ &
$T_{\rm eff}^{\rm RV}$ & $T_{\rm eff}^{\rm SA}$ & $M^{\rm SA}$ & $R$ \\
& & [days] & [days] & \multicolumn{2}{c}{[km s$^{-1}$]} &
\multicolumn{2}{c}{[km s$^{-1}$]} & [$^\circ$] & & [$^\circ$] &
[$M_\odot$] & [K] & [K] & [$M_\odot$] & [$R_\odot$] \\
\noalign{\smallskip}\hline\noalign{\smallskip}
118.18272.189 & {\it a} & -- & 1.9673 &
\multicolumn{2}{c}{0.00} & $-$25.51&0.03 & -- & -- & -- & -- & -- &
5800 & -- & -- \\
& {\it b} & -- & -- & \multicolumn{2}{c}{0.00} & $+$05.46&0.03 &
-- & -- & -- & -- & -- & 5800 & -- & -- \\
& {\it c} & 3.9346 & -- & \multicolumn{2}{c}{--} &
\multicolumn{2}{c}{--} & (90.0) & -- & -- & 0.41 & 3730 & -- & --
& 0.38 \\
& {\it d} & 3.9346 & -- & \multicolumn{2}{c}{--} &
\multicolumn{2}{c}{--} & & -- & -- & 0.41 & 3730 & -- & -- & 0.38
\\[5pt]
118.18407.57 & {\it a} & 4.7972 & 2.3986 & 78.84&0.10 &
$-$20.48&0.07 & 84.0 & -- & -- & 1.27 & 6430 & 6200 & -- & 1.23 \\
& {\it b} & -- & -- & \multicolumn{2}{c}{0.00} & $-$08.39&0.03 & &
-- & -- & -- & -- & 6600 & -- & -- \\
& {\it c} & 4.7972 & 2.3986 & 89.68&0.09 & $-$20.48&0.06 & & -- & --
& 1.11 & 5980 & 6200 & -- & 1.04 \\[5pt]
118.18793.469 & {\it a} & 4.0744 & 2.0372 & 75.81&0.18 &
$-$56.30&0.11 & 85.6 & 0.041 & 89.94 & 0.90 & 5140 & 5400 & -- &
0.81 \\
& {\it b} & 4.0744 & 2.0372 & 83.67&0.25 & $-$56.30&0.14 & & & &
0.82 & 5070 & 5400 & -- & 0.76 \\[5pt]
120.22041.3265 & {\it a} & 5.4083 & 5.4083 & 22.18&0.06 &
$-$24.00&0.04 & 89.8 & 0.108 & 19.98 & -- & -- & 6200 & 1.19 & 1.15 \\
& {\it b} & 5.4083 & 5.4083 & \multicolumn{2}{c}{114.90} &
$-$24.00&0.04 & & & -- & -- & (3340) & (0.23) & 0.28 \\[5pt]
402.47800.723 & {\it a} & 8.5496 & 4.2748 & 75.91&0.04 &
$+$00.40&0.04 & 85.8 & -- & -- & 1.26 & 6400 & 6400 & -- & 1.22 \\
& {\it b} & -- & -- & \multicolumn{2}{c}{0.00} & $-$26.40&0.04 & &
-- & -- & -- & --& 5800 & -- & -- \\
& {\it c} & 8.5496 & 4.2748 & 68.09&0.07 & $+$00.40&0.07 & & -- & --
& 1.40 & 6820 & 6400 & -- & 1.37 \\
\hline
\end{tabular}
\end{table*}
$2.2$~m telescope at La Silla, Chile. The spectrograph provides a
spectral resolution of ${\rm R} \sim 48\,000$ and covers a wavelength
range from $3500$~\AA\ to $9200$~\AA. The instrumental specifications
list an RMS velocity error of $\sim 25~{\rm m~s^{-1}}$. This is
sufficient to detect faint low-mass star companions and distinguish
them from sub-stellar companions, which was the primary aim of the
observations. The secondary aim is to use the spectra for a spectral
analysis in order to derive the stellar parameters of the host stars.
The observations of the five targets were performed between August
19 and September 16, 2005. For each object we have obtained three
spectra with exposure times between $2400$~s and $3500$~s, depending
on the brightness of the object. The signal-to-noise ratio is $\sim
10$.
The data were reduced using the FEROS Data Reduction System (DRS). The
\'echelle spectra were bias and flat field corrected and wavelength
calibrated. The latter calibration was additionally quality checked by
cross-correlating the observation with a sky line spectrum. The
spectra were then corrected by applying relative wavelength
shifts. Barycentric and Earth rotation velocity influences to the
wavelengths are accounted for automatically by the DRS.
For the determination of the radial velocities we used the extracted
FEROS spectra and synthetic spectra of main sequence model stars
calculated from LTE model atmospheres using \verb!PHOENIX!
\citep{1999ApJ...512..377H} version 14.2. Both spectra were normalised
and relative fluxes were interpolated on a logarithmic wavelength
scale. A cross-correlation (CC) between a model with $\mbox{$T_{\rm eff}$} =
5600~{\rm K}$ and observation was performed between $5000$~\AA\ and
$7000$~\AA. The CC was implemented using a grid with 200 steps of
$\Delta \log{\left( \lambda/[{\rm \AA}] \right)}=2.2 \cdot 10^{-6}$ in
each direction. This method turned out to be robust against the use of
different model spectra. We could identify up to three spectroscopic
components in our data. Each of the peaks in the CC was then fitted by
a Gaussian and the position of the maximum of the fit gives the radial
velocity. The errors of the RV measurements were calculated from the
standard deviation of the Gaussian plus the accuracy limit of FEROS of
$25~{\rm m~s^{-1}}$. These RV errors are in the range between $50$ and
$350~{\rm m~s^{-1}}$.
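For illustration, the core of this procedure can be sketched in a few lines of Python (the function and variable names are ours and not those of any actual pipeline; we assume base-10 logarithms for the grid, both spectra continuum-normalized and resampled on the same uniform $\log\lambda$ grid, and we ignore edge effects of the circular shift):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458
DLOG = 2.2e-6                       # grid step in log10(lambda), as above

def rv_from_cc(obs, model, nsteps=200):
    ks = np.arange(-nsteps, nsteps + 1)
    cc = np.array([np.sum(obs * np.roll(model, k)) for k in ks])
    gauss = lambda k, A, k0, s, b: A*np.exp(-0.5*((k - k0)/s)**2) + b
    p0 = [cc.max() - cc.min(), ks[np.argmax(cc)], 10.0, cc.min()]
    (A, k0, s, b), _ = curve_fit(gauss, ks, cc, p0=p0)
    return C_KMS * (10.0**(k0 * DLOG) - 1.0)   # CC-peak shift -> RV
\end{verbatim}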
The CC function was also used to determine the projected rotation
velocities of the stars. We therefore applied a solar spectrum as
template convolved with rotational profiles following
\citet{2005oasp.book.....G}. This method allows one to derive stellar
radii in binaries, assuming a synchronised orbit. In this analysis,
the determined rotational velocity $v \, \sin i$ is of the order of
the uncertainty in most cases, which, due to the low signal-to-noise
ratio, is about $5$~km~s$^{-1}$. These derived radii are consistent
with the ones obtained from main sequence models (see
Table~\ref{table:1}). Additional constraints for the radii of the
binary components visible in the spectra can therefore not be derived.
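For reference, the relation underlying this radius estimate is the standard one for synchronous rotation, where the rotation period equals the orbital period $P$:
\begin{equation}
v \sin i = \frac{2 \pi R}{P}\, \sin i \ , \qquad \mbox{i.e.} \qquad R = \frac{P\, v \sin i}{2 \pi \sin i} \ ,
\end{equation}
with $\sin i \approx 1$ for the nearly edge-on eclipsing systems considered here.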
In order to spectroscopically identify the components of the analysed
systems, we again used the \verb!PHOENIX! model grid which ranges from
$4000$~K to $6800$~K in $\mbox{$T_{\rm eff}$}$ and from $-1.5$ to $0.0$ in relative
solar metallicity $[{\rm Fe/H}]$. It should be noted that this is not
sufficient for a detailed abundance determination, which was not the
aim of this work. The surface gravity is kept constant at $\mbox{$\log\,g$} =
4.5$. Knowing the RV of the individual components of a system, the
models were convolved with a Gaussian to match the resolution of the observation and
shifted to their position in the observed spectrum. Depending on the
number of spectral components, all possible combinations of model
spectra were tested for each observed spectrum. A $\chi^2$-test was
used to identify the best fitting models. Given the low
signal-to-noise ratio of the spectra, we estimate an uncertainty of
about $400~{\rm K}$ in effective temperature.
In cases where we know the RV amplitudes for two components (MACHO
118.18407.57, 118.18793.469, 402.47800.723), $M \sin i$ is known for
both components. Assuming $i = 90^\circ$ for the first iteration, we
determined radii and effective temperatures ($T_{\rm eff}^{\rm RV}$ in
Table~\ref{table:1}) from interpolation of the Geneva model tracks
\citep{1993A&AS...98..523S} assuming zero-age main sequence (ZAMS) or
terminal-age main sequence (TAMS) stars. We then applied the eclipsing
binary simulation software
\mbox{\texttt{Nightfall}}\footnote{http://www.hs.uni-hamburg.de/DE/Ins/Per/Wichmann/Nightfall.html}
with the derived stellar and orbital parameters from the previous step
as input and calculated a best model fit to the R-band light curve and
radial velocity measurements simultaneously. We used the third light
contribution and the inclination as free parameters and calculated
$\chi^2$-values for the light curve fits assuming ZAMS and TAMS
stars. In a second iteration, we repeated the fit with the now known
inclination (see Fig.~\ref{fig:chi2}). For these three systems the so
derived effective temperature can be compared to the one of the
spectral analyses ($T_{\rm eff}^{\rm SA}$ in
Table~\ref{table:1}). Deviations are within our estimated
uncertainties and show the overall consistency of our main-sequence
solution.
In the other two cases (MACHO 120.22041.3265 and MACHO 118.18272.189),
the effective temperature from the spectral analysis was used to
derive masses and radii of each components, again assuming ZAMS and
TAMS stars. In the light curve simulations for the MACHO R-band
photometry we varied the inclination and the radius $R_2$ of the
potential transiting object, assuming ZAMS and TAMS primary stars.
\section{Results}
We will present results for the five targeted MACHO objects for which
we found an orbital solution that explains the detected transits and
the measured radial velocities. In Fig.~\ref{fig:rvslc_ell} we show
fitted light curves to the photometric MACHO data (bottom panels in
the plots) and RV curve fits to the Doppler-measurements (asterisks in
the top panel of the plots). The dashed lines are for circular orbits
and the solid lines show a best fit elliptical
orbit. Fig.~\ref{fig:rvslc_circ} again shows \mbox{\texttt{Nightfall}}\ light curve
solutions to the photometric data as well as the RV fits to the
Doppler-measurements. Here circular orbits reproduce the observations
best. All stars were assumed to be on the ZAMS for the fits in
Figs.~\ref{fig:rvslc_ell} and \ref{fig:rvslc_circ}. $\chi^2$-contour
plots for both ZAMS and TAMS stars are depicted in
Fig.~\ref{fig:chi2}. The inclination of the orbital plane and the
third light contribution (bottom three plots) and the radius $R_2$
(top two plots) of the potential transiting objects were allowed to
vary. A list of the orbital parameters $P_{\rm MACHO}$ \citep[period
given by][]{2004ApJ...604..379D}, the derived period $P$ in our
analyses, the RV amplitude $K$, the system velocity $V_0$, the orbital
inclination $i$, and in cases of systems with elliptical orbits, $e$
the eccentricity and $\omega$ the longitude of the periastron as well
as the mass, effective temperature, and radius is shown in
Table~\ref{table:1}.
\subsection*{MACHO 120.22041.3265}
MACHO 120.22041.3265 is the only system in our sample with just one
component visible in the spectra. Spectral analysis yields $\mbox{$T_{\rm eff}$} =
6200~{\rm K}$ and indicates a low metallicity ($[{\rm
Fe/H}]=-1.0$). The fit of a sinusoidal to the RVs folded to a period
of $5.4083~{\rm d}$ (DC, dashed curve in Fig.~\ref{fig:rvslc_ell})
differs from the RV measurement at a phase of $0.87$ by $\sim 10~{\rm
km~s^{-1}}$. A better fit is provided by an elliptical orbit with an
eccentricity of $e=0.108$, a longitude of periastron of
$\omega=19.98^\circ$, and an orbital semi-amplitude of $K=22.18~{\rm
km~s^{-1}}$. For such a system the radius and mass of the secondary is
$R_2 = 0.3~\mbox{$R_{\odot}$}$ and $M_2 = 0.23~\mbox{$M_{\odot}$}$ for a ZAMS and $R_2 =
0.5~\mbox{$R_{\odot}$}$ and $M_2 = 0.25~\mbox{$M_{\odot}$}$ for a TAMS primary (see
Fig.~\ref{fig:chi2}), clearly indicating an M dwarf companion.
With these parameters, the system is very similar to OGLE-TR-78
\citep{2005A&A...438.1123P}.
We used equation~(6.2) of \citet{1977A&A....57..383Z} to calculate an
estimate for the circularisation time of the system. Due to the low
mass ratio $q=M_2/M_1$, we find a circularisation time of the order of
the Hubble time even for this close binary system.
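For reference, the eccentric RV curves fitted above follow the standard Keplerian model $V(t) = V_0 + K\left[\cos(\nu+\omega) + e\cos\omega\right]$, with $\nu$ the true anomaly. A minimal implementation (ours, not the actual fitting code used for this work) reads:
\begin{verbatim}
import numpy as np

def rv_kepler(t, P, T0, K, V0, e, omega):
    # omega in radians; t, T0, P in the same time units
    t = np.asarray(t, dtype=float)
    Mn = 2.0*np.pi*(((t - T0)/P) % 1.0)       # mean anomaly
    E = Mn.copy()
    for _ in range(30):                       # Newton iteration, Kepler's eq.
        E -= (E - e*np.sin(E) - Mn)/(1.0 - e*np.cos(E))
    nu = 2.0*np.arctan2(np.sqrt(1 + e)*np.sin(E/2),
                        np.sqrt(1 - e)*np.cos(E/2))
    return V0 + K*(np.cos(nu + omega) + e*np.cos(omega))
\end{verbatim}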
\begin{figure}
\includegraphics[clip,width=8.8cm]{7147fig1.ps}
\includegraphics[clip,width=8.8cm]{7147fig2.ps}
\caption{Radial velocity and light curve fits for systems with
elliptical orbits. The dashed lines show best-fit sinusoidals while
the solid lines show best-fit eccentric orbits. Component {\it a} is
plotted in black, component {\it b} in grey. The system velocity for
the circular orbit is shown by the thin line, and for the elliptical
orbit by the thick dotted line. The solutions shown are calculated
assuming ZAMS stars. The error bars for the RV measurements are of the
size of the symbols.}
\label{fig:rvslc_ell}
\end{figure}
\begin{figure}
\includegraphics[clip,width=8.8cm]{7147fig3.ps}
\includegraphics[clip,width=8.8cm]{7147fig4.ps}
\includegraphics[clip,width=8.8cm]{7147fig5.ps}
\caption{Radial velocity and light curve fits for systems with
circular orbits. Component {\it a} is plotted in black, component
{\it b} in grey, and component {\it c} in a lighter grey. The
solutions shown are calculated assuming ZAMS stars. The error bars
for the RV measurements are of the size of the symbols.}
\label{fig:rvslc_circ}
\end{figure}
\subsection*{MACHO 118.18793.469}
Two spectral components could be identified, each with
$\mbox{$T_{\rm eff}$}=5400~{\rm K}$ and a highly subsolar ($[{\rm Fe/H}]=-1.5$)
metallicity. For the light curve and RV fits with \mbox{\texttt{Nightfall}}\ we used the RV
amplitudes of the two stars to derive masses by the procedure
described in the previous section. A reasonable fit to the RV
measurements folded to twice the period of DC can be achieved with
sinusoidals (dashed curves in Fig.~\ref{fig:rvslc_ell}),
i.~e. assuming a circular orbit for the two components. However, an
improved fit can be achieved by fitting the light curve and radial
velocities in \mbox{\texttt{Nightfall}}\ at the same time to an elliptical orbit (solid
curves in Fig.~\ref{fig:rvslc_ell}). The best fit is achieved with a
small eccentricity of $e = 0.041$ and a periastron longitude of
$\omega=89.94^\circ$. By varying the third light and the inclination,
we construct the $\chi^2$-map shown in Fig.~\ref{fig:chi2}. As
suggested by the spectral analysis of MACHO 118.18793.469, the lowest
$\chi^2$-value is found for a third light of zero. The inclination is
$85.6^\circ$ for the ZAMS and $82.2^\circ$ for the TAMS. The low
depth of the transits is therefore due to a grazing eclipse. This is
also supported by the V-shape of the best-fit model.
The best-fit light curve model shows different transit depths. This is
an indication of two transits in one orbital period, caused by two
stars of slightly different size.
\subsection*{MACHO 118.18407.57}
Three components are visible in the CCs of the three spectra, one of
which shows RV variations below $1~{\rm km~s^{-1}}$. This third
component is therefore either in a wider orbit or physically
unrelated to the other two. Components {\it a} and {\it c} show RV
changes of over $100~{\rm km~s^{-1}}$. They can be well fitted with
sinusoidals of twice the period given by DC, i.~e. $4.7972~{\rm
d}$. If the photometric data are phased accordingly, we then see both
transits where the transit depths are reduced due to third light of
component {\it b}.
For the light curve simulation we once more used the RV amplitudes of
{\it a} and {\it c} to get the masses and varied the inclination and
third light coming from component {\it b}. The effective temperatures
and radii of the components are interpolated from the Geneva evolution
tracks assuming young stars on the ZAMS and older stars on the
TAMS. The contribution of component {\it b} meets the expectations
from the spectral analyses ($\mbox{$T_{\rm eff}$} = 6200~{\rm K}$ for components {\it
a} and {\it c} and $\mbox{$T_{\rm eff}$} = 6600~{\rm K}$ for component {\it b}, also
see Fig.~\ref{fig:chi2}). The inclination of the system is $84^\circ$
assuming that the stars are on the ZAMS and $79.5^\circ$ for the
TAMS. The system shows different transit depths, as MACHO
118.18793.469 does.
\subsection*{MACHO 402.47800.723}
The second brightest object in the sample shows three components in
the spectra. Components {\it a} and {\it c} are best fitted by a
model with $\mbox{$T_{\rm eff}$}=6400~{\rm K}$, {\it b} has $\mbox{$T_{\rm eff}$}=5800~{\rm K}$. As
in the case of MACHO 118.18407.57, the masses are derived from the
radial velocities. The RV measurements of {\it a} and {\it c} are well
fitted assuming a circular orbit with twice the period of DC. The
third component only shows small RV variations and therefore seems to
have a larger period than the other two. The fractional third light
contribution for this component is $\sim 1/3$ and an inclination of
the eclipsing system is $85.8^\circ$ assuming stars on the ZAMS and
$82.3^\circ$ for the TAMS (see Fig.~\ref{fig:chi2}). We again see
transit depth differences. Due to the high signal-to-noise ratio of
the light curve, these are quite obvious and amount to $\Delta R =
0.015$. This observation is also expected from the orbital
semi-amplitude differences.
\subsection*{MACHO 118.18272.189}
Each of the three FEROS spectra displays two components. The spectral
analysis reveals that both components have a similar effective
temperature ($\mbox{$T_{\rm eff}$} = 5800~{\rm K}$) and a subsolar metallicity of
$[{\rm Fe/H}]=-0.5$. The cross-correlation shows that component {\it
b} has a constant RV of $\sim 5.46~{\rm km~s^{-1}}$ within the above
mentioned statistical errors. Component {\it a} shows RV variations of
$\sim 3.5~{\rm km~s^{-1}}$. Folding the RV measurements to the orbital
period given by DC, one sees that the two components visible in the
spectrum cannot be responsible for the transit in the light curve
since one RV point is very close to the transit. However, here the
two components should almost have the same RV. This is clearly not
present in the data. The same is the case if we double the period (see
Fig.~\ref{fig:rvslc_circ}). Thus, we exclude the scenario that the two
visible components are responsible for the transit.
\begin{figure*}[ht!]
\includegraphics[clip,width=18cm]{7147fig6.ps}
\caption{$\chi^2$-contour plots for all analysed systems. In the
left column we assume that the stars are on the zero-age main
sequence and in the right column on the terminal-age main
sequence. The bottom three plots show the $\chi^2$-contours for
third light and inclination as fitted parameters. For the top two
the radius of the eclipsing component and the inclination have been
varied. The crosses mark best-fit values.}
\label{fig:chi2}
\end{figure*}
In another plausible scenario, we treat component {\it b} as third
light and assume that component {\it a} is eclipsed by a low mass
object not visible in the spectra. However, if we fit a sinusoidal to
the RV points, in our solution the star would move away from the
observer after the transit while it should do the opposite. We can
therefore discard this scenario.
One could argue that the variations in RV measured for component {\it
a} are just caused by systematic errors and that the eclipse visible in
the light curve is caused by a planet orbiting {\it a} without causing
any noticeable RV changes. We have fitted this scenario taking the
light from component {\it b} into account and found a radius of $R_2 =
0.25~R_\odot$ assuming that {\it a} is on the ZAMS and $R_2 =
0.35~R_\odot$ for {\it a} being on the TAMS (see
Fig.~\ref{fig:chi2}). These values, however, seem unrealistically high
for planets, and we can reject the 3-body scenario.
Finally, one scenario that can explain both the transit light curve
and the measured RVs is a four body system consisting of the two G
stars which are visible in the spectra and two M dwarfs invisible in
the spectra. Here the two faint components orbit each other in twice
the period from DC and eclipse each other twice. We assume an
inclination of $90^\circ$ and two low-mass stars of equal size. The
effective temperature of the eclipsing bodies was derived from the
transit depth of the MACHO R-band light curve using blackbody fluxes
for all four components. The transit depth is reduced by the light of
components {\it a} and {\it b}. The RV variations of component {\it
a} can in this scenario be explained by the reflex motion of {\it a}
due to the orbit of the binary M star system, which has a much larger period. We
therefore do not observe a correlation between the transits and the
RV. This scenario is supported by the fact that the two RV
measurements in Fig.~\ref{fig:rvslc_circ} at phases of $\sim 1.0$ and
$\sim 1.3$, which have approximately the same RVs, are from two
spectra only taken one day apart, while the third RV value comes from
a spectrum 26 days later. Component {\it b} would be in a very wide
orbit or physically unrelated to the other three stars.
\section{Summary}
For none of the five analysed MACHO candidates could a planetary or brown
dwarf companion be identified. We therefore confirm the
speculation of DC that due to the depths of the transits in the
photometric data the objects would be low-mass stars rather than
sub-stellar objects. From the five candidates, we found one grazing
eclipse of two nearly identical G stars (MACHO 118.18793.469), two
blends of deep eclipses of G stars with a significant third light
contribution (MACHO 118.18407.57 and MACHO 402.47800.723), one binary
star with a G type primary and an M dwarf secondary (MACHO
120.22041.3265) and one rather complicated, blended system with four
stars, of which each two are nearly identical (G and M type). With
this work we have shown that, also for deep transit surveys for
extrasolar planets, follow-up observations to weed out false positives
can be carried out efficiently with moderate effort.
After all, our results once again underline the need for spectroscopic
follow-up of transit planet candidates as already shown by
\citet{2005A&A...431.1105B} and \citet{2005A&A...438.1123P} for the
OGLE survey and \citet{2004ApJ...614..979T} in the case of a blend
scenario.
\begin{acknowledgements}
We would like to thank the referee for very useful comments.
\newline S.D.H. gratefully acknowledges the support of the
German-Israeli Foundation for Scientific Research and Development
grant I-788-108.7/2003.
\newline A.R. has received research funding from the European
Commission's Sixth Framework Programme as an Outgoing International
Fellow (MOIF-CT-2004-002544).
\newline This paper utilizes public domain data obtained by the MACHO
Project, jointly funded by the US Department of Energy through the
University of California, Lawrence Livermore National Laboratory under
contract No. W-7405-Eng-48, by the National Science Foundation through
the Center for Particle Astrophysics of the University of California
under cooperative agreement AST-8809616, and by the Mount Stromlo and
Siding Spring Observatory, part of the Australian National University.
\end{acknowledgements}
\bibliographystyle{bibtex/aa}
|
2,869,038,155,876 | arxiv | \section{INTRODUCTION}\label{intro}
\cite{kats07} first reported and characterized penumbral microjets (PJs) from high-spatial-resolution, high-cadence \ion{Ca}{2}\ H-line chromospheric images of sunspots obtained by the Solar Optical Telescope/Filtergraph \cite[SOT/FG:][]{ichi08} onboard the {\it Hinode} \citep{kosu07} satellite. The \ion{Ca}{2}\ H-line is formed in the chromosphere at a temperature of about 10$^4$ K or less. PJs are scattered, constantly occurring, transient jet events in sunspot penumbrae, with lifetimes of less than a minute, lengths typically between 1000 and 4000 km (although some can be longer, up to 10000 km), and widths of less than 600 km. Their apparent speed is more than 100 km~s$^{-1}$. Because they are faint and transient, with an enhanced brightness of only 10--20\% compared to the background penumbra, PJs are more clearly visible in running-difference images than in direct intensity images.
The fact that PJs were more clearly visible in the limbward side of the penumbra and least visible in the disk-center side of the penumbra, evidently due to foreshortening, led \cite{kats07} to conclude that the jets are aligned to the more vertical component of the penumbral magnetic structure. The penumbral magnetic field is a combination of spines (more vertical field) and interspines (more horizontal field), first classified by \cite{lites93}; see also \cite{titl93,sola93,pill00,sola03,bell04,lang05,borr11,scha11,scha12,tiw13,tiw15aa} for further details on the two components of sunspot penumbral magnetic field. \cite{jurc08} observed an increase in the inclination of PJs towards the outer edge of penumbra from an average inclination of PJs of 35$^\circ$ (with respect to the local normal line) at the umbral-penumbral boundary to 70$^\circ$ at the penumbral/quiet Sun boundary. This inclination change of PJs is compatible with the average field inclination change with radial distance in sunspots, see e.g., Figure 7 of \cite{tiw15aa} for radial variation of field inclination in a sunspot.
From a space-time plot, \cite{kats07} found some PJs to form near bright penumbral grains \citep{mull73,sobo99,rimm06}. However, this remained to be established statistically. \cite{jurc08} found it difficult to locate PJs with respect to penumbral filaments. \cite{kats07} proposed that PJs could be produced as a result of magnetic reconnection between the two magnetic components (spines and interspines). In this picture, the jets are oriented in the direction of the spine field, travel upward along it, and are rooted at an edge of a filament, between the field in a spine and the more horizontal field (along the central axis) in the filament.
Recent magnetohydrodynamic (MHD) simulations support the idea of magnetic reconnection driving these events, either induced by strong outflows along horizontal flux tubes \citep{saka08}, or by assuming the horizontal field in a twisted flux tube \citep{maga10}. An alternative is given by \cite{ryut08}, in which shocks caused by reconnection between neighboring penumbral filaments can produce PJs in a manner that appears consistent with the observations of \cite{rear13}.
In a recent investigation of internal structure of a sunspot's penumbral filaments by applying a spatially coupled depth-dependent inversion code \citep{van12,van13} on spectropolarimetric data of Hinode (SOT/SP), \cite{tiw13} found that the penumbral filaments (interspines) resemble elongated convection cells, and behave as stretched magnetized granules. The bright penumbral grains are the heads of penumbral filaments (the head is the filament's end closer to the umbra). Strong upflows, with a field of the polarity of that of the umbra, are observed in the heads of penumbral filaments. The upflow continues along the horizontal axis of the filament for more than half its length outward, and weakens with length. A stronger downflow is observed at the tails of filaments, which contain field of opposite polarity to that of the umbra and heads of filaments. Weak downflows are also observed at the edges of filaments along the length on the sides of the upflows. These lateral downflows also often contain field of opposite polarity to that of the umbra and heads of filaments. This opposite polarity field was found in one third of the total filaments studied by \cite{tiw13}. Also see, \cite{scha13} and \cite{ruiz13} for observational reports on the presence of opposite polarity field in the sides of penumbral filaments, which was also obtained in three-dimensional magnetohydrodynamic (3D MHD) simulations by \cite{remp12}. Thus, keeping the above picture of a penumbral filament's internal structure in mind, the scenario of formation of PJs \cite[proposed by][]{kats07} e.g., by reconnection between two components of penumbral magnetic field, inclined to each other at an acute angle, should be modified. In Section \ref{sec1}, we present a modified picture of the magnetic configuration for the production of PJs.
The other main issue that we address in this paper is whether PJs have any transition region (TR) or coronal signatures. The estimation of chromospheric thermal energy (3/2 $n k_BTV$) of a PJ as estimated by \cite{kats07} based on the following numbers: n = 10$^{18}~ \mbox m^{-3}$, k$_B$ = 1.38$\times 10^{-23} ~\rm{m^2~ kg ~s^{-2}~ K^{-1}}$, T = 10$^4$ K, $V$ = 2000 km $\times$ (300 km)$^2$, returns a value of 2$\times$10$^{16}$ J, or 2$\times$10$^{23}$ erg, which is of the order of that of a coronal nanoflare \citep{sves76,parker88}. Based on this estimation it was suggested by \cite{kats07} that PJs have potential to appear or display some signatures in the corona, and to contribute in some ways to the coronal heating in active regions.
To test the above hypothesis, one needs coronal observations of sunspot penumbrae at very high spatial resolution because, as mentioned before, the widths of these PJs are in the range 150 -- 600 km (400 km or less according to \cite{kats07}), close to and at the resolution limit of the telescope (SOT). Coronal observations at such a high resolution were not available until the High-resolution Coronal Imager \cite[Hi-C:][]{koba14,cirt13} obtained images of an active region (AR) in a narrow passband filter centered at 193 \AA\ at a high spatial resolution of about 150 km, which is also the approximate resolution of Hinode (SOT/FG). Fortunately, as a result of coordinated observations, Hi-C and Hinode both observed a part of a penumbra of the Hi-C AR 11520 for 1.75 minutes. Although the observed part of the penumbra was on the disk-center side, which is not the best location for the visibility of PJs, the penumbral field was twisted far from the disk-center direction, so that we could observe several PJs in this part of the penumbra within this short time period of observation.
We have extended the study for one hour by using different Atmospheric Imaging Assembly channels. We notice a few locations where larger jets are repeatedly produced. To investigate the magnetic structure and origin of these larger penumbral jets, we have also used the co-temporal Stokes-V data obtained by SOT/FG in the field of view (FOV) of the SOT/FG \ion{Ca}{2}\ movie and G-band images of the penumbra.
\section{DATA}\label{data}
To identify PJs in the penumbra of the NOAA AR 11520 ($\sim$ X $-$125\arcsec, Y $-$325\arcsec) observed by Hi-C \citep{cirt13,koba14}, we have used \ion{Ca}{2}\ H-line broadband filtergraph images (centered at 3968 \AA) obtained by SOT \citep{tsun08,suem08,ichi08} onboard {\it Hinode} \citep{kosu07}. We use observations taken between 18:53 and 19:53 UT on 11 July 2012, which cover a 1.75 minute overlap (18:53:44 -- 18:55:30 UT) with the five-minute observations of Hi-C. Hi-C observed the corona of AR 11520 at high resolution in a narrow wavelength range centered at 193 \AA. The cadence of the Hi-C 193 \AA\ images is 5 s. The cadence of the \ion{Ca}{2}\ images varies from frame to frame between 5 s and 15 s. The spatial resolution of both telescopes is about 150 km.
G-band images from SOT/FG are used to identify the locations of the feet of PJs in the photosphere; heads of filaments appear brighter and tails darker in G-band images. We have also used Stokes-V images obtained by SOT/FG to examine the magnetic polarity at the locations of some larger jets, discussed later. Both the G-band and Stokes-V images have a cadence of about 50 s.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{f2.pdf}
\caption{Overview of observations by Hi-C and Hinode (SOT/FG): (a) Hi-C full field of view 193 \AA\ image; the box outlines the region of the sunspot penumbra of interest for the present work. (b) Enlargement of the area outlined by the box in (a). (c) Hinode (SOT/FG) \ion{Ca}{2}\ H-line image of the same FOV at the same time as the Hi-C image. A 1.75 minute movie `movie1.mp4' is linked to this figure. The movie contains four panels: the two top panels show Hi-C 193 \AA\ filtergrams and SOT/FG \ion{Ca}{2}\ H-line filtergrams, and the two lower panels show running differences of the corresponding intensity images. Solar north is up and west is to the right in these and all other solar images/movies in this paper.}
\label{f2}
\end{figure*}
In addition, we use one-hour-long (18:53 -- 19:53 UT) image series from 1600, 304, 171, 193, and 94 \AA\ passbands observed by the Atmospheric Imaging Assembly \citep[AIA:][]{leme12} onboard the {\it Solar Dynamics Observatory (SDO)} spacecraft. The pixel size of AIA is 0.6 arcsec, so these images detect but do not resolve the jets. The cadence of the AIA 1600 channel is 24 s, and the cadence is 12 s for each of the other AIA channels used in the present study. All the data used in the paper are calibrated and, whenever applicable, co-registered by using SolarSoft routines.
The 193 \AA\ (AIA and Hi-C) and AIA 94 \AA\ bands are both predominantly coronal \citep{leme12}. The 193 \AA\ band particularly detects \ion{Fe}{12}\ at about 1.5 MK, but also has some response to the transition region emission of $2-3 \times10^5$ K plasma \citep{delz13,wine13}. The 94 \AA\ channel passes mostly hot emission and is centered on an \ion{Fe}{18}\ line ($6-8\times10^6$ K), but also detects some line emission from Fe ions formed at $\sim 1\times 10^6$ K \citep{warr12,delz13}; see also \cite{test12}. There is no known cool TR contamination in the 94 \AA\ channel.
The 1600 \AA\ AIA passband, which primarily passes lower-chromospheric continuum emission, also covers the two \ion{C}{4}\ lines near 1550 \AA\ formed at T $\approx$ 10$^5$ K in the transition region (TR). The transient brightenings in the 1600 \AA\ band are due to these \ion{C}{4}\ lines, and hence are from the lower TR \citep{leme12}. The 304 \AA\ passband observes the upper chromosphere and lower TR at $\sim$ 50,000 K, in emission primarily from \ion{He}{2}. The 171 \AA\ band observes emission primarily from \ion{Fe}{9}\ formed in the upper TR at 6 $\times 10^{5}$ K.
\section{RESULTS}
In this section, we first present results on the detection of TR signatures, but no definite coronal signatures, of PJs from the analysis of the Hinode (SOT/FG) and Hi-C data sets. Then we present results from the analysis of one hour of Hinode (SOT/FG) and SDO/AIA data. We discover penumbral jets larger than the normal ones reported earlier by \cite{kats07}, and we characterize these larger PJs and their magnetic setting.
\begin{figure*}[htp]
\centering
\includegraphics[width=\textwidth]{f3.pdf}
\caption{Two examples of penumbral jets, from their birth to decay, are displayed in the middle row. The white arrow points to a PJ, whereas the pale green arrow points to a location where larger PJs are produced repeatedly, discussed in Section \ref{trc2}. The top two rows contain Hi-C images and the corresponding running difference (RD) images. The third and fourth rows contain RD and intensity images from the \ion{Ca}{2}\ H-line at nearly the same times as the Hi-C images. The last row contains G-band images corresponding to the beginning and end of the two jets; the first two G-band images are identical, duplicated to point out the penumbral filament head at the foot of the PJ indicated by the white arrow. These arrows are duplicated in all images, except in the middle G-band image, which contains two black arrows pointing to the source locations (feet) of the jets. The red cut on the middle \ion{Ca}{2}\ RD image marks where the PJ's width is measured, as presented in Figure \ref{pjw}. The tickmark separation in each panel is 1\arcsec.}
\label{f3}
\end{figure*}
\subsection{Transition region and coronal signatures of penumbral jets: Hinode (SOT/FG) and Hi-C observations}\label{trc1}
In Figure \ref{f2}(a), we display a full-FOV Hi-C 193 \AA\ image with a box outlining the FOV of the penumbra studied, which is enlarged in panel (b). A co-temporal \ion{Ca}{2}\ image of the same FOV is shown in panel (c). A movie `movie1.mp4', linked to this figure, contains four panels, two intensity panels of Hi-C and \ion{Ca}{2}, respectively, and two panels of the corresponding running difference images, covering the 1.75 minutes (18:53:44 -- 18:55:30 UT) when Hi-C and FG simultaneously observed the sunspot penumbra. As mentioned before, both instruments have a spatial resolution of about 150 km, which provides a unique opportunity for comparing the jets at two different heights and temperatures.
At least 10 jets can be observed in the 1.75 minute movie `movie1.mp4'; they are especially visible in the \ion{Ca}{2}\ running difference movie. Most PJs show a brightness enhancement of 10 -- 20\% over the penumbral background; some of the largest jets show enhancements of up to 30\%. In the Hi-C 193 \AA\ movie and the corresponding running difference movie, one can notice many bright dots (BDs) that move toward or away from the spot center along filaments in the penumbra. These BDs are reported and characterized in detail by \cite{alpert15}. However, no signatures of PJs are noticeable in the Hi-C 193 \AA\ movie.
In Figure \ref{f3}, we display two examples of jets from start to end, clearly seen in the running difference \ion{Ca}{2}\ images (middle row). To look for any coronal signatures of these jets, the corresponding Hi-C images and their running difference images are shown in the two upper rows. We also display \ion{Ca}{2}\ intensity and G-band images in the last two rows of Figure \ref{f3}. As seen from the arrows in the G-band images, it is difficult to identify exactly where PJs are rooted relative to the heads of penumbral filaments; this agrees with \cite{jurc08}.
Only two G-band images, corresponding to the beginning and decay times of the two PJs, are available. The middle G-band image is the same as in the first column, and the black arrows on it point to the locations in the penumbra where the example PJs are produced. Although many PJs appear to form near the heads of filaments, as indicated by a black arrow in the middle-panel G-band image of Figure \ref{f3}, it cannot be concluded with certainty that most PJs form at the heads of filaments. The second jet, indicated by the pale green arrow, is one of the larger penumbral jets, which appear to be rooted at the converging tails of a few filaments (the location pointed to by a black arrow in the middle G-band image), discussed in detail in Section \ref{trc2}.
Both chromospheric penumbral jets displayed in Figure \ref{f3} show no signatures in the Hi-C running difference images. The same is true for all other PJs seen in the movie `movie1.mp4'.
In the RD movie of FG \ion{Ca}{2}\ images, we notice a couple of locations where somewhat larger jets repeatedly appear. One such location is indicated by the pale green arrow in Figure \ref{f3}. But less than two minutes is too short a time to confirm this repetition. To explore in detail whether jets are repeatedly produced at this location over a longer time, whether there are other such locations of larger penumbral jets, and whether they have any TR or coronal signatures, we extended our investigation to about an hour of SOT \ion{Ca}{2}\ H-line observations of the penumbra. We included SDO/AIA data to look for TR and coronal signatures of larger PJs, as described next (in Section \ref{trc2}).
\subsection{Transition region and coronal signatures of larger penumbral jets: Hinode (SOT/FG) and SDO/AIA observations}\label{trc2}
In the one-hour \ion{Ca}{2}\ H-line running difference (RD) movie, we noticed at least four locations where multiple jets are produced repeatedly. These jets appear brighter, larger and wider than most PJs. In what follows, we characterize these larger penumbral jets and investigate their signatures in the TR and corona. We also look for the formation mechanism of these larger PJs by studying magnetic field polarity using Stokes-V images, which is described in Section \ref{mag}.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.91\textwidth]{f4.pdf}
\caption{An example image of each AIA wavelength used in this investigation. The nominal wavelength of the AIA channel and the time of observation, which is nearly the same for all panels, are displayed on each image. The bottom right panel is a SOT/FG running difference (RD) image taken closest in time to the AIA images. To better visualize transient events, i.e., jets, we have linked a one-hour movie `movie2.mp4' of RD images. White arrows indicate the position of a large PJ (location numbered `4' in the movie). The tickmark separation in each panel is 1\arcsec.}
\label{f4}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.91\textwidth]{f5.pdf}
\caption{An example of a large PJ (location numbered `2' in `movie2.mp4'). RD images of the five AIA channels and of the \ion{Ca}{2}\ H-line at nearly the same time are shown. The green arrow in each panel marks the location of the example jet as seen in the \ion{Ca}{2}\ H-line RD image. The yellow arrow points to the extension of the jet seen in AIA 193, 304 and 171 \AA. The red cut in the bottom right panel marks where the width of the jet is measured, as presented in the right panel of Figure \ref{pjw}. The tickmark separation in each panel is 1\arcsec.}
\label{f5}
\end{figure*}
First, the AIA image of each wavelength (1600, 304, 171, 193, and 94 \AA) closest in time to each FG \ion{Ca}{2}\ H-line image is selected. Then we make both intensity and RD movies from each of these sets of images. In Figure \ref{f4}, we display one example image of each AIA channel used to investigate the signature of penumbral jets in coronal and TR emission. A movie `movie2.mp4' containing the six corresponding panels of RD images, in which the jets are better visible, is linked to this figure. The one-hour FG \ion{Ca}{2}\ RD movie shows many normal PJs and at least four locations of larger PJs; each of these four locations is indicated by an arrow and numbered in the movie. One of the four locations (numbered `2') is the same as that pointed to by the pale green arrow in Figure \ref{f3}. The larger PJs show brightness enhancements of 30--60\% over the background penumbra.
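The frame matching and differencing are simple operations; a minimal Python sketch of the two steps (the NumPy-based function and variable names are ours for illustration, not the SolarSoft routines actually used) is:
\begin{verbatim}
import numpy as np

def nearest_frames(aia_times, fg_times):
    """For each FG Ca II time, the index of the AIA frame
    closest in time; times are arrays of seconds since a
    common epoch (assumed)."""
    aia_times = np.asarray(aia_times, dtype=float)
    return [int(np.argmin(np.abs(aia_times - t))) for t in fg_times]

def running_difference(frames):
    """RD movie: each frame minus the previous one; transient
    brightenings such as jets stand out in the difference."""
    frames = np.asarray(frames, dtype=float)
    return frames[1:] - frames[:-1]

# Example with synthetic data (3 frames of 4x4 pixels):
movie = np.random.rand(3, 4, 4)
rd = running_difference(movie)   # shape (2, 4, 4)
\end{verbatim}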
A clear signature of the example larger PJ is visible in 1600 \AA\ in Figure \ref{f5}. The jet appears to extend toward the north, higher along the magnetic field, with only its front brightening in the 193, 171 and 304 \AA\ channels. No intensity enhancement is noticeable in 94 \AA. Many similar larger jets are observed showing signatures in the four above-mentioned wavelengths, but no signature of any of these larger jets was observed in 94 \AA, thus indicating that no appreciable MK plasma is produced in these penumbral jets, however large they are, and that the brightenings in the other channels come from the emission of transition-region-temperature jet plasma \citep[see, e.g.,][]{wine13}.
Also noticeable in the movie are some normal PJs, which are neither repetitive nor exceptionally large, with intensity enhancements of 25--30\% over the background, displaying faint signatures in 1600 \AA. However, signatures of these PJs in 304, 171, 193, and of course in 94 \AA, are hard to detect, if present at all. This indicates that some of the larger normal PJs do appear in TR emission, but not in coronal emission, in agreement with the absence of their detection in the 1.75 minute Hi-C 193 \AA\ movie described in Section \ref{trc1}.
We measured the widths of many of the normal and larger penumbral jets. An example width measurement for each type is shown in Figure \ref{pjw}; these examples are among the largest and widest in both cases. Larger PJs can be much wider than normal PJs, about double in the example presented. Recall that the widest normal PJ seen by \cite{kats07} was less than 400 km wide. The larger PJ shown here is about 500 km wide. When measured by fitting a Gaussian and taking its full width at half maximum (FWHM), as in Figure \ref{pjw}, the width of our largest PJ is found to be 600 km.
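A minimal sketch of this width measurement, assuming the intensity profile extracted along the red cut is available as a one-dimensional array (SciPy's curve\_fit here stands in for whichever fitting routine is used, and the plate scale is an assumed value):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma, c):
    return a * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + c

def fwhm_km(profile, pixel_km=80.0):
    """Fit a Gaussian to the intensity profile across the jet
    and return its FWHM in km; pixel_km is an assumed SOT/FG
    plate scale (~0.109 arcsec/pixel ~ 80 km)."""
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.max() - profile.min(),
          float(np.argmax(profile)), 2.0, profile.min()]
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2]) * pixel_km
\end{verbatim}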
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{f6.pdf}
\caption{Widths of a normal jet and a larger jet: left panel is for a normal PJ (along red cut in Figure \ref{f3}), right panel is for a larger PJ (along red cut in Figure \ref{f5}). Black and red colored plots show the original intensity and the fitted Gaussian function, respectively. }
\label{pjw}
\end{figure}
The lifetime of larger PJs is nonetheless less than a minute, similar to or slightly longer than that of normal PJs. Larger PJs are repeatedly produced at the same locations (pointed to by arrows in the movie `movie2.mp4') in the sunspot penumbra. The length of the largest PJ is 4200 km; note, however, that the location of the sunspot (close to disk center), and particularly of its penumbra (disk-center side), does not allow a measurement of the actual length owing to projection. Nonetheless, the estimated lengths of our larger PJs fall within the range of the longest PJs (up to 10000 km) observed earlier by \cite{kats07}, indicating that larger PJs are of the same category as normal PJs.
A rough estimate of the speed of the larger penumbral jet at location `4' in `movie2.mp4' between 19:40:25 and 19:41:03 gives a value of 250 km~s$^{-1}$. This is much faster than the acoustic speed of 10 km~s$^{-1}$\ in the chromosphere, and larger than the speed of normal PJs (100--150 km~s$^{-1}$) by a factor of about two. The Alfv\'en speed in the penumbral chromosphere is of order 1000 km~s$^{-1}$. The speed of the larger PJs is therefore supersonic but sub-Alfv\'enic. Because the deduced projected speed is less than the true speed, the fastest larger PJs might be nearly Alfv\'enic.
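The projected speed quoted above is essentially the slope of the jet-front position against time between RD frames; a minimal sketch with illustrative numbers only (front positions would in practice be read off the RD images):
\begin{verbatim}
import numpy as np

def projected_speed(front_km, times_s):
    """Least-squares slope of front position vs. time, in km/s.
    front_km: positions of the jet front along its axis [km];
    times_s:  corresponding frame times [s]."""
    return np.polyfit(np.asarray(times_s, float),
                      np.asarray(front_km, float), 1)[0]

# A front advancing at ~250 km/s over 38 s (illustrative values):
print(projected_speed([0.0, 4750.0, 9500.0], [0.0, 19.0, 38.0]))
\end{verbatim}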
The width of penumbral filaments is about 550 km \citep{tiw13}. Normal PJs appear rooted at an edge of a filament head and are usually 150-300 km wide; the lower limit of their width is not well known owing to the resolution limit of SOT. The size of larger PJs, particularly their width, seems too large for them to be produced at the same locations as most PJs. Although it is difficult to detect an opposite-polarity signal in the absence of SP scans and advanced processing, e.g., as done by \cite{tiw13}, we can plausibly detect such signals in Stokes-V images from SOT/FG, especially if larger PJs are produced at a different location containing more reversed-polarity flux than that of normal PJs. For this purpose, we analyzed the corresponding Stokes-V images over one hour, as presented in the next section.
\subsection{Magnetic setting of larger penumbral jets}\label{mag}
To investigate the magnetic origin of larger PJs, we examined the Stokes-V data over the same time span as the \ion{Ca}{2}\ H-line data analyzed in this work. The Stokes-V images can be considered equivalent to line-of-sight (LOS) magnetograms, albeit in arbitrary units. The FOV of the Stokes-V images is somewhat smaller than that of the \ion{Ca}{2}\ H-line images in east-west extent; it covers all but a western strip of the \ion{Ca}{2}\ FOV. The cadence of the Stokes-V images is about 50 s, poorer than that of the \ion{Ca}{2}\ images. In Figure \ref{stv_ca}, we display two images, a Stokes-V image on the right and a \ion{Ca}{2}\ H-line image of nearly the same time and FOV on the left. A movie `movie3.mp4' of Stokes-V images is linked to Figure \ref{stv_ca}. Because of the smaller FOV, only three of the larger-PJ locations can be viewed in this movie; each of them is marked by an arrow.
From Figure \ref{stv_ca} and `movie3.mp4', it is clear that patches of opposite-polarity magnetic field are present at the locations of these PJs in the Stokes-V images of FG. The opposite-polarity patches present at all three locations of larger PJs have different scales, forms, and visibility. However, from the G-band images and the Stokes-V movie, each of them appears to be located at or around the tail of a penumbral filament, or at a location where the tails of several penumbral filaments converge. Tails of filaments carry field of opposite polarity to that of the spines \citep{tiw13}. Therefore, the opposite-polarity fields in the spines and tails can reconnect and produce larger PJs, in a repetitive manner owing to the large amount of opposite-polarity flux present at the tails. Occasionally, tails of penumbral filaments converge and form stronger patches of field of opposite polarity to that of the spines \citep{van13,tiw13}, thus increasing both the size and the frequency of larger PJs.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{f7.pdf}
\caption{(Left) Same FOV of a \ion{Ca}{2}\ H-line image as that of observed Stokes-V images. (Right) A Stokes-V image of nearly the same time as the \ion{Ca}{2}\ H-line image. The box in each panel covers a location (numbered `2' in the `movie3.mp4') where larger/tail PJs are produced repeatedly. Absolute flux calculated in arbitrary units, by integrating the Stokes-V signal in the area of the box, is shown in Figure \ref{flux}. }
\label{stv_ca}
\end{figure*}
\begin{figure}[htp]
\centering
\includegraphics[width=0.8\columnwidth]{f8.pdf}
\caption{Time plot of absolute flux, inferred from Stokes-V signal, integrated within the box shown in Figure \ref{stv_ca}. Please note that the unit of the absolute flux is arbitrary. A vertical dashed line marks the start time of the example larger PJ indicated by arrow in Figure \ref{f5}. Larger PJs are rooted at this location in the box in Figure \ref{stv_ca}. }
\label{flux}
\end{figure}
The locations of larger PJs on the photosphere may also contain emerging flux, cancellation of emerging flux, or both. We show one example in Figure \ref{stv_ca} by outlining with a box the area around a location of larger PJs (numbered `2' in `movie2.mp4'). We selected the area within the box and computed the absolute Stokes-V signal contained therein; this is equivalent to the absolute flux, albeit in arbitrary units. Figure \ref{flux} shows the time plot of the absolute Stokes-V signal. A clear trend of decreasing flux is seen for about the first 40 minutes (during which location `2' produces larger jets), indicating cancellation of Stokes-V signal of opposite-polarity field. From the movie `movie3.mp4', we notice that patches of positive polarity, which appear to be locations where several filament tails converge, emerge and cancel against the existing negative flux as larger PJs are produced there.
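A minimal sketch of this flux computation (box indices and array names are placeholders):
\begin{verbatim}
import numpy as np

def absolute_flux_series(stokes_v_movie, y0, y1, x0, x1):
    """Unsigned Stokes-V signal integrated in a fixed box,
    per frame; units are arbitrary since Stokes V is not
    calibrated to gauss."""
    return np.array([np.abs(frame[y0:y1, x0:x1]).sum()
                     for frame in stokes_v_movie])
\end{verbatim}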
Compared to site `2', similar or smaller patches of opposite-polarity field are observed at the other two locations of larger PJs. These magnetic field patches, which have polarity opposite to that of the sunspot umbra, appear to be the tails of penumbral filaments surrounded by the opposite-polarity field of spines, or of the heads of neighboring filaments, or both. The bipolar patches could instead be produced by sea-serpent field lines \citep{sain08} that move slowly outward. In either case, reconnection can take place between the field submerging into the photosphere (tails of filaments) and the existing opposite-polarity spine field and/or the field inside the heads of neighboring filaments. Note that the opposite-polarity field at the tails of inner-penumbral filaments may not be clearly visible because the Stokes-V signal there is dominated by the surrounding spines.
These observations suggest that the larger PJs are produced by magnetic reconnection between opposite-polarity fields pressed against each other. The polarity arrangement might or might not be compatible with standard jet models; see, e.g., \cite{shib95,anti99,moor01,moor10}.
\section{DISCUSSION}
We have addressed two issues concerning penumbral jets in this paper. First, we have investigated whether PJs have any TR or coronal signatures. During this part of the investigation, we identified locations in the penumbra, apparently near the tails of filaments, at which PJs larger than the normal PJs repeatedly occurred. We investigated the magnetic setting of these larger PJs using Stokes-V images and also studied their signatures in the TR and corona. Second, based on recent observational results on the internal structure of penumbral filaments \citep{tiw13}, and on the present observations of larger PJs repeatedly appearing close to the tails of penumbral filaments, we propose a modified picture of the formation of PJs, detailed in Section \ref{sec1}.
\subsection{Transition-region/coronal signatures of penumbral jets}
Taking advantage of the high resolution ($\sim$150 km) of both instruments, Hinode (SOT/FG) and Hi-C, we compared the \ion{Ca}{2}\ H-line and 193 \AA\ images, searching for any signature of the chromospheric PJs in the 193 \AA\ coronal images. A comparison of 1.75 minutes of running difference images from both Hinode (SOT/FG) \ion{Ca}{2}\ H-line and Hi-C 193 \AA\ reveals no significant signature of the \ion{Ca}{2}\ jets in the 193 \AA\ images. However, 1.75 minutes is too short a period for a firm conclusion. An extension of our investigation by combining multiple SDO/AIA channels, albeit at lower resolution, reveals larger jets repeating at a few locations. In the G-band images and the Stokes-V movie of SOT/FG, these locations appear to be the tail of a penumbral filament or places where the tails of a few filaments converge.
We also detected some normal (non-tail, non-repetitive) PJs clearly displaying signatures in 1600 \AA, but not in 193 \AA\ or the other AIA channels studied in this paper. This is in agreement with the recent observations of \cite{viss15}, who found TR signatures of PJs in \ion{C}{2}, \ion{Si}{4}\ and \ion{Mg}{2} k\ slit-jaw images of IRIS data. Note that the sensitivity and resolution of AIA 1600 \AA\ might have prevented the detection of the TR response of some PJs. Alternatively, it cannot be ruled out that the PJs selected by \cite{viss15} were all large, in the range of the largest normal PJs that show a TR response at 1600 \AA\ in our case. Thus, we conclude that although most normal PJs do not directly contribute to TR and coronal heating, some of the largest normal PJs do display signatures in the TR. If smaller normal PJs do contribute significantly to coronal heating in some form, it could only be by adding energy through increasing stress and braiding in coronal loops, or by sending MHD waves up into the corona.
We observed at least four locations in the FOV of the sunspot penumbra we studied where larger jets were produced repeatedly. These jets are brighter and larger in size than normal PJs. We find that, although the lifetime and length of larger PJs are similar to or at the larger end of the distribution of normal PJs, they are wider and have higher apparent speeds than the normal PJs. The width of the weakest jets hits the resolution limit of the telescope (SOT), whereas the largest PJs are as wide as 600 km, measured by a Gaussian fit as shown in Figure \ref{pjw} for an example. The opposite-polarity patches found at the base of the larger PJs, which apparently are the locations of tails of filaments, sometimes where the tails of multiple filaments converge \citep{van13}, suggest that a magnetic reconnection process is responsible for their formation too. Because larger patches of opposite-polarity field exist at the tails of filaments than at their other parts (head and bulk), larger-scale magnetic reconnection of the tail field with the spine field is plausible. The flux cancellation shown in Figure \ref{flux} for the example location of larger PJs is compatible with reconnection being responsible for their production.
The speeds of all the jets, normal PJs as well as larger/tail PJs, are supersonic but sub-Alfv\'enic, consistent with these jets being magnetically driven via reconnection. Thus, both the normal and the larger/tail penumbral jets appear to be produced by a magnetic reconnection process, in some ways similar to how other jets, flares, and CMEs are produced; see, e.g., \citet{shib95,anti99,moor01,moor10}, and references therein.
Although the velocity and length of the larger/tail PJs are on the large side of the PJ distribution, we are unable to measure their true values owing to projection foreshortening in our penumbra. A future investigation with similarly high resolution data sets, together with Stokes-V images or magnetograms, is required to better establish the characteristics of tail PJs.
Note that the locations of larger PJs often also produce smaller (normal-size) PJs. In other words, not all jets formed at the locations of larger PJs are large enough to appear in the hotter AIA channels (e.g., in 193 \AA). For example, the jet marked by the green arrow in the \ion{Ca}{2}\ H-line RD images of Figure \ref{f3} is at a location of larger PJs (numbered `2' in `movie2.mp4') but does not show any signature in the 193 \AA\ RD images of Hi-C and/or AIA, whereas the jet displayed in Figure \ref{f5}, at the same location but at a different time, clearly displays signatures in the AIA 1600, 304, 171 and 193 channels.
The estimated thermal energy of chromospheric normal PJs is of the order of 10$^{23}$ erg, and most PJs hardly show any TR or coronal signatures. In contrast, some penumbral jets of larger size, produced at the tails of penumbral filaments, are 2-3 times wider than most PJs, thus carry four to nine times (or more) the energy, and show signatures in the AIA 1600, 304, 171, and 193 \AA\ channels. Because AIA 304, 1600, and 171 form in the chromosphere, lower corona or TR, and 193 passes some TR emission, as discussed in Section \ref{data}, we cannot rule out that the tail PJs contribute some weak coronal heating. What we can definitely say from our current observations is that tail PJs display significant signatures in TR emission. However, because of the sparsity of tail PJs, a significant contribution of tail PJs to coronal heating above sunspot penumbrae is very unlikely. High resolution data from future Hi-C flights, Solar-C and DKIST solar observations should be able to address these questions more closely.
\begin{figure*}[tp]
\centering
\includegraphics[trim=0cm 3cm 0cm 1cm,clip=true,width=0.98\textwidth]{cartoon.pdf}
\caption{A cartoon diagram (not showing the true width of the filament) depicting the formation mechanism of penumbral jets. For most PJs (the normal PJs), the reconnections, represented by yellow stars, take place at the edges of a penumbral filament, where the field (dashed red lines) is directed at a right or obtuse angle to the spine field (dark orange nearly vertical lines). This is a modified picture of that proposed by \cite{kats07}, where reconnection takes place between two components of field inclined to each other at an acute angle. Tail PJs form at the tails of filaments, where more field of opposite polarity (to that of the spine field) is present than elsewhere along the filament. All of the larger-than-normal PJs are tail PJs. The proposed magnetic configuration of the reconnection is shown in close-up inside the box. To keep the picture clearer, the more vertical component of the field inside the head of the filament, which has nearly the same inclination angle as that of the surrounding spine field, is not drawn.} \vspace{0.8cm}
\label{f1}
\end{figure*}
\subsection{Modified picture of formation of penumbral microjets}\label{sec1}
A sunspot penumbra contains penumbral filaments and spines \citep{tiw15aa}. The internal structure of sunspot penumbral filaments, recently explored by \cite{tiw13}, suggests the following modification of the formation mechanism of PJs proposed by \cite{kats07}. The penumbral filaments are horizontally elongated magnetized convective cells in which lateral downflows are present. These downflowing lanes also contain opposite-polarity field, which is most clearly present near the heads of penumbral filaments, continues along the sides of the filaments over their full length, and is strongest at their tails \citep{tiw13}. Although the exact location of PJs has not yet been established \citep{jurc08}, PJs possibly occur all along penumbral filaments. On each side of a penumbral filament there is spine field that presses against the filament \citep{tiw13}. The spine field could easily reconnect with the opposite-polarity field on the sides of filaments, and produce PJs that travel along the spine field.
Please note that, with distance from the umbra, the density of spines and field strength in them decrease and their field inclination with respect to vertical increases \citep{tiw15aa}. Thus, the rate of production of PJs and their strength are expected to decrease in the outer penumbra. In the current work, we discovered several locations in penumbra where repeated larger PJs are formed. These locations appear to be the tails of penumbral filaments. We propose that the reconnection between the spine field and tail field can generate these larger PJs. They are large and repetitive because the opposite polarity flux is larger in strength and area there as compared to that on the sides of penumbral filaments.
In Figure \ref{f1}, we draw a cartoon diagram of the magnetic configuration resulting in the formation of most/normal PJs and tail/larger PJs via the magnetic reconnection discussed above. As displayed in the cartoon, the direction of the PJs remains the same as proposed by \cite{kats07} (see also \cite{jurc08}): along the spine field, the more vertical field lines, whose inclination increases with radial distance outward. This picture fits well with the most recent observations of the fine-scale structure of sunspot penumbrae. The earlier picture of \cite{kats07}, of PJ formation by reconnection between two magnetic flux tubes inclined at an acute angle to each other, does not fit the recent observations of the convective nature of penumbral filaments. The magnetic field along a filament's axis in the head of the filament and that in the surrounding spines have nearly the same inclination, so reconnection between the spines and the opposite-polarity field at the edges of penumbral filaments is more likely than reconnection between the spine field and the same-polarity field inside the heads and along the bulk of filaments. In the present picture, most PJs form at the sides of the penumbral filaments, where the angle between the spine field and the filament field is greatest. Depending on the size of the tails of filaments, or on the convergence of the tails of more than one filament, magnetic reconnection can repetitively produce larger/tail PJs, in the way shown in the cartoon.
The presence of patches of opposite polarity field in the penumbra at the locations of tail PJs is also compatible with the sea-serpent field \citep[e.g.,][]{sain08}. Although the reconnection of the magnetic field in the tails of penumbral filaments with the opposite polarity spine field seems to be the most plausible mechanism of formation of larger PJs, the reconnection of tail fields with the heads of neighboring filaments generating larger PJs cannot be ruled out from our observations. Neither can the emergence of bipolar field in the penumbra leading to such larger PJs via magnetic reconnection with the existing penumbral field be ruled out. It is also unknown whether most PJs form at sites of flux cancellation or emergence; both of these processes are appropriate for producing them. Our picture needs to be verified by higher resolution observations of future observatories, e.g., Solar-C and DKIST.
Our proposal is supported by the reconnection produced by \cite{maga10} in MHD modeling of PJs; see also the modeling by \cite{saka08}. In their model, penumbral filaments are assumed to be a single twisted flux tube. We believe that this model corresponds to either side of a penumbral filament (Figure \ref{f1}). The reconnection between opposite-polarity fields on one side of a twisted magnetic flux tube can produce jets in a way similar to what we propose for either side of real penumbral filaments.
It is important to mention here that only one third of the filaments selected by \cite{tiw13} displayed opposite-polarity field on their sides near their heads; the number reduces slightly in the middle (before the tail) of filaments. Given that this structure is close to the resolution limit of the telescope, and that the opposite-polarity fields can only be detected by advanced processing of the SP data, e.g., by a spatially coupled inversion code \citep{van12,van13,tiw13} or by deconvolution methods \citep{ruiz13}, or can only be seen in specially processed ground-based observations \citep{scha13}, the possibility that each filament has opposite-polarity field not only near its head but all along its length cannot be ruled out. In that case, it is possible that the filaments produce PJs all along their sides by the same process as described above, but that in some cases we cannot detect them because they are very narrow and small, below the resolution limit of present telescopes.
\section{CONCLUSIONS}
The normal PJs show at most weak signatures in the transition region (in AIA 1600 \AA\ images) and none in the corona: neither in any of the AIA coronal channels nor in the high-resolution Hi-C 193 \AA\ images. The weak signatures of the largest normal PJs in 1600 \AA\ suggest that a few of the largest normal PJs directly power some localized transient TR heating.
A few locations of larger penumbral jets are found, which are locations of mixed-polarity magnetic flux near the tails of filaments in the penumbra. Larger/tail PJs are brighter (up to 60\% above the background penumbra), wider (up to 600 km), and faster (can be more than 250 km~s$^{-1}$) than most/normal PJs, but have lifetimes and apparent lengths similar to, or at the larger end of, those of most PJs. They flash repeatedly at the same location and clearly display signatures in the transition region (in the AIA 1600, 304, 171 and 193 channels). However, no pure coronal signature is detected in AIA 94 \AA. None of the penumbral jets seem to be heated enough to display a signature in the hot corona; however, all of the larger ones do appear in transition region emission. These results should be verified with higher resolution data from future Hi-C flights and the Solar-C mission.
In aggregate, most PJs and tail PJs apparently do not directly produce appreciable coronal heating (normal PJs because of the lack of coronal signatures, larger PJs because of their sparsity and/or lack of pure coronal signatures), but they could conceivably contribute significantly to coronal heating via braiding of the coronal field rooted in and around them, or by the production of Alfv\'en waves.
We propose a modified picture of the formation mechanism of PJs in light of recent advances in the observed magnetic structure of sunspot penumbral filaments. It is more likely that PJs are formed by magnetic reconnection between the spines and the opposite-polarity field at the edges of penumbral filaments, along each side of the filaments, as found in part in the MHD simulations of \cite{maga10}, than by the earlier-proposed reconnection between two inclined components of same-polarity field. The larger/tail PJs appear to form as a result of magnetic reconnection between the opposite-polarity fields of the spines and of the filament tails.
\acknowledgments
We are grateful to the referee for constructive comments, which resulted in major modification and improvement of the paper. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (U.K.), NASA, ESA, and NSC (Norway). The AIA and HMI data are courtesy of NASA/SDO and the AIA and HMI science teams. MSFC/NASA led the Hi-C mission and partners include the SAO in Cambridge, Mass.; LMSAL in Palo Alto, Calif.; the UCLan in Lancashire, England; and the LPIRAS in Moscow. SKT is supported by appointment to the NASA Postdoctoral Program at the NASA/MSFC, administered by ORAU through a contract with NASA. For this work SEA was supported by the National Science Foundation under Grant No. AGS-1157027. ARW and RLM are supported by funding from the LWS TRT Program of the Heliophysics Division of NASA's SMD.\\
\section{Introduction}
Precision measurements play a fundamental role in physics, as they constitute a key ingredient of many state-of-the-art applications and experiments testing the limits of scientific theories.
But the accuracy to which such measurements can be performed is itself governed by the laws of physics, and, ultimately, by those of quantum mechanics \cite{giovannetti2004quantum, giovannetti2006quantum, giovannetti2011advances,toth2014quantum}.
A generic measurement protocol for estimating the value of an unknown parameter consists of preparing a probe in a desired initial state, allowing it to interact with the physical system whose state depends on the parameter and, finally, obtaining a measurement result that encapsulates the information about it. This process, however, is often affected by systematic and statistical errors. While the source of the former may stem from imperfect calibration of the measurement instruments, the origin of the latter can either be accidental, due to insufficient control of the measurement chain, or fundamental, deriving from the nature of the physical measurement \cite{helstrom1969quantum, holevo2011probabilistic}. Fortunately, statistical errors, regardless of their origin, can be minimized by repeating the process and averaging the resulting outcomes, as a consequence of the central limit theorem \cite{cramer2016mathematical}. This theorem states that, given a large number $N$ of independent measurement results, their average will converge to a Gaussian distribution with standard deviation scaling as $1/\sqrt{N}$. In metrology, this is referred to as the Standard Quantum Limit (SQL).
For many practical applications that involve noisy and error-prone systems, it is essential to devise protocols that yield precision levels close to the SQL. However, it is known that this limit is not fundamental -- the ultimate limit set by the laws of quantum physics is the Heisenberg Limit (HL) \cite{braginski1975, braginsky1980quantum, braginsky1995quantum, ozawa1989realization}.
Recently, several systems have achieved sufficient levels of technological maturity to allow the experimental exploration of these limits. Due to the non-idealities present in these systems, they are typically referred to as NISQ (noisy intermediate-scale quantum) systems. Largely, the motivation for this development has been rooted in quantum information processing, where efficient and precise measurements and state manipulation are required \cite{Buluta_2011}, and indeed significant progress towards the implementation of quantum gates and algorithms has been made in optical setups \cite{Pryde_2019, Flamini_2018}, NV centers in diamond \cite{Wrachtrup_2006, Prawer_2018}, trapped ions \cite{Lange2012, Sage_2019} and superconducting circuits \cite{Paraoanu2014, Nori_review_2011}.
Since it employs similar control and measurement techniques as quantum computing, the exploration of quantum-enhancing techniques has grown as a separate field, usually referred to as quantum metrology \cite{degen_2017,paris2009quantum}, with applications such as ultrasensitive force detection \cite {biercuk_2010}, adaptive environment sensing \cite{Scerri_2020}, near-surface electric field measurements \cite{brownnutt_2015}, sensing of weak signals \cite{review_Naderi} and even detection of gravitational waves
\cite{abadie2011gravitational}.
The paradigmatic protocol in quantum metrology is quantum phase estimation. The quantum phase is a parameter that cannot be directly measured and yet, it contains information about other quantities of interest, such as electric or magnetic fields.
The traditional quantum metrology approach to phase estimation has been through the use of highly entangled states such as NOON states \cite{giovannetti2004quantum, giovannetti2011advances, Smerzi_RMP}, as well as other specially optimized states \cite{berry2000optimal,berry2001optimal,berry2009perform}. However, highly entangled states tend to be very sensitive to noise: for NOON states even the loss of a single qubit (photon) results in a separable state. Thus, the implementation of this method is challenging with respect to real-life applications. Fortunately, it has been later realized that precise quantum phase estimation does not necessarily require entanglement. In optics, it was demonstrated that adaptive homodyne phase measurements yield uncertainties close to the quantum uncertainty of coherent states \cite{Wiseman1995,Mabuchi2002}. It was also shown how to adaptively estimate the angle of a half waveplate that characterizes the linear polarization of photons \cite{Fujiwara_2006,Takeuchi2012} as well as how to perform phase-shift estimation with polarization-encoded qubits \cite{Paris2010}. Furthermore, the inverse quantum Fourier transform, which is the last step in Shor's factorization algorithm and which typically requires gate entangling, can be carried out using local measurements and feedback \cite{PhysRevLett.76.3228}. This approach has been used to break the SQL in experiments with photons \cite{higgins_2007}, superconducting qubits \cite{danilin2018quantum} and NV centers in diamond \cite{Hanson2016}.
The incorporation of machine learning techniques in estimation protocols is a natural step forward. Seminal theoretical work employing reinforcement learning algorithms has demonstrated the potential of these methods for reaching sensitivities below the SQL when used in conjunction with entanglement \cite{Sanders2010, Sanders2011, Lovett2013, palittpongarnpim2016single, palittapongarnpim2016controlling, palittapongarnpim2017learning, palittapongarnpim2018robustness}. Recently, some of these methods have been tested experimentally in an optics setup, with their estimation precision being limited by the SQL \cite{Lumino2018}. This further underscores the potential of applying machine learning to the optimization of parameter estimation under resource and noise-level constraints, but the foundations of these methods are still poorly understood. In mathematical statistics, the limits of machine learning algorithms are an active area of investigation -- for example, deriving bounds on the number of samples needed to reach a certain accuracy \cite{Wossnig2020}. However, the issue at stake is that typically all the mathematical results, including the SQL, are derived in the so-called central limit $N \gg 1$ and for ideal noiseless systems. Yet the relevant regime for the present status of quantum technologies in the NISQ era is that of moderate $N$ and significant noise. In this context no general unbiased estimator has been found yet \cite{Smerzi_RMP}.
The objective of this work is to employ two machine learning algorithms, the Differential Evolution (DE) and the Particle Swarm Optimization (PSO) algorithm, in the design of adaptive estimation schemes capable of driving the measurement precision close to the SQL. We numerically show that the SQL remains a valid bound even in the regime of not-too-large $N$. We also demonstrate that machine learning algorithms can be used even in the presence of various experimental noises, consequently providing robust policies with better performance than non-adaptive protocols. For practical devices such as magnetometers based on single qubits, these methods can be applied directly. More precisely, we observe that one straightforward option for increasing the precision of these devices is to increase the sensitivity to the parameter to be estimated by, for example, using higher-energy states or by increasing the response using a different biasing point. However, both of these strategies result in an increased coupling to noise sources, and therefore a compromise needs to be reached in order to achieve the maximum performance. This will be further detailed in the paper when analyzing the experimental realizations.
Both the DE and the PSO algorithm can also be employed as subroutines to enhance other quantum algorithms. For example, algorithms that use variable sensing times, or multipass techniques, are in principle able to break the SQL and reach the HL. Instead of averaging at every sensing time, which is the typical approach, one can further increase the sensitivity by using our technique. Beyond quantum metrology, machine learning protocols could become useful in other paradigmatic quantum-information problems that involve phase estimation, such as factorization, sampling, and computation of molecular spectra \cite{NielsenChuang}. In particular, our calculations are relevant for NISQ quantum technologies, where the number of qubits is limited and subject to errors and noise. Overall, by benchmarking these two important protocols for the task of quantum phase estimation, we hope that machine learning algorithms will be more prominently employed in applications such as optical interferometry and magnetometry, where the increase of precision is essential and where the aim is set on reaching the HL.
The paper is organized as follows. Section II describes the general concept of adaptive quantum phase estimation in Ramsey interferometry, discussing the updating procedure as well as the relevant sources of noise. Section III presents the two machine learning algorithms, the DE and the PSO algorithm. Section IV presents our main results, where we show how the two algorithms allow us to approach the SQL. In this section we also provide an analysis of the effects of Gaussian noise, Random Telegraph noise (RTN) and quantum decoherence on the performance of the algorithms. Section V discusses the implementation of our protocol on several experimental platforms, namely Mach-Zehnder optical interferometers, superconducting qubits, trapped ions and defects in diamond. Finally, we present our conclusions in Section VI.
\section{Adaptive Quantum Phase Estimation Scheme}
To perform the estimation of an unknown phase $\phi$, we use a generic adaptive scheme that is able to adjust the value of a control phase $\theta$ to match the value of $\phi$, based on the results of previous measurements in a Ramsey interferometric sequence.
\begin{figure}[h!]
\centering
\includegraphics[width=300pt]{protocol.png}
\caption{Adaptive quantum phase estimation scheme. A qubit $m$ is injected into the quantum circuit either in the quantum state $\ket{0}$ or $\ket{1}$ and its measured outcome $\zeta_m \in \{0,1\}$ is used together with some policy phase instruction $x_m \in \left[0,2\pi\right[$ to prepare the circuit for the next round of measurements. The quantum circuit consists of two Hadamard gates $\mathrm{H}$, a phase shift gate $\mathrm{U}_{\phi ,\theta}$, a detector D, and a processing unit with phase update instructions for the controllable phase shifter.}
\label{fig:aqem}
\end{figure}
The schematic representation of the adaptive quantum phase estimation scheme displayed in Fig. \ref{fig:aqem} consists of a quantum circuit made of a Hadamard gate $\mathrm{H}$, a phase shift gate $\mathrm{U}_{\phi ,\theta}$, another Hadamard gate $\mathrm{H}$ and a detector augmented with a processing unit to calculate the value of $\theta$ for the next round of measurements. Using standard notations in quantum information, the Hadamard and phase shift gates can be respectively defined as
\begin{equation}
\mathrm{H} = \frac{1}{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\ 1& -1
\end{bmatrix}
\quad \textrm{and} \quad
\mathrm{U}_{\phi ,\theta} =
\begin{bmatrix}
1 & 0 \\ 0 & e^{i (\phi-\theta)}
\end{bmatrix}.
\label{umatrix}
\end{equation}
Under this adaptive quantum estimation scheme, an ensemble of $N$ qubits is injected sequentially into the circuit, each in a randomly chosen quantum state, either $\ket{0}$ or $\ket{1}$, and each measured outcome is used to prepare the value of the controllable phase shifter for the next incoming qubit. The input state of the quantum circuit takes the form $\ket{0}_{1}\otimes \ket{1}_{2} \otimes \ket{1}_{3} \otimes \ldots \otimes \ket{0}_{N}$, which is manifestly separable. After the last ($N$th) qubit is injected and its outcome measured, the final value of the controllable phase $\theta$ is taken to be the estimate of $\phi$. The initial states of the qubits can be represented in Dirac notation as
\begin{equation*}
\ket{0} = \begin{bmatrix} 1 \\0 \end{bmatrix}
\quad \textrm{and} \quad
\ket{1} = \begin{bmatrix} 0 \\1 \end{bmatrix}.
\end{equation*}
The idea is to have at the end of the process an estimated controllable phase value $\theta$ as close as possible to the value of the unknown phase $\phi$. The state of each qubit after the second Hadamard gate is
\begin{equation*}
\ket{\psi_{\pm}} = \frac{1}{2} \left[ 1 \pm e^{i (\phi - \theta)} \right] \ket{0} + \frac{1}{2} \left[ 1 \mp e^{i (\phi - \theta)} \right] \ket{1},
\end{equation*}
where the upper sign corresponds to a qubit whose initial state was $\ket{0}$ and the lower sign to one whose initial state was $\ket{1}$. This yields measurement results $\zeta \in \{0,1\}$ with probabilities
\begin{equation}
\mathcal{P}_{+}(\zeta = 0|\phi , \theta ) =
\mathcal{P}_{-}(\zeta = 1|\phi , \theta ) =
\cos^{2}\frac{\phi - \theta }{2},
\label{eq:prob0}
\end{equation}
and
\begin{equation}
\mathcal{P}_{+}(\zeta = 1|\phi , \theta ) =
\mathcal{P}_{-}(\zeta = 0|\phi , \theta ) =
\sin^{2}\frac{\phi - \theta}{2}.
\label{eq:prob1}
\end{equation}
If the phase difference $\phi - \theta$ is zero, then $\theta$ approximates very well the unknown phase $\phi$ and a qubit prepared in state $\ket{0}$ will also exit in state $\ket{0}$. Similarly, the same logic can be applied for a qubit injected in the quantum state $\ket{1}$.
In order to estimate the phase $\phi$, at every step $m$ of the algorithm the controllable phase $\theta_{m}$ has to be updated based on the outcome $\zeta_{m}$ of the latest measurement. Here we adopt a standard updating rule
\begin{equation}
\theta_m = \theta_{m-1} + (-1)^{\zeta_m} x_m,
\label{eq:update}
\end{equation}
which has already been employed successfully in similar setups \cite{palittpongarnpim2016single, palittapongarnpim2016controlling, palittapongarnpim2017learning}, and where the initial value of $\theta$ can be fixed to $\theta_0=0$ without any loss of generality.
Note that the term $x_m$ in Eq. (\ref{eq:update}) represents the update policy, which will be determined by the machine learning algorithms. This update rule can be viewed as a decision tree where, for each qubit $m$, the controllable phase shifter must update its value $\theta_m$ either by adding or by subtracting $x_m$, depending on the measured outcome state $\zeta_{m} \in \{0,1\}$ of that qubit. Consequently, for $N$ qubits the number of possible $\theta_m$ values grows exponentially, as $2^{(N+1)}-1$. This is a Markovian feedback, since the new controllable phase $\theta_m$ depends only on the latest measured outcome state $\zeta_{m}$.
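For concreteness, a minimal Python simulation of one run of this feedback loop, assuming the ideal measurement probabilities of Eqs. (\ref{eq:prob0}) and (\ref{eq:prob1}) and an arbitrary trial policy:
\begin{verbatim}
import numpy as np

def run_policy(phi, x, rng):
    """One adaptive run: x is the policy vector (x_1, ..., x_N);
    returns the final estimate theta_N."""
    theta = 0.0
    for x_m in x:
        s = rng.choice([+1, -1])       # input |0> (+) or |1> (-)
        p0 = 0.5 * (1 + s * np.cos(phi - theta))  # P(zeta = 0)
        zeta = 0 if rng.random() < p0 else 1
        # update rule: theta_m = theta_{m-1} + (-1)^zeta_m x_m
        theta += (-1) ** zeta * x_m
    return theta

rng = np.random.default_rng(1)
x = 2.0 ** (-np.arange(10))            # an ad hoc trial policy
print(run_policy(1.234, x, rng))
\end{verbatim}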
To evaluate the performance of a policy, we use the Holevo variance \cite{berry2009perform,berry2001optimal,berry2000optimal}, defined as
\begin{equation}
V_{\rm H} = S^{-2}-1 = |\langle e^{i(\phi-\theta)}\rangle|^{-2}-1,
\label{eq:holevo}
\end{equation}
where $\langle e^{i(\phi-\theta)} \rangle$ represents the average value of $e^{i(\phi-\theta)}$ over the different phase values $\phi$, and their respective estimates $\theta$, considered in the learning process of the machine learning algorithms. Here we abbreviate $\theta = \theta_N$, since the Holevo variance of a policy can only be calculated after the last qubit $N$ is injected into the circuit and its outcome measured. The quantity $S = |\langle e^{i(\phi-\theta)} \rangle| \in [0,1]$ is called the sharpness of the phase distribution, where the value $S=1$ corresponds to a perfect estimation of $\phi$. For periodically bounded variables such as the phase, the Holevo variance is a direct measure of the standard deviation through $(\Delta\phi)^2=V_{\rm H}$. Therefore, the SQL corresponds to $V_{\rm H} \sim 1/N$.
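Given the single-run simulator sketched above, the Holevo variance of a candidate policy can be estimated by a Monte Carlo average over random phases, for example:
\begin{verbatim}
import numpy as np  # run_policy as defined in the earlier sketch

def holevo_variance(x, runs=10000, seed=2):
    """Monte Carlo estimate of V_H for policy x, averaging
    exp[i(phi - theta)] over uniformly random phases phi."""
    rng = np.random.default_rng(seed)
    phis = rng.uniform(0.0, 2.0 * np.pi, runs)
    s = np.mean([np.exp(1j * (phi - run_policy(phi, x, rng)))
                 for phi in phis])
    return 1.0 / np.abs(s) ** 2 - 1.0

print(holevo_variance(2.0 ** (-np.arange(10))))
\end{verbatim}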
It is also important for the model of the designed adaptive quantum phase estimation scheme to include the possibility of errors and imperfections that occur in real experimental situations. This provides an important test of the robustness of the algorithms to relatively general sources of noise encountered on most experimental platforms.
The first sources of noise to be considered are those in the application of the controlled unitary phase shift transformation $\mathrm{U}_{\phi ,\theta}$, namely Gaussian and Random Telegraph noise. Note that Random Telegraph noise is particularly relevant for experiments involving solid-state qubits.
In the scenario of Gaussian noise, the noise follows a normal distribution parametrized by the standard deviation $\sigma$. Letting $\theta^{{\scriptscriptstyle\mathrm{GSN}}}_{m}$ represent the actual value of the controllable phase shifter subjected to the noise, the Gaussian noise distribution can be defined as:
\begin{equation}
p(\theta^{{\scriptscriptstyle\mathrm{GSN}}}_{m}) = \frac{1}{\sqrt{2\pi}\sigma}
\exp\left[-\frac{1}{2\sigma^2}(\theta^{{\scriptscriptstyle\mathrm{GSN}}}_{m}-\theta_{m})^2\right].
\label{eq:gaussian}
\end{equation}
In the scenario of Random Telegraph noise, the noise follows a discrete distribution where at each time step the controllable phase shifter value can be randomly offset by a fixed value $\lambda$ with probability $\eta$. Letting $\theta^{{\scriptscriptstyle\mathrm{RTN}}}_{m}$ represent the value of the controllable phase shifter subject to this source of noise, the Random Telegraph noise distribution can be described as:
\begin{equation}
p(\theta^{{\scriptscriptstyle\mathrm{RTN}}}_{m}) =
\begin{cases}
1 - \eta , & \text{$\theta^{{\scriptscriptstyle\mathrm{RTN}}}_{m} = \theta$}_{m}\\
\eta , & \text{$\theta^{{\scriptscriptstyle\mathrm{RTN}}}_{m} = \theta_{m} + \lambda$}.
\end{cases}
\label{eq:rtn}
\end{equation}
The other important type of noise to be considered affects the qubit itself. This modifies the unitary evolution generated by a Hermitian Hamiltonian $\mathcal{H}$ to a non-unitary evolution given by the Lindblad master equation
\begin{equation}
\dot{\rho} = -\frac{i}{\hbar} [\mathcal{H}, \rho ] + \Gamma_{1} \mathcal{D}[\sigma_{-}](\rho ) +
\frac{\Gamma_{\varphi}}{2} \mathcal{D}[\sigma_{z}](\rho )
\label{eq:Lindblad}
\end{equation}
where $\Gamma_{1}$ is the relaxation rate, $\Gamma_{\varphi}$ is the pure dephasing rate and
the dissipation superoperator $\mathcal{D}$ acting on the density matrix $\rho$ is defined by
$\mathcal{D}[A](\rho ) = A \rho A^{\dag} - \frac{1}{2} \{A^{\dag}A, \rho \}$. It is also useful to introduce the standard notations $T_{1}$ and $T_{2}$ for the relaxation and decoherence time,
$T_1 = 1/\Gamma_{1}$ and $T_{2} = 1/\left ( \Gamma_{1}/2 + \Gamma_{\varphi}\right)$.
To implement the phase gate $U_{\phi - \theta_{m}}$ from Eq. (\ref{umatrix}) the Hamiltonian must be of $\sigma_z$ type, with a component for the unknown phase $\phi$ and another one for the control $\theta_{m}$. Typically, for Ramsey experiments with a phase accumulation time $\tau$, we have $\mathcal{H}_{m}=\frac{\hbar}{2}(\phi/\tau - \theta_{m}/\tau)\sigma_{z}$ at the step $m$, with $U_{\phi - \theta_{m}} = \exp [- i \mathcal{H}_{m} \tau /\hbar]$, up to a global phase factor. The solution of Eq. (\ref{eq:Lindblad}) is a $2 \times 2$ matrix with elements $\rho_{00}(\tau ) = 1- \exp (-\tau /T_{1})\rho_{11}(0)$, $\rho_{01}(\tau ) = \exp (-i \phi + i \theta_{m} -\tau /T_{2})\rho_{01}(0)$, $\rho_{10}(\tau ) = \exp (i \phi - i \theta_m -\tau /T_{2})\rho_{10}(0)$ and $\rho_{11}(\tau ) = \exp (- \tau /T_{1})\rho_{11}(0)$. If the state at $\tau=0$ is prepared by the action of the Hadamard gate from either $|0\rangle$ or $|1\rangle$, corresponding respectively to the $+$ and $-$ signs below, and at the final time $\tau$ we apply again a Hadamard gate, we obtain that at every step $m$ of the algorithm the probabilities are modified as
\begin{equation}
P_{\pm}(\zeta_{m}|\phi, \theta_{m}) = \frac{1}{2} \left[1 \pm (-1)^{\zeta_{m}} \nu \cos (\phi - \theta_{m}) \right],
\label{eq:decoherence}
\end{equation}
where $\nu = \exp( - \tau /T_{2})$ is called the interference visibility. One can check that for maximum visibility, $\nu=1$, we recover Eqs. (\ref{eq:prob0}) and (\ref{eq:prob1}). Further considerations can be found in Appendix \ref{apx:holevo}.
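In a simulation, the measurement statistics of Eq. (\ref{eq:decoherence}) can be reproduced by drawing each outcome $\zeta_m$ from the corresponding Bernoulli distribution. A minimal Python sketch (the $+$ sign corresponds to preparation from $|0\rangle$; the function name is ours):
\begin{verbatim}
import numpy as np

def ramsey_outcome(phi, theta_m, nu=1.0, sign=+1,
                   rng=np.random.default_rng()):
    # Probability of outcome zeta_m = 0 from the noisy probability
    # formula; nu = 1 recovers the noiseless case.
    p0 = 0.5 * (1.0 + sign * nu * np.cos(phi - theta_m))
    return 0 if rng.random() < p0 else 1
\end{verbatim}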
\section{Machine Learning Algorithms}
The problem of quantum phase estimation relies on a sequential and cumulative set of measurements to drive the estimation process, thus making it an ideal problem for reinforcement learning algorithms. In this work, we considered Differential Evolution (DE) \cite{storn1997differential, price2006differential} and Particle Swarm Optimization (PSO) \cite{kennedy2011particle, eberhart1995new, shi1998modified} among the available reinforcement learning algorithms, as they are the most commonly employed for similar tasks in the literature \cite{Sanders2010, Sanders2011, Lovett2013, palittpongarnpim2016single, palittapongarnpim2016controlling, palittapongarnpim2017learning, palittapongarnpim2018robustness}.
These algorithms apply a direct search method to the exploration of the search space generated by all possible policy configurations. Direct search methods use a greedy criterion to drive their exploration, which guarantees fairly fast convergence times, even though this speed often comes at the expense of becoming trapped in a local minimum. This is a result of the greedy criterion promoting decisions that lead to more immediate rewards, usually to the detriment of fully exploring the total search space. To avoid this scenario, it is important to perform a thorough study of the controllable parameters of each of the mentioned algorithms before applying them to the quantum phase estimation task. There are two fundamental criteria for evaluating the performance of an algorithm: its ability to converge to a solution within the imposed number of iterations, and its ability to converge to a valid solution.
\subsection{Differential Evolution}
The implementation of the DE algorithm for the current problem starts with a set of $P$ populations, each representing a candidate solution for the adaptive scheme update policy. Each of these populations is a vector of size $N$, with each entry representing a new phase instruction to prepare the controllable phase shifter for each of the qubits being injected into the system. At each iteration, the DE algorithm applies its direct search method to $P$ $N$-dimensional vectors $x^{G}_{i,j}$, where $i \in \lbrace 1, 2, ..., P \rbrace$ labels the candidate solution vectors, $j \in \lbrace 1, 2, ..., N \rbrace$ labels their entries, and $G$ represents the generation of the population vectors.
Each of these vectors is initialized with random values for each entry in the interval $x^0_{i,j} \in [0, 2\pi]$. Afterwards, at each iteration, the DE algorithm generates possible new candidate solution vectors for the next generation by adding the weighted difference between four population vectors to a fifth vector. This process is referred to as mutation:
\begin{equation*}
\tilde{u}^{G+1}_{i,j} = x^{G}_{r_1,j} + F \cdot (x^{G}_{r_2,j} + x^{G}_{r_3,j} - x^{G}_{r_4,j} - x^{G}_{r_5,j}).
\end{equation*}
Here $F$ represents a constant value in the interval $[0,1]$ which controls the amplification of the difference between the considered populations. Hence, $F$ will be referred to as the amplification parameter. Note as well that all indexes $\lbrace r_1, ..., r_5\rbrace$ are randomly chosen integer values always different between themselves. At this point, the entries of the newly mutated vectors $\tilde{u}^{G+1}_{i,j}$ are randomly mixed with the originally corresponding vectors to increase their diversity. This process is referred to as crossover:
\begin{equation*}
\tilde{x}^{G+1}_{i,j} = \left\{
\begin{array}{ll}
x^{G}_{i,j} \quad \quad \text{, if $R_1 > C$ and $j \neq R_2$}\\
\tilde{u}^{G+1}_{i,j} \quad \; \text{, if $R_1 \le C$ or $j = R_2$}.
\end{array}
\right.
\end{equation*}
This process is controlled by the crossover parameter $C$ which can take any value in the interval $[0,1]$. Hence, a crossover only occurs if the random value $R_1$, which is generated for each population member at each iteration, is below or equal to the chosen crossover parameter. The value of $R_2$ is an integer randomly chosen for each population at each iteration of the evaluation process to ensure that at least one entry from the newly mutated vectors $\tilde{u}^{G+1}_{i,j}$ is passed to the trial vector for the next generation $\tilde{x}^{G+1}_{i,j}$.
Finally, the new trial vectors are compared against the population vectors of the previous generation to see which perform best against the cost function of the problem and, as a result, become a member of the next generation of populations. This process is referred to as selection:
\begin{equation*}
x^{G+1}_{i,j} = \left\{
\begin{array}{ll}
x^{G}_{i,j} \quad \quad \text{, if $f\big(x^{G}_{i,j}\big) < f\big(\tilde{x}^{G+1}_{i,j}\big)$}\\
\tilde{x}^{G+1}_{i,j} \quad \; \text{, if $f\big(x^{G}_{i,j}\big) \ge f\big(\tilde{x}^{G+1}_{i,j}\big)$}.
\end{array}
\right.
\end{equation*}
Here $f(\cdot)$ represents the cost function associated to the quantum phase estimation which is defined by the Holevo variance in Eq. (\ref{eq:holevo}). Therefore, if the new trial vectors $\tilde{x}^{G+1}_{i,j}$ minimize this cost function when compared to the previous generation vectors $x^{G}_{i,j}$, then they become part of the next generation of populations. Otherwise, the previous generation populations survive for the next generation. This entire process illustrates the adapted DE algorithm implemented in this work and is schematically represented in Fig. \ref{fig:DEstages}.
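The three stages can be condensed into the following sketch of the adapted DE loop (Python/NumPy). This is a schematic transcription of the equations above rather than a reference implementation; the cost function $f$ stands for the Holevo-variance evaluation of a policy.
\begin{verbatim}
import numpy as np

def differential_evolution(f, N, P=20, F=0.7, C=0.8, G=100, seed=0):
    # f: cost of a policy vector (here, the Holevo variance).
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 2.0 * np.pi, size=(P, N))   # x^0_{i,j}
    cost = np.array([f(xi) for xi in x])
    for _ in range(G):
        for i in range(P):
            # Mutation: weighted difference of four vectors added
            # to a fifth, with mutually distinct random indices.
            r1, r2, r3, r4, r5 = rng.choice(P, size=5, replace=False)
            u = x[r1] + F * (x[r2] + x[r3] - x[r4] - x[r5])
            # Crossover: mix mutated entries into the original
            # vector; entry R2 is always taken from the mutant.
            mask = rng.random(N) <= C
            mask[rng.integers(N)] = True
            trial = np.where(mask, u, x[i])
            # Selection: keep whichever vector scores better.
            c_trial = f(trial)
            if c_trial <= cost[i]:
                x[i], cost[i] = trial, c_trial
    return x[np.argmin(cost)]
\end{verbatim}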
\begin{figure}[h!]
\centering
[width=300 pt]{de_overview_of_stages}
\caption{Overview of the three main stages of the DE algorithm. (a) Mutation: each entry of the candidate solution vector is mutated, generating a new mutated candidate solution vector; (b) Crossover: a new candidate solution vector is created with entries from the original and the newly created mutated vector; (c) Selection: the new and the original candidate solution vector are tested against the cost function and the one with the best results is propagated for the next generation.}
\label{fig:DEstages}
\end{figure}
\subsection{Particle Swarm Optimization}
The implementation of the PSO algorithm starts with a set of $P$ particles, each representing an individual candidate solution to the update policy of the adaptive scheme. Each particle can move in the $N$-dimensional search space associated with the $N$ different phase instructions of the controllable phase shifter for each of the input qubits. Therefore, each particle will be represented by a position vector $x^{G}_{i,j}$ and velocity vector $v^{G}_{i,j}$, where $i \in \lbrace 1, 2, ..., P \rbrace$ and $j \in \lbrace 1, 2, ..., N \rbrace$ represent each entry and $G$ the generation of the vectors. Note that the position vector $x^{G}_{i,j}$ corresponds to a candidate solution vector, while the velocity vector $v^{G}_{i,j}$ represents the change in direction of the corresponding position vector in the search space.
All entries of these vectors are initialized with random values in the interval $[0, 2\pi]$. At each iteration, each particle evaluates its current position according to the cost function of the search problem, defined by the Holevo variance in Eq. (\ref{eq:holevo}), and compares its value with the positions previously visited by itself and with the positions previously visited by the entire collective of particles. If the current position is better than its own previously visited positions, the particle stores it in a variable $pbest_i$, where $i \in [1, ..., P]$ is the identifier of that particle. If, in addition, the current position is better than all of the other previously visited positions by the entire collective ensemble, the same position is stored in a variable $gbest$ shared among all other particles. This process is illustrated in Fig. \ref{fig:PSOillustration}.
Both of these variables will determine the entire exploration of the search space by the $P$ particle candidate solutions. After each iteration, each particle will use this collective knowledge to adjust its displacement for the next turn according to
\begin{equation*}
x^{G+1}_{i,j} = x^{G}_{i,j} + w \cdot v^{G+1}_{i,j}
\end{equation*}
and
\begin{equation*}
v^{G+1}_{i,j} = v^{G}_{i,j} + \alpha \cdot R_a \cdot (pbest_{i,j}-x^{G}_{i,j}) + \beta \cdot R_b \cdot (gbest_j-x^{G}_{i,j})\textrm{.}
\end{equation*}
Here the parameter $\alpha$ controls the desirability of each particle to move towards its best found position, while the parameter $\beta$ controls the desirability of each particle to move towards the best found solution by the entire collective. Both $R_a$ and $R_b$ are uniformly distributed random values in the interval $[0,1]$. In addition, the parameter $w$ works as a damping weight controlling the change of direction imposed by the new velocity vector at the current position of each particle.
To steer the direction of each particle to a converging candidate policy solution and to avoid overstepping the found minima, an additional parameter $v_{max}$ is imposed to the algorithm which determines the maximum value that each entry in the velocity vector may take. As the algorithm advances in its search for the optimal policy solution, this parameter will decay with the number of iterations, consequently reducing the step size of each particle and forcing them to converge to a solution. This final adjustment completes the description of the adapted PSO algorithm implemented in this work.
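A compact sketch of this adapted PSO loop is given below (Python/NumPy). The exact decay schedule of $v_{max}$ is not fully specified in the text, so a linear decay with the iteration number is assumed here; as before, $f$ stands for the Holevo-variance cost of a policy.
\begin{verbatim}
import numpy as np

def particle_swarm(f, N, P=20, alpha=0.8, beta=0.8, w=0.8,
                   v_max=0.2, G=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 2.0 * np.pi, size=(P, N))   # positions
    v = rng.uniform(0.0, 2.0 * np.pi, size=(P, N))   # velocities
    pbest = x.copy()
    pcost = np.array([f(xi) for xi in x])
    gbest = pbest[np.argmin(pcost)].copy()
    for g in range(G):
        vmax_g = v_max * (1.0 - g / G)   # assumed linear decay
        Ra = rng.random((P, N))
        Rb = rng.random((P, N))
        v = v + alpha * Ra * (pbest - x) + beta * Rb * (gbest - x)
        v = np.clip(v, -vmax_g, vmax_g)
        x = x + w * v
        cost = np.array([f(xi) for xi in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[np.argmin(pcost)].copy()
    return gbest
\end{verbatim}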
\begin{figure}[h!]
\centering
[width=350pt]{pso_mesh_topology.png}
\caption{Visual representation of the PSO algorithm. Initially, all the particles representing the different candidate solution vectors are initialized at random positions with random velocities. At each iteration the particles explore the search space of the problem, eventually converging around the collectively found global optimum. The particles are able to share knowledge regarding the best-found position with other particles, which is generically represented by the red fuzzy lines. In our implementation the topology of inter-particle communication is such that all particles are able to share information with every other particle.}
\label{fig:PSOillustration}
\end{figure}
\subsection{Parameter Analysis}
It is important to understand the behaviour and performance of these two algorithms under the different possible configurations of their controllable parameters. To verify the convergence of the solutions, it is important to remember that at each iteration of both algorithms there are $P$ different candidate solutions, each one represented by a given population of $N$ phase value policies. As the algorithm iterates, the candidate solutions should move closer to each other until converging to a final solution. Thus, one way of quantifying this convergence is to calculate, for each entry, the deviation of each candidate solution from the average value of that entry, and then average these deviations. To do so, the convergence value $L$ is defined as
\begin{equation}
L = \frac{1}{N}\sum^N_{j=1}\left(\frac{1}{P}\sum^P_{i=1}\left|\bar{x}_j-x_{i,j}\right|\right)
\textrm{,}
\label{eq:dispersion}
\end{equation}
where $\bar{x}_j$ corresponds to the average value of entry $j$ over all the candidate solutions and $x_{i,j}$ corresponds to entry $j$ of candidate solution vector $i$; the absolute value prevents positive and negative deviations from cancelling out. Therefore, lower values of $L$ occur when all the candidate solutions are relatively close to each other and the algorithm has converged to a solution. On the other hand, larger values of $L$ indicate that the algorithm was not able to converge to a solution at the given iteration.
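In code, this convergence measure reduces to a single expression (Python/NumPy; \texttt{x} is the $P \times N$ array of candidate solutions):
\begin{verbatim}
import numpy as np

def convergence(x):
    # Mean absolute deviation: average over all entries of
    # |per-entry mean over candidates - individual value|.
    return np.mean(np.abs(x.mean(axis=0) - x))
\end{verbatim}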
The algorithms should also converge to a valid solution. It is not enough for the algorithms to converge to a solution if that solution is not the correct one. Letting $K$ represent the total number of different phase values of $\phi$ considered in the learning task imposed to the machine learning algorithms, the performance of a policy can be evaluated by the Holevo variance in Eq. (\ref{eq:holevo}). This equation, however, is computationally expensive in its current form, so instead we can approximate it numerically \cite{palittapongarnpim2017learning,berry2000optimal,Sanders2011} to reduce computational time. A more efficient evaluation of the Holevo variance can be described as
\begin{equation}
V_{H} =
S^{-2} - 1 =
\bigg\vert \frac{1}{K} \sum^K_{k=1} e^{i\left[\phi^{(k)}-\theta_{N}^{(k)}\right]}\bigg\vert^{-2}-1
\textrm{,}
\label{eq:holevo_simplified}
\end{equation}
where values for $\phi^{(k)}-\theta_{N}^{(k)}$ close to zero signify lower values of imprecision (sharpness $S\approx 1$) and therefore better performance.
Additionally, the performance $V_H$ of each candidate policy vector is evaluated $M=5$ separate times and the results averaged in order to smooth out small fluctuations. Repeating the simulation multiple times gives a more accurate representation of the performance of each policy and thus more consistent results. The number of training instances $K$ can be arbitrarily large, provided it does not unduly increase the computational time. A large number of training instances $K$ also ensures a faithful representation of $\phi$ in the interval $[0, 2\pi)$, since the instances are sampled uniformly at random from that interval. A reasonable choice that satisfies these criteria is $K=10N^2$, a number sufficiently large to guarantee the convergence of the algorithms \cite{palittapongarnpim2017learning}. Overall, the time complexity of the algorithms scales as $\mathcal{O}(P \cdot G \cdot N \cdot K \cdot M) \sim \mathcal{O}(N^3)$, which is polynomial in $N$.
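The evaluation of Eq. (\ref{eq:holevo_simplified}) is then straightforward; a sketch in Python/NumPy, where \texttt{phi} and \texttt{theta\_final} are length-$K$ arrays of true phases and final estimates:
\begin{verbatim}
import numpy as np

def holevo_variance(phi, theta_final):
    # Sharpness S: values of phi - theta_final close to zero
    # give S close to 1 and hence a small Holevo variance.
    S = np.abs(np.mean(np.exp(1j * (phi - theta_final))))
    return S**(-2) - 1.0
\end{verbatim}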
At this point it is possible to completely evaluate the performance of each algorithm according to the different possible configurations of each of their controllable parameters and choose those that achieve better results. While Eq. (\ref{eq:dispersion}) provides a measurement for the convergence of the algorithms, Eq. (\ref{eq:holevo_simplified}) shows the precision to which they are able to estimate the value of the unknown phase $\phi$.
A thorough study on the performance of both algorithms under the different possible controllable parameter configurations can be found in appendix \ref{apx:evolution} and appendix \ref{apx:optimization}. The optimal parameter configuration obtained for each algorithm is summarized in Table \ref{tab:optimal_parameters}.
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabularx}{0.75\textwidth}{c|cccccc}
\hline\hline
Algorithm & $F$ & $C$ & $\beta$ & $\alpha$ & $w$ & $v_{max}$ \\
\hline
Differential Evolution & $0.7$ & $0.8$ & - & - & - & - \\
Particle Swarm Optimization & - & - & $0.8$ & $0.8$ & $0.8$ & $0.2$\\
\hline\hline
\end{tabularx}
\caption{Optimal parameters for the DE and PSO algorithms obtained through the analysis described in Appendices \ref{apx:evolution} and \ref{apx:optimization}.}
\label{tab:optimal_parameters}
\end{table}
Along with a fixed configuration for each algorithm, it is also important to ensure that all the other variable parameters remain the same in order to draw comparable results. Thus, it is important that for the same number of input qubits $N$ being injected into the system, the population number of candidate solution vectors $P$ and the number of training instances $K$ remain the same under all different scenarios. As previously mentioned, the number of training instances was set to $K=10N^2$, while the number of populations was defined as $P=20+2\cdot {\rm int}(N/10)-1$, where ${\rm int}(\cdot)$ represents the integer part of the division. Note that for an increasing number of qubits being injected into the system, the population size of candidate solution vectors $P$ and the number of training instances $K$ must also increase to accommodate the increasing complexity of the problem search space.
Ideally, both algorithms would be allowed to run until all the different candidate solution vectors would have converged to a successful policy vector. However, due to time constraints the number of iterations for which both algorithms were allowed to run for each number $N$ of input qubits was set to $G=100$, regardless of having reached convergence or not. Thus, both algorithms would stop either when they had converged to a solution, or when they reached iteration $G=100$, and the final policy vector would be the average of all the different candidate solution vectors at that point.
\section{Results}
In order to provide a benchmark for the two machine learning algorithms discussed above, we first introduce a non-adaptive protocol that can also be run in the presence of noise. This protocol has the advantage of simplicity, and we find that for moderate noise values it yields results that are better than or comparable to those of the machine learning algorithms. On the other hand, for increased noise values the machine learning protocols show better results.
We start by discussing the ideal configuration where any source of noise or quantum decoherence is neglected; then we consider the Gaussian and Random Telegraph noise configurations and, finally, the visibility loss due to decoherence. These different sources of imperfection were applied to the quantum phase estimation scheme independently. The number of qubits was varied in the interval $N \in [5,25]$ under all the different scenarios, and all the remaining parameters were kept the same across scenarios. For $N>20$ we found that the algorithms were already taking more than five days to arrive at a successful policy vector, which ultimately made any larger value of $N$ computationally impractical within a reasonable amount of time.
The break in performance in non-ideal systems is clearer for larger values of noise, since increasing the noise in the estimation process leads to an increased complexity of the search space for both algorithms. Thus, for larger values of $N$ this added level of complexity becomes even more evident in the scattering of the obtained results. However, it is important to stress that when both algorithms had enough time to converge to a successful policy, they were able to perform with precision values close to the SQL, attesting to their resilience to noise in the quantum phase estimation scheme.
\subsection{A Non-Adaptive Protocol and the Standard Quantum Limit Benchmark}
Strictly speaking, the SQL is defined in the limit of large $N$. Since we do not work in this asymptotic regime, it is important to devise a non-adaptive protocol that reproduces the SQL for $N \gg 1$, yet also yields results at $N \approx 5-25$.
This protocol can be outlined as follows. We consider the random phase to be estimated, $\phi$, and a fixed control phase $\theta$. Based on these values, we calculate the probability $\mathcal{P}_{\pm}(0\vert \phi, \theta)$, see Eqs. (\ref{eq:prob0}), (\ref{eq:prob1}) or Eq. (\ref{eq:decoherence}) with $\nu =1$. Then, for each $N$ we generate $N$ uniformly random numbers in the interval $[0,1]$. If a random number is less than $\mathcal{P}_{\pm}(0\vert \phi, \theta)$ we add it to the $0$ bin, otherwise to the $1$ bin. Next, we count the number of elements in the $0$ bin, $N_{\pm}(0)$. Finally, we find an estimate of the phase, $\phi^{est}=\theta + \arccos(\pm 2N_{\pm}(0)/N\mp 1)$, and we calculate the Holevo variance with the exponent in Eq. (\ref{eq:holevo_simplified}) taken as $\exp[i(\phi-\phi^{est})]$.
The numbers of elements $N_{+}(0)$ and $N_{+}(1)=N-N_{+}(0)$ follow a binomial distribution, as shown in Fig. \ref{fig:Binomial}. This distribution is obtained from the estimation of two constant phases with 50 measurements each, repeated 250000 times.
This procedure is repeated $K=10 N^2$ times, with the phases $\phi$ uniformly distributed in the interval $[0,2\pi]$. The resulting Holevo variance is represented by the blue squares in Fig. \ref{fig:res_clean}. We have verified numerically that the non-adaptive method reproduces the SQL asymptotically. For example, at $N=100$ the difference
between the simulated $V_H$ and $1/N$ is 10\%, decreasing to 4\% at $N=800$. Also, as seen from the inset of Fig. \ref{fig:res_clean}, the slope for this non-adaptive protocol is equal to $-1.0265$, while for the ideal SQL this slope should be $-1$.
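The protocol is simple enough to state directly in code. The sketch below (Python/NumPy) is a literal transcription of the steps above for the $+$ sign; note that the handling of the $\arccos$ branch for phase differences above $\pi$ is left implicit in the description and is transcribed as written.
\begin{verbatim}
import numpy as np

def non_adaptive_vh(N, theta=0.0, seed=0):
    rng = np.random.default_rng(seed)
    K = 10 * N**2
    phases = np.empty(K, dtype=complex)
    for k in range(K):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        p0 = 0.5 * (1.0 + np.cos(phi - theta))
        n0 = np.count_nonzero(rng.random(N) < p0)   # the 0 bin
        # arccos returns values in [0, pi]; the branch choice for
        # offsets above pi is implicit in the text.
        phi_est = theta + np.arccos(2.0 * n0 / N - 1.0)
        phases[k] = np.exp(1j * (phi - phi_est))
    S = np.abs(phases.mean())
    return S**(-2) - 1.0
\end{verbatim}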
\begin{figure}[h!]
\centering
[width=380pt]{Binom.png}
\caption{Distribution of $N_{+}(0)$ outcomes for 25000 experiments of fixed phase estimation, for two values of $\mathcal{P}_{+}$.}
\label{fig:Binomial}
\end{figure}
The same procedure is used to simulate the variance in the presence of the different noises, as shown below.
\subsection{Adaptive Protocols in the Ideal Noiseless Configuration}
Before analysing the evolution of the performance obtained by the DE and PSO algorithms under an increasing number of qubits $N$, it is important to show that this increase in $N$ does indeed lead to better estimation values. To do so, the two algorithms were first allowed to converge to a given value of the unknown phase $\phi$ for different values of $N$. This experiment was repeated $1000$ times for each value of $N$ and, each time, the value of the estimated phase $\theta$ was recorded. Finally, these $1000$ different results of $\theta$ for each value of $N$ were fitted to a probability density function (PDF) and all centered around the same value $\theta = \pi$ to better compare the results. The results for each algorithm are displayed side by side in Fig. \ref{fig:probabilities}.
\begin{figure}[h!]
\centering
[width=380pt]{Without_noises.png}
\caption{Estimation precision based on the logarithm of the Holevo variance $\ln(V_H)$ of both the DE and PSO algorithms under the ideal configuration for $N\in[5,25]$ qubits.}
\label{fig:res_clean}
\end{figure}
\begin{figure}[h!]
\centering
[width=350pt]{probabilities_cropped.png}
\caption{Probability density function of the DE and PSO algorithms as a function of the phase value $\theta$.}
\label{fig:probabilities}
\end{figure}
Considering the results in Fig. \ref{fig:probabilities}, it is indeed possible to see that increasing the number of qubits $N$ at the input leads to values of $\theta$ converging further towards the given value of the unknown phase $\phi$. It is also possible to see that as $N$ grows, this increase in precision becomes less evident as the algorithms start running into convergence issues. Given that the complexity of the search space of each algorithm scales polynomially in $N$, and that due to time constraints the algorithms were not allowed to run for more than $G=100$ iterations, a decrease in performance under this time limitation for large values of $N$ is expected.
Having arrived at this conclusion, the final results obtained under the ideal scenario, where there is no source of noise or quantum decoherence, are shown in Fig. \ref{fig:res_clean}. Observing the results, it is possible to see that both algorithms were able to follow the SQL in the ideal-case scenario, reinforcing the previously found conclusion that an increase in $N$ leads to better results. In fact, for $N\in[5,15]$ the scaling of the Holevo variance is well approximated by a power law $V_{H}\sim N^{-\alpha}$, where $\alpha_{\rm DE}=0.75$ for the DE algorithm and $\alpha_{\rm PSO}=0.64$ for the PSO algorithm, while the corresponding value for the SQL is $\alpha_{\rm SQL}=1$. This also shows that DE performs slightly better than PSO at reaching low variances, which is consistent with results obtained in other contexts, such as estimating the bias in the quantum walk algorithm \cite{Lovett2013}. This scaling in precision close to the SQL for $N\in[5,15]$ is consistent with what has been numerically observed for other algorithms \cite{berry2009perform}.
The fact that the machine learning algorithms do not reach the SQL is also consistent with the known results that the adaptive measurements cannot saturate the Cram\'er-Rao inequality even for optimal measurements \cite{PhysRevA.94.022334,Garcia2020}. It is also noticeable that for larger values of $N$ the performance starts to break due to the limiting number of iterations $G=100$ imposed for the convergence of the algorithms. This is not a malfunction of the algorithms, but a direct consequence of the restricted time available. As the complexity of the search space of the algorithms increases, so does the number of generations required to converge to a successful policy.
\subsection{Configurations with Noise}
Considering the results obtained under the ideal configuration, it is important to study the resilience of the algorithms to different sorts of imperfections that can be found in an experimental quantum phase estimation scheme. First, the performance of the algorithms was evaluated in the presence of Gaussian noise, followed by Random Telegraph noise and finally in the presence of quantum decoherence.
\subsubsection{Gaussian Noise}
In this scenario the algorithms were tested for increasing amounts of Gaussian noise $\sigma=\{0.2,0.4,0.8\}$ when dealing with the controllable phase shifter $\theta$ according to Eq. (\ref{eq:gaussian}). The results obtained under these conditions are presented in Fig. \ref{fig:res_noise_gaussian}.
\begin{figure}[h!]
\centering
[width=380pt]{Gaussian_noise.png}
\caption{Estimation precision based on the logarithm of the Holevo variance $\ln(V_H)$ of both the DE and PSO algorithms under increasing values of Gaussian noise $\sigma$ for $N \in [5,25]$ qubits.}
\label{fig:res_noise_gaussian}
\end{figure}
It is possible to see that as the noise fluctuations increase, the precision of both the adaptive and the non-adaptive algorithms starts to diminish. This is, nevertheless, expected for any estimation process conducted under increasing values of noise. The break in performance is also perceptible for larger values of $N$. However, we see from Fig. \ref{fig:res_noise_gaussian} that the adaptive algorithms are more robust to increasing values of Gaussian noise and that for $\sigma=0.8$ the policies obtained from machine learning become clearly superior to the non-adaptive protocol.
\subsubsection{Random Telegraph Noise}
Considering now the scenario where the estimation scheme was subject to a Random Telegraph noise following Eq. (\ref{eq:rtn}), the algorithms were tested against increasing values of $\lambda=\{0.2,0.4,0.8\}$ while keeping the probability of switching to the erroneous phase value fixed at $\eta=0.4$. The results obtained by the algorithms under these configurations are displayed in Fig. \ref{fig:res_noise_rtn}.
\begin{figure}[h!]
\centering
[width=380pt]{Telegraph_noise.png}
\caption{Estimation precision based on the logarithm of the Holevo variance $\ln(V_H)$ of both the DE and PSO algorithms under increasing values of Random Telegraph noise $\lambda$ with a fixed probability of $\eta=0.4$ for $N \in [5,25]$ qubits.}
\label{fig:res_noise_rtn}
\end{figure}
Similarly to the results obtained under the Gaussian noise, the results obtained in Fig. \ref{fig:res_noise_rtn} show that both adaptive and non-adaptive algorithms partly follow the SQL curve even in the presence of Random Telegraph noise. The poorer performance for larger values of noise, as well as the break in performance for larger values of $N$, is also evident under this scenario for the same reason as before. We can also see that for larger values of $\lambda$ the machine learning adaptive algorithms are more robust than the non-adaptive algorithm.
\subsubsection{Quantum Decoherence}
In this last scenario, we study the impact of quantum decoherence on the performance of both algorithms. This is a key source of noise in all non-optical qubits (\textit{e.g.} trapped ions, superconducting qubits, NV centers). The algorithms were tested against decreasing values of visibility $\nu=\{0.9,0.8,0.6\}$ according to Eq. (\ref{eq:decoherence}). Note that unlike the ideal scenario, the reduced visibility also impacts the SQL, see Eq. (\ref{eq:SQL}) in Appendix A. Also note that the value $\nu = e^{-0.5} \approx 0.6$ appears in certain tasks of non-adaptive parameter estimation as the visibility corresponding to an optimal phase accumulation time $\tau$ of half the decoherence time \cite{degen_2017}. The results obtained under this configuration are shown in Fig. \ref{fig:res_decoherence}.
\begin{figure}[h!]
\centering
[width=380pt]{Decoherence.png}
\caption{Estimation precision based on the logarithm of the Holevo variance $\ln(V_H)$ of both the DE and PSO algorithms under decreasing values of visibility $\nu=\{0.9, 0.8, 0.6\}$ for $N \in [5,25]$ qubits.}
\label{fig:res_decoherence}
\end{figure}
Considering the results in Fig. \ref{fig:res_decoherence}, it is evident that reduced values of visibility have a significant impact on the performance of the algorithms. This is expected behaviour, since it is known that quantum enhancement for systems operated at around the decoherence rate is asymptotically limited to only a constant factor improvement over the SQL \cite{sekatski2017quantum}. This can be confirmed by an analysis of the Fisher information:
\begin{eqnarray}
\mathcal{F}_{\phi ,\theta_{m}} =
\overline{[\partial_{\phi}\ln P_{\pm} (\zeta_{m}|\phi, \theta_{m})]^{2}} =
\sum_{\zeta_{m} = 0,1} \frac{\left[\partial_{\phi} P_{\pm} (\zeta_{m}|\phi, \theta_{m})\right]^2}{P_{\pm} (\zeta_{m}|\phi, \theta_{m})} =
\frac{\nu^2 \sin^2 (\phi - \theta_{m})}{1- \nu^2 \cos^2 (\phi - \theta_{m})}\, .
\label{eq:Fisher}
\end{eqnarray}
The Fisher information quantifies the knowledge gained at each measurement. It can also be extended from projective to positive operator-valued measurements, see \cite{Paraoanu_2011}, and it is used to define the SQL (see Appendix \ref{apx:holevo}). Indeed, from Eq. (\ref{eq:Fisher}) it can be observed that $\nu < 1$ reduces the information extracted after each measurement below its maximum value $\mathcal{F}_{\phi ,\theta_{m}} = 1$, attained at $\nu =1$.
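A quick numerical check of Eq. (\ref{eq:Fisher}) is immediate (Python; at $\nu=1$ the expression reduces to $\sin^2/(1-\cos^2)=1$ for any phase offset):
\begin{verbatim}
import numpy as np

def fisher(delta, nu):
    # Fisher information with delta = phi - theta_m.
    return (nu * np.sin(delta))**2 / (1.0 - (nu * np.cos(delta))**2)

# fisher(0.7, 1.0) -> 1.0; fisher(0.7, 0.6) -> approx. 0.19
\end{verbatim}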
Similarly, here we observe that relatively large values of decoherence (visibility around 0.6) result in a suppression of the precision obtained by the non-adaptive protocol, while the adaptive DE and PSO are less sensitive.
\section{Experimental Implementations}
Summing up the theoretical results obtained so far, we can see that even for noisy systems it is possible to reduce the logarithm of the Holevo variance to values typically in the band $[-1.7, -1.2]$ (standard deviations between 0.4 and 0.5) with a moderate number $N \in [10,25]$ of qubits. This protocol can be implemented on several experimental platforms, using either optical or microwave frequencies. We describe here the main experimental platforms and we show how the problem of optimizing the precision in the presence of noise can be addressed within the machine learning framework.
The most straightforward implementation is optical Mach-Zehnder interferometry. In this case, the operator $U_{\theta-\phi}$ can be realized by placing a phase shifter $\phi$ in one branch of the interferometer and a variable-phase shifter $\theta$ in the other branch. The latter can be realized physically as a half-wave plate placed on a rotation stage controlled by a computer \cite{Higgins2007}. The states $|0\rangle$ and $|1\rangle$ correspond to a single photon in one branch or the other of the interferometer (dual-rail encoding). Our results can be compared also with those obtained from running the more powerful Kitaev algorithm, which for the same number of resources (from 10 to 25) results in standard deviations ranging from 0.35 to 0.16 \cite{Higgins2007}, only marginally better than the data reported here. Also, our results are consistent with the theoretical limits reported in \cite{Pryde2018} for the case of non-entangled initial states ($V_{H} = 0.5609$, $\ln V_{H} =-0.25$). We obtain approximately a factor of 2 improvement in the standard deviation. For implementations employing optical interferometry, the visibility is typically very close to 1, and the main noise sources in the feedback loop concern changes in the refractive index of the beam-splitters and variable phase shifter, as well as variations in the optical paths typically caused by temperature.
More recently, an experiment with optical photons has tested the PSO algorithm with sequential non-entangled photons \cite{Lumino2018}. The single photons were obtained non-deterministically, by generating entangled pairs through spontaneous parametric down-conversion in a 2 mm long beta-barium borate (BBO) crystal and heralding over detection events in another detector. The unknown and the control phases in the two arms of the Mach-Zehnder interferometer were realized with liquid crystal devices, where the relative phase between the horizontal and vertical polarization can be changed depending on the applied electric field. The results obtained with the PSO algorithm agree with our analysis: PSO is quite efficient at reaching values approaching the SQL, especially when the number of resources (number of photons $N$) is limited. Similarly to our findings, for $N$ exceeding approximately 15 photons the Holevo variance tends to saturate. The robustness with respect to Gaussian phase noise and dephasing noise was also demonstrated by artificially adding errors to the feedback phase.
In the case of qubits based on discrete levels (solid-state and ions/atoms), Mach-Zehnder interferometry corresponds to Ramsey interference \cite{paraoanu2006,danilin_2018}. To understand how the phase information is embedded in this system, consider the following generic qubit Hamiltonian driven by a microwave field \cite{Silveri2017},
\begin{equation}
\mathcal{H} = \frac{\hbar}{2}\omega_{01}\sigma_{z} + \hbar\Omega \cos \omega t \sigma_{x}.
\end{equation}
In a frame defined by the unitary $\exp (i \omega t \sigma_{z})$ (a clockwise rotation
around the $z$-axis), using $\exp (i \omega t \sigma_{z})\sigma_{x}\exp (-i \omega t \sigma_{z}) = \sigma_{x}\cos \omega t - \sigma_{y} \sin \omega t$ and by
further applying the rotating-wave approximation we get the effective Hamiltonian
\begin{equation}
\mathcal{H} = \frac{\hbar}{2}(\omega_{01}-\omega) \sigma_{z}+ \frac{\hbar\Omega}{2}\sigma_{x}.
\end{equation}
A non-zero Rabi frequency $\Omega \neq 0$ can thus be used to create the two Hadamard gates, while the time in between is spent on the actual sensing. Indeed, if $\Omega = 0$ for a sensing time $\tau$, the resulting unitary becomes
\begin{equation}
\mathrm{U}_{\phi ,\theta} = \begin{bmatrix}
1 & 0 \\ 0 & e^{i (\omega_{01} - \omega)\tau} \end{bmatrix},
\end{equation}
which is exactly Eq. (\ref{umatrix}) up to an overall irrelevant phase and the identification $\phi = \omega_{01}\tau$, $\theta = \omega \tau$.
Consider now a concrete problem, that of evaluating the magnetic field using a superconducting qubit, a trapped ion,
or a nitrogen-vacancy (NV) center in diamond. In these cases,
for typical experiments using Ramsey interferometry, the probability is given by Eq. (\ref{eq:decoherence}), where the dependence of the visibility $\nu$ on the sensing time $\tau$ can sometimes differ from the simple exponential decay $\nu(\tau) = \exp[-\tau/T_{2}]$ valid for the case of Gaussian noise (see the Lindblad equation used in Section II). For example, the dependence is $\nu(\tau) = \exp[-(\tau/T_{2})^2]$ if the noise experienced by the qubit is $1/f$, see e.g. \cite{Silveri2017,danilin_2018,Hanson2016}. Since $\tau$ is bounded by $T_{2}$, in these setups one might attempt to increase the precision by increasing the sensitivity of $\omega_{01}$ to the magnetic field. However, this means increasing the coupling to the magnetic field, which at the same time increases the exposure of the qubit to noise. Thus, a tradeoff must be reached between these two competing effects. Our results demonstrate that the increase in noise can be mitigated successfully by the use of machine learning strategies.
In the case of a superconducting qubit in the symmetric transmon design, as used in recent magnetometry experiments \cite{danilin_2018,shlyakhov_2018,Danilin2021}, the information about the magnetic field is embedded in the energy level separation as
\begin{equation}
\omega_{01}(B) = \frac{1}{\hbar} \left( \sqrt{8 E_{\rm C} E_{\rm J\Sigma} \cos\left| \pi \frac{BS}{\Phi_{0}}\right|} - E_{\rm C} \right),
\end{equation}
where $B$ is the magnetic field piercing the SQUID area $S$ of the transmon and modulating the total Josephson energy $E_{\rm J\Sigma}$. In order to evaluate $B$, we can keep $\tau$ fixed and adjust the frequency $\omega$ (generated by an external signal generator) at every step. In the case of superconducting qubits, with relatively standard values $T_{2}$ = 10 $\mu$s and $\tau=1$ $\mu$s we obtain $\nu = 0.9$ (one of the values used in Sec. IV) if the noise is Gaussian and $\nu = 0.99$ if it is 1/f noise.
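These two visibility values follow directly from the respective decay laws quoted above; a quick numerical check (Python):
\begin{verbatim}
import numpy as np

T2, tau = 10.0, 1.0                       # microseconds
nu_gaussian = np.exp(-tau / T2)           # approx. 0.905
nu_one_over_f = np.exp(-(tau / T2)**2)    # approx. 0.990
\end{verbatim}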
In order to increase the precision of determining the magnetic field, we can use a higher excited state \cite{shlyakhov_2018,Perelshein2021}, for example the second excited state $|2\rangle$, for which, in the harmonic approximation for the transmon, the relevant accumulated phase will be $\approx 2 \omega_{01}\tau$; or we can bias the transmon to a magnetic field value where the slope $d\omega_{01}/dB$ is larger. Both situations result in an increase in the noise. In the first case, this is due to the fact that higher energy levels have higher couplings to the electromagnetic environment. This causes a reduction in $T_{2}$, due to both a reduction in the $T_{1}$ time and a reduction in the pure dephasing time. For example, if the second excited state is used, the decay rate of the $1-2$ transition is twice that of the $0-1$ transition, which for $T_{1}$-limited qubits results in a significant reduction of the interferometric visibility. In the second case, the increase in noise is due to biasing far from the sweet spot. For the latter, let us restrict ourselves for simplicity to the interval $BS \in [-\Phi_{0}/2, \Phi_{0}/2 ]$. We then have
\begin{equation*}
\frac{d\omega_{01}}{dB} = - \frac{\pi S \omega_{01}}{\Phi_{0}}\tan \left( \pi \frac{BS}{\Phi_{0}}\right).
\end{equation*}
This slope is infinite for $BS = \pm \Phi_{0}/2$, apparently allowing us to measure $B$ with infinite precision. However, the displacement of the bias point from the sweet spot is accompanied by a significant increase in noise, since the qubit is no longer protected against linear-order fluctuations. This results again in a visibility $\nu$ below unity.
The noise in the control phase $\theta_{m}$ is typically not caused by the electronics in the feedback loop, but results from uncontrollable frequency shifts that are poorly understood and controlled. Experimentally, these shifts are of the order of a few MHz. If $\tau$ is of the order of microseconds, then the resulting values of $\lambda$ and $\sigma$ are well below those considered in this work. Therefore, this type of noise will not affect the performance of our protocols when run on a superconducting qubit.
In the case of trapped ions, sensitive magnetometry using the well-known $^{171}\mathrm{Yb}^{+}$ ion has been demonstrated \cite{PhysRevLett.116.240801}. This uses four hyperfine states, $\ket{\mathrm{F} = 0, m_{\mathrm{F}} = 0}$, $\ket{\mathrm{F} = 1, m_{\mathrm{F}} = 1}$, $\ket{\mathrm{F} = 1, m_{\mathrm{F}} = -1}$, and
$\ket{\mathrm{F} = 1, m_{\mathrm{F}} = 0}$ belonging to the $^{2}S_{1/2}$ manifold.
The latter three states are degenerate and they are separated by the hyperfine splitting $\omega_{\rm hf}/(2\pi) = 12.642$ GHz from the first state. The degeneracy of these three states can be lifted by the application of a magnetic field. To first order in the magnetic field, the state $\ket{\mathrm{F} = 1, m_{\mathrm{F}} = 0}$ remains unmodified, but the states
$\ket{\mathrm{F} = 1, m_{\mathrm{F}} = \pm 1}$ acquire frequency shifts of $\pm (g_{e} \mu_{\rm B}/2\hbar) B$, where $g_{e} \approx 2$ is the $g$-factor of the electron and $\mu_{\textrm{B}}$ is the Bohr magneton. Thus, for magnetic field detection one could in principle use the state $\ket{\mathrm{F} = 0, m_{\mathrm{F}} = 0}$ and either of the magnetic-field-sensitive states $\ket{\mathrm{F} = 1, m_{\mathrm{F}} = \pm 1}$, and drive resonant Ramsey $\pi/2$ microwave pulses at around 12 GHz with a time separation $\tau$. The information about the magnetic field is then obtained from the phase $\phi = (\omega_{\rm hf} \pm g_{e} \mu_{\rm B}B/2\hbar )\tau$. These states would be exposed not only to the magnetic field that we would like to sense, but also to magnetic field noise, making our results for the noisy case relevant.
Further improvements may be achieved by the use of a continuous dynamical decoupling technique, where one could identify the
$\ket{\mathrm{F} = 1, m_{\mathrm{F}} = 0} \equiv \ket{0}$ and the dark state $\frac{1}{\sqrt{2}}\left(\ket{\mathrm{F} = 1, m_{\mathrm{F}} = - 1} + \ket{\mathrm{F} = 1, m_{\mathrm{F}} = 1}\right) \equiv \ket{1}$ as dressed-qubit states, with a $T_{2}$ time exceeding one second \cite{Wunderlich2011}, three orders of magnitude longer than that of the bare atomic states, which is a few milliseconds.
Similar ideas can be applied to NV centers in diamond. These defects have very long decay times, of the order of tens of milliseconds, while their total decoherence times are of the order of microseconds and can be extended to hundreds of microseconds by the use of dynamical decoupling pulses \cite{Lukin2008}.
Single NV centers have a triplet $S=1$ ground-state structure,
with the states $m_{\rm S}=0$ and $m_{\rm S}=\pm 1$ separated by the so-called zero-field splitting
$D = 2.87$ GHz. By applying a magnetic field along the crystal axis, the levels $m_{\rm S}=\pm 1$ can be further split by the Zeeman effect \cite{Walsworth2020}; the resulting energy level difference between these levels is $2 g_{e} \mu_{\rm B} B/\hbar = 2 \gamma_{e}B$, where $\mu_{\textrm{B}}$ is the Bohr magneton, $g_{e}\approx 2$ is the electronic $g$-factor, and the gyromagnetic ratio is defined as $\gamma_{e} = g_{e} \mu_{\rm B}/\hbar$.
Because the triplet states can be selectively addressed with microwaves \cite{Walsworth2020} we can immediately identify
for example $|0\rangle = |m_{\rm S}=0 \rangle $ and $|1\rangle = |m_{\rm S}= 1 \rangle$
as the basis corresponding to our formalism, and we obtain $\omega_{01} = 2\pi D + \gamma_{e} B$. The encoding of the magnetic field in the frequency, and subsequently in the phase, is linear, $\phi = (2\pi D + \gamma_{e} B)\tau$.
To summarize, each of these experimental systems involves relatively well-characterized imperfections, and an evaluation of our protocols for any specific setup can be realized by inspecting the general results obtained for the various types of noise in Section IV. We find that the precision can be optimized by using machine learning protocols to mitigate the noise resulting from the increase in sensitivity. We also note that this tradeoff is expected to occur if machine learning algorithms are included as subroutines in protocols that use highly entangled states to increase the precision, since highly entangled states are also very susceptible to noise.
\section{Conclusion}
The objective of this work was to study the performance of machine learning algorithms, namely the DE and PSO algorithms, in a quantum phase estimation scheme and to determine their ability to come close to the SQL without resorting to multi-particle entanglement. To this end, both algorithms were tested against different configuration scenarios and were able to follow the SQL curve up to a given number of input qubits $N$. Under the constraint of $G=100$ iterations it was possible to notice that the algorithms start to lose performance for larger values of $N$. This becomes even more relevant in the scenario with quantum decoherence. However, it is important to reiterate that this is not a deficiency of the algorithms, but a direct consequence of the time and computational resources available.
These limitations can be overcome in future work by optimizing the code and making it more time efficient, in order to allow both algorithms a larger number of iterations before converging to a valid solution. An immediate improvement would be to fully vectorize the code, which would make it significantly faster. Another direct improvement would be to parallelize the code so that it could leverage the power of stronger computers with a higher number of cores. This would allow the different independent simulation threads of both algorithms to be conducted in parallel, making the estimation process even more time efficient. One could also explore recurrent neural networks for the quantum phase estimation task, as they are particularly well suited for regression-based prediction models on time series events. It would be interesting to see how architectures such as the Long Short-Term Memory (LSTM), the Gated Recurrent Unit (GRU) and Temporal Convolutional Networks (TCN) would compare against our reinforcement learning algorithms.
Overall, we have shown that machine learning algorithms can provide noise-robust policies and we benchmarked their performances for various types of noise in close connection to experimental realizations. Above a certain critical value of noise, we found that machine learning based protocols show better results than non-adaptive methods. This opens a door to future developments that may push even further the precision of quantum phase estimation with classical machine learning algorithms without resorting to preparation and measurement of highly entangled states.
\vspace*{-0.15cm}
In order to illustrate the usefulness of the PAs,
we will first use a phenomenological model as a theoretical laboratory
to check our method.
A VFF phase-shift with the right
threshold behaviour is considered~\cite{Guerrero,Cillero,Portoles}:
\vspace*{-0.15cm}
\begin{equation}\label{model2}
\delta(t)=\arctan \left[\frac{\hat{M}_{\rho}
\hat{\Gamma}_{\rho}(t)}{\hat{M}_{\rho}^2-t} \right]\ ,
\end{equation}
\vspace*{-0.15cm}
with the $t$-dependent width given by~\cite{Guerrero,Cillero}
\vspace*{-0.15cm}
\begin{equation}\label{width}
\hat{\Gamma}_{\rho}(t)= \Gamma_{0}\ \left( \frac{t}{\hat{M}_{\rho}^2} \right)\ \frac{\sigma^3(t)}{\sigma^3(\hat{M}_{\rho}^2)}\ \theta\left( t- 4 \hat{m}_{\pi}^{2} \right)\ ,
\end{equation}
\vspace*{-0.15cm}
and $\sigma(t)=\sqrt{1-4 \hat{m}_{\pi}^{2}/t}$.
The input parameters are chosen to be close to their physical values
$ \Gamma_{0} = 0.15$~GeV, $\hat{M}_{\rho}^2= 0.6$~GeV$^2$,
$4 \hat{m}_{\pi}^{2}= 0.1$~GeV$^2$.
The form-factor
is then recovered through a once-subtracted Omn\'es relation,
\vspace*{-0.15cm}
\begin{equation}\label{model}
F(Q^2)= \exp\left \{-\frac{Q^2}{\pi}
\int_{4 \hat{m}_{\pi}^{2}}^{\infty}\ dt\ \frac{\delta(t)}{t (t+Q^2)}\right\}\ .
\end{equation}
\vspace*{-0.15cm}
At low energies, the form-factor is given by the Taylor expansion,
\vspace*{-0.15cm}
\begin{equation}\label{expmodel}
F(Q^2)
\, =\, 1 \, + \, \sum_{k=1}^\infty a_k\, (- Q^2)^k \,\, ,
\end{equation}
\vspace*{-0.15cm}
where the coefficients $a_k$ are known since they are determined by
$\Gamma_0$, $\hat{M}_\rho^2$ and $4 \hat{m}_\pi^2$. The condition $F(0)=1$
has already been incorporated.
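For reference, the model of Eqs. (\ref{model2})-(\ref{model}) is straightforward to evaluate numerically. A minimal Python/SciPy sketch is given below; the choice of \texttt{atan2} (our own, for numerical convenience) keeps the phase shift rising continuously through $\pi/2$ at the resonance.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

GAMMA0, M2, TH = 0.15, 0.6, 0.1   # GeV, GeV^2, 4*mpi^2 in GeV^2

def width(t):
    # t-dependent rho width; zero below threshold.
    if t <= TH:
        return 0.0
    sig = lambda s: np.sqrt(1.0 - TH / s)
    return GAMMA0 * (t / M2) * sig(t)**3 / sig(M2)**3

def delta(t):
    # VFF phase shift, kept on the branch [0, pi].
    return np.arctan2(np.sqrt(M2) * width(t), M2 - t)

def F(Q2):
    # Once-subtracted Omnes representation of the form factor.
    val, _ = quad(lambda t: delta(t) / (t * (t + Q2)),
                  TH, np.inf, limit=200)
    return np.exp(-Q2 / np.pi * val)
\end{verbatim}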
In order to recreate the experimental
situation~\cite{Amendolia}-\cite{Dally}, an emulation of the
space-like experimental
data is generated in the range $0.01$~GeV$^2 \leq Q^2\leq 10$~GeV$^2$~\cite{PadePeris}.
The Pad\'e Approximants $P^{L}_{1}(Q^2)$ are then fitted
to these euclidean ``data'' points, providing a prediction for the low-energy coefficients $a_k$.
It is found that as $L$ increases the sequence of PAs $P^L_1$ converges
to the exactly known results, although
in a hierarchical way, i.e. much faster for $a_1$ than for $a_2$, and much faster for $a_2$ than for $a_3$,
and so on. The relative errors achieved in determining the coefficients $a_k$
with the Pad\'e $P^4_1$ were,
respectively, $1.5\%$ for $a_1$ and $10\%$ for $a_2$.
These results will be taken as a rough estimate of the systematic uncertainties
when fitting the real experimental data with Pad\'es in the next section,
and they will be added to the final error~\cite{PadePeris}.
\vspace*{-0.15cm}
\section*{Experimental data analysis}
\vspace*{-0.15cm}
All the available experimental data in the space-like region have
been employed~\cite{Amendolia}-\cite{Dally},
ranging in momentum from $Q^2=0.015$~GeV$^2$ up to 10~GeV$^2$.
As discussed in the introduction, the prominent role of the rho meson contribution motivates
that we start with the $P^{L}_{1}$ Pad\'e sequence.
\begin{figure}[!t]
\center
\includegraphics[width=7cm,clip]{DataAllFQ2.eps}
\vspace*{-0.85cm}\caption{{\small The sequence of $P^L_1$ PAs is compared to the available space-like
data~\cite{Amendolia}-\cite{Dally}:
$P^0_1$ (brown dashed), $P^1_1$ (green thick-dashed),
$P^2_1$ (orange dot-dashed), $P^3_1$ (blue long-dashed),
$P^4_1$ (red solid).}}
\label{fig:VFF}
\end{figure}
The fit to $P^L_1$ yields a determination for the pion VFF and
the coefficients $a_{k}$.
Nonetheless, according to Ref.~\cite{brodsky-lepage},
the VFF is supposed to fall like $1/Q^2$ (up to logarithms) for $Q^2\to\infty$.
This means that, for any given value of $L$, one may expect a good fit
only up to a finite value of $Q^2$, but not
for asymptotically large momentum.
This is clearly seen in Fig.~\ref{fig:VFF}, where the Pad\'{e} sequence
$P^L_1$ is compared to the space-like data. The Pad\'es converge to
the real VFF at low and mid energies but eventually diverge.
Fig.~\ref{fig:a1PL1} shows the evolution of the fit results for the Taylor
coefficients $a_1$ and $a_2$.
As one can see, after a few Pad\'es they become stable~\cite{PadePeris}.
Thus, our best fit is provided by $P_1^4$~\cite{PadePeris}, yielding
\vspace*{-0.15cm}
\begin{equation}
\begin{array}{c}
a_1\, =\, 1.92 \pm 0.03\,\,\mbox{GeV}^{-2} \, ,\\
a_2\, =\, 3.49 \pm 0.26\,\,\mbox{GeV}^{-4} \,,
\end{array}
\end{equation}
\vspace*{-0.15cm}
with a $\chi^2/\mathrm{dof}=117/90$.
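Schematically, such a fit can be set up as follows (Python/SciPy). The data arrays are placeholders for the actual compilation of space-like points, and the Taylor coefficients follow from expanding the fitted approximant around $Q^2=0$; this is a sketch of the procedure, not the analysis code of Ref.~\cite{PadePeris}.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def pade_L1(Q2, b, *c):
    # P^L_1(Q^2): degree-L numerator with P(0) = 1 built in,
    # and a single-pole denominator 1 + b*Q2.
    num = 1.0
    for k, ck in enumerate(c, start=1):
        num = num + ck * Q2**k
    return num / (1.0 + b * Q2)

# q2, f, df = ...  # space-like data and errors (placeholders)
# popt, pcov = curve_fit(pade_L1, q2, f, sigma=df,
#                        p0=[0.5, -1.4, 0.5, 0.0, 0.0])  # L = 4
# b, c1 = popt[0], popt[1]
# a1 = b - c1   # from the Taylor expansion of P^L_1 at Q^2 = 0
\end{verbatim}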
\begin{figure}[!t]
\center
\includegraphics[width=5.5cm]{a1-PL1.eps} \\
\includegraphics[width=5.5cm]{a2-PL1.eps}
\vspace*{-0.85cm}
\caption{{\small $a_1$ and $a_2$ Taylor coefficients for the
$P^L_1$ PA sequence. }}\label{fig:a1PL1}
\end{figure}
Alternative rational approximations were also constructed~\cite{PadePeris}:
two-pole PAs, Pad\'e-Types and Partial-Pad\'es~\cite{PerisMasjuan08}.
They all provided compatible results.
\vspace*{-0.15cm}
\section*{Conclusions}
\vspace*{-0.15cm}
\begin{table*}[!t]
\setlength{\tabcolsep}{1.5pc}
\catcode`?=\active \def?{\kern\digitwidth}
\caption{{\small Our results for the quadratic radius $\langle r^2\rangle_V^\pi$ and second derivative $a_2$ are
compared to other
determinations~\cite{Portoles,Colangelo,ColangeloB,Yndurain,op6-VFF,lattice}.
Our first error is
statistical. The second one is systematic, based on the analysis
of the VFF model of the previous section~\cite{PadePeris}.}}
\label{results}
\begin{tabular*}{\textwidth}{@{}l@{\extracolsep{\fill}}ccc}
\hline
& $\langle r^2\rangle_V^\pi$ (fm$^2$) & $a_2$ (GeV$^{-4}$) \\ \hline \hline
This work~\cite{PadePeris} & $0.445\pm 0.002_{\mathrm{stat}}\pm 0.007_{\mathrm{syst}}$ & $3.30\pm 0.03_{\mathrm{stat}}\pm 0.33_{\mathrm{syst}}$ \\
CGL~\cite{Colangelo,ColangeloB}& $ 0.435\pm 0.005$ & ... \\
TY~\cite{Yndurain} & $ 0.432\pm 0.001 $ & $ 3.84\pm 0.02$ \\
BCT~\cite{op6-VFF} & $0.437\pm 0.016$ & $3.85\pm 0.60$ \\
PP~\cite{Portoles} & $0.430\pm 0.012$ & $3.79\pm 0.04$ \\
Lattice~\cite{lattice} & $0.418\pm 0.031$ & ... \\
\hline
\end{tabular*}
\end{table*}
Combining all these outcomes, we obtained the results shown in Table~\ref{results}.
For comparison with previous analyses, the value
of the quadratic radius is provided
(given by $\langle r^2 \rangle_V^\pi \, =\, 6 \, a_1$, with our best determination
$a_1=1.907\pm 0.010_{_{\rm sta}}\pm 0.03_{_{\rm sys}}$~GeV$^{-2}$).
In summary, in this work we have used rational approximations as a tool for
fitting the pion vector form factor in the euclidean range.
Since these approximants are capable of
going beyond the low-energy region, they are rather suitable for the
description of space-like data.
They allow one to extract the low-energy coefficients, improving their determination
by also incorporating the information from the euclidean high-energy data.
As one can see in Table~\ref{results},
the achieved degree of uncertainty is
shown to be competitive with previous analyses
existing in the literature~\cite{Portoles,Colangelo,ColangeloB,Yndurain,op6-VFF,lattice}.
\vspace*{-0.4cm}
\font\fivegoth=eufm5 \font\sevengoth=eufm7 \font\tengoth=eufm10
\newfam\gothfam \scriptscriptfont\gothfam=\fivegoth
\textfont\gothfam=\tengoth \scriptfont\gothfam=\sevengoth
\def\fam\gothfam\tengoth{\fam\gothfam\tengoth}
\def\pp{{\fam\gothfam\tengoth p}} \def\aa{{\fam\gothfam\tengoth a}} \def\bb{{\fam\gothfam\tengoth b}}
\def{\goth c}} \def\qq{{\goth q}} \def\PP{{\goth P}{{\fam\gothfam\tengoth c}} \def\qq{{\fam\gothfam\tengoth q}} \def\PP{{\fam\gothfam\tengoth P}}
\def\noindent {\it Proof. }{\noindent {\it Proof. }}
\def\vbox{\hrule\hbox{\vrule height 1 ex\kern 1 ex\vrule}\hrule}{\vbox{\hrule\hbox{\vrule height 1 ex\kern 1 ex\vrule}\hrule}}
\def\hfill \smallsquare\vskip 3mm{\hfill \vbox{\hrule\hbox{\vrule height 1 ex\kern 1 ex\vrule}\hrule}\vskip 3mm}
\def{\bf {Q}}} \def\F{{\bf F}{{\bf {Q}}} \def\F{{\bf F}}
\def{\bf {K}}} \def\k{{\bf k}{{\bf {K}}} \def\k{{\bf k}}
\centerline{}
\vskip 4mm
\centerline{
\bf Badly approximable numbers and Littlewood-type problems}
\vskip 8mm
\centerline{Yann B{\sevenrm UGEAUD}
\ \& \ Nikolay M{\sevenrm OSHCHEVITIN}
\footnote{}{\rm
2000 {\it Mathematics Subject Classification}: 11J13; 11J25, 11K60.}
}
{\narrower\narrower
\vskip 12mm
\proclaim Abstract. {
We establish that the set of pairs $(\alpha, \beta)$ of real numbers
such that
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^2 \cdot \Vert q \alpha \Vert \cdot \Vert q \beta \Vert > 0,
$$
where $\Vert \cdot \Vert$ denotes the distance to the
nearest integer,
has full Hausdorff dimension in ${\bf R}^2$.
Our proof rests on a method introduced by Peres and Schlag,
that we further apply to various Littlewood-type problems.
}
}
\vskip 15mm
\centerline{\bf 1. Introduction}
\vskip 6mm
A famous open problem in simultaneous
Diophantine approximation, called
the Littlewood conjecture \cite{Lit68}, claims that,
for any given pair $(\alpha, \beta)$ of real numbers,
we have
$$
\inf_{q \ge 1} \, q \cdot \Vert q \alpha \Vert
\cdot \Vert q \beta \Vert = 0,
\eqno (1.1)
$$
where $\Vert \cdot \Vert$ denotes the distance to the
nearest integer.
Throughout the present paper, we denote by ${\hbox{B{\sevenrm {AD}}}}$ the set of badly
approximable numbers, that is,
$$
{\hbox{B{\sevenrm {AD}}}} = \{ \alpha \in {\bf R} : \inf_{q \ge 1} \,
q \cdot \Vert q \alpha \Vert > 0\},
$$
and we recall that ${\hbox{B{\sevenrm {AD}}}}$ has Lebesgue measure zero
and full Hausdorff dimension \cite{Ja28}.
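For instance, an irrational number belongs to ${\hbox{B{\sevenrm {AD}}}}$ if, and only if,
the partial quotients of its continued fraction expansion are bounded;
in particular, the golden ratio $\varphi = (1+\sqrt{5})/2$, whose partial
quotients are all equal to $1$, satisfies
$$
\liminf_{q \to + \infty} \, q \cdot \Vert q \varphi \Vert = 1/\sqrt{5}
$$
and thus lies in ${\hbox{B{\sevenrm {AD}}}}$. Observe also that, if $\alpha$ does not belong
to ${\hbox{B{\sevenrm {AD}}}}$, then $\inf_{q \ge 1} \, q \cdot \Vert q \alpha \Vert = 0$ and (1.1)
holds for every real number $\beta$.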
Consequently, (1.1) holds for almost every
pair $(\alpha, \beta)$ of real numbers.
Recently, this result was considerably improved by
Einsiedler, Katok and Lindenstrauss \cite{EKL},
who established that the set of pairs $(\alpha, \beta)$
for which (1.1) does not hold has Hausdorff dimension zero;
see also \cite{PoVe} for a weaker statement,
and Section 10.1 of
\cite{BuLiv} for a survey of related results.
Another metrical statement connected
to the Littlewood conjecture was
established by Gallagher \cite{Gal62}
in 1962 and can be formulated as follows
(see e.g. \cite{BeVe07}).
\proclaim Theorem G.
Let $n$ be a positive integer.
Let $\Psi : {\bf R}_{>0} \to {\bf R}_{>0}$ be a non-increasing
function.
The set of points $(x_1, \ldots , x_n)$ in ${\bf R}^n$
such that there are infinitely many positive integers $q$
satisfying
$$
\prod_{i=1}^n \, \Vert q x_i \Vert < \Psi (q)
$$
has full Lebesgue measure if the sum
$$
\sum_{h \ge 1} \, \Psi(h) \, (\log h)^{n-1}
$$
diverges, and has zero Lebesgue measure otherwise.
In particular, it follows from Gallagher's theorem that
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^2 \cdot \Vert q \alpha \Vert \cdot \Vert q \beta \Vert = 0
\eqno (1.2)
$$
for almost every pair $(\alpha, \beta)$ of real numbers.
The main purposes of the present note
are to establish the existence of
exceptional pairs $(\alpha, \beta)$ which do not satisfy (1.2)
--- a result first proved in \cite{Mo5} ---,
and to prove that the set of these pairs has
full Hausdorff dimension in ${\bf R}^2$.
We further consider various questions closely
related to the Littlewood conjecture.
Our main results are stated in Section 2
and proved in Sections 4 and 5,
with the help of auxiliary lemmas gathered in Section 3.
Several additional results are given in Section~6.
Throughout this paper, $\lfloor x \rfloor$ and $\lceil x \rceil$
denote the greatest integer less than or equal to $x$
and the smallest integer greater than or equal to $x$, respectively.
\vskip 8mm
\centerline{\bf 2. Main results}
\vskip 6mm
Our first result shows that there are many pairs $(\alpha, \beta)$
of real numbers that are not well multiplicatively approximable.
\proclaim Theorem 1.
For every real number $\alpha$ in ${\hbox{B{\sevenrm {AD}}}}$, the set of real numbers $\beta$
such that
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^2 \cdot \Vert q \alpha \Vert
\cdot \Vert q \beta \Vert > 0 \eqno (2.1)
$$
has full Hausdorff dimension.
The proof of Theorem 1 uses a method
introduced by Peres and Schlag \cite{PeSc},
which was subsequently applied in \cite{Mo1,Mo2,Mo3,Mo4,Mo5}.
Since the set ${\hbox{B{\sevenrm {AD}}}}$ has full Hausdorff dimension,
the next result follows from Theorem~1 by an
immediate application of
Corollary 7.12 from \cite{Fal90}.
\proclaim Theorem 2.
The set of pairs $(\alpha, \beta)$ of real numbers satisfying
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^2 \cdot \Vert q \alpha \Vert \cdot \Vert q \beta \Vert > 0
$$
has full Hausdorff dimension in ${\bf R}^2$.
Theorem 1 can be viewed as a complement to the following
result of Pollington and Velani \cite{PoVe}.
\proclaim Theorem PV.
For every real number $\alpha$ in ${\hbox{B{\sevenrm {AD}}}}$, there exists a
subset $G(\alpha)$ of ${\hbox{B{\sevenrm {AD}}}}$ with full Hausdorff dimension
such that, for any $\beta$ in $G(\alpha)$, there
exist arbitrarily large integers $q$ satisfying
$$
q \cdot (\log q) \cdot \Vert q \alpha \Vert \cdot \Vert q \beta \Vert \le 1.
$$
In \cite{AdBu06}, the authors constructed explicitly for every $\alpha$
in ${\hbox{B{\sevenrm {AD}}}}$ uncountably many $\beta$ in ${\hbox{B{\sevenrm {AD}}}}$ such that the
pair $(\alpha, \beta)$ satisfies (1.1), and even a strong form of
this inequality. It would be very interesting to construct
explicit examples of pairs of real numbers that satisfy (2.1).
A modification of an auxiliary lemma
yields a slight improvement on Theorem~1.
\proclaim Theorem 3.
Let $a$ be a real number with $0 < a < 1$.
For every real number $\alpha$ in ${\hbox{B{\sevenrm {AD}}}}$, the set of real numbers $\beta$
such that
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^{2-a} \cdot (\log 1/\Vert q \alpha \Vert)^a
\cdot \Vert q \alpha \Vert \cdot \Vert q \beta \Vert > 0
$$
has full Hausdorff dimension.
Theorem 3 is stronger than Theorem 1 since,
for every $\alpha$ in ${\hbox{B{\sevenrm {AD}}}}$, there exists
a positive real number $\delta$ such that
$\log (1/\Vert q \alpha \Vert) \le \delta \log q$ holds
for every integer $q \ge 2$: indeed, if $q \cdot \Vert q \alpha \Vert \ge c$
for every $q \ge 1$, then $\Vert q \alpha \Vert \ge c / q$, whence
$\log (1/\Vert q \alpha \Vert) \le \log q + \log (1/c)$.
Cassels and Swinnerton-Dyer \cite{CaSw}
proved that (1.1)
is equivalent to the equality
$$
\inf_{(x, y)\in \Z \times \Z \setminus\{(0,0)\}} \,
\max\{\vert x \vert,1\}\cdot\max\{\vert y \vert,1\}\cdot \Vert x \alpha
+ y \beta\Vert = 0,
$$
and used it
to show that (1.1) holds if $\alpha$ and $\beta$ belong to the same
cubic number field (see also \cite{Peck}).
In this context, we have the following metrical result,
extracted from page 455 of \cite{BeKlMa}.
For integers $q_1, \ldots , q_n$, set
$$
\Pi(q_1, \ldots , q_n) = \prod_{i=1}^n \, \max\{1, \vert q_i \vert\}.
$$
\proclaim Theorem BKM.
Let $n$ be a positive integer.
Let $\Psi : {\bf R}_{>0} \to {\bf R}_{>0}$ be a non-increasing
function.
The set of points $(x_1, \ldots , x_n)$ in ${\bf R}^n$
such that there are infinitely many integers $q_1, \ldots, q_n$
satisfying
$$
|| q_1 x_1 + \ldots + q_n x_n || < \Psi \bigl( \Pi(q_1, \ldots , q_n) \bigr)
\eqno (2.2)
$$
has full Lebesgue measure if the sum
$$
\sum_{h \ge 1} \, \Psi(h) \, (\log h)^{n-1} \eqno (2.3)
$$
diverges, and has zero Lebesgue measure otherwise.
For $n \ge 2$, there is no known example
of points $(x_1, \ldots , x_n)$ in ${\bf R}^n$
and of a function $\Psi$ as in Theorem BKM such that the sum
(2.3) diverges and (2.2) has only finitely many solutions.
The Peres--Schlag method allows us
to show that such examples do exist.
\proclaim Theorem 4.
The set of pairs $(\alpha, \beta)$ of real numbers satisfying
$$
\liminf_{x, y \ge 0} \, \max\{2, |xy|\} \cdot
\Vert x \alpha + y \beta \Vert \cdot
(\log \max\{2, |xy|\})^2 > 0
$$
has full Hausdorff dimension in ${\bf R}^2$.
The proof of Theorem 4 is briefly outlined in Section 5.
Note that Theorem 4 (resp. Theorem 1)
does not follow from Theorem 1 (resp. Theorem 4)
by some transference principle.
In analogy with the Littlewood conjecture,
de Mathan and Teuli\'e \cite{BdMTe}
proposed recently a `mixed Littlewood
conjecture'.
For any prime number $p$, the usual
$p$-adic absolute value $| \cdot |_p$ is
normalized in such a way that $|p|_p = p^{-1}$.
\proclaim De Mathan--Teuli\'e conjecture.
For every real number $\alpha$ and every prime
number $p$, we have
$$
\inf_{q \ge 1} \, q \cdot \Vert q \alpha \Vert \cdot
\vert q \vert_p = 0.
$$
Despite several recent results \cite{EiKl07,BDM},
this conjecture is still unsolved.
The following metrical statement,
established in \cite{BuHaVe}, should be
compared with Theorem G.
\proclaim Theorem BHV.
Let $k$ be a positive integer.
Let $p_1, \ldots , p_k$ be distinct prime numbers.
Let $\Psi : {\bf R}_{>0} \to {\bf R}_{>0}$ be a non-increasing
function.
The set of real numbers $\alpha$
such that there are infinitely many positive integers $q$
satisfying
$$
\Vert q \alpha \Vert \cdot |q|_{p_1} \cdots
|q|_{p_k} < \Psi (q)
$$
has full Lebesgue measure if the sum
$$
\sum_{h \ge 1} \, \Psi(h) \, (\log h)^k
$$
diverges, and has zero Lebesgue measure otherwise.
As an immediate consequence of Theorem BHV, we get that,
for every prime number $p$, almost every real number $\alpha$
satisfies
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^2 \cdot (\log \log q)
\cdot \Vert q \alpha \Vert \cdot \vert q \vert_p = 0. \eqno (2.4)
$$
The method of proof of Theorem 1 allows us to confirm the
existence of real numbers for which (2.4) does not hold.
\proclaim Theorem 5.
Let $a$ be a real number with $0 \le a < 1$.
For every prime number $p$, the set of real numbers $\alpha$
such that
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^{2-a} \cdot \Vert q \alpha \Vert \cdot \vert q \vert_p
\cdot (\log 2/ \vert q \vert_p)^a > 0
$$
has full Hausdorff dimension.
We display an immediate consequence of Theorem 5.
\proclaim Corollary 1.
For every prime number $p$, the set of real numbers $\alpha$
such that
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^2 \cdot \Vert q \alpha \Vert \cdot \vert q \vert_p > 0
$$
has full Hausdorff dimension.
In the present note, we have restricted our attention to
$2$-dimensional questions. However, our method can be successfully
applied to prove that, given an integer $n \ge 2$,
there are real numbers $\alpha_1, \ldots , \alpha_n$
such that
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^n \cdot \Vert q \alpha_1 \Vert \cdots \Vert q \alpha_n \Vert > 0,
$$
as well as real numbers $\beta_1, \ldots , \beta_n$
such that
$$
\liminf_{x_1, \ldots , x_n \ge 0} \, \max\{2, |x_1 \ldots x_n|\} \cdot
\Vert x_1 \beta_1 + \ldots + x_n \beta_n \Vert \cdot
(\log \max\{2, |x_1 \ldots x_n|\})^n > 0.
$$
This will be the subject of subsequent work by E. Ivanova.
\vskip 6mm
\centerline{\bf 3. Auxiliary results}
\vskip 4mm
The original method of Peres and Schlag is a construction of
nested intervals.
A useful tool for estimating from below the Hausdorff
measure of a Cantor set is the mass distribution principle,
which we recall now.
We consider a set ${\cal K}$ included in a bounded interval
$E$, and defined as follows.
Set ${\cal E}_0 = E$ and assume that, for any positive integer $k$,
there exists a finite family ${\cal E}_k$ of
disjoint compact intervals in $E$ such that any interval $U$ belonging to
${\cal E}_k$ is contained in exactly
one of the intervals of ${\cal E}_{k-1}$ and contains at least two intervals
belonging to ${\cal E}_{k+1}$. Suppose also that the
maximum of the lengths of the intervals in ${\cal E}_k$ tends to 0
when $k$ tends to infinity. For $k \ge 0$, denote by $E_k$
the union of the intervals belonging to the family
${\cal E}_k$, and set
$$
{\cal K} := \bigcap_{k=1}^{+ \infty} \, {E_k}.
$$
\proclaim Lemma 1.
Keep the same notation as above.
Assume further that there exists
a positive integer $k_0$ such that, for any
$k \ge k_0$, each interval of $E_{k-1}$ contains at least $m_k \ge 2$
intervals of $E_k$, these being separated by at least $\eps_k$, where
$0 < \eps_{k+1} < \eps_k$. We then have
$$
\dim {\cal K} \ge \liminf_{k \to + \infty} \,
{\log (m_1 \ldots m_{k-1}) \over - \log (m_k \eps_k)}.
$$
\noindent {\it Proof. } This is Example 4.6 in \cite{Fal90},
see also Proposition 5.2 in \cite{BuLiv}. \hfill \smallsquare\vskip 3mm
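As an illustration of Lemma 1 (a standard example, not needed in the sequel),
consider the middle-third Cantor set: each interval of $E_{k-1}$ contains
$m_k = 2$ intervals of $E_k$ of length $3^{-k}$, separated by
$\eps_k = 3^{-k}$. Lemma 1 then gives
$$
\dim {\cal K} \ge \liminf_{k \to + \infty} \,
{(k-1) \log 2 \over k \log 3 - \log 2} = {\log 2 \over \log 3},
$$
which is the exact value of its Hausdorff dimension.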
\proclaim Lemma 2.
Let $\alpha$ be in ${\hbox{B{\sevenrm {AD}}}}$.
There exists a positive constant $C(\alpha)$ such that, for every
integer $q \ge 2$, we have
$$
\sum_{x=q}^{q^3} \, {1 \over ||\alpha x||\, x\log_2^2 x} \le C(\alpha).
$$
\noindent {\it Proof. }
This is a straightforward consequence of Example 3.2 on page 124
of \cite{KuNi}, where it is established that there exists
a positive constant $C_1(\alpha)$ such that
$$
\sum_{x=1}^{m} \, {1 \over ||\alpha x||\, x} \le C_1(\alpha) (\log m)^2,
$$
for all positive integers $m$. \hfill \smallsquare\vskip 3mm
Theorem 3 depends on the following refinement of Lemma 2.
\proclaim Lemma 3.
Let $\alpha$ be in ${\hbox{B{\sevenrm {AD}}}}$.
Let $a$ be a real number with $0 < a < 1$.
There exists a positive constant $C(\alpha)$ such that, for every
integer $q \ge 2$, we have
$$
\sum_{x=q}^{q^3} \, {1 \over ||\alpha x||\, x \,
(\log 1/\Vert x \alpha \Vert)^a \cdot (\log x)^{2-a}} \le C(\alpha).
$$
\noindent {\it Proof. }
Let $(p_j/q_j)_{j \ge 0}$ denote the sequence of convergents
to $\alpha$. Let $m$ (resp. $n$) be the largest
(resp. the smallest) integer $j$ such that $q_j \le q$
(resp. $q_j \ge q^3$).
As the sequence $(q_j)_{j \ge 0}$
grows exponentially fast, we have
$$
\log q \ll m \le n \ll \log q,
$$
where, as throughout this proof, the numerical constants
implied by $\ll$ depend only on $\alpha$.
Let $j$ be an integer satisfying $m \le j < n$
and consider
$$
S_j := \sum_{x=q_j}^{q_{j+1}} \, {1 \over ||\alpha x||\, x \,
(\log 1/\Vert x \alpha \Vert)^a}.
$$
A classical result asserts that the points
$\{ \alpha x \}$, $x = 1, \ldots , q_{j+1}$, are very well distributed
in $(0, 1)$. Consequently,
$$
\eqalign{
S_j \ll {1 \over q_j} \,
\sum_{x=1}^{q_{j+1}} \, {1 \over ||\alpha x||\,
(\log 1/\Vert x \alpha \Vert)^a}
& \ll {1 \over q_j} \,
\sum_{x=1}^{q_{j+1}/2} \, {(q_{j+1} /x) \over
(\log (q_{j+1}/x))^a} \cr
& \ll {q_{j+1} \over q_j} \, \int_2^{q_{j+1}} \,
{du \over u (\log u)^{1 - a}}
\ll (\log q_j)^{1 - a}, \cr}
$$
since $q_{j+1} / q_j$ is bounded from above by a
constant depending only on $\alpha$. Now, since
$(\log x)^{2-a} \ge (\log q)^{2-a}$ for every $x \ge q$, it suffices to
observe that
$$
\sum_{j=m}^n \, S_j \ll (\log q)^{2 - a},
$$
which proves the lemma. \hfill \smallsquare\vskip 3mm
The key tool for the proof of Theorem 5 is Lemma 4 below.
\proclaim Lemma 4.
Let $p$ be a prime number.
Let $a$ be a real number with $0 \le a < 1$.
There exists a positive constant $C(a,p)$ such that, for every
integer $q \ge 2$, we have
$$
\sum_{x=q}^{q^3} \, {1 \over x \cdot |x|_p \, (\log (2/|x|_p))^a
\cdot (\log x)^{2-a}} \le C(a,p).
$$
\noindent {\it Proof. }
Observe that
$$
\sum_{x=q}^{q^3} \, {1 \over x \cdot |x|_p \cdot (\log (2/|x|_p))^a}
\ll \sum_{j=0}^{3 \log q} \,
\sum_{x= \lceil q/p^j \rceil}^{\lfloor q^3/p^j \rfloor}
\, {1 \over x (j+1)^a},
$$
where the second summation is taken over the integers $x$
that are not divisible by $p$.
Consequently,
$$
\sum_{x=q}^{q^3} \, {1 \over x \cdot |x|_p \cdot (\log (2/|x|_p))^a}
\ll \sum_{j=0}^{3 \log q} \, {\log q \over (j+1)^a}
\ll (\log q)^{2-a},
$$
and the lemma is proved. \hfill \smallsquare\vskip 3mm
\vskip 6mm
\centerline{\bf 4. Proof of Theorem 1}
\vskip 4mm
Let $\alpha$ be in ${\hbox{B{\sevenrm {AD}}}}$ and $\delta$
be a positive real number satisfying
$$
q \cdot || q \alpha || \ge \delta,
\quad \hbox{for every $q \ge 1$}. \eqno (4.1)
$$
Let $\eps$ be such that
$$
0 < \eps < \bigl(2^{10} C(\alpha) \bigr)^{-1}, \eqno (4.2)
$$
where $C(\alpha)$ is given by Lemma 2.
We follow a method introduced
by Peres and Schlag \cite{PeSc}.
First, we construct `dangerous' sets of real numbers.
These sets depend
on $\alpha$, but, to simplify the notation, we choose not to
indicate this dependence.
For integers $x$ and $y$ with $x \ge 2$ and $0\le y\le x$, define
$$
E (x,y) = \biggl[ {y \over x} -{\varepsilon \over ||\alpha x || x^{2}\log_2^2 x},
{y \over x}
+{\varepsilon \over ||\alpha x || x^{2}\log_2^2 x}
\biggr] \eqno (4.3)
$$
and
$$
E (x) =\bigcup_{y =0}^{x} \,
\bigl( \, E (x,y) \cap [0,1] \, \bigr). \eqno (4.4)
$$
Set also
$$
l_0 = 0, \quad
l_x = \lfloor\log_2 (||\alpha x || x^{2}\log_2^2 x /(2\varepsilon) ) \rfloor ,
\quad \hbox{for $x \in \Z_{\ge 1}$}. \eqno (4.5)
$$
Each interval from the union $E (x)$ defined in (4.4)
can be covered by an open dyadic interval of the form
$$
\left( {b \over 2^{l_x }}, {b+z \over 2^{l_x }}\right),
\quad z = 1,2, \quad
b \in \Z_{\ge 0}.
$$
Let $A (x)$ be the smallest union of all such dyadic intervals which
covers the whole set $E (x)$ and put
$$
A^c (x) = [0,1] \setminus A (x).
$$
Observe that $A^c (x)$ is a union of closed intervals of the form
$$
\left[ {a \over 2^{l_x}}, {a+1 \over 2^{l_x}}\right] ,
\,\,\, a \in \Z_{\ge 0}.
$$
Let $q_0$ be an integer such that
$$
q_0 \ge (100 \eps)^3 \quad
\hbox{and} \quad
|| q_0 \alpha || \ge 1/4. \eqno (4.6)
$$
For $q \ge q_0 $, define
$$
B_q =\bigcap_{x=q_0}^q A^c (x).
$$
The sets $B_q$, $q \ge q_0$, are closed and nested.
Our aim is to show inductively that they are non-empty.
Set $L_0 = l_0$ and
$$
q_k := q_0^{3^k}, \quad
L_k = \lfloor\log_2 ( q_k^2 \log_2^2 q_k / (4\varepsilon) ) \rfloor,
\quad k \ge 1. \eqno (4.7)
$$
Observe that $l_x \le L_k$ when $x \le q_k$.
For every integer $k \ge 0$ we construct inductively subsets $C_{q_k}$
and $D_{q_k}$ of $B_{q_k}$ with the following property $(P_k)$:
\medskip
{\it The set
$C_{q_k}$ is the union of $2^{-5k-3 + L_k}$
intervals of length $2^{-L_k}$, separated by at least
$2^{-L_k}$, and such that at least $2^{-5k-5 + L_k}$
among them include at least $2^{L_{k+1}-L_k -3}$
intervals composing $B_{q_{k+1}}$, which are also separated by
at least $2^{-L_{k+1}}$. Denote by
$C_{q_{k+1}}$ (resp. by $D_{q_k}$) the union of
$2^{-5(k+1) - 3 + L_{k+1}}$ of these intervals
(resp. of the corresponding $2^{-5k-5 + L_k}$
intervals from $C_{q_k}$).
In particular, we have ${\rm mes} C_{q_k}
= 4 {\rm mes} D_{q_k} = 2^5 {\rm mes} C_{q_{k+1}}$.}
\medskip
We deduce from (4.2), (4.3) and Lemma 2 that
$$
{\rm mes} (B_{q_1}) \ge 1 - \sum_{x=q_0}^{q_1} \, {\rm mes} A(x)
\ge 31/32.
$$
Consequently, $B_{q_1}$ is the union of at least $2^{L_1-1}$
intervals of length $2^{-L_1}$.
By (4.6), the set $B_{q_0}$ is the union of at least
$2^{L_0-1}$ intervals of length $2^{-L_0}$.
This allows us to define the sets $C_{q_0}, D_{q_0}$
and $C_{q_1}$.
This proves $(P_0)$.
Let $k$ be a non-negative
integer such that $(P_k)$ holds, and consider
the set $B'_{q_{k+2}} := C_{q_{k+1}} \cap B_{q_{k+2}}$.
Observe that
$$
B'_{q_{k+2}} = C_{q_{k+1}} \setminus
\biggl(\, \bigcup_{x=q_{k+1}+1}^{q_{k+2}} A (x) \biggr),
$$
hence
$$
{\rm mes} B'_{q_{k+2}} \ge {\rm mes} C_{q_{k+1}} -
\sum_{x=q_{k+1}+1}^{q_{k+2}} {\rm mes}
\bigl(C_{q_{k+1}}\cap A (x) \bigr).
\eqno (4.8)
$$
By construction,
the set $C_{q_k}$
can be written as a union, say
$$
C_{q_k} = \bigcup_{\nu = 1}^{T_{q_k} } J_\nu,
$$
of $T_{q_k}$ dyadic intervals $J_\nu$ of the form
$$
\left[ {a \over 2^{L_k}}, {a+1 \over 2^{L_k}}\right],
\quad a\in \Z_{\ge 0},
$$
where $L_k$ is given by (4.7). Let $x \ge q_k^3$ be an integer.
Since, by (4.6),
$$
2^{L_k} \le {q_k^2 \log_2^2 q_k \over 4 \eps} \le {q_k^3 \over 2}
\le {x \over 2},
$$
each interval $J_\nu$
contains at least the rationals $y/x, (y+1)/x$
for some integer $y$, and we infer from (4.3) that
$$
{\rm mes} (J_\nu \cap A(x)) \le
{2^4\varepsilon \over ||\alpha x||\, x\log_2^2 x}
\times {\rm mes} J_\nu. \eqno (4.9)
$$
Summing (4.9) from $\nu = 1$ to $\nu = T_{q_k}$, we get
$$
{\rm mes } ( C_{q_k} \cap A (x) )
\le {2^4\varepsilon \over ||\alpha x||\, x\log_2^2 x}
\times {\rm mes} C_{q_k}. \eqno (4.10)
$$
It then follows from (4.10) that
$$
\eqalign{
{\rm mes}(C_{q_{k+1}}\cap A (x)) & \le {\rm mes}(C_{q_{k}}\cap A (x))
\cr & \le {2^4\varepsilon \over ||\alpha x||\, x\log_2^2 x}
\times {\rm mes} C_{q_{k}} \le
{2^{9} \varepsilon \over ||\alpha x||\, x\log_2^2 x}
\times {\rm mes} C_{q_{k+1}}. \cr }
$$
Combined with (4.8) and Lemma 2, this gives
$$
{\rm mes} B'_{q_{k+2}} \ge ( {\rm mes} C_{q_{k+1}}) \,
\biggl( 1-
\sum_{x=q_{k+1}+1}^{q_{k+2}} \,
{2^{9} \varepsilon \over ||\alpha x||\, x\log_2^2 x} \, \biggr) \ge
{ {\rm mes} C_{q_{k+1}} \over 2}.
$$
Thus, at least one quarter of the intervals composing $C_{q_{k+1}}$
contains at least $2^{L_{k+2}-L_{k+1} - 2}$
intervals composing $B'_{q_{k+2}}$,
thus at least $2^{L_{k+2}-L_{k+1} - 3}$
intervals composing $B'_{q_{k+2}}$, if we impose that
these intervals are mutually distant by at least $2^{-L_{k+2}}$.
This allows us to define the sets $C_{q_{k+2}}$ and
$D_{q_{k+1}}$ with the required properties.
This proves $(P_{k+1})$.
It then follows that the set
$$
{\cal K} := \bigcap_{k \ge 0} \, D_{q_k}
$$
is non-empty.
By construction, every point $\beta$ in this set avoids all the
intervals $E(x, y)$ with $x \ge q_0$, thus, the pair $(\alpha, \beta)$
satisfies (2.1).
To establish that the set ${\cal K}$ has full Hausdorff
dimension, we apply Lemma 1 with
$$
m_k = 2^{L_{k+1}-L_k - 5} \quad
\hbox{and} \quad \eps_k := 2^{- L_{k+1}}.
$$
Note that
$$
{\log (m_1 \ldots m_{k-1}) \over - \log (m_k \eps_k)}
\ge {\log(32^{-k} 2^{L_k}) \over - \log (2^{-L_k - 5} )}.
$$
We infer from (4.1), (4.5) and (4.7) that
$$
2^{L_k} \ge \delta q_0^{3^k}.
$$
Consequently,
$$
\lim_{k \to + \infty} \,
{\log (m_1 \ldots m_{k-1}) \over - \log (m_k \eps_k)} = 1,
$$
and it follows from Lemma 1 that
the set ${\cal K}$ has full Hausdorff dimension.
This completes the proof of our theorem. \hfill \smallsquare\vskip 3mm
\vskip 6mm
\centerline{\bf 5. Proofs of Theorems 3, 4, and 5}
\vskip 4mm
The proofs of Theorems 3 and 5 follow exactly the same steps
as that of Theorem 1. Instead of the intervals
$$
E (x,y) = \biggl[ {y \over x} -{\varepsilon \over ||\alpha x || x^{2}\log_2^2 x},
{y \over x}
+{\varepsilon \over ||\alpha x || x^{2}\log_2^2 x}
\biggr],
$$
we use respectively the intervals
$$
\biggl[ {y \over x} -{\varepsilon \over ||\alpha x || x^{2}
(\log_2 x)^{2-a} \, (\log 1/||\alpha x ||)^a},
{y \over x}
+{\varepsilon \over ||\alpha x || x^{2}
(\log_2 x)^{2-a} \, (\log 1/||\alpha x ||)^a}
\biggr]
$$
and
$$
\biggl[ {y \over x} -{\varepsilon \over |x|_p x^{2}
(\log_2 x)^{2-a} \, (\log 2/|x|_p)^a},
{y \over x}
+{\varepsilon \over |x|_p x^{2}
(\log_2 x)^{2-a} \, (\log 2/|x|_p)^a}
\biggr].
$$
Furthermore, we apply Lemmas 3 and 4 in place of Lemma 2.
\bigskip
For the proof of Theorem 4, we work directly in the plane.
The idea is the following.
For a triple $(x, y, z)$ of integers and a positive $\eps$,
the inequality $|x X + y Y + z| \le \eps$ defines
a strip composed of points $(X, Y)$ close to the line
$x X + y Y + z =0$.
Since we are working in the unit square, to a given pair $(x, y)$
of integers corresponds a unique $z$, and the length of the intersection
of the line with the unit square is at most equal to $\sqrt{2}$.
Setting
$$
\eps_{x, y} = {\eps \over |xy| \log^2 |xy|},
$$
for a given (very small) positive $\eps$, the strips
$|x X + y Y + z| \le \eps_{x, y}$ play the same role as the
intervals (4.3) in the proof of Theorem 1.
Since, for every large integer $q$, we have
$$
\eqalign{
\sum_{q \le xy \le q^3} \, \eps_{x,y}
& \ll \sum_{x=1}^{q^3} \,
\sum_{y = \lfloor q/x \rfloor}^{\lfloor q^3/x \rfloor} \,
{\eps \over |xy| \log^2 q} \cr
& \ll \sum_{x=1}^{q^3} \, {\eps \over x \log q} \ll \eps, \cr}
$$
the Peres--Schlag method can be applied as in the
proof of Theorem 1. We omit the details.
\vskip 6mm
\centerline{\bf 6. Further results}
\vskip 4mm
We gather in the present section several results that
can be obtained with the same method as in the proof
of Theorem 1.
\bigskip
* A result on lacunary sequences.
\proclaim Theorem 6.
Let $M$ be a positive real number
and $(t_j)_{j \ge 1}$ be a sequence
such that $t_{j+1}/t_j > 1 + 1/M$ for $j \ge 1$.
Let $c$ be a real number with $0 < c < 1/10$.
Let $\eps$ be a positive real number.
Then, the Hausdorff dimension of the set
$$
\{ \xi \in [0,1] : \forall n \ge 1, ||\xi t_n || \ge c / (M \log M)\}
$$
is at least $1-\eps$ if $M$ is sufficiently large.
Theorem 6 complements the results from \cite{PeSc,Mo1}.
\bigskip
* The use of the mass distribution principle
enables us to improve Theorem 1 of \cite{Mo3}.
\proclaim Theorem 7.
Let $C_1, C_2$ and $\gamma$ be positive real numbers.
Let $(t_n)_{n \ge 1}$ be a sequence of real numbers such that
$$
C_1 n^{\gamma} \le t_n \le C_2 n^{\gamma},
\quad \hbox{for $n \ge 1$}.
$$
Then, there exist a positive $C$ and an integer $n_0$ such that
the set
$$
\bigcap_{n \ge n_0} \, \biggl\{ \xi \in {\bf R}} \def\M{{\bf M}} \def{\cal {K}}} \def\cC{{\cal {C}}} \def\cS{{\cal {S}}{{\bf B}} \def\tw{{\tilde w} :
|| \xi t_n || > {C \over n \log n} \biggr\} \eqno (6.1)
$$
has full Hausdorff dimension.
It is established in \cite{Mo3} that the
Hausdorff dimension of the set (6.1)
is at least $\gamma / (\gamma + 1)$.
As an immediate application, we get that
the set of real numbers $\xi$ for which
$$
\liminf_{n \to + \infty} \, n (\log n) ||\xi n^2|| > 0
$$
has full Hausdorff dimension.
\bigskip
* We have stated homogeneous statements, but the
method also allows us to deal with
inhomogeneous approximation.
\bigskip
* By means of dyadic arguments, as was done
in the preprint \cite{Mo5}, it is possible to
generalize Lemmas 2 and 3 and, eventually, to
establish the following statement.
\proclaim Theorem 8.
Let $A>1$, $0<\varepsilon<1$ and $\delta >0$ be real parameters.
Let $\alpha$ be a badly approximable real number such that
$$
\inf_{q\ge 1} \, q \, ||q\alpha||\ge \delta>0.
$$
Consider real-valued functions $\psi_j$, $j=0,1,2$, defined over the non-negative real numbers and satisfying the following conditions:
$$
\hbox{$\psi_0(x)>0$ for $x$ large enough and $\psi_0$
is increasing;} \leqno (i)
$$
$$
\psi_j(0)=0,\,\,\, j=1,2; \leqno (ii)
$$
$$
\hbox{$\psi_j$ increases in some interval
of the type $ [0,\xi ],\,\, \xi >0$
and $ \max_{0\le x\le \xi}\psi_j (x) \le 1$}; \leqno (iii)
$$
$$
\max_{x\in {\bf N}} \,
\psi_0(x)\psi_2(x^{1-A}) \le \varepsilon. \leqno (iv)
$$
Define $\psi_2^{-1}$
to be the inverse function of $\psi_2$,
so that $\psi_2^{-1}(\psi_2(x)) = x$.
Suppose that
$$
\sup_{X\in {\bf N}}\,\,\,\,
\sum_{X\le \nu<AX}\,\,
\sum_{1\le \mu \le \nu +2-\log_2\delta}\,\,
2^{\nu-\mu }\,\times \, \psi_2^{-1}\left(
{\varepsilon \over \psi_0(2^\nu)\psi_1(2^{-\mu})} \right)
\le {1 \over 2^6}.
$$
Take an arbitrary sequence of real numbers $(\eta_q)_{q \ge 1}$.
Then there exists a real number $\beta $ such that
$$
\liminf_{q\to+\infty} \,
\psi_0(q)\psi_1(||q \alpha||)\psi_2 (||q \beta +\eta_q||) >\varepsilon .
$$
The proof of Theorem 8 follows directly the arguments from \cite{Mo5}.
For a real number $a$ with $ 0\le a<1$, if we put
$$
\psi_0(x) = x\log^{2-a} x,\,\,
\psi_1(x)= x (\log 1/x)^a,\,\,
\psi_2 (x) =x ,\,\,\,\, \eta_q=0,\, q \ge 1,
$$
then Theorem 8 implies that there exists
a real number $\beta$ such that
$$
\liminf_{q \to + \infty} \, q \cdot
(\log q)^{2-a} \cdot (\log 1/\Vert q \alpha \Vert)^a
\cdot \Vert q \alpha \Vert \cdot \Vert q \beta \Vert > 0,
$$
a result which corresponds to Theorem 3 (and to Theorem 1 if $a=0$),
with the exception of the assertion on the Hausdorff dimension.
Unfortunately, we cannot take $a=1$ here.
\vskip 7mm
\centerline{\bf References}
\vskip 7mm
\beginthebibliography{999}
\bibitem{AdBu06}
B. Adamczewski and Y. Bugeaud,
{\it On the Littlewood conjecture in simultaneous
Diophantine approximation},
J. London Math. Soc. 73 (2006), 355--366.
\bibitem{BeVe07}
V. V. Beresnevich and S. L. Velani,
{\it A note on simultaneous Diophantine approximation on planar curves},
Math. Ann. 337 (2007), 769--796.
\bibitem{BeKlMa}
V. Bernik, D. Kleinbock, and G. A. Margulis,
{\it Khintchine-type theorems on manifolds: the convergence
case for standard and multiplicative versions},
Internat. Math. Res. Notices (2001), 453--486.
\bibitem{BuLiv}
Y. Bugeaud,
Approximation by algebraic numbers,
Cambridge Tracts in Mathematics 160,
Cambridge, 2004.
\bibitem{BDM}
Y. Bugeaud, M. Drmota, and B. de Mathan,
{\it On a mixed Littlewood conjecture in Diophantine approximation},
Acta Arith. 128 (2007), 107--124.
\bibitem{BuHaVe}
Y. Bugeaud, A. Haynes, and S. Velani,
{\it Metric considerations concerning the mixed
Littlewood conjecture}.
In preparation.
\bibitem{CaSw}
J. W. S. Cassels and H. P. F. Swinnerton-Dyer,
{\it On the product of three homogeneous linear forms and indefinite
ternary quadratic forms}, Philos. Trans. Roy. Soc. London,
Ser. A, 248 (1955), 73--96.
\bibitem{EKL}
M. Einsiedler, A. Katok, and E. Lindenstrauss,
{\it Invariant measures and the set of exceptions to the
Littlewood conjecture},
Ann. of Math. 164 (2006), 513--560.
\bibitem{EiKl07}
M. Einsiedler and D. Kleinbock,
{\it Measure rigidity and $p$-adic Littlewood-type problems},
Compositio Math. 143 (2007), 689--702.
\bibitem{Fal90}
K. Falconer,
Fractal geometry. Mathematical foundations and applications.
John Wiley \& Sons, Ltd., Chichester, 1990.
\bibitem{Gal62}
P. Gallagher,
{\it Metric simultaneous Diophantine aproximations},
J. London Math. Soc. 37 (1962), 387--390.
\bibitem{Ja28}
V. Jarn\'\i k,
{\it Zur metrischen Theorie der diophantischen
Approximationen}, Pr\'ace Mat.-Fiz. 36 (1928/29), 91--106.
\bibitem{KuNi}
L. Kuipers and H. Niederreiter,
Uniform distribution of sequences.
Pure and Applied Mathematics. Wiley-Interscience [John Wiley \& Sons],
New York-London-Sydney, 1974.
\bibitem{Lit68}
J. E. Littlewood,
Some problems in real and complex analysis.
D. C. Heath and Co. Raytheon Education Co.,
Lexington, Mass., 1968.
\bibitem{BdM03}
B. de Mathan,
{\it Conjecture de Littlewood et r\'ecurrences lin\'eaires},
J. Th\'eor. Nombres Bordeaux 13 (2003), 249--266.
\bibitem{BdMTe}
B. de Mathan and O. Teuli\'e,
{\it Probl\`emes diophantiens simultan\'es},
Monatsh. Math. 143 (2004), 229--245.
\bibitem{Mo1}
N. G. Moshchevitin,
{\it A version of the proof for Peres-Schlag's theorem on lacunary sequences}.
Available at arXiv: 0708.2087v2 [math.NT] 15 Aug 2007.
\bibitem{Mo2}
N. G. Moshchevitin,
{\it Density modulo 1 of sublacunary sequences: application of
Peres-Schlag's arguments}.
Preprint, available at arXiv: 0709.3419v2 [math.NT] 20 Oct 2007.
\bibitem{Mo3}
N. G. Moshchevitin,
{\it On small fractional parts of polynomials},
J. Number Theory 129 (2009), 349--357.
\bibitem{Mo4}
N. G. Moshchevitin,
{\it Towards BAD conjecture}.
Available at arXiv: 0712.2423v2, 12 Apr 2008.
\bibitem{Mo5}
N. G. Moshchevitin,
{\it Badly approximable numbers related to the Littlewood conjecture}. Preprint, available at arXiv: 0810.0777.
\bibitem{Peck}
L. G. Peck,
{\it Simultaneous rational approximations to algebraic numbers},
Bull. Amer. Math. Soc. 67 (1961), 197--201.
\bibitem{PeSc}
Yu. Peres and W. Schlag,
{\it Two Erd\H os problems on lacunary sequences: chromatic numbers
and Diophantine approximations}.
Available at: arXiv: 0706.0223v1.
\bibitem{PoVe}
A. D. Pollington and S. Velani,
{\it On a problem in simultaneous Diophantine approximation:
Littlewood's conjecture},
Acta Math. 185 (2000), 287--306.
\egroup\par
\goodbreak
\vskip 8mm
Yann Bugeaud \hfill Nikolay Moshchevitin
Universit\'e Louis Pasteur \hfill Moscow State University
Math\'ematiques \hfill Number Theory
7, rue Ren\'e Descartes \hfill Leninskie Gory 1
67084 STRASBOURG Cedex (France) \hfill MOSCOW (Russian Federation)
\medskip
{\tt [email protected]} \hfill {\tt [email protected]}
\par\vfill\supereject\end
\label{sec:intro}
In the computer vision and machine learning communities, text image rectification\cite{liang2008geometric,ye2015text} under planar transformations has long been a topic of concern. A system for OCR and image information retrieval often has a pipeline with several steps, including rectification, detection and recognition\cite{ye2015text}, each of which helps improve performance.
Unlike in other object recognition problems, text line patches in text images usually have a large aspect ratio, as shown in Figure~\ref{fig:samplecorr}. This property becomes problematic when the image is distorted by planar transformations such as rotation and perspective. A horizontal-bounding-box-based algorithm usually picks out lines with a low percentage of pixels of interest from distorted images.
\begin{figure}[htp]
\centering
\includegraphics[width=0.46\textwidth]{sampleouts.jpg}
\caption{Sample rectifications, grouped in 3 columns: the first column shows distorted samples, the middle column shows perspective rectification, and the last column shows rotation rectification.}
\label{fig:samplecorr}
\end{figure}
Some solutions attempt to handle recognition under distortion by data augmentation with more labeled samples. It is very expensive to collect and label a large dataset with different rotational and perspective distortions. Besides, more complicated models are required to learn on the larger dataset, which consume more computation and storage resources in training and deployment. Synthetic data have recently been proposed and used for learning on scene text images\cite{jaderberg2014synthetic}. For Chinese characters, it would be difficult and impractical to overcome this problem using data augmentation, owing to the scale of the character set.
An orthogonal way is to learn the transformation parameters and recover the original view, as a human would. Methods working in this way usually rely on assumptions such as the existence of border lines, the availability of camera parameters\cite{cambra2011towards}, or the low-rank property of text images\cite{zhang2012tilt}, which may not be satisfied in many situations where noise and complex textures exist.
Since the emergence of deep neural networks (DNN)\cite{krizhevsky2012imagenet,lecun1998gradient,lecun2015deep}, they have gradually reshaped the methodology of machine learning by pushing the state of the art of different tasks much further and bringing out novel applications\cite{simonyan2014very,ren2015faster}. Convolutional neural networks\cite{lecun1995convolutional} with deep architectures have proven to deliver top performance in many computer vision tasks\cite{girshick2015fast,Zhang_2016_CVPR}. In text image analysis, deep OCR methods based on LSTM with the CTC loss\cite{graves2006connectionist} have shown superior recognition performance but may fail under planar transformation.
Recently, some research has tried to solve affine and perspective transformations using deep neural networks. One such work is the spatial transformer network (STN)\cite{jaderberg2015spatial}, which attempts to improve classification accuracy by inserting a transformer layer into an end-to-end neural network. It can detect and transform learned features into a better view and hence increase accuracy. However, one of its drawbacks is its inability to rectify images under planar transformation as stably as human beings do.
In this paper, a new DNN framework is proposed that combines the advantages of the STN and supervised learning to learn the rectification of planar transformations, as shown in Figure \ref{fig:samplecorr}. The only assumption is the existence of a few parallel human-readable text lines in the rectified images. We give transformation parameters directly to the DNN, training it to learn the rectification parameters in a stage-wise manner. Classification of rotation degrees is used instead of regression, by discretizing the range of rotation into intervals\cite{sun2015learning}. The initialization of the convolution kernels differs from commonly used methods\cite{glorot2010understanding} to achieve better performance.
The advantages of this model include much milder assumptions: no format constraints or prior knowledge of the dataset are required. Moreover, deep models have much better robustness and adaptivity to different variations than traditional image processing methods. Besides, we found that although no segmentation labels are directly provided to the model, it learns to distinguish parts of the input image into different types according to context. It seems the model has learned to focus on meaningful elements and regions through rectification training.
This model is benchmarked on a newly collected dataset containing Chinese text of different types and under different illumination conditions. We use real-world data, collected and then transformed with generated parameters, for training and testing. The outcome indicates the effectiveness and robustness of the proposed architecture.
\section{Methods and Models}
\label{sec:arch}
\subsection{Background, Notation and Formulation}
Planar transformations are common in image processing and can be modeled as a perspective transformation. Define $I$ as the original image and $I'$ as the transformed image. Each pixel $p$ in $I$ with coordinate $(x,y)$ is rectified and interpolated to $p'$ with coordinate $(x',y')$ in $I'$. Define the related parameters: rotation angle $\theta$, scaling factor $\alpha$, perspective parameters $(P_x, P_y)$ and translation $(T_x,T_y)$. The planar transformation can be formulated as:
\begin{equation}
\label{eq:perspective}
\begin{bmatrix}
\alpha\cos(\theta) & \alpha\sin(\theta) & T_x\\
-\alpha\sin(\theta) & \alpha\cos(\theta) & T_y\\
P_x & P_y & 1
\end{bmatrix}
\begin{bmatrix}
x\\
y\\
1
\end{bmatrix}
=\begin{bmatrix}
\hat{x}\\
\hat{y}\\
\hat{z}
\end{bmatrix},\\
\end{equation}
\begin{equation}
x' = \hat{x}/\hat{z}, \quad y' = \hat{y}/\hat{z}.
\end{equation}
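As a minimal numerical sketch of (\ref{eq:perspective}) (illustrative only; the function and variable names below are ours, not part of the proposed system), the planar transform can be applied to pixel coordinates as follows:
\begin{verbatim}
import numpy as np

def planar_transform(points, theta, alpha, P, T):
    # Apply Eqs. (1)-(2) to an (N, 2) array of pixel coordinates.
    c, s = alpha * np.cos(theta), alpha * np.sin(theta)
    H = np.array([[ c,    s,    T[0]],
                  [-s,    c,    T[1]],
                  [P[0],  P[1], 1.0]])
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
    out = pts @ H.T                 # (x^, y^, z^) for every pixel
    return out[:, :2] / out[:, 2:]  # x' = x^/z^, y' = y^/z^
\end{verbatim}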
Most recognition algorithms use bounding boxes and object detection methods to locate the text area, which can deal with different $(T_x,T_y)$ and $\alpha$. While some algorithms can handle the problem of non-horizontal text lines, most of them require parallel text lines\cite{koo2012text}, or simply locate each word or group of neighboring characters separately\cite{Zhang_2016_CVPR}. Under perspective transformation, the parallel property disappears, and a crossing point of these lines can be estimated by image processing methods only under strong assumptions, as surveyed in \cite{ye2015text}.
\subsection{Parameter Entangling and Stage-wise Model}
Given the labeled data, it would be straightforward to build a CNN and solve this as a regression problem. The capability of feature learning, the high representative ability, and the robustness to noise of CNNs make them the best choice for this task. However, it is hard to estimate all the parameters simultaneously due to the nonlinearity of the mapping to approximate. Rotation parameters are seriously interfered with by perspective transformation, as the parallel property disappears, as shown in (\ref{eq:perspective}).
Also, the parameters of the transformation span multiple orders of magnitude. Such a problem is known as an ill-conditioned problem\cite{boyd2004convex} and is hard for neural networks to learn\cite{saarinen1993ill}. Since most existing deep learning research focuses on tasks with discrete labels, it is mainly concerned numerically with initialization and gradient vanishing for nonlinear activation functions\cite{glorot2010understanding}.
Though it is difficult to regress all parameters simultaneously, we can build the model in a stage-wise style inspired by past pipeline-based research. If we eliminate the perspective distortion first and recover the parallel property of the text lines, estimation of $\theta$ becomes feasible. In mathematical terms, we can decompose (\ref{eq:perspective}) into:
\begin{equation}
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
P_x & P_y & 1
\end{bmatrix}
\begin{bmatrix}
\alpha\cos(\theta) & \alpha\sin(\theta) & T_x\\
-\alpha\sin(\theta) & \alpha\cos(\theta) & T_y\\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
x\\
y\\
1
\end{bmatrix}
=\begin{bmatrix}
\hat{x}\\
\hat{y}\\
\hat{z}
\end{bmatrix}.
\end{equation}
Assuming the text area is not strongly biased toward the image border, the bottom-right corner element varies little from 1. This means the transformation is equivalent to a translation first, then a rotation, and then a perspective transformation, in that order. We rectify it in the inverse order, by learning $P_x$ and $P_y$ first, then estimating the rotation angle $\theta$, and locating the text area at last. Much research has been done on locating text areas; here we mainly address the first two steps.
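This approximation is easy to check numerically (an illustrative snippet of ours; the parameter values are arbitrary):
\begin{verbatim}
import numpy as np

theta, alpha = 0.3, 1.0
Px, Py, Tx, Ty = 1e-3, -2e-3, 5.0, -3.0
c, s = alpha * np.cos(theta), alpha * np.sin(theta)
H  = np.array([[ c,  s, Tx], [-s,  c, Ty], [Px, Py, 1.0]])  # Eq. (1)
RT = np.array([[ c,  s, Tx], [-s,  c, Ty], [ 0,  0, 1.0]])  # rot.+shift
P  = np.array([[ 1,  0,  0], [ 0,  1,  0], [Px, Py, 1.0]])  # perspective
# P @ RT agrees with H up to terms of order |P|*(|T| + 1):
print(np.abs(P @ RT - H).max())   # ~1e-2 here
\end{verbatim}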
The transformation parameters in these two steps are learned using a 2-stage training model. The CNN architecture proposed in this paper rectifies the perspective and rotation transformation parameters of text images in two stages, using the loss function
\begin{equation}
\label{eq:loss}
\min \sum_{i=1}^n||P_i-\mathcal{P}(X_i)||_2^2-\lambda\sum_{i=1}^n\sum_{j=1}^Ct_{ij}\log(\mathcal{A}(X_i)_j).
\end{equation}
In (\ref{eq:loss}), $X_i$ is the $i$th sample in the dataset. $P_i$ is the tuple $(P_{xi},P_{yi})$ of corresponding perspective parameters. $t_{ij}$ is the one-hot encoding of the angle parameter at a predefined discretization precision, which in this paper is set to 2 degrees. The loss function consists of two parts: the first part is $\mathcal{P}$, a neural network aiming to estimate $P_x$ and $P_y$, while the second part is an angle classifier $\mathcal{A}$ estimating $\theta$ with a cross-entropy loss instead of regression.
A supervised STN is used to connect these two components and form an end-to-end neural system, as introduced in section \ref{subsection:sstn}. We give explicit rectification values to the DNN and use its predictions as the input of the STN, so that this end-to-end model can better understand the geometric concepts from a human perspective.
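A minimal PyTorch-style sketch of the loss (\ref{eq:loss}) follows (the framework choice and all names are ours; the paper does not prescribe an implementation):
\begin{verbatim}
import torch.nn.functional as F

def two_stage_loss(p_pred, p_true, angle_logits, angle_bins, lam=1.0):
    # First term: l2 regression of the perspective parameters (Px, Py).
    perspective_loss = F.mse_loss(p_pred, p_true)
    # Second term: cross entropy over the discretized rotation angle
    # (2-degree bins in this paper).
    angle_loss = F.cross_entropy(angle_logits, angle_bins)
    return perspective_loss + lam * angle_loss
\end{verbatim}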
\subsection{Perspective Learning}
\label{subsec:persp}
\begin{figure}[htp]
\centering
\includegraphics[width=0.4\textwidth]{perspective.eps}
\caption{Architecture of DNN learning $P_x$ and $P_y$.}
\label{fig:archpsp}
\end{figure}
In traditional methods, researchers studied the relationship between different indicators and the parameters $P_x$ and $P_y$. These indicators either summarize local features or attempt to find the vanishing point produced by the perspective transformation, which measures the level of parallel distortion. In the estimation of $P_x$ and $P_y$, the two parameters are of the same order of magnitude, hence the Jacobian matrix is no longer ill-conditioned. Therefore, we can numerically approximate the mapping using fully connected neural networks.
Input sample images pass through a few convolution substructures formed by convolution, pooling and batch normalization layers. Features extracted by the CNN are used by fully connected layers to approximate $P_x$ and $P_y$.
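A sketch of such a regressor is given below; the layer widths and depths are illustrative assumptions of ours, the exact architecture being the one of Figure~\ref{fig:archpsp}:
\begin{verbatim}
import torch.nn as nn

class PerspectiveNet(nn.Module):
    # Convolutional substructures followed by fully connected layers
    # regressing the two perspective parameters (Px, Py).
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.BatchNorm2d(32),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.BatchNorm2d(64))
        self.regressor = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 2))  # outputs (Px, Py)

    def forward(self, x):
        return self.regressor(self.features(x))
\end{verbatim}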
\subsection{Rotation Learning}
\label{sec:angle}
\begin{figure}[htp]
\centering
\includegraphics[width=0.4\textwidth]{rotation.eps}
\caption{Architecture of DNN learning $\theta$.}
\label{fig:archrotation}
\end{figure}
To get an accurate prediction, it is intuitive to regress $\theta$ on low-level features as a continuous variable. The $\ell_2$ norm, defined as $||x||_2=\sqrt{\sum |x_i|^2}$, is usually used to learn such a regression. However, after several layers of pooling, straight lines take on a zigzag shape in digital images, and the difference between nearby samples becomes very small on the feature map and hence in the loss value. Therefore this mapping is highly nonlinear and hard to regress with a classical neural network; in particular, it is difficult to identify a proper structure for the hidden layers.
It is a general method to discretize a continuous variable into disjoint intervals, that is, to use interval labels as a surrogate for the continuous value. Although we use discrete labels as output, what we care about is not the classification accuracy but the regression residual error, since we are estimating an ordinal rather than a cardinal value.
Clearly, discretizing the range into an infinite number of intervals is equivalent to continuous regression. We can in fact improve the effectiveness of the classifier as an estimator of $\theta$ by adding another penalty term. To verify this, a penalty term aiming to minimize the distance between ground truth and prediction is added, with a coefficient $\mu$ trading off between the $\ell_2$ loss and the cross entropy. The classification accuracy with this additional loss term shows no significant difference from (\ref{eq:loss}), but the variance of the errors decreases. Cross entropy makes no assumption on the prior distribution, while the $\ell_2$ term partly introduces a Gaussian prior into the estimation. From another point of view, this term penalizes large deviations from the ground truth.
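For concreteness, the discretization may be implemented as below (the angle range $[-90^\circ, 90^\circ)$ is an assumed example; the paper only fixes the 2-degree precision):
\begin{verbatim}
def angle_to_bin(theta_deg, step=2.0, theta_min=-90.0):
    # Map a rotation angle to its interval (class) index.
    return int((theta_deg - theta_min) // step)

def bin_to_angle(bin_idx, step=2.0, theta_min=-90.0):
    # Map a predicted class back to the interval midpoint,
    # i.e. the estimate of theta.
    return theta_min + (bin_idx + 0.5) * step
\end{verbatim}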
\subsection{Supervised Spatial Transformer Network}
\label{subsection:sstn}
\begin{figure}[htp]
\centering
\begin{subfigure}[htp]{0.35\textwidth}
\includegraphics[width=\textwidth]{stn.eps}
\caption{STN}
\label{fig:stn}
\end{subfigure}
\begin{subfigure}[htp]{0.37\textwidth}
\includegraphics[width=\textwidth]{stnsupervised.eps}
\caption{Supervised STN}
\label{fig:sstn}
\end{subfigure}
\caption{STN and Supervised STN. $\Omega$ is the transformation parameter, while $\hat{\Omega}$ is an estimator of $\Omega$.}
\end{figure}
Angle classification in our model need features free from perspective transformation. An end-to-end neural network is more appropriate and usually have better performance. We proposed supervised STN to connect stages to an integral system.
In overcoming the problem of transformation, researchers introduce spatial transformer network\cite{jaderberg2015spatial} to DNN by inserting one or several transformer layer into the stacked layers as shown in Figure~\ref{fig:stn}. The transformer layer is a multilayer neural network whose input is feature map from previous layers' output, while its output is a set of parameters which could describe the transformation it learned. No other information is fed into the layer, and features used for learning only comes from back-propagation gradient of classification loss. After epochs of training, the transformer are capable of transforming feature maps best for classification.
However, its transformation is different from human vision since no information on concept of geometric rectified from human perspective is provided for training. In practice, we find the performance of STN is quite sensitive to the ratio of object area respective to the image size. If the object of interest is small relative to image size, it fails to locate the object, nor estimation of other transformation parameters.
The difference between transformation of STN and human vision is still an obstacle for better recognition and understanding of document images. Supervised STN proposed in this paper shown in Figure~\ref{fig:sstn}, connect the hidden layer of localization network of STN with labels of transformation parameter $\Omega$, and put it as a component of final loss function. This method enables the neural network to obtain an estimator $\hat{\Omega}$ learning rectifications from humans' perspective. In other words, supervised information is provided to the hidden layer of STN, enabling these layers to learn its specified objective. For even larger neural network with more objectives, it can be used as a essential part aiming at transform the features into more proper spaces.
Like STN, several supervised STNs can be inserted into the network with different loss components. If the transformation is not feasible in a single stage, we can decompose it into several stages and assign each stage the corresponding subset of label values. In this way, transformations of high complexity can be approximated and eliminated, helping to build an end-to-end neural system.
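A minimal PyTorch sketch of the supervised STN is given below. The localization-network layout, the layer sizes, and the grid-building helper grid_fn are placeholders, not the exact architecture used in this paper.
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class SupervisedSTN(nn.Module):
    """STN whose localization output is also supervised by
    ground-truth transformation parameters (a sketch)."""

    def __init__(self, in_channels, n_params):
        super().__init__()
        self.localization = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_params),
        )

    def forward(self, feat, grid_fn):
        omega_hat = self.localization(feat)     # estimator of Omega
        grid = grid_fn(omega_hat, feat.size())  # build sampling grid
        out = F.grid_sample(feat, grid, align_corners=False)
        return out, omega_hat

# training: loss = task_loss + lam * F.mse_loss(omega_hat, omega_gt)
\end{verbatim}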
\subsection{Integral Architecture}
\begin{figure}[htp]
\centering
\includegraphics[width=0.48\textwidth]{archss.eps}
\caption{Architecture of DNN learning transformation parameters in 2 stages.}
\label{fig:arch}
\end{figure}
The primary architecture of the model used in this paper is shown in Figure~\ref{fig:arch}. Each stage is a supervised STN using the respective transformation parameters as labels and targets. $P_x$ and $P_y$ are estimated in the first stage using the CNN described in section \ref{subsec:persp}. The low-level feature map produced by convolution is transformed by the supervised STN and fed into the second stage. In back-propagation, if the parameters of the first stage are trained to acceptable accuracy, the feature map can be inversely transformed backward to provide gradients for the transformation parameters of the first stage.
Inside the network, dropout\cite{srivastava2014dropout} is applied to improve generalization. A batch normalization\cite{ioffe2015batch} layer is inserted after each pooling layer and connected with the next convolution substructure, with positive effect. The STN receives the transformation parameters from the regression of $P_x$ and $P_y$ together with the input feature map output by the front layers, then inversely transforms the input tensor with linear interpolation.
For the estimation of $\theta$, the output feature map of the first-stage STN is first processed with convolutions and pooling. The input image then passes through the STNs of both stages in succession, with $P_x$, $P_y$, and $\theta$ given as outputs, to produce the rectified image.
Unlike a traditional pipeline of separate components, this model puts all components together to form an end-to-end DNN. Training such an integral system as a whole benefits from combined feature learning.
\subsection{Convolution Kernel Setting}
Angle detection is a special task. Unlike most tasks with highly diversified samples, such as classifying birds versus chairs, rotations of the same object of arbitrary shape produce nearly the same output feature map. For strings of text in an image, we can treat the text lines as flat rectangles or wide lines. To reduce the number of model parameters, a global average layer\cite{szegedy2015going}, as in GoogleNet, is chosen instead of a dense layer. This structure improves generalization with far fewer parameters.
Consider a convolution kernel that is a tilted banded matrix with direction $\theta$: nonzero elements inside the band and zeros elsewhere. Convolving it with a tilted banded input maximizes the response at the location where the two are tilted in the same direction; if they match perfectly, the maximum element of the convolution output is the maximum among all candidate angles. This initialization helps the system find a better starting point, and this derivation makes it unsurprising that global average pooling with the maximum method outperforms the fully connected model. It also greatly reduces the number of model parameters and increases generalization ability.
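The initialization described above could be realized along the following lines; the kernel size, band width, and number of candidate angles here are illustrative assumptions.
\begin{verbatim}
import numpy as np

def tilted_band_kernel(k, theta, band=1.0):
    """k x k kernel: ones along the direction theta (radians)
    through the center, zeros elsewhere."""
    c = (k - 1) / 2.0
    ys, xs = np.mgrid[0:k, 0:k]
    # distance of each cell from the line through the center
    # with direction theta
    dist = np.abs(-(xs - c) * np.sin(theta) + (ys - c) * np.cos(theta))
    return (dist <= band).astype(np.float32)

# one kernel per candidate angle interval
angles = np.linspace(-np.pi / 2, np.pi / 2, 36, endpoint=False)
bank = np.stack([tilted_band_kernel(9, a) for a in angles])
\end{verbatim}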
Another important hyper-parameter is the kernel size of the angle convolution. Kernels of smaller size have less capacity to represent direction. Drawbacks of a larger $k$ include worse generalization, as more parameters are involved; moreover, the time complexity of convolution, $O(c'cmnk^2)$, grows quadratically in $k$, as does the number of model parameters. We therefore need to balance computation, generalization, and representational power.
\section{Experiments}
\subsection{Dataset}
\label{sec:data}
As far as we know, no public text image rectification dataset is appropriate for this task. The dataset we collect must satisfy several properties: 1) no marks, borders, or boundaries of regular shape exist in the patches unless they occur in the image content; 2) text comes from a large-scale charset with diversified patterns; 3) lines of text vary in font, font size, and other settings. Diversification of illumination conditions and camera parameters is also necessary.
The dataset is divided into caption samples and text samples. A caption sample contains an image and a paragraph of text beneath it, with the same font and font size within each patch. For text samples, we generate random settings for every single line and draw them on each patch.
Six patches are placed in a $2\times3$ grid and drawn on a large sample image. Four different kinds of marks are drawn at the four corners of the sample image, leaving enough space so that the generated patches remain free of interfering marks.
We then printed these samples on different types of paper, including plain A4 paper of different colors, kraft paper, and card paper with different textures. The printed samples were photographed with cameras of different configurations under various illumination conditions.
The original digital images are manually labeled at the corner marks and aligned. In total, 243 images are collected, comprising 158 text and 85 caption samples. Using the preserved locations, we extract 6 patches from each sample. For each patch, we generate several random translation, scaling, rotation, and perspective transformation parameters and apply them to it. These generated parameters and sample patches serve as target values and samples for training and validation. Some sample patches are shown in Figure~\ref{fig:samplecorr}. The only difference between this synthetic dataset and real-world data comes from the interpolation used when applying the transformations, which deep models are not sensitive to.
\subsection{Implementation Details}
\label{sec:impl}
\begin{table}[htp]
\caption{Architecture details}
\label{tab:detailA}
\centering
\begin{tabular}{c|l|c|l}
\hline
\textbf{A} & \textbf{1st stage} & \textbf{B} & \textbf{2nd stage} \\
\hline
1 & conv(2,64,3,3) & 1& conv(128,128,3,3)\\
2 & pool(2,2) & 2 & pool(2,2)\\
3 & batch-norm & 3 & conv(n,128,k,k)\\
4 & conv(2,128,3,3) & 4 & global\\
5 & pool(2,2) & 5 & fc(512)\\
6 & batch-norm & 6 & softmax(n)\\\cline{3-4}
7 & conv(1,128,3,3) & 1 & conv(2,64,3,3)\\
8 & pool(2, 2) & 2 & pool(2,2)\\
9 & batch-norm & 3 & conv(2,128,3,3)\\
10 & conv(1,128,3,3) & 4 & pool(2,2)\\
11 & pool(2, 2)& 5 & conv(128,128,3,3)\\
12 & batch-norm & 6 & pool(2,2) \\
13 & fc(3,1024) & 7 & conv(n,128,k,k)\\
14 & fc(1, 2) & 8 & global \\
15 & STN(15, 8) & 9 & fc(512) \\
& & 10 & softmax(n) \\
\hline
\end{tabular}
\end{table}
We establish the DNN architecture shown in Tab.~\ref{tab:detailA}, where $n$ is the number of angle intervals, conv$(m,n,t,t)$ denotes $m$ connected convolution layers of $n$ channels with kernel size $t$, STN(A, B) denotes an STN layer with A as the transformation parameters and B as the feature map to transform, and $k$ is the chosen kernel size. We use the indexes in Tab.~\ref{tab:detailA} to refer to layers in later descriptions.
Our input images are 256 pixels wide. Fully connected layers act as approximation learners, and we need to balance their scale. It is also necessary to guarantee that the output feature map is not too small for the angle convolution layer to detect angles. We finally choose 4 convolution-pooling layers to keep the number of model parameters appropriate.
A larger kernel size could improve the accuracy of angle estimation, but it reduces generalization and increases training and validation time. We finally choose 9 for B3, and set $\lambda$ to 1.0 to balance the two loss components.
We train the regression network first and reuse the trained parameters for the angle classifier. The choice of learning rate is more art than science: a rate that is too large makes learning oscillate far from the optimum, while one that is too small may leave training stuck in bad local minima\cite{glorot2010understanding}. We carefully tune the learning rate over $\{10^{-4}, 2\times10^{-5}, 10^{-5}\}$.
\subsection{Performance and Analysis}
\label{sec:perf}
The experiments run on a server with a GTX 1080 GPU and a Xeon-2630v3 CPU. For this paper, we use 10000 patches from the dataset, split into 8000 for training, 1000 for validation, and 1000 for testing. We draw $P_x$ and $P_y$ from a uniform distribution on $[-9\times10^{-4}, 9\times10^{-4}]$, which gives a largest parallel distortion of 24 degrees. $P_x$ and $P_y$ are transformed as $10^4\times(P+10^{-3})$, since ReLU can only give a positive response, and are inversely transformed in the STN. Rotation angles of different scales are generated to test whether the learning ability is correlated with the range of the entangled transformation parameters. $T_x$, $T_y$, and $\alpha$ are generated with the guarantee that the patch does not cross the boundaries after all four transformations are applied.
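The parameter scaling just described can be reproduced as follows (a sketch with our own variable names):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# perspective parameters drawn uniformly from [-9e-4, 9e-4]
Px, Py = rng.uniform(-9e-4, 9e-4, size=2)

def encode(p):
    # shift and scale into a positive range representable by ReLU:
    # [-9e-4, 9e-4] maps to [1, 19]
    return 1e4 * (p + 1e-3)

def decode(q):
    # inverse transform applied inside the STN
    return q / 1e4 - 1e-3

assert np.isclose(decode(encode(Px)), Px)
\end{verbatim}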
We conduct many experiments with different model configurations and hyper-parameter settings. Comparison among these settings gives a more detailed understanding of the method and its working principles.
\subsubsection{Perspective Regression}
Some approaches for estimating $P_x$ and $P_y$ require assumptions about the range of $\theta$, or perform better when the range is small. Estimation of $P_x$ and $P_y$ is the fundamental part of the entire system, so we need to check whether the range of $\theta$ has a large effect on it.
We generate datasets with different scales of $\theta$, namely $\{\pi/3,\pi/2,2\pi/3\}$. If the learning capability is limited by the scale of rotation, performance should differ considerably among them. The $\ell_2$ loss and $\ell_1$ bias of the regression results are shown in Tab.~\ref{tab:perspective}.
\begin{table}[h]
\caption{Experiment Result on $P_x$ and $P_y$ Regression}
\label{tab:perspective}
\centering
\begin{tabular}{l|cc|cc}
\hline
\textbf{Expr.} & \multicolumn{2}{c|}{\textbf{$\ell_2$}} & \multicolumn{2}{c}{\textbf{$\ell_1$}} \\
\hline
& train & val & train & val\\
$\pi/3$ & 0.7873 & 1.9140 & 0.0848 & 0.2217 \\
$\pi/2$ & 0.8163 & 1.9370 & 0.0887 & 0.2156 \\
$2\pi/3$ & 0.8407 & 1.8729 & 0.0872 & 0.2152 \\
\hline
\end{tabular}
\end{table}
The figures in Tab.~\ref{tab:perspective} reflect the generalization capability under different angle scales. The outcomes vary, yet only within a regular range. From these results we conclude that deep regression generalizes well across $\theta$ scales. There is still some bias on the validation dataset, however, and whether the estimates are accurate enough for estimating $\theta$ remains to be tested.
\subsubsection{Angle Classification}
\begin{table*}[htp]
\caption{Expr. Angle Classification}
\label{tab:aclass}
\centering
\begin{tabular}{|l|cc|cc|cc|cc|}
\hline
\multirow{2}{*}{\textbf{Expr.}} & \multicolumn{2}{c|}{\textbf{Acc}} & \multicolumn{2}{c|}{\textbf{Var}} & \multicolumn{2}{c|}{\textbf{Top~2}} & \multicolumn{2}{c|}{\textbf{Top~5}} \\\cline{2-9}
& Valid & Test & Valid & Test & Valid & Test & Valid & Test \\\hline
Shared & 49.60\% & 47.00\% & 0.6340 & 0.6860 & 80.0\% & 78.6\% & \textbf{98.7}\% & 99.0\%\\
Independent & 48.40\% & 43.50\% & 0.9650 & 0.9250 & 77.9\% & 78.0\% & 97.3\% & 97.8\%\\
$\ell_2$ Reg. & \textbf{53.50}\% & \textbf{47.10}\% & \textbf{0.5920} & \textbf{0.6690} & \textbf{83.0}\% & \textbf{79.9}\% & 98.5\% & \textbf{99.1}\%\\
\hline
\end{tabular}
\end{table*}
As introduced in section \ref{sec:impl}, training of the angle classifier is based on the model parameters trained in the perspective stage. We design experiments to verify the angle classifier under different settings; the results are shown in Tab.~\ref{tab:aclass}. We choose the parameters with the best benchmark performance according to the training and validation datasets, and test their capability on the test dataset. The models compared include the original shared-kernel model, the independent-kernel model, and the loss with an $\ell_2$ penalty. The consistency and effectiveness of the $\theta$ estimation are analyzed. Furthermore, we investigate the output via top-k accuracy, i.e., the accuracy of the closest prediction within the k largest softmax outputs.
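For reference, the top-k accuracy reported in Tab.~\ref{tab:aclass} can be computed as below (a minimal PyTorch sketch):
\begin{verbatim}
def topk_accuracy(logits, target_bins, k=5):
    """Fraction of samples whose ground-truth interval lies among
    the k largest softmax outputs."""
    topk = logits.topk(k, dim=1).indices               # (B, k)
    hit = (topk == target_bins.unsqueeze(1)).any(dim=1)
    return hit.float().mean().item()
\end{verbatim}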
\paragraph{Shared versus Independent Kernels}
\begin{figure}[tb]
\centering
\includegraphics[width=.42\textwidth]{violins.eps}
\caption{Violin plot of angle classification result. The band draws smoothed distribution, and inside the band is the boxplot of corresponding indicator.}
\label{fig:violin}
\end{figure}
As explained in the previous section, kernel parameter sharing reuses the convolution kernels of the front layers. Besides reducing model parameters, the collaborative training of the two tasks helps the model learn better features for estimating $P_x$, $P_y$, and $\theta$. Comparing the first two rows of Tab.~\ref{tab:aclass}, we find that parameter sharing brings faster convergence and better accuracy: within the same number of training epochs, it reaches higher training accuracy than the independent model, and its performance on the validation and test datasets also shows better generalization.
Moreover, as shown in Tab.~\ref{tab:aclass}, the variance of the prediction error likewise shows that the shared-kernel model is more effective. We illustrate the error distribution with the violin plot in Figure~\ref{fig:violin}: the prediction error is more concentrated around 0 for the shared-kernel model, and Tab.~\ref{tab:aclass} also indicates better robustness in all entries.
This result can be attributed to the difficulty of training a large neural network for angle prediction. Shared kernels let the optimization of the front layers start from a better initial point, whereas in the independent model the gradient propagated to the front layers has little effect, since the differences between samples of neighboring angles are quite small after pooling, making it difficult to learn appropriate features for this task.
\paragraph{Classification With Additional $\ell_2$ Term}
Another experiment focuses on the impact of the $\ell_2$ term. Based on the analysis, we expect adding an $\ell_2$ term to the loss function to penalize large deviations more heavily. From another view, the $\ell_2$ term assumes a Gaussian prior. This constraint, if meaningful, should improve the consistency and effectiveness of the $\theta$ estimation.
As shown in Tab.~\ref{tab:aclass}, within the same training period, although the training results suggest that the model without the $\ell_2$ term performs better, the model with the $\ell_2$ penalty achieves the best results on the validation and test datasets. The outcomes are nevertheless close: the difference on the test dataset is within 3\% at most, and the other indicators in Tab.~\ref{tab:aclass} support a similar conclusion.
This close comparison is reasonable, since the prior knowledge and penalty may not be helpful for learning such a complicated mapping. The choice of $\mu$ has little effect, since with $\mu=0.01$ the cross-entropy component and the estimation of $P_x$ and $P_y$ take only $1/5$ of the loss.
\subsubsection{Integral Performance}
\paragraph{Internal Evolution Mechanism}
\begin{figure*}[htp]
\centering
\includegraphics[width=0.78\textwidth]{featuremap.eps}
\caption{Feature map output by convolution layer.}
\label{fig:featuremap}
\end{figure*}
\label{subsection:inner}
There have always been criticisms of deep learning for its black-box nature and puzzling internal mechanisms. Since much research has rectified each transformation using classical image processing methods, it is worth figuring out how the network evolves to learn rectification and how we can use these observations to understand the DNN's methodology.
We explore the internal mechanism by observing and comparing the feature maps output by the middle layers. Remarkably, even though no explicit segmentation or context information is given to the DNN, it evolves to focus on meaningful areas: some kernels segment the input features into background, text lines, spaces between lines, and other meaningful non-text regions.
Figure~\ref{fig:featuremap} illustrates this with several manually picked but typical components of the output feature maps. The first 4 rows come from the first stage and the last row from the second. The two patches are representative of the dataset: one contains only text lines, while the other is a caption image with lines of description under the picture. From front to rear layers, the kernels gradually learn to focus on different image elements. In rows 1-3, the feature maps describe the image at different scopes, from local to larger scale, from edge detection toward a fuzzy shape-contour segmentation. From the third row of Figure~\ref{fig:featuremap} for the text-line case, we observe that the convolution tries to distinguish regions of text, line spacing, and background. For the caption case, with a complicated background to analyze, it achieves a comparable result.
How the rear layers learn to estimate $P_x$ and $P_y$ is harder to analyze. Some components of the feature map indicate how it works in the text-line case: given the contours and segmentation from the front layers, the network locates the uppermost, bottom, and leftmost borders, which should be perpendicular or parallel lines. The deviation from this proper outlook is used by the fully connected layers to estimate $P_x$ and $P_y$. For the caption case, however, the mechanism is more puzzling.
In the angle estimation stage, after processing by the first STN, the convolution of layer 2 with shared kernels appears to segment the text area more distinctly. In both cases, the text areas differ in value from the rest of the feature map. This implies that even without explicit segmentation labels, after learning the concept of rectification, the DNN understands the roles of different elements in images to some extent.
\paragraph{Case Analysis}
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{good.jpg}
\caption{Some sample cases of rectification. For each group of three image from left to right is the distorted image, perspective rectified, and rotation rectified output.}
\label{fig:good}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{bad.jpg}
\caption{Problematic cases.}
\label{fig:bad}
\end{figure}
In this part, we present both good and bad cases of overall rectification. The cases are chosen from the test datasets and shown in the order of sample, perspective-rectified, and final output. Figure \ref{fig:good} shows some patches that are perfectly rectified, including both caption and text-line patches.
The bad cases in Figure \ref{fig:bad} matter more. We find that our model works well and stably on text-line patches, but for captions it sometimes fails to restore the original outlook. Detailed analysis suggests that the problem lies mainly in the regression performance of the perspective stage: in the caption case, with fewer lines of text, the estimation of $P_x$ and $P_y$ relies more on information from non-text areas. The context images differ and vary in style, as observed in the samples: some are geometric shapes while others are complicated pictures. With such complexity, the regression needs more data and more varied types of context images to learn a more accurate and robust mapping.
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper, a new deep architecture for rectifying images with parallel text lines is proposed, with thorough experiments comparing different configurations and hyper-parameters. Experiments on a newly collected dataset show its effectiveness and robustness.
However, even though the dataset we use contains thousands of text image formats, more diversified data is needed to verify the model's robustness to format variations. Common cases include name cards, course slides, and more complicated text in natural scenes. With more data collected, better generalization can be achieved.
\clearpage
\bibliographystyle{elsarticle-num}
\section{Introduction}
In recent years, text removal has attracted increasing research interests in the computer vision community.
It aims to remove the text and fill the regions with plausible content.
Text removal can help avoid privacy leaks by hiding some private messages such as ID numbers and license plate numbers. Besides, it can be widely used for document restoration in the field of intelligent education.
It is also a crucial prerequisite step for text editing~\cite{wu2019editing,yang2020swaptext,yu2021mask,krishnan2021textstylebrush,shimoda2021rendering} and has wide applications in areas such as augmented reality translation.
Recent text removal methods~\cite{zhang2019ensnet,liu2020erasenet,wang2021simple,tang2021stroke,tursun2019mtrnet} have achieved significant improvements with the development of GAN~\cite{gan,cgan,miyato2018spectral}.
Though the state-of-the-art methods~\cite{liu2020erasenet,wang2021simple,tang2021stroke,tursun2020mtrnet++} have reported promising performance, the restoration for complex backgrounds still remains a main challenge. To solve this problem, some researchers propose to directly predict the text stroke~\cite{tang2021stroke,bian2022scene} and focus text region inpainting only on these stroke regions.
However, text stroke prediction is itself another challenging problem, especially at the image level (with more than one text instance)~\cite{xu2021rethinking,wang2021semi}.
Inspired by previous image inpainting methods~\cite{liu2020rethinking,ren2019structureflow,nazeri2019edgeconnect},
we consider that directly transforming the raw image to a final text-erased image in a unified framework is one of the major causes of inconsistent results for text removal.
This is due to the imbalance between text erasure and the subsequent background restoration: the corruption of text regions during erasure may mislead the reconstruction of the high-frequency textures.
The results with blur and artifacts are shown in Fig. \ref{fig:text} (b).
To address this issue, we propose to mine more efficient context guidance from the existing data in a step-by-step manner to reduce the artifacts of text-erased regions and produce plausible content.
\begin{figure}[t]
\subfigbottomskip=2pt
\subfigcapskip=2pt
\setlength{\abovecaptionskip}{-0.0cm}
\setlength{\belowcaptionskip}{-0.6cm}
\centering
\subfigure[Input]{
\begin{minipage}[t]{0.21\linewidth}
\centering
\includegraphics[width=2.3cm,height=2cm]{./figure/head/intro/mot/input/291.pdf}
\includegraphics[width=2.3cm,height=2cm]{./figure/head/intro/mot/input/322.pdf}
\end{minipage}%
}
\subfigure[w/o Context]{
\begin{minipage}[t]{0.21\linewidth}
\centering
\includegraphics[width=2.3cm,height=2cm]{./figure/head/intro/mot/wo_con/291.pdf}
\includegraphics[width=2.3cm,height=2cm]{./figure/head/intro/mot/wo_con/322.pdf}
\end{minipage}%
}
\subfigure[w Context]{
\begin{minipage}[t]{0.21\linewidth}
\centering
\includegraphics[width=2.3cm,height=2cm]{./figure/head/intro/mot/w_con/pred_291.pdf}
\includegraphics[width=2.3cm,height=2cm]{./figure/head/intro/mot/w_con/pred_322.pdf}
\end{minipage}%
}
\subfigure[GT]{
\begin{minipage}[t]{0.21\linewidth}
\centering
\includegraphics[width=2.3cm,height=2cm]{./figure/head/intro/mot/gt/291.pdf}
\includegraphics[width=2.3cm,height=2cm]{./figure/head/intro/mot/gt/322.pdf}
\end{minipage}%
}
\centering
\caption{Examples of scene text removal, also comparing results with and without context guidance and feature modeling. Zoom in for best view.} \label{fig:text}
\end{figure}
Specifically, we propose a novel text removal model, termed as CTRNet. CTRNet decouples the text removal task into four main components: Text Perception Head, Low-level/High-level Contextual Guidance blocks (LCG, HCG), and a Local-global Content Modeling (LGCM) block, as shown in Fig. \ref{fig:model}. Text Perception Head is firstly introduced to detect the text regions and generate text masks. Subsequently,
the LCG predicts the structure of text-erased images to provide low-level contextual priors,
which is represented by the edge-preserved smoothing method RTV~\cite{xu2012structure}.
Besides, we incorporate an HCG block to learn the high-level discriminative context in latent feature space as another guidance. The structure serves as a local guide for the image encoder, while the high-level context provides global knowledge.
As the filling of text regions should rely not only on the information of the regions themselves and their surroundings but also on global affinity as a reference, CTRNet introduces LGCM through the cooperation of CNNs and a Transformer-Encoder \cite{vaswani2017attention} to extract local features and establish long-term global relationships among pixels, while incorporating context guidance into both the feature modeling and decoding phases. Through such designs, CTRNet captures sufficient contextual information to remove text more thoroughly and restore backgrounds with more visually pleasing textures, as shown in Fig.~\ref{fig:text}~(c).
Extensive experiments on the benchmark datasets, SCUT-EnsText \cite{liu2020erasenet} and SCUT-Syn~\cite{zhang2019ensnet} are conducted to verify the effectiveness of CTRNet. Additionally, qualitative experiment is conducted on an in-house examination paper dataset to verify the generalizability of our model.
Text removal takes the complete text image as input and aims to preserve the original background of text regions, whereas image inpainting directly masks the regions and restores them based only on the surrounding texture. Simply applying image inpainting methods to text removal causes inaccurate background generation.
We conduct experiments comparing our method with state-of-the-art image inpainting models in Sec. 4.5/4.6, which illustrates the practical difference between the two tasks.
We summarize the contributions of this work as follows:
\begin{itemize}
\item We propose to learn both Low-level and High-level Contextual Guidance (LCG, HCG), which we find are important and useful as prior knowledge for text erasure and subsequent background texture synthesis.
\item We propose Local-global Content Modeling blocks (LGCM) to extract local features and capture long-range dependency among the pixels globally.
\item The context guidance is incorporated into LGCM for the feature modeling and decoding phase, which further promotes the performance of CTRNet.
\item Extensive experiments on the benchmark datasets demonstrate the effectiveness of CTRNet not only in removing the text but recovering the background textures as well, significantly outperforming existing SOTA methods.
\end{itemize}
\section{Related work}
\noindent \textbf{Deep learning-based text removal} can be categorized into one-stage methods and two-stage methods. One-stage methods are implemented in an end-to-end manner, requiring models to automatically detect the text regions and remove them in a unified framework. Nakamura et al.~\cite{ste} proposed a patch-based auto-encoder~\cite{bengio2013representation} with skip connections, termed as SceneTextEraser. It was also the first DNN-based text removal method.
Text removal can be also regarded as image-to-image translation.
Following the idea of Pix2pix~\cite{isola2017image}, EnsNet \cite{zhang2019ensnet} adopted four refined losses and employed a local-aware discriminator to maintain the consistency of text-erased regions. Liu et al. \cite{liu2020erasenet} proposed EraseNet by introducing a coarse-to-refinement architecture and an additional segmentation head to help locate the text. MTRNet++ \cite{tursun2020mtrnet++} shared the same spirit with EraseNet, but separately encoded the image content and text mask in two branches. Cho et al. \cite{cho2021detecting} proposed to jointly predict the text stroke and inpaint the background, allowing the model to focus only on the restoration of text stroke regions. Wang et al. \cite{wang2021simple} presented PERT, which contained a novel progressive structure with shared parameters to remove text more thoroughly, and a region-based modification strategy to effectively guide the erasure process only on text regions.
Two-stage methods follow the procedure of detecting the text, removing it, and then filling the background with plausible content.
We further divide them into word-level and image-level.
Word-level methods first crop the text regions according to the detected results, then perform text removal on single text instances~\cite{qin2018automatic,tang2021stroke}. Qin et al. \cite{qin2018automatic} utilized a cGAN~\cite{gan,cgan} with one encoder and two decoders for both text stroke prediction and background inpainting. Tang et al. \cite{tang2021stroke} proposed to predict the text strokes on word images; both strokes and images were then fed into an image inpainting network with Partial Convolution \cite{pc} to generate the text-erased results. Image-level methods, after obtaining the text mask through detection, directly predict the results on the entire image. MTRNet \cite{tursun2019mtrnet}, based on Pix2pix, implemented a text mask as an extra input. The method proposed by Keserwani et al. \cite{keserwani2021text} was similar to MTRNet, but employed an additional local discriminator for better prediction.
Zdenek et al. \cite{Zdenek_2020_WACV} considered the lack of pixel-wise training data and proposed a weak supervision method by introducing a pretrained PSENet \cite{wang2019shape} to detect the text, and then inpainted the text regions through another pretrained image inpainting method \cite{zheng2019pluralistic}. Conrad et al. \cite{conrad2021two} borrowed the concept developed by Zdenek et al. \cite{Zdenek_2020_WACV}, but they proposed to further predict the text stroke before the application of a pretrained EdgeConnect \cite{nazeri2019edgeconnect} for background inpainting. Bian et al. \cite{bian2022scene} proposed a cascaded generative model, which decoupled text removal into text stroke detection and stroke removal.
\begin{figure*}[t]
\centering
\setlength{\abovecaptionskip}{-0.0cm}
\setlength{\belowcaptionskip}{-0.4cm}
\includegraphics[width = 10.5cm, height = 6cm]{./figure/model/model.pdf}
\centering
\caption{The overview of the proposed CTRNet.
} \label{fig:model}
\end{figure*}
\section{Proposed Method}
Fig. \ref{fig:model} shows the pipeline of the proposed CTRNet. First, we introduce the text perception head to detect text regions and generate text masks. To better restore the backgrounds of text regions, we propose to learn more contextual priors from the existing data, including the low-level background structure with LCG and high-level context features with HCG. The structure information serves as a local guide and is directly fed into the image encoder, while the high-level context feature is embedded into the high-dimensional feature space as a global guide via the Incor operation. Finally, we propose LGCM blocks to capture both local features and long-term correlations among all pixels, so that CTRNet can make full use of different levels of information for feature decoding.
\subsection{Text Perception Head}
For scene text removal at image level, simply feeding a text image into a model without any positional indication results in failed, mistaken, and incomplete erasures of text regions~\cite{zhang2019ensnet,liu2020erasenet,wang2021simple}.
Therefore, we introduce a text perception head to help localize the text regions. With the detected results, we generate the corresponding masks and send them, together with the original images, into the subsequent network. We further replace the original 0-1 (hard) mask with a soft mask to help eliminate defects and discontinuities between text and non-text regions.
The procedure for soft mask generation is as follows: (1) the vanilla bounding boxes $B$ are shrunk using the Vatti clipping algorithm \cite{vatti1992generic} with a ratio of 0.9 to obtain $B_{s}$, and dilated with the same offset to obtain $B_{d}$; (2) the value of the soft border is determined by the minimum distance from each pixel in the band between $B_{s}$ and $B_{d}$ to $B_{s}$. Fig. \ref{fig:label} (c) displays an example of the soft mask: only the pixels inside $B_{s}$ are set to 1, while the pixels between $B_{s}$ and $B_{d}$ take values in $(0, 1)$. The effectiveness of the soft mask is verified in Section 4.3; a sketch of the generation procedure is given below.
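The following rough sketch approximates, for simplicity, the Vatti shrinking and dilation with morphological erosion and dilation at a fixed pixel offset, so the details differ from the actual implementation.
\begin{verbatim}
import cv2
import numpy as np

def soft_mask(h, w, boxes, offset=5):
    """1 inside the shrunk region, a ramp between the shrunk and
    dilated borders, 0 outside (an approximation)."""
    hard = np.zeros((h, w), np.uint8)
    for box in boxes:  # box: (N, 2) integer polygon
        cv2.fillPoly(hard, [np.asarray(box, np.int32)], 1)
    kernel = np.ones((2 * offset + 1, 2 * offset + 1), np.uint8)
    shrunk = cv2.erode(hard, kernel)
    dilated = cv2.dilate(hard, kernel)
    # distance to the outer border, normalized to (0, 1) in the band
    dist = cv2.distanceTransform(dilated, cv2.DIST_L2, 3)
    band = dilated.astype(bool) & ~shrunk.astype(bool)
    mask = shrunk.astype(np.float32)
    if band.any():
        mask[band] = np.clip(dist[band] / dist[band].max(), 0.0, 1.0)
    return mask
\end{verbatim}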
\subsection{Contextual Guidance Learning}
\noindent \textbf{Low-level Contextual Guidance (LCG) block:}
Scene text removal aims not only to erase the text but also to restore the backgrounds of text regions and synthesize the corresponding textures. Previous methods~\cite{zhang2019ensnet,liu2020erasenet,tursun2020mtrnet++,wang2021simple} follow an end-to-end training and inference procedure, directly predicting the results from scene text images. However, they suffer from texture artifacts when dealing with complicated backgrounds, as shown in Fig.~\ref{fig:scut-enstext} and \ref{fig:sota_ens}. We propose to first predict the low-frequency structure of the image and take it as low-level guidance for the subsequent network. Inspired by Ren et al. \cite{ren2019structureflow} and Liu et al. \cite{liu2020rethinking}, the structure image is constructed with the edge-preserving smoothing method RTV \cite{xu2012structure}, which removes high-frequency textures so that only sharp edges and the smooth structure remain.
RTV consists of a pixel-wise windowed total variation measure and a windowed inherent variation to remove image texture.
Fig. \ref{fig:label} (d) and (e) display an example of the structure image $S_{in}$ and its ground-truth $S_{gt}$ generated from $I_{in}$ and $I_{gt}$, respectively.
Learning a mapping between the two low-frequency structures $S_{in}$ and $S_{gt}$ is much easier than removing text directly. The structural clues for text regions effectively simplify texture generation and enhance performance by indicating the structural semantics of the text regions, as shown in Fig. \ref{fig:scut-enstext} (e) and (f) in the ablation study.
As shown in Fig. \ref{fig:model}, the LCG block consists of the RTV method and a background structure generator $G_{bg\_s}$.
$G_{bg\_s}$ is an encoder-decoder architecture that takes both the structure $S_{in}$ of the scene text image and the soft mask $M_{s}$ as input and predicts the text-erased background structure $S_{out}$. We take $S_{out}$ as local guidance and feed it directly into the image encoder with $I_{in}$ to encode the image features $F_{s} \in \mathbb R^{\frac{H}{4} \times \frac{W}{4} \times C}$.
\begin{figure}[t]
\subfigbottomskip=-1pt
\subfigcapskip=-1pt
\centering
\subfigure[Input]{
\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[width=1.9cm,height=2cm]{./figure/head/SM/input/488.pdf}\\
\end{minipage}%
}
\subfigure[HM]{
\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[width=1.9cm,height=2cm]{./figure/head/SM/hm/488.pdf}
\end{minipage}%
}
\subfigure[SM]{
\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[width=1.9cm,height=2cm]{./figure/head/SM/sm/488.pdf}\\
\end{minipage}%
}
\subfigure[$S_{in}$]{
\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[width=1.9cm,height=2cm]{./figure/head/SM/structure/488.pdf}\\
\end{minipage}%
}
\subfigure[$S_{gt}$]{
\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[width=1.9cm,height=2cm]{./figure/head/SM/gt_str/488.pdf}\\
\end{minipage}%
}
\subfigure[GT]{
\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[width=1.9cm,height=2cm]{./figure/head/SM/gt/488.pdf}\\
\end{minipage}%
}
\centering
\caption{The basic elements of CTRNet. HM and SM denote hard mask and soft mask, respectively. $S_{in}$ and $S_{gt}$ represent the structure of the input and ground-truth. Zoom in for best view.} \label{fig:label}
\end{figure}
\noindent \textbf{High-level Contextual Guidance (HCG) block:}
In addition to the low-level structure priors, we propose to explore potential high-level contextual guidance in latent feature spaces.
Previous studies~\cite{liu2020rethinking,ren2019structureflow,liu2020erasenet} with Perceptual/Style Loss~\cite{johnson2016perceptual,style}
demonstrate the effectiveness of high-level contextual supervision for image generation and translation. Therefore, we let CTRNet utilize such discriminative context as additional guidance for both text removal and background restoration, instead of taking it merely as supervision for optimization. Inspired by Zhang et al. \cite{SPL}, we incorporate an HCG block into CTRNet to learn high-level context features.
The architecture of HCG block is illustrated in the left-bottom of Fig. \ref{fig:model}. The block consists of two feature encoders ($E_{c}(\cdot)$ and $E_{c}^{'}(\cdot)$), and a Feature Align Module (FAM), as done in \cite{SPL}. $E_{c}(\cdot)$ encodes the concatenation of the original image $I_{in}$ and its soft-mask $M_{s}$ to obtain the features $F_{hc} \in \mathbb R^{\frac{H}{4} \times \frac{W}{4} \times C}$, whereas $E_{c}^{'}(\cdot)$ extracts the context features $F_{hc}^{'} \in \mathbb R^{\frac{H}{4} \times \frac{W}{4} \times C}$ from the paired labels $I_{gt}$. Here, $E_{c}^{'}(\cdot)$ is a classification model, termed as TResNet \cite{ridnik2021asymmetric}. We directly use its pretrained model on the OpenImages datasets \cite{kuznetsova2020open} to extract $F_{hc}^{'}$ with frozen weights during the training procedure. After feature dimension mapping with $1 \times 1$ convolution layers in FAM, feature align loss $L_{align}$ is applied to approximate the distribution of $F_{hc}$ to $F_{hc}^{'}$.
The process can be formulated as
\begin{equation}
\begin{aligned}
\label{fam}
F_{hc}^{'} = E_{c}^{'}(I_{gt}); F_{hc} = E_{c}(I_{in}, M_{s})\\
L_{align} = \left\| F_{hc} - F_{hc}^{'} \right\|_{1} * (1 + \alpha M_{s})
\end{aligned}
\end{equation}
$\alpha$ is set to 2.0. In this way, $F_{hc}$, computed from $I_{in}$, is transferred to contain the context information of the background $I_{gt}$, providing high-level global guidance for feature modeling and decoding.
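Eq. (1) can be sketched in PyTorch as follows; resizing the mask to the feature resolution and reducing to a mean are our assumptions about implementation details.
\begin{verbatim}
import torch
import torch.nn.functional as F

def feature_align_loss(f_hc, f_hc_prime, soft_mask, alpha=2.0):
    """L1 alignment between the encoded features and the frozen
    context-encoder features, up-weighted on text regions."""
    m = F.interpolate(soft_mask, size=f_hc.shape[-2:],
                      mode='bilinear', align_corners=False)
    return (torch.abs(f_hc - f_hc_prime) * (1.0 + alpha * m)).mean()
\end{verbatim}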
\subsection{Local-global Content Modeling (LGCM)}
While erasing text regions and filling them with reasonable background textures, a text removal method must use pixel information not only from the text regions themselves but also from the surrounding and global background. Therefore, we propose a feature content modeling block that operates at both the local (text regions) and global (surrounding and entire background) levels.
As shown in Fig. \ref{fig:model}, the image content features $F_{s} \in \mathbb R^{\frac{H}{4} \times \frac{W}{4} \times C}$, incorporated with the high-level discriminative feature guidance, $F_{hc}$ are sent to LGCM to model the local-global contextual features and enhance their representations. And the right-bottom of Fig. \ref{fig:model} displays the architecture of a single LGCM block.
CNNs operate locally with a fixed window size (e.g. 3$\times$3), effectively extracting features of specific regions and establishing relationships between the pixels in each local window. Therefore, four stacked vanilla $4 \times 4$ convolution layers are used for local content modeling.
In addition, the convolutions downsample the features, reducing the computation required for the subsequent global modeling operation.
For global content modeling, we apply the Transformer-Encoder \cite{vaswani2017attention} as the basic module, which effectively captures global interactions between pixels across the whole feature map and models their long-range dependency. Two deconvolution layers then upsample the modeled features and bring in the inductive bias of CNNs \cite{liang2021swinir}.
LGCM follows an iterative process with $k$ stages ($k=8$, set empirically following \cite{SPL}).
At the final convolution of each stage, $F_{hc}$ is incorporated into the LGCM with ResSPADE \cite{park2019semantic,SPL}.
The details of LGCM and ResSPADE are presented in the supplementary materials. The output of the $i$-th LGCM is denoted $F_{l_{i}}$ ($F_{l_{0}} = F_{s}$).
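A single LGCM block could be sketched as below; the channel count, depths, number of heads, and the residual connection are placeholders of this sketch rather than the exact configuration of CTRNet.
\begin{verbatim}
import torch.nn as nn

class LGCMBlock(nn.Module):
    """Local-global content modeling: strided convolutions for local
    features, a Transformer encoder for long-range dependency, and
    deconvolutions to restore resolution (a sketch)."""

    def __init__(self, c=256, n_heads=8, n_layers=2):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(c, c, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(c, c, 4, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=c, nhead=n_heads,
                                           batch_first=True)
        self.globl = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(c, c, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(c, c, 4, stride=2, padding=1),
        )

    def forward(self, x):
        f = self.local(x)                      # (B, C, h, w)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, h*w, C)
        tokens = self.globl(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.up(f) + x  # residual (an assumption of this sketch)
\end{verbatim}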
Finally, the Feature Decoder reconstructs the final text-erased output by decoding both the features $F_{l_{8}}$ from the final ($8$th) LGCM block and the shallow content features $F_{s}$, which can be formulated as
\begin{equation}
\label{feature_decoder}
I_{out} = H_{fd}(F_{l_{8}} + F_{s})
\end{equation}
\subsection{Training Objective}
We adopt the following losses to train our text removal network, including structure loss, multi-scale text-aware reconstruction loss, perceptual loss, style loss, and adversarial loss.
\noindent \textbf{Structure loss} The structure loss is used to measure the $L_{1}$ distance between the background structure output $ S_{out} $ and the ground truth $ S_{gt}$, which is defined by:
\begin{equation}
\label{structure}
L_{str} = \left\| S_{gt} - S_{out}\right\|_{1} * (1 + \gamma M_{s})
\end{equation}
\noindent $(1 + \gamma M_{s})$ assigns higher weight to text regions; $\gamma$ is set to 3.0.
\noindent \textbf{Multi-scale text-aware reconstruction loss} The $L_{1}$-norm difference between the output and the ground truth is used. We first predict text-removed outputs at multiple scales, then assign higher weight to text regions when computing the loss:
\begin{equation}
\begin{aligned}
\label{msr}
L_{msr} = \sum_{n} \left\| (I_{out_{n}} - I_{gt_{n}}) \right\|_{1} * (1 + \theta_{n} M_{s})
\end{aligned}
\end{equation}
\noindent $n$ denotes the $n$-th output, at the scales of $\frac{1}{16}$, $\frac{1}{4}$, and 1 of the input. $\theta_{1}, \theta_{2}, \theta_{3}$ are set to ${2,3,4}$, respectively.
\noindent \textbf{Perceptual loss} Besides the low-level image-to-image supervision of the reconstruction loss, we adopt a perceptual loss \cite{johnson2016perceptual} to capture high-level semantics and approximate human perception of image quality. Both the direct output $I_{out}$ and the composited text-removed image $I_{com}$ are included as loss terms, and the structure output $ S_{out} $ is also taken into consideration.
\begin{equation}
\setlength{\belowdisplayskip}{3pt}
\begin{aligned}
\label{peloss1}
I_{com} = I_{in} * (1 - M_{s}) + I_{out} * M_{s}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\label{peloss}
L_{per} = \sum_{i} \sum_{j} \left\| {\phi}_{j}(I_{i}) - {\phi}_{j}(I_{gt})\right\|_{1}
+ \sum_{j} \left\| {\phi}_{j}(S_{out}) - {\phi}_{j}(S_{gt})\right\|_{1}
\end{aligned}
\end{equation}
\noindent where $I_{i}$ ranges over $I_{out}$ and $I_{com}$, and ${\phi}_{j}(.)$ denotes the activation map of the $j$-th ($j =1,2,3$) pooling layer of VGG-16 pretrained on ImageNet \cite{deng2009imagenet}.
\noindent \textbf{Style loss} We also utilize a style loss to reduce artifacts in the generated results. The style loss \cite{style} constructs a Gram matrix $Gr(.)$ from each activation map selected for the perceptual loss, and is defined as
\begin{equation}
\begin{aligned}
\label{styloss}
L_{style} = \sum_{i} \sum_{j} \frac {\left\| {Gr}_{j}(I_{i}) - {Gr}_{j}(I_{gt})\right\|_{1}}{H_{j} W_{j} C_{j}}
+ \sum_{j} \frac {\left\| {Gr}_{j}(S_{out}) - {Gr}_{j}(S_{gt})\right\|_{1}}{H_{j} W_{j} C_{j}}
\end{aligned}
\end{equation}
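The Gram-matrix computation underlying this loss can be sketched as follows; here the $H_{j} W_{j} C_{j}$ normalization is folded into the Gram computation, so this is an illustrative sketch rather than the exact implementation.
\begin{verbatim}
import torch

def gram_matrix(feat):
    """Size-normalized Gram matrix of an activation map."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(feats_out, feats_gt):
    """Sum of L1 distances between Gram matrices of corresponding
    activation maps."""
    return sum(torch.abs(gram_matrix(a) - gram_matrix(b)).mean()
               for a, b in zip(feats_out, feats_gt))
\end{verbatim}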
\noindent \textbf{Adversarial loss}
The adversarial loss encourages our model to generate more plausible details in the final text-removed results. We define it as:
\begin{equation}
\begin{aligned}
\label{ganloss}
L_{adv} = E_{x\sim {P}_{\text {data}}(x)}\left [\log D(x) \right ] + E_{x\sim {P}_{\text {z}(z)}}\left [\log\left (1 - D(G(z)) \right ) \right ]
\end{aligned}
\end{equation}
$z$ is the input $I_{in}$ and $x$ represents the corresponding ground-truth $I_{gt}$.
\noindent \textbf{Total loss} The overall loss function for our text removal network is defined as:
\begin{equation}
\begin{aligned}
\label{totalloss}
L_{total} = \lambda_{al} L_{align} + \lambda_{str}L_{str} + \lambda_{m}L_{msr} \\
+ \lambda_{p}L_{per} +\lambda_{s}L_{style} + \lambda_{a}L_{adv}
\end{aligned}
\end{equation}
\noindent $\lambda_{al}, \lambda_{str}, \lambda_{m}, \lambda_{p}, \lambda_{s}, \lambda_{a}$ are the trade-off parameters.
In our implementation, we empirically set $\lambda_{al}=1.0, \lambda_{str}=2.0, \lambda_{m}=10.0, \lambda_{p}=0.01, \lambda_{s}=120, \lambda_{a}=1.0$.
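Putting the terms together, the weighted total amounts to the following trivial sketch with the weights above:
\begin{verbatim}
weights = dict(al=1.0, str=2.0, m=10.0, p=0.01, s=120.0, a=1.0)

def total_loss(losses):
    """Weighted sum; `losses` maps the keys above to scalar tensors."""
    return sum(weights[k] * losses[k] for k in weights)
\end{verbatim}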
\section{Experiments}
\subsection{Datasets and Evaluation Metrics}
\noindent{\textbf{Datasets}}
To evaluate the effectiveness of our proposed CTRNet, we conduct experiments on the two widely used benchmarks, SCUT-Syn \cite{zhang2019ensnet} and SCUT-EnsText~\cite{liu2020erasenet}.
\noindent \textbf{(1) SCUT-Syn}:
SCUT-Syn contains a training set of 8,000 images and a testing set of 800 images. It is a synthetic dataset generated with the method of \cite{gupta2016synthetic}. The background images are mainly collected from ICDAR-2013 \cite{ic13} and MLT-2017 \cite{nayef2017icdar2017}, with the text instances manually erased.
\noindent \textbf{(2) SCUT-EnsText}:
SCUT-EnsText is a comprehensive real-world dataset with 2,749 images for training and 813 images for testing. These images are collected from public scene text benchmark, including
ICDAR-2013 \cite{ic13}, ICDAR-2015 \cite{karatzas2015icdar}, MS COCO-Text \cite{veit2016coco},
SVT \cite{svt}, MLT-2017 \cite{nayef2017icdar2017}, MLT-2019 \cite{mlt2019}, and ArTs \cite{chng2019icdar2019},
which consists of SCUT-CTW1500 \cite{yuliang2017detecting} and Total-Text \cite{ch2017total}. All the images are carefully annotated with Photoshop.
~\\
\noindent{\textbf{Evaluation metrics:}}
To comprehensively evaluate the performance of CTRNet, we utilize both Image-Eval and Detection-Eval, as in EraseNet \cite{liu2020erasenet}. (1) Image-Eval includes the following image-quality metrics: (a) peak signal-to-noise ratio (PSNR); (b) multi-scale structural similarity (MSSIM); (c) mean square error (MSE); (d) Fréchet Inception Distance (FID) \cite{heusel2017gans}. Higher PSNR and MSSIM and lower MSE and FID denote better results. (2) Detection-Eval evaluates the Recall (R), Precision (P), F-measure (F), TIoU-Recall (TR), TIoU-Precision (TP), and TIoU-F-measure (TF) of the results under the protocols of ICDAR 2015 \cite{karatzas2015icdar} and T-IoU \cite{liu2019tightness}. CRAFT \cite{Baek2019Character} serves as the text detector for evaluation. Lower R, P, and F indicate that more text has been removed.
\subsection{\textbf{Implement details}}
We utilize Pixel Aggregation Network (PAN)~\cite{wang2019efficient} as the text perception head of CTRNet. The input size is set to $512 \times 512$. The Adam solver \cite{kingma2014adam} is used for optimization, with $\beta$ set to (0.0, 0.9) by default. The batch size is set to 2. All experiments are conducted on a workstation with two NVIDIA 2080TI GPUs. More training details and the architecture of each component are provided in the supplementary materials.
\begin{table*}[t]
\caption{Ablation Study on SCUT-EnsText. MSSIM and MSE are represented by $\%$ in the table.}
\label{table:aba}
\centering
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\small
\begin{tabular}{c|cccc|c|c|c|c|c|c|c|c}
\hline \Xhline{0.3pt}
\multirow{2}*{ } & \multicolumn{4}{c|}{Components} & \multicolumn{4}{c|}{Evaluation on $I_{out}$} & \multicolumn{4}{c}{Evaluation on $I_{com}$}\\
\cline{2-13}
& HCG & LGCM & SM & LCG & PSNR & MSSIM & MSE & FID & PSNR & MSSIM & MSE & FID \\
\Xhline{0.3pt}
\hline
baseline & - & - & - & - & 32.39 & 95.45 & 0.13 & 20.75 & 33.21 & 95.52 & 0.11 & 22.15 \\
\hline
Ours+ & $\checkmark$ & & & & 32.90 & 96.62 & 0.11 & 17.40 & 34.88 & 97.09 & 0.10 & 19.42 \\
Ours+ & $\checkmark$ & $\checkmark$ & & & 35.10 & 97.36 & 0.09 & 14.36 & 35.30 & 97.20 & 0.09 & 17.91 \\
Ours+ & $\checkmark$ & $\checkmark$ & $\checkmark$ & & 35.16 & \textbf{97.38} & 0.09 & 14.33 & 35.83 & \textbf{97.42} & 0.09 & 15.02 \\
Ours+ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & \textbf{35.20} & 97.36 & \textbf{0.09} & \textbf{13.99} & \textbf{35.85} & 97.40 & \textbf{0.09} & \textbf{14.57} \\ \hline \Xhline{0.3pt}
\end{tabular}
\end{table*}
\begin{figure*}[t]
\subfigbottomskip=2pt
\subfigcapskip=2pt
\setlength{\abovecaptionskip}{-0.0cm}
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\subfigure[]{
\begin{minipage}[t]{0.127\linewidth}
\centering
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/input/105.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/input/149.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/input/img_256.pdf}\\
\end{minipage}%
}%
\subfigure[]{
\begin{minipage}[t]{0.121\linewidth}
\centering
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/baseline/105_2.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/baseline/149_2.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/baseline/img_256.pdf}\\
\end{minipage}%
}
\subfigure[]{
\begin{minipage}[t]{0.121\linewidth}
\centering
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/spl/105_2.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/spl/149_2.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/spl/pred_img_256.pdf}\\
\end{minipage}%
}
\subfigure[]{
\begin{minipage}[t]{0.121\linewidth}
\centering
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/te_spade_sm/105_2.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/te_spade_sm/149_2.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/te_spade_sm/pred_img_256.pdf}\\
\end{minipage}%
}
\subfigure[]{
\begin{minipage}[t]{0.121\linewidth}
\centering
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/final/105_2.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/final/149_2.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/final/pred_img_256.pdf}\\
\end{minipage}%
}
\subfigure[]{
\begin{minipage}[t]{0.121\linewidth}
\centering
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/structure/pred_105.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/structure/pred_149.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/structure/pred_img_256.pdf}\\
\end{minipage}%
}
\subfigure[]{
\begin{minipage}[t]{0.12\linewidth}
\centering
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/gt/105.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/gt/149.pdf}\\
\includegraphics[width=1.6cm,height=1.8cm]{./figure/abalation/gt/img_256.pdf}\\
\end{minipage}%
}
\centering
\caption{Qualitative results for ablation studies on HCG, LGCM, and LCG. (a) The input images; (b) Baseline results; (c) Baseline + HCG; (d) Baseline + HCG + LGCM; (e) Baseline + HCG + LGCM + LCG; (f) The structure output from LCG for model (e); (g) The ground truth.
Zoom in for best view.} \label{fig:scut-enstext}
\end{figure*}
\subsection{Ablation Study}
In this section, we conduct experiments on SCUT-EnsText to verify the contributions of different components in CTRNet.
Our baseline model is implemented by a Pix2pix-based model, which takes both the images and the corresponding masks as input.
As the text perception head is frozen when training the other components, the detected text regions remain the same in each experiment during inference; thus, we only employ Image-Eval to evaluate the performance.
Apart from the direct output $I_{out}$, we also paste the erased text regions back to the input images based on the detected results to obtain $I_{com}$. The quantitative results for both outputs are presented in Table \ref{table:aba}. The qualitative results are displayed in Fig. \ref{fig:scut-enstext}.
Besides, we evaluate each loss item and their corresponding hyper-parameters, and the results are presented in our supplement materials.
\noindent \textbf{HCG:} The HCG block aims to learn high-level discriminative context feature representations, which can effectively guide feature modeling and decoding. As shown in Table \ref{table:aba}, incorporating HCG into the modeling and decoding phases with ResSPADE blocks yields significant improvement on all metrics: 0.51, 1.17, 0.02, and 3.35 for $I_{out}$ and 1.67, 1.57, 0.01, and 2.73 for $I_{com}$ in PSNR, MSSIM, MSE, and FID, respectively. The qualitative results in Fig. \ref{fig:scut-enstext} also illustrate the effect of this component: compared with the baseline results in Fig. \ref{fig:scut-enstext} (b), the results in Fig. \ref{fig:scut-enstext} (c) indicate that the HCG block helps generate a more plausible background and removes more artifacts from the output.
\noindent \textbf{LGCM:} As shown in Table \ref{table:aba}, incorporating our LGCM brings improvements of 2.20, 0.74, 0.02, and 3.04 for $I_{out}$ in PSNR, MSSIM, MSE, and FID, respectively, and 0.42, 0.11, 0.01, and 1.51 for $I_{com}$. This remarkable gain benefits from both the local and global modeling of the features and the learned context prior, which capture not only the long-range dependency among pixels across the feature maps but also their relationships within fixed windows; LGCM thus enables CTRNet to take advantage of both local and global information. Qualitative results are presented in Fig. \ref{fig:scut-enstext} (d).
In comparison, the outputs of the model without LGCM exhibit obvious defects in the text regions (the top and bottom rows of Fig. \ref{fig:scut-enstext} (c)), while those with LGCM are more favorable; some mistaken erasure remains (e.g. the bottom of Fig. \ref{fig:scut-enstext} (d)), but the restored background is smoother than in (c).
Besides, thanks to the long-range dependency, text can be removed more thoroughly even under incorrect detection results, as shown in the middle rows of Fig. \ref{fig:scut-enstext} (c) and (d). We further discuss the number of LGCM blocks in the supplementary materials.
\begin{figure}[t]
\subfigbottomskip=2pt
\subfigcapskip=2pt
\setlength{\belowcaptionskip}{-0.1cm}
\setlength{\abovecaptionskip}{-0.0cm}
\centering
\subfigure[Input]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\includegraphics[width=2.3cm,height=1.7cm]{./figure/sota/ENS/input/328.pdf}\\
\includegraphics[width=2.3cm,height=1.7cm]{./figure/sota/ENS/input/img_363.pdf}\\
\end{minipage}%
}
\subfigure[GT]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\includegraphics[width=2.3cm,height=1.7cm]{./figure/sota/ENS/gt/328.pdf}\\
\includegraphics[width=2.3cm,height=1.7cm]{./figure/sota/ENS/gt/363.pdf}\\
\end{minipage}%
}
\subfigure[EN\cite{liu2020erasenet}]{
\begin{minipage}[t]{0.188\linewidth}
\centering
\includegraphics[width=2.4cm,height=1.7cm]{./figure/sota/ENS/erasenet/328.pdf}\\
\includegraphics[width=2.4cm,height=1.7cm]{./figure/sota/ENS/erasenet/img_363.pdf}\\
\end{minipage}%
}
\subfigure[Stroke\cite{tang2021stroke}]{
\begin{minipage}[t]{0.188\linewidth}
\centering
\includegraphics[width=2.4cm,height=1.7cm]{./figure/sota/ENS/tang/img_828.pdf}\\
\includegraphics[width=2.4cm,height=1.7cm]{./figure/sota/ENS/tang/img_363.pdf}\\
\end{minipage}%
}
\subfigure[Ours]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\includegraphics[width=2.3cm,height=1.7cm]{./figure/sota/ENS/ours/pred_328.pdf}\\
\includegraphics[width=2.3cm,height=1.7cm]{./figure/sota/ENS/ours/pred_img_363.pdf}\\
\end{minipage}%
}
\centering
\caption{Qualitative results on SCUT-EnsText for comparing our model with previous scene text removal methods. EN denotes EraseNet~\cite{liu2020erasenet} and Stroke denotes the method proposed by Tang et al.~\cite{tang2021stroke}. Zoom in for best view.} \label{fig:sota_ens}
\end{figure}
\begin{table*}[t]
\centering
\setlength{\belowcaptionskip}{0.1cm}
\caption{Comparison with state-of-the-art methods on SCUT-EnsText. Methods marked with ``*'' use text masks generated from the GT instead of the detected results. MSSIM and MSE are given in $\%$. Bold indicates the best result and underline the second best. }
\label{table:sota_enstext}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{ c | c c c c | c c c | c c c }
\hline \Xhline{0.3pt}
\multirow{2}{*}{Methods} & \multicolumn{4}{c|}{Image-Eval} & \multicolumn{6}{c}{Detection-Eval} \\
\cline{2-11}
& PSNR & MSSIM & MSE & FID & R & P & H & T-R & T-P & T-H \\
\hline \Xhline{0.3pt}
Original & - & - & - & - & 69.5 & 79.4 & 74.1 & 50.9 & 61.4 & 55.7 \\
\hline
Pix2pix\cite{isola2017image} & 26.70 & 88.56 & 0.37 & 46.88 &35.4 & 69.7 & 47.0 & 24.3 & 52.0 & 33.1 \\
\hline
STE\cite{ste} & 25.46 & 90.14 & 0.47 & 43.39 & 5.9 & \underline{40.9} & 10.2 & 3.6 & \underline{28.9} & 6.4 \\
\hline
EnsNet\cite{zhang2019ensnet} & 29.54 & 92.74 & 0.24 & 32.71 &32.8 & 68.7 & 44.4 & 50.7 & 22.1 & 30.8 \\
\hline
EraseNet\cite{liu2020erasenet} & 32.30 & 95.42 & 0.15 & 19.27 & 4.6 & 53.2 & 8.5 & 2.9 & 37.6 & 5.4 \\
\hline
PERT\cite{wang2021simple} & \underline{33.25} & \underline{96.95} & \underline{0.14} & - & \underline{2.9} & 52.7 & \underline{5.4} & \underline{1.8} & 38.7 & \underline{3.5} \\ \hline
Ours ($I_{out}$) & \textbf{35.20} & \textbf{97.36} & \textbf{0.09} & \textbf{13.99} & \textbf{1.4} & \textbf{38.4} & \textbf{2.7} & \textbf{0.9} & \textbf{28.3} & \textbf{1.7} \\
\hline \Xhline{0.3pt}
Tang et al.\cite{tang2021stroke} & \underline{35.34} & 96.24 & 0.09 & - & 3.6 & - & - & - & - & - \\
\hline
Ours ($I_{com}$) & \textbf{35.85} & \textbf{97.40} & \textbf{0.09} & \textbf{14.57} & \textbf{1.7} & \textbf{40.1} & \textbf{3.3} & \textbf{1.1} & \textbf{29.4} & \textbf{2.1} \\
\hline \Xhline{0.3pt}
Tang et al.*\cite{tang2021stroke} & \underline{37.08} & 96.54 & \textbf{0.05} & - & - & - & - & - & - & - \\
\hline
Ours* ($I_{com}$) & \textbf{37.20} & \textbf{97.66} & \underline{0.07} & \textbf{11.72} & - & - & - & - & - & - \\
\hline \Xhline{0.3pt}
\end{tabular}}
\end{table*}
\noindent \textbf{LCG:} According to the results in Table \ref{table:aba}, under the same experimental setting, CTRNet with LCG yields a slight improvement in PSNR and FID of approximately 0.03 and 0.40 on average, respectively, for both $I_{out}$ and $I_{com}$, while the other metrics remain comparable. One of the basic challenges of scene text removal is background restoration; the generated structure indicates the clues of the text-erased regions and thus provides guidance for texture synthesis of the background. Fig. \ref{fig:scut-enstext} (e) and (f) show the outputs of CTRNet with LCG and the generated background structure. With the background structure, our text removal network restores more natural background textures, as indicated by the red boxes in the figures. Besides, as shown in the bottom row of Fig. \ref{fig:scut-enstext} (e), even when the detection results are wrong, LCG helps retain the corresponding regions and predict more reasonable contents than the alternatives.
\noindent \textbf{Soft Mask:}
The soft mask mainly aims to eliminate the discontinuity and inconsistency at the junction of text/non-text regions in the output. It achieves only a slight improvement on $I_{out}$, but a significant one on $I_{com}$, by 0.53 in PSNR and 0.22 in MSSIM. Qualitative results of $I_{com}$ comparing the hard (0--1) mask and the soft mask are shown in the supplementary file. With the soft mask, the output preserves smoother edges between text and non-text regions. Besides, since the soft mask is slightly expanded, the texts are removed more completely and thoroughly.
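As a rough illustration of the compositing step, the sketch below blends the model output and the input with a dilated, blurred text mask. This is a hedged sketch: the paper's actual soft mask may be produced differently, and the function and parameter names are hypothetical.
\begin{verbatim}
# Hedged sketch of soft-mask composition; the actual soft mask
# in the paper may be generated differently.
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def composite(i_in, i_out, text_mask, dilate=3, sigma=2.0):
    """i_in, i_out: HxWx3 floats in [0,1]; text_mask: HxW binary."""
    m = binary_dilation(text_mask, iterations=dilate)
    m = gaussian_filter(m.astype(np.float32), sigma)  # soft edges
    m = m[..., None]               # broadcast over color channels
    # Text regions come from the model output, the rest from the
    # input; the smooth transition suppresses visible seams.
    return m * i_out + (1.0 - m) * i_in
\end{verbatim}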
\begin{figure}[t]
\subfigbottomskip=2pt
\subfigcapskip=2pt
\setlength{\belowcaptionskip}{-0.1cm}
\setlength{\abovecaptionskip}{-0.0cm}
\centering
\subfigure[Input]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\includegraphics[width=2.3cm,height=1.8cm]{./figure/sota/SYN/input/img_353.pdf}\\
\includegraphics[width=2.3cm,height=1.8cm]{./figure/sota/SYN/input/img_565.pdf}\\
\end{minipage}%
}
\subfigure[GT]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\includegraphics[width=2.3cm,height=1.8cm]{./figure/sota/SYN/gt/img_353.pdf}\\
\includegraphics[width=2.3cm,height=1.8cm]{./figure/sota/SYN/gt/img_565.pdf}\\
\end{minipage}%
}
\subfigure[EN\cite{liu2020erasenet}]{
\begin{minipage}[t]{0.188\linewidth}
\centering
\includegraphics[width=2.4cm,height=1.8cm]{./figure/sota/SYN/erasenet/img_353.pdf}\\
\includegraphics[width=2.4cm,height=1.8cm]{./figure/sota/SYN/erasenet/img_565.pdf}\\
\end{minipage}%
}
\subfigure[Stroke\cite{tang2021stroke}]{
\begin{minipage}[t]{0.188\linewidth}
\centering
\includegraphics[width=2.4cm,height=1.8cm]{./figure/sota/SYN/tang/img_353.pdf}\\
\includegraphics[width=2.4cm,height=1.8cm]{./figure/sota/SYN/tang/img_565.pdf}\\
\end{minipage}%
}
\subfigure[Ours]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\includegraphics[width=2.4cm,height=1.8cm]{./figure/sota/SYN/ours/pred_img_353.pdf}\\
\includegraphics[width=2.4cm,height=1.8cm]{./figure/sota/SYN/ours/pred_img_565.pdf}\\
\end{minipage}%
}
\centering
\caption{Qualitative results on SCUT-Syn for comparing our model with previous scene text removal methods. EN denotes EraseNet~\cite{liu2020erasenet} and Stroke denotes the method proposed by Tang et al.~\cite{tang2021stroke}. Zoom in for best view.} \label{fig:sota_syn}
\end{figure}
\begin{table*}[t]
\centering
\caption{Comparison with state-of-the-art methods on SCUT-Syn. MSSIM and MSE are reported in $\%$. Bold indicates SOTA, while underline indicates second best.}
\label{table:sota_syn}
\setlength{\tabcolsep}{1.3mm}{
\begin{tabular}{ c | c c c c }
\hline \Xhline{0.3pt}
\multirow{2}{*}{Methods} & \multicolumn{4}{c}{Image-Eval} \\
\cline{2-5}
& PSNR & MSSIM & MSE & FID \\
\hline \Xhline{0.3pt}
Pix2pix\cite{isola2017image} & 26.76 & 91.08 & 0.27 & 47.84 \\
\hline
STE\cite{ste} & 25.40 & 90.12 & 0.65 & 46.39 \\
\hline
EnsNet\cite{zhang2019ensnet} & 37.36 & 96.44 & 0.21 & - \\
\hline
EnsNet (reimplemented)\cite{zhang2019ensnet} & 36.23 & 96.76 & 0.04 & 19.96 \\
\hline
EraseNet\cite{liu2020erasenet} & 38.32 & 97.67 & 0.02 & 9.53 \\
\hline
MTRNet++\cite{tursun2020mtrnet++} & 34.55 & 98.45 & \underline{0.04} & - \\
\hline
Zdenek et al.\cite{Zdenek_2020_WACV} & 37.46 & 93.64 & - & - \\
\hline
Conrad et al.\cite{conrad2021two} & 32.97 & 94.90 & - & - \\
\hline
PERT\cite{wang2021simple} & \underline{39.40} & \underline{97.87} & 0.02 & - \\
\hline
Tang et al.\cite{tang2021stroke} & 38.60 & 97.55 & 0.02 & - \\
\hline
Ours & \textbf{41.28} & \textbf{98.50} & \textbf{0.02} & \textbf{3.84} \\
\hline \Xhline{0.3pt}
\end{tabular}}
\end{table*}
\subsection{Comparison with the State-of-the-arts}
In this section, we conduct experiments to evaluate the performance of CTRNet and the relevant SOTA methods on scene text removal on both SCUT-EnsText and SCUT-Syn. The quantitative results on SCUT-EnsText are given in Table \ref{table:sota_enstext}, and
the quantitative results on SCUT-Syn are given in Table \ref{table:sota_syn}.
The results on these two datasets demonstrate that our proposed model outperforms existing state-of-the-art methods on both Image-Eval and Detection-Eval, indicating that our model can effectively remove the text in the images while restoring more reasonable background textures. Since only the results of Tang et al.~\cite{tang2021stroke} preserve the non-text regions of the input (i.e. $I_{com}$), while the others are the direct model output $I_{out}$, we evaluate both settings for a fair comparison.
The qualitative comparison with other methods on SCUT-EnsText is shown in Fig. \ref{fig:sota_ens}, and for SCUT-Syn in Fig. \ref{fig:sota_syn}.
In Fig. \ref{fig:sota_ens},
the results generated by EraseNet~\cite{liu2020erasenet} and Tang et al.~\cite{tang2021stroke} still contain artifacts and discontinuities on complex text images.
By utilizing local-global content modeling and contextual guidance at different levels, our model predicts more realistic textures for the text regions with significantly fewer noticeable inconsistencies.
Besides, as shown in Fig. \ref{fig:sota_syn}, our model also generates higher-quality results with more visually pleasing and plausible contents on synthetic data. More comparison cases are given in the supplementary materials.
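For reference, the two simplest Image-Eval metrics in the tables can be computed as in the generic sketch below; MSSIM and FID require their standard reference implementations, and the $[0,1]$ scaling is our assumption.
\begin{verbatim}
# PSNR and MSE between a text-removed output and the ground
# truth; assumes float images scaled to [0, 1].
import numpy as np

def psnr_and_mse(pred, gt):
    mse = float(np.mean((pred - gt) ** 2))
    psnr = 10.0 * np.log10(1.0 / mse) if mse > 0 else float("inf")
    return psnr, 100.0 * mse   # the tables report MSE in percent
\end{verbatim}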
\begin{table}[t]
\centering
\caption{Comparison with state-of-the-art image inpainting methods on SCUT-EnsText. MSSIM and MSE are reported in $\%$. }
\label{table:sota_inpaint}
\setlength{\tabcolsep}{1.3mm}{
\begin{tabular}{ c | c c c c }
\hline \Xhline{0.3pt}
\multirow{2}{*}{Methods} & \multicolumn{4}{c}{Image-Eval} \\
\cline{2-5}
& PSNR & MSSIM & MSE & FID \\
\hline \Xhline{0.3pt}
CTSDG\cite{Guo_2021_ICCV} & 33.10 & 95.55 & 0.14 & 20.01 \\
\hline
SPL\cite{SPL} &35.41 & 97.39 & 0.07 & 17.85 \\
\hline
Ours & \textbf{37.20} & \textbf{97.66} & \textbf{0.07} & \textbf{11.72} \\
\hline \Xhline{0.3pt}
\end{tabular}}
\end{table}
\begin{figure}[t]
\subfigbottomskip=2pt
\subfigcapskip=2pt
\setlength{\abovecaptionskip}{-0.0cm}
\setlength{\belowcaptionskip}{-0.2cm}
\centering
\subfigure[Input]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/input/51.pdf}\\
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/input/img_422.pdf}\\
\end{minipage}%
}
\subfigure[GT]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/gt/51.pdf}\\
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/gt/img_422.pdf}\\
\end{minipage}%
}
\subfigure[CTDSG\cite{Guo_2021_ICCV}]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/ctdsg/51.pdf}\\
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/ctdsg/img_422.pdf}\\
\end{minipage}%
}
\subfigure[SPL\cite{SPL}]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/spl/pred_51.pdf}\\
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/spl/pred_img_422.pdf}\\
\end{minipage}%
}
\subfigure[Ours]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/ours/pred_51.pdf}\\
\includegraphics[width=2.2cm,height=1.7cm]{./figure/other_task/inpaint/ours/pred_img_422.pdf}\\
\end{minipage}%
}
\centering
\caption{Qualitative results on SCUT-EnsText for comparing our model with state-of-the-art image inpainting methods. Zoom in for best view.} \label{fig:sota_inpaint}
\end{figure}
\begin{figure}[]
\subfigbottomskip=2pt
\subfigcapskip=2pt
\setlength{\abovecaptionskip}{-0.0cm}
\setlength{\belowcaptionskip}{-0.0cm}
\centering
\subfigure[Input]{
\begin{minipage}[t]{0.23\linewidth}
\centering
\includegraphics[width=2.3cm,height=2.0cm]{./figure/other_task/hehe/input/3909.pdf}\\
\includegraphics[width=2.3cm,height=2.0cm]{./figure/other_task/hehe/input/315.pdf}\\
\end{minipage}%
}
\subfigure[SPL\cite{SPL}]{
\begin{minipage}[t]{0.23\linewidth}
\centering
\includegraphics[width=2.3cm,height=2.0cm]{./figure/other_task/hehe/inpaint/pred_3909.pdf}\\
\includegraphics[width=2.3cm,height=2.0cm]{./figure/other_task/hehe/inpaint/pred_10315.pdf}\\
\end{minipage}%
}
\subfigure[EraseNet\cite{liu2020erasenet}]{
\begin{minipage}[t]{0.23\linewidth}
\centering
\includegraphics[width=2.3cm,height=2.0cm]{./figure/other_task/hehe/erasenet/3909.pdf}\\
\includegraphics[width=2.3cm,height=2.0cm]{./figure/other_task/hehe/erasenet/10315.pdf}\\
\end{minipage}%
}
\subfigure[Ours]{
\begin{minipage}[t]{0.23\linewidth}
\centering
\includegraphics[width=2.3cm,height=2.0cm]{./figure/other_task/hehe/ours/pred_3909.pdf}\\
\includegraphics[width=2.3cm,height=2.0cm]{./figure/other_task/hehe/ours/pred_10315.pdf}\\
\end{minipage}%
}
\centering
\caption{Qualitative results on examination papers. Zoom in for best view.} \label{fig:hehe}
\end{figure}
\subsection{Comparison with State-of-the-art Image Inpainting Methods}
We conduct experiments to compare CTRNet with existing SOTA image inpainting methods, CTSDG \cite{Guo_2021_ICCV} and SPL \cite{SPL}, on SCUT-EnsText. The quantitative and qualitative results are given in Table \ref{table:sota_inpaint} and Fig. \ref{fig:sota_inpaint}, respectively. Our model outperforms these two methods on all metrics by a remarkable margin, while restoring the text-region background with more reasonable and realistic textures.
The reason is that when image inpainting methods are applied directly to scene text removal, the text regions are entirely discarded by masking according to the bounding boxes (blue boxes in Fig. \ref{fig:sota_inpaint} (a)), so the models cannot effectively infer the background information.
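To make the contrast explicit, the sketch below shows how an inpainting baseline consumes a text image: everything inside the (illustrative) bounding boxes is blanked out before the network ever sees it, whereas CTRNet receives the full input.
\begin{verbatim}
# How inpainting baselines consume a text image: all pixels
# inside the text bounding boxes are discarded up front.
import numpy as np

def inpainting_input(image, boxes):
    """image: HxWx3 float array; boxes: (x0, y0, x1, y1)."""
    masked = image.copy()
    for x0, y0, x1, y1 in boxes:
        masked[y0:y1, x0:x1] = 0.0  # background inside is lost
    return masked
\end{verbatim}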
\subsection{Application on Handwritten Text Removal}
In this section, we apply CTRNet to restoring document images to verify its generalizability. We collect 1,000 in-house examination paper images and manually annotate them by erasing the handwriting with Photoshop. Then we train CTRNet and EraseNet~\cite{liu2020erasenet} on these data and evaluate them on other paper images. Besides, we also train SPL \cite{SPL} for comparison to further illustrate the difference between text removal and image inpainting. The visualization results are shown in Fig. \ref{fig:hehe}. Our method retains more printed words than SPL and EraseNet, which makes it more suitable for the document restoration task.
More results are given in the supplement materials.
\section{Conclusion}
In this paper, we propose
a new text removal model called CTRNet. CTRNet introduces both low-level and high-level contextual guidance, which effectively improves texture restoration for complex backgrounds.
We use smooth structure images and discriminative context features to represent the low-level and high-level context, respectively.
Besides, the learned contextual guidance is incorporated into the image features and modeled in a local-global manner, effectively capturing both sufficient contextual information and long-range correlations among all pixels.
Experiments conducted on three benchmark datasets demonstrate the effectiveness of the proposed CTRNet, which significantly outperforms previous state-of-the-art methods.
\par\vfill\par
\clearpage
\section{Introduction}
\label{introduction}
Aqueous solutions containing polymers and small associating
molecules such as folded proteins and amphiphiles (surfactants)
are commonly found in biological systems and industrial
applications. As a result, extensive efforts have been devoted in
the past few decades to the study of polymer--surfactant
interactions \cite{ps_book1,ps_book2}. In addition, there has been
growing interest in the interactions between DNA macromolecules
and surfactants, lipids and short polyamines \cite
{Hayakawa,Shirahama,Sergeyev1,Sergeyev5,Khokhlov,Sergeyev6,Reimer,Bhatta}.
These interactions are relevant to various biochemical
applications such as DNA extraction and purification
\cite{Sergeyev6,Reimer,Bhatta} and genetic delivery systems
\cite{delivery}. Association of folded proteins (\eg RecA) with
DNA plays a key role in genetic regulatory mechanisms. Structural
details of this association have been studied in recent
experiments \cite{Chatenay,Feingold}.
Recently, we have presented a general theory for the self-assembly
in aqueous solutions of polymers and smaller associating molecules
\cite{ourEPL,ourMM}. Two different scenarios emerge, depending on
the flexibility of the polymer. If the polymer is flexible enough,
it actively participates in the self-assembly, resulting in mixed
aggregates jointly formed by the two species. The polymer
conformation changes considerably upon self-assembly but remains
extended on a global scale, as the chain undergoes only {\em
partial collapse} \cite{ourEPL,ourMM,deGennes_partial}. On the
other hand, if the polymer is stiff, partial collapse is
inhibited.
The criterion determining the `flexible' {\it vs.} `stiff'
scenarios concerns the polymer statistics on a mesoscopic length
scale characterizing correlations in the solution (usually a few
nanometers). It was found \cite{ourEPL,ourMM}
that the flexible (stiff) scenario holds
if the exponent $\nu$, relating the number of monomers $N$ to the
spatial size $R$ they occupy, $R\sim N^\nu$, is smaller (larger)
than $2/d$ on that length scale ($d$ being the dimensionality).
This distinction is analogous to the one made in the critical
behavior of certain disordered systems \cite{Fisher,Harris}
--- if the critical exponent $\nu$ of a system satisfies
$\nu<2/d$, the critical behavior would be smeared by impurities
(in analogy to the partial collapse), whereas if $\nu>2/d$, the
critical point remains intact. Indeed, neutral flexible polymers
in three dimensions, having $\nu\simeq 3/5<2/3$, are found by
scattering experiments to associate with surfactants in the form
of a `chain of wrapped aggregates' \cite{Cabane,Reed}. On the
other hand, stiff DNA molecules, having $\nu=1$ on the relevant
length scale, are found to either remain unperturbed by surfactant
binding \cite{Sergeyev5,Reimer}, or undergo a discontinuous
coil-to-globule transition \cite{Sergeyev1}, provided the chain is
much longer than the persistence length.
In previous publications \cite{ourEPL,ourMM} we concentrated on
the flexible case and the corresponding partial collapse, where
the polymer degrees of freedom play an important role. In the
opposite extreme limit of stiff, rod-like molecules, the
conformational degrees of freedom of the polymer can be neglected
and the chain may be regarded as a linear `binding substrate'.
Models for stiff polymers, inspired by the Zimm-Bragg theory
\cite{ZimmBragg}, treat the bound molecules as a one-dimensional
lattice-gas (or Ising) system with nearest-neighbor interactions
\cite{Ising_models}. They have been widely used to fit
experimental binding isotherms for polyelectrolytes and oppositely
charged surfactants \cite{review_ps_charge}. Recently, more
detailed electrostatic models have been proposed for the
interaction between rod-like polyelectrolytes and oppositely
charged surfactants \cite{Levin,Colby}.
In addition, a theoretical work focusing on the {\it specific}
binding of proteins to
DNA has been recently presented \cite{Rudnick},
treating a pair of bound proteins as geometrically constraining
inclusions on the DNA chain.
In the current work we address the intermediate case of {\em
semiflexible} polymers. The polymer we consider is stiff in the
sense defined above, \ie its persistence length, $\lp$, exceeds
several nanometers and, hence, the polymer is characterized by
$\nu=1>2/3$ on that length scale. The total chain length, however,
is considered to be much larger than $\lp$, and the entire polymer
cannot be regarded, therefore, as a single rigid rod. This case
corresponds, in particular, to experiments on long DNA molecules
\cite{Hayakawa,Shirahama,Sergeyev1,Sergeyev5,Khokhlov,Sergeyev6,Reimer,Bhatta},
whose persistence length is typically very large (of order 50 nm),
but much smaller than the total chain length (which is usually
larger than a micron) \cite{Gorelov}. We argue that such an
intermediate system may, in certain cases, be governed by
different physics. Although the polymer is too stiff to change
conformation and actively participate in the self-assembly, its
degrees of freedom induce attractive correlations between bound
molecules. Those fluctuation-induced correlations are weak but
have a long spatial range (of order $\lp$) and, hence, may
strongly affect the binding thermodynamics.
The model is presented in Sec.~\ref{model}. Bound molecules are
assumed to modify the local features of polymer conformation, \eg
change its local stiffness. In the limit of weak coupling, our
model reduces to the Kac-Baker model \cite{Lieb,Kac,Baker}, which
is solvable exactly. This limit is discussed in Sec.~\ref{weak}.
Although turning out to be of limited interest in practice, the
weak-coupling limit provides insight into the mechanism of
association, and helps us justify further approximations. Section
\ref{strong} presents a mean-field calculation for an arbitrary
strength of coupling. This analysis leads to our main conclusions,
and in Sec.~\ref{sec_tension} it is extended to polymers under
external tension. The results are summarized in
Sec.~\ref{conclusions}, where we also discuss several relevant
experiments involving DNA and point at future directions.
\section{The Model}
\label{model}
Small molecules bound to stiff polymers are commonly modeled as a
one-dimensional lattice gas (or Ising system) \cite{Ising_models}.
Each monomer serves as a binding site, which can either
accommodate a small molecule or be empty, and the surrounding
dilute solution is considered merely as a bulk reservoir of small
molecules. In the current work we stay at the level of a
one-dimensional model, assuming that the polymer is still quite
(yet not infinitely) stiff, \ie the persistence length is much
larger than the monomer size. In addition, a dilute polymer limit
is assumed, where inter-chain effects can be neglected. We focus
on the effect of introducing the polymer degrees of freedom and,
hence, seek a simple meaningful coupling between the polymer and
the bound `lattice gas'.
A polymer configuration is defined by a set of vectors,
$\{\vecu_n\}_{n=1\ldots N}$, specifying the lengths and
orientations of the $N$ monomers. In addition, each monomer serves
as a binding site which can be either empty ($\ph_n=0$) or
occupied by a small molecule ($\ph_n=1$). A configuration of the
entire system is defined, therefore, by specifying
$\{\vecu_n,\ph_n\}_{n=1\ldots N}$.
Since the polymer is assumed to be locally stiff, a natural choice
would be to couple $\ph_n$ with the square of the local chain
curvature, $\ph_n(\vecu_{n+1}-\vecu_n)^2$, thus modifying the
local chain stiffness. However, in the usual Kratky-Porod
worm-like-chain model of semiflexible polymers \cite{WLC},
chain segments are taken as rigid rods of fixed length
($|\vecu_n|=\mbox{const}$), and each squared-curvature term
contains only one degree of freedom (\eg the angle $\theta_n$
between $\vecu_n$ and $\vecu_{n+1}$). Consequently, this coupling,
$\ph_n\cos\theta_n$, would leave $\{\ph_n\}$ uncorrelated, leading
merely to a trivial shift in the chemical potential of bound
molecules \cite{Rudnick2}.
One option to proceed is to consider higher-order
extensions of the worm-like-chain Hamiltonian, involving three consecutive
monomers. This will introduce correlations between bound molecules
at different sites.
We take, however, a simpler route and modify
the worm-like-chain model by allowing the monomer length to fluctuate.
This modification was originally presented by Harris and Hearst
\cite{HH}, using a single global constraint for the average
chain length. The modified model was shown to successfully reproduce
the results of the Kratky-Porod model as far as thermodynamic averages
(\eg correlation functions, radius of gyration) were concerned.
It was less successful, however, in recovering more detailed statistics
of the worm-like chain (\eg distribution function, form factor),
particularly in the limit of large stiffness.
The Harris-Hearst model was later refined by Lagowski {\it et al.}
\cite{Lagowski}
and Ha and Thirumalai \cite{HT,HT_tension}, replacing the single global
constraint by a set of local constraints for the average segment lengths.
This further modification was shown to be equivalent to a
stationary-phase approximation for the chain partition function,
yielding reliable results for average quantities, as well as more
detailed statistics \cite{HT}.
We note that a similar
approach was used in a recent model of semiflexible polymer
collapse \cite{Parsegian}.
It should be borne in mind that, despite its success in the past,
the constraint relaxation remains essentially an uncontrolled
approximation. In the current work we restrict ourselves to
thermodynamic averages, such as monomer-monomer correlations and
free energies, for which the modified model with a single
global constraint can be trusted.
Thus, the rigid constraints of the original Kratky-Porod model,
$u_n^2=1$, are relaxed into thermodynamic-average ones,
$\langle u_n^2\rangle=1$, where the
mean-square monomer size is taken hereafter as the unit length.
Using the modified model for the chain,
each $\ph_n(\vecu_{n+1}-\vecu_n)^2$ term involves two consecutive
monomers (and not merely the angle between them), leading to a
meaningful coupling between binding and polymer conformation.
The partition function of the combined system of polymer and bound
molecules is written, therefore, as
\begin{eqnarray}
Z &=& \mathop{\mbox{Tr}}_{\{\ph_n=0,1\}} \int\prod_{n=1}^N \rmd
\vecu_n \exp(-\cH)
\nonumber\\
\cH &=& \frac{3}{4}\lp \sum_{n=1}^{N-1}
(1+\eps\ph_n) (\vecu_{n+1}-\vecu_n)^2 + \sum_{n=1}^N
\lambda_n u_n^2 - \mu\sum_{n=1}^N\ph_n.
\label{Z1}
\end{eqnarray}
In Eq.~(\ref{Z1}) $\lp$ is the persistence length of the bare
chain, characterizing its intrinsic stiffness. It is assumed to be
much larger than the monomer size, $\lp\gg 1$. The coupling is
introduced through the stiffness term, assuming that a bound
molecule modifies the local stiffness by a fraction $\eps>-1$,
which may be either negative or positive but cannot change the
positive sign of the overall stiffness term
\cite{negative_stiffness}. The second term contains a set of
multipliers, $\lambda_n$, to be chosen so that the constraints
$\langle u_n^2\rangle=1$ are satisfied. However, replacement of
the entire set $\{\lambda_n\}$ by a single multiplier $\lambda$
can be shown to yield a non-extensive correction \cite{HT}, which
becomes negligible in the limit $N\rightarrow\infty$. Hence, we
use hereafter a single multiplier, $\lambda$. Finally, the system
is assumed to be in contact with a reservoir of solute molecules.
The last term in Eq.~(\ref{Z1}) accounts for this contact along
with any other factors which couple linearly to the degree of
binding. Typically, $\mu$ contains the chemical potential of the
solute reservoir and the direct energy of solute molecule--monomer
binding. (All energies in this work are expressed in units of the
thermal energy, $k_{\rm B}T$.) Note that we have not included in
Eq.~(\ref{Z1}) any direct short-range (\eg nearest-neighbor)
interactions between bound molecules. Thus, all interactions in
the model arise from the coupling to the polymer degrees of
freedom. Short-range interactions between bound molecules do exist
in physical systems. Yet, in the limit of $\lp\gg 1$ and
$|\eps|\gtrsim 1$, which is of interest to the current work,
such direct interactions have a minor effect on binding, as is
demonstrated in the following sections. Hence, we omit them for
the sake of brevity.
As a reference, let us start with the previously studied
partition function of the bare polymer \cite{HT},
\begin{equation}
Z_{\rm p} = \int\prod_n\rmd\vecu_n \exp[ -\frac{3}{4}\lp \sum_n
(\vecu_{n+1}-\vecu_n)^2 - \lambda\sum_n u_n^2].
\label{Zp1}
\end{equation}
It is a Gaussian integral which can be calculated either by
transforming it to Fourier space and integrating, or by analogy to
the path integral of a three-dimensional quantum oscillator
\cite{QM}. The result in the limit $N\rightarrow\infty$ and for
$\lp\gg 1$ is
\begin{equation}
Z_{\rm p}^{1/N} = \left(\frac{4}{3\pi\lp}\right)^{3/2}
\exp\left(3-\sqrt{3\lambda/\lp}\right).
\label{Zp2}
\end{equation}
The multiplier $\lambda$ can now be determined according to
\begin{equation}
-\frac{1}{N} \frac{\partial\log Z_{\rm p}}{\partial\lambda} =
\langle u_n^2\rangle_{\rm p} = 1 \ \ \Longrightarrow \ \ \lambda =
\frac{3}{4\lp},
\label{lambda_sol}
\end{equation}
where $\langle\cdots\rangle_{\rm p}$ denotes a thermal average
over the bare chain statistics (\ie using $Z_{\rm p}$). The
corresponding free energy per monomer (in the ensemble of {\em
constrained} $\vecu_n$) is
\begin{equation}
f_{\rm p} = -\frac{1}{N}\log Z_{\rm p} - \lambda
= \frac{3}{2}\log\lp + \frac{3}{4\lp} + \mbox{const}.
\label{fp}
\end{equation}
Various correlations in the bare chain can be calculated. The pair
correlation between segment vectors along the chain sequence is
\begin{equation}
\langle\vecu_m\cdot\vecu_n\rangle_{\rm p} = \rme^{-|m-n|/\lp},
\label{segment_correlation}
\end{equation}
which explains why the parameter $\lp$ has been defined as the
persistence length. Two higher-order pair correlations are
calculated as well:
\begin{eqnarray}
g_1 &\equiv& \langle(\vecu_{n+1}-\vecu_n)^2\rangle_{\rm p}
= \frac{2}{\lp} + \cO(\lp^{-2})
\nonumber\\
g_2(m,n) &\equiv& \langle (\vecu_{m+1}-\vecu_m)^2(\vecu_{n+1}-\vecu_n)^2
\rangle_{\rm p} - g_1^2
= \frac{8}{3\lp^3}\rme^{-2|m-n|/\lp} + \cO(\lp^{-4}),
\label{g12}
\end{eqnarray}
and will be of use in the next section, where we re-examine the
coupled system.
\section{Weak Coupling}
\label{weak}
Let us return to the full partition function (\ref{Z1}),
which can be equivalently written as
\begin{equation}
Z = Z_{\rm p} \mathop{\mbox{Tr}}_{\{\ph_n\}} \exp(\mu\sum_n\ph_n)
\left\langle \exp[ -\frac{3\lp\eps}{4} \sum_n
\ph_n(\vecu_{n+1}-\vecu_n)^2 ] \right\rangle_{\rm p}.
\label{Z2}
\end{equation}
First we consider the weak-coupling limit, $|\eps|\ll 1$, where
the partition function (\ref{Z2}) can be treated by a cumulant
expansion. In this limit the model becomes analogous to the
exactly solvable Kac-Baker model \cite{Lieb,Kac,Baker},
and we show that identical
results are derived from a simple mean-field calculation. We then
use this observation to justify a mean-field calculation for an
arbitrary value of $\eps$.
A cumulant expansion of Eq.~(\ref{Z2}) to 2nd order in $\eps$
leads to
\begin{equation}
Z \simeq Z_{\rm p} \mathop{\mbox{Tr}}_{\{\ph_n\}} \exp \left[ \left( \mu -
\frac{3\lp\eps}{4}g_1 \right) \sum_n\ph_n + \half\left(
\frac{3\lp\eps}{4}\right)^2 \sum_{m,n} g_2(m,n)\ph_m\ph_n \right],
\label{Z3}
\end{equation}
where the correlations $g_1$ and $g_2$ were defined in
Eq.~(\ref{g12}). Substituting expressions (\ref{g12}), the
partition function is decoupled into a polymer contribution and an
effective contribution from the bound solute molecules,
\begin{eqnarray}
Z &\simeq& Z_{\rm p} Z_{\rm s} = Z_{\rm p} \mathop{\mbox{Tr}}_{\{\ph_n\}}
\exp(-\cH_{\rm s})
\nonumber\\
\cH_{\rm s} &=& \half\sum_{m\neq n} V_{mn}\ph_m\ph_n
- \hat{\mu} \sum_n\ph_n,
\label{Z4}
\end{eqnarray}
where
\begin{eqnarray}
V_{mn} &\equiv& -\frac{3\eps^2}{2\lp} \rme^{-2|m-n|/\lp}
\nonumber\\
\hat{\mu} &\equiv& \mu - \frac{3\eps}{2} + \frac{3\eps^2}{4\lp}.
\label{vmn}
\end{eqnarray}
The introduction of the polymer degrees of freedom and their
coupling to the binding ones have led to two effects, as compared
to previous lattice-gas theories. First, there is a shift in the
chemical potential, $\mu\rightarrow\hat{\mu}$. This is equivalent
to an effective change in the affinity between the small molecules
and the chain. As expected, if binding strengthens the local
stiffness of the chain ($\eps>0$), the affinity is reduced (\ie
the isotherm is shifted to higher chemical potentials), whereas if
it weakens the stiffness ($\eps<0$), the shift is to lower $\mu$.
The second, more interesting effect is that bound molecules
experience an attractive potential, $V_{mn}$, along the chain. The
amplitude of this effective interaction is small
($\sim\eps^2/\lp$), but its range is large
--- of order $\lp$. When $\lp$ is increased there are two opposing
consequences
--- the interaction amplitude diminishes, while the interaction
range is extended. The overall effect on the thermodynamics of
binding, therefore, has to be checked in detail.
\subsection{Analogy with the Kac-Baker Model}
The effective Hamiltonian of the bound solute, $\cH_{\rm s}$, is a
lattice-gas version of the Kac-Baker model \cite{Lieb,Kac,Baker},
which is exactly solvable. Moreover, the procedure relevant to our
semiflexible polymer, \ie increasing $\lp$ while keeping
$1\ll\lp\ll N$, is precisely the one studied in detail by Kac and
Baker. Their results, as applied to our binding problem, can be
summarized as follows. For any finite $\lp$, the bound molecules
are always in a disordered state along the polymer chain, as in
any one-dimensional system with finite-range interactions.
Consequently, the binding isotherm, \ie the binding degree
$\ph\equiv\langle\ph_n\rangle$ as function of $\mu$ (see, \eg
Fig.~\ref{fig_mf}a), is a continuous curve. However, in the limit
$\lp\rightarrow\infty$, taken {\em after} the infinite-chain limit
$N\rightarrow\infty$, there is a critical value of coupling above
which the binding exhibits a discontinuous (1st-order) transition.
According to Baker's rigorous calculation \cite{Baker}, the
critical value of the potential amplitude multiplied by $\lp$
(equal, in our case, to $3\eps_{\rm c}^2/2$) is 4, \ie
\begin{equation}
\eps_{\rm c}^\pm = \pm\sqrt{8/3} \simeq \pm 1.63.
\label{eps_c1}
\end{equation}
Note that the symmetry with respect to the sign of $\eps$ is
merely an artificial consequence of our 2nd-order expansion,
Eq.~(\ref{Z3}). In general, the results should not be the same if
the stiffness is weakened ($\eps<0$) or strengthened ($\eps>0$),
as is demonstrated in Sec.~\ref{strong}.
The negative critical value in Eq.~(\ref{eps_c1}), $\eps_{\rm
c}^-\simeq -1.63$, lies outside the range of validity of the
original polymer binding model, $\eps>-1$ [cf. Eq.~(\ref{Z1})].
The positive value, $\eps_{\rm c}^+\simeq 1.63$, does not satisfy
the assumption of weak coupling, $|\eps|\ll 1$, which has led to
the analogy with the Kac-Baker model in the first place. Thus, the
sharp binding isotherms obtained from the Kac-Baker model for
$|\eps|>\eps_{\rm c}$ do not apply, strictly speaking, for our
polymer binding problem. The weak-coupling calculation does
demonstrate, however, how fluctuations in polymer conformation
induce long-range attraction between bound molecules. This basic
feature is expected to remain when one considers stronger
coupling, $|\eps|>1$, and the resulting many-body terms omitted in
Eq.~(\ref{Z3}). This is further discussed in the following
sections.
Finally, the polymers we consider have a large but finite $\lp$.
For example, the persistence length of a DNA macromolecule is
typically of order 50--100 nm, whereas the length of a single base
pair is $0.34$ nm. Hence, $\lp$ is of order $10^2$ (in units of
monomer length). It is worth checking to what extent the
sharpness of binding in the Kac-Baker model for $|\eps|>\eps_{\rm
c}$ is affected by finite $\lp$. For this purpose, let us define a
{\it cooperativity parameter} for the binding, measuring the
maximum slope of the binding isotherm,
\begin{equation}
C \equiv \left. \frac{\partial\ph}{\partial\mu}
\right|_{\rm max} - \frac{1}{4}.
\label{cooperativity1}
\end{equation}
This parameter is equivalent to the zero magnetic field
susceptibility in the analogous spin system, and is commonly
measured from the slope of binding isotherms obtained in
potentiometric experiments \cite{ps_book1,ps_book2}. It has been
defined in Eq.~(\ref{cooperativity1}) so as to yield zero for
vanishing interaction ($\eps=0$) and diverge at a critical point.
(In the current weak-coupling limit, the maximum slope is obtained
for $\langle\ph\rangle=1/2$.) Given $\lp$ and $\eps$, the
cooperativity is numerically calculated using Kac's exact solution
\cite{Lieb,Kac}, as is explained in the Appendix.
Figure~\ref{fig_Kac} presents the results for $\lp=10$ and 50. For
$\lp=50$ the binding becomes highly cooperative for
$|\eps|>\eps_{\rm c}$. For even larger values of $\lp\sim 10^2$
(relevant, \eg to DNA) the binding will be hardly distinguishable
from that of an infinite $\lp$.
\subsection{Mean-Field Calculation}
In fact, the results of the Kac-Baker model in the limit
$N\rightarrow\infty, \lp\rightarrow\infty$, while keeping $\lp<N$,
can be also obtained from a simple mean-field calculation
\cite{Lieb,Schwartz}. The heuristic argument for this agreement is
the following: as $\lp$ is increased, the range of interaction is
extended and each bound molecule interacts with an increasing
number of neighbors. As a result, the averaging assumption
underlying the mean-field approximation is justified, and becomes
{\em exact} when the range of interaction is taken to infinity.
The correspondence between infinite-range models and mean field
was rigorously proved by Lebowitz and Penrose for a more general
class of potentials \cite{Lebowitz}.
Indeed, employing a mean-field approximation for the potential
(\ref{vmn}) in the limit of very large $\lp$,
\[
\sum_{mn} V_{mn}\ph_m\ph_n \rightarrow -\frac{3\eps^2}{2\lp}
\left(\sum_{mn}\rme^{-2|m-n|/\lp}\right) \ph^2
\simeq -\frac{3\eps^2}{2}N\ph^2,
\]
where $\ph$ is an average, uniform binding degree,
we are led to the following mean-field free energy
per monomer:
\begin{equation}
f = f_{\rm p} + f_{\rm s} \simeq
f_{\rm p} + \ph\log\ph + (1-\ph)\log(1-\ph) -
\frac{3\eps^2}{4}\ph^2 - \hat{\mu}\ph,
\ \ \ \ \mbox{for~~} \lp\rightarrow\infty.
\label{f_mf1}
\end{equation}
It is easily verified that the critical point of this free energy
is $\eps_{\rm c}^2=8/3$, in agreement with the rigorous result,
Eq.~(\ref{eps_c1}). The cooperativity parameter can be calculated
as well from Eq.~(\ref{f_mf1}), yielding
\begin{equation}
C = \frac{\eps^2}{4(\eps_{\rm c}^2-\eps^2)}, \ \ \ \
\mbox{for~~} \lp\rightarrow\infty.
\label{C_mf1}
\end{equation}
This expression shows the usual critical behavior obtained from
mean-field theories, $C\sim|\eps-\eps_{\rm c}|^{-\gamma}$ with
$\gamma=1$. The dependence of $C$ on $\eps$ according to
Eq.~(\ref{C_mf1}) is shown by the solid line in
Fig.~\ref{fig_Kac}. The curves obtained from Kac's solution
approach it, as expected, when $\lp$ is increased. Recall
that expressions (\ref{f_mf1}) and (\ref{C_mf1}) correspond to the
original problem of bound molecules only in the limit of small
$\eps$.
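As a quick numerical illustration (a sketch, not part of the derivation; the chosen $\eps$ is arbitrary), the closed form (\ref{C_mf1}) can be checked against the maximal slope of the isotherm obtained directly from the free energy (\ref{f_mf1}):
\begin{verbatim}
# Check Eq. (C_mf1): compare the maximal isotherm slope, from
# d f_s / d phi = 0 of Eq. (f_mf1), with the closed form.
import numpy as np

eps, eps_c2 = 1.2, 8.0 / 3.0             # |eps| below critical
phi = np.linspace(1e-5, 1.0 - 1e-5, 200001)
mu = np.log(phi / (1.0 - phi)) - 1.5 * eps**2 * phi
C_num = np.max(np.diff(phi) / np.diff(mu)) - 0.25
C_exact = eps**2 / (4.0 * (eps_c2 - eps**2))
print(C_num, C_exact)                    # both ~ 0.29
\end{verbatim}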
\section{Strong Coupling}
\label{strong}
The interesting part of our theory requires $|\eps|\gtrsim 1$ and
thus limits the interest in the analogy to the Kac-Baker model.
Nevertheless, based on the heuristic argument given above, it is
reasonable to assume that, in the limit $\lp\gg 1$, the mean-field
approximation is good for larger values of $|\eps|$ as well
\cite{proof}. The preceding section, discussing the Kac-Baker
model in the weak-coupling limit, may be regarded, therefore, as a
justification for using the mean-field approximation for
one-dimensional models with large $\lp$ and $|\eps|\gtrsim 1$.
Applying a mean-field approximation to the binding degrees of
freedom $\ph_n$ in our starting point, Eq.~(\ref{Z1}), we can
perform the trace over $\vecu_n$ exactly. The resulting free energy
is composed of the polymer free energy, $f_{\rm p}$, evaluated
with an effective persistence length,
$\lp\rightarrow\lp(1+\eps\ph)$, and the entropy of mixing for
$\ph$,
\begin{equation}
f = \left.f_{\rm p}\right|_{\lp\rightarrow\lp(1+\eps\ph)}
+ \ph\log\ph + (1-\ph)\log(1-\ph) - \mu\ph.
\end{equation}
Using Eq.~(\ref{fp}), we obtain
\begin{equation}
f = \ph\log\ph + (1-\ph)\log(1-\ph) +
\frac{3}{2}\log[\lp(1+\eps\ph)] + \frac{3}{4\lp(1+\eps\ph)}
- \mu\ph.
\label{f_mf2}
\end{equation}
For $\eps\ll 1$ and $\lp\gg 1$ this expression reduces, as
expected, to our previous result for the weak-coupling limit,
Eq.~(\ref{f_mf1}).
In the limit $\lp\gg 1$ the critical points of the free energy
(\ref{f_mf2}) are
\begin{equation}
\eps_{\rm c}^- = \frac{2}{3} \left(2-\sqrt{10}\right)
\simeq -0.775,\ \ \
\eps_{\rm c}^+ = \frac{2}{3} \left(2+\sqrt{10}\right)
\simeq 3.44,
\label{eps_c2}
\end{equation}
both of which lie within our general range of validity, $\eps>-1$.
(Note the loss of symmetry with respect to the sign of $\eps$,
which was a consequence of the weak-coupling approximation in
Sec.~\ref{weak}.) The corresponding critical chemical potentials
are
\begin{equation}
\mu_{\rm c}^\pm = \frac{3\eps_{\rm c}^\pm(\eps_{\rm c}^\pm+2)}
{4(\eps_{\rm c}^\pm+1)} - \log(\eps_{\rm c}^\pm + 1)
\simeq \pm 1.67.
\label{mu_c}
\end{equation}
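These values can be verified numerically from the conditions $\partial^2 f/\partial\ph^2 = \partial^3 f/\partial\ph^3 = 0$ applied to Eq.~(\ref{f_mf2}) in the limit $\lp\gg 1$ (a sketch; the initial guesses are chosen by inspection of the free energy):
\begin{verbatim}
# Locate the critical points of Eq. (f_mf2) for lp >> 1 from
# f'' = f''' = 0 (the O(1/lp) term is dropped).
import numpy as np
from scipy.optimize import fsolve

def conditions(v):
    p, e = v
    s = 1.0 + e * p
    d2 = 1.0 / p + 1.0 / (1.0 - p) - 1.5 * e**2 / s**2
    d3 = -1.0 / p**2 + 1.0 / (1.0 - p)**2 + 3.0 * e**3 / s**3
    return [d2, d3]

for guess in ([0.85, -0.8], [0.2, 3.4]):
    p_c, e_c = fsolve(conditions, guess)
    print(e_c)   # -0.775 and 3.44, matching Eq. (eps_c2)
\end{verbatim}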
The binding isotherm, $\ph=\ph(\mu)$, as derived from
Eq.~(\ref{f_mf2}), satisfies
\begin{equation}
\mu = \log\frac{\ph}{1-\ph} + \frac{3\eps}{2(1+\eps\ph)},
\ \ \ \ \ \lp\gg 1.
\label{iso_mf2}
\end{equation}
Figure \ref{fig_mf}a shows three binding isotherms for three
different values of $\eps$ below and above the critical point.
The corresponding binding cooperativity is
\begin{equation}
C = \frac {8(1+\eps)^2} {3(2+\eps)^2 (\eps-\eps_{\rm c}^-)
(\eps_{\rm c}^+-\eps)} - \frac{1}{4}, \ \ \ \ \
\lp\gg 1.
\label{C_mf2}
\end{equation}
As in Eq.~(\ref{C_mf1}), this expression exhibits the usual
mean-field critical behavior, $C\sim|\eps-\eps_{\rm c}|^{-\gamma}$
with $\gamma=1$. The dependence of $C$ on $\eps$ is plotted in
Fig.~\ref{fig_mf}b.
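The qualitative change at the critical coupling can also be read off Eq.~(\ref{iso_mf2}) numerically: beyond the critical value, $\mu(\ph)$ develops a van der Waals-type loop, signaling the discontinuous transition. A sketch, with illustrative $\eps$ values:
\begin{verbatim}
# Scan the isotherm of Eq. (iso_mf2) for a van der Waals loop;
# a non-monotonic mu(phi) marks a first-order transition.
import numpy as np

def mu_of_phi(phi, eps):
    return np.log(phi / (1.0 - phi)) + 1.5 * eps / (1.0 + eps * phi)

phi = np.linspace(1e-4, 1.0 - 1e-4, 10001)
for eps in (2.0, 5.0):         # below and above eps_c^+ ~ 3.44
    mono = np.all(np.diff(mu_of_phi(phi, eps)) > 0)
    print(eps, "continuous" if mono else "loop (sharp binding)")
\end{verbatim}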
Finally, the binding phase diagram arising from Eq.~(\ref{f_mf2})
in the limit $\lp\gg 1$ is depicted in Fig.~\ref{fig_pd}. At the
lower limit of model validity, $\eps\rightarrow -1$, the spinodal
approaches a finite value, $\mu_{\rm sp}=\log(2/3)-5/2\simeq
-2.91$, whereas the binodal diverges. Indeed, for $\eps\rightarrow
-1$ the free energy (\ref{f_mf2}) tends to $-\infty$ for
$\ph=1$, regardless of the value of $\mu$, and the binodal is thus
obtained at $\mu\rightarrow -\infty$. In this respect, the limit
$\eps=-1$ for the bound molecules is similar to the limit of zero
temperature
--- the induced interaction is so strong that the molecules condense for
any value of the chemical potential. Note that in this special
limit, $\eps\rightarrow -1, \ph\rightarrow 1$, the effective
stiffness, $\lp(1+\eps\ph)$, becomes vanishingly small. This limit
cannot be accurately treated within the continuum form of the
semiflexible polymer Hamiltonian \cite{negative_stiffness}.
Equations (\ref{eps_c2})--(\ref{C_mf2}) and the phase diagrams in
Fig.~\ref{fig_pd} summarize the results obtained so far. They
indicate that in cases of semiflexible polymers, where binding of
small molecules significantly affects local chain features, the
binding should be a very sharp process. For finite $\lp$ the slope
of the binding isotherm is finite, \ie the binding is always
continuous, yet for $\lp\sim 10^2$ like in DNA, the behavior will
be practically indistinguishable from a discontinuous phase
transition.
It should be borne in mind that the sharp binding, obtained
despite the one-dimensionality of the model, relies on the long
range of the induced interaction. A direct short-range interaction
between bound molecules could not produce a similar effect. Hence,
such a short-range interaction (\eg a nearest-neighbor
interaction), which was omitted in Eq.~(\ref{Z1}) for the sake of
brevity, does not have an important effect on the binding in the
domain of interest, \ie $\lp\gg 1$ and $|\eps|\gtrsim 1$.
\section{Chains under Tension}
\label{sec_tension}
In addition, we consider binding to semiflexible chains which are
subject to external tension. This scenario is relevant to recent
single-molecule manipulation experiments \cite{Chatenay,Feingold}.
Since the tension suppresses chain fluctuations, it is expected to
have a significant effect on the fluctuation-induced mechanism
presented in the preceding sections.
In order to incorporate the external tension into our model, a
term is to be added to the chain Hamiltonian [cf. Eq.~(\ref{Z1})]
\cite{HT_tension},
\begin{eqnarray}
Z &=& \mathop{\mbox{Tr}}_{\{\ph_n=0,1\}} \int\prod_{n=1}^N \rmd
\vecu_n \exp(-\cH-\cH_{\rm t})
\nonumber\\
\cH_{\rm t} &=& -\vect\cdot\sum_{n=1}^N \vecu_n,
\label{Z1_tension}
\end{eqnarray}
where $\cH$ has been defined in Eq.~(\ref{Z1}), and $\vect$ is the
exerted tension (in units of $k_{\rm B}T$ divided by monomer
length).
As in Sec.~\ref{model}, we begin with the previously studied
problem of a bare semiflexible chain, yet it is now a chain under
tension \cite{HT_tension,MarkoSiggia}. The additional tension term
has not changed the Gaussian form of the polymer part of $Z$. It
can be calculated, therefore, in a similar way to that of
Sec.~\ref{model}, yielding
\begin{equation}
Z_{\rm pt}^{1/N} = Z_{\rm p}^{1/N} \exp(t^2/4\lambda),
\end{equation}
where $Z_{\rm p}$ is the tensionless polymer partition function
given in Eq.~(\ref{Zp2}). The equation for the multiplier
$\lambda$ is, in this case,
\begin{equation}
\half\left(\frac{3}{\lp\lambda}\right)^{1/2} +
\frac{t^2}{4\lambda^2} = 1,
\label{lambda_sol_tension}
\end{equation}
which reduces to Eq.~(\ref{lambda_sol}) for $t=0$. The resulting
polymer free energy is
\begin{equation}
f_{\rm pt} = \frac{3}{2}\log\lp +
\left(\frac{3\lambda}{\lp}\right)^{1/2} -
\frac{t^2}{4\lambda} - \lambda,
\label{fpt}
\end{equation}
where $\lambda=\lambda(\lp,t)$ is the solution to
Eq.~(\ref{lambda_sol_tension}).
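For arbitrary tension, Eq.~(\ref{lambda_sol_tension}) is easily solved numerically, since its left-hand side decreases monotonically in $\lambda$. A sketch with illustrative parameter values:
\begin{verbatim}
# Solve Eq. (lambda_sol_tension) for lambda(lp, t) by
# bracketing the root of a monotonic function.
import numpy as np
from scipy.optimize import brentq

def lam(lp, t):
    g = lambda x: (0.5 * np.sqrt(3.0 / (lp * x))
                   + t**2 / (4.0 * x**2) - 1.0)
    return brentq(g, 1e-12, 1e6)

lp = 100.0
print(lam(lp, 0.0), 3.0 / (4.0 * lp))  # t=0: lambda = 3/(4 lp)
print(lam(lp, 10.0), 10.0 / 2.0)       # lp*t >> 1: lambda -> t/2
\end{verbatim}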
For $\lp t\ll1$, the solution for $\lambda$ is
\[
\lambda \simeq \frac{3}{4\lp} \left[ 1 + \frac{8}{9}(\lp t)^2 +
\cO(\lp t)^4 \right],
\]
and the free energy becomes
\begin{equation}
f_{\rm pt} \simeq f_{\rm p} - \frac{\lp}{3}t^2 + \cO(\lp^3 t^4),
\ \ \ \ t \ll 1/\lp,
\label{fpt_weak}
\end{equation}
where $f_{\rm p}$ is the tensionless free energy given in
Eq.~(\ref{fp}). This is the elastic regime, where the energy is
quadratic (\ie the relative chain extension is linear) in tension
\cite{HT_tension,dG_book_tension}. Since we assume a large
persistence length, this regime corresponds to very weak tension,
$t\ll 1/\lp \ll 1$. In the opposite limit, $\lp t \gg 1$, the
solution to Eq.~(\ref{lambda_sol_tension}) becomes
\[
\lambda \simeq \frac{t}{2} \left[ 1 + \half\left(\frac{3}{2\lp
t}\right)^{1/2} + \cO(\lp t)^{-1} \right],
\]
and the corresponding free energy is
\begin{equation}
f_{\rm pt} \simeq \frac{3}{2}\log\lp - t +
\left(\frac{3t}{2\lp}\right)^{1/2} + \cO(\lp^{-1}t^0),
\ \ \ \ t \gg 1/\lp.
\label{fpt_strong}
\end{equation}
In this regime the chain extension changes like the inverse square
root of tension \cite{HT_tension,Odijk}.
Let us turn now to the effect of tension on the system of polymer
and bound molecules, Eq.~(\ref{Z1_tension}). As in
Sec.~\ref{strong}, we employ the mean-field approximation, valid
for $\lp\rightarrow\infty$. The resulting free energy is the same
as Eq.~(\ref{f_mf2}), but with $f_{\rm pt}$ instead of $f_{\rm
p}$,
\begin{equation}
f = \left.f_{\rm pt}\right|_{\lp\rightarrow\lp(1+\eps\ph)}
+ \ph\log\ph + (1-\ph)\log(1-\ph) - \mu\ph.
\label{f_mf_tension}
\end{equation}
Due to the additional degree of freedom, namely tension, the
binding phase diagrams of Fig.~\ref{fig_pd} become
three-dimensional. In particular, the critical points $\eps_{\rm
c}^\pm$ become critical lines, $\eps_{\rm c}^\pm (t)$. (Note that
$\vect$ is an external field coupled to $\{\vecu_n\}$ rather than
$\{\ph_n\}$, and, hence, it does not destroy the critical
behavior.) The `condensation' of bound molecules in our model
results from attraction induced by polymer fluctuations. By
suppressing fluctuations, the tension should weaken the attraction
and shift the critical coupling to higher values, \ie increase the
positive critical point, $\eps_{\rm c}^+$, and decrease the
negative one, $\eps_{\rm c}^-$. Using Eqs.
(\ref{lambda_sol_tension}), (\ref{fpt}) and (\ref{f_mf_tension}),
the critical lines, $\eps_{\rm c}^\pm (t)$, can be calculated. The
results are shown in Fig.~\ref{fig_ec_tension}.
Before getting into the detailed effect of tension, we address the
question whether the critical behavior can survive {\em any}
strength of tension. In this respect there is an essential
difference between stiffness-strengthening binding ($\eps>0$) and
stiffness-weakening one ($\eps<0$). In the former case, since the
value of $\eps$ is unbound, there exists $\eps_{\rm c}^+ (t)$ for
any value of $t$, such that the binding is a sharp transition for
$\eps>\eps_{\rm c}^+ (t)$. In other words, the critical line
$\eps_{\rm c}^+ (t)$ exists for any $0\leq t<\infty$. Indeed,
substituting $\eps\rightarrow\infty$ in Eq.~(\ref{f_mf_tension})
while using Eq.~(\ref{fpt_strong}), we find that the free energy
always describes a sharp transition, regardless of the value of
$t$. On the other hand, in the latter case of stiffness-weakening
binding, there is a lower bound for $\eps$, $\eps>-1$, where the
validity of the entire approach breaks down (see the previous section).
Substituting $\eps=-1$ in Eqs. (\ref{f_mf_tension}) and
(\ref{fpt_strong}), we find that a critical point exists only for
$t<t^*$, where
\begin{equation}
\frac{t^*}{\lp} = \frac{4}{9} \left( 33-7\sqrt{21} \right)
\simeq 0.410.
\end{equation}
Thus, the critical line $\eps_{\rm c}^- (t)$ terminates at the
point $(t^*,\eps_{\rm c}^*=-1)$, beyond which a sharp binding
transition cannot be attained. This situation is similar to a case
where the critical temperature $T_{\rm c}$ coincides with $T=0$
(\eg in a one-dimensional Ising model), and the system is
disordered at all temperatures $T>0$.
Several regimes are found as function of tension. For very weak
tension, $t<1/\lp$, the leading-order term which couples
binding and tension is found from Eqs. (\ref{fpt_weak}) and
(\ref{f_mf_tension}) to scale like $\lp t^2\eps\ph$, \ie it is
only linear in $\ph$. Hence, to leading order in $\lp t$ there is
no effect on the critical point. Although the tension influences
chain fluctuations (\eg causing the chain to extend linearly with
$t$), it is too weak to affect the fluctuation-induced
interactions between bound molecules. The next-order term scales
like $\lp^3 t^4(1+\eps\ph)^3$, leading to a very small shift of
$\sim\lp^3 t^4$ in the critical point (see also
Fig.~\ref{fig_ec_tension}).
For $t>1/\lp$, the leading-order term in the free energy,
according to Eqs. (\ref{fpt_strong}) and (\ref{f_mf_tension}), is
$\sim(t/\lp)^{1/2}(1+\eps\ph)^{-1/2}$. Here two regimes should be
distinguished. For intermediate tension, $1/\lp<t<\lp$, the
critical line scales like $(t/\lp)^{1/2}$, reflecting a more
significant, yet still weak effect of tension. Although the chain
conformation is significantly stretched by tension in this regime,
the induced interaction between bound molecules is not strongly
affected. However, for $t>\lp$, the tension term in the free
energy [$\sim(t/\lp)^{1/2}(1+\eps\ph)^{-1/2}$] becomes dominant,
leading to a linear dependence of the critical point on tension,
$\eps_{\rm c}^+\sim t/\lp$.
The above analysis for the dependence of the critical coupling on
tension is summarized in the following expression:
\begin{equation}
|\eps_{\rm c}^\pm (t) - \eps_{\rm c}^\pm (0)| ~\sim~ \left\{
\begin{array}{ll}
\lp^3 t^4 \ \ \ \ \ \ \ \ \ \ & t<1/\lp \\ & \\
(t/\lp)^{1/2} & 1/\lp<t<\lp \\ & \\
t/\lp & t>\lp, ~\mbox{relevant only to $\eps_{\rm c}^+$}.
\end{array} \right.
\end{equation}
The various regimes are also clearly seen in
Fig.~\ref{fig_ec_tension}. Note that for the large values of $\lp$
considered in this theory the intermediate tension region,
$1/\lp<t<\lp$, is very wide.
\section{Discussion and Conclusions}
\label{conclusions}
We have considered binding of small molecules to isolated
semiflexible polymer chains, where the persistence length $\lp$ is
much larger than the monomer size but still smaller than the total
chain length $N$. We have demonstrated that in such systems
polymer fluctuations induce attraction between bound molecules.
The long range of this interaction (of the same order as the
persistence length) can lead to strong effects on the binding
process. In particular, if bound molecules significantly affect
local features of the chain, \eg weaken or strengthen the
stiffness by a factor of about 5 ($\eps<\eps_{\rm c}^-$ or
$\eps>\eps_{\rm c}^+$), then the binding is predicted to be
extremely cooperative, occurring as a transition for a sharply
defined solute concentration. This is an unusual, yet practical
example for a one-dimensional system exhibiting a sharp transition
due to long-range interactions. The results of the model should
apply, in particular, to the association of DNA with smaller
molecules such as surfactants and compact proteins.
Subjecting the polymer to external tension has been studied as
well. By suppressing the fluctuation-induced interaction, the
applied tension may strongly affect the binding. The effect is
significant for sufficiently strong tension of order $t\sim\lp$.
[For DNA this implies $t\sim 10^2 k_{\rm B}T/(10 \mbox{\AA}) \sim
10^2$ pN.] In cases where binding weakens the chain stiffness,
such a high tension should make the sharp binding transition
disappear altogether (\ie regardless of the strength of coupling
or temperature). In cases where binding strengthens the chain
stiffness, a tension of $t\gtrsim\lp$ significantly shifts $\eps_{\rm
c}^+$ to higher values.
It is worth mentioning that tension-induced pairwise interaction
between {\it specifically} bound proteins on a DNA chain was
studied in a previous work \cite{Rudnick}.
The interaction of DNA with oppositely charged cationic
surfactants has been thoroughly studied by potentiometric
techniques \cite{Hayakawa,Shirahama} and fluorescence microscopy
\cite{Sergeyev1,Sergeyev5}. Isotherms measured by potentiometry
reveal a very cooperative, albeit continuous binding. Fluorescence
microscopy convincingly demonstrated, however, that the binding to
a {\em single} DNA molecule has a discrete nature resembling a
1st-order phase transition. It is usually accompanied by a
coil-to-globule collapse of the DNA chain (which lies outside the
scope of the current theory). The smoothness of potentiometric
isotherms was shown to arise from averaging over an {\em ensemble}
of DNA molecules, coexisting in bound and unbound
states \cite{Sergeyev1}.
Similar results were obtained for the association of DNA with
spermidine \cite{Khokhlov}. The microscopic origin of the observed
cooperativity (or even discontinuous transition) has not been
clarified. It is usually fitted to a phenomenological parameter
describing strong interaction between nearest-neighboring bound
molecules \cite{Ising_models}. On the other hand, it is reasonable
to expect that oppositely charged surfactants bound to DNA chains
significantly modify the chain stiffness (probably weakening it).
Thus, our model demonstrates that the strong cooperativity
observed in experiments can be well accounted for by weak, yet
long-range interactions induced by polymer fluctuations.
Recently, the kinetics of non-specific binding of RecA proteins to
DNA has been studied by single-molecule manipulation
\cite{Chatenay,Feingold}. RecA is a bacterial protein involved in
DNA recombination and known to cause significant changes in the
local structure of the double strand upon binding \cite{Stasiak}.
It was found to increase the DNA stiffness by a large factor,
estimated around 10 in one study \cite{Chatenay} and above 4 in
another \cite{Feingold}. This corresponds to a large, positive
$\eps$ in our model. A very cooperative nucleation-and-growth
kinetics was observed, as expected from the current model.
Moreover, in certain situations it was possible to achieve a
smaller increase of stiffness by binding of RecA. This led,
correspondingly, to a less cooperative process \cite{Feingold}.
Yet probably the most compelling evidence is that the binding
cooperativity was shown to be sensitive to external tension of
order 10--100 pN. It was consequently deduced that DNA
conformational fluctuations play a key role in RecA binding
\cite{Chatenay}, in accord with the model.
The current work is restricted to one-dimensional interactions
along the chain sequence, assuming that the polymer is locally
stiff and obeys the worm-like-chain description. Apart from
changing local properties of the polymer, an important feature not
treated by the model is that bound molecules may also modify {\em
volume} interactions between the monomers, thus affecting the
three-dimensional conformation of the polymer. For example,
binding of oppositely charged surfactants to a DNA molecule
locally neutralizes the DNA charge. This should lead, indeed, to a
modified stiffness, but also to a reduced 2nd virial coefficient,
which may drive a coil-to-globule collapse \cite{Sergeyev1}. The
collapse can be also driven by fluctuations in the concentration
of ions adjacent to the chain, as has been demonstrated by recent
theoretical studies \cite{Parsegian,Kardar}.
In order to check the theory presented in this work more
experiments are required, focusing, in particular, on the effect
of persistence length and tension on binding. The fluorescence
microscopy techniques, which have been successfully used for
DNA--surfactant association, may be applied to chains under
tension or flow, thus examining the role of fluctuations. It
would be interesting to study a system consisting of a
semiflexible polymer and bound molecules in computer simulations,
and thereby check the applicability of our mean-field
approximation. An important extension of the model, as mentioned
above, would be to introduce volume interactions and obtain
binding-induced collapse as observed in experiments.
\acknowledgments
We greatly benefited from discussions and correspondence
with R. Bar-Ziv, M. Feingold, A. Libchaber, R. Netz, A. Parsegian,
R. Podgornik, M. Schwartz and V. Sergeyev.
Partial support from the Israel Science Foundation
founded by the Israel Academy of Sciences and Humanities ---
Centers of Excellence Program, and the Israel--US Binational
Science Foundation (BSF) under grant No. 98-00429,
is gratefully acknowledged. HD would
like to thank the Clore Foundation for financial support.
\section{Conclusion}
In this paper, we have proposed a framework for formalizing the process of understanding as an integration of conceptual material evoked by the words in an utterance, guided by local and global constraints. We have argued that this integration mechanism naturally accounts for the specialization of context-independent lexical meanings into token meanings. Our examples throughout the paper highlight particular characteristics of meaning in context, which we briefly summarize next.
First, meaning in context calls on phenomena beyond word senses. The representation of a discourse referent spans higher-level and lower-level conceptual objects such as scenarios and property sets, as seen in our `individuals'. Second, token meaning does not end at the token but involves a network of constraints associating the conceptual representation of an individual to its envisioned environment, including other individuals. That is, meanings are inextricably linked to each other. Third, the interpretation of a word or linguistic constituent in a particular utterance is not some kind of absolute identification of a referent and its properties. There are many situation descriptions that can represent a token meaning -- in some cases, there are several senses that might fit that token -- and interpretation is the awareness of \textit{all} of them, as encoded in a particular probability distribution.
Going forward, there are many questions about our formalization that need to be
addressed with respect to the theoretical frameworks we draw on.
We note that the basic formalization proposed in this paper does not yet exploit the strengths of DRT. Considering understanding as an active process taking place over a text (i.e. a piece of discourse), a natural step forward would be to integrate DRT's treatment of anaphora with a notion of dynamic envisioning, where the representation of individuals and their context evolves over time. Similarly, the treatment of determiners in DRT may be a stepping stone to formalize quantificational effects in the conceptual tier of our system, including non-trivial ones (e.g. which donkeys are envisioned, if any, when hearing \textit{The farmer does not own a donkey}?)
Further, crossing over logical and conceptual representations, we need to develop an account of semantic construction for our framework. This will involve explaining how operations in the logic translate into particular conceptual constraints, and how the framework handles classic puzzles from the formal semantics literature (e.g. is a \textit{stone lion} a \textit{lion}?)
Finally, as previously mentioned, we must work out a scalable implementation of the framework that will retain our ability to trace the interpretation of a given utterance in a fully explainable manner.
We hope, at any rate, that our proposal provides a stepping stone for developing a description of what Charles Fillmore had in mind when talking of `envisioning'.
As we have seen, formalizing the process has
consequences for the way we define meaning, in the lexicon and beyond.
We believe that approaching these questions from the point of view of a formal description of comprehension can fruitfully contribute to their elucidation.
\section{A situation description system for modeling word meaning in context}
\label{sec:constraints}
In our definition of situation description
systems (SDSs) in the previous section, we have not included any
specific constraints on word meaning in context. We have
kept the definition general so that we can construct SDSs with
different sets of constraints on meaning in context. In this section,
we introduce one particular set of constraints on word meaning in
context, comprising selectional constraints as well as constraints
from the overall scenario.
\subsection{Concepts, roles, and scenarios}
In this SDS, we use three different types of conceptual structures.
\paragraph{Concepts.} We assume that ``word meanings are made up of
pieces of conceptual structure''~\citep[p.391]{Murphy:02}. So at a
coarse-grained level, a word occurrence evokes a concept, which
disambiguates the word. For example the word
\textit{star} may evoke (among other things) either a \emph{well-known
person} concept or a \emph{sun} concept. We further assume that
concepts store knowledge about concept members, some of which comes in
the form of possible entailments. If a discourse referent is a \textit{star} in
the \textit{sun} sense, then it is also a \textit{sun}, and is often
\textit{made of plasma}.
\paragraph{Semantic roles.} Semantic roles characterize the participants of
events by the way they are involved. There are many proposals which spell out the set of semantic roles one should assume, as well as their level of granularity. We do
not need to commit to any
particular semantic role inventory; it suffices to assume that some
concepts are associated with semantic roles, with role labels that
could be specific to the concept or general across concepts. Crucially,
we assume that semantic roles
characterize the goodness of role fillers through selectional
constraints. Following models from psycholinguistics, including
\citet{McRaeThemrolesAsConcepts} and \citet{Pado:09}, we assume these
constraints are actually selectional \emph{preferences} and model
them probabilistically, as a distribution over concepts that could
fill the role.
\paragraph{Scenarios.} Scenarios hold knowledge about groups of events
that usually happen together, and the kinds of
participants that are usually involved. Knowledge like this is discussed in psychology
under the name of \emph{generalized event
knowledge}~\citep{McRae:LLC}, in artificial intelligence as
scripts or narrative schemas~\citep{schank_abelson77,Chambers:08}, and
in some of the frames of FrameNet~\citep{Fillmore:Framenet}. We
hypothesize that scenario knowledge is what
disambiguates sentences like \emph{an athlete ran to a ball} (example
\ref{ex:ray} above).
\subsection{An overview of the generative story}
\begin{figure}[tb]
\centering
\begin{footnotesize}
\begin{tabular}[b]{p{20em}p{20em}}
Draw a distribution $\theta$ over scenarios, and draw a
situation description size $n$\\
Do $n$ times:\\
\textbullet~ Draw a scenario $s$ from $\theta$\\
\textbullet~ Draw a concept $z$ from $s$\\
\textbullet~ For each role $r$:\\
~~~~ -~Sample whether to realize $r$ for $z$. If yes:\\
~~~~~~ -~Sample a scenario $s'$ from $\theta$.\\
~~~~~~ -~ Sample a filler concept $z'$ jointly from $\langle z,
r\rangle$\\
~~~~~~~~~~and $s'$ (Product of Experts)\\
\\
For each concept token $z$:\\
\textbullet~ Sample DRS conditions for the referent\\
~~~~associated with $z$\\
For each concept/role token $\langle z, r\rangle$:\\
\textbullet~ Sample DRS conditions\\
~~~~for the pair of
associated referents\\
& \smash{\raisebox{0.2\height}{\includegraphics[scale=0.4]{figs/model_new_v1.pdf}}}
\end{tabular}
\end{footnotesize}
\caption{Generative model, and situation description for \textit{A star is shining.}}
\label{tab:model_overview}
\end{figure}
Figure~\ref{tab:model_overview} shows the generative story (left) and
an example situation description (SD) for the utterance \textit{A star is
shining} (right), with values of random variables shown in the
nodes. The graph is analogous to the graphical model in
Fig.~\ref{fig:astronomer_graphical} but has values filled in for the
random variables. In the SD, we have concepts in light blue,
semantic roles in dark blue, and scenarios in green, along with the
scenario mix of the utterance in yellow. In the language fragment we cover, we
assume that every discourse referent is described through only one
underlying concept. So we have a one-to-one correspondence between concepts and
discourse referents, as shown in the
figure.
The generative story is similar to Latent Dirichlet Allocation (LDA): An
utterance is characterized by a categorical distribution over scenarios, mirroring the case
where a document is characterized by a categorical distribution over
topics. Scenarios are categorical distributions over concepts, just as topics in LDA are categorical distributions over words.
Reading the generative story in the direction
of inference -- bottom to top in the figure --
words, appearing as predicates in
the eDRS, evoke concepts. Concepts, in
turn, link to scenarios. (We draw each concept's scenario as a
separate random variable, with possible repeated values, as in the
figure, where \textsc{stargaze} appears three times.)
that listeners expect the scenario mix of an utterance to typically
only contain a single scenario, or a few different ones,
and that it is this expectation that drives disambiguation in cases
like \textit{the athlete/violinist ran to the ball}, example
\ref{ex:ray} above. Concepts that appear as arguments to semantic
roles depend on the role's selectional preference, but they also
depend on a scenario. So they are influenced by two constraints,
from the selectional preference and from the sparseness of the
scenario mix.
In an SDS, inference proceeds not only bottom to top,
from eDRS to underlying conceptual structure, but also top to bottom
to flesh out the situation description. This is analogous to
\citet{Wallach:Thesis}, discussed in \S\ref{sec:generative}, who used topic models
to guess the second half of a document given the first half. These
inferences are shown in gray in the figure. Concepts can generate
additional entailments, like $sun(x)$
from the \textsc{star} concept. It is also possible to generate additional
events and entities, in this case a \textsc{sky} concept and its
discourse referent.
We make some simplifying assumptions to keep the formalism manageable. We
assume that the utterance describes every discourse referent through
only one predicate, so we can say $x$ is a \textit{star} but not a
\textit{bright star}. And we assume that concepts that appear in
argument positions never have any semantic roles to fill.
\subsection{The generative story in more detail}
\begin{table}[tb]
\centering
\begin{tabular}{lp{18em}p{8em}}
param. & explanation & assoc. with\\\hline
$\alpha$ & Dirichlet concentration parameter: sparseness of scenario
mixtures\\
$\theta$ & scenario mixture (categorical)\\
$\varphi$ & distribution over concepts (categorical) & scenario\\
$\rho$ & realize this role? (Bernoulli) & concept\\
$\chi$ & distribution over filler concepts (categorical) & concept/role pair\\
$\pi$ & generate a condition with this predicate? (Bernoulli) & concept\\
\end{tabular}
\caption{Generative model: notation}
\label{tab:genparameters}
\end{table}
We now go through the generative story of Figure~\ref{tab:model_overview}
in detail. We assume a finite inventory of scenarios, a finite
inventory of concepts, and a finite inventory of roles for each
concept,\footnote{This is a restriction, but one that we think is
reasonable to make, given that we assume a finite cognizer.} where each role label can appear at most once for each
concept token.
\paragraph{Drawing a distribution over scenarios}
We describe each utterance as bringing to mind one or more scenarios that the
cognizer knows. We formalize this as a scenario mix $\theta$, a
categorical distribution over scenarios. It
is sampled from a Dirichlet, a distribution over categorical
distributions. The Dirichlet has one
parameter, the concentration parameter $\alpha$, which
controls whether many or few
scenarios will have a nonzero probability in $\theta$. We use a value
$\alpha <1$, which makes the Dirichlet distribution prefer sparse
distributions.
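To see why a small concentration parameter favors sparse mixes, recall that the density of a symmetric Dirichlet is
\[
p(\theta \mid \alpha) \propto \prod_{s} \theta_s^{\alpha - 1}.
\]
For $\alpha < 1$ the exponents are negative, so the density grows as any single $\theta_s$ approaches zero; the most probable scenario mixes therefore sit near a corner or edge of the probability simplex, assigning (near-)zero probability to most scenarios.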
We list all the parameters of the generative model in Table~\ref{tab:genparameters}.
\paragraph{Drawing the size of the situation description.}
As discussed in \S~\ref{sec:infinity}, we need SDs
to be finite in size. So we
assume that there is some maximum SD size that a cognizer can
keep in mind, and we sample an SD size up to this
maximum. For now, we do this in the simplest possible way: We assume
that the concepts in the conceptual graph form a collection of
separate trees, each at most two levels deep: the concept at the root
of a tree can have semantic roles, and there are concepts that fill
those semantic roles, but the fillers do not have semantic roles
of their own.\footnote{There can be multiple occurrences/tokens of
the same concept in a conceptual graph, for example when the
utterance mentions two different astronomers. But one and the same concept
token cannot be the filler of two different roles, for now.} So to
sample the size of the SD, we sample a number $n$ of
\emph{predicate-argument trees} that will be in the
conceptual graph. We sample $n$ from some discrete distribution over
the numbers from 1 to the maximum size, for example a uniform
distribution. This is enough to guarantee a finite-size SD: The number
of role filler concepts that we sample for one
tree is restricted by the maximum number of roles that a
concept can have. And the number of discourse referents is restricted
by the number of concepts in the SD. We next describe how to sample
each of the $n$ trees.
\paragraph{Drawing the concept at the root of a predicate-argument
tree.} We draw a scenario
$s$ from the scenario mix, that is, from the categorical distribution
parametrized by $\theta$.
We model a scenario as a categorical distribution over
concepts, such that we can repeatedly draw from the
same scenario to get a coherent story. We write $\varphi_s$ for the
categorical distribution associated with the scenario $s$. We then
sample a concept $c$ from $\varphi_s$. This forms the root of the
predicate-argument tree.
\paragraph{Drawing roles and role fillers.} Some semantic roles are
mandatory, others are not. We
operationalize this as follows: We write $roles(c)$ for the set of
roles that concept $c$ can possibly realize. For any $r$ in
$roles(c)$, we equip $c$ with a probability $\rho_{c, r}$ of realizing
$r$. For an obligatory role $r$, we would have
$\rho_{c, r} = 1$.
We formulate the selectional constraint of a role $r$ of concept $c$
simply as a probability distribution over concepts, giving higher
probability to concepts that are likely fillers. (Again, a
hard selectional constraint can be modeled by giving a probability of
zero to concepts that are assumed to be infelicitous as fillers.)
Formally, this is another categorical distribution, whose parameters
we call $\chi_{r}$.
Then we proceed as follows with the root concept $c$ of our current
tree. We sample, for each
role $r \in roles(c)$, whether $c$ will realize it. If the answer is
yes, then we sample a concept to fill the role $r$ of $c$. We do not
simply sample this concept from $\chi_r$ because we want each concept
to be drawn from a scenario. So we
again first sample a scenario $s'$ from the categorical distribution with
parameter $\theta$. Then we sample a
filler concept \emph{jointly} from $s'$ and $r$, using a Product of
Experts.
In general, a Product of Experts works like this. Say we have
two categorical distributions over $k$ categories, with
event probabilities $a_1, \ldots, a_k$ and $b_1, \ldots, b_k$
respectively. Then the product-of-experts probability of class $i$ is
\[\frac{a_i b_i}{\sum_j a_jb_j}\]
The numerator is the probability that both distributions ``vote
for'' $i$, and the denominator is the sum of the probabilities of
all outcomes where both
distributions agree on their ``vote''.
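As a concrete illustration, here is a minimal sketch of this computation in WebPPL~\citep{webppl}, the probabilistic programming language we use for simulations in \S\ref{sec:illustration}; the function name \texttt{productOfExperts} is ours, not a built-in:
\begin{lstlisting}
// combine two categorical parameter vectors by a Product of Experts
var productOfExperts = function(a, b) {
  var raw = map2(function(ai, bi) { return ai * bi; }, a, b);
  var z = sum(raw);                 // normalization constant
  return map(function(r) { return r / z; }, raw);
};
// a scenario favoring concept 1, a selectional preference
// favoring concept 2:
productOfExperts([0.7, 0.2, 0.1], [0.1, 0.6, 0.3]);
// => [0.318, 0.545, 0.136] (approximately)
\end{lstlisting}
Note that the combination can overrule the scenario's own favorite (here, concept 1) when the selectional preference disfavors it strongly enough.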
In our case, the two categorical distributions we are combining, or
rather their parameters, are $\varphi_{s'}$ for the scenario and
$\chi_{r}$ for the semantic role.\footnote{If we wanted to allow
argument concepts to themselves have roles, we could simply sample
them at this point. The only thing to watch out for is, again,
a finite overall SD size. So along with the number $n$ of
predicate-argument trees we would
sample a maximum depth $d$ to restrict sampling. To make sure that
the overall probability distribution is not ill-defined, we would
have to set up role probabilities to not allow for infinite sequences like ``I think that he thinks that he thinks that he
thinks that...''}
\paragraph{Complications.} It could happen that the scenario $s$ and
the selectional preference of $r$ do not
have any concept that they could agree on -- that is, that there is no
concept to which they both give a nonzero probability. In that
case the Product of Experts distribution would not be
well-defined. It is possible to adapt all formulas to explicitly
account for such pathological
cases, but this makes formulas more complex. So we only
formulate them for the ``well-behaved'' case here.\footnote{The
overall probability distribution would become ill-defined if, for
example, every SD one could sample would involve a required role that
cannot be filled at all. We can prevent such problems by
stipulating that every scenario that can generate an event concept
must also be able to generate at least one filler concept matching
each of the event concept's selectional constraints.}
\paragraph{Accounting for equivalence between SDs.}
As defined in \S\ref{sec:situation_description_def}, an SD is an equivalence
class, with respect to graph homomorphisms for the conceptual graphs
and with respect to variable names for eDRSs. We can pick a
representative of each equivalence class as the one having some canonical ordering of
nodes in the conceptual graph and of discourse referents in the eDRS,
for example by imposing some ordering on scenario labels, on concept
labels, and on semantic roles, assigning nodes in lexicographic
order, and assigning discourse referent names in the same
order. Below, we define SDSs as generating only these
representatives. There is one thing to note, namely that there are
$n!$ ways for the generative model to generate the same
representative, based on the order in which it generates
predicate-argument trees;
this $n!$ will appear in the definition of $\Delta_1$ (the first of
the two components of the probability distribution $\Delta$ of the
SDS) below.
\paragraph{Drawing conditions for the DRS.} Each concept can
generate DRS conditions that describe its members. For example, a
\textsc{bat-animal} concept may describe a member as a \textit{bat} and
\textit{animal} that may or may not be \textit{furry}. We formalize
this by associating each concept $c$ with a set $preds(c) \subseteq
PS$ of unary predicates that
it can use. Each concept will also be associated with Bernoulli
probabilities for its predicate symbols: If $Q$ is a predicate symbol in
$preds(c)$, then $c$ comes with a probability $\pi_{c, Q}$ that $Q$
would be true of a member of $c$. For example we could have $\pi_{\text{\textsc{bat-animal}}, animal}
= 1.0$ (a bat is always an animal), and
$\pi_{\text{\textsc{bat-animal}}, furry} = 0.9$ (a bat is likely to be
furry).
Each concept node in the
conceptual graph has a particular discourse referent for which it
generates conditions. The SD specifies this through the function $g$ that links
nodes to ``their'' discourse referents. Say $c$ is a concept
that appears in the conceptual graph, and
$x_c$ is the discourse referent associated with it. Then $c$ will
generate eDRS conditions of the form $Q(x_c)$ or $\neg Q(x_c)$ for
predicate symbols $Q \in preds(c)$, where $\pi_{c, Q}$ is the
probability of generating $Q(x_c)$, and $1-\pi_{c, Q}$ is the
probability of generating $\neg Q(x_c)$ instead.
Analogously, a semantic role can generate conditions for a pair of
discourse referents, referring to the event and the role filler. For
example, it may be able to generate the binary predicate symbol
\textit{Agent}, and maybe \textit{volitional} if we
wanted to use proto-role features~\citep{Dowty:proto}. So we will
assume that any role $r$ is associated with a set $preds(r) \subseteq
PS$ of binary predicate symbols, and with Bernoulli probabilities
$\pi_{r, Q}$. A semantic role node is linked to its pair of discourse
referents via the function $g$. Say that it is linked to $\langle x_1,
x_2\rangle$, where $x_1$ is a discourse referent connected to an event
concept node, and $x_2$ is a discourse referent connected to a filler
concept node. Then the semantic role node will generate conditions
like $Agent(x_1, x_2)$ or $volitional(x_1, x_2)$ ($x_2$ is volitionally
involved in event $x_1$).
\paragraph{Putting it all together.}
We now have all the pieces in place to formulate the probability
distribution $\Delta$ of our SDS, factored into a probability
$\Delta_1$ of generating the conceptual graph and a probability
$\Delta_2$ of generating the eDRS given the conceptual graph (or, more
precisely, their equivalence classes).
Say $[G]$ is an SD where $G$
contains $n$ subgraphs. Then the probability $\Delta_1$ of sampling
$[G]$ is the probability of sampling its scenario mix and size, and of sampling each of its
subgraphs in any order ($n$ factorial). The probability of the $i$-th
subgraph is the probability of sampling the scenario $s_i$ and concept
$c_i$ for
the top-level concept, and for each realized role $r$, the probability of
sampling the role and its filler (scenario $s_{ri}$ and concept $c_{ri}$), and for each non-realized role, the
probability of not realizing it.
\begin{align}
\label{eq:delta1}
\Delta_1([G]) =\: & p(\theta|\alpha) \:p(n) \nonumber \\
& n!\:\prod_{i=1}^n \Bigg( p(s_i \mid \theta)
p(c_i \mid s_i)\nonumber \\
&~~~~~~\Big(\prod_{r \in roles(c_i)\text{ realized in }G}
\rho_{c_i, r}~ p(s_{ri} \mid \theta)
p(c_{ri} \mid c_i, r, s_{ri})\Big) \nonumber\\
&~~~~~~ \prod_{r \in roles(c_i) \text{ not realized in } G} (1 -
\rho_{c_i, r})\Bigg)
\end{align}
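To make the formula concrete, consider a toy evaluation with invented numbers: let $n = 1$, and let the single tree consist of a root concept $c_1$ with one role $r$ that is always realized ($\rho_{c_1, r} = 1$). If $p(s_1 \mid \theta) = p(s_{r1} \mid \theta) = 0.8$, $p(c_1 \mid s_1) = 0.1$, and $p(c_{r1} \mid c_1, r, s_{r1}) = 0.5$, then the product in Equation~\ref{eq:delta1} evaluates to $1! \cdot (0.8 \cdot 0.1) \cdot (1 \cdot 0.8 \cdot 0.5) = 0.032$, to be multiplied by the prior factors $p(\theta \mid \alpha)$ and $p(n)$.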
For the probability $\Delta_2$ of the eDRS given the graph, we will
add notation to simplify the formula: Say $G$ contains $m$ concept
labels overall on graph
nodes (where the same concept label can appear multiple times),
then we call them $c_1, \ldots, c_m$, with matching scenario labels $s_1,
\ldots, s_m$, and we write $x_{c_i}$ for the discourse referent
associated with $c_i$. Say $G$ contains $\ell$ role-labeled nodes
overall, called $r_1, \ldots, r_\ell$, and $r_j$ is a role of $c_i$
filled by $c_k$. Then we write $x_{r_j} = x_{c_i}$ and $x_{r_j}' =
x_{c_k}$.
The probability of the eDRS given the
graph is the joint probability
of all its conditions. The probability is zero for eDRSs that contain
a condition $Q(x_c)$ or $\neg Q(x_c)$ for a $Q \not \in preds(c)$, and
analogously for binary predicates. For other
eDRSs, the probability is
\begin{align}
\label{eq:delta2}
\Delta_2([\langle G, D, g\rangle]\mid [G]) =& \prod_{i=1}^m
\prod_{Q \in preds(c_i), Q(x_{c_i}) \text{ in } D} \pi_{c_i, Q}\nonumber\\
& ~~~\prod_{Q \in preds(c_i), \neg Q(x_{c_i}) \text{ in } D} (1 - \pi_{c_i, Q})\nonumber\\
& \prod_{j=1}^\ell\prod_{Q \in preds(r_j), Q(x_{r_j}, x_{r_j}') \text{ in } D} \pi_{r_j, Q}\nonumber\\
&~~~ \prod_{Q \in preds(r_j), \neg Q(x_{r_j},x_{r_j}') \text{ in } D} (1 - \pi_{r_j, Q})
\end{align}
\subsection{Learning}
\label{sec:learning}
Approaches that describe lexical meaning in all its nuances face a
dilemma. On the one hand, it is vital that they scale up to the
``vastness of word meaning''~\citep[p. 241]{Baroni:FregeInSpace}, to
show that they really capture the idiosyncrasies of the lexicon.
This scaling up can be done with computational methods that learn
lexical representations from large amounts of corpus data. On the
other hand, such approaches often provide only a
blurry view of what the model is saying about the meaning of a word in
a particular context, and there is often not
much to be learned from a model's success or failure on an
individual example. Ideally we would want to be able to ``zoom in'' on an
individual example to critique the model's predictions in detail, and
also to scale up to the whole lexicon.
Situation Description Systems as we have defined them here allow us to
``zoom in,'' as we will demonstrate in \S\ref{sec:illustration}. But we
do not currently learn from corpus data to scale up: In this paper, we
only use hand-crafted concept labels and entailments. That raises
the question: Can this approach in principle be scaled up to meet the
challenge of the gigantic lexicon? In Latent Dirichlet Allocation
models in
computational linguistics, all parameters are usually learned together from data,
e.g.\ in \citet{Blei:03}, but these are typically simpler graphical
models without the interacting constraints that we have. An
alternative would be to first establish
concepts, then learn scenarios and selectional constraints from observed
co-occurrences of concepts. This alternative is less mathematically elegant but would provide more
control over the learned structures; it also has some analogy with what
we know about how children learn concepts and larger structures. As concepts,
we could either use hand-created semantic classes such as
FrameNet~\citep{Fillmore:Framenet}, or automatically generated
clusters of word occurrence embeddings as we used them in
\citet{chronis-erk-2020-bishop}. We think that SDSs would retain some
of their ``zoom-in'' interpretability even when scaled up because they
separate out different types of context effects.
\subsection{Extensions}
\paragraph{Compatibility constraints} In the current SDS, we have
expressed compatibility constraints on predicates and their
arguments. But in a larger fragment of English we need compatibility
constraints concerning any two predicates that might be stated of the
same discourse referent, for example compatibility between heads and
modifiers. Briefly, this can be done by re-defining selectional preferences in terms
of a semantic feature space (which could be an automatically acquired
space, or a hand-defined space), and using the feature space for
compatibility constraints between any pair of concepts.
\paragraph{Structured scenarios.} In the current paper, we have
defined scenarios in a very simple way, as collections of
concepts. But it makes sense to assume that a scenario has structure,
with events connected to possible participants, and to other events,
as is the case in generalized event knowledge~\citep{McRae:LLC},
scripts~\citep{schank_abelson77,Chambers:08}, and FrameNet
frames~\citep{Fillmore:Framenet}. Then we could formalize a scenario
as a graph, where each node is associated with a distribution over
concepts that can instantiate that node. And repeated sampling from the
same scenario could be formalized as a graph random walk.
\subsection{Related approaches that integrate fine-grained word meaning with sentence meaning}
To recap, our situation description system has three main features: a) it emphasizes the cognitive aspects of understanding; b) it links conceptual content to logical representations; c) it represents dependencies between different conceptual elements and/or logical constituents in a specific, network-like structure. These various features can be found in several recent approaches to the integration of lexical semantics with sentence semantics. We will briefly review them now.
\paragraph{Link with cognitive approaches} A number of proposals focus on finding appropriate representations for the conceptual aspects of compositionality. Those proposals usually assume that the `meaning' of individual lexical items lives in a multi-dimensional vector space, and that individual items can be either directly combined to generate sentence representations, or be used to enrich the standard process of referential composition. This mirrors the view from cognitive science that concepts can be represented by feature lists \citep{Murphy:02}.
From a theoretical semantics point of view, \citet{asher2011} is one such proposal, which develops a type-theoretic characterization of word meaning that is then implemented in a vector space model in \citet{AsherEtAl:CL}. Similarly, \cite{McNallyBoleda}, as well as \cite{McNally:2017tr} and \cite{GehrkeMcNally}, integrate a characterization of ``descriptive meaning'' in the form of word embeddings into logical forms. \cite{Zeevat:RepresentingTheLexicon} focuses on the representation of the lexicon itself, drawing on structured feature representations called frames \citep{Barsalou:2017}.
From a more computational perspective, we should also mention approaches to ``compositional distributional semantics''~\citep{Baroni:FregeInSpace,grefenstette-sadrzadeh:2011:EMNLP,SadrzadehMuskens:18}, which take the route of reformulating some aspects of formal semantics in terms of vectors and tensors.
The closest frameworks to our situation description systems can be found in the works of Chersoni et al~(\citeyear{chersoni_santus_pannitto_lenci_blache_huang_2019}) and Emerson~\citep{Emerson,emerson-2020-autoencoding,EmersonQuantified}. Chersoni and colleagues compute embeddings for words in a sentence, integrated with a DRS, and use a global event graph that records predicate-argument co-occurrences to influence embeddings. Their global event graph is very similar to our scenarios, and they also explicitly draw on the psychological literature to justify this.
Emerson describes conceptual meanings of words as depending on the conceptual meanings of their neighbors in a semantic graph. He formulates his model as an autoencoder with
energy-based constraints between neighbors in the semantic graph, and learns embeddings within this model from scratch.
\paragraph{Integration with logic}
Whilst the above approaches have many attractive features from the point of view of integrating conceptual meaning into sentence representations, many encounter challenges when it comes to their relation to logical structures. Exceptions are Asher's and McNally et al's approaches, which can deal with reference phenomena, quantifiers and negation. Asher represents meaning type-theoretically while McNally et al build their representations over the notion of kind.
An additional feature of e.g. Asher's approach is the ability to project entailments back to logical forms. Emerson's approach also provides this feature but does not accommodate entities and events in the situation that are not explicitly mentioned. Frameworks such as Chersoni et al's also only provide partial integration of the conceptual and logical tiers. While from a theoretical standpoint, we should strive for more encompassing approaches, we should however note that there is a trade-off between model complexity and the ability to extend accounts to additional phenomena, as well as providing scalable implementations \citep{AsherEtAl:CL}.
Our approach does not yet provide a full account of the relation between sentence representations and logical structures, but in principle, it can be worked out from the integration of DRT in the framework. We also provide a way to relate entailments to the logical tier of our system. As mentioned in \S\ref{sec:learning}, our current implementation only allows us to inspect the behaviour of specific utterances. We would like to scale up our approach without losing the ability to trace and interpret the system's individual decisions.
\paragraph{Semantic structure} Turning to the question of structural properties, existing approaches make different decisions with regard to the internal architecture of their semantics. To make sense of them, it is perhaps worthwhile coming back to the notion of compositionality itself. The `compositionality principle' states that the meaning of sentences is derived in a bottom-up fashion from the meaning of their parts \citep{Partee1995lexical}. But this is in problematic contradiction with another principle: the `contextualisation principle', or the idea that a linguistic constituent only has meaning in the context of a sentence, top-down \citep{Pelletier2001}. The integration of the two principles is an unsolved question, but we can see some current approaches as attempting such an integration, usually with a stronger focus on one of the principles: either strict compositionality from types, or a more `network-like' notion of context, emphasizing the multiple dependencies between sentence constituents. Exemplifying the first type of approach, \cite{Sutton:Vague} and \cite{Cooper:2015vj} propose probabilistic type-theoretic accounts of semantics in which situations have types that are propositions. Central to their accounts is the probability that a situation would be of a type $t_2$ given that it is of type $t_1$, for example: Given that a situation is of the type where \textit{Kim is smiling}, how likely is it to also be of the type where \textit{Kim is happy}? Their frameworks are similar to ours in that an utterance is taken to describe a situation, from which we can probabilistically draw inferences. But they take situations as the basic compositional block of their semantics. Our framework, on the other hand, takes situations apart into a conceptual graph consisting of mental representations of entities and events and their properties, and the emphasis is on the meaning of individual lexical items \textit{given} the context of the sentence. In that sense, the present paper is closer to the second type of approaches, exemplified also by the work of Chersoni et al, or Emerson.
\section{Probabilistic graphical models for situation descriptions}
\label{sec:generative}
Probabilistic graphical models form the technical core of our approach. They let us represent concepts underlying content words as nodes in a graph, where the edges indicate weighted, interacting constraints. In this section, we give a short introduction to these models.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.4]{figs/catgraphical.pdf}
\caption{A graphical model that shows dependencies between random variables: The cat may run wildly because it's hunting time (especially if there is a piece of Lego on the floor), if it hears the dinner bowl, or if there are loud noises. Loud noises will make it less likely to hunt.}
\label{fig:cat_graphical}
\end{figure}
\paragraph{Graphical models.} A \emph{random variable} is something that can take on different values depending on some random influence, for example the outcome of a coin flip, or whether or not your cat will suddenly run around wildly. Often we want to describe not only a single random variable, but multiple random variables together, a \emph{joint distribution}, which for five random variables $A$--$E$ we would write as $p(A \wedge B \wedge C \wedge D \wedge E)$. In such joint distributions, it can be hard to keep track of all combinations of values of the random variables, and their probabilities. Figure~\ref{fig:cat_graphical} shows a joint distribution as a \emph{graphical model}. This is a characterization of cases when a cat suddenly runs around wildly. This may happen if the cat decides that it's hunting time (which is facilitated by Lego pieces lying on the floor), if there are loud noises, or if it hears the dinner bowl. If there are loud noises, the cat will probably hide rather than hunt. Each node in the graphical model is a random variable, and the edges are \emph{dependencies}. This graph says that our joint distribution is not maximally complex, which it would be if there were edges between any pair of nodes. Rather, we can describe the whole joint distribution simply through conditional probabilities linking neighboring nodes. So the probability of \textit{Hunting time} in the joint distribution is $p(\text{Hunting time} \mid \text{Lego on floor}, \text{Loud noises})$.
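The dependency structure of Figure~\ref{fig:cat_graphical} can be written down directly as a small generative program. Here is a sketch in WebPPL~\citep{webppl}, the probabilistic programming language we use in \S\ref{sec:illustration}; all probabilities are invented for illustration:
\begin{lstlisting}
var catModel = function() {
  var lego   = flip(0.3);  // Lego on floor
  var noises = flip(0.1);  // loud noises
  var bowl   = flip(0.2);  // cat hears the dinner bowl
  // hunting depends only on its parents: Lego and loud noises
  var hunting = flip(noises ? 0.05 : (lego ? 0.6 : 0.2));
  // running wildly depends on hunting, noises, and the bowl
  var runs = flip((hunting || noises || bowl) ? 0.8 : 0.05);
  return runs;
};
\end{lstlisting}
Each \texttt{flip} corresponds to one node of the graph and conditions only on the values of that node's parents.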
\begin{figure}[tb]
\centering
\includegraphics[scale=0.4]{figs/clusters.pdf}
\caption{Data points that seem to come in three clusters, and a generative way to describe this: The data points look like they have been generated from three center points, shown on the right.}
\label{fig:generative_clusters}
\end{figure}
\paragraph{Generative stories.} \textit{Generative stories} describe graphical models where some of the random variables are \emph{observed}, and others are \emph{latent}. The latent variables are our assumptions about underlying structure in the data. They describe structure in the data as a process that \emph{could} have generated such data. For example, the data points in the left panel of Figure~\ref{fig:generative_clusters} are not equally scattered, they look as if they come in three clusters. A probabilistic generative way of describing this structure would be to tell a generative story that goes like this: The data could have been generated by three random variables that we can imagine as three center points. Each of those center points is associated with a probability distribution that states where ``its'' point in space would lie, preferring points close by. By repeatedly generating, or sampling, points in space from each of the center points, we would obtain a pattern like the one in Figure~\ref{fig:generative_clusters}. This generative story is illustrated in the right panel, where three possible centers are shown as stars, and points are colored in to match the star that generated them.
A probabilistic generative model does not claim that the data was actually generated following the generative story. Rather, the generative story is only a convenient way of stating structure in the data that is assumed by the probabilistic model. The usual way to use such a model is in the opposite direction from the generative story: Given a set of random variables that are observations, in our case the data points shown in the left panel, probabilistic inference is used to determine likely values for the latent random variables, in our case the positions of the three cluster centers. Latent variables form the heart of the generative model: the underlying structure in the data that is hypothesized by the model.
\paragraph{Probabilistic inference.} There are several standard methods for probabilistic inference that can be used across probabilistic models. One particularly simple method is \emph{rejection sampling}: The generative story is enacted in the order in which it is told, ``top-down'', generating samples from latent variables. If a generated sample coincides with the observations, it is kept; otherwise it is rejected. Values for the latent variables can then be read off from the non-rejected samples. This method only works in toy-size models, but is useful for prototyping. We use it below to test the predictions of small situation description systems. Other probabilistic inference methods that work for larger models include Markov chain Monte Carlo methods such as Gibbs sampling, which iteratively improve estimates of latent variables, and variational methods, which approximate latent variables by making additional assumptions on how they are distributed.
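As a minimal illustration of rejection sampling, here is a WebPPL~\citep{webppl} sketch (the constructs \texttt{categorical}, \texttt{condition} and \texttt{Infer} are introduced in more detail in \S\ref{sec:illustration}): we roll a fair die but only observe that the outcome is even.
\begin{lstlisting}
var model = function() {
  var roll = categorical({ps: [1/6, 1/6, 1/6, 1/6, 1/6, 1/6],
                          vs: [1, 2, 3, 4, 5, 6]});
  condition(roll % 2 == 0);  // reject samples contradicting the data
  return roll;
};
var dist = Infer({method: 'rejection', samples: 1000, model: model});
// dist assigns probability of roughly 1/3 each to 2, 4 and 6
\end{lstlisting}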
\paragraph{Latent Dirichlet Allocation.} In computational linguistics, Latent Dirichlet Allocation~\citep{Blei:03}, also called topic modeling, has been a particularly influential probabilistic generative model. Topic modeling describes documents as being about different topics, where a single document often touches on more than one topic. For example, a newspaper article about economic policy could be described as touching on \textit{politics} and \textit{finance}. A topic is formalized simply through typical words, maybe \textit{government}, \textit{enact}, and \textit{policy} for \textit{politics} and \textit{invest}, \textit{bank} and \textit{monetary} for \textit{finance}.
Specifically, a topic is a probability distribution over words, where the \textit{politics} topic would have a much higher probability for the word \textit{enact} than, say, the word \textit{vampire}. A document, in turn, is a probability distribution over topics. The generative model describes a document as generated from different topics in a similar way in which the data points in Figure~\ref{fig:generative_clusters} are generated from different center points.
\paragraph{Useful probability distributions.} It will be useful to look at the probability distributions in topic models in more detail, since they are the same ones we will be using below. A topic, as a distribution over words, is characterized by a \emph{categorical distribution}, the kind of distribution that describes, for example, rolling a single die. With a fair six-sided die, the probability of each side is $\frac{1}{6}$, so we can describe the associated categorical distribution as $\langle \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}\rangle$, a sequence of parameters that are the probabilities of outcome categories. For a trick die, the parameter sequence might be, say, $\langle \frac{1}{3}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{12}, \frac{1}{12}\rangle$. For a topic, the outcomes are not 1--6 as on a die, but all the words in the dictionary. A document is also associated with a categorical distribution: a distribution over topics. If we assume that we have three topics overall, \textit{politics}, \textit{finance} and \textit{gothic\_novel}, the economic policy document from above might have an associated distribution of $\langle 0.6, 0.4, 0.0\rangle$. Another important distribution is the \emph{Dirichlet distribution}, which can be viewed as a distribution over categorical distributions, or rather over their parameter sequences: A sample from a third-order Dirichlet distribution could yield an outcome that is $\langle 0.3, 0.3, 0.4\rangle$, or $\langle 0.9, 0.0, 0.1\rangle$, a sequence of probabilities that sum up to one. The Dirichlet distribution has a parameter of its own, the \emph{concentration parameter} $\alpha$. When this parameter is well above 1, samples from the Dirichlet distribution will tend to be sequences of values that are similar to one another, like $\langle 0.3, 0.3, 0.4\rangle$. When the concentration parameter is smaller than 1, the distribution will mostly generate samples that concentrate their probability mass on one or a few components, like $\langle 0.9, 0.0, 0.1\rangle$. Latent Dirichlet Allocation assumes that the topic distribution of a document, which is a categorical distribution, is drawn from a Dirichlet distribution. So by varying the concentration parameter we can vary our \emph{prior} assumption about what documents look like. For example, by setting $\alpha < 1$ we can say that a document does not typically draw on many different topics; instead, it typically has one or two dominant topics. Another distribution we will need below is the \emph{Bernoulli distribution}. This is a distribution with two outcomes, 0 and 1 -- or heads and tails. The Bernoulli distribution can be viewed as describing a coin flip. Its one parameter is the probability of the coin coming up heads.
\paragraph{An example of a generative story.} The generative story of topic modeling goes like this. To generate a document, first sample a distribution over topics from a Dirichlet distribution. Then, sample a document length $n$, for example from a uniform distribution. Then, $n$ times, sample a topic $z$ from the document's topic distribution, and sample a word from $z$'s categorical distribution. One thing to note is that this generative story has not just one level of latent variables, but multiple levels, with latent variables generating values for other latent variables. It is a strength of probabilistic models that assumptions about structure can be encoded as interactions between variables.
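To make the story concrete, here is a sketch of it in WebPPL~\citep{webppl}; the topics, words and probabilities are invented, and for brevity the topic mixture is fixed rather than drawn from a Dirichlet:
\begin{lstlisting}
var wordsFor = function(z) {
  return z == "politics" ?
    {ps: [0.5, 0.3, 0.2], vs: ["government", "enact", "policy"]} :
    {ps: [0.4, 0.4, 0.2], vs: ["invest", "bank", "monetary"]};
};
var generateDocument = function() {
  var theta = [0.6, 0.4];             // fixed topic mixture
  var n = uniformDraw([5, 6, 7, 8]);  // document length
  return repeat(n, function() {
    var z = categorical({ps: theta, vs: ["politics", "finance"]});
    return categorical(wordsFor(z));  // word drawn from topic z
  });
};
generateDocument();  // e.g. ["policy", "invest", "government", ...]
\end{lstlisting}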
Another thing to note is that this generative story is clearly not a description of how documents actually come about. But it is a formalization that allows us to reason about some facets of documents, in particular their topics.
Probabilistic inference can be used to answer several questions. First, given a document collection, we can infer likely topics, as clusters of words that often co-occur in documents. Second, once we have inferred a collection of topics, we can use probabilistic inference to analyze a new document in terms of these topics, that is, we can answer the question: What is the likeliest topic distribution to have generated this document? We can even use probabilistic inference to guess the words in the second half of a document, given its first half; in this case, probabilistic inference goes from observations to latent variables, and from there to additional possible observations~\citep{Wallach:Thesis}.
\paragraph{Variants of Latent Dirichlet Allocation.} In computational linguistics, topic models have been highly fruitful as a template for probabilistic generative models for a wide variety of phenomena. For example, \citet{Seaghdha:2010vi} describes the selectional constraint that a verb imposes on its argument position as a distribution over latent semantic classes, where a semantic class is a distribution over role filler nouns. In this formulation the semantic classes are like topics, and a selectional constraint is a mixture of semantic classes much like a document is a mixture of topics. \citet{dinu2010} describe the meaning of a word as a mixture of context classes, where each context class is a distribution over context words. \citet{Ferraro:2016} describe a script in the sense of \citet{schank_abelson77} -- a frequent, typical event chain -- as a mixture of event classes, where each event class is a distribution over events and their semantic roles.
\section{Analyzing some utterances}
\label{sec:illustration}
In this section, we perform ``zoom in'' analyses on a number of individual sentences to see how the model handles them. We use manually constructed concepts and scenarios and manually set probabilities throughout. The sentences are:
\ex.\label{ex:illustrations} \a. \label{ex:simple} A bat was sleeping.
\b. \label{ex:ambiguous} A player was holding a bat.
\c. \label{ex:leave_illu} A woman left the room / the house / the country / the job / her friend.
\d. \label{ex:situation} A vampire was eating.
\e. \label{ex:astronomer_illu} An astronomer married a star.
\subsection{A simple example with selectional constraints}
We start with sentence \ref{ex:simple} to demonstrate the influence of selectional constraints. We ``turn off'' scenario influences for now, which we can do by assuming a single scenario that has an equal probability of generating any of the following concepts:
\begin{quote}
\textsc{armadillo}, \textsc{bat(animal)}, \textsc{cat}, \textsc{dodo}, \textsc{bat(stick)}, \textsc{stone}, \textsc{sleep}, \textsc{push}.
\end{quote}
Say that \textsc{sleep} has one role it can generate, $roles(\text{\textsc{sleep}}) = \{sleep\_Theme\}$. We formulate this role as specific to the concept \textsc{sleep} so we do not have to engage with the question of the granularity of semantic roles. We set $\rho_{\text{\textsc{sleep}}, sleep\_Theme} = 1$, that is, the $Theme$ role is always realized for \textsc{sleep}. Next we need to define $\chi_{sleep\_Theme}$, the selectional preference of the role. We set
\[
\chi_{sleep\_Theme}(c) = \left\{\begin{array}{ll}
0.25 & \text{for }c \in \{\text{\textsc{armadillo}}, \text{\textsc{bat(animal)}}, \text{\textsc{cat}}, \text{\textsc{dodo}}\}\\
0 & \text{else}
\end{array}\right.
\]
This says that armadillos, bats that are animals, cats and dodos can sleep, but sticks, stones and sleeping events cannot. Finally, for simplicity, we assume that each concept just generates a single entailment matching the concept's name (but without the disambiguation to \textit{animal} or \textit{stick} for \textit{bat}). That is, $\pi_{\text{\textsc{armadillo}}, armadillo} = 1$, $\pi_{\text{\textsc{bat(animal)}}, bat} = 1$, $\pi_{\text{\textsc{bat(stick)}}, bat} = 1$, and so on.
We use the following simple eDRS for sentence \ref{ex:simple}:
\begin{quote}
\drs{e, x}{bat(x), sleep(e), Theme(e, x)}
\end{quote}
We walk through the generative story top-down, assuming a \emph{rejection sampling} regime: If at any point, we have sampled a conceptual graph that cannot possibly generate the given eDRS, we abandon the sample and start over.
\textit{Drawing a distribution over scenarios:} This is trivial because we only have a single scenario. \textit{Drawing the size of the situation description:} We sample a number $n$ of predicate-argument trees. If we sample $n>1$ the system will imagine additional entities and events for the situation. We do not demonstrate that here, and assume we sample $n=1$.
\textit{Drawing the root concept of the tree:} If we sample anything other than \textsc{sleep}, the sample will fail because it would not be able to generate the eDRS. So say we sample \textsc{sleep}. \textit{Drawing roles and role fillers:} The set $roles({\text{\textsc{sleep}}})$ only contains $sleep\_Theme$, which is realized with a probability of 1. To draw the filler, we first draw a scenario, which in this case does not add anything, so the product of experts is the same as $\chi_{sleep\_Theme}$. The only filler concepts with nonzero probability are \textsc{armadillo}, \textsc{bat(animal)}, \textsc{cat}, \textsc{dodo}. If we draw any of these concepts other than \textsc{bat(animal)}, the sample will fail because it would not be able to emit the predicate symbol $bat$.
So for $n=1$, the only possible conceptual graph in this case consists of a \textsc{sleep} concept, its $sleep\_Theme$ role and a filler that is a \textsc{bat(animal)}, which disambiguates the meaning of the word \textit{bat}.
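This hand computation can be replicated mechanically. The following condensed sketch in WebPPL, the probabilistic programming language we describe in more detail below, implements just the Theme draw and the rejection step; the helper \texttt{predOf}, which maps each concept to the single predicate it emits, is ours:
\begin{lstlisting}
var fillers = {ps: [0.25, 0.25, 0.25, 0.25],
               vs: ["armadillo", "bat(animal)", "cat", "dodo"]};
var predOf = function(c) {
  return (c == "bat(animal)" || c == "bat(stick)") ? "bat" : c;
};
var model = function() {
  var theme = categorical(fillers);   // filler of sleep_Theme
  condition(predOf(theme) == "bat");  // the eDRS contains bat(x)
  return theme;
};
Infer({method: 'rejection', samples: 100, model: model});
// every surviving sample has theme == "bat(animal)"
\end{lstlisting}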
\subsection{More entailments}
\label{subsec:entailments}
Theories of concepts in psychology typically assume that concepts contain rich information about members, which are often represented as features associated with the concept. For example, \citet{McRaeEtAlNorms:05} elicited definitional features of concepts from participants, including these features for bats that are animals:
\begin{quote}
animal, flies, has\_fangs, is\_black, is\_scary
\end{quote}
In our framework, we could use such features as entailments that a concept can generate as DRS predicates. In the previous section, we have associated each concept $c$ with a set $preds(c)$ of predicates it can generate -- these would be the features. We have said that a concept $c$ has probabilities $\pi_{c, Q}$ of generating predicate $Q$ as positive rather than negative. In the simplest case, we could imagine that the cognizer keeps track of the percentage of bats they observe that fly, that are black, and so on, and uses those as feature probabilities. \citet{Herbelot:Vecchi:2016} collected judgments from human participants that yielded such probabilities for the \citet{McRaeEtAlNorms:05} features, for bats, for example, $p=1.0$ for flying and $p=0.75$ for being black. Using these probabilities, we can sample conditions for the discourse referent $x$ from the eDRS above, for example generating
\[bat(x), animal(x), flies(x), has\_fangs(x), \neg is\_black(x), \neg is\_scary(x)\]
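Operationally, sampling these conditions amounts to one weighted coin flip per feature. A sketch in WebPPL (the probabilities for \textit{flies} and \textit{is\_black} are the ones quoted above; the remaining numbers are invented for illustration):
\begin{lstlisting}
// true stands for the condition Q(x), false for its negation
var batConditions = function() {
  return {animal:    flip(1.0),
          flies:     flip(1.0),
          is_black:  flip(0.75),
          has_fangs: flip(0.6),   // invented
          is_scary:  flip(0.5)};  // invented
};
\end{lstlisting}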
\subsection{An ambiguous example}
We next turn to sentence \ref{ex:ambiguous}, which is ambiguous between the \textit{animal} and the \textit{stick} reading of \textit{bat}. Its eDRS is
\begin{quote}
\drs{e, x, y}{player(x), hold(e), bat(y), Agent(e, x), Theme(e, y)}
\end{quote}
We will assume a small collection of concepts, as follows:
\begin{quote}
\textsc{hold}, \textsc{player}, \textsc{bat(stick)}, \textsc{ball}, \textsc{stone}, \textsc{bat(animal)}, \textsc{vampire}, \textsc{cat}, \textsc{candle}.
\end{quote}
The concept \textsc{hold} has two roles, hold$\_$Agent and hold$\_$Theme (which generate the predicates Agent and Theme, respectively), both mandatory, so $\rho_{\text{\textsc{hold}, hold$\_$Agent}} = \rho_{\text{\textsc{hold}, hold$\_$Theme}} = 1$. We keep the selectional preferences simple: any animate entity can hold something, and any concrete object can be held. That is, we set
\[\chi_{\text{hold$\_$Agent}}(c) = \left\{\begin{array}{ll}
0.25 & \text{for } c \in\{ \text{\textsc{player}, \textsc{bat(animal)}, \textsc{vampire},\textsc{cat}}\}\\
0 & \text{else}
\end{array}\right.
\]
and
\[
\chi_{\text{hold$\_$Theme}}(c) = \left \{ \begin{array}{ll}
0 & \text{for } c = \textsc{hold}\\
0.125 & \text{else}\\
\end{array}\right.
\]
We again assume that each concept generates a single entailment matching the name of the concept.
If selectional preference is the only constraint on the meaning of the word \textit{bat}, both senses of \textit{bat} come out as equally likely. This can be seen in the same way as in the first example, by walking through the generative story ``top to bottom'' with rejection sampling.
For the root concept of the predicate-argument tree, we cannot sample any concept other than \textsc{hold}, or it would not generate the predicate $hold$, and likewise \textsc{player} has to be sampled for the Agent. For the Theme, \textsc{bat(animal)} and \textsc{bat(stick)} are the only concepts that could be sampled that would be able to generate \textit{bat}, but both are equally likely fillers of the role.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.5]{figs/dirichlet.png}
\caption{Dirichlet distribution over two scenarios: Probabilities of different probability values for scenario 1. Black: concentration parameter $\alpha = 0.5$, blue: $\alpha = 0.1$.}
\label{fig:dirichlet}
\end{figure}
But in sentence \ref{ex:ambiguous} there is a clear preference for the \textit{stick} sense of the word. In our framework, we describe this preference as stemming from the overall scenario of the sentence, as we show next. We extend our example to have two scenarios, \textsc{baseball} and \textsc{gothic}. The \textsc{baseball} scenario can generate the concepts \textsc{hold}, \textsc{player}, \textsc{bat(stick)}, \textsc{ball}, \textsc{stone}, all with equal probability, while the \textsc{gothic} scenario generates \textsc{hold}, \textsc{bat(animal)}, \textsc{vampire}, \textsc{cat}, and \textsc{candle} with equal probability. Then the generative story proceeds as follows. \textit{Drawing a distribution over scenarios:} We draw a distribution over the two scenarios from a Dirichlet distribution (which is a distribution over probability distributions). Figure~\ref{fig:dirichlet} shows the probabilities of different probability values for the first scenario, in black for Dirichlet concentration parameter $\alpha = 0.5$, and in blue for $\alpha = 0.1$. As the figure shows, it is much more likely, with concentration parameter values smaller than one, to draw a distribution like $\langle 0.8, 0.2\rangle$ or $\langle 0.2, 0.8\rangle$ than $\langle 0.5, 0.5\rangle$, and the lower the concentration parameter, the likelier we are to draw a distribution that strongly prefers one scenario over the other. We will consider two cases, $\langle 0.8, 0.2\rangle$ and $\langle 0.2, 0.8\rangle$; these are not the only possibilities, but they exemplify typical options. We start with $\langle 0.8, 0.2\rangle$ with a higher probability for \textsc{baseball}.
\textit{Drawing the size of the situation description:} This is as before.
\textit{Drawing the root concept of the subgraph:} We sample a scenario, either \textsc{baseball} or \textsc{gothic}, it does not matter, as they can both generate \textsc{hold}. If we sample anything other than \textsc{hold}, the sample will fail because it would not be able to generate the eDRS. \textit{Drawing roles and role fillers.} The Agent role is realized with a probability of 1. To draw the filler, we first draw a scenario. If we draw \textsc{baseball}, we will be able to draw \textsc{player} from it and succeed. If we draw the scenario \textsc{gothic}, then the sample will fail because \textsc{gothic} cannot generate any concept that could generate the predicate $player$. Since \textsc{baseball} is more likely than \textsc{gothic} in our scenario distribution, we have a reasonable chance of successfully sampling \textsc{player}. The Theme role is also realized with a probability of 1. To draw the filler, we first draw the scenario, then a concept. If we draw \textsc{baseball}, then the only way for the sample to succeed is if we draw the concept \textsc{bat(stick)}, which has a nonzero probability under both the scenario and the selectional preference. Conversely, if we draw the scenario \textsc{gothic}, the only way for the sample to succeed is if we draw \textsc{bat(animal)}. We are more likely to draw the scenario \textsc{baseball} over \textsc{gothic} for the filler because it is more likely under the scenario distribution.
We now turn to the case of the scenario distribution $\langle 0.2, 0.8\rangle$, with a higher probability for \textsc{gothic}. When drawing the filler for the Agent, we are now more likely to sample \textsc{gothic} than \textsc{baseball}, and thus more likely to see the whole sample fail because \textsc{gothic} cannot sample \textsc{player}. When drawing the filler for the Theme, we are again more likely to sample \textsc{gothic} than \textsc{baseball}, and hence more likely to obtain \textsc{bat(animal)} than \textsc{bat(stick)}. But because the Agent has to be sampled from \textsc{baseball}, more samples with this scenario distribution will fail.
\begin{table}[tb]
\centering
\begin{tabular}{l|ll}
setting& p(stick) & p(animal)\\\hline
one scenario & 0.501 & 0.499\\
two scenarios, $\alpha = 0.5$ & 0.752 & 0.248\\
two scenarios, $\alpha = 0.1$ & 0.926 & 0.074\\
\end{tabular}
\caption{Empirical probabilities for the ``stick'' and ``animal'' senses of \textit{bat} in ``A player was holding a bat'', with either one or two scenarios (WebPPL simulation)}
\label{tab:player_bat}
\end{table}
With sentence \ref{ex:simple} we were able to work out the resulting probabilities of SDs by hand. The interacting constraints in sentence \ref{ex:ambiguous} make the computation more complex, so we implement the generative story in WebPPL~\citep{webppl}, a probabilistic programming language whose statements can have probabilistic outcomes. For example, it is possible to state
\begin{lstlisting}
var x = categorical({ps: [0.2, 0.8],
                     vs: ["baseball", "gothic"]})
\end{lstlisting}
to store in the variable \texttt{x} the outcome of a draw from a categorical distribution with probabilities (``ps'') 0.2 and 0.8 for the outcomes (``vs'') ``baseball'' and ``gothic'', respectively. Because we can program random draws, we can straightforwardly program the generative story, ``top down.'' To sample multiple outcomes from the generative story, we use a command of the type
\begin{lstlisting}
var dist = Infer({method: 'rejection', samples: 2000,
                  model: probfunction})
\end{lstlisting}
This call takes as one of its arguments a probabilistic function \verb+probfunction+, which it executes repeatedly under rejection sampling until 2000 samples have finished successfully, storing the resulting empirical probabilities of the samples in the variable \texttt{dist}. We use this number, 2000 sample SDs, in our experiments here in order to see each outcome a sufficient number of times to get stable probability estimates. The third core ingredient in WebPPL is a command of the type
\begin{lstlisting}
condition(sampled_predicate == utterance_predicate)
\end{lstlisting}
which rejects the sample that is currently in progress if the DRS predicate we are sampling in the top-down process is not the observed predicate in the utterance.
Table~\ref{tab:player_bat} shows the results of a WebPPL simulation for sentence \ref{ex:ambiguous} with 2000 generated samples. With one scenario, only the selectional preference influences the meaning of \textit{bat}, with both senses being equally likely. Empirical probabilities do not necessarily exactly match the theoretical ones: We get a probability of slightly over 0.5 for ``stick'' and slightly below 0.5 for ``animal''. With two scenarios, the word \textit{player} can exert an influence on the sense of \textit{bat}, and we get a higher probability for the ``stick'' sense than the ``animal'' sense. The lower the concentration parameter $\alpha$, the more pronounced the preference for the ``stick'' sense.
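For concreteness, the following is a condensed sketch of our WebPPL model for sentence \ref{ex:ambiguous}. The concept and scenario names follow the exposition above; the helper functions (\texttt{drawScenario}, \texttt{drawConcept}) are our own shorthand, the two-scenario Dirichlet is drawn as an equivalent Beta, and the draw of the situation description size is omitted.
\begin{lstlisting}
var model = function() {
  // a Dirichlet over two scenarios is equivalent to a Beta draw
  var pBaseball = sample(Beta({a: 0.5, b: 0.5}))
  var drawScenario = function() {
    return flip(pBaseball) ? "baseball" : "gothic"
  }
  // each scenario generates its concepts with equal probability
  var concepts = {
    baseball: ["hold", "player", "bat_stick", "ball", "stone"],
    gothic:   ["hold", "bat_animal", "vampire", "cat", "candle"]}
  var drawConcept = function() {
    return uniformDraw(concepts[drawScenario()])
  }
  // root concept must generate the observed predicate "hold"
  condition(drawConcept() == "hold")
  // Agent filler must generate the predicate "player"
  condition(drawConcept() == "player")
  // Theme filler: both senses of "bat" generate the predicate "bat"
  var theme = drawConcept()
  condition(theme == "bat_stick" || theme == "bat_animal")
  return theme
}
var dist = Infer({method: 'rejection', samples: 2000, model: model})
\end{lstlisting}
Running this sketch should yield sense probabilities close to the second row of Table~\ref{tab:player_bat}; the exact numbers vary with sampling.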
\subsection{Fine-grained meaning distinctions}
\begin{table}[tb]
\centering
\begin{footnotesize}
\begin{tabular}{l|llll}
& \textsc{leave1} & \textsc{leave2} &\textsc{leave5} & \textsc{leave8}\\\hline
\textsc{woman}, \textsc{friend} & & 0.41 & &0.05\\
\textsc{room}, \textsc{lobby} & 0.07 & & 0.36 & \\
\textsc{house}, \textsc{building} & 0.23&& 0.14 & \\
\textsc{country}, \textsc{region} & 0.2&&&\\
\textsc{job}, \textsc{task} && 0.09 &&0.15\\
\textsc{team}, \textsc{school} & &&&0.3\\
\end{tabular}
\end{footnotesize}
\caption{Settings of selectional preference probabilities for Theme roles of \textit{leave} for the experiment in Table~\ref{tab:leave}. In the first row, \textsc{woman} and \textsc{friend} each get a probability of 0.41 for \textsc{leave2}, and each get 0.05 for \textsc{leave8}, and analogously for the other rows. Empty cells are probability 0.}
\label{tab:leave_selpref}
\end{table}
We now turn to the group of sentences in \ref{ex:leave_illu}. They involve the verb \textit{leave}, which has much more nuanced meaning differences than the previous examples. Leaving the country can mean emigrating, while leaving the room or the house usually does not mean leaving for good. Leaving a job does not just entail leaving the place where the job is located, but also ceasing to associate with a group and no longer working there.
We assume that all of a cognizer's conceptual knowledge is involved in language understanding, including these entailments of \textit{leave},\footnote{There is an ongoing debate on whether lexical knowledge should be considered ``thin'', with a single, underspecified core, or ``rich'', with many detailed entailments~\citep{falkumPolysemyCurrentPerspectives2015,hogeweg_vicente_2020}. Our assumption that all of a cognizer's conceptual knowledge is involved in language understanding puts us on the ``rich'' side of the debate.} especially as these entailments are from frequent uses of the verb \textit{leave} and can thus be assumed to be memorized.\footnote{In this paper, we focus on context effects that select between stored meanings. Context effects also involve modulation beyond stored senses, but we are not modeling that yet. We think it will be possible to model modulation with an extension of SDSs that use a semantic space.}
But how should we assume these different nuances of \textit{leave} to be stored? It seems far-fetched to consider them to be separate concepts. Exemplar models of concepts~\citep{Nosofsky:1986} assume structure below the level of concepts, and so do multi-prototype models like that of \citet{Anderson:Rational}.\footnote{In fact, there seems to be no clear distinction between, on the one hand, exemplar models, which remember individual observations subject to memory decay, and, on the other hand, models that aggregate observations into groups below the concept level.} Anderson assumes that a cognizer stores a collection of exemplar clusters, where multiple clusters can go with the same concept label. In an Anderson-like framework, we could assume that the representation of \textit{leave} consists of multiple clusters, which do not have to be distinct; they can overlap substantially in selectional constraints as well as entailments.
We can then use these clusters in the same way that we have used concepts previously, probabilistically selecting a cluster for an occurrence of \textit{leave} in a sentence context. If two clusters have considerable overlap, they may both have a high probability for the same occurrence of \textit{leave}.
We now demonstrate the use of such clusters for the verb \textit{leave} in the sentences in \ref{ex:leave_illu}. For the sake of concreteness, we imagine a cognizer whose clusters for \textit{leave} happen to coincide exactly with the senses of \textit{leave} in the WordNet dictionary~\citep{Fellbaum:98}. This lets us obtain selectional preferences for \textit{leave\_Theme} for the different clusters from the WordNet-annotated SemCor corpus~\citep{SemCor}: We extract Theme headwords for the four senses of \textit{leave} in SemCor that are relevant to \ref{ex:leave_illu} (that is, all the senses that involve an entity departing without dying), and manually group the extracted headwords into semantic classes. We show below the four relevant WordNet senses, as well as all (sufficiently frequent) semantic classes of Theme headwords, with frequencies relative to counts in the SemCor corpus:\footnote{Theme headwords were extracted using a simple heuristic: We used the first noun or pronoun after an occurrence of \textit{leave} if it was separated from \textit{leave} only by determiners and adjectives. This procedure may have missed some headwords.}
\begin{description}
\item[Sense 1:] go forth, go away, as in: ``At what time does your train leave?''
house, home: 21/73; country, region: 18/73; room: 7/73
\item[Sense 2:] go and leave behind, either intentionally or by neglect or forgetfulness
person: 19/40; task: 4/40
\item[Sense 5:] move out of or depart from, as in: ``leave the room''
room: 5/11; house, home: 2/11
\item[Sense 8:] remove oneself from an association with or participation in
school, group, team: 6/10; task, job: 3/10; person: 1/10
\end{description}
We use these four WordNet senses as our clusters, and estimate the probability of each cluster for the verb \textit{leave} in the sentences in \ref{ex:leave_illu} through a WebPPL experiment with 2000 generated samples and with a Dirichlet parameter of $\alpha = 0.5$. Along with the four clusters of \textit{leave}, we use two concepts for each semantic class of filler words. This gives us the following list of clusters and concepts:
\begin{quote}
\textsc{leave1}, \textsc{leave2}, \textsc{leave5}, \textsc{leave8}, \textsc{woman}, \textsc{friend}, \textsc{room}, \textsc{lobby}, \textsc{house}, \textsc{building}, \textsc{country}, \textsc{region}, \textsc{job}, \textsc{task}, \textsc{team}, \textsc{school}
\end{quote}
\begin{table}[tb]
\centering
\begin{tabular}{p{5em}|llll}
The woman left the\ldots & \textsc{leave1} &\textsc{leave2}& \textsc{leave5} & \textsc{leave8}\\\hline
\ldots room & 0.145 & & 0.856 \\
\ldots house & 0.599 & & 0.401\\
\ldots country & 1.0\\
\ldots job & & 0.382 & & 0.618\\
\ldots friend & & 0.883 & & 0.117\\
\end{tabular}
\caption{Empirical probabilities for clusters of \textit{leave} in each of the sentences in \ref{ex:leave_illu}, WebPPL simulation}
\label{tab:leave}
\end{table}
We use four scenarios \textsc{s-leave1}, \textsc{s-leave2}, \textsc{s-leave5}, \textsc{s-leave8}, each of which generates ``its'' sense of \textit{leave} along with its filler concepts.\footnote{We do not model the fact that the senses differ in their frequency.} Each sense of \textit{leave} has mandatory Agent and Theme roles, where the Agent has a probability of $p=0.5$ each for \textsc{woman} and \textsc{friend}. We set the selectional constraints of the Themes based on the observed fillers in SemCor, as shown in Table~\ref{tab:leave_selpref}. Table~\ref{tab:leave} shows the results of the WebPPL simulation. We do indeed get multiple applicable clusters in most cases, which matches our intuition that they are not distinct.\footnote{Our current model can only assign a single sense (a single cluster) in each situation description; this is an OR over senses, when intuitively there should be an AND. We would want there to be multiple senses of \textit{leave} active in a single situation description. We hope to allow for an AND of senses in the future.}
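To make concrete how the numbers in Table~\ref{tab:leave_selpref} enter the simulation, here is a minimal WebPPL encoding of the Theme preferences for two of the four clusters; the variable names are our own shorthand.
\begin{lstlisting}
// Theme selectional preferences per cluster, taken from the
// selectional preference table, as categorical parameters
var themePref = {
  leave2: {ps: [0.41, 0.41, 0.09, 0.09],
           vs: ["woman", "friend", "job", "task"]},
  leave8: {ps: [0.05, 0.05, 0.15, 0.15, 0.3, 0.3],
           vs: ["woman", "friend", "job", "task",
                "team", "school"]}}
var drawTheme = function(cluster) {
  return categorical(themePref[cluster])
}
\end{lstlisting}
Conditioning on the observed Theme predicate, for instance $job$, and reading off the sampled cluster is what produces the rows of Table~\ref{tab:leave}.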
\subsection{Imagining a situation}
\begin{table}[tb]
\centering
\begin{tabular}{l|l|l}
& eat\_Theme & eat\_Location\\\hline
prob. of realization & 1.0 & 0.21\\\hline
vampire& 0.004 &0.004\\
bat (animal) & 0.004 & 0.004\\
blood orange & 0.337 & 0.003\\
steak & 0.319 & 0.003\\
salad & 0.338 & 0.003\\
castle & 0.0 & 0.1\\
beach & 0.0 & 0.094\\
\end{tabular}
\caption{Imagining a situation: Empirical probabilities of added participants in ``A vampire was eating''}
\label{tab:vampire_eating}
\end{table}
A situation description system probabilistically ``imagines'' the situation of a given utterance. This includes adding entailments about entities and events mentioned in the utterance, as illustrated above in \S\ref{subsec:entailments}. But an SDS can also add entities and events not mentioned in the utterance that are likely to be in the situation. We demonstrate this with sentence \ref{ex:situation}. For simplicity, we assume a single scenario, along with a small collection of concepts:
\begin{quote}
\textsc{eat}, \textsc{vampire}, \textsc{bat(animal)}, \textsc{blood\_orange}, \textsc{steak}, \textsc{salad}, \textsc{castle}, \textsc{beach}
\end{quote}
The only concept with roles is \textsc{eat}. We assume three roles: eat\_Agent and eat\_Theme are mandatory, but for eat\_Location we assume a lower probability of realization, $\rho_{\text{\textsc{eat}}, eat\_Location} = 0.2$. We define selectional preferences as follows: The eat\_Agent has to be a \textsc{vampire} or \textsc{bat(animal)}, with $p=0.5$ for each. The Theme is preferred to be edible:
\[\chi_{\text{eat$\_$Theme}}(c) = \left\{\begin{array}{ll}
0.33 & \text{for } c \in \{\text{\textsc{blood\_orange}, \textsc{steak}, \textsc{salad}}\}\\
0.005 & \text{for } c \in\{\text{\textsc{vampire}, \textsc{bat(animal)}}\}\\
0& \text{else}
\end{array}\right.\]
The Location has a strong preference for beaches and castles, but it is also possible, though dispreferred, to eat near, say, a salad:
\[\chi_{\text{eat$\_$Location}}(c) = \left\{\begin{array}{ll}
0.45 & \text{for } c \in\{\text{\textsc{castle}, \textsc{beach}}\}\\
0.02 & \text{for } c \in\{\text{\textsc{vampire}, \textsc{bat(animal)}, \textsc{blood\_orange},}\\
& \text{\textsc{steak}, \textsc{salad}}\}\\
0 & \text{else}
\end{array}\right.\]
We again assume that each concept generates a single predicate matching the name of the concept.
Because eat\_Theme is mandatory, the SDS will always probabilistically ``imagine'' some object that the vampire is eating. It also sometimes adds a location. Empirical probabilities, again estimated from 2000 samples, are shown in Table~\ref{tab:vampire_eating}. The probabilities of role fillers are relative to the realization probability of the role, so \textsc{castle}, at a probability of 0.1 of appearing as an eat\_Location, has a probability of almost 0.5 of being chosen given that there is a location. As an example of a generated SD, we have a probability of 0.031 of sampling a vampire eating a blood orange at a beach.
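A condensed WebPPL sketch of this model is given below. Since there is only a single scenario, the scenario draw is skipped, and we show only the Theme and Location choices; the Agent draw and the conditioning on the utterance are as before, and the variable names are our own shorthand.
\begin{lstlisting}
var model = function() {
  // mandatory Theme: some eaten object is always imagined
  var theme = categorical({
    ps: [0.33, 0.33, 0.33, 0.005, 0.005],
    vs: ["blood_orange", "steak", "salad",
         "vampire", "bat_animal"]})
  // optional Location: realized with probability 0.2
  var location = flip(0.2) ?
    categorical({
      ps: [0.45, 0.45, 0.02, 0.02, 0.02, 0.02, 0.02],
      vs: ["castle", "beach", "vampire", "bat_animal",
           "blood_orange", "steak", "salad"]})
    : "none"
  return {theme: theme, location: location}
}
var dist = Infer({method: 'rejection', samples: 2000,
                  model: model})
\end{lstlisting}
For instance, \textsc{castle} is imagined with probability $0.2 \cdot 0.45 = 0.09$, in line with the empirical value in Table~\ref{tab:vampire_eating}.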
\subsection{Conflicting constraints}
\begin{figure}[tb]
\centering
\begin{tabular}{c@{\hspace{4em}}c}
\includegraphics[scale=0.4]{figs/astronomer_nof.pdf}
&
\includegraphics[scale=0.4]{figs/astronomer2_nof.pdf}
\end{tabular}
\caption{Conflicting constraints in the sentence ``The astronomer married the star'': Either the concept for \textit{star} conflicts with the selectional constraint (left), or it conflicts with the preference for a coherent scenario (right)}
\label{fig:astronomer_pic}
\end{figure}
\begin{table}[tb]
\centering
\begin{tabular}{l|l|l}
$\alpha$ &\textsc{star(person)}& \textsc{star(sun)}\\\hline
0.5& 0.800 & 0.200\\
0.1& 0.529 & 0.471\\
\end{tabular}
\caption{Conflicting constraints: Empirical probabilities for either a ``person'' or a ``sun'' interpretation of \textit{star} in ``The astronomer married the star'', for different settings of the Dirichlet concentration parameter $\alpha$}
\label{tab:astronomer}
\end{table}
We finally turn to the pun example from the beginning, repeated above as \ref{ex:astronomer_illu}. The point of this example is that selectional preference and scenario constraints can conflict and ``pull in different directions,'' as illustrated in Figure~\ref{fig:astronomer_pic}.
We have two scenarios. The scenario \textsc{stargazing} generates the concepts \textsc{astronomer}, \textsc{star(sun)}, and \textsc{marry}, while the scenario \textsc{stage} generates the concepts \textsc{star(person)} and \textsc{marry}. (For simplicity, we have added \textsc{marry} to both scenarios instead of adding a third scenario.) The concept \textsc{marry} has mandatory Agent and Theme roles, both with a strong preference for human role fillers: We set $\chi_{\text{marry$\_$Theme}}(c) = 0.475$ for a concept $c=$\textsc{astronomer} or $c=$\textsc{star(person)} and $\chi_{\text{marry$\_$Theme}}(c)=0.05$ for $c=$\textsc{star(sun)}. We again assume that each concept generates a single predicate matching its name, where \textsc{star(person)} and \textsc{star(sun)} both generate $star$. Table~\ref{tab:astronomer} shows empirical probabilities for \textit{star} being interpreted as either a person or a sun, again estimated from 2000 samples, for two different values of the concentration parameter $\alpha$. Both $\alpha$ values generate a pun effect, and the more emphasis there is on a coherent scenario (the lower the value of $\alpha$), the more probability mass is given to the situation where an astronomer marries a giant ball of plasma.
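For completeness, here is a condensed WebPPL sketch of this setup. As before, the helper names are our own shorthand; folding the selectional weight $\chi$ in through a weighted acceptance step (\texttt{condition(flip(chi))}) is one simple way of combining the scenario constraint with the selectional preference, and is an approximation of our own rather than part of the definition.
\begin{lstlisting}
var model = function() {
  var pStargazing = sample(Beta({a: 0.5, b: 0.5}))
  var drawScenario = function() {
    return flip(pStargazing) ? "stargazing" : "stage"
  }
  var concepts = {
    stargazing: ["astronomer", "star_sun", "marry"],
    stage:      ["star_person", "marry"]}
  var drawConcept = function() {
    return uniformDraw(concepts[drawScenario()])
  }
  condition(drawConcept() == "marry")       // root concept
  condition(drawConcept() == "astronomer")  // Agent filler
  var theme = drawConcept()                 // Theme filler
  condition(theme == "star_sun" || theme == "star_person")
  // weight by the selectional preference of marry_Theme
  var chi = (theme == "star_person") ? 0.475 : 0.05
  condition(flip(chi))
  return theme
}
\end{lstlisting}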
\subsection{The probability space}
\label{sec:infinity}
In Situation Description Systems, the representation of an utterance is a distribution over situation descriptions, where a situation description is a combination of a logical form and a conceptual graph. Before we cast this idea in a definition, we first need to make sure that we have a well-defined, cognitively plausible probability distribution. \citet{Cooper:2015vj} argue that probability distributions over worlds (which are used in \citet{VanBenthem:2009te,vanEijckLappin,Zeevat:2013wc, Lassiter:adjectives} and \citet{Erk:alligators}) are not cognitively plausible, and that neither are probability distributions over situations (as used by \citet{Emerson} and \citet{Bernardy:18}). We agree -- but, as we will argue, situation descriptions avoid the problems of world distributions and situation distributions.
A world is an unimaginably gigantic object. This is the reason why \citet{Cooper:2015vj} say it is unrealistic to assume that a cognizer could represent a whole world in their mind, let alone a distribution over worlds. A world is a maximal set of consistent propositions~\citep{Carnap}, and no matter the language in which the propositions are expressed, we cannot assume that a cognizer would be able to enumerate them.
But the cognitive plausibility on which Cooper et al.\ focus is not the only problem. Another problem is that we do not know enough about a world as a mathematical object. \citet{Rescher:1999dq} argues that objects in the real world have an infinite number of properties, either actual or dispositional. This seems to imply that worlds can only be represented over an infinite-dimensional probability space. When defining a probability measure, it is highly desirable to use a finite-dimensional probability space -- but it is not clear whether that is possible with worlds.
Maybe a world can be `compressed' into a finite-dimensional vector, but we simply do not know enough about worlds to say for certain.
Situations, or partial worlds, may be smaller in size, but they still present similar problems, both in terms of cognitive plausibility and because they are underdefined. \citet{Cooper:2015vj} discuss the plausibility angle, so we concentrate on underdefinedness here. How large is, say, a situation where \emph{Zoe is playing a sonata}? Both \citet{Emerson} and \citet{Bernardy:18} assume, when defining a probability distribution over situations, that there is a given utterance (or set of utterances) and that the only entities and properties present in the situation are the ones that are explicitly mentioned in the utterance(s). But arguably, a sonata-playing situation should contain an entity filling some instrument role, even if it is not explicitly mentioned.
Going one step further, \citet{Clark:Bridging} discusses inferences that are ``an integral part of the message'', including bridging references such as ``I walked into the room. \textit{The windows} looked out to the bay.'' This raises the question of whether any situation containing a room would need to contain all the entities that are available for bridging references, including windows and even possibly a chandelier. (Note that there is little agreement on which entities should count as available for bridging references: see \citealp{PoesioVieira:Bridging}.)
The point is that there does not seem to be a fixed size that can be assumed for the situation where \emph{Zoe is playing a sonata}.\footnote{\citet{GoodmanLassiter} offer what looks like a way out of the problem of worlds. They assume that interpretation is always relative to a question under discussion, and that a world is generated only to the extent that it is relevant to the question under discussion. However, this still presumes enough knowledge about worlds to define a probability measure over them in the first place -- but more importantly, questions under discussion do not have a fixed size any more than situations do.}
Our solution is to use a probability distribution over situation descriptions, which are objects in the mind of the listener rather than in some actual state-of-affairs. As human minds are finite in size, we can assume that each situation description only comprises a finite number of individuals, with a finite number of possible properties -- this addresses the problem that worlds are too huge to imagine. But we also assume that the size of situation descriptions is itself probabilistic rather than fixed, and may be learned by the listener through both situated experience and language exposure. Doing so, we remain agnostic about what might be pertinent for describing a particular situation.
\section{Introduction}
\label{sec:introduction}
Word meaning is flexible. This flexibility is often characterised by distinguishing the `context-independent' meaning of a lexical item (its definition(s) in a dictionary) and its `speech act' or `token' meaning -- the one it acquires by virtue of being used in the context of a particular sentence \citep{Grice1968}. The generation of a token meaning goes well beyond word sense disambiguation and typically involves speakers' knowledge of the world as well as their linguistic knowledge. For instance, \citet[pp.222-223]{Searle1980} reminds us that \textit{to cut grass} and \textit{to cut a cake} evoke different tools in the mind of the comprehender (a lawnmower vs a knife).
The question of context dependence is associated with long-standing debates in both linguistics and philosophy, with theoretical positions ranging from semantic minimalism to radical contextualism. Our goal in this paper is not to take a side in those debates, but rather to give an integrated account of the many different ways context interacts with lexical meaning. In particular, we will set up formal tools to talk about the dependencies that exist between the lexicon and the various layers involved in utterance interpretation, from logical effects to situational knowledge.
Let us first consider what contextual influences might play a role in shifting the meaning of a word. The first effect that comes to mind might be \textit{local context}. Specific combinations of predicates and arguments activate given senses of the lexical items involved in the composition. This is known as `selectional preference' and can be demonstrated with the following example:
\ex. \label{ex:blade} She drew a blade.
In this case, where words in both the predicate and the argument positions have multiple senses, the sentence can mean that the agent sketched either a weapon or a piece of grass, or that she randomly sampled either a weapon or a piece of grass, or that she pulled out a weapon. It is much less likely that she pulled out a piece of grass. In this example, both the predicate and argument are ambiguous, and they seem to disambiguate each other, which then makes the ``pull out a piece of grass'' reading unavailable. This makes disambiguation hard: We have no ``solid ground'' to stand on; all words are potentially ambiguous.
But word meaning is not only influenced by semantic-role neighbors. \textit{Global context} is involved. \ref{ex:ray} is a contrast pair adapted from an example by Ray Mooney (p.c.), with different senses of the word \textit{ball} (sports equipment vs dancing event). Arguably, the sense of the predicate \textit{run} is the same in \ref{ex:ray_a} and \ref{ex:ray_b}, so the difference in the senses of \textit{ball} must come from something other than the syntactic neighbors, some global topical context brought about by the presence of \textit{athlete} in the first sentence, and \textit{violinist} in the second.
\ex. \label{ex:ray} \a. \label{ex:ray_a} The athlete ran to the ball.
\b. \label{ex:ray_b} The violinist ran to the ball.
There is even a whole genre of jokes resting on a \textit{competition of local and global topical constraints} on meaning: the pun. Sentence \ref{ex:astronomer} shows an example.
\ex. \label{ex:astronomer} The astronomer married the star.
This pun rests on two senses of the word \textit{star}, which can be paraphrased as `well-known person' and `sun'. It is interesting that this sentence should even work as a pun: The predicate that applies to \textit{star}, \textit{marry}, clearly selects for a person as its theme. So if the influence of local context were to apply strictly before global context, \textit{marry} should immediately disambiguate \textit{star} towards the `person' sense as soon as they combine. But the `sun' sense is clearly present.\footnote{In fact, our own intuitions about sentence \ref{ex:astronomer} vary. One of us prominently perceives the reading where the astronomer weds a gigantic ball of fire; for the other one of us, the sentence oscillates between the two different senses of \textit{star}.} In other words, local context and global topical context seem to be competing.
If lexical meaning cannot easily be pinned down to sense disambiguation, and if it is indeed dependent on the interaction of a number of constraints that may go beyond the lexicon, a model of meaning in context should answer at least two core questions: Is it possible to predict not one, but all the interpretations a set of speakers might attribute to a word or phrase? How exactly does the interaction of various constraints take place in the shift from context-independent to token meaning? This paper takes on the task of formalising a semantic framework which accounts for the wide flexibility of word meaning and the range of interpretations it can take in a given sentence. We frame the question as modelling the comprehension process undergone by the hearer of an utterance: we ask what kind of `meaning hypotheses' are invoked by a listener when presented with a given utterance, in particular, what those hypotheses are made of, and how they relate to sentence constituents.
Our solution is a model which we will refer to as \textit{situation description system.} Following previous accounts of understanding (e.g. \citealp{Fillmore:Usemantics}), our account assumes that an interpreter actively `envisions' the entire situation evoked by a sentence. Our claim is that envisioning naturally implements lexical constraints, and that modeling the imagination process required to arrive at an interpretation of the utterance automatically provides contextualised meaning representations.
Our formalization draws on two inspirations, one from cognitive semantics and one from computer science. The first one draws an explicit connection between the lexicon and cognitive processes, as stated by e.g. \citet{Murphy:02}: ``word meanings are made up of pieces of conceptual structure'' (p391). We will draw on this insight, also supported by some theories of lexical semantics \cite{Geeraerts:lexsem}, and assume that `meaning hypotheses' are made of conceptual material.
The second inspiration we draw on is the idea of probabilistic graphical models. Such models describe complex joint probability distributions as a graph where edges indicate dependencies, or probabilistic constraints. They are designed to handle situations with multiple sources of uncertainty that mutually constrain each other, and are therefore well suited to represent a distribution over outcomes rather than a single label. In the context of this paper, we propose to use them to generate different competing interpretations for a word in context, and thus represent our degree of uncertainty over several outcomes.
This will allow us to model word meaning as a distribution over hypotheses, constrained by the interplay between pieces of conceptual structures and sentence representation.
The focus of the present paper is on formalization. We will illustrate the behaviour of our system with simple utterances containing verbs and object nouns, inspecting whether our system implementation outputs results that match our intuition. In particular, we want to observe that multiple senses can be activated during comprehension, for instance when processing a pun. For more commonplace sentences such as \textit{a player was carrying a bat}, we would like more of the probability mass going to the sport equipment interpretation of \textit{bat}, while reserving some readings for the unlikely case where the player carries an actual animal. Given our original setting, we then hope to expand the system in further work, including more complex sentences and longer stretches of discourse. Ultimately, we envisage that the full system could be evaluated with respect to its ability to simulate human behaviour on tasks like sense annotation or lexical substitution.
In what follows, we first give a brief introduction to probabilistic graphical models and highlight how they can be used in linguistic settings (\S\ref{sec:generative}). We then proceed with a general formalization of our situation description system (\S\ref{sec:situationdescriptions}), and define the specific constraints that can be implemented within the framework to account for lexical meaning (\S\ref{sec:constraints}). Finally, \S\ref{sec:illustration} shows an implementation of the framework, applied to various phenomena: selectional constraints, entailment, ambiguity, and fine-grained meaning distinctions. We finish that section with an illustration of the system's behaviour when required to `envision' the situation associated with a simple utterance, and when dealing with conflicting constraints, as in the pun example above.
\section{Situation description systems for word meaning in context}
\label{sec:situationdescriptionsystems}
\label{sec:situationdescriptions}
\begin{figure}[tb]
\centering
\includegraphics[scale=0.4]{figs/astronomergraphical.pdf}
\caption{Graphical model for the sentence \textit{The astronomer married the star}: Influence from selectional constraints (blue solid lines), influence from scenario (black dotted lines).}
\label{fig:astronomer_graphical}
\end{figure}
In this section, we formally define Situation Description Systems. Before we do that, we look at a sketch of what we want the definition to do. At their heart, Situation Description Systems have a probabilistic graphical model, like the one in Figure~\ref{fig:astronomer_graphical} for the sentence \textit{The astronomer married the star}. We are indicating the meanings of words in the sentence through concepts that are linked to the words. These concepts are nodes (random variables) in the probabilistic graphical model. So for example the meaning of \textit{star} is characterized by a random variable for the concept underlying \textit{star}. There may be multiple concepts that could possibly underlie each of the words \textit{astronomer}, \textit{married}, and \textit{star}. The choice of concept is influenced by contextual constraints, shown as arrows in the graph. The blue solid arrows indicate selectional constraints. Even though the arrows are directed, they constrain both the predicate and the argument: If we know the concept for the predicate, we know what arguments go well with it. If we know the concept underlying the argument, we know what predicate concepts prefer to have this concept as an argument. The black dotted arrows in the graph indicate scenario constraints: Each concept depends on a scenario, which again depends on a mixture of scenarios for the whole sentence. We could form a graphical model with only the blue arrows, using only selectional constraints, or a graphical model with only the black arrows, using only scenario constraints. But we use the graphical model with both the black and the blue arrows, so that we can study the interaction of scenario constraints and selectional constraints.
What is missing in Figure~\ref{fig:astronomer_graphical} is a link from these conceptual structures to a meaning representation for the sentence as a whole. We want to link the concepts to the logical form of the sentence, and show that they enable additional entailments. To do this, we link the concepts in the graphical model to discourse referents in a Discourse Representation Structure, DRS for short~\citep{KampReyle}. We formalize this through a generative model in which the observations are logical forms, and the latent variables are scenarios and concepts.
As is typical with probabilistic generative models, we perform inferences in the opposite direction from the ``generative story,'' reasoning from the observations to the values of the underlying latent variables. This gives us a process for utterance understanding: The listener observes an utterance in the shape of a DRS, with discourse referents that stand for entities and events and whose properties are described through predicates. The listener reasons ``back'' from the DRS to the latent variables that could have generated the descriptions of the discourse referents. From there, they reason ``forward,'' and use the latent variables to generate additional properties for the entities and events in the utterance. They may also generate additional entities and events. So utterance understanding, under this formalization, is a process of probabilistically imagining the situation that the utterance refers to. We say that an assignment of values to all random variables in the model is a \emph{situation description}.
\input{infinity}
\input{uncertainty}
\input{situationdescriptions_formally}
\subsection{Defining situation description systems}
\label{sec:situation_description_def}
We now formally define situation descriptions and situation description systems. As discussed above, we use a probabilistic generative model where DRS conditions are generated from conceptual latent variables. We first define situation descriptions as a combination of a DRT fragment and a conceptual graph, then situation description systems.
\paragraph{A DRT fragment.} A DRS is a pair consisting of a set of \emph{discourse referents} $\{x_1, \ldots, x_n\}$ and a set of \emph{conditions} $\{C_1, \ldots, C_m\}$, written
\[\langle \{x_1, \ldots, x_n\}, \{C_1, \ldots, C_m\}\rangle\]
Intuitively, a DRS characterizes a discourse through the discourse referents that are being talked about, by stating constraints on those discourse referents in the conditions. For example, if we assume a Neo-Davidsonian representation, with discourse referents for both entities and events, then we can represent the sentence \emph{a vampire was sleeping} as
\[\langle\: \{x, e\}, \{vampire(x), sleep(e), Agent(e, x)\}\:\rangle\]
if we ignore phenomena like tense and aspect. In the simplest case, a condition is an atomic formula, like $sleep(x)$, but a condition can also be a negated DRS, or it can be an implication $D_1\Rightarrow D_2$, where $D_1$ and $D_2$ are DRSs.
In this paper we only consider a simple fragment of DRT, where the set of conditions consists of atomic formulas and negated atomic formulas. This makes it easy to model the condition set as generated from latent variables. We refer to this fragment of DRT as \emph{eDRT}, short for existential conjunctive DRT, as it cannot represent either universal quantifiers or disjunctions. We also only consider a simple fragment of the English language, of the form \textit{a Noun Verbed} or \textit{a Noun Verbed a Noun}, where the nouns denote objects, all determiners are indefinite, only singular forms are used, and phenomena like tense and aspect are ignored. For such sentences of English, the DRSs are actually eDRSs.
We use a Neo-Davidsonian representation, with discourse referents for both entities and events, as exemplified above, so we restrict ourselves to only unary and binary predicate symbols.
Formally, eDRSs are defined as follows. Let $REF$ be a set of discourse referents, and $PS$ a finite set of predicate symbols with arities of either one or two. In the following definition, $x_i$ ranges over the set $REF$, and $F$ over the set of predicate symbols $PS$. The language of \emph{eDRSs} (existential and conjunctive DRSs) is defined by:
\begin{description}
\item[conditions] $C::= F(x) \mid \neg F(x) \mid F(x_1, x_2) \mid \neg F(x_1, x_2)$
\item[eDRSs] $D ::= \langle\{x_1, \ldots, x_n\}, \{C_1, \ldots, C_m\}\rangle$
\end{description}
We assume the standard model-theoretic interpretation for DRT. We further assume that the DRS for the utterance is fixed and has been built with one of the standard DRS construction methods, for example the top-down algorithm from \citet{KampReyle}.\footnote{The assumption that the DRS is constructed ahead of time is a simplification, and ignores interactions between structural ambiguity and word meaning ambiguity.}
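For concreteness, in an implementation such as our WebPPL simulation, an eDRS like the one for \emph{a vampire was sleeping} above can be encoded as a plain data structure. The field names here are our own convention, not part of the definition:
\begin{lstlisting}
var edrs = {
  refs: ["x", "e"],
  conds: [{pred: "vampire", args: ["x"],      neg: false},
          {pred: "sleep",   args: ["e"],      neg: false},
          {pred: "Agent",   args: ["e", "x"], neg: false}]}
\end{lstlisting}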
\paragraph{Situation descriptions (SDs).}
A situation description consists of two parts. One part is an eDRS, and the other part is a \emph{conceptual graph} that looks like the graphical model in Figure~\ref{fig:astronomer_graphical} but has values for all the random variables. The two halves are connected through a partial mapping $g$ that links graph nodes to discourse referents. Each graph node generates conditions for ``its'' discourse referent(s). For example, a latent variable that is a concept might generate conditions only for a discourse referent $x$, describing it as both $bat(x)$ and $animal(x)$. Another latent variable that is a semantic role might generate conditions for a pair $\langle e, x\rangle$ of discourse referents, for example it might generate $Theme(e, x)$, saying that $x$ is the Theme in an event to which $e$ refers. To say this, we need some more notation: We write $Var(C)$ for the sequence of variables in an eDRS condition $C$. So $Var(F(x)) = Var(\neg F(x)) = \langle x \rangle$, and $Var(F(x_1, x_2)) = Var(\neg F(x_1, x_2)) = \langle x_1, x_2\rangle$.
Intuitively, different eDRSs that only differ in variable names do not constitute different situation descriptions.\footnote{Equivalence modulo variable renaming is simple in our current case, where we only consider one sentence at a time. In a dynamic DRS setting, variable renaming would have to be at the level of an agent's complete mental state.}
Likewise, in the conceptual part of a situation description, it is important what the values of the random variables are, and how they are connected, but the identity of the underlying nodes
makes no difference for the situation description. Accordingly, we define situation descriptions as equivalence classes of proto-situation descriptions, which we now define:
\vspace{2ex}
\begin{definition}[Proto-situation description]
Given a set $Conc$ of directed graphs with node labels, a proto-situation description over $Conc$ is a tuple
\[\langle G, D, g\rangle\]
of a graph $G \in Conc$, an eDRS $D$, and a
partial mapping $g$ from nodes of $G$ to sequences of variables in
$D$ such that $range(g) = \{ V \mid \exists \text{ condition
} C \text{ of } D: Var(C) = V\}$.
\end{definition}
We group proto-situation descriptions into equivalence classes with respect to variable renaming in the eDRSs, and homomorphisms on the graphs. In the practical experiments below, we sample proto-situation descriptions, but group them in equivalence classes before we read off the results.
We say that two proto-situation descriptions $S_1 = \langle G_1, D_1, g_1\rangle$ and $S_2 = \langle G_2, D_2, g_2\rangle$ are equivalent if $D_1, D_2$ are equivalent via a variable mapping $a$, and $G_1$ and $G_2$ are equivalent with respect to a homomorphism $b$ that respects node labels, and for any node $v_1$ of $G_1$ it holds that $v_1 \in dom(g_1)$ iff $b(v_1) \in dom(g_2)$, and if $v_1 \in dom(g_1)$ then $a(g_1(v_1)) = g_2(b(v_1))$.
\vspace{2ex}
\begin{definition}[Situation description]
A \emph{situation description} over $Conc$ is an equivalence class of proto-situation descriptions over $Conc$.
\end{definition}
We write $\langle G, D, g\rangle_\sim$ for the equivalence class containing the proto-situation description $\langle G, D, g\rangle$, and $G_\sim$ for an equivalence class only of conceptual graphs with respect to homomorphisms.
\paragraph{Situation description systems (SDSs).}
As discussed in \S~\ref{sec:generative}, probabilistic generative models are usually formulated as if their aim was to generate data from scratch, according to their generative story. But they are typically used in the opposite direction, by reasoning ``back'' from an observation to the latent variables that, according to the generative story, could have created the observation. We proceed in the same way here: We define SDSs first as generative systems that generate situation descriptions, then say how a given observation restricts the generative process. In an SDS, the meaning of an utterance is characterized as a probability distribution over situation descriptions, allowing for uncertainty in the listener's understanding. Matching our intended generative story, where latent conceptual variables generate conditions in a DRS, we factor the SDS probability $\Delta$ into two parts $\Delta_1$ and $\Delta_2$. The probability of a situation description $\langle G, D, g\rangle_\sim$ is defined as the probability $\Delta_1$ of generating the graph $G$, and the probability $\Delta_2$ of generating the conditions of $D$ given $G$.
There is one additional restriction we impose on situation description systems. As discussed in \S~\ref{sec:infinity}, SDs are mental representations that are finite in nature. We assume that there is some maximum size of SDs, both DRSs and graphs, that an agent can grasp. We call a set $\cal S$ of situation descriptions \emph{finitely embeddable} if there is some dimensionality $d$ and a function $f: {\cal S} \to \realnumbers^d$ that is injective. This is a weak requirement, but it is necessary in view of the unimaginably large size of whole worlds, and of the possibility that the number of properties in the world might not be enumerable.
In the last part of the definition, we define the SDS representation for a given utterance. Reasoning ``backwards'' through the generative story, the utterance rules out some conceptual graphs, namely the ones that cannot generate the utterance. Conceptual graphs that generate the utterance plus additional eDRS conditions are not ruled out. To express this formally, we need the notion of a situation description \emph{containing} the eDRS $D_u$. If $S$ is an SD, then we write $D_u \sqsubseteq S$ iff there is a member $\langle G, D, g\rangle$ in the equivalence class $S$ such that the discourse referents in $D_u$ are a subset of the discourse referents in $D$, and the conditions in $D_u$ are a subset of the conditions in $D$. With this notation, we can say how an SDS ${\cal D}$ represents an utterance $D_u$: Its representation is again an SDS, ${\cal D}'$, that is like ${\cal D}$ except that all situation descriptions that do not contain $D_u$ have probability zero (and all other probabilities are renormalized).
\vspace*{2ex}
\begin{definition}[Situation description system]
A \emph{situation description system SDS} over a set $Conc$ of conceptual graphs is
a tuple \[{\cal D} = \langle {\cal S}, \Delta\rangle\]
where ${\cal S}$ is a finitely embeddable set of situation descriptions over $Conc$, and $\Delta$ is a probability distribution over ${\cal S}$ that factors into probability distributions $\Delta_1, \Delta_2$ as
\[
\Delta(\langle G, D, g\rangle_\sim) = \Delta_1(G_\sim)\,\Delta_2(\langle G, D, g\rangle_\sim \mid G_\sim)
\]
For an utterance given as an eDRS $D_u$, the representation ${\cal D}(D_u)$ is an SDS ${\cal D}' = \langle {\cal S}, \Delta'\rangle$ with
\[\Delta'(S) \propto \left\{\begin{array}{ll}
\Delta(S) & \text{if } D_u \sqsubseteq S\\
0 & \text{else}
\end{array}\right.\]
\end{definition}
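To make the renormalization concrete, consider a small worked example with made-up numbers: suppose ${\cal S} = \{S_1, S_2, S_3\}$ with $\Delta(S_1) = 0.5$, $\Delta(S_2) = 0.3$, $\Delta(S_3) = 0.2$, and only $S_1$ and $S_2$ contain the utterance $D_u$. Then
\[
\Delta'(S_1) = \frac{0.5}{0.5+0.3} = 0.625, \qquad
\Delta'(S_2) = \frac{0.3}{0.5+0.3} = 0.375, \qquad
\Delta'(S_3) = 0.
\]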
\subsection{Uncertainty about situation descriptions}
We represent the meaning of an utterance as a \emph{distribution} over situation descriptions rather than a single best interpretation. There are several reasons for this choice. First, we want to be able to express uncertainty about the situation mentioned in the utterance. Going back to the example from above, \textit{Zoe was playing a sonata}, it is clear that some instrument must be involved, but not which instrument; and it is not clear what other participants or props are in the situation: a chair? a room? a teacher? other players? With a distribution over situation descriptions, we can express uncertainty about the number and nature of the participants and props in the situation.
Second, we want to be able to express uncertainty about the properties or entailments generated from a concept. Prevalent theories of concept representation in psychology (as reviewed for example in~\citet{Murphy:02}) assume that there is no core set of necessary and sufficient features of concept members. We can model this by assuming that properties or entailments apply probabilistically to members of the concept, or in terms of a probabilistic generative model, that a concept is endowed with probabilities of generating different entailments. Take, for example, the utterance \textit{Mary lied}. Following the analysis of \citet{ColemanKay:81} of the verb \textit{to lie}, we could characterize the meaning of the utterance through multiple situation descriptions that differ in whether what Mary said was actually untrue, and whether she was intending to deceive.\footnote{Taking for a moment the speaker perspective, a speaker may have full information about the situation, and know exactly which entailments hold. But that is not always the case: If I read in a newspaper that \textit{Mary lied}, then I am the reader/listener and may not have full information about all entailments. If I then want to relay this information to someone else, then I become the speaker, but that does not suddenly clear up all my uncertainties about Mary's intents and beliefs.}
Third, we have uncertainty about word sense. The pun sentence from the introduction, repeated here as \ref{ex:astronomer_again}, is ambiguous between two very different senses of \textit{star}. Sentences can also be ambiguous with respect to word sense without being puns, for example sentence \ref{ex:checkboat}, reported by \citet{Hanks2000}. The sentence could be saying either that the man was inspecting the boat or stopping it. (He was stopping it, as the wider discourse context reveals.) We can describe \ref{ex:astronomer_again} by probabilistically generating situation descriptions that differ in the concept underlying the word \textit{star}, and similarly with \ref{ex:checkboat} and \textit{check}.
\ex. \a. \label{ex:astronomer_again} The astronomer married the \underline{star}.
\b. \label{ex:checkboat} Then the boat began to slow down. She saw that the man who owned it was
hanging on to the side and \underline{checking} it each time it swung.
For words with multiple related meanings, it seems dubious to assume that those meanings map to separate concepts. \citet{hogeweg_vicente_2020} summarize the relevant work in psychology, where sense enumeration approaches to polysemy are broadly rejected (though there is no agreement on the best alternative). In the situation description framework, we could instead posit that different but related senses correspond to different entailments from the same concept. Another option is to assume that concepts have internal structure, that they consist of multiple clusters, each with their own entailments and selectional constraints. We explore the latter option below in \S\ref{sec:illustration}.
\section{Introduction}
\IEEEPARstart{S}{alient} object detection (SOD) aims at detecting the objects in a scene that humans would naturally focus on \cite{cheng2015global,borji2014salient,zhao2019egnet}. It has numerous useful applications, including
object segmentation/recognition \cite{Liu2012,ye2017salient,zhoucvpr2020,jerripothula2016image,
Rutishauser2004,han2006unsupervised}, image/video compression \cite{Guo2010},
video detection/summarization \cite{Ma2005,fan2019shifting}, content-based image editing
\cite{wang2017deep,Stentiford2007,Marchesotti2009,Ding2011,Goferman2010}, informative common object
discovery \cite{zhang2016detection,zhang2017co,zhang2020adaptive}, and image retrieval \cite{Chen2009,Gao2015,liu2013model}.
Many SOD models have been developed under the assumption that the inputs are individual
RGB/color images \cite{wang2016correspondence,zhang2017amulet,zhang2017learning,zhang2018progressive,feng2019attentive,piao2019deep,fu2019deepside} or sequences \cite{wang2019learning,wang2019zero,song2018pyramid,wang2019revisiting,fang2019salient}. As depth cameras, such as Kinect and RealSense, become more and more
popular, SOD from RGB-D inputs (``D'' refers to depth) is emerging as an attractive research
topic. Although a number of previous works have attempted to explore the role of depth
in saliency analysis, several issues remain:
\begin{figure}
\includegraphics[width=0.485\textwidth]{Fig_Motivation-min}
\vspace{-15pt}
\caption{Applying deep saliency models DHS \cite{liu2016dhsnet} and
DSS \cite{hou2019deeply}, which are fed with an RGB image (1$^{st}$ row) or a depth map (2$^{nd}$ row).
Both of the models are trained on a single RGB modality. By contrast, our \emph{JL-DCF}~considers both modalities and thus generates better results (last column).}
\label{fig_motivation}
\end{figure}
\textbf{(i) Deep-based RGB-D SOD methods are still under-explored:}
Despite more than one hundred papers on RGB SOD models being published since 2015 \cite{Fan2018SOC,wang2019salient,wang2019iterative,wang2018salient,wang2019salient2}, there are only a few deep
learning-based works focusing on RGB-D SOD. The first model utilizing
convolutional neural networks (CNNs) for RGB-D SOD \cite{qu2017rgbd}, which adopts a shallow CNN as the saliency map integration model, was introduced in 2017. Since then, only a dozen deep models have been proposed, as summarized in \cite{fan2019rethinking,Zhang2020UCNet}, leaving ample room for further performance improvement.
\textbf{(ii) Ineffective feature extraction and fusion:}
Most learning-based models fuse features of different modalities
either by early-fusion \cite{song2017depth,liu2019salient,huang2018rgbd,fan2019rethinking,zhang2021bilateral} or late-fusion \cite{han2017cnns,wang2019adaptive}.
Although these two simple strategies have achieved encouraging
progress in this field in the past (as pointed out in \cite{chen2018progressively}), they face challenges in either extracting representative multi-modal
features or effectively fusing them. While some other works have adopted a middle-fusion strategy \cite{chen2018progressively,zhu2019pdnet,chen2019three},
which conducts independent feature extraction and fusion using individual CNNs, their
sophisticated network architectures and large number of parameters require an elaborately designed training process and large amount of training data.
Unfortunately, high-quality depth maps are still scarce \cite{zhao2019contrast},
which may lead deep learning-based models to sub-optimal solutions.
\noindent\textbf{Motivation.} To tackle RGB-D SOD, we propose a novel joint learning and densely cooperative
fusion (\emph{JL-DCF}) architecture that
outperforms existing deep learning-based techniques.
Our method adopts the middle-fusion strategy mentioned above.
However, different from previous works which conduct independent feature
extraction from RGB and depth views\footnote{In this paper, ``view'' and ``modality'' are used interchangeably.}, \emph{JL-DCF}~effectively extracts deep
hierarchical features from both inputs simultaneously, through a Siamese network \cite{chopra2005learning} (shared backbone).
The underlying motivation is that, although depth and RGB
images come from different modalities, they nevertheless share similar features/cues, such as strong figure-ground contrast \cite{niu2012leveraging,
peng2014rgbd,cheng2014depth}, closure of object contours \cite{feng2016local,
shigematsu2017learning}, and connectivity to image borders \cite{wang2017rgb,
liang2018stereoscopic}. This makes cross-modal transferring feasible, even for deep models. As evidenced in Fig. \ref{fig_motivation}, a model trained on a single RGB modality, like DHS \cite{liu2016dhsnet}, can sometimes perform well in the depth view. Nevertheless, a similar model, like DSS \cite{hou2019deeply}, could also fail in the depth view without proper adaptation or transfer.
To the best of our knowledge, the proposed \emph{JL-DCF}~scheme is \emph{the first to
leverage such transferability for deep models}, by treating a depth image as a
special case of a color image and employing a Siamese CNN for both RGB and depth
feature extraction. Additionally, we develop a densely cooperative fusion strategy to
reasonably combine the learned features of different modalities. In a nutshell, this paper provides three main contributions:
\begin{itemize}
\item This work is the first to leverage the commonality and transferability between RGB and depth views through a Siamese architecture. As a result, we introduce a general framework for RGB-D SOD,
called \emph{JL-DCF}, which consists of two sub-modules: joint learning and
densely cooperative fusion. The key features of these two components are their robustness and effectiveness, which will be beneficial for future modeling in related multi-modality tasks in computer vision.
In particular, we advance the state-of-the-art (SOTA) by a significant average of $\sim$2\% (max F-measure) across seven challenging datasets. Further, by improving \emph{JL-DCF}~through bridging between RGB and RGB-D SOD, even more gains can be obtained (see Section \ref{sec43}).
The code is available at
\href{https://github.com/kerenfu/JLDCF}{https://github.com/kerenfu/JLDCF/}.
\item We present a thorough evaluation of 14 SOTA methods~\cite{ju2014depth,feng2016local,cong2016saliency,song2017depth,
guo2016salient,qu2017rgbd,wang2019adaptive,han2017cnns,chen2019multi,
chen2018progressively,chen2019three,zhao2019contrast,fan2019rethinking,
Piao2019depth}.
Besides, we conduct a comprehensive ablation study, covering different input sources, learning schemes, and feature fusion strategies, to demonstrate the effectiveness of \emph{JL-DCF}.
Some interesting findings also encourage further research in this field.
\item In our experiments, we show that, in addition to the RGB-D SOD task, \emph{JL-DCF}~is also directly applicable to other multi-modal detection tasks, including RGB-T (thermal infrared) SOD and video SOD (VSOD). Again, \emph{JL-DCF}~achieves performance comparable to or better than SOTA methods on these two tasks, further validating its robustness and generality. To the best of our knowledge, this appears to be the first time in the saliency detection community that a proposed framework has been proven effective on such diverse tasks. \fkr{In addition, we make the first attempt to link \emph{JL-DCF}~to RGB-D semantic segmentation, which is a closely related field, compare it with SOTA segmentation models, and discuss the underlying connections.}
\end{itemize}
The remainder of the paper is organized as follows. Section \ref{sec2} discusses related work on RGB-D SOD, Siamese networks in computer vision, \fdp{and also RGB-D semantic segmentation}. Section \ref{sec3} describes the proposed \emph{JL-DCF}~in detail. Experimental results, performance evaluations and comparisons are included in Section \ref{sec4}. Finally, conclusions are drawn in Section \ref{sec5}.
\section{Related Work}\label{sec2}
\subsection{RGB-D Salient Object Detection}\label{sec21}
\noindent\textbf{Traditional models.}
The pioneering work for RGB-D SOD was produced by Niu \emph{et al.} \cite{niu2012leveraging}, who introduced disparity contrast and domain knowledge into stereoscopic photography to measure stereo saliency. After Niu's work, various hand-crafted features/hypotheses originally proposed for RGB SOD were extended to RGB-D, such as center-surround difference\cite{ju2014depth,guo2016salient}, contrast \cite{cheng2014depth,peng2014rgbd,cong2016saliency}, background enclosure \cite{feng2016local}, center/boundary prior \cite{cheng2014depth,liang2018stereoscopic,wang2017rgb,cong2019going}, compactness \cite{cong2016saliency,cong2019going}, or a combination of various saliency measures \cite{song2017depth}. All the above models rely heavily on heuristic hand-crafted features, resulting in limited generalizability in complex scenarios.
\vspace{5pt}
\noindent\textbf{Deep models.}
Recent advances in this field have been obtained by using deep learning and CNNs.
Qu \emph{et al.} \cite{qu2017rgbd} first utilized a CNN to fuse different low-level saliency cues for judging the saliency confidence values of superpixels. Shigematsu \emph{et al.} \cite{shigematsu2017learning} extracted ten superpixel-based hand-crafted depth features capturing the background enclosure cue, depth contrast, and histogram distance. These features are fed to a CNN, whose output is shallowly fused with the RGB feature output to compute superpixel saliency.
A recent trend in this field is to exploit fully convolutional neural networks (FCNs) \cite{Long2017Fully}. Chen \emph{et al.} \cite{chen2018progressively} proposed a bottom-up/top-down architecture \cite{pinheiro2016learning}, which progressively performs cross-modal complementarity-aware fusion in its top-down pathway. Han \emph{et al.} \cite{han2017cnns} modified/extended the structure of the RGB-based deep neural network in order for it to be applicable for the depth view and then fused the deep representations of both views via a fully connected layer. A three-stream attention-aware network was proposed in \cite{chen2019three}, which extracts hierarchical features from RGB and depth inputs through two separate streams. Features are then progressively combined and selected via attention-aware blocks in the third stream. A new multi-scale multi-path fusion network with cross-modal interactions was proposed in \cite{chen2019multi}. Works \cite{liu2019salient} and \cite{huang2018rgbd} formulated a four-channel input by concatenating RGB and depth data. The input is later fed to a single-stream recurrent CNN and an FCN with short connections, respectively. The model in \cite{zhu2019pdnet} employed a subsidiary network to obtain depth features and used them to enhance the intermediate representation in an encoder-decoder architecture. Zhao \emph{et al.} \cite{zhao2019contrast} proposed a model that generates a contrast-enhanced depth map, which is later used as a prior map for feature enhancement in subsequent fluid pyramid integration. Fan \emph{et al.} \cite{fan2019rethinking} constructed a new RGB-D dataset called the Salient Person (SIP) dataset, and introduced a depth-depurator network to judge whether a depth map should be concatenated with the RGB image to formulate an input signal. Piao \emph{et al.} \cite{Piao2019depth} proposed a depth-induced multi-scale recurrent attention network, where the multi-scale fused features are re-weighted by a depth-induced vector and then processed by a recurrent attention module.
Among concurrent works, Liu \emph{et al.} \cite{liu2020learning} proposed a selective self-mutual attention mechanism for RGB-D saliency detection, inspired by the non-local model \cite{wang2018non}.
Zhang \emph{et al.} \cite{zhang2020select} designed a complementary interaction module to discriminatively select representations from RGB and depth data, after which the learning was enhanced by a new compensation-aware loss.
Piao \emph{et al.} \cite{Piao2020a2dele} proposed attentive and adaptive depth distillation to learn an enhanced RGB salient object detector by transferring depth knowledge.
Zhang \emph{et al.} \cite{Zhang2020UCNet,zhang2020uncertainty} introduced the conditional variational autoencoder to model uncertainty in saliency annotation, which generates multiple potential saliency maps that are then combined by a consensus module through voting.
\begin{figure}
\includegraphics[width=0.485\textwidth]{Fig_SOTAscheme}
\vspace{-15pt}
\caption{Typical schemes for RGB-D saliency detection. (a) Early-fusion. (b) Late-fusion. (c) Middle-fusion. }
\label{fig_sotascheme}
\end{figure}
\vspace{5pt}
\noindent\textbf{Categorization and discussion.}
Generally, as summarized by previous literature \cite{chen2018progressively,zhao2019contrast}, most of the above approaches can be divided into three categories: (a) early-fusion \cite{song2017depth,liu2019salient,huang2018rgbd,fan2019rethinking}, (b) late-fusion \cite{han2017cnns,wang2019adaptive} and (c) middle-fusion \cite{chen2018progressively,zhu2019pdnet,chen2019three,chen2019multi,Piao2019depth,liu2020learning,zhang2020select}.
Fig. \ref{fig_sotascheme} (a)-(c) illustrate these three fusion strategies.
\emph{Early-fusion} (Fig. \ref{fig_sotascheme} (a)) uses simple concatenation to fuse the inputs. This makes it difficult to capture the complementary interactions between the RGB and depth views, because the two types of information are blended at the very first stage, while the supervision signal is applied only at the far end of the network. The learning process is prone to local optima in which only RGB or only depth features are learned, so improvement after view fusion is not guaranteed. Besides, performing deep supervision for the RGB and depth views individually is infeasible, making it difficult to steer learning in the correct direction.
\emph{Late-fusion} (Fig. \ref{fig_sotascheme} (b)) explicitly extracts RGB and depth features using two parallel networks. This ensures that both the RGB and depth views contribute to the final decision. Also, it is very straightforward to apply individual view-specific supervision in this scheme. However, the drawback is that this scheme fails to mine complex intrinsic correlations between the two views, \emph{i.e.}, the highly non-linear complementary rules.
\emph{Middle-fusion} (Fig. \ref{fig_sotascheme} (c)) combines the merits of (a) and (b), since both feature extraction and subsequent fusion are handled by relatively deep CNNs. As a consequence, high-level concepts can be learned from both modalities and complex integration rules can be mined. Meanwhile, adding extra individual deep supervision for the RGB and depth data is straightforward.
The proposed \emph{JL-DCF}~scheme falls under the middle-fusion category. However, unlike the aforementioned methods \cite{chen2018progressively,zhu2019pdnet,chen2019three,chen2019multi,Piao2019depth,liu2020learning,zhang2020select,fan2020bbs,zhai2020bifurcated}, where the two feature extraction streams are independent, we propose to utilize a Siamese architecture \cite{chopra2005learning}, in which both the network architecture and weights are shared, as illustrated by the red part in Fig. \ref{fig_sotascheme} (c). This brings two major benefits: 1) cross-modal knowledge-sharing becomes straightforward via joint learning; 2) model parameters are largely reduced since only one shared network is needed, which facilitates the learning process.
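For concreteness, the weight-sharing idea can be expressed by the following minimal PyTorch-style sketch (class and variable names are illustrative and not taken from our released code; in practice hierarchical side-output features, rather than the final backbone output, are tapped):
\begin{verbatim}
import torch.nn as nn
from torchvision.models import resnet101

class SiameseBackbone(nn.Module):
    # One backbone whose architecture AND weights are
    # shared by the RGB and depth views (a sketch).
    def __init__(self):
        super().__init__()
        self.backbone = resnet101(weights=None)

    def forward(self, rgb, depth):
        # The same parameters process both modalities,
        # so cross-modal knowledge is merged into the
        # shared weights during training.
        return self.backbone(rgb), self.backbone(depth)
\end{verbatim}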
\begin{figure*}
\includegraphics[width=\textwidth]{Fig_Blockdiagram-min}
\vspace{-15pt}
\caption{Block diagram of the proposed \emph{JL-DCF}~framework for RGB-D SOD. The JL (joint learning) component is shown in gray, while the DCF (densely cooperative fusion) component is shown in light green. CP1$\sim$CP6: Feature compression modules. FA1$\sim$FA6: Feature aggregation modules. CM1$\sim$CM6: Cross-modal fusion modules. ``\emph{H}'' denotes the spatial size of output feature maps at a particular stage. See Section \ref{sec3} for details.}
\label{fig_blockdiagram}
\end{figure*}
\subsection{Siamese Networks in Computer Vision}\label{sec22}
The concept of the ``Siamese network'' was introduced by Bromley \emph{et al.} \cite{bromley1994signature} in the 1990s for hand-written signature verification. In their work, two identical (\emph{i.e.}, ``Siamese'') neural networks with exactly the same parameters were introduced to handle two input signatures, and the obtained feature vectors were constrained by a distance measure during learning. This idea was later extended to various computer vision tasks, including face verification \cite{chopra2005learning,taigman2014deepface}, one-shot image recognition \cite{koch2015siamese}, stereo matching \cite{zagoruyko2015learning,zbontar2015computing,luo2016efficient,kendall2017end,khamis2018stereonet}, object tracking \cite{bertinetto2016fully,sun2019deep,guo2020siamcar,voigtlaender2020siam,chen2020siamese}, and semi-supervised video object segmentation \cite{cheng2018fast,wug2018fast,wang2019fast}. The essence of the Siamese network, and the reason it is so widely applicable, lies in its suitability for learning general feature representations together with a distance (or similarity) metric from two similar inputs, such as two face images \cite{chopra2005learning}, two image patches \cite{zagoruyko2015learning,zbontar2015computing}, a rectified pair of stereo images \cite{kendall2017end,khamis2018stereonet}, or a template image and a search image \cite{bertinetto2016fully}. After training, a Siamese network can be considered an embedding that serves within a comparison function. Recent works have attempted to manipulate the features obtained from Siamese networks to formulate elegant end-to-end frameworks \cite{kendall2017end,khamis2018stereonet}, or to achieve more accurate feature learning and matching \cite{guo2020siamcar,voigtlaender2020siam,chen2020siamese}.
A comprehensive summary of Siamese networks is beyond the scope of this work; readers can refer to the recent survey \cite{marvasti2019deep} for more details.
Different from all the above works, in this paper the Siamese network is aimed at exploiting saliency-aware cross-modal commonality and complementarity, instead of matching or measuring distance. In other words, deep RGB and depth cues from the Siamese network are fused/merged, rather than compared, in order to achieve the desired RGB-D saliency prediction. It is worth noting that the Siamese network has not yet been introduced to multi-modal saliency detection, and even in the entire saliency detection community only a very few works utilize Siamese networks.
\fdp{
\subsection{RGB-D Semantic Segmentation}\label{sec23}
RGB-D semantic segmentation is a research area closely related to RGB-D SOD. Although the two fields define their tasks differently, both aim at region segmentation. In contrast to segmenting salient object regions as in RGB-D SOD, RGB-D semantic segmentation aims to label all pixels with pre-defined categories given RGB-D inputs \cite{couprie2013indoor,Long2017Fully,gupta2014learning,hazirbas2016fusenet}. As a representative work, Shelhamer \emph{et al.} \cite{Long2017Fully} used FCNs to handle RGB-D semantic segmentation and experimented with early-fusion, \emph{i.e.}, concatenating RGB and depth as a new input, as well as late-fusion, \emph{i.e.}, averaging scores from RGB and HHA inputs \cite{gupta2014learning}. Existing RGB-D semantic segmentation techniques can be grouped into three types: 1) treat depth as an additional input source and combine the derived features with those from RGB \cite{couprie2013indoor,Long2017Fully,gupta2014learning,hazirbas2016fusenet,wang2016learning,li2016lstm,cheng2017locality,park2017rdfnet,deng2019rfbnet,chen2020bi}; 2) recover 3D data from RGB-D sources and process them with 3D/volumetric CNNs to handle appearances and geometries simultaneously \cite{qi20173d,song2017semantic}; 3) utilize depth clues as auxiliary assistance to augment feature extraction from the RGB modality \cite{lin2017cascaded,wang2018depth,chen20193d,xing2020malleable,Chen2021spatial}, such as depth-aware convolution/pooling \cite{wang2018depth}, 2.5D convolution \cite{xing2020malleable}, and S-Conv \cite{Chen2021spatial}. Also note that the models of the first type can be further divided, similarly to Fig. \ref{fig_sotascheme}, into three categories, \emph{i.e.}, early-fusion \cite{couprie2013indoor,Long2017Fully}, late-fusion \cite{Long2017Fully,gupta2014learning,li2016lstm,cheng2017locality}, and middle-fusion \cite{hazirbas2016fusenet,wang2016learning,park2017rdfnet,deng2019rfbnet,chen2020bi}, as summarized in recent literature \cite{park2017rdfnet,deng2019rfbnet}. This fact reveals a potentially strong correlation between RGB-D SOD and semantic segmentation. We will further discuss the relation between the proposed scheme and RGB-D semantic segmentation in Section \ref{sec47}.
}
\section{Methodology}\label{sec3}
The overall architecture of the proposed \emph{JL-DCF}~is shown in Fig. \ref{fig_blockdiagram}. It follows the classic bottom-up/top-down strategy \cite{pinheiro2016learning}. For illustrative purposes, Fig. \ref{fig_blockdiagram} depicts an example backbone with six hierarchies, as is common in the widely used VGG \cite{Simonyan14c} and ResNet \cite{He2015Deep}. The architecture consists of a JL component and a DCF component. The JL component conducts joint learning for the two modalities using a Siamese network. It aims to discover the commonality between the two views from a ``model-sharing'' perspective, since their information can be merged into the model parameters via back-propagation. As seen in Fig. \ref{fig_blockdiagram}, the hierarchical features jointly learned by the backbone are then fed to the subsequent DCF component. DCF is dedicated to feature fusion, and its layers are constructed in a densely cooperative way. In this sense, the complementarity between the RGB and depth modalities can be explored from a ``feature-integration'' perspective. To perform cross-view feature fusion, in the DCF component we elaborately design a cross-modal fusion module (the CM module in Fig. \ref{fig_blockdiagram}). Details of \emph{JL-DCF}~are given in the following sections.
\subsection{Joint Learning (JL)}\label{sec32}
As shown in Fig. \ref{fig_blockdiagram} (gray part), the inputs of the JL component are an RGB image together with its corresponding depth map. We first normalize the depth map into the interval [0, 255] and then convert it to a three-channel map through color mapping. In our implementation, we simply use the vanilla gray color mapping, which is equivalent to replicating the single-channel map into three channels. Note that other color mappings \cite{al2016creating} or transformations, like the mean used in \cite{han2017cnns}, could also be considered for generating the three-channel representation. Next, the three-channel RGB image and transformed depth map are concatenated to formulate a \emph{batch}, so that the subsequent CNN backbone can perform parallel processing. Note that, unlike the early-fusion schemes previously mentioned, which often concatenate the RGB and depth inputs along the 3$^{rd}$ (channel) dimension, our scheme concatenates them along the 4$^{th}$ dimension, often called the batch dimension. For example, in our case, a transformed $320\times320\times3$ depth map and a $320\times320\times3$ RGB image formulate a batch of size $320\times320\times3\times2$, rather than $320\times320\times6$.
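As a minimal sketch of this batch formulation (assuming PyTorch-style $N\times C\times H\times W$ tensors, where the batch dimension comes first; the function name is hypothetical):
\begin{verbatim}
import torch

def make_joint_batch(rgb, depth):
    # rgb: (1, 3, 320, 320); depth: (1, 1, 320, 320).
    # Normalize depth to [0, 255] and replicate it to
    # three channels (vanilla gray color mapping).
    d = depth - depth.min()
    d = d / (d.max() + 1e-8) * 255.0
    d3 = d.repeat(1, 3, 1, 1)
    # Concatenate along the BATCH dimension, not the
    # channel one, yielding shape (2, 3, 320, 320).
    return torch.cat([rgb, d3], dim=0)
\end{verbatim}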
The hierarchical features from the shared CNN backbone are then leveraged in a side-output manner, as in \cite{hou2019deeply}. Since the side-output features have varied resolutions and channel numbers (usually the deeper, the more channels), we first employ a set of CP modules (CP1$\sim$CP6 in Fig. \ref{fig_blockdiagram}, practically implemented by convolutional layers plus ReLU non-linearities) to compress them to an identical, smaller channel number, denoted as $k$. We do this for two reasons: (1) using a large number of feature channels for subsequent decoding is memory- and computation-expensive, and (2) unifying the number of feature channels facilitates various element-wise operations. Note that the outputs from our CP modules are still batches, denoted by the thicker black arrows in Fig. \ref{fig_blockdiagram}.
Coarse localization provides the basis for the subsequent top-down refinement \cite{pinheiro2016learning}. In addition, jointly learning the coarse localization guides the shared CNN to extract independent hierarchical features from the RGB and depth views simultaneously. To enable the CNN backbone to coarsely locate the targets in both views, we apply deep supervision to the JL component at the last hierarchy.
To achieve this, as shown in Fig. \ref{fig_blockdiagram}, we add a $(1 \times 1,1)$ convolutional layer after the CP6 module to achieve coarse prediction. The depth and RGB-associated outputs are supervised by the down-sampled ground truth map. The loss generated in this stage is called the global guidance loss $\mathcal{L}_{g}$.
\subsection{Densely Cooperative Fusion (DCF)}\label{sec33}
As shown in Fig. \ref{fig_blockdiagram} (light green part), the output batch features from the CP modules contain depth and RGB information. They are fed to the DCF component, which can be considered a decoder that performs multi-scale cross-modal fusion. Firstly, we design a CM (cross-modal fusion) module to split and then merge the batch features (Fig. \ref{fig_blockdiagram}, bottom-right). This module first splits the batch data and then conducts ``addition and multiplication'' feature fusion, which we call \emph{cooperative fusion}. Mathematically, let a batch feature be denoted by $\{{X}_{rgb}, {X}_d\}$, where ${X}_{rgb}$ and ${X}_d$ represent the RGB and depth feature tensors, each with $k$ channels. The CM module conducts the fusion as:
\begin{equation} \label{equ_cm}
CM(\{{X}_{rgb}, {X}_d\})={X}_{rgb} \oplus {X}_d \oplus ( {X}_{rgb} \otimes {X}_d),
\end{equation}
\noindent where ``$\oplus$'' and ``$\otimes$'' denote element-wise addition and multiplication. The blended features output from the CM modules are still made up of $k$ channels. Equ. (\ref{equ_cm}) enforces explicit feature fusion indicated by ``$\oplus$'' and ``$\otimes$'', where ``$\oplus$'' exploits \emph{feature complementarity} and ``$\otimes$'' puts more emphasis on \emph{feature commonality}. These two properties intuitively gather different clues, as
shown in Fig. \ref{fig_featurevisualize}, and are generally important in cross-view fusion. On the other hand, Equ. (\ref{equ_cm}) can also be deemed a kind of mutual residual attention \cite{wang2017residual} combining ``$A+A\otimes B$'' and ``$B+B\otimes A$'', where $A$ and $B$ are the two types of features (\emph{i.e.}, ${X}_{rgb}, {X}_d$), each of which attends to the elements of the other in a residual way.
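For clarity, a minimal sketch of the CM module under this batch formulation (assuming, as above, that the RGB samples occupy the first half of the batch; names are illustrative) is:
\begin{verbatim}
import torch
import torch.nn as nn

class CMModule(nn.Module):
    # Cross-modal fusion (a sketch): split the batch,
    # then fuse as X_rgb + X_d + X_rgb * X_d.
    def forward(self, batch_feat):
        x_rgb, x_d = torch.chunk(batch_feat, 2, dim=0)
        # "+" exploits complementarity; "*" emphasizes
        # commonality between the two modalities.
        return x_rgb + x_d + x_rgb * x_d
\end{verbatim}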
\begin{figure}
\includegraphics[width=0.485\textwidth]{Fig_FeatureVisualize-min}
\vspace{-15pt}
\caption{Intermediate feature visualization in CM6, where the RGB and depth features after batch split are visualized. Generally, addition and multiplication operations gather different cross-modal clues, making the features of both dolls enhanced after Equ. (\ref{equ_cm}).}
\label{fig_featurevisualize}
\end{figure}
One may argue that the above CM module could be replaced by channel concatenation, which generates $2k$-channel concatenated features. However, we find that such a choice tends to trap the learning process in a local optimum, where it becomes biased towards only RGB information. The reason seems to be that channel concatenation essentially performs implicit feature selection rather than explicit feature fusion. This leads to degraded learning outcomes, where RGB features easily dominate the final prediction. Note that, as will be shown in Section \ref{sec44}, solely using RGB input can already achieve fairly good performance in the proposed framework. Comparisons between our CM modules and concatenation will be given in Section \ref{sec44}.
\begin{figure}
\includegraphics[width=0.48\textwidth]{Fig_Inception-min}
\vspace{-10pt}
\caption{Inception structure used for the FA modules in Fig. \ref{fig_blockdiagram}. All convolutional and max-pooling layers have stride 1, therefore maintaining spatial feature sizes. Unlike the original Inception module \cite{szegedy2015going}, we adapt it to have the same input/output channel number $k$.}
\label{fig_inception}
\end{figure}
As shown in Fig. \ref{fig_blockdiagram}, the fused features from CM1$\sim$CM6 are fed to a decoder augmented with dense connections \cite{huang2017densely}. Using dense connections promotes the blending of depth and RGB features at various scales. Therefore, unlike in the traditional UNet-like decoder \cite{ronneberger2015u}, each aggregation module FA takes inputs from all levels deeper than itself. Specifically, FA denotes a feature aggregation module performing non-linear aggregation and transformation. To this end, we use the Inception module \cite{szegedy2015going} shown in Fig. \ref{fig_inception}, which performs multi-level convolutions with filter sizes $1\times1$, $3\times3$, and $5\times5$, as well as max-pooling. Note that the FA module in our framework is quite flexible; other modules (\emph{e.g.}, \cite{hu2018squeeze,wang2018non,woo2018cbam,gao2019res2net}) may also be considered in the future to improve the performance.
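A minimal sketch of such an adapted Inception-style FA module (with $k$ input/output channels; the per-branch channel split is an illustrative choice, not our exact configuration) is:
\begin{verbatim}
import torch
import torch.nn as nn

class FAModule(nn.Module):
    # Inception-style aggregation with k in/out
    # channels and stride 1 everywhere (a sketch).
    def __init__(self, k=64):
        super().__init__()
        c = k // 4  # channels per branch (illustrative)
        self.b1 = nn.Conv2d(k, c, 1)
        self.b3 = nn.Sequential(
            nn.Conv2d(k, c, 1),
            nn.Conv2d(c, c, 3, padding=1))
        self.b5 = nn.Sequential(
            nn.Conv2d(k, c, 1),
            nn.Conv2d(c, c, 5, padding=2))
        self.bp = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(k, c, 1))

    def forward(self, x):
        # All branches keep the spatial size; channel
        # concatenation restores the width to k.
        return torch.cat([self.b1(x), self.b3(x),
                          self.b5(x), self.bp(x)], dim=1)
\end{verbatim}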
Finally, the FA module with the finest features is denoted as FA1, whose output is then fed to a $(1 \times 1,1)$ convolutional layer to generate the final activation and then ultimately the saliency map. This final prediction is supervised by the resized ground truth (GT) map during training. We denote the loss generated in this stage as $\mathcal{L}_{f}$.
\subsection{Loss Function}\label{sec34}
The total loss function of our scheme is composed of the global guidance loss $\mathcal{L}_{g}$ and final loss $\mathcal{L}_{f}$. Assume that $G$ denotes supervision from the ground truth, $S^c_{rgb}$ and $S^{c}_{d}$ denote the coarse prediction maps contained in the batch after module CP6, and $S^{f}$ is the final prediction after module FA1. The total loss function is defined as:
\begin{equation} \label{equ_loss}
\mathcal{L}_{total}=\mathcal{L}_{f}(S^{f}, G) + \lambda \sum_{x \in \{rgb, d\}}\mathcal{L}_{g}(S^c_{x}, G),
\end{equation}
\noindent where $\lambda$ balances the emphasis of global guidance, and we adopt the widely used cross-entropy loss for $\mathcal{L}_{g}$ and $\mathcal{L}_{f}$ as:
\begin{equation} \label{equ_cel}
\mathcal{L}(S,G)=-\sum_{i}[G_i \log (S_i) + (1-G_i) \log (1-S_i)],
\end{equation}
\noindent where $i$ denotes pixel index, and $S \in \{S^c_{rgb}, S^c_{d}, S^{f}\}$.
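For illustration, the total loss can be sketched as follows (a minimal sketch assuming logit predictions and sum-reduced cross-entropy over pixels, under which $\lambda=256$ compensates for the $16^2$-times fewer pixels of the coarse maps; names are illustrative):
\begin{verbatim}
import torch.nn.functional as F

def total_loss(s_f, s_c_rgb, s_c_d,
               gt, gt_small, lam=256.0):
    # s_f: final prediction logits; s_c_*: coarse
    # logits; gt / gt_small: full- and down-sampled GT.
    def bce(s, g):
        return F.binary_cross_entropy_with_logits(
            s, g, reduction='sum')
    l_f = bce(s_f, gt)
    l_g = bce(s_c_rgb, gt_small) + bce(s_c_d, gt_small)
    return l_f + lam * l_g
\end{verbatim}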
\subsection{Bridging between RGB and RGB-D SOD}\label{sec35}
Since the RGB and depth modalities share the same master CNN backbone for feature extraction in \emph{JL-DCF}, it is easy to adapt \emph{JL-DCF}~to a single modality (\emph{e.g.},~RGB or depth) by replacing all the batch-related operations, such as the batch formulation and CM modules in Fig. \ref{fig_blockdiagram}, with identity mappings, while keeping all the other settings unchanged, including the dense decoder and deep supervision. In this way, one can obtain a full-resolution saliency estimation result from either an RGB or depth input. As a consequence, we can bridge between RGB and RGB-D SOD from a data perspective in the training phase of \emph{JL-DCF}. The underlying motivation is to use more RGB data to augment the generalizability of the JL component in \emph{JL-DCF}, as the JL component is shared by both the RGB and depth views. The newly incorporated RGB-based knowledge can help improve the Siamese network with regard to both the RGB and depth modalities.
To this end, we propose to further extend the JL component in a multi-task manner by considering RGB and RGB-D SOD as two simultaneous tasks. As shown in Fig. \ref{fig_bridge}, the JL component is shared across RGB and RGB-D SOD, and is jointly optimized by the data sources (\emph{i.e.}, training datasets) of these two tasks. Note that the RGB SOD datasets that can currently be used for training are much larger than the RGB-D ones, leading to a potential boost in generalizability. Practically, we obtain a coarse saliency map for the RGB SOD task from the JL component, and therefore the overall loss function, $\mathcal{L}_{total}^*$ in this case, can be written as the sum of the losses for the two tasks:
\begin{equation} \label{equ_joint_loss}
\mathcal{L}_{total}^*=\mathcal{L}_{f}(S^{f}, G) + \lambda \sum_{x \in \{ rgb, d, rgb*\}}\mathcal{L}_{g}(S^c_{x}, G),
\end{equation}
\noindent where $S^c_{rgb*}$ denotes the coarse saliency map obtained for the RGB SOD task, while the other notations are defined as in Equ. (\ref{equ_loss}). More specifically, an RGB image for the RGB SOD task is concatenated with the RGB-D data along the batch dimension to formulate a single batch, which is then fed to the CNN backbone. The coarse prediction associated with the RGB SOD task is obtained by batch splitting and then supervised by the corresponding ground truth. Following the supervision of $S_{rgb}^c$ in the RGB-D task, we use the standard cross-entropy loss for the RGB SOD task as well. Finally, it is worth noting that our scheme aims at leveraging additional RGB SOD data to augment RGB-D SOD, while in contrast the recent work \cite{li2020is} uses additional RGB-D SOD data for training in order to augment RGB SOD.
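A minimal sketch of this multi-task batch formulation and splitting (hypothetical function names, following the PyTorch-style layout of the earlier sketches) is:
\begin{verbatim}
import torch

def make_multitask_batch(rgb, depth3, rgb_star):
    # The RGB-D pair plus an RGB-SOD-task image share
    # one batch, so the Siamese backbone processes all
    # three views jointly.
    return torch.cat([rgb, depth3, rgb_star], dim=0)

def split_coarse(coarse_batch):
    # Recover the three coarse maps by batch splitting;
    # each is supervised by its own ground truth.
    return torch.chunk(coarse_batch, 3, dim=0)
\end{verbatim}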
\begin{figure}\centering
\includegraphics[width=0.5\textwidth]{Fig_Bridge-min}
\vspace{-15pt}
\caption{Bridging the RGB and RGB-D SOD tasks through \emph{JL-DCF}, where the JL and DCF components are detailed in Fig. \ref{fig_blockdiagram}. During training, the network of \emph{JL-DCF}~is simultaneously trained/optimized in an online manner for both tasks.}
\label{fig_bridge}
\end{figure}
\begin{table*}[t!]
\renewcommand{\arraystretch}{0.8}
\caption{Quantitative measures: S-measure ($S_\alpha$) \cite{Fan2017}, max F-measure ($F_{\beta}^{\textrm{max}}$) \cite{Borji2015TIP}, max E-measure ($E_{\phi}^{\textrm{max}}$) \cite{fan2018enhanced} and MAE ($M$) \cite{Perazzi2012} of SOTA methods and the proposed \emph{JL-DCF}~and $\emph{JL-DCF}^*$ (jointly trained with both RGB-D and RGB datasets) on six RGB-D SOD datasets.
The best and second best results are highlighted in \textbf{bold} and \emph{italics}, respectively$^{\rm a}$.}\label{tab_sota}
\centering
\footnotesize
\setlength{\tabcolsep}{1mm}
\begin{tabular}{p{0.8mm}p{0.8mm}r||c|c|c|c|c|c|c|c|c|c|c|c|c|c||c|c}
\hline
& & & \multicolumn{5}{c|}{Traditional}
&\multicolumn{11}{c}{Deep Learning} \\
\cline{4-19}
&& Metric & \tabincell{c}{ACSD\\\cite{ju2014depth}} & \tabincell{c}{LBE\\\cite{feng2016local}} & \tabincell{c}{DCMC\\\cite{cong2016saliency}} & \tabincell{c}{MDSF\\\cite{song2017depth}} & \tabincell{c}{SE\\\cite{guo2016salient}} & \tabincell{c}{DF\\\cite{qu2017rgbd}} & \tabincell{c}{AFNet\\\cite{wang2019adaptive}} & \tabincell{c}{CTMF\\\cite{han2017cnns}} & \tabincell{c}{MMCI\\\cite{chen2019multi}} & \tabincell{c}{PCF\\\cite{chen2018progressively}} & \tabincell{c}{TANet\\\cite{chen2019three}} & \tabincell{c}{CPFP\\\cite{zhao2019contrast}} & \tabincell{c}{DMRA\\\cite{Piao2019depth}}& \tabincell{c}{D3Net\\\cite{fan2019rethinking}}& \tabincell{c}{\emph{JL-DCF}\\Ours}&\tabincell{c}{\emph{JL-DCF}$^*$\\Ours}\\
\hline
\hline
\multirow{4}{*}{\begin{sideways}\textit{NJU2K}\end{sideways}} & \multirow{4}{*}{\begin{sideways}\cite{ju2014depth}\end{sideways}} & $S_\alpha\uparrow$ &0.699&0.695&0.686&0.748&0.664&0.763&0.772&0.849&0.858&0.877&0.878&0.879&0.886&0.895&\emph{0.903}&\textbf{0.911}\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ &0.711&0.748&0.715&0.775&0.748&0.804&0.775&0.845&0.852&0.872&0.874&0.877&0.886&0.889&\emph{0.903}&\textbf{0.913}\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ &0.803&0.803&0.799&0.838&0.813&0.864&0.853&0.913&0.915&0.924&0.925&0.926&0.927&0.932&\emph{0.944}&\textbf{0.948}\\
&& $M\downarrow$ &0.202&0.153&0.172&0.157&0.169&0.141&0.100&0.085&0.079&0.059&0.060&0.053&0.051&0.051&\emph{0.043}&\textbf{0.040}\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{NLPR}\end{sideways}} & \multirow{4}{*}{\begin{sideways}\cite{peng2014rgbd}\end{sideways}}& $S_\alpha\uparrow$
&0.673&0.762&0.724&0.805&0.756&0.802&0.799&0.860&0.856&0.874&0.886&0.888&0.899&0.906&\emph{0.925}&\textbf{0.926}\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ &0.607&0.745&0.648&0.793&0.713&0.778&0.771&0.825&0.815&0.841&0.863&0.867&0.879&0.885&\emph{0.916}&\textbf{0.917}\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ &0.780&0.855&0.793&0.885&0.847&0.880&0.879&0.929&0.913&0.925&0.941&0.932&0.947&0.946&\emph{0.962}&\textbf{0.964}\\
&& $M\downarrow$ &0.179&0.081&0.117&0.095&0.091&0.085&0.058&0.056&0.059&0.044&0.041&0.036&0.031&0.034&\textbf{0.022}&\emph{0.023}\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{STERE}\end{sideways}}& \multirow{4}{*}{\begin{sideways}\cite{niu2012leveraging}\end{sideways}} & $S_\alpha\uparrow$ &0.692&0.660&0.731&0.728&0.708&0.757&0.825&0.848&0.873&0.875&0.871&0.879&0.886&0.891&\emph{0.905}&\textbf{0.911}\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ &0.669&0.633&0.740&0.719&0.755&0.757&0.823&0.831&0.863&0.860&0.861&0.874&0.886&0.881&\emph{0.901}&\textbf{0.907}\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ &0.806&0.787&0.819&0.809&0.846&0.847&0.887&0.912&0.927&0.925&0.923&0.925&0.938&0.930&\emph{0.946}&\textbf{0.949}\\
&& $M\downarrow$ &0.200&0.250&0.148&0.176&0.143&0.141&0.075&0.086&0.068&0.064&0.060&0.051&0.047&0.054&\emph{0.042}&\textbf{0.039}\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{RGBD135}\end{sideways}} & \multirow{4}{*}{\begin{sideways}\cite{cheng2014depth}\end{sideways}} & $S_\alpha\uparrow$ &0.728&0.703&0.707&0.741&0.741&0.752&0.770&0.863&0.848&0.842&0.858&0.872&0.900&0.904&\emph{0.929}&\textbf{0.936}\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ &0.756&0.788&0.666&0.746&0.741&0.766&0.728&0.844&0.822&0.804&0.827&0.846&0.888&0.885&\emph{0.919}&\textbf{0.929}\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ &0.850&0.890&0.773&0.851&0.856&0.870&0.881&0.932&0.928&0.893&0.910&0.923&0.943&0.946&\emph{0.968}&\textbf{0.975}\\
&& $M\downarrow$ &0.169&0.208&0.111&0.122&0.090&0.093&0.068&0.055&0.065&0.049&0.046&0.038&0.030&0.030&\emph{0.022}&\textbf{0.021}\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{LFSD}\end{sideways}} & \multirow{4}{*}{\begin{sideways}\cite{li2014saliency}\end{sideways}} & $S_\alpha\uparrow$ &0.734&0.736&0.753&0.700&0.698&0.791&0.738&0.796&0.787&0.794&0.801&0.828&0.847&0.832&\emph{0.862}&\textbf{0.863}\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ &0.767&0.726&0.817&0.783&0.791&0.817&0.744&0.792&0.771&0.779&0.796&0.826&0.857&0.819&\textbf{0.866}&\emph{0.862}\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ &0.837&0.804&0.856&0.826&0.840&0.865&0.815&0.865&0.839&0.835&0.847&0.872&0.901&0.864&\textbf{0.901}&\emph{0.900}\\
&& $M\downarrow$ &0.188&0.208&0.155&0.190&0.167&0.138&0.134&0.119&0.132&0.112&0.111&0.088&0.075&0.099&\emph{0.071}&\textbf{0.071}\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{SIP}\end{sideways}} & \multirow{4}{*}{\begin{sideways}\cite{fan2019rethinking}\end{sideways}} & $S_\alpha\uparrow$ &0.732&0.727&0.683&0.717&0.628&0.653&0.720&0.716&0.833&0.842&0.835&0.850&0.806&0.864&\emph{0.879}&\textbf{0.892}\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ &0.763&0.751&0.618&0.698&0.661&0.657&0.712&0.694&0.818&0.838&0.830&0.851&0.821&0.862&\emph{0.885}&\textbf{0.900}\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ &0.838&0.853&0.743&0.798&0.771&0.759&0.819&0.829&0.897&0.901&0.895&0.903&0.875&0.910&\emph{0.923}&\textbf{0.949}\\
&& $M\downarrow$ &0.172&0.200&0.186&0.167&0.164&0.185&0.118&0.139&0.086&0.071&0.075&0.064&0.085&0.063&\emph{0.051}&\textbf{0.046}\\
\hline
\end{tabular}
\begin{flushleft}
\justifying
\fkr{$^{\rm a}$We have also implemented Pytorch versions of \emph{JL-DCF}~with different backbones, \emph{i.e.}, ResNet-101, ResNet-50, and VGG-16. They all achieve SOTA performance compared with the models in this table. Generally, due to differences between deep learning platforms, the Pytorch versions are found to perform moderately better than the Caffe implementation. Results and models can be found on our project site.}
\end{flushleft}\vspace{-0.5cm}
\end{table*}
\section{Experiments}\label{sec4}
\subsection{Datasets and Metrics}\label{sec41}
Experiments are conducted on six classic RGB-D benchmark datasets: NJU2K \cite{ju2014depth} (2,000 samples), NLPR \cite{peng2014rgbd} (1,000 samples), STERE \cite{niu2012leveraging} (1,000 samples), RGBD135 \cite{cheng2014depth} (135 samples), LFSD \cite{li2014saliency} (100 samples), and SIP \cite{fan2019rethinking} (929 samples), as well as the recently proposed DUT-RGBD dataset \cite{Piao2019depth} (only the testing subset, 400 samples). Following \cite{zhao2019contrast}, we choose the same 700 samples from NLPR and 1,500 samples from NJU2K, resulting in 2,200 samples in total, to train our algorithms. The remaining samples are used for testing. Besides, when jointly training \emph{JL-DCF}~with both RGB-D and RGB sources, we use as the RGB dataset the training set (10,553 images) of DUTS \cite{wang2017learning}, which is currently the largest saliency detection benchmark with an explicit training/test evaluation protocol and is commonly used for training recent RGB SOD models \cite{zhang2018progressive,feng2019attentive,wang2019salient}. For fair comparison, we apply the model trained on this training set to the other datasets.
For evaluation purposes, we adopt five widely used metrics, namely precision-recall curve \cite{Achanta2009,cheng2015global,Borji2015TIP}, S-measure ($S_\alpha$) \cite{Fan2017}, maximum F-measure ($F_{\beta}^{\textrm{max}}$) \cite{Borji2015TIP,hou2019deeply}, maximum E-measure ($E_\phi^{\textrm{max}}$) \cite{fan2018enhanced}, and MAE ($M$) \cite{Perazzi2012,Borji2015TIP}. Given a saliency map $S_{map}$ and the ground truth map $G$, the definitions for these metrics are as follows:
\begin{enumerate}
\item \emph{Precision-Recall (PR)} \cite{Achanta2009,cheng2015global,Borji2015TIP} is defined as:
\begin{equation} \label{equ9}
\begin{array}{l}
\textrm{Precision}(T)=\frac{|M(T)\cap{G}|}{|M(T)|},~~\textrm{Recall}(T) =\frac{|M(T)\cap{G}|}{|G|},
\end{array}
\end{equation}
where $M(T)$ is the binary mask obtained by directly thresholding the saliency map $S_{map}$ with the threshold $T$, and $|\cdot|$ is the total area of the mask(s) inside the map. By varying $T$, a precision-recall curve can be obtained.
\item \emph{S-measure ($S_{\alpha}$)}~\cite{Fan2017} was proposed to measure the spatial structure similarities of saliency
maps:
\begin{equation} \label{equ13}
S_{\alpha}=\alpha \ast S_{o} + (1- \alpha)\ast S_{r},
\end{equation}
where $\alpha$ is a balance parameter between object-aware structural similarity $S_{o}$ and region-aware structural similarity $S_{r}$. We set $\alpha=0.5$, as in \cite{Fan2017}.
\item \emph{F-measure} ($F_{\beta}$) \cite{Borji2015TIP,hou2019deeply} is defined as:
\begin{equation} \label{equ10}
F_{\beta}=\frac{(1+\beta^{2})\textrm{Precision} \cdot \textrm{Recall}}{\beta^{2}\cdot {\textrm{Precision}}+\textrm{Recall}},
\end{equation}
where $\beta$ weights the precision against the recall. Following common practice, we set $\beta^{2}=0.3$, since precision is often weighted more than recall. In order to obtain a single-valued score, a threshold is applied to binarize a saliency map into a foreground mask map. In this paper, we report the maximum F-measure, \emph{i.e.},~$F_{\beta}^{\textrm{max}}$, computed from the precision-recall curve by running over all threshold values (\emph{i.e.},~[0, 255]).
\item \emph{E-measure ($E_{\phi}$)} was proposed in \cite{fan2018enhanced} as an enhanced-measure for comparing two binary maps. This metric first aligns two binary maps according to their global means and then computes the local pixel-wise correlation. Finally, an enhanced alignment matrix $\phi$ of the same size as the binary maps is obtained and $E_{\phi}$ is defined as:
\begin{equation} \label{equ10_2}
E_{\phi}=\frac{1}{W \cdot H}\sum_{x=1}^{W}\sum_{y=1}^{H}\phi(x,y),
\end{equation}
where $\phi(x,y)$ denotes the matrix entry at pixel location $(x,y)$, and $W$ and $H$ are the width and height of $S_{map}$. The range of $E_{\phi}$ lies in the interval [0, 1]. To extend it for comparing a non-binary saliency map against a binary ground truth map, we follow a strategy similar to that of $F_{\beta}^{\textrm{max}}$. Specifically, we first binarize a saliency map into a series of foreground maps using all possible threshold values in [0, 255], and report the maximum E-measure, \emph{i.e.},~$E_{\phi}^{\textrm{max}}$, among them.
\begin{figure*}[t!]
\centering
\includegraphics[width=.98\textwidth]{Fig_SOTApr-min}
\vspace{-10pt}
\caption{Precision-recall curves of SOTA methods and the proposed \emph{JL-DCF}~and \emph{JL-DCF}$^*$ across six datasets.}
\label{fig_sotapr}
\end{figure*}
\item \emph{Mean Absolute Error (MAE)} \cite{Perazzi2012,Borji2015TIP} is defined as:
\begin{equation} \label{equ11}
M=\frac{1}{W \cdot H}\sum_{x=1}^{W}\sum_{y=1}^{H}|S_{map}(x,y)-G(x,y)|,
\end{equation}
where $S_{map}(x,y)$ and $G(x,y)$ correspond to the saliency value and ground truth value at pixel location $(x,y)$. $W$ and $H$ are the width and height of the saliency map $S_{map}$.
\end{enumerate}
\noindent In summary, for the five metrics above, higher precision-recall curves, $S_{\alpha}$, $F_{\beta}^{\textrm{max}}$, $E_{\phi}^{\textrm{max}}$, and lower $M$ indicate better performance.
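For reference, the simpler metrics can be computed as in the following sketch (assuming saliency maps and ground truths are given as NumPy arrays scaled to [0, 1]; function names are illustrative):
\begin{verbatim}
import numpy as np

def mae(smap, gt):
    # Mean absolute error between a saliency map and
    # the ground truth, both in [0, 1].
    return np.abs(smap - gt).mean()

def max_f_measure(smap, gt, beta2=0.3):
    # Maximum F-measure over thresholds 0..255.
    gt = gt > 0.5
    best = 0.0
    for t in range(256):
        m = smap >= t / 255.0
        tp = np.logical_and(m, gt).sum()
        prec = tp / (m.sum() + 1e-8)
        rec = tp / (gt.sum() + 1e-8)
        f = ((1 + beta2) * prec * rec
             / (beta2 * prec + rec + 1e-8))
        best = max(best, f)
    return best
\end{verbatim}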
\begin{table}\centering
\caption{Details of the two extra convolutional (Conv.) layers inserted into \emph{side} \emph{path1}$\sim$\emph{path6} (for both the VGG-16 and ResNet-101 configurations). Parameters in the brackets below, from left to right, are: kernel size, channel number, stride, dilation rate, and padding.}\label{tab_sideconv}
{
\renewcommand{\tabcolsep}{5.5mm}
\begin{tabular}{c||c||c}
\hline
\emph{No.}\textbackslash Layers & 1$^{st}$ Conv. layer & 2$^{nd}$ Conv. layer \\
\hline
\emph{Side path1} & (3, 128, 1, 1, 1) & (3, 128, 1, 1, 1)\\
\hline
\emph{Side path2} & (3, 128, 1, 1, 1) & (3, 128, 1, 1, 1)\\
\hline
\emph{Side path3} & (5, 256, 1, 1, 2) & (5, 256, 1, 1, 2)\\
\hline
\emph{Side path4} & (5, 256, 1, 1, 2) & (5, 256, 1, 1, 2)\\
\hline
\emph{Side path5} & (5, 512, 1, 1, 2) & (5, 512, 1, 1, 2)\\
\hline
\emph{Side path6} & (7, 512, 1, 2, 6) & (7, 512, 1, 2, 6)\\
\hline
\end{tabular}
}
\end{table}
\subsection{Implementation Details}\label{sec42}
The proposed \emph{JL-DCF}~scheme is generally independent of the network backbone. In this work, we implement two versions of \emph{JL-DCF}, based on VGG-16 \cite{Simonyan14c} and ResNet-101 \cite{He2015Deep}, respectively. We fix the input size of the network as $320 \times 320 \times 3$. Simple gray color mapping is adopted to convert a depth map into a three-channel map.
\textbf{VGG-16 configuration:} For VGG-16, with the fully connected layers removed (leaving 13 convolutional layers), the \emph{side} \emph{path1}$\sim$\emph{path6} are successively connected to \emph{conv1\_2}, \emph{conv2\_2}, \emph{conv3\_3}, \emph{conv4\_3}, \emph{conv5\_3}, and \emph{pool5}. Inspired by \cite{hou2019deeply}, we add two extra convolutional layers into each of \emph{side} \emph{path1}$\sim$\emph{path6}.
To augment the resolution of the coarsest feature maps from \emph{side} \emph{path6}, while at the same time preserving the receptive field, we let \emph{pool5} have a stride of 1 and instead use dilated convolution \cite{chen2017deeplab} with a rate of 2 for the two extra side convolutional layers. Details of the extra side convolutional layers are given in Table \ref{tab_sideconv}.
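For instance, the two extra layers of \emph{side} \emph{path6} in Table \ref{tab_sideconv} can be sketched as follows (a PyTorch-style sketch; the input channel number 2048 assumes the ResNet-101 \emph{res5c} output, whereas for the VGG-16 \emph{pool5} it would be 512):
\begin{verbatim}
import torch.nn as nn

# Side path6 (a sketch): two 7x7 convolutions with
# dilation 2; padding 6 keeps the spatial size, since
# the effective kernel size is 7 + (7-1)*(2-1) = 13.
side_path6 = nn.Sequential(
    nn.Conv2d(2048, 512, kernel_size=7, stride=1,
              dilation=2, padding=6),
    nn.ReLU(inplace=True),
    nn.Conv2d(512, 512, kernel_size=7, stride=1,
              dilation=2, padding=6),
    nn.ReLU(inplace=True),
)
\end{verbatim}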
Generally, the coarsest features produced by our modified VGG-16 backbone have a spatial size of $20 \times 20$, as in Fig. \ref{fig_blockdiagram}.
\textbf{ResNet-101 configuration:} Similar to the VGG-16 case above, the spatial size of the coarsest features produced by our modified ResNet backbone is also $20 \times 20$. As the first convolutional layer of ResNet already has a stride of 2, the features from the shallowest level have a spatial size of $160 \times 160$. To obtain full-size ($320\times320$) features without trivial up-sampling, we borrow the \emph{conv1\_1} and \emph{conv1\_2} layers from VGG-16 for feature extraction. \emph{Side} \emph{path1}$\sim$\emph{path6} are connected to the borrowed \emph{conv1\_2}, and to \emph{conv1}, \emph{res2c}, \emph{res3b3}, \emph{res4b22}, and \emph{res5c} of the ResNet, respectively. We also change the stride of the \emph{res5a} block from 2 to 1, and subsequently use dilated convolution with rate 2.
\textbf{Decoder configuration:} All CP modules in Fig. \ref{fig_blockdiagram} are $3 \times 3$ convolutions with $k=64$ filters, and all FA modules are Inception modules. Up-sampling is achieved by simple bilinear interpolation. As depicted in Fig. \ref{fig_blockdiagram}, to align the feature sizes in the decoder, the output from an FA module is up-sampled by various factors. In the extreme case, the output from FA5 is up-sampled by factors of $2$, $4$, $8$, and $16$. The final output from FA1 has a spatial size of $320 \times 320$, identical to that of the initial input.
\begin{figure*}[t!]
\includegraphics[width=.98\textwidth]{Fig_SOTAVisual-min2}
\vspace{-10pt}
\caption{Visual comparisons of \emph{JL-DCF}~(trained with only RGB-D data) \fdp{and \emph{JL-DCF}$^*$ (trained with both RGB-D and RGB data)} with SOTA RGB-D saliency models. The jointly learned coarse prediction maps ($S^c_{rgb}$ and $S^c_{d}$) from RGB and depth are also shown together with the final maps ($S^{f}$) of \emph{JL-DCF}.}
\label{fig_sotavisual}
\end{figure*}
\textbf{Training setup:} We implement \emph{JL-DCF}~on Caffe \cite{jia2014caffe}. During training, the backbone \cite{Simonyan14c,He2015Deep} is initialized by the pre-trained model, and the other layers are randomly initialized. We fine-tune the entire network through end-to-end joint learning. Training data is augmented by mirror reflection, doubling the amount of data. The momentum parameter is set to 0.99, the learning rate is set to $lr=10^{-9}$, and the weight decay is 0.0005. The weight $\lambda$ in Equ. (\ref{equ_loss}) is set to 256 ($=16^2$) to balance the loss between the low- and high-resolution predictions. Stochastic gradient descent (SGD) learning is adopted and accelerated by an NVIDIA 1080Ti GPU.
The training time is about 20 hours/18 hours for 40 epochs under the ResNet-101/VGG-16 configuration. Incorporating RGB data for multi-task training over the same number of epochs on ResNet-101 requires seven more hours.
\subsection{Comparisons to SOTAs}\label{sec43}
\begin{figure}\centering
\includegraphics[width=0.38\textwidth]{Fig_DUTpr}
\vspace{-15pt}
\caption{Precision-recall curves of SOTA methods and the proposed \emph{JL-DCF}~on DUT-RGBD dataset \cite{Piao2019depth}.}
\label{fig_dutpr}
\end{figure}
We compare \emph{JL-DCF}~(ResNet configuration) with 14 SOTA methods. Among the competitors, DF~\cite{qu2017rgbd}, AFNet~\cite{wang2019adaptive}, CTMF~\cite{han2017cnns}, MMCI~\cite{chen2019multi}, PCF~\cite{chen2018progressively}, TANet~\cite{chen2019three}, CPFP~\cite{zhao2019contrast}, D3Net~\cite{fan2019rethinking}, and DMRA~\cite{Piao2019depth} are recent deep learning-based methods, while ACSD~\cite{ju2014depth}, LBE~\cite{feng2016local}, DCMC~\cite{cong2016saliency}, MDSF~\cite{song2017depth}, and SE~\cite{guo2016salient} are traditional techniques using various hand-crafted features/hypotheses.
Specifically, ``\emph{JL-DCF}'' refers to the model obtained using only RGB-D training data, while ``\emph{JL-DCF}$^*$'' refers to training the model with both RGB-D and RGB data.
Quantitative results on the six widely used datasets are shown in Table \ref{tab_sota}\footnote{There was a small error in the LFSD scores in our previous conference version \cite{fu2020jldcf}, as we later found that a GT map, ``29.png'', was corrupted due to format conversion. This error led to a small performance drop for all models, but did not change their relative rankings. We have corrected this GT map as well as all the scores.}. Notable performance gains of \emph{JL-DCF}~over existing and recently proposed techniques, such as CPFP \cite{zhao2019contrast}, D3Net \cite{fan2019rethinking}, and DMRA \cite{Piao2019depth}, can be seen in all four metrics. This validates the consistent effectiveness of \emph{JL-DCF}~and its generalizability. Besides, as seen in Table \ref{tab_sota},~\emph{JL-DCF}$^*$ improves the performance over \emph{JL-DCF}~on most datasets, showing that transferring knowledge from the RGB task to the RGB-D task does benefit the latter and brings a solid improvement, \emph{e.g.}, a 0.6\% average gain on $S_\alpha$ across all six datasets. Comparisons of precision-recall curves are given in Fig. \ref{fig_sotapr}, where \emph{JL-DCF}~and \emph{JL-DCF}$^*$ achieve the best results compared to all existing techniques.
\begin{figure*}[t!]
\includegraphics[width=1.0\textwidth]{Fig_AblationVisual-min}
\vspace{-15pt}
\caption{Visual examples from the NLPR, STERE, RGBD135, and SIP datasets for ablation studies. Generally, the full implementation of \emph{JL-DCF}~(ResNet+CM+RGB-D, highlighted in the red box) achieves the closest results to the ground truth.}
\label{fig_ablationvisual}
\end{figure*}
Visual examples are shown in Fig. \ref{fig_sotavisual}. \fdp{\emph{JL-DCF}~and \emph{JL-DCF}$^*$ appear} to be more effective at utilizing depth information for cross-modal compensation, making them better at detecting target objects in the RGB-D mode. Additionally, the deeply-supervised coarse predictions of \fdp{\emph{JL-DCF}} are shown in Fig. \ref{fig_sotavisual}. One can see that they provide basic object localization support for the subsequent cross-modal refinement, and that our densely cooperative fusion architecture learns an adaptive, ``image-dependent'' way of fusing this support with the hierarchical multi-view features. This shows that the fusion process does not degenerate towards either of the two views (RGB/depth), leading to boosted performance after fusion.
Table \ref{tab_dutrgbd} and Fig. \ref{fig_dutpr} further show the comparative results on the latest DUT-RGBD dataset \cite{Piao2019depth}. Our \emph{JL-DCF}~again shows superior performance over all SOTA models. Note that the experimental results on this dataset clearly validate the generalizability of \emph{JL-DCF}: it was not additionally trained on the training set of DUT-RGBD, which contains 800 pairs of RGB and depth images, yet it still outperforms DMRA, whose training data included the DUT-RGBD training set, by notable margins.
\begin{table}[t!]
\renewcommand{\arraystretch}{0.8}
\caption{Quantitative measures on the DUT-RGBD testing set (400 images) \cite{Piao2019depth}. Compared models are those whose results on this dataset are publicly available and include: DF \cite{qu2017rgbd}, CTMF \cite{han2017cnns}, MMCI \cite{chen2019multi}, PCF \cite{chen2018progressively}, TANet \cite{chen2019three}, CPFP \cite{zhao2019contrast}, DMRA \cite{Piao2019depth}, \emph{JL-DCF}~(Ours) and \emph{JL-DCF}$^*$ (Ours$^*$).
}\label{tab_dutrgbd}
\centering
\footnotesize
\setlength{\tabcolsep}{0.95mm}
\begin{tabular}{r||c|c|c|c|c|c|c|c|c}
\hline
Metric & \tabincell{c}{\cite{qu2017rgbd}} & \tabincell{c}{\cite{han2017cnns}} & \tabincell{c}{\cite{chen2019multi}} & \tabincell{c}{\cite{chen2018progressively}} & \tabincell{c}{\cite{chen2019three}} & \tabincell{c}{\cite{zhao2019contrast}} & \tabincell{c}{\cite{Piao2019depth}}& \tabincell{c}{Ours}&\tabincell{c}{Ours$^*$}\\
\hline
\hline
$S_\alpha\uparrow$ &0.730&0.831&0.791&0.801 &0.808&0.749&0.889&\emph{0.905}&\textbf{0.913}\\
$F_{\beta}^{\textrm{max}}\uparrow$ &0.734&0.823& 0.767 & 0.771& 0.790 & 0.718& 0.898 & \emph{0.911} & \textbf{0.916}\\
$E_{\phi}^{\textrm{max}}\uparrow$ & 0.819 & 0.899& 0.859& 0.856& 0.861 & 0.811& 0.933 & \emph{0.943} & \textbf{0.949}\\
$M\downarrow$ & 0.145 & 0.097& 0.113& 0.100& 0.093 & 0.099& 0.048 & \emph{0.042} & \textbf{0.039}\\
\hline
\end{tabular}
\end{table}
\subsection{Ablation Studies}\label{sec44}
We conduct thorough ablation studies by removing or replacing components from the full implementation of \emph{JL-DCF}. We set the ResNet version of \emph{JL-DCF}~(trained with only RGB-D data) as reference, and then compare various ablated/modified models to it. We denote this reference version as ``\emph{JL-DCF}~(ResNet+CM+RGB-D)'', where ``CM'' refers to the usage of CM modules and ``RGB-D'' refers to both RGB and depth inputs.
Firstly, to compare different backbones, a version ``\emph{JL-DCF}~(VGG+CM+RGB-D)'' is trained by replacing the ResNet backbone with VGG, while keeping other settings unchanged. To validate the effectiveness of the adopted cooperative fusion modules, we train another version, ``\emph{JL-DCF}~(ResNet+C+RGB-D)'', by replacing the CM modules with a concatenation operation. To demonstrate the effectiveness of combining RGB and depth, we train two versions, ``\emph{JL-DCF}~(ResNet+RGB)'' and ``\emph{JL-DCF}~(ResNet+D)'', where all the batch-related operations (such as CM modules) in Fig. \ref{fig_blockdiagram} are replaced with identity mappings, while all the other settings, including the dense decoder and deep supervision, are kept unchanged. Note that this validation is important to show that our network has learned complementary information by fusing RGB and depth. Lastly, to illustrate the benefit of joint learning, we train a scheme ``SL-DCF (VGG+CM+RGB-D)'' using two separate backbones for RGB and depth; ``SL'' stands for ``Separate Learning'', in contrast to the proposed ``Joint Learning''. In this test, we adopt VGG-16, which is smaller, since using two separate backbones leads to almost twice the overall model size.
Quantitative comparisons for various metrics are shown in Table \ref{tab_ablation}. Two SOTA methods, CPFP \cite{zhao2019contrast} and D3Net \cite{fan2019rethinking}, are listed for reference. Fig. \ref{fig_ablationvisual} shows visual ablation comparisons. Five different observations can be made:
\textbf{I. ResNet-101 $vs.$ VGG-16:} From the comparison between columns ``\texttt{A}'' and ``\texttt{B}'' in Table \ref{tab_ablation}, the superiority of the ResNet backbone over VGG-16 is evident, which is consistent with previous works. Note that the VGG version of our scheme still outperforms the leading methods CPFP (VGG-16 backbone) and D3Net (ResNet backbone).
\textbf{II. Effectiveness of CM modules:} Comparing columns ``\texttt{A}'' and ``\texttt{C}'' demonstrates that replacing the CM modules with concatenation operations leads to a certain degree of degradation. The underlying reason is that the whole network tends to bias its learning towards only RGB information while ignoring depth, since it can achieve fairly good results (column ``\texttt{D}'') by doing so on most datasets. Although concatenation is a popular way to fuse features, the learning may easily become trapped without appropriate guidance. In contrast, our CM modules perform an explicit fusion operation across the RGB and depth modalities.
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{Fig_Learningcurve-min}
\vspace{-10pt}
\caption{Learning curve comparison between joint learning (\emph{JL-DCF}) and separate learning (SL-DCF).}
\label{fig_learningcurve}
\vspace{-0.3cm}
\end{figure}
\textbf{III. Combining RGB and depth:} The effectiveness of combining RGB and depth for boosting the performance is clearly validated by the consistent improvement over most datasets (compare column ``\texttt{A}'' with columns ``\texttt{D}'' and ``\texttt{E}''). The only exception is STERE \cite{niu2012leveraging}, because the quality of the depth maps in this dataset is much worse than in the other datasets. Visual examples are shown in the 3$^{rd}$ and 4$^{th}$ rows of Fig. \ref{fig_ablationvisual}. We find that many depth maps from STERE are too coarse and have very inaccurate object boundaries, misaligned with the true objects. Absorbing such unreliable depth information may, in turn, degrade the performance. Quantitative evidence can be seen in Table \ref{tab_ablation}, column ``\texttt{E}'' (STERE dataset), where solely using depth cues achieves much worse performance (about 16\%/20\% lower on $S_{\alpha}$/$F_{\beta}^{\textrm{max}}$ compared to RGB) than on other datasets.
\textbf{IV. RGB only $vs.$ depth only:} The comparison between columns ``\texttt{D}'' and ``\texttt{E}'' in Table \ref{tab_ablation} shows that using RGB data for saliency estimation is superior to using depth in most cases, indicating that the RGB view is generally more informative. However, using depth information achieves better results than RGB on SIP \cite{fan2019rethinking} and RGBD135 \cite{cheng2014depth}, as visualized in Fig. \ref{fig_ablationvisual}. This implies that the depth maps of these two datasets are of relatively good quality.
\textbf{V. Efficiency of JL component:}
Existing models usually adopt separate learning to extract features from RGB and depth data. In contrast, our \emph{JL-DCF}~adopts a joint learning strategy to obtain the features simultaneously.
We compare the two learning strategies and find that separate learning (two separate backbones) tends to increase the training difficulty.
Fig. \ref{fig_learningcurve} shows typical learning curves for such a case. In the separate learning setting, with an initial learning rate of $lr=10^{-9}$, the network is easily trapped in a local optimum with high loss, while the joint learning setting (shared network) converges nicely.
Further, for separate learning, if the learning rate is set to $lr=10^{-10}$, the learning process is rescued from local oscillation but converges slowly compared to our joint learning strategy. As shown in columns ``\texttt{B}'' and ``\texttt{F}'' in Table \ref{tab_ablation}, the resulting converged model after 40 epochs achieves worse performance than \emph{JL-DCF}, namely a 1.1\%/1.76\% overall drop on $S_{\alpha}$/$F_{\beta}^{\textrm{max}}$. We attribute the better performance of \emph{JL-DCF}~to its joint learning from both RGB and depth data.
\begin{table}[t!]
\renewcommand{\arraystretch}{0.8}
\caption{Quantitative evaluation for ablation studies described in Section \ref{sec44}. For different configurations, ``\texttt{A}'': JL-DCF (ResNet+CM+RGB-D), ``\texttt{B}'': JL-DCF (VGG+CM+RGB-D), ``\texttt{C}'': JL-DCF (ResNet+C+RGB-D), ``\texttt{D}'': JL-DCF (ResNet+RGB), ``\texttt{E}'': JL-DCF (ResNet+D), ``\texttt{F}'': SL-DCF (VGG+CM+RGB-D).
}\label{tab_ablation}
\centering
\footnotesize
\setlength{\tabcolsep}{1.0mm}
\begin{tabular}{p{0.8mm}p{0.8mm}r||c|c|c|c|c|c|c|c}
\hline
&& Metric & \tabincell{c}{CPFP} & \tabincell{c}{D3Net} & \tabincell{c}{\texttt{A}} & \tabincell{c}{\texttt{B}} & \tabincell{c}{\texttt{C}} & \tabincell{c}{\texttt{D}} & \tabincell{c}{\texttt{E}} & \tabincell{c}{\texttt{F}}\\
\hline
\hline
\multirow{4}{*}{\begin{sideways}\textit{NJU2K}\end{sideways}} & \multirow{4}{*}{\begin{sideways}\cite{ju2014depth}\end{sideways}}
& $S_\alpha\uparrow$ &0.878&0.895&\textbf{0.903}&0.897
&0.900&0.895&0.865&0.886\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ &0.877&0.889
& \textbf{0.903} & 0.899& 0.898 & 0.892& 0.863 & 0.883\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ & 0.926 & 0.932& \textbf{0.944}& 0.939& 0.937 & 0.937& 0.916 & 0.929\\
&& $M\downarrow$ & 0.053 & 0.051& \textbf{0.043}& 0.044& 0.045 & 0.046& 0.063 & 0.053\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{NLPR}\end{sideways}} & \multirow{4}{*}{\begin{sideways}\cite{peng2014rgbd}\end{sideways}}& $S_\alpha\uparrow$ & 0.888 & 0.906& \textbf{0.925}& 0.920& 0.924 & 0.922& 0.873 & 0.901\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ & 0.868& 0.885& \textbf{0.916}& 0.907& 0.914 & 0.909& 0.843 & 0.881\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ & 0.932 & 0.946& \textbf{0.962}& 0.959& 0.961 & 0.957& 0.930 & 0.946\\
&& $M\downarrow$ & 0.036 & 0.034& \textbf{0.022}& 0.026& 0.023 & 0.025& 0.041 & 0.033\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{STERE}\end{sideways}}& \multirow{4}{*}{\begin{sideways}\cite{niu2012leveraging}\end{sideways}} & $S_\alpha\uparrow$ & 0.879& 0.891& 0.905& 0.894& 0.906 & \textbf{0.909}& 0.744 & 0.886\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ & 0.874& 0.881& \textbf{0.901}& 0.889& 0.899& 0.901 & 0.708& 0.876\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ & 0.925 & 0.930& \textbf{0.946}& 0.938& 0.945 & 0.946& 0.834 & 0.931\\
&& $M\downarrow$ & 0.051 & 0.054& 0.042& 0.046& 0.041 & \textbf{0.038}& 0.110 & 0.053\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{RGBD135}\end{sideways}}& \multirow{4}{*}{\begin{sideways}\cite{cheng2014depth}\end{sideways}} & $S_\alpha\uparrow$ & 0.872& 0.904& \textbf{0.929}& 0.913& 0.916 & 0.903& 0.918 & 0.893\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ & 0.846& 0.885& \textbf{0.919}& 0.905& 0.906 & 0.894& 0.906 & 0.876\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ & 0.923 & 0.946& \textbf{0.968}& 0.955& 0.957 & 0.947& 0.967 & 0.950\\
&& $M\downarrow$ & 0.038 & 0.030& \textbf{0.022}& 0.026& 0.025 & 0.027& 0.027 & 0.033\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{LFSD}\end{sideways}}& \multirow{4}{*}{\begin{sideways}\cite{li2014saliency}\end{sideways}} & $S_\alpha\uparrow$ & 0.820& 0.832& \textbf{0.862}& 0.841& 0.860 & 0.853& 0.760 & 0.834\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ & 0.821& 0.819& \textbf{0.866}& 0.844& 0.858 & 0.850& 0.768 & 0.832\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ & 0.864 & 0.864 & \textbf{0.901}& 0.885& 0.901 & 0.897& 0.824 & 0.872\\
&& $M\downarrow$ & 0.095 & 0.099& \textbf{0.071}& 0.084& 0.071 & 0.076& 0.119 & 0.093\\
\hline
\multirow{4}{*}{\begin{sideways}\textit{SIP}\end{sideways}}& \multirow{4}{*}{\begin{sideways}\cite{fan2019rethinking}\end{sideways}} & $S_\alpha\uparrow$ & 0.850& 0.864& \textbf{0.879}& 0.866& 0.870 & 0.855& 0.872 & 0.865\\
&& $F_{\beta}^{\textrm{max}}\uparrow$ & 0.851& 0.862& \textbf{0.885}& 0.873& 0.873 & 0.857& 0.877 & 0.863\\
&& $E_{\phi}^{\textrm{max}}\uparrow$ & 0.903 & 0.910& \textbf{0.923}& 0.916& 0.916 & 0.908& 0.920 & 0.913\\
&& $M\downarrow$ & 0.064 & 0.063& \textbf{0.051}& 0.056& 0.055& 0.061& 0.056 & 0.061\\
\hline
\end{tabular}
\end{table}
\fkr{
\textbf{Further ablation analyses:}
Besides the five key observations above, there are other flexible parts of \emph{JL-DCF}~worth discussing, such as the FA modules and dense connections. We have therefore formed extra configurations ``\texttt{G}''$\sim$``\texttt{J}'', where
``\texttt{G}'': removing all FA modules from ``\texttt{A}\footnote{It indicates the aforementioned ``\texttt{A}'' in Table \ref{tab_ablation}, and similarly hereinafter.}'' to obtain a degenerated decoder which linearly sums up skips from all scales;
``\texttt{H}'': removing all dense connections from ``\texttt{A}''; ``\texttt{I}'': removing all dense connections from ``\texttt{A}'', while leaving only the skip connection from FA5 to FA1, in a residual manner;
``\texttt{J}'': replacing the ResNet-101 backbone with a more powerful DenseNet-161 \cite{huang2017densely} to examine whether a potential boost of \emph{JL-DCF}~can be obtained with other advanced backbones. For ``\texttt{J}'', the DenseNet is incorporated into \emph{JL-DCF}~by connecting \emph{side} \emph{path1}$\sim$\emph{path6} to \emph{conv1\_2} of VGG-16 (similar to the ResNet configuration in Section \ref{sec42}), and to \emph{conv0} and \emph{denseblock1}$\sim$\emph{denseblock4} of the DenseNet.
}
\fkr{
Results are shown in Table \ref{tab_ablation2}. In brief, we find that adding FA modules (\emph{i.e.}, ``\texttt{A}'') for non-linear aggregation makes the network more powerful, while removing all FA modules (\emph{i.e.}, ``\texttt{G}'') results in an average $\sim$1.38\% $F_{\beta}^{\textrm{max}}$ drop.
Regarding the employed dense connections, one can see that ``\texttt{A}'' achieves improvement over ``\texttt{H}'' on most datasets (except on RGBD135, where similar results are obtained), showing that dense connections can somewhat enhance the robustness of the network. Another interesting observation is that the residual connection ``\texttt{I}'' works comparably well on NJU2K, STERE and RGBD135. This is because, although the residual connection is a simplification of the dense connections, it alleviates the gradual dilution of deep location information and offers extra high-level guidance, as also observed in \cite{liu2019simple}.
Regarding ``\texttt{J}'', we witness a very encouraging boost by further switching the backbone from ResNet-101 to DenseNet-161. This indicates that more powerful backbones can be readily exploited within our \emph{JL-DCF}~framework.
}
\begin{table}[t!]
\centering
\caption{Further ablation analyses, where details about ``\texttt{G}''$\sim$``\texttt{J}'' can be found in Section \ref{sec44}. Here $F_{\beta}$ means $F_{\beta}^{\textrm{max}}$, whose superscript is omitted for the sake of space. }\label{tab_ablation2}
\footnotesize
{
\setlength{\tabcolsep}{0.3mm}
\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c}
\hline
& \multicolumn{2}{c|}{\textit{NJU2K}} & \multicolumn{2}{c|}{\textit{NLPR}} & \multicolumn{2}{c|}{\textit{STERE}} & \multicolumn{2}{c|}{\textit{RGBD135}} & \multicolumn{2}{c|}{\textit{LFSD}} & \multicolumn{2}{c}{\textit{SIP}}\\
\hline
& $S_{\alpha}\uparrow$ & $F_{\beta}\uparrow$ & $S_{\alpha}\uparrow$ & $F_{\beta}\uparrow$ & $S_{\alpha}\uparrow$ & $F_{\beta}\uparrow$ & $S_{\alpha}\uparrow$ & $F_{\beta}\uparrow$ & $S_{\alpha}\uparrow$ & $F_{\beta}\uparrow$ & $S_{\alpha}\uparrow$ & $F_{\beta}\uparrow$\\
\hline
\hline
\texttt{A} & 0.903 & 0.903 & 0.925 & 0.916 & 0.905 & 0.901 & 0.929 & 0.919 & 0.862 & 0.866 & 0.879 & 0.885\\
\hline
\texttt{G} & 0.893 & 0.893 & 0.911 & 0.894 & 0.893 & 0.884 & 0.924 & 0.912 & 0.855 & 0.852 & 0.870 & 0.872\\
\hline
\texttt{H} & 0.902 & 0.902 & 0.922 & 0.911 & 0.904 & 0.898 & 0.930 & 0.923 & 0.854 & 0.857 & 0.874 & 0.879\\
\hline
\texttt{I} & 0.904 & 0.906 & 0.924 & 0.913 & 0.905 & 0.901 & 0.929 & 0.921 & 0.859 & 0.861 & 0.876 & 0.881\\
\hline
\texttt{J} & \textbf{0.917} & \textbf{0.917} & \textbf{0.934} & \textbf{0.924} & \textbf{0.909} & \textbf{0.905} & \textbf{0.934} & \textbf{0.926} & \textbf{0.863} & \textbf{0.868} & \textbf{0.894} & \textbf{0.903}\\
\hline
\end{tabular}
}
\end{table}
\subsection{Computational Efficiency}\label{sec45}
We evaluate the computation time of \emph{JL-DCF}~on a desktop equipped with an Intel i7-8700K CPU (3.7GHz), 16GB RAM, and an NVIDIA GTX 1080Ti GPU. \emph{JL-DCF}~is implemented in Caffe \cite{jia2014caffe}. We test the inference time of our models using the Matlab interface of Caffe, over the 100 samples (resized to $320 \times 320$) from the LFSD dataset. The average GPU inference times are given in Table \ref{tab_time}.
\begin{table}[!htb]
\centering
\caption{Average GPU inference times (seconds) of \emph{JL-DCF}.}\label{tab_time}
{
\renewcommand{\tabcolsep}{4mm}
\begin{tabular}{c||c||c||c}
\hline
Backbones\textbackslash Components & Overall & JL & DCF \\
\hline
\hline
VGG-16 & 0.089 & 0.065 & 0.024\\
ResNet-101 & 0.111 & 0.087 & 0.024\\
\hline
\end{tabular}
\end{table}
As can be seen, the JL (joint learning) component in \emph{JL-DCF}, which includes the shared backbone, consumes most of the time, while the DCF (densely cooperative fusion) component takes only 0.024s. \fdp{Note that the latter fact implies that our introduced CM and FA modules, as well as the dense connections, add only a small computational load, since the entire DCF component is quite efficient. For example, the extra dense connections add only 0.008s.} Besides, ResNet-101 is 0.022s slower than VGG-16 due to its larger number of network parameters. This reveals that, in \emph{JL-DCF}, the backbone dominates the time cost, so one way to accelerate the model is to utilize a lightweight backbone; however, the impact on detection accuracy should be considered at the same time.
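For reference, the measurement protocol can be sketched as follows in pycaffe (we actually used the Matlab interface of Caffe; the prototxt/weights file names and the \texttt{data} blob name below are placeholders):
\begin{verbatim}
import time
import caffe
import numpy as np

caffe.set_mode_gpu()
# Hypothetical file names; any JL-DCF deploy model would do.
net = caffe.Net('jldcf_deploy.prototxt', 'jldcf.caffemodel',
                caffe.TEST)

# Stand-ins for the 100 LFSD samples, resized to 320 x 320.
samples = [(np.random.rand(3, 320, 320).astype(np.float32),
            np.random.rand(3, 320, 320).astype(np.float32))
           for _ in range(100)]

times = []
for rgb, depth in samples:
    # JL-DCF feeds RGB and depth through the shared backbone as
    # a batch of two (assuming the input blob is named 'data').
    net.blobs['data'].data[...] = np.stack([rgb, depth], axis=0)
    start = time.time()
    net.forward()
    times.append(time.time() - start)

print('average inference time: %.3fs' % np.mean(times))
\end{verbatim}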
\subsection{Application to Other Multi-modal Fusion Tasks}\label{sec46}
Although the proposed \emph{JL-DCF}~is originally motivated by and evaluated on the RGB-D SOD task, thanks to its general design for exploiting cross-modal commonality and complementarity, it can be applied to other closely-related multi-modal SOD tasks, such as RGB-T (``T'' refers to thermal infrared) SOD \cite{tu2020multi,tu2019rgb,tang2019rgbt,zhang2019rgb} and video SOD (VSOD) \cite{wang2017video,li2018flow,li2018unsupervised,song2018pyramid,fan2019shifting,gu2020pyramid}. Intuitively, salient objects can present similar saliency characteristics in thermal infrared images (Fig. \ref{fig_multimoda}, upper part) and optical flow images (Fig. \ref{fig_multimoda}, lower part) as they generally do in RGB images. Therefore, for SOD, there exists a certain commonality between thermal/flow images and RGB images, as indicated by many traditional models \cite{cong2019video,xu2019video,xu2019video2} based on hand-crafted features. Examples illustrating this concept are shown in Fig. \ref{fig_multimoda}. To apply \emph{JL-DCF}~to RGB-T SOD and VSOD, we simply change the training data of \emph{JL-DCF}~from paired RGB and depth data to paired RGB and thermal/flow data, without any other modification to the framework. In addition, because thermal and flow images are commonly converted to the three-channel RGB format, applying a Siamese network to RGB \emph{vs.} thermal/flow inputs is straightforward.
\begin{figure}[thbp]
\centering
\includegraphics[width=0.40\textwidth]{Fig_Multimoda-min}
\vspace{-10pt}
\caption{Illustration of the commonality and complementarity of thermal infrared images (upper two rows) and optical flow images (lower two rows) with respect to RGB images for SOD. Complementary to the RGB view, salient objects are sometimes easier to distinguish in these two views. Meanwhile, salient objects ``stand out'' in these two views as they do in the RGB view, indicating a certain commonality.}
\label{fig_multimoda}
\end{figure}
\textbf{RGB-T (thermal infrared) SOD.} Since, to date, there are only a small number of works related to RGB-T SOD \cite{tu2020multi,tu2019rgb,tang2019rgbt,zhang2019rgb}, there is a lack of universally-agreed evaluation protocols and benchmarks. Following a recent work \cite{tu2020multi}, we test \emph{JL-DCF}~on VT821 \cite{tang2019rgbt}, an RGB-T SOD dataset with 821 samples of aligned RGB and thermal images, and compare our results with those provided by the authors of \cite{tu2020multi}. VT1000 \cite{tu2019rgb}, which contains 1000 samples, is adopted as the training set. The method proposed in \cite{tu2020multi} is referred to as MIED. Meanwhile, \cite{tu2020multi} also provides the adapted results of DMRA \cite{Piao2019depth}, retrained on VT1000.
Quantitative evaluation results on VT821 are shown in Table \ref{tab_vt821}, where we report four different versions of \emph{JL-DCF}: ``\emph{JL-DCF}'', ``\emph{JL-DCF}(T)'', ``\emph{JL-DCF}$^*$'', and ``\emph{JL-DCF}$^*$(T)''. ``\emph{JL-DCF}'' and ``\emph{JL-DCF}$^*$'' are the same models tested in Table \ref{tab_sota}, trained on the RGB-D SOD task. ``\emph{JL-DCF}(T)'' and ``\emph{JL-DCF}$^*$(T)'' refer to the \emph{JL-DCF}~models retrained on the RGB-T data, \emph{i.e.}, VT1000 (40 epochs, initialized in the same way as for the RGB-D task), where the latter means training the model jointly with both RGB-T and RGB data, similar to the RGB-D case mentioned before. From Table \ref{tab_vt821}, first, it can be seen that our four models outperform MIED and DMRA consistently on the two metrics. Surprisingly, even the models trained on RGB-D data (\emph{i.e.}, \emph{JL-DCF}~and \emph{JL-DCF}$^*$) generalize well to this RGB-T SOD task, further validating the robustness and generalizability of our framework. Still, co-training with more RGB data enhances the detection accuracy, whereas retraining \emph{JL-DCF}~with RGB-T data better adapts it to the specific task. Undoubtedly, the best performance is attained by model ``\emph{JL-DCF}$^*$(T)'', surpassing MIED by 2.6\% in $S_{\alpha}$.
\begin{table}[t!]
\renewcommand{\arraystretch}{0.8}
\caption{Comparing \emph{JL-DCF}~to existing RGB-T SOD models on VT821~\cite{tang2019rgbt} dataset.
}\label{tab_vt821}
\centering
\footnotesize
\setlength{\tabcolsep}{1.05mm}
\begin{tabular}{r||c|c|c|c|c|c}
\hline
Metric & MIED & DMRA & \emph{JL-DCF} & \emph{JL-DCF}(T) & \emph{JL-DCF}$^*$ & \emph{JL-DCF}$^*$(T)\\
\hline
\hline
$S_\alpha\uparrow$ & 0.866 & 0.844 & 0.873 & 0.876 & \emph{0.885} & \textbf{0.892}\\
$M\downarrow$ & 0.053 & 0.049 & 0.037 & 0.037 & \textbf{0.031} & \emph{0.033}\\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\renewcommand{\arraystretch}{0.8}
\caption{Comparing \emph{JL-DCF}~to existing VSOD models on five widely used benchmark datasets.
}\label{tab_vsod}
\centering
\footnotesize
{
\setlength{\tabcolsep}{0.5mm}
\begin{tabular}{r|cc|cc|cc|cc|cc}
\hline
& \multicolumn{2}{c|}{\textit{DAVIS-T}} & \multicolumn{2}{c|}{\textit{FBMS-T}} & \multicolumn{2}{c|}{\textit{ViSal}} & \multicolumn{2}{c|}{\textit{VOS}} & \multicolumn{2}{c}{\textit{DAVSOD}} \\
& \multicolumn{2}{c|}{\cite{perazzi2016benchmark}} & \multicolumn{2}{c|}{\cite{ochs2013segmentation}} & \multicolumn{2}{c|}{\cite{wang2015consistent}} & \multicolumn{2}{c|}{\cite{li2017benchmark}} & \multicolumn{2}{c}{\cite{fan2019shifting}} \\
\cline{2-11}
Model & $S_{\alpha}\uparrow$ & $M\downarrow$ & $S_{\alpha}\uparrow$ & $M\downarrow$ & $S_{\alpha}\uparrow$ & $M\downarrow$ & $S_{\alpha}\uparrow$ & $M\downarrow$ & $S_{\alpha}\uparrow$ & $M\downarrow$ \\
\hline
\hline
\scriptsize{DLVS \cite{wang2017video}} & 0.794 & 0.061 & 0.794 & 0.091 & 0.881 & 0.048 & 0.760 & 0.099 & 0.657 & 0.129 \\
\scriptsize{FGRN \cite{li2018flow}} & 0.838 & 0.043 & 0.809 & 0.088 & 0.861 & 0.045 & 0.715 & 0.097 & 0.693 & 0.098 \\
\scriptsize{MBNM \cite{li2018unsupervised}} & 0.887 & 0.031 & 0.857 & 0.047 & 0.898 & 0.020 & 0.742 & 0.099 & 0.637 & 0.159 \\
\scriptsize{PDBM \cite{song2018pyramid}} & 0.882 & 0.028 & 0.851 & 0.064 & 0.907 & 0.032 & 0.818 & 0.078 & 0.698 & 0.116 \\
\scriptsize{SSAV \cite{fan2019shifting}} & 0.893 & 0.028 & 0.879 & \textbf{0.040} & 0.943 & 0.020 & 0.819 & 0.073 & 0.724 & 0.092 \\
\scriptsize{PCSA \cite{gu2020pyramid}} & 0.902 & 0.022 & 0.866 & 0.041 & \textbf{0.946} & 0.017 & \textbf{0.827} & 0.065 & 0.741 & \textbf{0.086} \\
\hline
\emph{JL-DCF}$^*$ & \textbf{0.903} & \textbf{0.022} & \textbf{0.884} & 0.044 & 0.940 & \textbf{0.017} & 0.825 & \textbf{0.063} & \textbf{0.756} & 0.091 \\
\hline
\end{tabular}
}
\end{table}
\textbf{Video SOD.} \emph{JL-DCF}~can also be applied to VSOD. We first compute forward optical flow maps of RGB frames using FlowNet 2.0 \cite{ilg2017flownet}, a SOTA deep model for optical flow estimation. A computed flow map originally has two channels indicating motion displacements. To input it to the JL component of \emph{JL-DCF}, we convert it to a three-channel color map using the common flow-field color coding technique \cite{ilg2017flownet}. We train \emph{JL-DCF}~on the official training sets of DAVIS (30 clips) \cite{perazzi2016benchmark} and FBMS (29 clips) \cite{ochs2013segmentation}, resulting in a total of 2373 samples of paired RGB and flow images. Besides, we find that in this task, co-training with RGB data is essential for the generalization of the model\footnote{Note that most existing deep-based VSOD works adopt additional RGB SOD data during training, such as \cite{fan2019shifting,gu2020pyramid,song2018pyramid}.}, because the scene diversity of the training samples is quite limited\footnote{Most samples are consecutive frames showing the same objects against similar backgrounds.}. Following \cite{fan2019shifting}, evaluation is conducted on five widely used benchmark datasets: DAVIS-T \cite{perazzi2016benchmark} (20 clips), FBMS-T \cite{ochs2013segmentation} (30 clips), ViSal \cite{wang2015consistent} (17 clips), VOS \cite{li2017benchmark} (40 clips selected by \cite{fan2019shifting}), and DAVSOD \cite{fan2019shifting} (the easy subset with 35 clips).
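For reference, one common implementation of such a flow-to-color conversion uses an HSV encoding, sketched below with OpenCV; this variant differs in details from the Middlebury-style color wheel of \cite{ilg2017flownet}, but serves the same purpose of turning a two-channel flow field into a three-channel color image:
\begin{verbatim}
import cv2
import numpy as np

def flow_to_color(flow):
    # flow: (H, W, 2) float32 array of (dx, dy) displacements.
    # Encode flow direction as hue and magnitude as value.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((flow.shape[0], flow.shape[1], 3),
                   dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2   # hue in [0, 180)
    hsv[..., 1] = 255                     # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255,
                                cv2.NORM_MINMAX)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
\end{verbatim}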
As can be seen from Table \ref{tab_vsod}, although \emph{JL-DCF}~is not specially designed for VSOD (it involves no long-term temporal modeling \cite{fan2019shifting,gu2020pyramid,song2018pyramid}), it achieves performance comparable to SOTAs by learning from RGB and motion images, obtaining the best result on six out of the ten scores. This again shows that \emph{JL-DCF}~may become a unified and general framework for solving multi-modal feature learning and fusion problems, as it is the first work to exploit both cross-modal commonality and complementarity. Fig. \ref{fig_vsod} shows several visual comparisons.
\begin{figure}[t!]
\centering
\includegraphics[width=0.46\textwidth]{Fig_Semantic-min}
\vspace{-5pt}
\caption{\fdp{A testing example of applying \emph{JL-DCF}~to RGB-D semantic segmentation. Although the class ``floormat'' (yellow box) is almost indistinguishable in the depth view, its main part is correctly identified in the final prediction.}}
\label{fig_semantic}
\end{figure}
\subsection{\fkr{Linking to RGB-D Semantic Segmentation}}\label{sec47}
\fdp{To the best of our knowledge, almost no existing model has adopted a Siamese network for RGB-D semantic segmentation. In contrast, most models adopt a two-stream middle-fusion scheme \cite{hazirbas2016fusenet,wang2016learning,park2017rdfnet,deng2019rfbnet,chen2020bi}. The proposed \emph{JL-DCF}~can be adapted to this task by simply replacing the prediction heads \cite{li2016deepsaliency}, \emph{i.e.,} changing the two $(1 \times 1,1)$ convolutional layers before the coarse/final predictions into $(1\times 1,\mathcal{C})$ convolutions, where $\mathcal{C}$ indicates the number of semantic classes. The network is then trained with a pixel-wise multi-class cross-entropy loss against the ground truth label map. Following the standard 40-class train/test protocol on NYUDv2 \cite{gupta2014learning,chen2020bi,park2017rdfnet}, we obtained 35.0\% mIoU by directly applying \emph{JL-DCF}~(Fig. \ref{fig_semantic}) without any other modification; such a result is feasible for this task according to \cite{Long2017Fully}.}
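As an illustration, a minimal PyTorch-style sketch of this head replacement is given below (module and variable names are hypothetical; our actual implementation is in Caffe):
\begin{verbatim}
import torch.nn as nn

NUM_CLASSES = 40  # the 40-class NYUDv2 protocol

class SemanticHead(nn.Module):
    # Replaces a (1 x 1, 1) saliency prediction layer with a
    # (1 x 1, C) convolution producing per-class logits.
    def __init__(self, in_channels, num_classes=NUM_CLASSES):
        super().__init__()
        self.pred = nn.Conv2d(in_channels, num_classes,
                              kernel_size=1)

    def forward(self, x):
        return self.pred(x)  # logits of shape (N, C, H, W)

# Pixel-wise multi-class cross-entropy against the label map;
# 255 is assumed here to mark unlabeled pixels.
criterion = nn.CrossEntropyLoss(ignore_index=255)
\end{verbatim}
The same replacement is applied to both the coarse and the final prediction layers, as described above.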
\begin{figure*}
\includegraphics[width=1\textwidth]{Fig_VSOD-min}
\vspace{-15pt}
\caption{Visual comparisons of \emph{JL-DCF}~with two recent SOTA VSOD models: SSAV-CVPR19 \cite{fan2019shifting} and PCSA-AAAI20 \cite{gu2020pyramid}. The bottom-right group of images shows a failure case, where distracting objects are detected by all models. Note, however, that only \emph{JL-DCF}~gives responses to the small target dog.}
\label{fig_vsod}
\end{figure*}
\fdp{
We note that a potential challenge of using a Siamese network for RGB-D semantic segmentation is the large commonality gap between the RGB and depth modalities on this task: RGB-D semantic segmentation aims to identify \emph{category-specified} regions, for which RGB and depth exhibit large gaps (\emph{i.e.}, weak commonality). This is in clear contrast to RGB-D SOD, where \emph{category-agnostic} salient objects usually ``pop out'' consistently in the two modalities, as illustrated in Fig. \ref{fig_motivation} and Fig. \ref{fig_multimoda}. This raises the question of in which cases, besides RGB-D SOD and the applications shown in this paper, a Siamese network is suitable. We believe this topic is interesting and worthy of deeper investigation in the future.
}
\fkr{To better understand how \emph{JL-DCF}~performs against existing semantic segmentation models, and to bridge these two fields, we carefully adapt several open-source SOTA segmentation models, including PSPNet \cite{zhao2017pyramid}, RDFNet \cite{park2017rdfnet}, DANet \cite{fu2019dual}, SA-Gate \cite{chen2020bi}, and SGNet \cite{Chen2021spatial}, to the RGB-D SOD task.
We replace their multi-class classification heads with the corresponding saliency prediction heads (as mentioned above) and conduct evaluation on RGB-D SOD datasets. Note that RDFNet \cite{park2017rdfnet}, SA-Gate \cite{chen2020bi}, and SGNet \cite{Chen2021spatial} are RGB-D semantic segmentation models, so they can be transferred directly, while PSPNet \cite{zhao2017pyramid} and DANet \cite{fu2019dual} are two representative RGB segmentation models, which we adapt using the late-fusion strategy adopted in \cite{Long2017Fully}. Also, the HHA maps \cite{gupta2014learning,park2017rdfnet,chen2020bi} originally used by some RGB-D semantic segmentation models, like RDFNet and SA-Gate, were replaced with three-channel depth maps as input in our experiments for fair comparison. Comparative results are shown in Table \ref{tab_semantic}, where all the models were based on ResNet-101 and retrained using the same training dataset as \emph{JL-DCF}. We can see that \emph{JL-DCF}~generally outperforms these semantic segmentation models on the five representative datasets. Interestingly, we also observe fairly good results from some SOTA models, especially the latest SA-Gate, which even performs better than some tailored models in Table \ref{tab_sota}. This experimentally reveals an underlying connection and transferability between the two fields, which we believe is interesting to study in the future. Differences between the fields also exist, however, as indicated by the less satisfactory behavior of SGNet. Its degraded performance on this task is probably caused by the fact that it relies on depth as guidance to filter RGB features, whereas in RGB-D SOD, depth information may be less reliable. Another issue we have observed with these models is their coarse prediction with large output strides, leading to less accurate boundary details.}
\begin{table}[!htb]
\renewcommand{\arraystretch}{0.8}
\caption{Comparing \emph{JL-DCF}~to existing semantic segmentation models transferred to the RGB-D saliency task. Symbol ``$\dag$'' means those RGB semantic segmentation models adapted to this task by the late-fusion strategy \cite{Long2017Fully}.
}\label{tab_semantic}
\centering
\footnotesize
{
\setlength{\tabcolsep}{0.4mm}
\begin{tabular}{r|cc|cc|cc|cc|cc}
\hline
& \multicolumn{2}{c|}{\textit{NJU2K}} & \multicolumn{2}{c|}{\textit{NLPR}} & \multicolumn{2}{c|}{\textit{STERE}} & \multicolumn{2}{c|}{\textit{RGBD135}} & \multicolumn{2}{c}{\textit{SIP}} \\
& \multicolumn{2}{c|}{\cite{ju2014depth}} & \multicolumn{2}{c|}{\cite{peng2014rgbd}} & \multicolumn{2}{c|}{\cite{niu2012leveraging}} & \multicolumn{2}{c|}{\cite{cheng2014depth}} & \multicolumn{2}{c}{\cite{fan2019rethinking}} \\
\cline{2-11}
Model & $S_{\alpha}\uparrow$ & $M\downarrow$ & $S_{\alpha}\uparrow$ & $M\downarrow$ & $S_{\alpha}\uparrow$ & $M\downarrow$ & $S_{\alpha}\uparrow$ & $M\downarrow$ & $S_{\alpha}\uparrow$ & $M\downarrow$ \\
\hline
\hline
\scriptsize{PSPNet$^\dag$\cite{zhao2017pyramid}} & 0.901 & 0.045 & 0.918 & 0.028 & 0.899 & 0.046 & 0.909 & 0.026 & 0.856 & 0.066 \\
\scriptsize{RDFNet \cite{park2017rdfnet}} & 0.891 & 0.050 & 0.910 & 0.031 & 0.897 & 0.047 & 0.919 & 0.027 & 0.875 & 0.055 \\
\scriptsize{DANet$^\dag$\cite{fu2019dual}} & 0.900 & 0.044 & 0.912 & 0.027 & 0.889 & 0.048 & 0.896 & 0.027 & 0.870 & 0.056 \\
\scriptsize{SA-Gate \cite{chen2020bi}} & 0.898 & 0.051 & 0.923 & 0.028 & 0.896 & 0.054 & \textbf{0.941} & 0.022 & 0.874 & 0.059 \\
\scriptsize{SGNet \cite{Chen2021spatial}} & 0.873 & 0.060 & 0.888 & 0.039 & 0.883 & 0.055 & 0.899 & 0.034 & 0.832 & 0.075 \\
\hline
\emph{JL-DCF} & \textbf{0.903} & \textbf{0.043} & \textbf{0.925} & \textbf{0.022} & \textbf{0.905} & \textbf{0.042} & 0.929 & \textbf{0.022} & \textbf{0.879} & \textbf{0.051} \\
\hline
\end{tabular}
}\vspace{-0.2cm}
\end{table}
\section{Conclusion}\label{sec5}
We present a novel framework for RGB-D based SOD, named \emph{JL-DCF}, which is based on joint learning and densely cooperative fusion.
Experimental results show the feasibility of learning a Siamese network for salient object localization in the RGB and depth views simultaneously, achieving accurate prediction. Moreover, the employed densely cooperative fusion strategy is effective for exploiting cross-modal complementarity. \emph{JL-DCF}~shows superior performance against SOTAs on seven benchmark datasets and is supported by comprehensive ablation studies. The generality and robustness of our framework have also been validated on two closely related tasks, \emph{i.e.}, RGB-thermal (RGB-T) SOD and VSOD, \fkr{as well as by comparison with SOTA semantic segmentation models}. The SOTA performance of \emph{JL-DCF}~shows that it could become a unified framework for multi-modal feature learning and fusion tasks, and we hope this work will serve as a catalyst for progress on many cross-modal tasks in the future.
\noindent \textbf{Acknowledgments.}\quad
This work was supported in part by the NSFC, under No. 61703077, 61773270, 61971005, and U19A2052. We thank Yao Jiang and Suhang Li for their help in implementing \emph{JL-DCF}~in PyTorch.
{
\bibliographystyle{IEEEtran}
sections sketching some version of the program described above for a
number of random matrix ensembles. Sections \ref{S:Wigner} and
\ref{S:Wishart} discuss Wigner and Wishart matrices, combining
eigenvalue rigidity arguments of Dallaporta
\cite{Dallaporta1,Dallaporta2} with measure concentration. Section
\ref{S:groups} discusses random matrices drawn uniformly from
classical compact matrix groups, and Section \ref{S:powers} discusses
powers of such matrices; both those sections follow \cite{MM-powers}
and also use the eigenvalue rigidity approach. The next three
sections use the entropy method: Sections \ref{S:sums} and
\ref{S:compressions} discuss randomized sums and random compressions
of Hermitian matrices, following \cite{MM-concentration}, and Section
\ref{S:qsc} discusses Hamiltonians of quantum spin glasses, following
\cite{BuMe}. Finally, Section \ref{S:Ginibre}, following
\cite{MM-Ginibre}, demonstrates, in the case of the complex Ginibre
ensemble, how eigenvalue rigidity alone allows one to carry out much
of our program even without the use of a general concentration
phenomenon together with Lemma \ref{T:Lipschitz}.
\section{Wigner matrices}\label{S:Wigner}
In this section we outline how our approach can be applied to the most
central model of random matrix theory, that of Wigner matrices. We
begin with the most classical case: the Gaussian Unitary Ensemble
(GUE). Let $M_n$ be a random $n\times n$ Hermitian matrix, whose
entries $\Set{[M_n]_{jk}}{1 \le j \le k \le n}$ are independent random
variables, such that each $[M_n]_{jj}$ has a $N(0,n^{-1})$
distribution, and each $[M_n]_{jk}$ for $j<k$ has independent real and
imaginary parts, each with a $N(0,(2n)^{-1})$ distribution. Since
$M_n$ is Hermitian, it has real eigenvalues
$\lambda_1 \le \dots \le\lambda_n$. Wigner's theorem implies that the
empirical spectral measure
\[
\mu_n = \frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j}
\]
converges to the semicircle law $\rho_{sc}$. The following result
quantifies this convergence.
\begin{thm}\label{T:Wigner}
Let $M_n$ be as above, and let $\mu_n$ denote its spectral
measure. Then
\begin{enumerate}[label=(\alph*)]
\item
\label{P:Wigner-expected-distance}
$\displaystyle \mathbb{E} W_2(\mu_n,\rho_{sc})\le C \frac{\sqrt{\log(n)}}{n},$
\item \label{P:Wigner-distance-tails}
$\displaystyle \mathbb{P}\left[W_2(\mu_n,\rho_{sc})\ge C
\frac{\sqrt{\log(n)}}{n}+t\right]\le e^{-n^2 t^2 / 2}$ for all
$t \ge 0$, and
\item \label{P:Wigner-as-convergence} with probability 1, for
sufficiently large $n$, $\displaystyle W_2(\mu_n,\rho_{sc}) \le C'
\frac{\sqrt{\log(n)}}{n}.$
\end{enumerate}
\end{thm}
Here and in what follows, symbols such as $c,C,C'$ denote constants
which are independent of dimension.
Part \ref{P:Wigner-expected-distance} of Theorem \ref{T:Wigner} was
proved by Dallaporta in \cite{Dallaporta1} using the eigenvalue
rigidity approach; the proof is
outlined below.
Lemma \ref{T:Lipschitz} and the Gaussian concentration of measure
property (Proposition \ref{T:Gaussian-concentration}), imply that if
$F$ is a 1-Lipschitz function (with respect to the Hilbert--Schmidt
distance) on the space of Hermitian matrices, then
\begin{equation}
\label{E:Wigner-concentration}
\mathbb{P}\left[F(M_n)\ge \mathbb{E} F(M_n)+t\right] \le e^{-nt^2/2}
\end{equation}
for all $t \ge 0$. This fact, together with part
\ref{P:distance-is-Lipschitz} of Lemma \ref{T:Lipschitz} and part
\ref{P:Wigner-expected-distance} of Theorem \ref{T:Wigner} now imply
part \ref{P:Wigner-distance-tails}. Finally, part
\ref{P:Wigner-as-convergence} follows from part
\ref{P:Wigner-distance-tails} by the Borel--Cantelli lemma. So it
remains only to prove part \ref{P:Wigner-expected-distance}.
\medskip
Define $\gamma_j \in \mathbb{R}$ such that
$\rho_{sc}((-\infty,\gamma_j])=\frac{j}{n}$; this is the predicted
location of the $j^{th}$ eigenvalue $\lambda_j$ of $M_n$. The
discretization $\nu_n$ of the semi-circle law $\rho_{sc}$ is given by
\[
\nu_n := \frac{1}{n}\sum_{j=1}^n\delta_{\gamma_j}.
\]
It can be shown that $W_2(\rho_{sc},\nu_n) \le
\frac{C}{n}$. Furthermore, by the definition of $W_2$,
\[
\mathbb{E} W_2^2(\mu_n,\nu_n)
\le \frac{1}{n} \sum_{j=1}^n \mathbb{E}\abs{\lambda_j-\gamma_j}^2.
\]
This reduces the proof of part \ref{P:Wigner-expected-distance} to
estimating the latter expectations.
It is a classical fact that the eigenvalues of the GUE form a
determinantal point process with kernel
\[
K_n(x,y) = \sum_{j=0}^{n-1} h_j(x) h_j(y) e^{-(x^2 + y^2)/2},
\]
where the $h_j$ are the orthonormalized Hermite polynomials
\cite[Section 6.2]{Mehta}. (The reader is referred to \cite{HKPV06}
for the definition of a determinantal point process.) The following
is a then a special case of some important general properties of
determinantal point processes \cite[Theorem 7]{HKPV06},
\cite{Gustavsson}.
\begin{prop}
\label{T:GUE-DPP}
For each $x \in \mathbb{R}$, let $\mathcal{N}_x$ denote the number of
eigenvalues of $M_n$ which are less than or equal to $x$. Then
\[
\mathcal{N}_x\overset{d}{=}\sum_{i=1}^n\xi_i,
\]
where the $\xi_i$ are independent $\{0,1\}$-valued Bernoulli random
variables.
Moreover,
\[
\mathbb{E} \mathcal{N}_x = \int_{-\infty}^x K_n(u,u) \ du
\qquad \text{and} \qquad
\var \mathcal{N}_x = \int_{-\infty}^x \int_x^\infty K_n(u, v)^2 \
du \ dv.
\]
\end{prop}
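In what follows we use Bernstein's inequality in the following
classical form: if $Y_1, \dots, Y_n$ are independent centered random
variables with $\abs{Y_i} \le 1$ almost surely and
$\sigma^2 = \sum_{i=1}^n \var Y_i$, then
\[
\mathbb{P}\left[\abs{\sum_{i=1}^n Y_i} > t\right]
\le 2 \exp\left(-\frac{t^2}{2\sigma^2 + \frac{2}{3}t}\right)
\le 2 \exp\left(-\frac{t^2}{2\sigma^2 + t}\right)
\]
for all $t \ge 0$.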
The first part of this result can be combined
with the classical Bernstein inequality to deduce that for each $t >
0$,
\begin{equation*}
\mathbb{P}\left[\abs{\mathcal{N}_x - \mathbb{E} \mathcal{N}_x} > t \right]
\le 2 \exp \left(-\frac{t^2}{2\sigma_x^2+t}\right),
\end{equation*}
where $\sigma_x^2 = \var \mathcal{N}_x$. Using estimates on $\mathbb{E}
\mathcal{N}_x$ due to G\"otze and Tikhomirov \cite{GoTi05} and on
$\sigma_x^2$ due to Gustavsson \cite{Gustavsson} (both of which can be
deduced from the second part of Proposition \ref{T:GUE-DPP}), this
implies that for $x\in(-2+\delta,2-\delta)$,
\begin{equation*}
\mathbb{P}\left[\abs{\mathcal{N}_x - n\rho_{sc}((-\infty,x])} > t + C\right]
\le 2 \exp \left(-\frac{t^2}{2c_\delta\log(n)+t}\right)
\end{equation*}
for each $t \ge 0$. Combining this with the observation
that
\begin{equation*}
\mathbb{P}\left[\lambda_j>\gamma_j + t \right]
=\mathbb{P}\left[\mathcal{N}_{\gamma_j + t}<j\right],
\end{equation*}
one can deduce, upon integrating by parts, that
\[
\mathbb{E} \abs{\lambda_j - \gamma_j}^2 \le C_\varepsilon \frac{\log(n)}{n^2}
\]
for $j \in [\varepsilon n, (1-\varepsilon) n]$. This provides the necessary
estimates in the
bulk of the spectrum. Dallaporta established
similar but weaker bounds for the soft edge of the spectrum using
essentially the last part of Proposition \ref{T:GUE-DPP}, and for the hard
edge using tail estimates due to Ledoux and Rider \cite{LeRi}.
This completes the proof of Theorem \ref{T:Wigner}.
\medskip
The real symmetric counterpart of the GUE is the Gaussian Orthogonal
Ensemble (GOE), whose entries $\Set{[M_n]_{jk}}{1 \le j \le k \le n}$
are independent real random variables, such that each $[M_n]_{jj}$ has
a $N(0,2n^{-1})$ distribution, and each $[M_n]_{jk}$ for $j<k$ has a
$N(0,n^{-1})$ distribution. The spectrum of the GOE does not
form a determinantal point process, but a close distributional
relationship between the eigenvalue counting functions
of the GOE and GUE was found in \cite{FoRa,ORourke}. Using this,
Dallaporta showed that part \ref{P:Wigner-expected-distance} of
Theorem \ref{T:Wigner} also applies to the GOE. Part
\ref{P:Wigner-distance-tails} then follows from the Gaussian
concentration of measure property as before, and part
\ref{P:Wigner-as-convergence} from the Borel--Cantelli lemma.
To move beyond the Gaussian setting, Dallaporta invokes the Tao--Vu
four moment theorem \cite{TaVu1,TaVu2} and a localization theorem due
to Erd\H{o}s, Yau, and Yin \cite{ErYaYi} to extend Theorem
\ref{T:Wigner}\ref{P:Wigner-expected-distance} to random matrices with
somewhat more general entries. The proofs of these results involve
the kind of hard analysis which it is our purpose to avoid in this
paper. However, it is straightforward, under appropriate hypotheses,
to extend the measure concentration argument for part
\ref{P:Wigner-distance-tails} of Theorem \ref{T:Wigner}, and we
indicate briefly how this is done.
A probability measure $\mu$ on $\mathbb{R}$ is said to satisfy a quadratic
transportation cost inequality (QTCI) with constant $C > 0$ if
\[
W_2(\mu, \nu) \le \sqrt{C H(\nu \vert \mu)}
\]
for any probability measure $\nu$ which is absolutely continuous with
respect to $\mu$, where $H(\nu \vert \mu)$ denotes relative entropy.
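For example, by Talagrand's transportation inequality, the standard
Gaussian measure $\gamma$ on $\mathbb{R}$ satisfies a QTCI with constant $2$:
\[
W_2(\nu, \gamma) \le \sqrt{2 H(\nu \vert \gamma)};
\]
more generally, the $N(0,\sigma^2)$ distribution satisfies a QTCI
with constant $2\sigma^2$.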
\begin{prop}[{see \cite[Chapter 6]{Ledoux-book}}]
\label{T:tci}
Suppose that $X_1, \dots, X_n$ are independent random variables
whose distributions each satisfy a QTCI with constant $C$. If
$F: \mathbb{R}^n \to \mathbb{R}$ is a $1$-Lipschitz function, then
\[
\mathbb{P} \left[ F(X) - \mathbb{E} F(X) \ge t \right] \le e^{-t^2/C}
\]
for all $t > 0$.
\end{prop}
A QTCI is the most general possible hypothesis which implies
subgaussian tail decay, independent of $n$, for Lipschitz functions of
independent random variables; see \cite{Gozlan}. It holds in
particular for any distribution satisfying a logarithmic Sobolev
inequality, including Gaussian distributions, or a distribution with a
density on a finite interval bounded above and below by positive
constants. Using Dallaporta's arguments for part
\ref{P:Wigner-expected-distance} and substituting Proposition
\ref{T:tci} in place of the Gaussian concentration phenomenon, we
arrive at the following generalization of Theorem \ref{T:Wigner}.
\begin{thm}
\label{T:Wigner-tci}
Let $M_n$ be a random Hermitian matrix whose entries satisfy each of
the following:
\begin{itemize}
\item The random variables $\left\{\Re M_{jk}\right\}_{1 \le j \le k
\le n}$ and $\left\{\Im M_{jk}\right\}_{1 \le j < k \le n}$ are
all independent.
\item The first four moments of each of these random variables is
the same as for the GUE (respectively, GOE).
\item Each of these random variables satisfies a QTCI with constant
$c n^{-1}$.
\end{itemize}
Let $\mu_n$ denote the spectral measure of $M_n$. Then
\begin{enumerate}[label=(\alph*)]
\item $\displaystyle \mathbb{E} W_2(\mu_n,\rho_{sc})\le C \frac{\sqrt{\log(n)}}{n},$
\item $\displaystyle \mathbb{P}\left[W_2(\mu_n,\rho_{sc})\ge C
\frac{\sqrt{\log(n)}}{n}+t\right]\le e^{- c n^2 t^2}$ for all
$t \ge 0$, and
\item with probability 1, for sufficiently large $n$,
$\displaystyle W_2(\mu_n,\rho_{sc}) \le C' \frac{\sqrt{\log(n)}}{n}.$
\end{enumerate}
\end{thm}
As mentioned above, a QTCI is a minimal assumption to reach exactly
this result by these methods. A weaker and more classical assumption
would be a Poincar\'e inequality, which implies subexponential decay
for Lipschitz functions, and is the most general hypothesis implying
any decay independent of $n$; see \cite{GoRoSa} and the references
therein. If the third condition in Theorem \ref{T:Wigner-tci} is
replaced by the assumption of a Poincar\'e inequality with constant
$cn^{-1}$, then the same kind of argument leads to an almost sure
convergence rate of order $\frac{\log(n)}{n}$; we omit the details.
\section{Wishart matrices}\label{S:Wishart}
In this section we apply the strategy described in the introduction to
Wishart matrices (i.e., random sample covariance matrices). Let
$m \ge n$, and let $X$ be an $m\times n$ random matrix with i.i.d.\
entries, and define the Hermitian positive-semidefinite random matrix
\[
S_{m,n}:=\frac{1}{m}X^*X.
\]
We denote the eigenvalues of $S_{m,n}$ by
$0 \le \lambda_1 \le \dots \le \lambda_n$ and the empirical spectral
measure by
\[
\mu_{m,n} = \frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j}.
\]
It was first proved in \cite{MaPa} that, under some moment conditions,
if $\frac{n}{m}\to\rho>0$ as $n,m\to\infty$, then $\mu_{m,n}$
converges to the Marchenko--Pastur law $\mu_\rho$ with parameter
$\rho$, with compactly supported density given by
\[
f_\rho(x)=\frac{1}{2\pi \rho x}\sqrt{(b_\rho-x)(x-a_\rho)},
\]
on $\left(a_\rho,b_\rho\right)$, with $a_\rho=(1-\sqrt{\rho})^2$ and $b_\rho=(1+\sqrt{\rho})^2$. The
following result quantifies this convergence for many distributions.
\begin{thm}
\label{T:Wishart}
Suppose that for each $n$, $0 < c \le \frac{n}{m} \le 1$, and that
$X$ is an $m \times n$ random matrix whose entries satisfy each of
the following:
\begin{itemize}
\item The random variables
$\left\{\Re X_{jk}\right\}_{\substack{1 \le j \le m \\ 1 \le k \le
n}}$
and
$\left\{\Im X_{jk}\right\}_{\substack{1 \le j \le m \\ 1 \le k \le
n}}$ are all independent.
\item The first four moments of each of these random variables are
the same as for a standard complex (respectively, real) normal
random variable.
\item Each of these random variables satisfies a QTCI with constant
$C$.
\end{itemize}
Let $\rho = \frac{n}{m}$ and let $\mu_{m,n}$ denote the spectral
measure of $S_{m,n} = \frac{1}{m} X^* X$. Then
\begin{enumerate}[label=(\alph*)]
\item \label{P:Wishart-expected-distance}
$\displaystyle \mathbb{E} W_2(\mu_{m,n},\mu_\rho)\le C \frac{\sqrt{\log(n)}}{n},$
\item \label{P:Wishart-distance-tails}
$\displaystyle \mathbb{P}\left[W_2(\mu_{m,n},\mu_\rho)\ge C
\frac{\sqrt{\log(n)}}{n}+t\right]\le e^{- c m \min\{n t^2,
\sqrt{n} t\}}$ for all $t \ge c \frac{\sqrt{\log(n)}}{n}$, and
\item \label{P:Wishart-as-convergence} with probability 1, for sufficiently large $n$,
$\displaystyle W_2(\mu_{m,n},\mu_\rho) \le C' \frac{\sqrt{\log(n)}}{n}.$
\end{enumerate}
\end{thm}
Strictly speaking, part \ref{P:Wishart-as-convergence} does not, as
stated, imply almost sure convergence of $\mu_{m,n}$, since $\rho$ and
hence $\mu_{\rho}$ itself depends on $n$. However, if $\rho = \rho(n)$ has a
limiting value $\rho^*$ as $n \to \infty$ (as in the original Marchenko--Pastur
result), then the measures $\mu_{\rho}$ converge to $\mu_{\rho^*}$.
This convergence can easily be quantified, but we will not pursue the
details here.
\begin{proof}
Part \ref{P:Wishart-expected-distance} was proved by Dallaporta in
\cite{Dallaporta2}, by the same methods as in Theorem
\ref{T:Wigner-tci}\ref{P:Wigner-expected-distance} discussed in the
last section. First, when the entries of $X$ are complex normal
random variables (in which $S_{m,n}$ is the unitary Laguerre
ensemble), the eigenvalues of $S_{m,n}$ form a determinantal point
process. This implies an analogue of Proposition \ref{T:GUE-DPP},
from which eigenvalue rigidity results can be deduced, leading to
the estimate in part \ref{P:Wishart-expected-distance} in this
case. The result is extended to real Gaussian random matrices using
interlacing results, and to more general distributions using
versions of the four moment theorem for Wishart random matrices.
The reader is referred to \cite{Dallaporta2} for the details.
\medskip
The proof of part \ref{P:Wishart-distance-tails} is more complicated
than in the previous section, because the random matrix $S_{m,n}$
depends quadratically on the independent entries of $X$. However,
we can still apply the machinery of measure concentration by using
the fact that $S_{m,n}$ possesses local Lipschitz behavior, combined
with a truncation argument. Indeed, if $X,Y$ are $m\times n$
matrices over $\mathbb{C}$,
\begin{equation}\begin{split}
\label{E:not-quite-lipschitz}
\norm{\frac{1}{m}X^*X-\frac{1}{m}Y^*Y}_{HS}
& \le \frac{1}{m}\norm{X^*(X-Y)}_{HS} +
\frac{1}{m}\norm{(X^*-Y^*)Y}_{HS}\\
& \le\frac{1}{m}\left(\norm{X}_{op} + \norm{Y}_{op}\right)\norm{X-Y}_{HS},
\end{split}\end{equation}
where we have used the facts that both the Hilbert--Schmidt norm
$\norm{\cdot}_{HS}$ and the operator norm $\norm{\cdot}_{op}$ are
invariant under conjugation and transposition, and that
$\norm{AB}_{HS}\le\norm{A}_{op}\norm{B}_{HS}$.
Thus, for a given $K > 0$, the function
\[
X \mapsto \frac{1}{m}X^*X
\]
is $\frac{2K}{\sqrt{m}}$-Lipschitz on
$\Set{X\in\Mat{m,n}{\mathbb{C}}}{\norm{X}_{op}\le K\sqrt{m}},$ and so by
Lemma \ref{T:Lipschitz}\ref{P:distance-is-Lipschitz}, the function
\[
F: X\mapsto W_2(\mu_{m,n},\mu_\rho)
\]
is $\frac{2K}{\sqrt{mn}}$-Lipschitz on this set. We can therefore
extend $F$ to a $\frac{2K}{\sqrt{mn}}$-Lipschitz function
$\widetilde{F}:\Mat{m,n}{\mathbb{C}}\to\mathbb{R}$ (cf.\ \cite[Theorem
3.1.2]{EvGa}); we may moreover assume that $\widetilde{F}(X) \ge 0$
and
\begin{equation}
\label{E:bounded-extension}
\sup_{X\in\Mat{m,n}{\mathbb{C}}} \widetilde{F}(X)
=\sup_{\norm{X}_{op}\le K\sqrt{m}} W_2(\mu_{m,n}, \mu_\rho).
\end{equation}
Proposition \ref{T:tci} now allows us to control $\widetilde{F}(X)$
and $\norm{X}_{op}$, which are both Lipschitz functions of $X$.
First, an elementary discretization argument using Proposition
\ref{T:tci} (cf.\ \cite[Theorem 5.39]{Vershynin}, or alternatively
Lemma \ref{T:entropy} below) shows that
\begin{equation}
\label{E:op-norm-bound}
\mathbb{P}\left[\norm{X}_{op} > K \sqrt{m} \right] \le 2 e^{-c m}
\end{equation}
for some $K, c > 0$. We will use this $K$ in the following.
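In outline: fixing $\frac{1}{4}$-nets $\mathcal{N}$ and $\mathcal{M}$
of the unit spheres of $\mathbb{C}^m$ and $\mathbb{C}^n$ respectively, one has
\[
\norm{X}_{op} \le 2 \max_{u \in \mathcal{N}, v \in \mathcal{M}}
\abs{\langle X v, u \rangle};
\]
each $X \mapsto \abs{\langle X v, u \rangle}$ is a $1$-Lipschitz
function of $X$ and so has subgaussian tails by Proposition
\ref{T:tci}, and a union bound over the at most $C^{m+n}$ pairs
$(u,v)$ yields \eqref{E:op-norm-bound} once $K$ is chosen large
enough.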
Next, Proposition \ref{T:tci} implies that
\begin{equation}
\label{E:F-tail}
\mathbb{P}\left[\widetilde{F}(X)>t\right] \le C e^{-cmnt^2}
\end{equation}
as long as $t \ge 2 \mathbb{E} \widetilde{F}(X)$. Now
\begin{equation}
\label{E:E-F-tilde}
\begin{split}
\mathbb{E}\widetilde{F}(X) & = \mathbb{E} W_2(\mu_{m,n},\mu_\rho) +
\mathbb{E}\left[\left(\widetilde{F}(X) -
W_2(\mu_{m,n},\mu_\rho)\right) \ind{\norm{X}_{op}>K\sqrt{m}} \right] \\
&\le C\frac{\sqrt{\log(n)}}{n}+\left(\sup_{\norm{X}_{op} \le
K\sqrt{m}}W_2(\mu_{m,n}, \mu_\rho)\right) \mathbb{P}[\norm{X}_{op}>K\sqrt{m} ]
\end{split}\end{equation}
by part \ref{P:Wishart-expected-distance} and \eqref{E:bounded-extension}.
Since $\mu_\rho$ is supported on $[a_\rho,b_\rho]$, and $\mu_{m,n}$
is supported on
$\left[0,\norm{\frac{1}{m}XX^*}_{op}\right] =
\left[0,\frac{1}{m}\norm{X}_{op}^2\right]$,
\[
\sup_{\norm{X}_{op}\le K\sqrt{m}} W_2(\mu_{m,n},\mu_\rho)
\le \max \{b_\rho, K^2\} \le C,
\]
and so by \eqref{E:op-norm-bound} and \eqref{E:E-F-tilde},
\[
\mathbb{E}\widetilde{F}(X)\le C\frac{\sqrt{\log(n)}}{n}+ C e^{-c m}
\le C'\frac{\sqrt{\log(n)}}{n}.
\]
Finally, we have
\begin{equation}
\label{E:W2-Wishart-bound-K}
\begin{split}
\mathbb{P}\left[W_2(\mu_{m,n},\mu_\rho)>t\right]&\le
\mathbb{P}\left[W_2(\mu_{m,n},\mu_\rho)>t, \norm{X}_{op}\le
K \sqrt{m}\right]+\mathbb{P}\left[\norm{X}_{op}> K\sqrt{m}\right]\\
&\le\mathbb{P}\left[\widetilde{F}(X)>t\right]+\mathbb{P}\left[\norm{X}_{op}>
K\sqrt{m}\right] \\
& \le C' e^{-cmn t^2}
\end{split}\end{equation}
for $c_1 \frac{\sqrt{\log(n)}}{n} \le t \le \frac{c_2}{\sqrt{n}}$ by
\eqref{E:op-norm-bound} and \eqref{E:F-tail}. We omit the details of
the similar argument to obtain a subexponential bound for
$t > \frac{c_2}{\sqrt{n}}$. This concludes the proof of part
\ref{P:Wishart-distance-tails}.
Part \ref{P:Wishart-as-convergence} follows as before using the
Borel--Cantelli lemma.
\end{proof}
An alternative approach to quantifying the limiting behavior of the
spectrum of Wishart matrices is to consider the singular values
$0 \le \sigma_1 \le \dots \le \sigma_n$ of $\frac{1}{\sqrt{m}}X$; that
is, $\sigma_j = \sqrt{\lambda_j}$. Lemma \ref{T:Lipschitz} can
be applied directly in that context, by using the fact that the eigenvalues of
the Hermitian matrix $\begin{bmatrix} 0 & X \\ X^* & 0 \end{bmatrix}$
are $\{\pm \sigma_j\}$. However, if one is ultimately interested in
the eigenvalues $\{ \lambda_j\}$, then translating the resulting
concentration estimates to eigenvalues ends up requiring the same kind
of analysis carried out above.
\section{Uniform random matrices from the compact classical groups}
\label{S:groups}
Each of the compact classical matrix groups $\Orthogonal{n}$,
$\SOrthogonal{n}$, $\Unitary{n}$, $\SUnitary{n}$, $\Symplectic{n}$
possesses a uniform (Haar) probability measure which is invariant under
translation by a fixed group element. Each of these uniform measures
possesses a concentration of measure property making it amenable to the
program laid out in the introduction; moreover, the eigenvalues of a
random matrix from any of these groups is a determinantal point
process, meaning that the eigenvalue rigidity approach used in Section
\ref{S:Wigner} applies here as well. The limiting empirical spectral
measure for all of these groups is the uniform probability measure on
the circle, as first shown in \cite{DiSh}. This convergence is
quantified in the following result, proved in \cite{MM-powers}.
\begin{thm}\label{T:groups}
Let $M_n$ be uniformly distributed in any of
$\Orthogonal{n}$, $\SOrthogonal{n}$, $\Unitary{n}$, $\SUnitary{n}$,
$\Symplectic{n}$, and let $\mu_n$ denote its spectral measure. Let
$\mu$ denote the uniform probability measure on the unit circle
$\mathbb{S}^1\subseteq\mathbb{C}$. Then
\begin{enumerate}[label=(\alph*)]
\item \label{P:groups-expected-distance}
$\displaystyle \mathbb{E} W_2(\mu_n,\mu)\le C\frac{\sqrt{\log(n)}}{n},$
\item \label{P:groups-distance-tails}
$\displaystyle \mathbb{P}\left[W_2(\mu_n,\mu) \ge
C\frac{\sqrt{\log(n)}}{n}+t\right]\le e^{-cn^2t^2},$ and
\item \label{P:groups-as-convergence} with probability 1, for
sufficiently large $n$,
$\displaystyle W_2(\mu_n,\mu) \le C\frac{\sqrt{\log(n)}}{n}.$
\end{enumerate}
\end{thm}
We briefly sketch the proof below; for full details, see \cite{MM-powers}.
\medskip
Part \ref{P:groups-expected-distance} is proved using the eigenvalue
rigidity approach described in Section \ref{S:Wigner} for the GUE. We
first order the eigenvalues of $M_n$ as
$\{e^{i \theta_j}\}_{1 \le j \le n}$ with
$0 \le \theta_1 \le \dots \le \theta_n < 2\pi$, and define the
discretization $\nu_n$ of $\mu$ by
\[
\nu_n := \frac{1}{n} \sum_{j=1}^n \delta_{e^{2\pi i j /n}}.
\]
It is easy to show that $W_2(\mu,\nu_n) \le \frac{C}{n}$, and by the
definition of $W_2$,
\[
\mathbb{E} W_2^2(\mu_n, \nu_n) \le \frac{1}{n} \sum_{j=1}^n \mathbb{E} \abs{e^{i
\theta_j} - e^{2\pi i j/ n}}^2
\le \frac{1}{n} \sum_{j=1}^n \mathbb{E} \abs{\theta_j - \frac{2\pi j}{n}}^2,
\]
so that part \ref{P:groups-expected-distance} can be proved by
estimating the latter expectations.
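The bound $W_2(\mu,\nu_n) \le \frac{C}{n}$ follows, for instance, by
considering the transport map which collapses the arc from
$e^{2\pi i (j-1)/n}$ to $e^{2\pi i j/n}$ onto the point
$e^{2\pi i j/n}$ for each $j$; no point moves a distance greater than
$\frac{2\pi}{n}$, so that
\[
W_2^2(\mu,\nu_n) \le \left(\frac{2\pi}{n}\right)^2.
\]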
For these estimates, as for the GUE, one can make use of the determinantal
structure of the eigenvalue processes of uniformly distributed random
matrices. For the case of the unitary group $\Unitary{n}$, the
eigenvalue angles $\{\theta_j\}$ form a determinantal point process on
$[0,2\pi)$ with kernel
\[
K_n(x,y) := \frac{\sin \left(\frac{n(x-y)}{2}\right)}{\sin
\left(\frac{(x-y)}{2}\right)};
\]
this was first proved by Dyson \cite{Dyson}. The determinantal
structure provides an analogue
of Proposition \ref{T:GUE-DPP}:
\begin{prop}
\label{T:groups-DPP}
For each $0 \le x < 2\pi$, let $\mathcal{N}_x$ denote the number of
eigenvalues $e^{i \theta_j}$ of $M_n \in \Unitary{n}$ such that
$\theta_j \le x$. Then
\begin{equation}\label{E:groups-counting-bernoullis}
\mathcal{N}_x\overset{d}{=}\sum_{i=1}^n\xi_i,
\end{equation}
where the $\xi_i$ are independent $\{0,1\}$-valued Bernoulli random
variables.
Moreover,
\begin{equation}\label{E:groups-means-variances}
\mathbb{E} \mathcal{N}_x = \int_{0}^x K_n(u,u) \ du
\qquad \text{and} \qquad
\var \mathcal{N}_x = \int_{0}^x \int_x^{2\pi} K_n(u, v)^2 \
du \ dv.
\end{equation}
\end{prop}
Appropriately modified versions of Proposition \ref{T:groups-DPP} hold for the other
groups as well, due to determinantal structures in those contexts identified by Katz and Sarnak \cite{KaSa}.
Using \eqref{E:groups-means-variances}, one can estimate $\mathbb{E} \mathcal{N}_x$ and
$\var \mathcal{N}_x$, and then use \eqref{E:groups-counting-bernoullis} and
Bernstein's inequality to deduce
that
\begin{equation}\label{E:groups-Bernstein-with-mean-variance}
\mathbb{P}\left[\abs{\mathcal{N}_x - \frac{nx}{2\pi}} > t + C\right]
\le 2 \exp \left(-\frac{t^2}{2c\log(n)+t}\right)
\end{equation}
for all $t > 0$. Combining this with the observation
that
\begin{equation*}
\mathbb{P}\left[\theta_j>\frac{2\pi j}{n} + t \right]
=\mathbb{P}\left[\mathcal{N}_{\frac{2\pi j}{n} + t}<j\right],
\end{equation*}
one can deduce, upon integrating by parts, that
\[
\mathbb{E} \abs{\theta_j - \frac{2\pi j}{n}}^2 \le C \frac{\log(n)}{n^2}
\]
for each $j$, which completes the proof of part
\ref{P:groups-expected-distance}. Observe that this is made slightly
simpler than the proof of Theorem
\ref{T:Wigner}\ref{P:Wigner-expected-distance} for the GUE by the fact
that all of the eigenvalues of a unitary matrix behave like ``bulk''
eigenvalues.
\medskip
Part \ref{P:groups-distance-tails} of Theorem \ref{T:groups} follows
from part \ref{P:groups-expected-distance} and the following
concentration of measure property of the uniform measure on the
compact classical groups. (There is an additional subtlety in dealing
with the two components of $\Orthogonal{n}$, which can be handled by
conditioning on $\det M_n$.)
\begin{prop}\label{T:groups-concentration}
Let $G_n$ be one of $\SOrthogonal{n}$, $\Unitary{n}$,
$\SUnitary{n}$, or $\Symplectic{n}$, and let $F:G_n\to\mathbb{R}$ be
1-Lipschitz, with respect to either the Hilbert--Schmidt distance or
the geodesic distance on $G_n$. Let $M_n$ be a uniformly
distributed random matrix in $G_n$. Then
\[
\mathbb{P}\left[\abs{F(M_n)-\mathbb{E} F(M_n)}>t\right]\le e^{-cnt^2}
\]
for every $t > 0$.
\end{prop}
For $\SOrthogonal{n}$, $\SUnitary{n}$, and $\Symplectic{n}$, this
property goes back to the work of Gromov and Milman \cite{GrMi}; for
the precise version stated here see \cite[Section 4.4]{AnGuZe}. For
$\Unitary{n}$ (which was not covered by the results of \cite{GrMi}
because its Ricci tensor is degenerate), the concentration in
Proposition \ref{T:groups-concentration} was proved in
\cite{MM-powers}.
Finally, part \ref{P:groups-as-convergence} follows from part
\ref{P:groups-distance-tails} via the Borel-Cantelli lemma, thus
completing the proof of Theorem \ref{T:groups}.
\section{Powers of uniform random matrices}
\label{S:powers}
The approach used with random matrices from the compact classical
groups in the previous section can be readily generalized to powers of
such matrices, as follows.
\begin{thm}
\label{T:groups-powers}
Let $M_n$ be uniformly distributed in any of
$\Orthogonal{n}$, $\SOrthogonal{n}$, $\Unitary{n}$, $\SUnitary{n}$,
$\Symplectic{n}$. Let $m\ge 1$, and let $\mu_{m,n}$ denote the
spectral measure of $M_n^m$. Let
$\mu$ denote the uniform probability measure on the unit circle
$\mathbb{S}^1\subseteq\mathbb{C}$. There are universal constants $C,c$ such that
\begin{enumerate}[label=(\alph*)]
\item
\label{P:powers-expected-distance}$\displaystyle \mathbb{E} W_2(\mu_{m,n},\mu)\le
C\frac{\sqrt{m\left(\log\left(\frac{n}{m}\right)+1\right)}}{n},$
\item
\label{P:powers-distance-tails}$\displaystyle \mathbb{P}\left[W_2(\mu_{m,n},\mu)\ge
C\frac{\sqrt{m\left(\log\left(\frac{n}{m}\right)+1\right)}}{n}+t\right]\le
e^{-cn^2t^2},$ and
\item \label{P:powers-as-convergence}with probability 1, for
sufficiently large $n$,
$\displaystyle W_2(\mu_{m,n},\mu)\le
C\frac{\sqrt{m\left(\log\left(\frac{n}{m}\right)+1\right)}}{n}.$
\end{enumerate}
\end{thm}
In fact, the same proof as in the previous section works for $m>1$,
because of the following result of Rains \cite{Ra03}. The result is
stated in the unitary case for simplicity, but analogous results hold
in the other compact classical matrix groups.
\begin{prop}
\label{T:Rains}
Let $m\le n$ be fixed.
If $M_n$ is uniformly distributed in $\Unitary{n}$, the eigenvalues
of $M_n^m$ are distributed as those of $m$ independent uniform
unitary matrices of sizes
$\left\lfloor \frac{n}{m} \right\rfloor:=\max\left\{k\in\mathbb{N}\mid
k\le\frac{n}{m}\right\}$
and
$\left\lceil \frac{n}{m} \right\rceil:=\min\left\{k\in\mathbb{N}\mid
k\ge\frac{n}{m}\right\}$,
such that the sum of the sizes of the matrices is $n$.
\end{prop}
As a consequence, if $\mathcal{N}_x$ is the number of eigenvalues
of $M_n^m$ lying in the arc from 1 to $e^{ix}$, then
\[
\mathcal{N}_x\overset{d}{=}\sum_{j=0}^{m-1}\mathcal{N}_x^j,
\]
where the $\mathcal{N}_x^j$ are the counting functions of $m$
independent random matrices, each uniformly distributed in
$\Unitary{\left\lfloor\frac{n}{m}\right\rfloor}$ or
$\Unitary{\left\lceil\frac{n}{m}\right\rceil}$. In particular, by
Proposition \ref{T:groups-DPP} $\mathcal{N}_x$ is equal in
distribution to a sum of independent Bernoulli random variables, and
its mean and variance can be estimated using the available estimates
for the individual summands established in the previous section. One
can thus again apply Bernstein's inequality to obtain eigenvalue
rigidity, leading to a bound on $\mathbb{E} W_2(\mu_{m,n},\mu)$.
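In sketch: by rotation invariance, $\mathbb{E} \mathcal{N}_x^j$ is exactly
$\frac{k_j x}{2\pi}$, where $k_j$ is the size of the $j$th factor, so
that $\mathbb{E} \mathcal{N}_x = \frac{nx}{2\pi}$, while by independence and
the variance estimates of the previous section,
\[
\var \mathcal{N}_x = \sum_{j=0}^{m-1} \var \mathcal{N}_x^j
\le C m \left( \log\left(\frac{n}{m}\right) + 1 \right);
\]
this is the source of the factor
$m\left(\log\left(\frac{n}{m}\right)+1\right)$ in Theorem
\ref{T:groups-powers}.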
Crucially, the concentration phenomenon on the compact classical groups
tensorizes in a dimension-free way: the product of uniform measure on
the $m$ smaller unitary groups above has the same concentration
property as any one of those groups. This is a consequence of the fact
that the uniform measures on the compact classical groups satisfy
logarithmic Sobolev inequalities; see \cite[Section 4.4]{AnGuZe} and
the Appendix of \cite{MM-powers}. This allows for the full program
laid out in the introduction to be carried out in this case, yielding
Theorem \ref{T:groups-powers} above.
\section{Randomized sums}\label{S:sums}
In this section we show how our approach can be applied to randomized
sums of Hermitian matrices. In this and the following two sections,
we no longer have a determinantal structure allowing us to use
eigenvalue rigidity. Instead we will use entropy methods to bound the
expected distance between the empirical spectral measure and its mean.
Let $A_n$ and $B_n$ be fixed $n\times n$ Hermitian matrices, and let
$U_n \in \Unitary{n}$ be uniformly distributed. Define
\[
M_n := U_n A_n U_n^* + B_n;
\]
the random matrix $M_n$ is the so-called randomized sum of $A_n$ and
$B_n$. This random matrix model has been studied at some length in
free probability theory; the limiting spectral measure was studied
first by Voiculescu \cite{Vo} and Speicher \cite{Sp}, who showed that
if $\{A_n\}$ and $\{B_n\}$ have limiting spectral distributions
$\mu_A$ and $\mu_B$ respectively, then the limiting spectral
distribution of $M_n$ is given by the free convolution
$\mu_A\boxplus\mu_B$.
The following sharpening of this convergence is a special case of
Theorem 3.8 and Corollary 3.9 of \cite{MM-concentration}; we present
below a slightly simplified version of the argument from that paper.
\begin{thm}
\label{T:sums}
In the setting above, let $\mu_n$ denote the empirical spectral
measure of $M_n$, and let $\nu_n = \mathbb{E} \mu_n$. Then
\begin{enumerate}[label=(\alph*)]
\item \label{P:sums-expected-distance}
\(\displaystyle \mathbb{E} W_1(\mu_n, \nu_n) \le
\frac{C\norm{A_n}_{op}^{2/3}(\norm{A_n}_{op}+\norm{B_n}_{op})^{1/3}}{n^{2/3}}, \)
\item \label{P:sums-distance-concentration}
\(\displaystyle \mathbb{P} \left[ W_1(\mu_n, \nu_n) \ge \frac{C\norm{A_n}_{op}^{2/3}(\norm{A_n}_{op}+\norm{B_n}_{op})^{1/3}}{n^{2/3}} + t
\right] \le e^{ - c n^2 t^2 / \norm{A_n}_{op}^2} \), and
\item \label{P:sums-as-distance}with probability $1$, for sufficiently
large $n$,
\[
W_1(\mu_{n}, \nu_n) \le
C'\norm{A_n}_{op}^{2/3}(\norm{A_n}_{op}+\norm{B_n}_{op})^{1/3}
n^{-2/3}.
\]
\end{enumerate}
\end{thm}
In the most typical situations of interest, $\norm{A_n}_{op}$ and
$\norm{B_n}_{op}$ are bounded independently of $n$. If $\{A_n\}$ and
$\{B_n\}$ have limiting spectral distributions $\mu_A$ and $\mu_B$
respectively, then the rate of convergence of the (deterministic)
measures $\nu_n$ to $\mu_A \boxplus \mu_B$ will depend strongly on the
sequences $\{A_n\}$ and $\{B_n\}$; we will not address that question
here.
The Lipschitz property which is a crucial ingredient of our approach
to prove Theorem \ref{T:sums} is provided by the following lemma.
\begin{lemma}
\label{T:sums-Lipschitz}
For each
$1$-Lipschitz function $f:\mathbb{R} \to \mathbb{R}$, the maps
\[
U_n \mapsto \int f \ d\mu_n
\qquad \text{and} \qquad
U_n \mapsto W_1(\mu_n,\nu_n)
\]
are $\frac{2\norm{A_n}_{op}}{\sqrt{n}}$-Lipschitz on $\Unitary{n}$.
\end{lemma}
\begin{proof}
Let $A$ and $B$ be $n \times n$ Hermitian matrices, and let $U, V
\in \Unitary{n}$. Then it is straightforward to show that
\[
\norm{\bigl(UAU^* + B\bigr) - \bigl(VAV^* + B\bigr)}_{HS}
\le 2 \norm{A}_{op} \norm{U-V}_{HS}
\]
(see \cite[Lemma 3.2]{MM-concentration}). The lemma now follows by
Lemma \ref{T:Lipschitz}.
\end{proof}
Part \ref{P:sums-distance-concentration} of Theorem \ref{T:sums} now
follows from part \ref{P:sums-expected-distance} using Lemma
\ref{T:sums-Lipschitz} and the concentration of measure phenomenon for
$\Unitary{n}$ (Proposition \ref{T:groups-concentration}), and part
\ref{P:sums-as-distance} follows as usual by the Borel--Cantelli
lemma. It remains to prove part \ref{P:sums-expected-distance}; as
mentioned above, this is done
using entropy techniques for bounding the supremum of a stochastic
process.
The following lemma summarizes what is needed here. This fact
is well-known to experts, but we were not able to find an explicit
statement in the literature.
\begin{lemma}
\label{T:entropy}
Suppose that $(V,\norm{\cdot})$ is a finite-dimensional normed space
with unit ball $\mathcal{B}(V)$, and that $\Set{X_v}{v\in V}$ is a family
of centered random variables such that
\[
\mathbb{P}[\abs{X_u - X_v} \ge t] \le 2 e^{-t^2/(K^2 \norm{u-v}^2)}
\]
for every $t \ge 0$. Then
\[
\mathbb{E} \sup_{v \in \mathcal{B}(V)} X_v \le C K \sqrt{\dim V}.
\]
\end{lemma}
\begin{proof}
This can be proved via an elementary $\varepsilon$-net argument, but a
quicker proof can be given using Dudley's entropy bound (see
\cite[p.\ 22]{Talagrand} for a statement, and \cite[p.\ 70]{Talagrand}
and \cite{Dudley} for
discussions of the history of this result and its name).
By rescaling it suffices to assume that $K=1$. Let $N(\varepsilon)$ denote
the number of $\varepsilon$-balls in $V$ needed to cover the unit ball
$\mathcal{B}(V)$. A standard volumetric argument (see e.g.\
\cite[Lemma 5.2]{Vershynin}) shows that
\( N(\varepsilon) \le (3/\varepsilon)^{\dim V} \)
for each $0 < \varepsilon < 1$; of course $N(\varepsilon) = 1$ for $\varepsilon \ge
1$. Then Dudley's bound yields
\[
\mathbb{E} \sup_{v \in \mathcal{B}(V)} X_v \le C \int_0^\infty \sqrt{\log
(N(\varepsilon))} \ d\varepsilon \le C \sqrt{\dim V} \int_0^1
\sqrt{\log(3/\varepsilon)} \ d\varepsilon \le C' \sqrt{\dim V}.
\qedhere
\]
\end{proof}
To apply this lemma in our setting, define
\[
\Lip_0 := \Set{f:\mathbb{R} \to \mathbb{R}}{\abs{f}_L < \infty \text{ and } f(0) =
0},
\]
so that $\Lip_0$ is a Banach space with norm $\abs{\cdot}_L$. For
each $f \in \Lip_0$, define the random variable
\begin{equation}
\label{E:X_f}
X_f := \int f \ d\mu_n - \mathbb{E} \int f \ d\mu_n.
\end{equation}
By the Kantorovich--Rubinstein theorem,
\begin{equation}
\label{E:W1-Xf}
W_1(\mu_n,\nu_n) = \sup\left\{ X_f : f \in \mathcal{B}(\Lip_0) \right\}.
\end{equation}
Lemma \ref{T:sums-Lipschitz} and Proposition
\ref{T:groups-concentration} imply that
\begin{equation}
\label{E:sums-increments}
\mathbb{P}\left[\abs{X_f-X_g} \ge t\right] = \mathbb{P}\left[\abs{X_{f-g}} \ge
t\right]
\le 2 \exp \left[-\frac{cn^2 t^2}{\norm{A_n}_{op}^2 \abs{f-g}_L^2}\right].
\end{equation}
We would like to appeal to Lemma \ref{T:entropy}, but unfortunately,
$\Lip_0$ is infinite-dimensional. We can get around this problem with
an additional approximation argument.
Observing that $\mu_n$ is supported on
$[-\norm{M_n}_{op},\norm{M_n}_{op}]$ and
$\norm{M_n}_{op} \le \norm{A_n}_{op} + \norm{B_n}_{op}$, we begin by
replacing $\Lip_0$ with
\[
\Lip_0([-R,R]) := \Set{f:[-R,R] \to \mathbb{R}}{\abs{f}_L < \infty \text{ and } f(0) =
0},
\]
with $R = \norm{A_n}_{op} + \norm{B_n}_{op}$, for \eqref{E:X_f},
\eqref{E:W1-Xf}, and \eqref{E:sums-increments} above. Now for an
integer $m \ge 1$, let $\Lip_0^m([-R,R])$ be the $2m$-dimensional
space of piecewise affine functions $f \in \Lip_0([-R,R])$ such that
$f$ is affine on each interval
$\left[-R + \frac{(k-1)R}{m},-R+\frac{kR}{m}\right]$ for
$k = 1, \dots, 2m$. Given $f \in \Lip_0([-R,R])$, there is a unique
function $g \in \Lip_0^m([-R,R])$ such that
$g(\frac{jR}{m}) = f(\frac{jR}{m})$ for each integer $j \in [-m,m]$;
and this $g$ satisfies
\[
\abs{g}_L \le \abs{f}_L
\qquad \text{and} \qquad
\norm{f-g}_\infty \le \frac{\abs{f}_LR}{2m}.
\]
Thus by \eqref{E:W1-Xf},
\[
W_1(\mu_n,\nu_n) \le \frac{R}{2m} + \sup \Set{X_g}{g \in \mathcal{B}(\Lip_0^m([-R,R]))}.
\]
Now by \eqref{E:sums-increments} and Lemma \ref{T:entropy},
\[
\mathbb{E} W_1(\mu_n,\nu_n) \le \frac{R}{2m} + C\frac{\norm{A_n}_{op}\sqrt{m}}{n}.
\]
Part \ref{P:sums-expected-distance} now follows by optimizing over
$m$. This completes the proof of Theorem \ref{T:sums}.
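For concreteness, the optimal choice in this last step is
$m \asymp \bigl(Rn/\norm{A_n}_{op}\bigr)^{2/3}$, which balances the two terms
and yields
\[
\mathbb{E} W_1(\mu_n,\nu_n) \le C \bigl(\norm{A_n}_{op} +
\norm{B_n}_{op}\bigr)^{1/3} \norm{A_n}_{op}^{2/3}\, n^{-2/3}.
\]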
\medskip
An additional conditioning argument allows one to consider the case
that $A_n$ and $B_n$ are themselves random matrices in Theorem
\ref{T:sums}, assuming a concentration of measure property for their
distributions. We refer to \cite{MM-concentration} for details.
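The argument above is also easy to explore numerically. The following
Python sketch is an illustration only, not part of the proof; for
concreteness we assume the randomized-sum model
$M_n = A_n + U_n B_n U_n^*$ with $U_n$ Haar-distributed (which has the same
spectral distribution as conjugating both summands by independent Haar
unitaries), and we approximate $\nu_n = \mathbb{E}\mu_n$ by pooling the
spectra of independent draws.
\begin{verbatim}
import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix, with the standard phase
    # correction, gives a Haar-distributed unitary.
    z = (rng.standard_normal((n, n))
         + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
n, K = 200, 50
A = np.diag(rng.choice([-1.0, 1.0], size=n))  # a fixed Hermitian A_n
B = np.diag(np.linspace(-1.0, 1.0, n))        # a fixed Hermitian B_n

specs = []
for _ in range(K):
    U = haar_unitary(n, rng)
    specs.append(np.sort(np.linalg.eigvalsh(A + U @ B @ U.conj().T)))

pool = np.sort(np.concatenate(specs))  # K*n atoms, a proxy for nu_n
# W_1 between two uniform atomic measures with equally many atoms is
# the mean absolute difference of their order statistics.
dists = [np.mean(np.abs(np.repeat(s, K) - pool)) for s in specs]
print(np.mean(dists))  # small, and decreasing as n grows
\end{verbatim}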
It seems that the entropy method does not usually result in sharp
rates; for example, in \cite{MM-concentration}, we used the entropy
approach for Wigner and Haar-distributed matrices, and the results
were not as strong as those in Sections \ref{S:Wigner} and \ref{S:groups}. On the other hand,
the entropy method is more widely applicable than the determinantal
point process methods which yielded the results of Sections
\ref{S:Wigner} and \ref{S:groups}. In addition to the randomized sums
treated in this section, we show in Sections \ref{S:compressions} and
\ref{S:qsc} how the entropy method can be used for random compressions
and for the Hamiltonians of quantum spin glasses. The paper
\cite{MM-concentration} also used the entropy approach to prove
convergence rates for the empirical spectral measures of the circular
orthogonal ensemble and the circular symplectic ensemble, which we
have omitted from this paper.
\section{Random compressions}\label{S:compressions}
Let $A_n$ be a fixed $n\times n$ Hermitian (respectively, real
symmetric) matrix, and let $U_n$ be uniformly distributed in
$\Unitary{n}$ (respectively, $\Orthogonal{n}$). Let $P_k$ denote the
projection of $\mathbb{C}^n$ (respectively $\mathbb{R}^n$) onto the span of the first
$k$ standard basis vectors. Finally, define a random matrix $M_n$ by
\begin{equation}\label{E:compression}
M_n := P_k U_n A_n U_n^* P_k^*.
\end{equation}
Then $M_n$ is a compression of $A_n$ to a random $k$-dimensional
subspace. In the case that $\{A_n\}_{n\in\mathbb{N}}$ has a limiting spectral
distribution and $\frac{k}{n}\to\alpha$, the limiting spectral
distribution of $M_n$ can be determined using techniques of free
probability (see \cite{Sp}); the limit is given by a free-convolution
power related to the limiting spectral distribution of $A_n$ and the
value $\alpha$.
For this random matrix model, the program laid out in the introduction
produces the following (cf.\ Theorem 3.5 and Corollary 3.6 in
\cite{MM-concentration}).
\begin{thm}\label{T:compressions}
In the setting above, let $\mu_n$ denote the empirical spectral
distribution of $M_n$, and let $\nu_n=\mathbb{E} \mu_n$. Then
\begin{enumerate}[label=(\alph*)]
\item \label{P:compressions-expected-distance} \(\displaystyle
\mathbb{E} W_1(\mu_n, \nu_n) \le \frac{C\norm{A_n}_{op}}{(kn)^{1/3}},
\)
\item \label{P:compressions-distance-concentration}
\(\displaystyle \mathbb{P} \left[ W_1(\mu_n, \nu_n) \ge
\frac{C\norm{A_n}_{op}}{(kn)^{1/3}} + t \right] \le e^{ - c kn
t^2/\norm{A_n}_{op}^2} \), and
\item \label{P:compressions-as-distance} with probability $1$, for
sufficiently large $n$,
\(\displaystyle
W_1(\mu_n, \nu_n) \le C' \norm{A_n}_{op} (kn)^{-1/3}.
\)
\end{enumerate}
\end{thm}
The proof is essentially identical to the one in the previous section;
the $k$-dependence in the bounds is a consequence of the fact that
$k$, not $n$, is the size of the matrix when Lemma \ref{T:Lipschitz}
is applied. As with Theorem \ref{T:sums}, an additional conditioning
argument allows one to consider the case that $A_n$ is random, with
distribution satisfying a concentration of measure property.
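The random compression model is equally easy to simulate; the sketch below
(again illustrative only, in Python) reuses the \texttt{haar\_unitary}
helper from the sketch in the previous section and realizes
$P_k U_n A_n U_n^* P_k^*$ as the top-left $k\times k$ block of the
conjugated matrix.
\begin{verbatim}
import numpy as np

# haar_unitary(n, rng) as in the sketch of the previous section

def compressed_spectrum(A, k, rng):
    # P_k U A U* P_k* is the top-left k-by-k block of U A U*.
    U = haar_unitary(A.shape[0], rng)
    M = (U @ A @ U.conj().T)[:k, :k]
    return np.sort(np.linalg.eigvalsh(M))

rng = np.random.default_rng(1)
n, k, K = 400, 100, 50
A = np.diag(rng.choice([-1.0, 1.0], size=n))   # a fixed Hermitian A_n
specs = [compressed_spectrum(A, k, rng) for _ in range(K)]
pool = np.sort(np.concatenate(specs))          # proxy for nu_n
print(np.mean([np.mean(np.abs(np.repeat(s, K) - pool))
               for s in specs]))
\end{verbatim}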
\section{Hamiltonians of quantum spin glasses}
\label{S:qsc}
In this section we consider the following random matrix model for the
Hamiltonian of a quantum spin glass: let
$\{Z_{a,b,j}\}_{\substack{1\le a,b\le 3\\1\le j\le n}}$ be independent
standard Gaussian random variables, and define the $2^n\times 2^n$
random Hermitian matrix $H_n$ by
\begin{equation}
\label{E:H_n}
H_n := \frac{1}{\sqrt{9n}} \sum_{j=1}^n \sum_{a,b=1}^3 Z_{a,b,j}
\sigma^{(a)}_j \sigma^{(b)}_{j+1},
\end{equation}
where for $1\le a\le 3$,
\[
\sigma^{(a)}_j := I_2^{\otimes(j-1)}\otimes\sigma^{(a)}\otimes
I_2^{\otimes(n-j)},
\]
with $I_2$ denoting the $2\times 2$ identity matrix, $\sigma^{(a)}$
denoting the $2\times 2$ matrices
\[
\sigma^{(1)}:=\begin{bmatrix}0&1\\1&0\end{bmatrix}\qquad
\sigma^{(2)}:=\begin{bmatrix}0&-i\\i&0\end{bmatrix}\qquad
\sigma^{(3)}:=\begin{bmatrix}1&0\\0&-1\end{bmatrix},
\]
and the labeling is cyclic, so that
$\sigma^{(b)}_{n+1}:=\sigma^{(b)}_{1}.$ The random matrix $H_n$ acts
on the space $(\mathbb{C}^2)^{\otimes n}$ of $n$ distinguishable qubits; the
specific structure of $H_n$ above corresponds to nearest neighbor
interaction on a circle of qubits.
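For small $n$ the matrix \eqref{E:H_n} can be built explicitly, which makes
the behavior of the spectral measure easy to explore. A minimal Python
sketch (illustrative only; $n=10$ already gives a $1024\times 1024$ matrix):
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma^(1)
         np.array([[0, -1j], [1j, 0]]),                 # sigma^(2)
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma^(3)

def coupling(a, b, j, n):
    # sigma^(a)_j sigma^(b)_{j+1}, with cyclic labeling, built as a
    # single Kronecker product over the n sites (j runs from 1 to n).
    mats = [I2] * n
    mats[j - 1] = sigma[a]
    mats[j % n] = sigma[b]      # site j+1, i.e. site 1 when j = n
    return reduce(np.kron, mats)

def H(n, rng):
    Hn = sum(rng.standard_normal() * coupling(a, b, j, n)
             for j in range(1, n + 1)
             for a in range(3) for b in range(3))
    return Hn / np.sqrt(9 * n)

rng = np.random.default_rng(2)
evals = np.linalg.eigvalsh(H(10, rng))
print(evals.mean(), evals.var())   # approximately 0 and 1
\end{verbatim}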
If $\mu_n$ denotes the empirical spectral measure of $H_n$, then the
ensemble average $\nu_n = \mathbb{E} \mu_n$ is known in this context as the
density of states measure $\mu_n^{DOS}$. Recently, Keating, Linden and
Wells \cite{KLW} showed that $\mu_n^{DOS}$ converges weakly to
Gaussian, as $n\to\infty$; i.e., they showed that the empirical
spectral measure of $H_n$ converges to Gaussian in expectation. The
paper \cite{KLW} gives a similar treatment for more general
collections of (still independent) coupling coefficients, and more
general coupling geometries than that of nearest-neighbor
interactions. In more recent work, Erd\H{os} and Schr\"oder \cite{ES}
have considered still more general coupling geometries, and found a
sharp transition in the limiting behavior of the density of states
measure depending on the size of the maximum degree of the underlying
graph, relative to its number of edges.
The following result, essentially proved in \cite{BuMe}, quantifies
this convergence.
\begin{thm}\label{T:quantum}
Let $\mu_n$ be the spectral measure of $H_n$ and let $\gamma$ denote
the standard Gaussian distribution on $\mathbb{R}$. Then
\begin{enumerate}[label=(\alph*)]
\item \label{P:quantum-expected-distance}\(\displaystyle \mathbb{E} W_1(\mu_n,\gamma)\le \frac{C}{n^{1/6}},\)
\item
\label{P:quantum-distance-concentration}\(\displaystyle
\mathbb{P}\left[W_1(\mu_n,\gamma)\ge\frac{C}{n^{1/6}}+t\right]\le
e^{-9nt^2/2},\) and
\item \label{P:quantum-as-distance}with probability 1, for all
sufficiently large $n$,
\[
W_1(\mu_n,\gamma)\le \frac{C'}{n^{1/6}}.
\]
\end{enumerate}
\end{thm}
Because the coefficients $Z_{a,b,j}$ in \eqref{E:H_n} are taken to be
i.i.d.\ Gaussian random variables, the Gaussian concentration of
measure phenomenon (Proposition \ref{T:Gaussian-concentration}) can be
combined with Lemma \ref{T:Lipschitz} to carry out a version of the
approach used in the cases of random sums and random compressions
(Sections \ref{S:sums} and \ref{S:compressions}). The following lemma
provides the necessary link between Lemma \ref{T:Lipschitz} and
Proposition \ref{T:Gaussian-concentration} for this random matrix
model.
\begin{lemma}\label{T:quantum-lipschitz}
Let $\mathbf{x}=\{x_{a,b,j}\}\in\mathbb{R}^{9n}$ (with, say, lexicographic
ordering), and assume that $n\ge 3$. Define $H_n(\mathbf{x})$ by
\[
H_n(\mathbf{x}) := \frac{1}{3\sqrt{n}} \sum_{a,b=1}^3
\sum_{j=1}^nx_{a,b,j} \sigma^{(a)}_j \sigma^{(b)}_{j+1}.
\]
Then the map $\mathbf{x}\mapsto H_n(\mathbf{x})$ is
$\frac{2^{n/2}}{3\sqrt{n}}$-Lipschitz.
\end{lemma}
Lemma \ref{T:quantum-lipschitz} and Lemma
\ref{T:Lipschitz}\ref{P:distance-is-Lipschitz} together show that
\[
\mathbf{x}\mapsto W_1(\mu_n,\gamma)
\]
is a $\frac{1}{3\sqrt{n}}$-Lipschitz function of $\mathbf{x}$. Part
\ref{P:quantum-distance-concentration} of Theorem \ref{T:quantum} then
follows from part \ref{P:quantum-expected-distance} and Proposition
\ref{T:Gaussian-concentration}, and part \ref{P:quantum-as-distance}
follows by the Borel--Cantelli lemma.
The proof of part \ref{P:quantum-expected-distance} has two main
components. First, $W_1(\mu_n,\mathbb{E}\mu_n)$ is estimated via the approach
used in Sections \ref{S:sums} and \ref{S:compressions}: Lemma
\ref{T:quantum-lipschitz}, Lemma
\ref{T:Lipschitz}\ref{P:integral-is-Lipschitz}, and Proposition
\ref{T:Gaussian-concentration} show that the stochastic process
\[
X_f:=\int f \ d\mu_n-\mathbb{E}\int f\ d\mu_n
\]
satisfies a subgaussian increment condition as in Lemma
\ref{T:entropy}, which can then be used to show that
\[
\mathbb{E} W_1(\mu_n, \mathbb{E} \mu_n) \le \frac{C}{n^{1/6}}.
\]
Second, the convergence in expectation proved in \cite{KLW} was done
via a pointwise estimate of the difference between the characteristic
functions of $\mathbb{E} \mu_n $ and $\gamma$; this estimate can be parlayed
into an estimate on $W_1(\mathbb{E}\mu_n,\gamma)$ via Fourier analysis. This
is carried out in detail in \cite{BuMe} for the bounded-Lipschitz
distance; a similar argument shows that
\[
W_1(\mathbb{E}\mu_n,\gamma)\le\frac{C}{n^{1/6}},
\]
completing the proof of Theorem \ref{T:quantum}.
\section{The complex Ginibre ensemble}
\label{S:Ginibre}
Let $G_n$ be an $n \times n$ random matrix with i.i.d.\ standard
complex Gaussian entries; $G_n$ is said to belong to the \emph{complex
Ginibre ensemble}. It was first established by Mehta that if
$\mu_n$ is the empirical spectral measure of $\frac{1}{\sqrt{n}} G_n$,
then as $n \to \infty$, $\mu_n$ converges to the circular law; i.e.,
to the uniform measure $\mu$ on the unit disc
$D:=\Set{z \in \mathbb{C}}{\abs{z} \le 1}$.
This is the one ensemble we treat in which the general concentration
of measure approach does not apply. The issue is that while there is
a concentration phenomenon for the i.i.d.\ Gaussian entries of $G_n$,
the spectral measure of a nonnormal matrix ($G_n$ is nonnormal with
probability 1) is not a Lipschitz function of the matrix.
Nevertheless, the eigenvalue process of $G_n$ is a determinantal point
process, and so some of the techniques used above are still available.
We sketch the basic idea below; full details can be found in
\cite{MM-Ginibre}.
\medskip
The eigenvalues of $G_n$ form a determinantal point process on $\mathbb{C}$
with the kernel
\begin{equation} \label{E:DPP}
\begin{split}
K(z,w) & = \frac{1}{\pi} e^{-(\abs{z}^2 + \abs{w}^2) / 2}
\sum_{k=0}^{n-1} \frac{(z\overline{w})^k}{k!}.
\end{split}
\end{equation}
This means that in principle, the determinantal approach to eigenvalue
rigidity used in the case of the GUE (Section \ref{S:Wigner}) and of
the compact classical groups (Section \ref{S:groups}) can be used for
this model. A challenge, however, is the lack of an obvious order
on the eigenvalues of an arbitrary matrix over $\mathbb{C}$; without one,
there is no hope of assigning predicted locations around which the
individual eigenvalues concentrate. We therefore impose an order on
$\mathbb{C}$ which is well-adapted for our purposes; we refer to this as the
\emph{spiral order}. Specifically, the linear order $\prec$ on $\mathbb{C}$
is defined by making $0$ initial, and for nonzero $w, z \in \mathbb{C}$, we
declare $w \prec z$ if any of the following holds:
\begin{itemize}
\item $\lfloor \sqrt{n} \abs{w} \rfloor < \lfloor \sqrt{n} \abs{z} \rfloor$.
\item $\lfloor \sqrt{n} \abs{w} \rfloor = \lfloor \sqrt{n} \abs{z} \rfloor$ and
$\arg w < \arg z$.
\item $\lfloor \sqrt{n} \abs{w} \rfloor = \lfloor \sqrt{n} \abs{z} \rfloor$,
$\arg w = \arg z$, and $\abs{w} \ge \abs{z}$.
\end{itemize}
Here we are using the convention that $\arg z \in (0,2\pi]$.
We order the eigenvalues according to $\prec$: first the eigenvalues
in the disc of radius $\frac{1}{\sqrt{n}}$ are listed in order of
increasing argument, then the ones in the annulus with inner radius
$\frac{1}{\sqrt{n}}$ and outer radius $\frac{2}{\sqrt{n}}$ in order of
increasing argument, and so on.
We then define predicted locations $\tilde{\lambda}_j$ for (most of)
the eigenvalues based on the spiral order: $\tilde{\lambda}_1 = 0$,
$\{\tilde{\lambda}_2, \tilde{\lambda}_3, \tilde{\lambda}_4\}$ are
$\frac{1}{\sqrt{n}}$ times the $3^{\mathrm{rd}}$ roots of unity (in
increasing order with respect to $\prec$), the next five are
$\frac{2}{\sqrt{n}}$ times the $5^{\mathrm{th}}$ roots of unity, and
so on. Letting $\nu_n$ denote the normalized counting measure
supported on the $\{\tilde{\lambda}_j\}$, it is easy to show that
\[
W_2(\nu_n, \mu) \le \frac{C}{\sqrt{n}}.
\]
(In fact, there is a slight modification for about $\sqrt{n\log(n)}$
of the largest eigenvalues, the details of which we will not discuss
here.)
The same type of argument as in the earlier determinantal cases gives
a Bernstein-type inequality for the eigenvalue counting function on an
initial segment with respect to the spiral order, which in turn leads
to eigenvalue rigidity for most of the eigenvalues. The largest
eigenvalues can be treated with a more elementary argument, leading
via the usual coupling argument to the bound
\[
\mathbb{E} W_2(\mu_n,\nu_n)\le C \left(\frac{\log(n)}{n}\right)^{1/4}.
\]
(One can deduce a slightly tighter bound for $\mathbb{E} W_p(\mu_n,\nu_n)$ for
$1 \le p < 2$, and a weaker one for $p > 2$.)
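For illustration, the predicted locations and the spiral order are easy to
implement; the following Python sketch pairs the spiral-ordered eigenvalues
of a normalized Ginibre matrix with the $\tilde{\lambda}_j$ (the naive
pointwise pairing is only a crude stand-in for the coupling used in the
actual bound):
\begin{verbatim}
import numpy as np

def predicted_locations(n):
    # lambda~_1 = 0, then 2l+1 equally spaced points on the circle
    # of radius l/sqrt(n), for l = 1, 2, ...
    pts = [0j]
    l = 1
    while len(pts) < n:
        m = 2 * l + 1
        pts.extend((l / np.sqrt(n))
                   * np.exp(2j * np.pi * np.arange(1, m + 1) / m))
        l += 1
    return np.array(pts[:n])

def spiral_sort(z, n):
    # order by annulus index, then argument in (0, 2*pi], then
    # decreasing modulus; the shift guards against floor(l - eps)
    arg = np.angle(z) % (2 * np.pi)
    arg[arg == 0] = 2 * np.pi
    ring = np.floor(np.sqrt(n) * np.abs(z) + 1e-9)
    return z[np.lexsort((-np.abs(z), arg, ring))]

rng = np.random.default_rng(3)
n = 1000
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lam = spiral_sort(np.linalg.eigvals(G / np.sqrt(2 * n)), n)
tl = spiral_sort(predicted_locations(n), n)
print(np.sqrt(np.mean(np.abs(lam - tl) ** 2)))  # rough W_2 proxy
\end{verbatim}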
In this setting we cannot argue that the concentration of
$W_1(\mu_n,\mu)$ is immediate from general concentration properties of
the ensemble, but the eigenvalue rigidity itself can be used as a
substitute. Indeed,
\[
W_2(\mu_n,\nu_n)^2 \le \frac{1}{n} \sum_{j=1}^n \abs{\lambda_j-\tilde{\lambda}_j}^2,
\]
and so
\[
\mathbb{P}\left[W_2(\mu_n,\nu_n)^2 > t\right]
\le \mathbb{P}\left[\sum_{j=1}^n\abs{\lambda_j-\tilde{\lambda}_j}^2 > nt\right]
\le \sum_{j=1}^n
\mathbb{P}\left[\abs{\lambda_j-\tilde{\lambda}_j}^2 > t\right].
\]
For most of the eigenvalues the eigenvalue rigidity about
$\tilde{\lambda}_j$ is strong enough to bound this quite sharply; as
before, for about $\sqrt{n\log(n)}$ of the largest eigenvalues a
more trivial bound is used. Since this approach does
not produce a particularly clean tail inequality for
$W_2(\mu_n,\nu_n)$, we will instead simply state the almost-sure
convergence rate which follows by the Borel--Cantelli lemma.
\begin{thm}\label{T:as-rate}
Let $\mu_n$ denote the empirical spectral measure of
$\frac{1}{\sqrt{n}} G_n$, and let $\mu$ denote the uniform measure
on the unit disc in $\mathbb{C}$. Then with probability $1$, for
sufficiently large $n$,
\[
W_2(\mu_n, \mu) \le C \frac{\sqrt{\log n}}{n^{1/4}}.
\]
\end{thm}
\nocite{Chatterjee, Kargin, ChLe, NgWa}
\section*{Acknowledgements}
This research was partially supported by grants from the U.S. National Science Foundation
(DMS-1308725 to E.M.) and the Simons
Foundation (\#315593 to M.M.). This paper is an expansion of the first-named
author's talk at the excellent workshop ``Information Theory and
Concentration Phenomena'' at the Institute for Mathematics and its
Applications, as part of the IMA Thematic Year on Discrete Structures:
Analysis and Applications. The authors thank the IMA for its hospitality.
\bibliographystyle{plain}
\section{Introduction}
Denote by $U = \{ 1, ..., N\}$ the population of units-in-space, or simply \emph{units}. By \emph{graph spatial sampling (GSS)}, one would first \emph{design} a graph $G = (U, A)$ and then sample from $G$ -- hence its node set $U$ -- by graph sampling methods (Zhang, 2022; Zhang and Patone, 2017). The key idea is to sensibly introduce the edge set $A$, for which we consider only simple undirected graphs in this paper, in order to achieve certain desired spatial properties.
For instance, many spatial sampling methods aim to drastically reduce the chance of sampling contiguous (or nearby) units, compared to directly sampling from $U$ by non-spatial methods. To illustrate the idea in terms of GSS, three graphs are given in Figure \ref{fig:GSS} for sampling 2 out of 9 spatial units given as the nodes in $G$, where the edges defining the adjacency among the nodes are introduced in various ways. Depending on a chosen graph, one can employ different means for reducing the chance of selecting two contiguous units.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{plot-G1-G3.pdf}
\caption{$G_1$, adjacency among all contiguous units; $G_2$, circular adjacency mostly among contiguous units; $G_3$, all contiguous units are non-adjacent} \label{fig:GSS}
\end{figure}
First, in the popular generalised random tessellation stratified (GRTS) method (Stevens and Olsen, 2004), systematic sampling is applied to the units in $U$ arranged on a path of $N-1$ edges, which is a special graph $G = (U, A)$. More generally, the nodes can be arranged in a circle, as in the graph $G_2$ in Figure \ref{fig:GSS}. One can select a systematic sample of size 2 along the circle either clockwise or anti-clockwise. One can obtain an $(N-1)$-path for GRTS design by deleting one edge from $G_2$; however, it would then be impossible to select a systematic sample that is always of size 2. Thus, the approach of GSS encompasses GRTS sampling.
Moreover, instead of tessellation, one can also consider the graph $G_3$ in Figure \ref{fig:GSS}, where none of the contiguous units are adjacent. As it will be explained later, one can apply random walk without backtracking in $G_3$ and take as the sample the nodes visited by two successive steps of the walk at equilibrium, which are never contiguous.
The local pivotal method (LPM) by Grafström et al. (2012) is another popular spatial sampling method, which can be applied to the graph $G_1$ in Figure \ref{fig:GSS}, where the contiguous units (adjacent in $G_1$) constitute the nearest units. As will be shown later, the LPM greatly reduces the chance of selecting an adjacent pair of nodes in $G_1$, compared to random sampling from $U$ directly; whereas GSS from $G_3$ avoids this altogether as described above.
Notice that all the designs above are immeasurable. The GSS design using $G_3$ can be made measurable by allowing `random jumps' in addition, as will be explained later. However, contiguous units can then be selected, e.g. by a random jump from 1 to 2. Generally, measurability is not considered a priority in spatial sampling for the sake of improved efficiency, but it does create a problem for variance estimation when the sampling is without replacement.
Below, we first develop a general technique of walk sampling from graphs in Section \ref{technique}, which achieves the desired sampling probabilities. In Section \ref{method}, we explain and illustrate how graph sampling can provide a more flexible approach to spatial sampling, compared to the existing popular methods. A discussion of some future topics is given in Section \ref{discussion}.
\section{Lagged Metropolis-Hastings walk} \label{technique}
Given $G = (U, A)$, let $a_{ij} =1$ if $(ij)\in A$ and 0 otherwise. Let $d_i = \sum_{j\in U} a_{ij}$ be the degree of node $i$. We assume $d_i \geq 2$ for all the nodes in $G$ and there are no loops, such as the case with all the graphs in Figure \ref{fig:GSS}. At discrete time step $t>1$, let $X_t$ denote the state (i.e. the current node) of a \textit{lagged Metropolis-Hastings walk (LMHW)} in $G$, where $(X_0, X_1)$ are the two initial states, and the LMHW transition probability is given by
\begin{align}
p_{(ih)j} \coloneqq \Pr(X_{t+1} =j \mid X_t =h, X_{t-1} =i) & = \frac{r u_j }{d_h +r}
+ \mathbb{I}(j=i) \frac{w a_{hj}}{d_h +r} \min\Big( \frac{u_j}{u_h}, 1 \Big) \notag \\
& \hspace{-20mm} + \mathbb{I}(j\neq i) \frac{d_h- w a_{ih}}{d_h +r} \big( \frac{a_{hj}}{d_h - a_{ih}} \big) \min\Big(\frac{u_j}{u_h}, 1\Big)
\label{LMHW}
\end{align}
where $\bm{u} = (u_1, ..., u_N)$ is a positive \emph{preference vector} satisfying $\sum_{i\in U} u_i = 1$. That is,
the walk either jumps randomly from $X_t = h$ to any node $j$ (in $U$) with the probability $u_j r/(d_h + r)$, or it moves to an adjacent node $j$ with the probability $d_h/(d_h +r)$. In the latter case, it can either backtrack to the \emph{previous} $X_{t-1} =i$ (if adjacent) with a probability regulated by $w$ or move to \emph{another} adjacent node, both of which are subject to a Metropolis-Hastings (MH) acceptance mechanism, hence the term LMHW. There would be no backtracking under LMHW if $w=0$, and random jumps are disallowed if $r=0$ as long as $G$ is connected.
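For concreteness, one step of the LMHW \eqref{LMHW} can be implemented as
follows (a minimal Python sketch; \texttt{A} is the 0/1 adjacency matrix,
and $d_i \ge 2$ with no loops is assumed as above):
\begin{verbatim}
import numpy as np

def lmhw_step(A, u, r, w, prev, cur, rng):
    # One transition X_{t-1} = prev, X_t = cur  ->  X_{t+1}.
    d = A.sum(axis=1)
    if rng.random() < r / (d[cur] + r):
        return rng.choice(len(u), p=u)    # random jump, no MH step
    if A[cur, prev] and rng.random() < w / d[cur]:
        prop = prev                       # backtrack, regulated by w
    else:                                 # another adjacent node
        nbrs = np.flatnonzero(A[cur])
        if A[cur, prev]:
            nbrs = nbrs[nbrs != prev]
        prop = rng.choice(nbrs)
    # Metropolis-Hastings acceptance; a rejection keeps the walk at cur
    return prop if rng.random() < min(u[prop] / u[cur], 1.0) else cur
\end{verbatim}
One can check that the resulting transition probabilities coincide with
\eqref{LMHW}, including the residual mass at $j=h$ left by rejected
proposals.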
The LMHW \eqref{LMHW} generalises the lagged random walk (LRW) proposed by Zhang (2021), where $u_i \equiv 1$, i.e.\ without the MH mechanism. Moreover, in the case of $w=1$, the LRW reduces to the targeted random walk (TRW) of Avrachenkov et al. (2010), under which there is no difference between the previous state $X_{t-1}$ (if adjacent) and the other nodes adjacent to $X_t$. Thompson (2006) considers a random walk (not lagged) subject to the MH acceptance mechanism.
The process $\{ X_t : t\geq 0\}$ is non-Markovian if $w<1$. Let $\bm{x}_t = (X_{t-1}, X_t)$ for $t\geq 1$. Given any initial $\bm{x}_1 = (X_0, X_1)$, LMHW \eqref{LMHW} generates a Markov chain $\{ \bm{x}_t : t\geq 1\}$, since
\begin{equation} \label{markov}
\Pr(\bm{x}_{t+1} | \bm{x}_t, ..., \bm{x}_1) = \Pr(\bm{x}_{t+1} | \bm{x}_t) = \Pr(X_{t+1} | \bm{x}_t)
\end{equation}
It is irreducible if $G$ is connected or if random jumps are generally allowed, such that there exists a unique stationary distribution, $\Pr\big(\bm{x}_t = (h,j)\big)$, which is given by
\begin{equation} \label{px}
p_{(hj)} = \sum_{i\in U} p_{(ih)} p_{(ih)j}
\end{equation}
A unique stationary distribution of $X_t$ follows, which satisfies the \emph{mixed} equation
\begin{equation} \label{mix}
p_h \coloneqq \Pr(X_t = h) = \sum_{i\in U} \Pr\big( \bm{x}_t = (i,h)\big)
= p_h p_{hh} + \sum_{\substack{\bm{x} = (i,h)\\ i\in \nu_h}} p_{\bm{x}} + \sum_{\substack{i\not \in \nu_h\\ i\neq h}} \frac{p_i r u_h}{d_i +r}
\end{equation}
where $p_{hh} \coloneqq \Pr(X_t = h | X_{t-1}=h)$ at equilibrium, and $\nu_h = \{ i : a_{ih} =1, i\in U\}$ is the \emph{neighbourhood} of $h$ (containing its adjacent nodes), and a transition from $h$ to any node outside $\nu_h$ can only be accomplished by a random jump. Notice that $\bm{x}_t = (h,h)$ for $t>1$ is possible if a random jump from $h$ lands on $h$ itself, or if a proposed move to an adjacent node is rejected. Appendix \ref{proof} gives a proof that the stationary probability is given by
\begin{equation} \label{p}
p_h \propto (d_h + r) u_h
\end{equation}
We have $p_h = \pi_h/n$ if $(d_h + r) u_h \propto \pi_h$ for all $h\in U$, where $\{ \pi_i : i\in U\}$ are the given sample inclusion probabilities by $\pi$ps sampling without replacement from $U$, where $\sum_{i\in U} \pi_i = n$.
\section{GSS by LMHW} \label{method}
\subsection{Equal-probability spatial sampling without replacement}
\emph{Equal-probability spatial sampling without replacement (EpSSWoR)} of sample size $n$ has the same sample inclusion probability $n/N$ as simple random sampling without replacement (SRSWoR) from $U$ directly, but the second-order inclusion probability of SRSWoR (which is the same for any pair of distinct units) can be modified to achieve desired spatial properties.
LMHW can yield a GSS method for EpSSWoR. To ensure sampling without replacement, it is necessary to set $(r,w) = (0,0)$. In addition, to achieve equal probability \eqref{p} at equilibrium and to remove the possibility of rejecting any proposed transition to an adjacent node, devise a connected \emph{2-regular} graph $G$, where $d_i \equiv 2$, and set $u_i \equiv 1$. Since $d_i \equiv 2$ and there is no backtracking, $X_{t+1}$ must be the neighbour of $X_t$ other than $X_{t-1}$; a connected 2-regular graph is a single cycle of length $N$, so the walk simply traverses the cycle. It follows that any $n$-sequence of states $(X_t, X_{t+1}, ..., X_{t+n-1})$ is a sample of $n$ distinct units from $U$, where $p_h \equiv 1/N$ for any node $h$ in the sequence.
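As a minimal sketch (in Python), EpSSWoR from a connected 2-regular graph amounts to choosing a random starting node and a random initial direction on the cycle; the example cycle below is a hypothetical node order of the type shown in Figure \ref{fig:2G}:
\begin{verbatim}
import numpy as np

def epsswor_sample(cycle, n, rng):
    # cycle: the nodes of a connected 2-regular graph in cycle order.
    # Random start and direction; the walk is deterministic after
    # that, so any n <= N consecutive states are distinct, p_h = 1/N.
    N = len(cycle)
    start = rng.integers(N)
    step = rng.choice([-1, 1])
    return [cycle[(start + step * t) % N] for t in range(n)]

rng = np.random.default_rng(4)
print(epsswor_sample([1, 6, 8, 3, 7, 2, 4, 9, 5], 3, rng))
\end{verbatim}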
\subsubsection{Illustration}
Now, one can devise the 2-regular graph $G$ according to the desirable spatial sampling properties. Suppose one would like to reduce the probability of selecting contiguous units, denoted by $\xi$. Consider below the spatial population $U$ in Figure \ref{fig:GSS} for an illustration.
First, let the sample size be 2. There are 12 contiguous pairs (as can be seen in $G_1$) out of 36 possible pairs of distinct units, such that $\xi = 1/3$ under SRSWoR from $U$ directly. Simulations of the LPM1 version of the LPM (Grafström et al., 2012) from $G_1$ yield $\xi = 0.116$. The GRTS method cannot ensure the sample size is always 2 in this case. For GSS by clockwise systematic sampling from $G_2$ in Figure \ref{fig:GSS}, there are 9 systematic samples of size 2, i.e. $\{1,6\}$, $\{5,9\}$, ..., $\{7,2\}$ and $\{4,3\}$, where only $\{8,5\}$ contains contiguous units, such that $\xi =1/9 = 0.111$.
Meanwhile, for EpSSWoR by LMHW \eqref{LMHW}, one can use a 2-regular graph that does not contain any edge connecting two contiguous units in $U$. There are many such graphs, two of which are shown in Figure \ref{fig:2G}. Using such a 2-regular graph, we obtain $\xi = 0$ by construction.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{plot-G4-G5.pdf}
\caption{Two 2-regular graphs for EpSSWoR from 9 units-in-space} \label{fig:2G}
\end{figure}
Next, let the sample size be 3. There are $84$ distinct samples by SRSWoR from $U$, where 22 of them do not contain any contiguous units, such that $\xi =62/84 =0.738$. Simulations of the LPM1 from $G_1$ yield $\xi = 0.478$. There are 3 systematic samples by the GRTS method, because $N/n = 3$. For instance, let the 8-path be given by $(1,4,7,8,9,6,3,2,5)$, i.e. removing the edge between 1 and 5 in $G_2$, the three samples are $\{1,8,3\}$, $\{4,9,2\}$ and $\{7,6,5\}$, such that $\xi = 1/3$. The same holds for GSS by systematic sampling from $G_2$ in Figure \ref{fig:GSS}.
Meanwhile, for EpSSWoR by LMHW from $G_4$ (Figure \ref{fig:2G}), there are 9 distinct samples, where contiguous units are present in the four samples $\{3,9,2\}$, $\{9,2,8\}$, $\{4,6,7\}$ and $\{6,7,5\}$, such that $\xi = 4/9 = 0.444$. However, suppose one instead adopts $G_5$ in Figure \ref{fig:2G}, then only the sample $\{6,2,9\}$ contains contiguous units, such that $\xi = 1/9 = 0.111$.
\subsubsection{Implementation} \label{G2r}
One can construct 2-regular graphs by means of the recursive partitions used for GRTS design. The example of Stevens and Olsen (2004) is given in Figure \ref{fig:part} (left), containing 64 units divided into 16 parts. Instead of connecting the nearby units as in the GRTS design, one can connect the more distant units, as illustrated for the 0-units in Figure \ref{fig:part} (right), which are non-contiguous due to the other units 1, 2, 3. The starting and end nodes are underlined in Figure \ref{fig:part}. Without loss of generality, suppose the one in the bottom-left corner is the end node. One can connect it to one of the 1-nodes that is not contiguous to the starting 0-node. Similarly for the other units 1, 2, 3. Finally, since the 3-nodes and 0-nodes are never contiguous here, connecting the end 3-node and the starting 0-node yields a non-contiguous 2-regular graph $G$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{GRTS-recursivePartition.png} \hspace{20mm}
\includegraphics[scale=1.0]{plot-0nodes.pdf}
\caption{Recursive partition for GRTS design (left), 0-nodes in a 2-regular graph (right)} \label{fig:part}
\end{figure}
Numerous 2-regular graphs can be devised like this; denote the collection of them by $\mathbb{G}$. For each $G$ in $\mathbb{G}$, let $\Omega_G$ contain the $N$ possible samples of the given size $n$. One can either calculate or simulate some \emph{design measure} over $\Omega_G$, denoted by $\tau_G$, such as $\xi$ above or the expected sample spatial balance (SSB) measure of Stevens and Olsen (2004) or the sampling variance given $\{ y_i : i\in U\}$ generated by a suitable spatial population model. One can explore $(G, \tau_G)$ over $G\in \mathbb{G}$ and choose the graph $G$ that has the best design measure $\tau_G$.
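The design measure $\xi$, for example, can be computed exactly for each candidate 2-regular graph by enumerating the $N$ samples; a Python sketch (the set of contiguous pairs must be supplied for the population at hand):
\begin{verbatim}
from itertools import combinations

def xi(cycle, n, contiguous):
    # Share of the N size-n samples (consecutive nodes on the cycle)
    # containing at least one contiguous pair of units.
    N = len(cycle)
    bad = sum(
        any(frozenset(p) in contiguous for p in combinations(
            [cycle[(s + t) % N] for t in range(n)], 2))
        for s in range(N))
    return bad / N
\end{verbatim}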
Given EpSSWoR by GSS-LMHW, a design-unbiased estimator of the population total
\[
Y = \sum_{i\in U} y_i
\]
is the Horvitz-Thompson estimator. However, unbiased estimation of its sampling variance is impossible as long as the sampling design is immeasurable.
\subsection{Unequal-probability spatial sampling}
Let the graph $G$ for GSS be connected so that random jumps are unnecessary and set $r=0$. To further reduce the chance of selecting the same node more than once, set $w=0$ so that $\Pr(X_{t+1} = X_{t-1}) = 0$. Finally, set the preference vector $\bm{u}$, such that
\[
p_i = d_i u_i \eta = \frac{\pi_i}{n} \qquad\text{and}\qquad u_i = \frac{\pi_i}{n d_i \eta} \qquad\text{and}\qquad \eta = \frac{1}{n} \sum_{i\in U} \frac{\pi_i}{d_i}
\]
at equilibrium for any $i\in U$. Now, as long as $u_i$ is not a constant over $U$, one cannot avoid selecting some node more than once due to the rejections. Since the inclusion probability of any given node in $\{ X_{t+1}, X_{t+2}, ..., X_{t+m} \}$ becomes intractable as $m$ increases, we use the stationary sampling probabilities $p_i$ for unbiased estimation of the total $Y$.
\subsubsection{Based on $m$-sequence at equilibrium}
Let $(X_{t+1}, X_{t+2}, ..., X_{t+m})$ be a sequence of $m$ states from the LMHW at equilibrium. An unbiased estimator of $Y$ can be given in various forms as
\begin{equation} \label{HH}
\hat{Y}_W = \frac{1}{m} \sum_{j=1}^m \frac{y_{X_{t+j}}}{p_{X_{t+j}}}
= \frac{n}{m} \sum_{j=1}^{m} \frac{y_{X_{t+j}}}{\pi_{X_{t+j}}}
= \frac{1}{m} \sum_{j=1}^{m} \sum_{i\in U} \frac{y_{X_{t+j}}}{p_{X_{t+j}}} \mathbb{I}(X_{t+j} = i)
\end{equation}
This includes $\pi_i \equiv n/N$ as a special case. We have $E(\hat{Y}_W | m) = Y$ because $\mathbb{E}\left[ \mathbb{I}(X_{t+j} = i)\right] = p_i$ given any time step $t+j$ under LMHW at equilibrium. Notice that the $m$ states in \eqref{HH} are not independent of each other, although the first expression of $\hat{Y}_W$ looks the same as the Hansen-Hurwitz estimator under sampling with replacement.
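In code, the estimator \eqref{HH} is a one-liner given the walk states at equilibrium (a sketch in Python, with \texttt{y} and \texttt{pi} arrays indexed by unit):
\begin{verbatim}
import numpy as np

def y_hat_w(states, y, pi, n):
    # Unbiased estimator of Y from an m-sequence at equilibrium,
    # using the stationary probabilities p_i = pi_i / n.
    states = np.asarray(states)
    return (n / len(states)) * np.sum(y[states] / pi[states])
\end{verbatim}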
Let $(X_{t+a}, ..., X_{t+b})$ be a subsequence of $(X_{t+1}, ..., X_{t+m})$, where $1\leq a \leq b\leq m$. It is said to be a \emph{tie} of order $b-a+1$ for some node $h\in U$, denoted by $\kappa_{a,b}^h$, if
\[
X_{t+a-1} = i \neq h,~ X_{t+a} = \cdots = X_{t+b} = h,~ X_{t+b+1} = j \neq h
\]
Appendix \ref{tie} gives an unbiased estimator of $Y$ based on all the ties in an $m$-sequence. However, simulations suggest that it is typically less efficient than the simpler estimator \eqref{HH}.
\subsubsection{Illustration} \label{style}
Consider the following stylised examples of spatial populations $\bm{y}_U$ for Figure \ref{fig:GSS}:
\begin{align*}
& \left[ \begin{array}{ccc} 1 & 2 & 1\\ 2 & 3 & 2\\ 1 & 2 & 1 \end{array} \right] \qquad
\left[ \begin{array}{ccc} 3 & 2.5 & 2\\ 2.5 & 2 & 1.5\\ 2 & 1.5 & 1 \end{array} \right] \qquad
\left[ \begin{array}{ccc} 3 & 2 & 1\\ 2 & 1 & 2\\ 1 & 2 & 3 \end{array} \right] \qquad
\left[ \begin{array}{ccc} 3 & 2 & 3\\ 2 & 1 & 2\\ 3 & 2 & 3 \end{array} \right] \\
& \hspace{4mm} \text{Centre} \hspace{22mm} \text{Corner} \hspace{23mm} \text{Polar} \hspace{19mm} \text{Vortex}
\end{align*}
Let $\bm{\pi}_U$ be all equal if $\pi_5/\pi_i \equiv 1$, or unequal if $\pi_5/\pi_i \equiv 2$ or $\pi_5/\pi_i \equiv 0.5$, for all $i\neq 5$. We apply the LPM1 (Grafström et al., 2012) to select a sample of size 2, as well as LMHW sampling from each of $G_1$ - $G_5$ with $m=2$ and $r=w=0$. Simulations yield the relative efficiency (RE) of a given sampling method against SRSWoR with $n=2$.
\begin{table}[ht]
\centering
\caption{RE of LPM1 ($n=2$) or GSS ($m=2$) from $G_1$ - $G_5$}
\begin{tabular}{lcrrrrrr} \toprule
$\bm{y}_U$ & $\pi_5/\pi_i$ & LPM1 & $G_1$ & $G_2$ & $G_3$ & $G_4$ & $G_5$ \\ \midrule
\multirow{4}{*}{Centre} & 1 & 0.92 & 1.27 & 0.58 & 0.57 & 0.87 & 0.85 \\
& & & (0.18) & & & & \\ \cmidrule{2-8}
& 2 & 0.62 & 0.32 & 0.08 & 0.56 & 0.84 & 0.82 \\
& & & (0.20) & (0.1) & (0.1) & (0.1) & (0.1) \\ \midrule
Corner & 1 & 0.71 & 1.91 & 1.72 & 0.77 & 0.66 & 0.68 \\ \midrule
\multirow{3}{*}{Polar} & 1 & 1.04 & 1.40 & 0.88 & 0.89 & 0.65 & 0.67 \\ \cmidrule{2-8}
& 0.5 & 0.81 & 1.08 & 0.96 & 0.81 & 0.66 & 0.67 \\
& & & (0.27) & (0.07) & (0.07) & (0.07) & (0.07) \\ \midrule
\multirow{3}{*}{Vortex} & 1 & 0.91 &1.27 & 0.58 & 0.58 & 0.88 & 0.84 \\ \cmidrule{2-8}
& 0.5 & 0.58 & 0.45 & 0.18 & 0.46 & 0.74 & 0.73 \\ \bottomrule
\end{tabular} \label{tab:RE2} \\
Note: Positive $\Pr(n=1)$ by GSS given in parentheses
\end{table}
The results in Table \ref{tab:RE2} are based on $10^4$ simulations of each sampling method given $\bm{y}_U$. It is possible here that $\Pr(n=1) = \sum_{h\in U} p_{(hh)} >0$ under LMHW sampling due to the rejected moves, where the probability depends only on $(G, \bm{\pi}_U)$ but not $\bm{y}_U$. Setting $\pi_5/\pi_i \equiv 2$ can only be plausible for the centre $\bm{y}_U$, similarly as setting $\pi_5/\pi_i \equiv 0.5$ for the polar or vortex $\bm{y}_U$. Given $\pi_5/\pi_i \equiv 2$ for the centre $\bm{y}_U$, the two GSS methods using $G_1$ or $G_2$ select mostly contiguous units, both of which are actually more efficient than the other methods that aim to avoid selecting contiguous units; similarly given $\pi_5/\pi_i \equiv 0.5$ for the vortex $\bm{y}_U$. This serves as a reminder not to treat any particular sample spatial balance property as a panacea for design efficiency, without taking into account the spatial distribution of $\bm{y}_U$.
For equal-probability sampling across the 4 populations, although the LPM1 improves upon SRSWoR except in one case, it is always dominated by some (or all) of the GSS methods using $G_3$ - $G_5$. Among these GSS methods, using $G_4$ or $G_5$ yields essentially the same RE here, using $G_3$ is more efficient for the centre and vortex $\bm{y}_U$ but not otherwise. It is thus important to consider different graph designs for different spatial distributions of $\bm{y}_U$.
\subsection{Comparison of designs by simulation}
Grafström et al. (2012) suggest the LPM can yield large gains over the GRTS method for populations with smooth spatial trends, particularly in their Example 5 with $400$ units evenly spread over the unit square and $\pi_i \equiv n/N$ for $i\in U$, where the $y$-values are given by
\[
\text{sinTrend:}\quad y(x_1, x_2) = 3(x_1 + x_2) + \sin\{ 6(x_1 + x_2) \}
\]
and $(x_1, x_2)$ are the coordinates. We consider also the four types of $\bm{y}_U$ in Section \ref{style} for this $U$, where $0.5 \leq y_i\leq 5$ for $i\in U$, which is about the same range as the sinTrend $\bm{y}_U$ above.
\begin{table}[ht]
\centering
\caption{RE and ESSB of LPM1, $G_6$SS or $G_7$SS}
\begin{tabular}{llcccccc} \toprule
& & \multicolumn{5}{c}{RE} & \\ \cmidrule{3-7}
Sample & Method & sinTrend & Centre & Corner & Polar & Vortex & ESSB \\ \midrule
\multirow{3}{*}{$n=16$} & LPM1 & 0.151 & 0.248 & 0.127 & 0.221 & 0.244 & 0.080 \\
& $G_6$SS & 0.561 & 0.025 & 0.801 & 0.060 & 0.025 & 0.079 \\
& $G_7$SS & 0.044 & 1.371 & 0.016 & 1.047 & 1.362 & 0.192 \\ \midrule
\multirow{3}{*}{$n=32$} & LPM1 & 0.090 & 0.147 & 0.072 & 0.132 & 0.150 & 0.074 \\
& $G_6$SS & 0.925 & 0.020 & 1.288 & 0.077 & 0.020 & 0.111 \\
& $G_7$SS & 0.027 & 1.489 & 0.009 & 1.227 & 1.543 & 0.238 \\ \midrule
\multirow{3}{*}{$n=48$} & LPM1 & 0.067 & 0.111 & 0.053 & 0.098 & 0.114 & 0.079 \\
& $G_6$SS & 1.138 & 0.015 & 1.595 & 0.085 & 0.015 & 0.154 \\
& $G_7$SS & 0.022 & 1.421 & 0.007 & 1.321 & 1.375 & 0.238 \\ \bottomrule
\end{tabular} \label{tab:RE20}
\end{table}
Two 2-regular graphs are used for GSS here. The graph $G_6$ follows the description in Section \ref{G2r} (Figure \ref{fig:part}), with the $4\times 4$-partition of $U$ and 25 nodes in each part. The graph $G_7$ uses the $2\times 2$-partition as follows. First, index each unit $(x_1, x_2)$ as $(r_1, r_2)$, where $r_1$ is the rank of $x_1$ and $r_2$ that of $x_2$. Next, each pair of units $(r_1, r_2)$ and $(20-r_1+1, 20-r_2+1)$ are made adjacent, for $r_1, r_2 = 1, ..., 20$, i.e. between top-left and bottom-right parts as well as between top-right and bottom-left parts. Finally, the units in the top-left and bottom-left parts are randomly paired to be adjacent, likewise for the top-right and bottom-right parts.
Table \ref{tab:RE20} gives the RE-results (each by $10^4$ simulations) and the expected sample spatial balance (ESSB), where the sample size $n\in \{ 16, 32, 48\}$ as in Grafström et al. (2012). For any $n$, $G_6$SS improves greatly over LPM1 for the Centre and Vortex $\bm{y}_U$, whereas $G_7$SS does so for the Corner and sinTrend $\bm{y}_U$. For the Polar $\bm{y}_U$, the RE is seen to become closer between LPM1 and $G_6$SS as $n$ increases, while both are considerably more efficient than SRSWoR. Notice that, since the ESSB is a constant given $n$ here, whatever the spatial population $\bm{y}_U$, one cannot anticipate the design efficiency \emph{only} based on such a measure.
There exists a trend along $x_1+x_2$ in both the Corner and sinTrend $\bm{y}_U$, apart from a sinusoidal undulation in the latter. The results suggest that the merits of $G_7$SS vs. LPM for the sinTrend $\bm{y}_U$ can be anticipated based on the Corner $\bm{y}_U$. Due to the structural similarity between the Centre and Vortex $\bm{y}_U$, the merits of $G_6$SS vs. LPM for one population can be anticipated from that for the other. The results for the Polar $\bm{y}_U$ suggest there may be room for improving the graph design for GSS as $n$ increases for this and similar spatial populations.
\section{Some future topics} \label{discussion}
Random walk has numerous applications (e.g. Masuda et al., 2017; Brin and Page, 1998). LMHW offers a more flexible technique, which allows one to choose the desired stationary probabilities via the preference vector $\bm{u}$ while controlling the probability of back-tracking by $w$. It can be considered for many problems beyond spatial sampling.
Both the GRTS method and the LPM can be motivated from the perspective of improving the expected SSB compared to sampling from $U$ directly. GSS provides a flexible approach to accommodate the anticipated spatial distribution of $\bm{y}_U$ in addition. It encompasses the GRTS method and, as illustrated above, suitable graph designs can yield large gains over the LPM. To facilitate the practice of GSS, one should develop suitable graph design algorithms that scale as the population size increases, and investigate their properties for various typical spatial distributions of $\bm{y}_U$ in a more systematic manner.
For spatial sampling without replacement from $U$, variance estimation does not admit a theoretical solution. For GSS that allows for repeated selection of a given unit by LMHW, one can initiate multiple independent walks, each yielding an unbiased estimator \eqref{HH} --- one can use the mean of them to estimate $Y$ and use the between-walk variance of them for unbiased variance estimation, which is a standard technique in MCMC.
\section{Introduction}
According to Kohn's theorem
\cite{Kohn61:1242,Maksym90:108,Bakshi90:7416,Pfannkuche93:6} a spatially
homogeneous electric field can only excite the rigid center of mass motion of
all the electrons confined parabolically in a quantum dot or wire.
In far-infrared (FIR) spectroscopy this condition is usually satisfied due to
the small size of the individually isolated dots and wires
compared to the wavelength of the radiation.
In order to detect some of the internal structure or relative motion of the
two-dimensional electron gas (2DEG) through FIR spectroscopy
the confining potential has to deviate from the parabolic form.
The deviation is found to influence the FIR response of the 2DEG in two
different ways, besides a trivial blue or red shift:
First, the center of mass mode (the magnetoplasmon)
interacts with higher order harmonics of the cyclotron resonance forming
complicated anticrossing patterns \cite{Demel90:788a,Gudmundsson95:17744}.
Second, the formation of compressible and incompressible stripes
\cite{Beenakker90:216,Chklovskii92:4026,Lieb95:10646}
in the electron density modulates slightly the dispersion of the lower
lying branch in quantum dots \cite{Bollweg96:2774}.
In addition, effects of the stripe formation in quantum
dots have been measured both in tunneling \cite{Vaart94:320} and
transport experiments \cite{Stopa96:2145}.
In this report we elucidate further the connection between the
modulation of the lower magnetoplasmon branch in quantum dots
and the formation of compressible and incompressible stripes
in the electron density. We explain the apparent weakness of this
phenomenon in the upper main magnetoplasmon branch in quantum dots
and the main branch in quantum wires. In both systems we find
similar modulation in the dispersion of weak higher order
collective oscillations.
In order to have consistent results for the absorption we use
a corresponding self-consistent method for the ground state properties
and the excited states of the 2DEG.
\section{Model}
The ground state properties of a single quantum dot or a wire
are calculated using the Hartree approximation for the
electron-electron Coulomb interaction
\cite{Gudmundsson95:17744,Gudmundsson91:12098}.
The spin degree of freedom is neglected.
The circular symmetric lateral confinement
potential for the 2DEG in the quantum dot is
\begin{equation}
V_{\mbox{conf}}(r)=\frac{1}{2}\hbar\omega_0\left[\left(
\frac{r}{l_0}\right)^2+a\left(\frac{r}{l_0}\right)^4\right] ,
\label{dot_conf}
\end{equation}
where $\hbar\omega_0$ is the characteristic energy of the parabolic part
of the potential and the confinement length is
$l_0=\sqrt{\hbar /(m^*\omega_0)}$. The dielectric constant and the effective
mass of an electron are denoted by $\kappa$ and $m^*$, respectively.
The confinement in the third direction, the $z$-direction, is considered
strong enough to justify treating the system as exactly two-dimensional.
The combined effects of the perpendicular homogeneous external magnetic field
${\mathbf B}=B{\mathbf\hat z}$ and $V_{\mbox{conf}}$ produce the
effective frequency of the dot
$\tilde\omega =(\omega_c^2+4\omega_0^2)^{\frac{1}{2}}$ and the effective
length $\lambda =[\hbar /(m^*\tilde\omega)]^{\frac{1}{2}}$, replacing the
cyclotron frequency $\omega_c=eB/(m^*c)$ and the magnetic length
$l_c=[\hbar /(m^*\omega_c)]^{\frac{1}{2}}$, respectively.
The single electron Hartree energies
$\varepsilon_{n,M}$ and the states $|n,M)$ are
labeled by the Landau-band (LB) index $n=0,1,2,\cdots$ and the
angular quantum number $M=-n,\cdots ,0,1,2,\cdots$.
The confinement potential for the quantum wire extended
in the $x$-direction is of the form
\begin{equation}
V_{\mbox{conf}}(y)=\frac{1}{2}\hbar\omega_0\left[\left(
\frac{y}{l_0}\right)^2+a\left(\frac{y}{l_0}\right)^4\right] .
\label{wire_conf}
\end{equation}
The effective frequency for the transverse motion of the 2DEG is
$\Omega =(\omega_c^2+\omega_0^2)^{\frac{1}{2}}$ and the effective width
is $L=[\hbar /(m^*\Omega)]^{\frac{1}{2}}$. The single electron
Hartree energies $\varepsilon_{n,y_k}$ and the states $|n,y_k)$ are labeled
by the Landau-band index $n=0,1,2,\cdots$ and the center coordinate
$y_k=kL^2=2\pi L^2N/L_x$,
where $N$ is an integer and $L_x$ is the length of the wire in
the $x$-direction. The coefficient $a$ determines the deviation from
the parabolic confinement.
The absorption of the quantum dot \cite{Gudmundsson91:12098}
and wire \cite{Gudmundsson95:17744,Brataas96:jp} is calculated
as a linear response to the self-consistent electrostatic potential
$\phi_{\mbox{sc}}=\phi_{\mbox{ext}}+\phi_{\mbox{ind}}$, where
$\phi_{\mbox{ind}}$ is the induced potential and $\phi_{\mbox{ext}}$
is the external potential. In case of the quantum wire
$\phi_{\mbox{ext}}({\mathbf r},t)=y{\cal E}_{\mbox{ext}}\exp(-i\omega t)$
and for the quantum dot $\phi_{\mbox{ext}}({\mathbf r},t)=
r{\cal E}_{\mbox{ext}}\exp(-iN_p\varphi-i\omega t)$. The external
electrostatic field is linearly polarized transverse to the quantum wire
but in case of the quantum dot it is circularly polarized with
the choice $N_p=\pm 1$. The power absorption of the systems is calculated
from the Joule heating of the 2DEG caused by $\phi_{\mbox{sc}}$.
Kohn's theorem states that if the confinement potential is parabolic
and the external electric field is spatially homogeneous then only center
of mass motion can be excited
\cite{Kohn61:1242,Maksym90:108,Bakshi90:7416,Pfannkuche93:6}.
In the wire the dispersion is then
\begin{equation}
\Omega_p = \Omega = \sqrt{\omega_c^2+\omega_0^2} ,
\end{equation}
and for the dot the dispersion has two branches
\begin{equation}
\omega_{\pm} = \frac{1}{2}\sqrt{\omega_c^2+4\omega_0^2}
\pm\frac{\omega_c}{2} ,
\label{QD_disp}
\end{equation}
with $\omega_{-}$ corresponding to the polarization $N_p=+1$ and
$\omega_{+}$ to the choice $N_p=-1$.
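For orientation, the dispersion \eqref{QD_disp} is easily evaluated for the GaAs parameters used below; a small Python sketch, where we use the standard conversion $\hbar\omega_c \approx 1.73\,B\:$meV for $B$ in tesla and $m^*=0.067m_0$:
\begin{verbatim}
import numpy as np

HW_C_PER_T = 1.728   # hbar*omega_c in meV per tesla, m* = 0.067 m_0

def dot_dispersion(B, hw0=3.37):
    # The two Kohn modes of the ideal parabolic dot, in meV.
    wc = HW_C_PER_T * B
    root = np.sqrt(wc**2 + 4 * hw0**2)
    return 0.5 * (root + wc), 0.5 * (root - wc)   # omega_+, omega_-

for B in (0.0, 2.0, 4.0, 6.0):
    wp, wm = dot_dispersion(B)
    print(f"B = {B:.0f} T: hw+ = {wp:.2f} meV, hw- = {wm:.2f} meV")
\end{verbatim}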
\section{Results of model calculation}
The calculations for the quantum dot have been performed using GaAs parameters,
$m^*=0.067m_0$ and $\kappa =12.4$. The results for
a confinement energy $\hbar\omega_0 =3.37\:$meV are shown in Fig.\ \ref{QD_3sub}.
The number of electrons, $N_s=60$, guarantees that states in more
than one Landau band are occupied for $1<B<6\:$T. The dispersion of the
$\omega_-$ mode ($N_p=+1$) shows slight oscillations that correlate
with the average filling factor of the 2DEG or the location of the
chemical potential $\mu$ with respect to the Landau bands.
The density profiles of the 2DEG for $B=3.7\:$T and $4.9\:$T corresponding,
respectively, to a minimum and a maximum in the oscillations of
the $\omega_-$ mode are presented in Fig.\ \ref{QD_3sub}a.
The radial electron density for the whole range of the magnetic field
considered here is seen in Fig.\ \ref{QD_3D_dens}.
Clearly visible are the ``layers'' corresponding to electrons in specific
Landau bands, that are separated by sharp steps in larger quantum dots
in a strong magnetic field, i.e.\ incompressible regions separated by
narrow compressible regions. In the relatively small quantum dot
considered here the steps are not quite so exact due to screening effects,
finite temperature, and the fact that the effective magnetic length $\lambda$
is not that very much smaller than the size of the quantum
dot $R\sim 100\:$nm. For integer values of the average filling factor $\nu$
the chemical potential $\mu$ lies between the bulk parts of two Landau bands
as is seen in Fig.\ \ref{QD_E}a, otherwise $\mu$ lies in the bulk states
of a particular Landau band as Fig.\ \ref{QD_E}b shows. Comparison of
Figs.\ \ref{QD_3sub}--\ref{QD_E} shows that the oscillations of the
$\omega_-$ mode take a maximum value for an integer $\nu$ and concurrently
the steps in the density are clear. The oscillations of the $\omega_-$ mode
with $\nu$ have been explained in terms of an oscillating radius of the
electronic system \cite{Darnhofer96:xx}, an effect due to the screening
properties of the system. We prefer to use the picture of compressible and
incompressible states to explain the oscillations \cite{Bollweg96:2774}.
For the $\omega_-$ mode these pictures are probably equivalent
for intermediate sized
dots, but the latter one is more convenient in case of other modes to be
discussed here. Kohn's theorem is not programmed explicitly into the
numerical calculation but in the density response function all
relevant single electron-hole transitions are summed over and in the case
of a parabolic confinement only the center of mass modes predicted by the
theorem are visible. Slight deviations from the parabolic confinement
result in other modes that in the case of few electrons
\cite{Pfannkuche94:1221} can be traced back to certain single electron
transitions or groups thereof. The dipole active collective
modes of the 2DEG in a
quantum dot are such that only transitions of electrons satisfying
$M\rightarrow M-1$ contribute to the $\omega_+$ mode, so that just
below $\mu$ a hole state with quantum number $M$ is formed but above
$\mu$ an electron state with $M-1$ is formed. An inspection of
Fig.\ \ref{QD_E} shows that the single electron transitions involved
are almost exclusively interband transitions satisfying $n\rightarrow n+1$.
The $\omega_-$ mode, on the other hand, has strong contributions from
intraband $M\rightarrow M+1$, hence its much lower energy.
This is also supported by the induced density which for the
$\omega_-$ mode is usually concentrated close to the boundary of the
quantum dot where the concerning $M$ states have their heaviest weight.
Now the different properties of the $\omega_-$ mode at integer and
non-integer filling factors should be clear: when $\nu$ is not close to
an integer, $\mu$ is pinned to the bulk states of a particular Landau
band, and where that band rises above $\mu$ its slope is much smaller than
at a crossing point with edge states of higher $M$.
The single electron transitions contributing from this area thus lower
the total energy of the collective mode. The oscillations of the
$\omega_-$ mode take a maximum for $\nu\sim\mbox{integer}$ and a minimum
for half integers. The states pinned to $\mu$ are compressible and the
induced density for the $\omega_-$ mode is concentrated around the narrow
stripes of compressible states close to the edge of the dot or close
to the edge of a broader region of compressible bulk states in a
quantum dot with not all bulk states of a particular band filled.
The oscillations of the $\omega_-$ mode are thus a direct consequence
of the formation of compressible and incompressible stripes.
The main contribution to the $\omega_+$ mode comes from transitions
of occupied bulk states ($n,M$) to empty ($n+1,M-1$) states.
The energy of these interband transitions oscillates weakly with
the filling factor, but the transitions all add up to the collective
motion that can be identified as a rigid oscillation
of the center of mass of the
2DEG, thus, the dispersion of the main branch of the $\omega_+$ mode shows
only much weaker oscillations than are present in the $\omega_-$ mode.
Fig.\ \ref{QD_P1m} shows the dispersion of
the $\omega_+$ mode for a confinement potential with the same quartic
deviation from the parabolic form as was used for the calculation of the $\omega_-$ mode
in Fig.\ \ref{QD_3sub}. Around $B=2\:$T the $\omega_+$ mode shows an
anticrossing behavior just left of the line $E=2\hbar\omega_c$, the
magnetoplasmon is interacting with the first harmonic of the cyclotron
resonance, a Bernstein mode \cite{Gudmundsson95:17744}. The emerging lower
branch of the $\omega_+$ mode, the branch corresponding to the rigid
center of mass motion, becomes independent of $\nu$, but the higher,
vanishing mode, which takes on the character of a higher order magnetoplasmon,
oscillates with $\nu$. The quartic deviation from the parabolic confinement
strengthens the contribution of single electron transitions of the form
($n,M$)$\rightarrow$($n+2,M-1$) to the collective oscillations,
thus introducing the higher order magnetoplasmons that are blocked by Kohn's
theorem in a parabolically confined 2DEG. The oscillations of the side
branch of the $\omega_+$ mode (just above the main branch)
are in antiphase to the oscillations of the
$\omega_-$ mode; an explanation can be found in Fig.\ \ref{QD_E}. Without
an interaction between the electrons the Landau bands would be parallel,
and thus equidistant, interaction and screening properties of the 2DEG
change this. When $\mu$ lies in the bulk states of a particular Landau
band, i.e.\ a large compressible region is present in the density, then
each Landau band develops a flat region, but due to the increasing spatial
extent of the wave functions of the higher bands the flat regions shrink
with growing $n$. The relative distance between the Landau bands therefore
has a maximum when a band is pinned to $\mu$ and a large compressible
region in the electron density is present. The finite size of the
wave functions that increases with $n$ also explains why the ``layers''
in the density of a relatively small quantum dot get thinner and smaller
with increasing $n$. A large quantum dot does not exhibit this effect.
The observation of higher order dipole active magnetoplasmons opens the
question what effects the deviation from the parabolic confinement has
on the quadrupole active magnetoplasmon
\cite{Gudmundsson91:12098,Glattli85:1710,Merkt91:7320},
$N_p=\pm 2$, even though it has not been
measured directly in quantum dots. This collective mode has contributions
of single electron transitions with $\Delta M=\pm 2$ and is seen in
Fig.\ \ref{QD_modes} together with the dipole active modes discussed
above. The quadrupole $\omega_+$ mode shows a very clear and simple
$2\omega_c$ anticrossing, much simpler than in the dipole case.
The quadrupole active $\omega_-$ mode shows stronger filling factor
oscillations than the dipole counterpart and the oscillations are in phase
since they have the same origin. In the lower split-off branch of the
quadrupole active $\omega_+$ mode weak oscillations with $\nu$
are found in phase with the dipole counterpart.
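For a noninteracting, parabolically confined dot the connection between
the single electron transitions and the two dipole modes can be made
explicit through the Fock--Darwin spectrum,
$E_{n,M}=(2n+|M|+1)\hbar\tilde{\omega}-M\hbar\omega_c/2$ with
$\tilde{\omega}=(\omega_0^2+\omega_c^2/4)^{1/2}$. The following minimal
Python sketch (the confinement and cyclotron energies are illustrative
values, not the parameters of the Hartree calculations above) verifies
that the interband transitions ($n,M$)$\rightarrow$($n+1,M-1$) carry
exactly the energy $\hbar\omega_+=\hbar\tilde{\omega}+\hbar\omega_c/2$,
whereas the intraband transitions ($0,M$)$\rightarrow$($0,M+1$) carry
$\hbar\omega_-=\hbar\tilde{\omega}-\hbar\omega_c/2$:
\begin{verbatim}
import numpy as np

hbar_w0 = 3.37   # confinement energy in meV (illustrative value)
hbar_wc = 2.0    # cyclotron energy in meV (illustrative value)
wt = np.sqrt(hbar_w0**2 + 0.25 * hbar_wc**2)   # hbar*omega-tilde

def E(n, M):
    """Fock-Darwin energy for radial index n and angular momentum M."""
    return (2 * n + abs(M) + 1) * wt - 0.5 * hbar_wc * M

w_plus, w_minus = wt + 0.5 * hbar_wc, wt - 0.5 * hbar_wc
for M in range(1, 6):
    # (n,M) -> (n+1,M-1) carries the full omega_+ energy ...
    assert np.isclose(E(1, M - 1) - E(0, M), w_plus)
    # ... while the intraband (0,M) -> (0,M+1) carries omega_-
    assert np.isclose(E(0, M + 1) - E(0, M), w_minus)
print(w_plus, w_minus)
\end{verbatim}
In the interacting dot with a quartic deviation from parabolic
confinement, screening deforms these levels, which is what produces the
filling factor dependent oscillations discussed above.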
In light of the results above for quantum dots it would not be unexpected
to find some filling factor dependent effects in quantum wires, even though
the magnetoplasmon in a parabolically confined system
has only one branch corresponding to the $\omega_+$ mode of a quantum dot.
The electronic density of a quantum wire develops clearly separate
compressible and incompressible regions with increasing wire width, and
the steps are also clear in the Hartree energy spectra.
In the case of a dipole excitation the single electron transitions
fulfill the conservation of the center coordinate, $\Delta y_k=0$.
The calculation for the wire has been performed with the following
parameters, $\hbar\omega_0=3.94\:$meV, $a=0.03$, and $\kappa =12.53$, other
parameters are the same as for the quantum dot. The results for the
dispersion are seen in Fig.\ \ref{QW_modes}. Again we have the $2\omega_c$
anticrossing, now just right of the $E=2\hbar\omega_c$ line. For $B>3\:$T
the lower branch gains oscillator strength and corresponds to the rigid
center of mass motion of the whole 2DEG. The upper branch is a higher order
magnetoplasmon as the induced density confirms. The dispersion has a very
similar overall appearance as the dispersion for the $\omega_+$ mode
of the quantum dot seen in Fig.\ \ref{QD_P1m}, and the Hartree energy
spectra show similar properties as the spectra for quantum dots, thus
leading to the same explanation of the origin of the oscillations
with $\nu$.
\section{Summary}
We have been able to explain filling factor dependent oscillations
occurring in the magnetoplasmon dispersion for quantum dots and wires
in terms of the formation of compressible and incompressible regions
in the electronic density of the systems. The oscillations have been
found in the $\omega_-$ mode of quantum dots \cite{Bollweg96:2774} and
some indications of them have been found in wires and in the
$\omega_+$ mode for dots. Their appearance in wires and in the $\omega_+$
mode for quantum dots seems to require the system in question to be
of intermediate size, i.e.\ the effective magnetic length must not be
much smaller than one tenth of the size of the system.
The far-infrared measurements are therefore yet another method to
observe the internal structure of non-parabolically confined quantum
dots and wires, namely the stripes formed by the compressible and incompressible
states.
\acknowledgements{This research was supported in part by the Icelandic
Natural Science Foundation, the University of Iceland Research Fund,
and a NorFA Network Grant.}
\bibliographystyle{prsty}
\section{Introduction}
The study of Dirac points, i.e. the contact points between different energy bands with an approximate linear dispersion relation, has become a major issue since the experimental breakthrough in graphene-based electronics \cite{Novoselov04,Berger04}. Indeed, the low-energy electronic properties of graphene are governed by a pseudo-relativistic 2D Dirac equation for massless fermions situated at the $K$ and $K'$ corners of the Brillouin zone \cite{CastroNeto09}. The Dirac points are topologically protected and a gap is opened only when the inversion symmetry of the lattice or the time-reversal symmetry is broken.
The possibility to generate topological phase transitions in graphene-like systems has recently attracted a great deal of attention. Within a tight-binding description, an anisotropy in the nearest-neighbor hopping parameters makes the Dirac points move away from the high-symmetry $K$ and $K'$ points and, under appropriate conditions, merge at time-reversal invariant points in the first Brillouin zone \cite{Hasegawa06,Dietl,Montambaux09}. Most saliently, this merging of Dirac points is associated with a topological phase transition between a semimetallic phase and a gapped band-insulating phase. An experimental investigation of the merging transition in graphene turns out to be problematic, since in order to appropriately modify the hopping parameters, an unphysically large strain needs to be applied to the graphene sheet \cite{Strain0809}.
An alternative system for the study of such topological transitions is that of ultracold atoms trapped in a honeycomb optical lattice. Since the seminal realization of the superfluid-Mott-insulator transition in the Bose-Hubbard model, ultracold atoms in optical lattices have become promising
systems to emulate condensed-matter physics. Indeed, the lattice geometry, the dimensionality, the atomic species, as well as the interactions can be engineered with a high degree of precision. The more involved triangular and honeycomb geometries were recently realized experimentally, and exotic correlated states of matter have been observed \cite{Honeycomb10} or predicted theoretically \cite{Sorella92,Meng10,Wang11,Bermudez10}.
The application of a time-periodic perturbation to the optical lattice introduces yet another tunable parameter into the system. A periodic shaking of the optical lattice, up to the kHz frequency range, has been implemented by placing one of the mirrors used to create the optical lattice on a piezoelectric material, such that the mirror can be moved back and forth in the direction of the beam \cite{Lignier07,Zenesini09}. The Floquet formalism shows that the hopping energy of the atoms in the shaken lattice is renormalized by a Bessel function, as a function of the shaking frequency and amplitude, thus allowing both the magnitude and the sign of the hopping parameter to change. This rather counter-intuitive phenomenon, as compared to the standard tight-binding physics, has been experimentally observed in a one-dimensional cold-atomic system \cite{Zenesini09}.
In this paper, we consider ultracold fermions trapped in a \textit{shaken} honeycomb optical lattice. Within the Floquet formalism, we derive an effective Hamiltonian that generalizes that of a graphene-like material under strain. In particular, we find that the alignment and merging of Dirac points in momentum space are now accessible with ultracold fermions in the shaken optical lattice and the phase diagram consists of various phases of the corresponding solid-state system that are otherwise difficult to realize. Furthermore, by taking into account a Hubbard-like interaction for spinful fermions, we study the density profiles for the homogeneous and the trapped gas within a Hartree-Fock theory.
The outline of this paper is the following: in Sec.\,\ref{sec01a} we introduce the time-dependent Hamiltonian and in Sec.\,\ref{sec01b} we derive the time-independent one, by applying the Floquet formalism. In Sec.\,\ref{sec02} we investigate the merging and alignment of Dirac points, when the optical lattice is shaken along specific directions. The description is extended to include interactions in Sec.\,\ref{sec03}, where we derive the dependence of the density on the chemical potential. Implications of our results for experiments are discussed in Sec.\,\ref{sec04}. Finally, our conclusions are presented in Sec.\,\ref{conc}.
\section{The shaken honeycomb lattice} \label{sec01}
In this section, we derive a time-independent effective description for ultracold atoms trapped in a periodically shaken honeycomb optical lattice by utilizing a Floquet theory. For simplicity, we focus on a system of single-component fermionic atoms and consider only single-particle terms in this section. The results from the Floquet theory are valid for fermionic atoms with internal degrees of freedom as well as for bosonic atoms. In particular, the hyperfine state of the fermionic atoms, playing the role of an effective spin-$1/2$ degree of freedom analogous to the electron spin, will be considered when interaction effects are taken into account in Sec.\,\ref{sec03}.
\subsection{Time-dependent Hamiltonian} \label{sec01a}
In the tight-binding limit, the system of ultracold fermionic atoms trapped in a 2D \textit{shaken} honeycomb optical lattice can be described by the Hamiltonian
\begin{equation}\label{basicH}
H(t) = H_0 + W(t) ,
\end{equation}
which consists of two distinct parts.
The static part
\begin{align}\label{eq:H0}
H_{0} = &- \gamma \sum_{j=1}^3 \sum_{\textbf{r} \in A} \left( a^\dagger_\textbf{r} b_{\textbf{r} + \textbf{d}_j} + b^\dagger_{\textbf{r} + \textbf{d}_j} a_\textbf{r} \right) \nonumber \\
&- \gamma' \sum^3_{i=1} \sum^3_{j=1,j\neq i} \bigg( \sum_{\textbf{r} \in A} a^\dagger_\textbf{r} \, a_{\textbf{r} + \textbf{d}_i - \textbf{d}_j} + \sum_{\textbf{r} \in B} b^\dagger_\textbf{r} b_{\textbf{r} + \textbf{d}_i - \textbf{d}_j} \bigg) \nonumber \\
&- \mu \bigg( \sum_{\textbf{r} \in A} a^\dagger_\textbf{r} a_\textbf{r} + \sum_{\textbf{r} \in B} b^\dagger_\textbf{r} b_\textbf{r} \bigg)
\end{align}
is simply the tight-binding Hamiltonian in the honeycomb lattice,
where
$a^\dagger_\textbf{r}$ ($b^\dagger_\textbf{r}$) and $a_\textbf{r}$ ($b_\textbf{r}$) are, respectively, fermionic creation and annihilation operators on the lattice site $\textbf{r}$ in the $A$ ($B$) sublattice. The three vectors
\begin{equation}
\textbf{d}_1=d \hat{e}_x, \,\textbf{d}_2=\frac{d}{2}(-\hat{e}_x+\sqrt{3}\hat{e}_y), \,\textbf{d}_3=\frac{d}{2}(-\hat{e}_x-\sqrt{3}\hat{e}_y),
\label{eq:nn-vec}
\end{equation}
connect an $A$-lattice site with its three nearest-neighbor (nn) $B$-lattice sites and are given in terms of the distance $d=8\pi/(3\sqrt{3}\,k)$ between nn sites, where $k$ is the laser wave number (see Fig.\,\ref{fig:lattice}). Here, $\gamma,\gamma'>0$ characterize the energy gained in hopping to the nn and next-nearest-neighbor (nnn) sites, respectively, and $\mu$ is the on-site energy. We remark that the nnn hopping is taken into account because the nn hopping may be rendered vanishingly small in the effective time-independent description. In this regime, the nnn hopping may become the dominant kinetic term. In a square lattice, where the potential is separable in independent $\hat{e}_x$ and $\hat{e}_y$ components, the nnn hopping is identically zero \cite{Liberto11}. However, the nnn hopping can be nonzero in the honeycomb lattice, since its potential is not separable in $\hat{e}_x$ and $\hat{e}_y$ components. Nevertheless, the honeycomb lattice may be expressed as the sum of two triangular sublattices.
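For concreteness, the geometry entering Eq.\,(\ref{eq:nn-vec}) can be set up in a few lines. The following Python sketch (assuming, for illustration, the 830 nm laser wavelength used later in Sec.\,\ref{sec02}) constructs the nn vectors and the six nnn vectors $\textbf{d}_i-\textbf{d}_j$ and checks their lengths:
\begin{verbatim}
import numpy as np

lam = 830e-9                              # laser wavelength in m (illustrative)
k = 2 * np.pi / lam                       # laser wave number
d = 8 * np.pi / (3 * np.sqrt(3) * k)      # nn distance

d1 = d * np.array([1.0, 0.0])
d2 = 0.5 * d * np.array([-1.0,  np.sqrt(3.0)])
d3 = 0.5 * d * np.array([-1.0, -np.sqrt(3.0)])
nn = [d1, d2, d3]

assert all(np.isclose(np.linalg.norm(v), d) for v in nn)
# the six nnn vectors d_i - d_j span a triangular lattice of spacing sqrt(3)*d
nnn = [di - dj for i, di in enumerate(nn) for j, dj in enumerate(nn) if i != j]
assert all(np.isclose(np.linalg.norm(v), np.sqrt(3) * d) for v in nnn)
print("nn distance d =", d)
\end{verbatim}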
\begin{figure}[h]
\centering
\includegraphics[scale=0.3, angle=0]{figure1paper}
\caption{(Color online) Laser configuration to create the honeycomb lattice, which consists of two triangular sublattices ($A$, red dots, and $B$, blue dots). The vectors $\textbf{d}_1$, $\textbf{d}_2$, and $\textbf{d}_3$ connect a site on the $A$ sublattice to its nearest neighbors on the $B$ sublattice.}
\label{fig:lattice}
\end{figure}
The time-dependent part of the Hamiltonian (\ref{basicH}),
\begin{equation}
W(t) = m \Omega^2 \cos(\Omega t) \left( \sum_{\textbf{r} \in A} \textbf{r}\cdot\boldsymbol{\rho} \, a^\dagger_\textbf{r} a_\textbf{r} + \sum_{\textbf{r} \in B} \textbf{r}\cdot\boldsymbol{\rho} \, b^\dagger_\textbf{r} b_\textbf{r} \right),
\end{equation}
describes the harmonic shaking of the lattice in the direction $\boldsymbol{\rho}$ with a driving frequency $\Omega$ in the co-moving frame of Ref. \cite{Gluck02}. As a consequence of the transformation to the co-moving frame, $W(t)$ describes atoms of mass $m$ experiencing a position-dependent sinusoidal force.
\subsection{Effective Hamiltonian} \label{sec01b}
The unavoidable complexity that arises when dealing with a quantum many-body system out of equilibrium has recently motivated the development of new theoretical tools, for example time-dependent density matrix renormalization group \cite{Rengroup0405}, time-dependent dynamical mean field theory \cite{Dynmeanfield0609}, and exact diagonalization \cite{Rigol09}.
However, for a periodically driven quantum system, the Floquet theory offers a simplified description of the system, in the form of a time-independent effective Hamiltonian, if the period $T=2\pi/\Omega$ is the shortest time scale in the problem \cite{Grifoni98}. In this limit, the atoms cannot follow the shaking motion adiabatically and remain thus at their average lattice position, albeit with renormalized hopping parameters. The system is thus considered to be in a stationary state and the knowledge of equilibrium physics can be employed.
Let us consider the Floquet Hamiltonian defined by $H_F = H(t) - i \hbar \partial_t$, where $H(t+T)=H(t)$ is periodic in time \cite{Grifoni98}. The eigenvalue equation is then given by
\begin{equation}
H_F |\phi (q,t) \rangle = \epsilon_{\phi} | \, \phi (q,t) \rangle,
\label{eq:floquet-schrodinger}
\end{equation}
where $\epsilon_{\phi}$ is the quasienergy defined uniquely up to a multiple of $\hbar \Omega$. Any solution $| \, \phi (q,t) \rangle$ is part of a set of solutions $ \exp(i n \Omega t) | \, \phi (q,t) \rangle$ with integer $n$, which all correspond to the same physical solution. Hence, the spectrum of the Floquet Hamiltonian possesses a Brillouin-zone-like structure \cite{Grifoni98}. The interest therefore lies with the states in the first Brillouin zone, i.e. states with quasienergies $ - \hbar \Omega /2 < \epsilon_\phi \leq \hbar \Omega /2 $.
The space in which the states $| \, \phi (q,t) \rangle$ are defined is the composite of the Hilbert space spanned by square integrable functions on configuration space, $|\alpha (q) \rangle$, and the space of $T$-periodic functions. The state $| \, \phi (q,t) \rangle$ may be written down in an orthonormal basis in the composite space according to
\begin{equation}
| \phi (q,t) \rangle = \sum_{n=0}^\infty \sum_{\alpha} c_{n,\alpha} \exp[-i \hat{F}(t)+ i n \Omega t ] |\alpha (q) \rangle,
\end{equation}
where $c_{n,\alpha}$ are coefficients to normalize $| \, \phi (q,t) \rangle$ and the operator $\hat{F}(t)$ can be any $T$-periodic Hermitian operator. Therefore, we can conveniently choose $\hat{F}(t)$ to be $\hat{F}(t) = \hbar^{-1} \int_0^t dt' W(t')$, such that $ H(t) - \hbar \partial_t \hat{F}(t) = H_0$.
\begin{widetext}
If the condition
\begin{equation}
\langle \alpha'(q) | \langle \exp[i \hat{F}(t) ] \, \exp[ i (n-n') \Omega t] \, H_0 \exp[-i \hat{F}(t) ] \rangle_T | \, \alpha (q) \rangle \ll \hbar \Omega
\label{eq:condition}
\end{equation}
is satisfied for any two states $| \, \alpha (q) \rangle$ and $| \, \alpha' (q) \rangle$, then the eigenvalues $\epsilon_\phi$ are approximately
\begin{equation}
\epsilon_\phi = \langle \phi (q,t) | \langle H_F\rangle_T | \phi (q,t) \rangle \approx \sum_{\alpha, \alpha'} c_{0,\alpha'} c_{0,\alpha} \langle \alpha'(q) | \, \langle \exp[i \hat{F}(t) ] \, H_0 \, \exp[-i \hat{F}(t) ] \rangle_T \,| \,\alpha (q) \rangle .
\label{eq:floquet-approximation}
\end{equation}
\end{widetext}
Here, $\langle \mathcal{O}(t) \rangle_T=T^{-1}\int_0^{T}dt\, \mathcal{O}(t)$ denotes the time average of the operator $\mathcal{O}(t)$ over the period $T$. The condition ($\ref{eq:condition}$) will hold for $n \neq n'$, if $H_0$ is nearly constant during the period $T$, which is small if $\Omega$ is large. In this case, states with different $n$ do not mix. If $\Omega$ is large enough, such that the condition ($\ref{eq:condition}$) also holds for $n=n'$, then the energy spectrum will split up into energy bands labelled by an index $n$, where the details within the energy band are determined by $H_0$. Because the states with a different index $n$ are separated by an energy which is a multiple of $\hbar \Omega$ and because the spectrum possesses a Brillouin-zone-like structure, only the terms with $n=0$ need to be taken into account. The effective Hamiltonian $H_\textrm{eff}$, which gives rise to the same spectrum as the Floquet Hamiltonian, is then defined by \cite{Hemmerich10}
\begin{align}
H_\textrm{eff} &= \bigg< \exp[i \hat{F}(t) ] \, H_{0} \, \exp[-i \hat{F}(t) ] \bigg>_T \nonumber \\
&= \bigg< \sum^\infty_{n=0} \frac{i^n}{n !} [\hat{F}(t), H_{0}]_n \bigg>_T.
\label{eq:effective-ham}
\end{align}
Here, $[\hat{F},\hat{G}]_n$ denotes the multiple commutator, which is defined by $[\hat{F},\hat{G}]_{n+1} = [\hat{F},[\hat{F},\hat{G}]_n]$ and $[\hat{F},\hat{G}]_0 = \hat{G}$.
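The mechanism behind the renormalization derived below can be verified numerically: with the above choice of $\hat{F}(t)$, a hopping term in $H_0$ acquires a phase factor $\exp[iA\sin(\Omega t)]$, and its average over one period is the Bessel function $J_0(A)$. A minimal check (independent of any lattice details):
\begin{verbatim}
import numpy as np
from scipy.special import j0

def period_average(A, N=4096):
    """Average of exp(i*A*sin(theta)) over one period of the drive."""
    theta = 2 * np.pi * np.arange(N) / N
    return np.mean(np.exp(1j * A * np.sin(theta)))

for A in [0.5, 1.0, 2.4048, 5.0]:
    avg = period_average(A)
    assert abs(avg.real - j0(A)) < 1e-8 and abs(avg.imag) < 1e-12
    print(f"A = {A}:  <exp(iA sin)> = {avg.real:+.6f},  J0(A) = {j0(A):+.6f}")
\end{verbatim}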
Effective Hamiltonians corresponding to Eq.\,(\ref{eq:effective-ham}) have been derived for linear shaking of a one-dimensional lattice \cite{Eckardt05} and for elliptical shaking of a triangular lattice \cite{Eckardt10}. For the shaken honeycomb lattice studied here, the condition ($\ref{eq:condition}$) is satisfied if $\gamma \ll \hbar \Omega$, and the effective Hamiltonian becomes
\begin{align}
H_{\textrm{eff}} = &- \sum_{j=1}^3 \sum_{\textbf{r} \in A} \gamma_j \left( a^\dagger_\textbf{r} b_{\textbf{r} + \textbf{d}_j} + b^\dagger_{\textbf{r} + \textbf{d}_j} a_\textbf{r} \right) \nonumber \\
&- \sum^3_{i=1} \sum^3_{j=1,j\neq i} \gamma'_{i,j} \bigg( \sum_{\textbf{r} \in A} a^\dagger_\textbf{r} \, a_{\textbf{r} + \textbf{d}_i - \textbf{d}_j}
+ \sum_{\textbf{r} \in B} b^\dagger_\textbf{r} b_{\textbf{r} + \textbf{d}_i - \textbf{d}_j} \bigg) \nonumber \\
&- \mu \bigg( \sum_{\textbf{r} \in A} a^\dagger_\textbf{r} a_\textbf{r} + \sum_{\textbf{r} \in B} b^\dagger_\textbf{r} b_\textbf{r} \bigg),
\label{Hamiltonian-eff}
\end{align}
where the renormalized nn hopping parameters $\gamma_j$ are given by
\begin{equation}
\gamma_j = \gamma J_0 \left( \bigg| \textbf{d}_j \cdot \boldsymbol{\rho} \frac{m \Omega}{\hbar} \bigg| \right) ,
\label{eq:renormalized-gamma}
\end{equation}
and the renormalized nnn hopping parameters are given by
\begin{equation}
\gamma'_{i,j} = \gamma' J_0 \left( \bigg| (\textbf{d}_i - \textbf{d}_j) \cdot \boldsymbol{\rho} \frac{m \Omega}{\hbar} \bigg| \right)
\label{eq:renormalized-gammaprime}
\end{equation}
(see Appendix \ref{app1} for detailed calculations). In these expressions, $J_0(x)$ denotes the zeroth order Bessel function of the first kind, which shows a damped oscillation around zero.
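As an illustration, the sketch below evaluates the renormalized hoppings for a given shaking vector; the numerical values ($^{40}$K atoms, 830 nm lasers, $\Omega/2\pi = 6\:$kHz) are the illustrative ones used in the next section:
\begin{verbatim}
import numpy as np
from scipy.special import j0

hbar = 1.0545718e-34                    # J s
m = 40 * 1.66054e-27                    # mass of 40K in kg (assumed species)
Omega = 2 * np.pi * 6e3                 # shaking frequency in rad/s (assumed)
lam = 830e-9                            # laser wavelength in m (assumed)
d = 8 * np.pi / (3 * np.sqrt(3) * (2 * np.pi / lam))

nn = [d * np.array([1.0, 0.0]),
      0.5 * d * np.array([-1.0,  np.sqrt(3.0)]),
      0.5 * d * np.array([-1.0, -np.sqrt(3.0)])]

def renormalized_hoppings(rho, gamma=1.0, gamma_p=0.1):
    """Bessel-renormalized nn and nnn hoppings for a shaking vector rho."""
    g = [gamma * j0(abs(np.dot(dj, rho)) * m * Omega / hbar) for dj in nn]
    gp = {(i, j): gamma_p * j0(abs(np.dot(nn[i] - nn[j], rho)) * m * Omega / hbar)
          for i in range(3) for j in range(3) if i != j}
    return g, gp

g, gp = renormalized_hoppings(180e-9 * np.array([0.0, 1.0]))  # rho perp. to d1
print(g)   # gamma_1 stays at gamma, gamma_{2,3} is driven close to zero
\end{verbatim}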
In terms of the renormalized nn and nnn hopping parameters, the diagonalization of the effective Hamiltonian (\ref{Hamiltonian-eff})
yields the dispersion relation
\begin{equation}
\label{DispersionRel}
\epsilon_{\lambda}(\textbf{q})=h(\textbf{q})+\lambda|f(\textbf{q})|,
\end{equation}
where $\lambda=\pm$ is the band index, and we have defined the functions
\begin{equation}\label{fq}
f(\textbf{q}) = \sum_j \gamma_j \exp(-i \textbf{q} \cdot \textbf{d}_j)
\end{equation}
and
\begin{equation}
h(\textbf{q})=-2\sum_{i<j}\gamma_{i,j}^{\prime}\cos\left[\textbf{q}\cdot(\textbf{d}_i-\textbf{d}_j)\right].
\end{equation}
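The dispersion relation (\ref{DispersionRel}) is straightforward to evaluate numerically. The sketch below (in units where $d=\gamma=1$) computes the two bands at arbitrary momentum and checks that, in the isotropic case with $\gamma'=0$, they touch at the corner $K=\big(2\pi/(3d),\,2\pi/(3\sqrt{3}d)\big)$ of the first Brillouin zone:
\begin{verbatim}
import numpy as np

d = 1.0
nn = [d * np.array([1.0, 0.0]),
      0.5 * d * np.array([-1.0,  np.sqrt(3.0)]),
      0.5 * d * np.array([-1.0, -np.sqrt(3.0)])]

def bands(q, g=(1.0, 1.0, 1.0), gp=0.0):
    """Lower and upper band at momentum q for nn hoppings g, nnn hopping gp."""
    f = sum(gj * np.exp(-1j * np.dot(q, dj)) for gj, dj in zip(g, nn))
    h = -2 * gp * sum(np.cos(np.dot(q, di - dj))
                      for i, di in enumerate(nn)
                      for j, dj in enumerate(nn) if i < j)
    return h - abs(f), h + abs(f)

# In the isotropic case the bands touch at the corner K of the Brillouin zone.
K = np.array([2 * np.pi / (3 * d), 2 * np.pi / (3 * np.sqrt(3) * d)])
print(bands(K))   # both values vanish (up to rounding)
\end{verbatim}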
\section{Merging and alignment of Dirac points} \label{sec02}
In this section, the honeycomb lattice with anisotropic hopping is studied. In the first two subsections, only nn hopping is considered
for illustrative purposes. Indeed, this allows for a simple understanding of the main consequences of shaking on
Dirac-point motion and dimensional
crossover. In Subsec. \ref{sec02c}, we discuss how the picture evolves when nnn hopping is included, and the sign of the hopping
parameters is investigated in Subsec. \ref{sec02d}.
Since a system with two equal nn hopping parameters and a single independent one captures the essential features of the system with three independent nn hopping parameters, we will focus on this case. The numbering of the $\gamma_j$s is chosen such that $|\gamma_2| = |\gamma_3| = \gamma_{2,3}$, which can be achieved by shaking in a direction parallel or perpendicular to $\textbf{d}_1$.
Although the atoms in the optical lattice are charge-neutral objects, we shall adopt the language from condensed-matter physics and call a zero-gap phase with a pair of Dirac cones and a vanishing density of states at the band-contact points a \textit{semimetal}, whereas a gapped phase is called \textit{band insulator}. Furthermore, nnn hopping induces a \textit{metallic} phase for small values of $\gamma_j$ because of an overlap between the two bands that yields a non-vanishing density of states at the energy level of the band-contact points.
\subsection{Merging of Dirac points} \label{sec02a}
If the lattice is shaken in the direction perpendicular to $\textbf{d}_1$ (direction 1 in Fig.\,\ref{fig:lattice}), $\gamma_1$ remains equal to $\gamma$, whereas $\gamma_2$ and $\gamma_3$ are renormalized to a smaller value. An increase in the shaking amplitude results in a decrease in $\gamma_{2,3} = \gamma_2 = \gamma_3$, which is depicted by the arrow $M$ in Fig.\,\ref{fig:phase-diagram-abs}(a). When the hopping parameters change according to this arrow $M$, the energy spectrum evolves from Fig.\,\ref{fig:merging-energy-dispersion}(a) to Fig.\,\ref{fig:merging-energy-dispersion}(b). The Dirac points, originally situated at the corners $K$ and $K'$ of the first Brillouin zone, start to move in the $q_y$-direction along the vertical edges of the latter. This motion is depicted by the arrows in Fig.\,\ref{fig:phase-diagram-abs}(b). Even if the two Dirac points are no longer located at the high-symmetry points $K$ and $K'$, they remain related by time-reversal symmetry, such that their Berry phases $\pi$ and $-\pi$ are opposite. This non-zero Berry phase topologically protects each of the Dirac points and thus the semimetallic phase remains robust until $\gamma_{2,3}=\gamma_1/2$, where the two points merge at a time-reversal invariant momentum, i.e. half of a reciprocal lattice vector \cite{Montambaux09}. In the present example, this point is situated at the center of the vertical edges of the first Brillouin zone, and the band dispersion becomes parabolic in the $y$-direction while remaining linear in the $x$-direction [see Fig.\,\ref{fig:merging-energy-dispersion}(b)]. The merged Dirac points are no longer topologically protected due to the annihilation of the opposite Berry phases. Consequently, a further increase of the shaking amplitude, which results in a further decrease of $\gamma_{2,3}$, leads to the opening of a gap between the two bands. Thus, the system undergoes a topological phase transition from a semimetal to a band insulator. This merging transition was also studied in a static setup in Ref. \cite{Lee09}, where the hopping amplitudes $\gamma$ were proposed to be modified by a change in the intensity of one of the lasers used to create the optical lattice. In contrast to this static setup, shaking the honeycomb lattice allows one to completely annihilate some of the nn hopping parameters and to even change their sign. This sign change occurs at the zeros of the Bessel function [see Eq.\,(\ref{eq:renormalized-gamma})]. For an example system of $^{40}$K atoms in a lattice created by lasers with a wavelength of 830 nm, which is shaken in the direction perpendicular to $\textbf{d}_1$, the situation $\gamma_{2,3}=0$ is encountered for
\begin{equation}
\rho = 180\:\textrm{nm}; \; \Omega/2\pi = 6\:\textrm{kHz},
\label{eq:dim-cross-over-0d}
\end{equation}
which corresponds to the first zero of the Bessel function. At this particular point, and if $\gamma'=0$ in addition, the system consists of a set of effectively decoupled horizontal bonds along which the atoms are solely allowed to hop. This yields two flat bands at $\pm \gamma_1$ [see Fig.\,\ref{fig:merging-energy-dispersion}(c)] that may be viewed as the extreme limit of the band-insulating phase. Alternatively, one may view this situation upon decreasing the value of $\gamma_{2,3}$ as a dimensional crossover from a 2D band insulator to a zero-dimensional (0D) system. A small non-zero value of $\gamma_{2,3}$ simply provides a weak dispersion of these decoupled bands (not shown).
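These numbers can be roughly cross-checked under the stated assumptions (an ideal sinusoidal drive, $^{40}$K atoms, and the nn distance given after Eq.\,(\ref{eq:nn-vec})) by solving for the relevant values of the Bessel argument; the same sketch also gives the amplitude at which the merging $\gamma_{2,3}=\gamma_1/2$ occurs:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0

hbar = 1.0545718e-34
m = 40 * 1.66054e-27                        # 40K (assumed)
Omega = 2 * np.pi * 6e3                     # rad/s (assumed)
d = 8 * np.pi / (3 * np.sqrt(3) * (2 * np.pi / 830e-9))

# shaking perpendicular to d1: |d_{2,3} . rho| = (sqrt(3)/2) d rho
prefactor = np.sqrt(3) / 2 * d * m * Omega / hbar
x0 = brentq(j0, 2.0, 3.0)                   # first zero of J0, ~2.4048
x_half = brentq(lambda x: j0(x) - 0.5, 1.0, 2.0)  # J0 = 1/2, ~1.5211
print("0D point:  rho ~", x0 / prefactor * 1e9, "nm")      # ~183 nm
print("merging:   rho ~", x_half / prefactor * 1e9, "nm")  # ~116 nm
\end{verbatim}
Within rounding, the first zero reproduces the quoted $\rho\approx 180\:$nm, and the merging transition is reached already near $\rho\approx 116\:$nm.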
\begin{figure}[h]
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-hom}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-merge}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-zerod}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-align}
\end{minipage}
\caption{(Color online) Energy dispersion for the shaken honeycomb optical lattice, with $k=1$, $\gamma =1$, and $\gamma'=0$. The labels $q_x$ and $q_y$ represent the $x$ and $y$ components of the momentum, respectively. The $x$ and $y$ axes have been chosen such that the nn vectors are given by Eq.\,(\ref{eq:nn-vec}). (a) The isotropic case, where $\gamma_1=\gamma_{2,3}$. (b) The merged Dirac points, where $\gamma_{2,3} = \gamma_1/2$. (c) The zero-dimensional case, where $\gamma_{2,3}=0$. (d) The aligned Dirac points, where $\gamma_1=0$.}
\label{fig:merging-energy-dispersion}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.3, angle=0]{phase-diagram-abs}
\caption{(Color online) (a) Phase diagram, showing the phase transition between the zero-gap semimetallic phase and the insulating phase, which happens at $\gamma_1 = 2 \gamma_{2,3}$. Here, we have chosen $\gamma'=0$. (b) Dirac-point motion in the first Brillouin zone for a shaking direction perpendicular to $\textbf{d}_1$ [direction $M$ in the phase diagram (a)]. (c) Dirac-point motion in the first Brillouin zone for a shaking direction parallel to $\textbf{d}_1$ [direction $A$ in the phase diagram (a)]. The contour plots depict the dispersion of the isotropic system with an arbitrary color scale. The area with higher contrast is the first Brillouin zone.}
\label{fig:phase-diagram-abs}
\end{figure}
\subsection{Alignment of Dirac points} \label{sec02b}
Another dimensional crossover, from 2D to 1D, may be obtained if the lattice is shaken in the direction parallel to one of the nn vectors (direction 2 in Fig.\,\ref{fig:lattice}). Here, we choose $\textbf{d}_1$ to maintain the symmetry $\gamma_{2,3}=\gamma_2=\gamma_3$. In this case, both $\gamma_1$ and $\gamma_{2,3}$ are renormalized by Bessel functions, albeit with different arguments. Since all hopping parameters are renormalized, the trajectory of the system in the phase space upon increasing the shaking amplitude is not a straight line, as was the case for shaking perpendicular to a hopping direction, and has a new feature: the alignment of Dirac points, which occurs for $\gamma_1 = 0$. The first zero of $\gamma_1$ is found at
\begin{equation}
\rho = 92\:\textrm{nm}; \; \Omega/2\pi = 6\:\textrm{kHz},
\label{eq:dim-cross-over-1d}
\end{equation}
for the same system of $^{40}$K atoms mentioned above. Here, for illustrative purposes, a simplified trajectory of the system is depicted by the arrow $A$ in Fig.\,\ref{fig:phase-diagram-abs}(a), which corresponds to the motion of the Dirac points in reciprocal space as shown in Fig.\,\ref{fig:phase-diagram-abs}(c). As $\gamma_1$ approaches zero, the Dirac points align in lines parallel to the $x$-axis at $q_y=\pm \pi/(\sqrt{3}d)$ and the energy barriers between the aligning points are lowered. Consequently, when $\gamma_1 = 0$, the energy spectrum contains lines where the two energy bands meet and the dispersion is linear, as is shown in Fig.\,\ref{fig:merging-energy-dispersion}(d). The dispersion relation (\ref{DispersionRel}) then reads simply
\begin{equation}
\epsilon_{\lambda}(\textbf{q})=2\lambda \gamma_{2,3}\left|\cos\left(\frac{\sqrt{3}}{2}q_yd\right)\right|,
\end{equation}
and one clearly sees the 1D character. Indeed, there is no dispersion in the $q_x$-direction, as is also evident from
Fig.\,\ref{fig:merging-energy-dispersion}(d), and the system may be viewed as completely decoupled 1D chains in which the zig-zag
arrangement is of no importance. In this particular limit, the sites A and B are therefore no longer inequivalent, such that the
unit cell is effectively halved and the size of the first Brillouin zone is consequently doubled. The aligned Dirac points
may thus, alternatively, be viewed as due to an artificial folding of the second (outer) half of the first Brillouin zone into its
inner half. However, this aspect is very particular in that the Brillouin zone immediately recovers its original size when
$\gamma_1$ is small, but non-zero, or if nnn hoppings are taken into account. In both cases, one needs to distinguish the two
different sublattices and one obtains a dispersion in the $q_x$-direction.
The actual behavior of the system for an increasing shaking amplitude is discussed in Sec.\,\ref{sec04}. This behavior is more complicated because all three nn hopping parameters are renormalized, which, beyond the alignment, leads to the merging of Dirac points and the opening of a gap also in the case of shaking parallel to one of the nn vectors. In the absence of nnn hopping, the 0D limit can be reached in addition.
\subsection{Next-nearest-neighbor hopping} \label{sec02c}
The major consequence of nnn hopping is to break particle-hole symmetry, as may be seen from Eq. (\ref{DispersionRel}), where non-zero
values of $\gamma_{i,j}^{\prime}$ yield $\epsilon_{\lambda}(\textbf{q})\neq -\epsilon_{-\lambda}(\textbf{q})$. Its relevance depends sensitively on the
shaking direction, because of the different renormalization of the nn hopping parameters. The band structure with nnn hopping included
is depicted in Fig.\,\ref{fig:nnn-energy-dispersion} for different shaking directions.
\subsubsection{Shaking in the direction perpendicular to $\textbf{d}_1$}
In the case of a shaking perpendicular to $\textbf{d}_1$, only $\gamma_{2,3}$ are decreased, whereas $\gamma_1=\gamma$ remains
the leading energy scale in the band structure \footnote{We concentrate on $\textbf{d}_1$ as a reference direction, but it
may naturally be replaced by any other direction $\textbf{d}_j$, in which case $\gamma_j=\gamma$ remains constant.}.
The band structure for the unshaken lattice is depicted in Fig.\,\ref{fig:nnn-energy-dispersion}(a) for
$\gamma'/\gamma=0.1$, and one notices that the main features of the band structure, namely the Dirac points, are unaltered
with respect to the case $\gamma'=0$ in Fig.\,\ref{fig:merging-energy-dispersion}(a), apart from the flattening of the upper band
as compared to the lower one. When approaching the merging transition $\gamma_{2,3}=\gamma_1/2$, the value of which is determined
by the zeros of $f(\textbf{q})$ in Eq. (\ref{fq}) and that therefore does not depend on the nnn hopping parameters, the band width
remains dominated by the largest hopping parameter $\gamma_1$, such that the band structure [Fig.\,\ref{fig:nnn-energy-dispersion}(b)]
at the transition is essentially the same as in Fig.\,\ref{fig:merging-energy-dispersion}(b) for $\gamma'=0$. In the 0D limit,
with $\gamma_{2,3} = 0$, the originally flat bands [Fig.\,\ref{fig:merging-energy-dispersion}(c)] acquire the weak dispersion
of a triangular lattice as a
consequence of the non-zero nnn hopping parameters. However, as expected from the above arguments, the dispersion is on the
order of $\gamma'$ and thus small as compared to the energy separation $\sim 2\gamma_1=2\gamma$ between the two bands.
\subsubsection{Shaking in the direction parallel to $\textbf{d}_1$}
In contrast to a shaking direction perpendicular to $\textbf{d}_1$, nnn hopping has more drastic consequences if the lattice is
shaken in the direction parallel to $\textbf{d}_1$. In this case, all nn hopping parameters are decreased, and the relative
importance of nnn hopping is enhanced. Notice further that the nnn lattice vectors $\pm(\textbf{d}_2-\textbf{d}_3)$ are now
perpendicular to the shaking direction such that $\gamma_{2,3}^{\prime}=\gamma'=0.1\gamma$ remains unrenormalized.
Also in this case, the system is approaching the 1D limit, with $\gamma_{1}=0$
[see Fig.\,\ref{fig:nnn-energy-dispersion}(d)]. However, in contrast to Fig.\,\ref{fig:merging-energy-dispersion}(d),
the chains remain coupled by nnn hopping that yields a dispersion in the $q_x$-direction. Furthermore, as mentioned above,
the A and B sites are now not equivalent from a crystallographic point of view, such that the outer parts of the first
Brillouin zone cannot be folded back into the inner one, as may be seen from Fig.\,\ref{fig:nnn-energy-dispersion}(d).
Finally, for particular values of the shaking amplitude in the direction parallel to $\textbf{d}_1$, the nn
hopping parameters can be decreased in such a manner as to render the unrenormalized nnn hopping $\gamma_{2,3}^{\prime}$ more relevant. In this case, the
two bands can overlap in energy, as depicted in Figs.\,\ref{fig:nnn-energy-dispersion}(e) and \,\ref{fig:nnn-energy-dispersion}(f) for
$\boldsymbol{\rho} = 5.2 (\hbar / m \Omega d) \hat{e}_x$
(in which case $\gamma_1 \approx \gamma_{2,3}$) and $\boldsymbol{\rho} = 4.8 (\hbar / m \Omega d) \hat{e}_x$
(with $\gamma_{2,3} \approx 0$),
respectively. In the latter example there are no band contact points, in spite of the overlap between the two bands, and
the system would be in an insulating phase if nnn hopping terms were not taken into account. This overlap in energy between the
two bands yields a non-zero density of states at any energy, such that the semimetallic (or insulating) phase disappears and
gives way, at half-filling, to a metallic phase with particle and antiparticle pockets in the first Brillouin zone.
\begin{figure}[h]
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-nnn-hom}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-nnn-merge}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-nnn-zerod}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-nnn-align}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-nnn-metal-hom}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{spectrum-nnn-metal-zerod}
\end{minipage}
\caption{(Color online) Energy dispersion for the shaken honeycomb optical lattice, with $k=1$, $\gamma =1$, and $\gamma'=0.1$. The labels $q_x$ and $q_y$ represent the $x$ and $y$ components of the momentum, respectively. The $x$ and $y$ axes have been chosen such that the nn vectors are given by Eq.\,(\ref{eq:nn-vec}). (a) The isotropic case, where $\gamma_1=\gamma_{2,3}$. (b) The merged Dirac points, where $\gamma_{2,3} = \gamma_1/2$. (c) The zero-dimensional case, where $\gamma_{2,3}=0$. (d) The aligned Dirac points, where $\gamma_1=0$. (e) An example of the metallic phase with $\boldsymbol{\rho} = 5.2 (\hbar / m \Omega d) \hat{e}_x$. (f) Another example of the metallic phase with $\boldsymbol{\rho} = 4.8 (\hbar / m \Omega d) \hat{e}_x$.}
\label{fig:nnn-energy-dispersion}
\end{figure}
\subsection{The signs of the hopping parameters} \label{sec02d}
As already alluded to in the previous sections, the shaking of a honeycomb lattice can lead to a sign change of the hopping parameters. Quite generally, Fig.\,\ref{fig:phase-diagram-sign} shows that changing the relative signs of the nn hopping parameters results in a translation of the energy spectrum in momentum space. This effect was also mentioned in Ref. \cite{Hasegawa06}. Indeed, the relative signs determine at which of the four time-reversal invariant momenta in the first Brillouin zone the merging of Dirac points and the semimetal-insulator transition take place when $|\gamma_1|=2|\gamma_{2,3}|$. However, the sign change of the nn hopping parameters can be transformed away by a gauge transformation \cite{Hasegawa06}. Nevertheless, the sign of the nnn hopping parameter is important, since it determines whether the upper or the lower band is flattened.
\begin{figure}[h]
\centering
\includegraphics[scale=0.3, angle=0]{phase-diagram-sign}
\caption{(Color online) Phase diagram and contour plots of the energy bands, showing the effects of the renormalized nn hopping parameters $\gamma_j$. (a) to (h) Contour plots of the energy bands, where the color scaling is arbitrary and the first Brillouin zone is the area with higher contrast. The values of the nn hopping parameters for each contour plot are given by the position of the corresponding letter in the phase diagram, and $\gamma'=0$. The dark regions indicate energies close to zero, whereas brighter regions are further away in energy from the Fermi level at half filling.}
\label{fig:phase-diagram-sign}
\end{figure}
\section{Interactions} \label{sec03}
Until now, we have considered single-component fermionic atoms and, due to the Pauli principle, the absence of $s$-wave interaction naturally results in an ideal Fermi lattice gas, albeit with an unusual band structure. When two hyperfine states of the fermionic atoms are trapped, Hubbard-like interaction terms arise,
\begin{align}
H_{\textrm{int}}=&\sum_{\textbf{r} \in A} \sum_{\sigma, \sigma'} \frac{U}{2} \, a^\dagger_{\textbf{r},\sigma} a^\dagger_{\textbf{r},\sigma'} a_{\textbf{r},\sigma'} a_{\textbf{r},\sigma} \nonumber \\
+ &\sum_{\textbf{r} \in B} \sum_{\sigma, \sigma'} \frac{U}{2} \, b^\dagger_{\textbf{r},\sigma} b^\dagger_{\textbf{r},\sigma'} b_{\textbf{r},\sigma'} b_{\textbf{r},\sigma},
\end{align}
where the fermionic operators now acquire an additional spin index $\sigma\in\{\uparrow, \downarrow\}$, which is summed over, and $U$ is the interaction energy. Naturally, in order to be able to apply the Floquet theory in the presence of interactions, the Hamiltonian $H_0$ in Eq. (\ref{eq:condition}) must now be replaced by $H = H_0 + H_{\rm int}$. The condition (\ref{eq:condition}) then remains satisfied in our study because we investigate the weak-coupling limit $U\ll \gamma$. Since the interaction term commutes with the shaking, it is not renormalized, similarly to the on-site energy term proportional to $\mu$ in Eq.\,(\ref{eq:H0}). However, it has been shown that complications may arise when a multiple of the energy $U$ is in resonance with a harmonic of $\hbar\Omega$, $m\hbar\Omega=nU$, for integer $m$ and $n$. Whereas the limit $m\ll n$ \cite{Eckardt08} is not considered here because it is in contradiction with the small-$U$ large-frequency limit, critical resonances may occur for $m\gg n$ \cite{Poletti11}. Nevertheless, it has been shown in Ref. \cite{Poletti11} that these resonances, which occur in higher-order perturbation theory, are strongly suppressed in the large-$m$ limit.
In the weakly-interacting regime considered here, the ground state is adiabatically connected to that of the non-interacting system, with no broken symmetry. First, we use the Fourier transform of the creation and annihilation operators, $a_{\textbf{r},\sigma} = \mathcal{N}^{-1/2} \sum_\textbf{q} \exp(i \textbf{q} \cdot \textbf{r}) a_{\textbf{q},\sigma}$, to find the Hamiltonian in momentum space. Within a Hartree-Fock theory, we introduce a mean-field decoupling of the interaction terms,
\begin{align}
&a^\dagger_{\textbf{q}1,\sigma} a^\dagger_{\textbf{q}2,\sigma'} a_{\textbf{q}3,\sigma'} a_{\textbf{q}4,\sigma} \approx \\
\big< &a^\dagger_{\textbf{q}2,\sigma'} a_{\textbf{q}3,\sigma'} \big> a^\dagger_{\textbf{q}1,\sigma} a_{\textbf{q}4,\sigma} - \big< a^\dagger_{\textbf{q}2,\sigma'} a_{\textbf{q}4,\sigma} \big> a^\dagger_{\textbf{q}1,\sigma} a_{\textbf{q}3,\sigma'} \nonumber \\
+ \, &a^\dagger_{\textbf{q}2,\sigma'} a_{\textbf{q}3,\sigma'} \big<a^\dagger_{\textbf{q}1,\sigma} a_{\textbf{q}4,\sigma} \big> - a^\dagger_{\textbf{q}2,\sigma'} a_{\textbf{q}4,\sigma} \big< a^\dagger_{\textbf{q}1,\sigma} a_{\textbf{q}3,\sigma'} \big> \nonumber \\
- \big< &a^\dagger_{\textbf{q}2,\sigma'} a_{\textbf{q}3,\sigma'} \big> \big< a^\dagger_{\textbf{q}1,\sigma} a_{\textbf{q}4,\sigma} \big> + \big< a^\dagger_{\textbf{q}2,\sigma'} a_{\textbf{q}4,\sigma} \big> \big< a^\dagger_{\textbf{q}1,\sigma} a_{\textbf{q}3,\sigma'} \big> \nonumber,
\end{align}
such that the expectation values of both sides are equal. For the mean value we take
\begin{equation}
\langle a^\dagger_{\textbf{q},\sigma} a_{\textbf{q}', \sigma'} \rangle = \langle b^\dagger_{\textbf{q},\sigma} b_{\textbf{q}', \sigma'} \rangle = \mathcal{N} n_{\textbf{q},\sigma} \delta_{\textbf{q}, \textbf{q}'} \delta_{\sigma,\sigma'},
\label{eq:orderparameter}
\end{equation}
where $\mathcal{N}$ is the number of sites per sublattice, $n_{\textbf{q},\sigma}$ is the density of atoms with momentum $\textbf{q}$ and spin index $\sigma$, and $\delta_{\alpha,\alpha'}$ is the Kronecker delta. We then obtain the mean-field Hamiltonian
\begin{equation}
H_{MF} = H_{\textrm{eff}} - \frac{U \mathcal{N} n^2}{8} + \frac{U n}{4} \sum_{\sigma, \textbf{q}} \big( a^\dagger_{\textbf{q},\sigma} a_{\textbf{q},\sigma} + b^\dagger_{\textbf{q},\sigma} b_{\textbf{q},\sigma} \big),
\label{eq:Ham-mf1}
\end{equation}
where the total density is defined by $n = \sum_{\textbf{q},\sigma} n_{\textbf{q},\sigma}$.
The Hamiltonian (\ref{eq:Ham-mf1}) may be rewritten in a matrix form:
\begin{align}
H_{MF} = &- \frac{U \mathcal{N} n^2}{8} \label{eq:Ham-mf-matrix-ab} \\
&+ \sum_{\sigma, \textbf{q}} \left( \begin{matrix} a^\dagger_{\textbf{q},\sigma} && b^\dagger_{\textbf{q},\sigma} \end{matrix} \right)
\left( \begin{matrix} h(\mu, \textbf{q}) && f(\textbf{q}) \\ f^*(\textbf{q}) && h(\mu, \textbf{q}) \end{matrix} \right) \left( \begin{matrix} a_{\textbf{q},\sigma} \\ b_{\textbf{q},\sigma} \end{matrix} \right), \nonumber
\end{align}
where we have introduced the functions
\begin{equation}
h(\mu, \textbf{q}) = \frac{U n}{4} - \mu - \gamma' \sum^3_{i=1} \sum^3_{j=1,j\neq i} \exp[-i \textbf{q} \cdot (\textbf{d}_i - \textbf{d}_j)],
\end{equation}
and $f(\textbf{q})$ is defined in Eq. (\ref{fq}).
The Hamiltonian ($\ref{eq:Ham-mf-matrix-ab}$) can then be diagonalized by the unitary operator
\begin{equation}
\hat{{\cal U}} = \frac{1}{\sqrt{2}}\left( \begin{matrix} 1 && i f(\textbf{q})/|f(\textbf{q})| \\ f^*(\textbf{q})/|f(\textbf{q})| && -i \end{matrix} \right),
\end{equation}
which yields
\begin{widetext}
\begin{equation}
H_{MF} = - \frac{U \mathcal{N} n^2}{8} + \sum_{\sigma, \textbf{q}} \left( \begin{matrix} c^\dagger_{\textbf{q},\sigma} && d^\dagger_{\textbf{q},\sigma} \end{matrix} \right)
\left( \begin{matrix} h(\mu, \textbf{q}) - |f(\textbf{q})| && 0 \\ 0 && h(\mu, \textbf{q}) + |f(\textbf{q})| \end{matrix} \right) \left( \begin{matrix} c_{\textbf{q},\sigma} \\ d_{\textbf{q},\sigma} \end{matrix} \right).
\label{eq:Ham-mf-matrix-cd}
\end{equation}
Because the $c$ and $d$ quasiparticles are free, the partition function corresponding to the Hamiltonian ($\ref{eq:Ham-mf-matrix-cd}$) reads
\begin{equation}
Z = \exp \bigg[ \sum_{\sigma, \textbf{q}} \bigg( \log\bigg\{ 1 + \exp\big[-\beta \big(h(\mu, \textbf{q}) - |f(\textbf{q})| \big) \big] \bigg\} + \log\bigg\{ 1 + \exp\big[-\beta \big(h(\mu, \textbf{q}) + |f(\textbf{q})| \big) \big] \bigg\} \bigg) \bigg],
\end{equation}
where $\beta = (k_B T)^{-1}$, with $k_B$ denoting Boltzmann's constant and $T$ the temperature.
The total number of particles $N$ is given by $(1/\beta) \partial \log Z / \partial \mu$, and one obtains
\begin{equation}
N = \sum_{\sigma,\textbf{q}} \bigg( \frac{1}{1 + \exp \big[\beta \big(h(\mu, \textbf{q}) - |f(\textbf{q})| \big) \big]} + \frac{1}{1 + \exp\big[\beta \big(h(\mu, \textbf{q}) + |f(\textbf{q})| \big) \big]} \bigg).
\label{eq:nrparticles}
\end{equation}
Since the expression inside the sum does not depend on spin, summing over $\sigma$ yields a factor 2.
One recognizes in Eq.\,(\ref{eq:nrparticles}) the Fermi-Dirac distribution function $N_{FD}(x)=[1+\exp(x)]^{-1}$.
The number of particles $N$ is related to the density $n$, which is defined here as the number of particles per lattice site, i.e. $n = N /(2\mathcal{N})$. Converting the sum over $\textbf{q}$ into an integral,
one derives the following self-consistent equation for the density
\begin{equation}
n(\mu) = \frac{1}{V_{1BZ}} \int\limits_{1BZ} d^2\textbf{q} \bigg\{N_{FD}\big[\beta \big(h(\mu, \textbf{q}) - |f(\textbf{q})| \big) \big] + N_{FD}\big[\beta \big(h(\mu, \textbf{q}) + |f(\textbf{q})| \big) \big] \bigg\},
\label{eq:density-interaction-os}
\end{equation}
where the integral is restricted to the first Brillouin zone, the surface of which is $V_{1BZ}$.
\\
\end{widetext}
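Equation (\ref{eq:density-interaction-os}) is easily evaluated numerically. The sketch below uses a minimal fixed-point iteration on a momentum grid spanning one primitive cell of the reciprocal lattice, which is equivalent to the first Brillouin zone because the integrand is periodic; the grid size and the mixing are illustrative, unoptimized choices:
\begin{verbatim}
import numpy as np

d = 1.0
nn = [np.array([d, 0.0]),
      np.array([-d / 2,  np.sqrt(3) * d / 2]),
      np.array([-d / 2, -np.sqrt(3) * d / 2])]
b1 = (2 * np.pi / (3 * d)) * np.array([1.0,  np.sqrt(3.0)])
b2 = (2 * np.pi / (3 * d)) * np.array([1.0, -np.sqrt(3.0)])

# momentum grid over one primitive reciprocal cell (equivalent to the
# first Brillouin zone, since the integrand is periodic)
N = 200
s = (np.arange(N) + 0.5) / N
qx = np.add.outer(s * b1[0], s * b2[0])
qy = np.add.outer(s * b1[1], s * b2[1])

def density(mu, U=0.0, g=(1.0, 1.0, 1.0), gp=0.0, beta=20.0):
    """Self-consistent density per site by fixed-point iteration."""
    f = sum(gj * np.exp(-1j * (qx * dj[0] + qy * dj[1])) for gj, dj in zip(g, nn))
    hq = -2 * gp * sum(np.cos(qx * (di - dj)[0] + qy * (di - dj)[1])
                       for i, di in enumerate(nn)
                       for j, dj in enumerate(nn) if i < j)
    absf, n = np.abs(f), 1.0
    for _ in range(200):
        h = U * n / 4 - mu + hq
        n_new = np.mean(1 / (1 + np.exp(beta * (h - absf))) +
                        1 / (1 + np.exp(beta * (h + absf))))
        if abs(n_new - n) < 1e-10:
            break
        n = 0.5 * (n + n_new)      # simple mixing for stability
    return n

print(density(0.0))                # 1.0: half filling at mu = 0
\end{verbatim}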
In Fig.\,\ref{fig:density-mu}(a), the density $n(\mu)$ is plotted for several values of $\gamma_{2,3}/\gamma_1$. For the isotropic case, $\gamma_{2,3}/\gamma_1 = 1$, the result of Zhu \textit{et al}. is reproduced \cite{Zhu07}. In the 0D limit, the plateau due to the gap in the spectrum is clearly visible at the chosen temperature. Fig.\,\ref{fig:density-mu}(b) confirms that repulsive interactions lead to a lower density than in a system without interactions at the same chemical potential. Fig.\,\ref{fig:density-mu}(c) illustrates that the nnn hopping breaks the particle-hole symmetry. This effect is also visible in Fig.\,\ref{fig:density-mu}(d), where the dependence of the density on the chemical potential is calculated for a shaking vector for which the system is in the zero-gap semimetallic phase for $\gamma'=0$ and in the metallic phase for $\gamma'=0.1$.
\begin{figure}[h]
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{density-mu-renhoppar}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{density-mu-int}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{density-mu-nnn}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{density-mu-metal}
\end{minipage}
\caption{(Color online) Density $n$ as a function of the chemical potential $\mu$. Unless specified otherwise in the figure, the nn hopping parameters $\gamma_{2,3}=\gamma_1=\gamma =1 $, the nnn hopping parameter $\gamma'=0$, the interaction strength $U=0$, and the inverse temperature $\beta=20$. (a) Effect of the renormalization of the nn hopping parameters. (b) Effect of the interaction strength $U$ for the isotropic case. (c) Effect of the nnn hopping parameter $\gamma'=0.1$ in the shaken lattice. (d) The metallic phase. For the isotropic cases, $\boldsymbol{\rho} = 5.2 (\hbar / m \Omega d) \hat{e}_x$, whereas for the 0D cases $\boldsymbol{\rho} = 4.8 (\hbar / m \Omega d) \hat{e}_x$. These systems are in the metallic phase for $\gamma'=0.1$, whereas for $\gamma'=0$ they are in the semi-metallic and the insulating phase, respectively.}
\label{fig:density-mu}
\end{figure}
\section{Possibilities for experimental observation} \label{sec04}
Honeycomb optical lattices have recently been realized experimentally, although the existing setups have only been used to investigate bosonic atoms \cite{Honeycomb10, Soltan11}.
Lattice shaking has been experimentally implemented in one dimension by periodically modulating the position of the reflecting mirrors \cite{Zenesini09}. For a honeycomb lattice, the shaking could be realized by means of an acousto-optical device, as proposed for a triangular lattice in Ref. \cite{Eckardt10}.
The magnitude of the nn hopping parameter $\gamma$ in a honeycomb optical lattice has been evaluated in Ref. \cite{Lee09},
\begin{equation}
\gamma \approx 1.861 E_R \left(\frac{V_0}{E_R}\right)^{3/4} \exp\left( -1.582 \sqrt{\frac{V_0}{E_R}} \right) ,
\label{eq:size-gamma}
\end{equation}
in terms of the recoil energy $E_R=\hbar^2 k^2 / 2 m$ and the magnitude of the potential barrier between nearest-neighbor lattice sites $V_0$. The magnitude of the nnn hopping parameter $\gamma'$ is not yet known, but could be determined from numerical band structure calculations. In a typical experimental situation, we expect the ratio $\gamma'/\gamma$ to be in the $5-10\%$ range, in agreement with the parameter chosen in the discussion of Sec.\,\ref{sec02c}.
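For orientation, this expression is easily evaluated; the lattice depths below are illustrative values:
\begin{verbatim}
import numpy as np

def gamma_over_ER(V0_over_ER):
    """nn hopping in units of the recoil energy, from the fit quoted above."""
    s = np.asarray(V0_over_ER, dtype=float)
    return 1.861 * s**0.75 * np.exp(-1.582 * np.sqrt(s))

print(gamma_over_ER([3, 5, 10]))   # ~0.27, ~0.18, ~0.07
\end{verbatim}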
In a typical experiment, the shaking amplitude would be increased from zero to a finite value. Fig.\,\ref{fig:phases-linear} shows in which order the system goes through the different phases and dimensions upon increasing the shaking amplitude. The values of the shaking amplitude required for the dimensional crossovers are also given, for the same system as discussed in Sec.\,\ref{sec02c} and $\gamma'=0.1\gamma$. If the shaking direction is perpendicular to one of the nn vectors, the system will be in the gapped insulating phase beyond a certain value of the shaking amplitude, since the Bessel function crosses the value 0.5 only once and never attains the value $-0.5$. If the shaking is parallel to one of the nn vectors, the system will be in the metallic phase beyond a certain value of the shaking amplitude. Nevertheless, it is still possible to induce a merging of Dirac points and to open up a gap in the spectrum, since the nn hopping parameters are renormalized such that the value of one of them will in general differ from that of the other two. However, whether the system is actually driven into an insulating phase or remains metallic depends on the precise value of the ratio $\gamma'/\gamma$.
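The ordering of the phases in Fig.\,\ref{fig:phases-linear} can be reproduced by a simple scan over the dimensionless Bessel argument $d\rho m\Omega/\hbar$. The sketch below neglects nnn hopping, so that only the semimetallic phase, with $|\gamma_1|\leq 2|\gamma_{2,3}|$, and the band-insulating phase are distinguished:
\begin{verbatim}
import numpy as np
from scipy.special import j0

def phase(x, direction):
    """Phase at Bessel argument x = d*rho*m*Omega/hbar, neglecting nnn hopping."""
    if direction == "perp":                  # shaking perpendicular to d1
        g1, g23 = 1.0, j0(np.sqrt(3) / 2 * x)
    else:                                    # shaking parallel to d1
        g1, g23 = j0(x), j0(x / 2)
    return "semimetal" if abs(g1) <= 2 * abs(g23) else "band insulator"

for x in np.linspace(0.0, 8.0, 9):
    print(f"{x:4.1f}   perp: {phase(x, 'perp'):15s} parallel: {phase(x, 'par')}")
\end{verbatim}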
\begin{figure}[h]
\centering
\includegraphics[scale=0.3, angle=0]{phases-linear}
\caption{(Color online) Overview of the different phases as a function of the shaking amplitude. The bottom scale gives the size of the argument of the Bessel function, whereas the values for $\rho$ in nm correspond to the optical lattice discussed in Sec.\,\ref{sec02c} with $\gamma'= 0.1\gamma$, $\Omega/2\pi=6\:$kHz, a laser wavelength of $830\:$nm, and containing $^{40}$K atoms. (a) Shaking perpendicular to one of the nn hopping directions. (b) Shaking parallel to one of the nn hopping directions.}
\label{fig:phases-linear}
\end{figure}
In experiments, an overall harmonic trapping potential is imposed to confine the atoms. It is described by
\begin{equation}
V_{\textrm{trap}}(\textbf{r}) = \frac{1}{2} m \omega_\textrm{trap}^2 \textbf{r}^2 ,
\end{equation}
where $\omega_\textrm{trap}$ is the trapping frequency, and $\textbf{r}$ is the position measured from the center of the trap. By applying the local density approximation (LDA), one finds that the chemical potential evolves radially according to $\mu \rightarrow \mu - V_{\textrm{trap}}(\textbf{r})$.
Fig.\,\ref{fig:density-r}(a) shows the density profile for several ratios of $\gamma_{2,3}/\gamma_1$, without nnn hopping or interactions. The case with $\gamma_{2,3}/\gamma_1=0$, when the system is in the extreme limit of the band insulating phase, can be well distinguished from the other cases. Fig.\,\ref{fig:density-r}(b) shows that stronger interactions lead to a higher density away from the center of the trap. This effect becomes visible when the density starts to deviate from one particle per lattice site. Next-nearest-neighbor hopping leads to a higher density at the edge of the cloud compared to the case without nnn hopping, which can be seen from comparing Figs.\,\ref{fig:density-r}(a) and (c) and from Fig.\,\ref{fig:density-r}(d). The latter shows the effect of nnn hopping on the density profile for the case where the nnn hopping gives rise to the metallic phase for two different shaking vectors. In the first case, $\boldsymbol{\rho} = 5.2 (\hbar / m \Omega d) \hat{e}_x$, which gives $\gamma_1 \approx \gamma_{2,3}$, such that without nnn hopping, the system is in the zero-gapped semi-metallic phase and the Dirac points are located very close to the corners of the first Brillouin zone. In the second case, $\boldsymbol{\rho} = 4.8 (\hbar / m \Omega d) \hat{e}_x$, which results in $\gamma_{2,3} \approx 0$, such that without nnn hopping the system is in the insulating phase and the two energy bands are almost flat.
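Within the LDA, the density profiles follow directly from the homogeneous relation $n(\mu)$. A minimal sketch, reusing the density routine of the sketch in Sec.\,\ref{sec03} and the trap potential of the caption below:
\begin{verbatim}
import numpy as np
# density(mu, ...) as defined in the sketch of the previous section

def profile(mu0, r_over_d, **kwargs):
    """LDA density profile: local chemical potential mu0 - V_trap(r)."""
    V = 0.001 * np.asarray(r_over_d)**2    # trap of Fig. caption, units of gamma
    return np.array([density(mu0 - v, **kwargs) for v in V])

r = np.linspace(0.0, 60.0, 61)             # distance from the trap center, in d
print(profile(0.0, r)[[0, 30, 60]])        # density decreases from 1 outward
\end{verbatim}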
\begin{figure}[t]
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{density-r-renhoppar}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{density-r-int}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{density-r-nnn}
\end{minipage}
\begin{minipage}{4cm}
\centering
\includegraphics[scale=0.16, angle=0]{density-r-metal}
\end{minipage}
\caption{(Color online) Density $n$ as a function of the distance from the trap's center $r=|\textbf{r}|$, which is expressed in units of the nearest-neighbor distance $d$. The trapping frequency has been chosen such that the trapping potential is given by $V_{\textrm{trap}}(\textbf{r}) = 0.001 \gamma \textbf{r}^2 / d^2$. The chemical potential $\mu$ for each case has been chosen such that the density at the trap's center $n$ is one particle per site. This corresponds to half-filling, since we consider two species of fermions. Unless specified otherwise in the figure, the nn hopping parameters $\gamma_{2,3}=\gamma_1=\gamma =1 $, the nnn hopping parameter $\gamma'=0$, the interaction strength $U=0$, and the inverse temperature $\beta=20$. (a) Effect of the renormalization of the nn hopping parameters. (b) Effect of the interaction strength $U$ for the isotropic case. (c) Effect of the nnn hopping parameter $\gamma'=0.1$ in the shaken lattice. (d) The metallic phase. For the isotropic cases, $\boldsymbol{\rho} = 5.2 (\hbar / m \Omega d) \hat{e}_x$, whereas for the 0D cases $\boldsymbol{\rho} = 4.8 (\hbar / m \Omega d) \hat{e}_x$. These systems are in the metallic phase for $\gamma'=0.1$, whereas for $\gamma'=0$ they are in the semi-metallic and the insulating phase, respectively.}
\label{fig:density-r}
\end{figure}
We emphasize that, in the present paper, we only discuss weak correlations that adiabatically affect the density. However, upon further increasing the on-site interaction, one may expect correlated phases with inhomogeneous density, even at half-filling.
A detailed study of these correlated phases is a vast and ongoing research issue that is beyond the scope of
the present paper. Here, we only provide a glimpse of how the density, which we discussed above in the weak-coupling limit, may evolve
in view of some phases studied in the literature.
Mean-field calculations indicate a transition to an antiferromagnetic state above a value of $U/\gamma\simeq 2.2$ \cite{Sorella92}, whereas more sophisticated quantum Monte-Carlo calculations indicate an intermediate spin-liquid phase between the semimetal and
the anti-ferromagnetic phase \cite{Meng10}. The spin-liquid phase may be viewed as a Mott insulator with a charge localization on the lattice sites, and recent slave-rotor calculations indicate that such spin-liquid phases dominate the phase diagram for $\gamma_1>\gamma_{2,3}$ \cite{Wang11}, which is the parameter range where the Dirac points would merge in the absence of interactions. The precise transition between the weakly-interacting liquid phases and these strongly-correlated Mott insulators could in principle
be determined with the help of the above-mentioned density measurements.
A more promising technique for detecting Dirac-point motion is momentum-resolved Raman spectroscopy. This technique has been proposed as an equivalent of angle-resolved photoemission spectroscopy for cold atom systems \cite{Dao07}. It has not yet been realized experimentally, while momentum-resolved radio-frequency spectroscopy, which is a very similar technique, has already been implemented \cite{Stewart08}. Notice further that another very similar technique, momentum-resolved Bragg spectroscopy, has been applied to ultracold bosonic atoms in a static optical lattice by Ernst \textit{et al}. \cite{Ernst10}.
Momentum-resolved spectroscopy can allow us to indirectly visualize the band structure. In momentum-resolved Raman spectroscopy, two laser pulses with frequencies $\omega_1$ and $\omega_2$ are irradiated upon the system. If the frequency difference is in resonance with
a transition $\omega_{hf}$ between atomic hyperfine states,
$\omega_1-\omega_2=\omega_{hf}$, some atoms are excited in a second-order process to the higher hyperfine state.
Then, with state-selective time-of-flight measurements, the dispersion of the atoms in the new state is measured, from which the dispersion of the original atoms can be derived. When the atoms are confined in a trapping potential and the laser pulses are focused on the center of the trap, the quality of the results obtained by Raman spectroscopy is comparable to that of a homogeneous system \cite{Dao07}. Furthermore, Raman spectroscopy yields better results for a system with strong interactions compared to standard time-of-flight measurements \cite{Dao07}.
Notice that momentum-resolved Raman spectroscopy was originally proposed to be applied to a gas of ultracold fermionic atoms
at equilibrium and not for a shaken lattice. We therefore discuss, in this final paragraph, why we think that this technique may
also be applied to the present case. Naturally, as long as the frequencies of the additional lasers in the Raman-spectroscopy setup
are small with respect to the shaking frequency, $\omega_1,\omega_2\ll \Omega$, even the full system satisfies the condition
(\ref{eq:condition}) for the validity of Floquet theory. As in the case of interactions, one needs, however, to avoid resonances between
the different laser frequencies that could become critical \cite{Poletti11}. The opposite limit, in which the laser frequencies and that
of the hyperfine transition are larger than the shaking frequency, is more delicate. However, even then, the shaken system remains
at quasi-equilibrium as long as the intensities of the lasers used in Raman spectroscopy are weak, such that they only constitute a
small perturbation. The atomic dynamics probed even at high frequencies is therefore still that of the atoms at quasi-equilibrium, with
the band structure obtained from Floquet theory.
Furthermore, in the experimental studies by Zenesini \textit{et al}. \cite{Zenesini09}, time-of-flight measurements were used to determine the momentum distribution of bosonic atoms in a shaken lattice.
Apart from the time-scale considerations, there are also some length scales that need to be taken into account. There are indeed
two requirements for the correct size of the focus of the laser beams. On the one hand, it needs to be larger than the lattice spacing, such that sufficiently many atoms can be excited, while on the other hand the focus of the beams should be small enough to have an approximately flat trapping potential inside the focus area. In addition, the choice of the pulse length involves a trade-off: for shorter pulses the excited atoms are less affected by the lattice potential, whereas longer pulses excite more atoms and thus yield a stronger signal.
\section{Conclusions} \label{conc}
In conclusion, we have investigated the band engineering of fermionic atoms in an optical honeycomb lattice with the help of a periodic shaking of the lattice. If the shaking frequency $\Omega$ is large enough, i.e. if $\hbar\Omega$ constitutes the largest energy scale in the system, Floquet theory may be applied and the system is at quasi-equilibrium in the sense that the atoms cannot follow the rapid motion associated with the shaking. Depending on the direction of the shaking, one may render the hopping amplitudes in the quasi-static lattice anisotropic, due to a renormalization of the nn and nnn hopping parameters by Bessel functions that go through zero and change sign. As a consequence, dimensional crossovers can be induced in the absence of the nnn hopping. For a shaking direction parallel to
one of the nn vectors (such as e.g. $\textbf{d}_1$), one can make one of the nn hopping parameters vanish, $\gamma_1 \rightarrow 0$.
The system then undergoes a transition from 2D to 1D, while the Dirac points align simultaneously.
Shaking in the perpendicular direction ($\perp \textbf{d}_1$) allows one to decrease two nn hopping amplitudes simultaneously
while maintaining $\gamma_1$ unrenormalized. In this case,
a dimensional crossover from 2D to 0D is induced for $\gamma_{2,3} \rightarrow 0$, leading to two flat energy bands, beyond the merging of Dirac points \cite{Hasegawa06,Montambaux09}, which occurs at $|\gamma_1| = 2 |\gamma_{2,3}|$. A nonzero value of $\gamma'$ breaks the particle-hole symmetry and leads to a coupling among the 1D chains and the 0D dimers, for the $\gamma_1 = 0$ and $\gamma_{2,3} = 0$ cases, respectively, and thus to a weak 2D dispersion. The merging and the alignment of Dirac points, however, are not affected.
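To make this Bessel-function renormalization concrete, the short Python sketch below evaluates effective hoppings of the assumed form $\gamma J_0(\boldsymbol{\rho}\cdot\mathbf{d}_j/d)$, with $\boldsymbol{\rho}$ expressed in units of $\hbar/m\Omega d$, for the two shaking amplitudes quoted in the caption of Fig.~\ref{fig:density-r}. The orientation $\mathbf{d}_1\parallel\hat{e}_x$ and this form of the renormalization are our illustrative assumptions (chosen to be consistent with the quoted values); the sketch is not the code used for the calculations in this paper.
\begin{verbatim}
import numpy as np
from scipy.special import j0

# nn vectors in units of d, assuming d_1 parallel to e_x
d_nn = [np.array([1.0, 0.0]),
        np.array([-0.5,  np.sqrt(3.0) / 2.0]),
        np.array([-0.5, -np.sqrt(3.0) / 2.0])]

def effective_hoppings(rho_vec, gamma=1.0):
    # Floquet-renormalized hoppings gamma_j = gamma * J0(rho . d_j),
    # with the shaking amplitude rho in units of hbar/(m Omega d)
    return [gamma * j0(np.dot(rho_vec, dj)) for dj in d_nn]

for rho in (5.2, 4.8):
    g1, g2, g3 = effective_hoppings(rho * np.array([1.0, 0.0]))
    print(f"rho = {rho}: gamma_1 = {g1:+.3f}, gamma_2,3 = {g2:+.3f}")
# rho = 5.2: gamma_1 ~ -0.110, gamma_2,3 ~ -0.097 (nearly isotropic)
# rho = 4.8: gamma_1 ~ -0.240, gamma_2,3 ~ +0.003 (0D dimers)
\end{verbatim}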
Moreover, for a shaking direction parallel to $\textbf{d}_1$, one pair of nnn hopping amplitudes [$\pm(\textbf{d}_2-\textbf{d}_3)$]
remains unrenormalized, and its relative importance is thus enhanced when compared to the decreasing nn hopping amplitudes. In
this limit, beyond the semi-metallic and the band-insulating phases, a novel metallic phase can appear that consists of particle and
hole pockets with a non-vanishing density of states even at half-filling.
Furthermore, we have investigated the role of weak repulsive on-site interactions. The resulting ground state is then adiabatically connected to that of the non-interacting system, and we have self-consistently calculated the dependence of the atomic density on the (local) chemical potential. The density profiles of the different phases, e.g. the gapless semimetal or the gapped band insulator, and the different dimensionality may be measured experimentally by in-situ density measurements. Moreover, momentum-resolved Raman spectroscopy may be a promising technique to measure the band structure associated with these different phases.
\section*{Acknowledgements}
We thank Gilles Montambaux, Guangquan Wang, Andreas Hemmerich, and Marco Di Liberto for fruitful discussions. We also thank Christoph \"{O}lschl\"{a}ger for informing us about an error in the calculations. This work was financially supported by the ANR project NANOSIM GRAPHENE under Grant No. ANR-09-NANO-016 and by the Netherlands Organization for Scientific Research (NWO).
|
2,869,038,155,889 | arxiv | \section{Introduction}
The consensus since the 1990s has been that the Milky Way is a barred
galaxy \citep[see, e.g.][]{blitz1991,blitz1993}. The estimate for the
size of the large-scale bar has grown from initial $R_{bar} \approx
2-3$ kpc to current estimates $R_{bar}=3-5$ kpc
\citep{habing2006,cabrera-lavers2007,cabrera-lavers2008}. The
position angle of the bar is thought to be in the range $15^\circ -
45^\circ$
\citep[][]{blitz1993,kuijken1996,weiner1999,benjamin2005,englmaier2006,cabrera-lavers2007,minchev2009}. The
differences in the position angle estimates may indicate that the
innermost structure is actually a triaxial bulge
\citep{cabrera-lavers2008}. On the other hand, this ambiguity may
be partly caused by our unfavorable viewing angle near the disk plane,
which also hinders study of other aspects of Galactic morphology.
The suggested configurations for the spiral morphology of the Galaxy
include models or sketches containing from two to six spiral arms
\citep[see e.g.][and references therein]{vallee2005,vallee2008}. A
case has also been suggested where a two-armed structure dominates in
the old stellar population, whereas the gas and young stellar
population exhibits a four-armed structure
\citep{lepine2001,churchwell2009}. In addition to spiral arms, there
may be an inner ring or pseudoring surrounding the bar, which
manifests itself as the so-called 3-kpc arm(s)
\citep{dame2008,churchwell2009}. Also, speculations about a nuclear
ring with a major axis of about 1.5 kpc have been made
\citep{rodriguez-fernandez2008}. Different kinds of rings -- nuclear
rings, inner rings and outer rings -- are often seen in the disks of
spiral galaxies, especially if there is also a large-scale bar
\citep{buta1996}. Thus, the presence of an outer ring in the Galaxy
may also be considered plausible \citep{kalnajs1991}.
Since the outer rings have an elliptic form, the broken outer rings
(pseudorings) resemble two tightly wound spiral arms. Nevertheless,
their connection with the density-wave spiral arms is not very obvious,
because their formation does not require a spiral-shaped perturbation in
the stellar disk. The main ingredient for their formation is a
rotating bar. Both test particle simulations \citep{schwarz1981,
byrd1994,bagley2009} with an analytical bar and N-body simulations
\citep{rautiainen1999,rautiainen2000}, where the bar forms in the disk
by instability, show that the outer rings and pseudorings are typically
located in the region of the Outer Lindblad Resonance (OLR). Two main
classes of the outer rings and pseudorings have been identified: the
$R_1$-rings and $R'_1$-pseudorings elongated perpendicular to the bar
and the $R_2$-rings and $R'_2$-pseudorings elongated parallel to the
bar. In addition, there is a combined morphological type $R_1R_2'$
that shows elements of both classes \citep{buta1986, buta1991,
buta1995, buta1996, buta2007}.
\citet{schwarz1981} connected two main types of the outer rings with
two main families of periodic orbits existing near the OLR of the bar
\citep{contopoulos1980,contopoulos1989}. The stability of orbits
enables gas clouds to follow them for a long time period. The
$R_1$-rings are supported by $x_1(2)$-orbits \citep[using the
nomenclature of][]{contopoulos1989} lying inside the OLR and
elongated perpendicular to the bar, while the $R_2$-rings are
supported by $x_1(1)$-orbits situated a bit outside the OLR and
elongated along the bar. There is also another conception of the ring
formation. \citet{romerogomez2007} show that Lyapunov periodic orbits
around $L_1$ and $L_2$ equilibrium points can lead to the formation of
the spiral arms and the outer rings. They associate the spiral arms
emanating from the bar's tips with the unstable manifolds of Lyapunov
orbits. This approach can be useful for explaining the motion of
gas particles as well \citep{athanassoula2009}.
Besides the bar the galactic disks often contain spiral arms, which
modify the shape of the gravitational perturbation. In the simplest
case, the pattern speeds of the bar and spiral arms are the same. In
many studies this assumption has been used for constructing the
gravitational potential from near-IR observations (which represent the
old stellar population better than the visual wavelengths). Several
galaxies with outer rings have been modeled by this method, and
findings are in good accordance with studies made by using analytical
bars: the outer rings tend to be located near the OLR
\citep{salo1999}, although in some cases they can be completely
confined within the outer 4/1-resonance, \citep{treuthardt2008}.
A real galactic disk provides further complications, which can be
studied by N-body models, where the bars and spiral arms are made of
self-gravitating particles. In particular, there can often be one or
more modes rotating more slowly than the bar
\citep{sellwood1988,masset1997,rautiainen1999}. Even if there is an
apparent connection between the ends of the bar and the spiral arms,
it is no guarantee that the pattern speeds are equal -- the break
between the components may be seen only for a short time before the
connection reappears \citep[see Fig. 2 in][]{sellwood1988}. Sometimes
the bar mode can contain a considerable spiral part that forms the
observed spiral, together with the slower modes
\citep{rautiainen1999}. The multiple modes can also introduce cyclic
or semi-cyclic variations in the outer spiral morphology: outer rings
of different types can appear and disappear temporarily
\citep{rautiainen2000}.
In \citet{melnikrautiainen2009} (hereafter Paper I), we considered
models with analytical bars. In this case the motion of gas particles
is determined only by the bar. We found that the resonance between the
epicyclic motion and the orbital motion creates systematical
noncircular motions that depend on the position angle of a point
with respect to the bar elongation and on the class of the outer
ring. The resonance kinematics typical of the outer ring
of subclass $R_1R_2'$ reproduces the observed velocities in the
Perseus and Sagittarius regions well.
In Paper I we also suggested that the two-component outer ring could
be misinterpreted as a four-armed spiral. In some galaxies
with the combined $R_1R_2'$-morphology, the $R_1$-component can also be seen
in the near infrared, but the $R_2$-component is usually
prominent only in blue \citep{byrd1994}. This could explain the
ambiguity of the number of spiral arms in the Galaxy. N-body
simulations confirm that the $R'_1$-rings can form in the
self-gravitating stellar subsystem, while the $R_2'$-rings usually
exist only in the gas component \citep{rautiainen2000}.
In the present paper we study the effect of multiple modes and their
influence on the kinematics and distribution of gas particles. We
construct N-body models to study the influence of self-gravity
in the stellar component on the kinematics of gas particles. We
compare the model velocities of gas particles with the observed velocities
of OB-associations within the 3-kpc solar neighborhood.
This paper has the following structure. Observational data are considered
in Sect. 2. Section 3 is devoted to models and describes the
essential model parameters, the evolution of the stellar and gas
components: formation of the bar and the interplay between the bar and
slower spiral modes. In Sect. 3 we also analyze the general features
of the gas morphology. Section 4 is devoted to the comparison between
the observed and modeled kinematics. Both momentary and average
velocities of gas particles are considered. The influence of the bar
position angle $\theta_b$ on the model velocities is also investigated
in Sect. 4, as are the evolutionary aspects of kinematics.
Section 5 consists of conclusions and discussion.
\section{Observational data}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{14646fg1.eps}}
\caption{(a) The residual velocities of OB-associations projected
onto the galactic plane. It also shows the grouping of
OB-associations into regions of intense star formation. (b) The
mean $V_R$- and $V_\theta$- velocities of OB-associations in the
stellar-gas complexes. The X-axis is directed away from the
galactic center, and the Y-axis is in the direction of the
galactic rotation. The Sun is at the origin.} \label{complexes}
\end{figure*}
We have compared the mean residual velocities of OB-associations in the
regions of intense star formation with those of gas particles in
our models. These regions practically
coincide with the stellar-gas complexes identified by
\citet{efremov1988}. The residual velocities characterize the
non-circular motions in the galactic disks. They are calculated
as differences between the observed heliocentric velocities
(corrected for the motion to the apex) and the velocities due to
the circular rotation law. We used the list of OB-associations by
\citet{BlahaHumphreys1989}, the line-of-sight velocities
\citep{barbierbrossat2000}, and proper motions
\citep{hipparcos1997, vanleeuwen2007} to calculate their median
velocities along the galactic radius-vector, $V_R$, and in the
azimuthal direction, $V_\theta$. Figure ~\ref{complexes} shows the
residual velocities of OB-associations in the regions of intense
star formation. It also indicates the grouping of OB-associations
into stellar-gas complexes. For each complex we calculated the
mean residual velocities of OB-associations, which are listed in
Table~\ref{observations}. Positive radial residual velocities
$V_R$ are directed away from the Galactic center, and the
positive azimuthal residual velocities $V_\theta$ are in the
sense of Galactic rotation. Table~\ref{observations} also
contains the rms errors of the mean velocities, the mean
Galactocentric distances $R$ of OB-associations in the complexes,
the corresponding intervals of galactic longitudes $l$ and
heliocentric distances $r$, and names of OB-associations the
region includes \citep[see also][]{melnikdambis2009}.
\begin{table*}
\caption{Observed residual velocities of OB-associations in the stellar-gas complexes}
\begin{tabular}{lcccccl}
\hline
Region& {\it R} & $V_{R\mbox{ obs}}$ & $V_{\theta\mbox{ obs}}$ & {\it l} & {\it r} & Associations \\
& kpc& km s$^{-1}$ & km s$^{-1}$ & deg. & kpc & \\
\\[-7pt]\hline\\[-7pt]
Sagittarius & 5.6 & $+9.9\pm2.4$ & $-1.0\pm1.9$ & 8--23 & 1.3--1.9 & Sgr OB1, OB7, OB4, Ser OB1, OB2, \\
& & & & & & Sct OB2, OB3;\\
Carina & 6.5 & $-5.8\pm3.3$ & $+4.7\pm2.2$ & 286--315 & 1.5--2.1 & Car OB1, OB2, Cru OB1, Cen OB1,\\
& & & & & & Coll 228, Tr 16, Hogg 16, NGC 3766, 5606;\\
Cygnus & 6.9 & $-5.0\pm2.6$ & $-10.4\pm1.4$ & 73--78 & 1.0--1.8 & Cyg OB1, OB3, OB8, OB9; \\
Local System & 7.4 & $+5.3\pm2.8$ & $+0.6\pm2.5$ & 0--360 & 0.1--0.6 & Per OB2, Mon OB1, Ori OB1, Vela OB2, \\
& & & & & & Coll 121, 140, Sco OB2; \\
Perseus & 8.4 & $-6.7\pm3.0$ & $-5.9\pm1.5$ & 104--135 & 1.8--2.8 & Per OB1, NGC 457, Cas OB8, OB7, OB6, \\
& & & & & & OB5, OB4, OB2, OB1, Cep OB1;\\
\hline
\end{tabular}
\label{observations}
\end{table*}
The Galactic rotation curve derived from an analysis of the
kinematics of OB-associations is nearly flat in the 3-kpc solar
neighborhood and corresponds to the linear velocity at the solar
distance of $\Theta_0=220$ km s$^{-1}$
\citep{melnik2001,melnikdambis2009}. The nearly flat form of
the Galactic rotation curve was found in many other studies
\citep{burton1978, clemens1985, brand1993, pont1994, dambis1995,
russeil2003, bobylev2007}.
We adopted the Galactocentric distance of the Sun to be $R_0=7.5$
kpc \citep[][and other papers]{rastorguev1994, dambis1995,
glushkova1998}, which is consistent with the so-called short
distance scale for classical Cepheids \citep{berdnikov2000}.
\section{Models}
\subsection{The model parameters}
We made several N-body models, which satisfy ``broad observational
constraints'': the rotation curve is essentially flat and the size of
the bar is acceptable. From these models we have chosen our
best-fitting case, which we describe here in more detail.
The rotation curve of our best-fitting model is illustrated in
Fig.~\ref{nbody_rcurve}. In the beginning, the rotation curve is
slightly falling in the solar neighborhood, but the mass rearrangement
in the disk during the bar formation makes it rise slightly. We scaled
the simulation units to correspond to our preferred values of the
solar distance from the Galactic center and the local circular
velocity. This also gives the scales for masses and time
units. However, in the following discussion we will use simulation
time units, one corresponding to approximately 100 million years, and
the full length of the simulation is 6 Gyr.
\begin{figure*}
\centering
\includegraphics{14646fg2.eps}
\caption{The rotation curve (solid line) of the N-body model at $T=0$
(left) and at $T=55$ (right). The contributions from the bulge
(dash-dotted line), disk (dashed line) and halo (dotted line) are
also indicated.} \label{nbody_rcurve}
\end{figure*}
The bulge and halo components are analytical, whereas the stellar disk is
self-gravitating. The bulge is represented by a Plummer sphere, mass
$M_{bulge}=1.17 \times 10^{10} \ M_\odot$, and scale length
$R_{bulge}=0.61$ kpc. The dark halo was included as a component giving
a halo rotation curve of form
\begin{equation}
V(R)={V_{max}R \over \sqrt{R^2+R_c^2}},
\end{equation}
\noindent where $V_{max} = 210 \ \mathrm{km \ s}^{-1}$ is the
asymptotic maximum on the halo contribution to the rotation curve and
$R_c = 7.6 \ \mathrm{kpc}$ the core radius.
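For reference, the bulge and halo contributions to the rotation curve follow from these parameters in closed form; a minimal Python sketch is given below (the disk contribution is supplied by the N-body particles, so here it is simply inferred from the assumed flat total curve at the solar distance):
\begin{verbatim}
import numpy as np

G = 4.301e-6                      # kpc (km/s)^2 / Msun
M_b, R_b = 1.17e10, 0.61          # Plummer bulge
V_max, R_c = 210.0, 7.6           # halo parameters

def v_bulge(R):                   # circular speed of a Plummer sphere
    return np.sqrt(G * M_b * R**2 / (R**2 + R_b**2) ** 1.5)

def v_halo(R):
    return V_max * R / np.sqrt(R**2 + R_c**2)

R0, Theta0 = 7.5, 220.0
v_b, v_h = v_bulge(R0), v_halo(R0)
v_d = np.sqrt(Theta0**2 - v_b**2 - v_h**2)  # disk supplies the rest
print(v_b, v_h, v_d)   # ~81, ~147, ~141 km/s: disk ~ halo at R0
\end{verbatim}
The near-equality of the last two numbers illustrates the statement below that the disk and halo contribute almost equally at the solar distance.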
The N-body models are two-dimensional, and the gravitational potential
due to self-gravitating particles is calculated by using a logarithmic
polar grid (108 radial and 144 azimuthal cells). The N-body code we
used has been written by H. Salo \citep[for more details on the code,
see][]{salo1991,salo2000}. The value of the gravitational softening is about
0.2 kpc on the adopted length scale. The mass of the disk is
$M_{disk}=3.51 \times 10^{10} \ M_\odot$.
The disk is composed of 8 million gravitating stellar particles, whose
initial distribution is an exponential disk reaching about 10 scale
lengths. The disk and halo have nearly equal contribution to the
rotation curve at the solar distance. The initial scale length of
the disk was about 2 kpc, but after the bar formation the disk develops
a double-exponential profile: the inner profile becomes steeper and the outer profile
shallower, and the exponential scale length corresponds to about 3
kpc outside the bar region. The initial value of the Toomre-parameter
$Q_T$ was 1.75.
The gas disk was modeled by inelastically colliding test particles as
was done in Paper I. The initial velocity dispersion of the gas
disk was low, about $2 \ \mathrm{km \ s}^{-1}$, but it reached
typical values in the range $5-15 \ \mathrm{km \ s}^{-1}$ during the
simulation. If collisions are omitted, the velocity dispersion of the
test particles rises much higher into the range $25-50 \ \mathrm{km
\ s}^{-1}$. The model used in the kinematical analysis contains 40
000 gas particles initially distributed as a uniform disk with an
outer radius of 9.2 kpc.
\subsection{Evolution of the stellar component}
\begin{figure*}
\centering
\includegraphics{14646fg3.eps}
\caption{The amplitude spectra of the relative density perturbations
in the model disk. The frames show the amplitude spectra of the
stellar or gas component at various times (indicated on the frame
titles). The contour levels are 0.025,0.05,0.1,0.2,0.4, and 0.8,
calculated with respect to the azimuthal average surface density at
each radius. The continuous lines show the frequencies $\Omega$ and
$\Omega \pm \kappa/m$, and the dashed curves indicate the frequencies
$\Omega \pm \kappa/4$ in the $m=2$ amplitude spectrum.}
\label{nbody_power}
\end{figure*}
The inner regions quickly develop a small spiral (at $T \sim 2.5$),
which then evolves to a clear bar ($T \sim 5$). Its original pattern speed
$\Omega_{b}$ is about $80 \ \mathrm{km \ s}^{-1}\mathrm{kpc}^{-1}$,
meaning that when it forms it does not have an Inner Lindblad
Resonance (ILR). In its early phase the bar slows down quite quickly
($\Omega_{b} \approx 60 \ \mathrm{km \ s}^{-1}\mathrm{kpc}^{-1}$ at
$T=10$), but the deceleration rate soon settles down: $\Omega_{b}
\approx 54 \ \mathrm{km \ s}^{-1}\mathrm{kpc}^{-1} $ at $T=20$ and
$\Omega_{b} \approx 47 \ \mathrm{km \ s}^{-1}\mathrm{kpc}^{-1}$ at
$T=55$. In this model the bar's slowing down is accompanied by its growth,
and the bar can always be considered dynamically fast \citep[see
e.g.][]{debattista2000}. Using the same method to determine the bar
length as \citet{rautiainen2008} \citep[a modification of
one used by][]{erwin2005}, we get $R_{bar}= 4.0 \pm 0.6$ kpc at T=55
and $R_{CR}/R_{bar} = 1.2 \pm 0.2$. There is no secondary bar in this
model.
The amplitude spectra of the relative density perturbations \citep[see
e.g.][]{masset1997,rautiainen1999} (Fig.~\ref{nbody_power}) show
that the bar mode is not the only one in the disk, but there are also
slower modes. The strongest of these modes, hereafter the S1 mode, has
an overlap of resonance radii with the bar: the corotation radius of
the bar is approximately the same as the inner 4/1-resonance radius of
the slower mode (at $T=55$ the $R_{CR}$ of the bar and the inner 4/1
resonance radius of the S1 mode are both about 4.6 kpc). This
resonance overlap does not seem to be a coincidence: when the
amplitude spectra from different time intervals are compared, one can
see that both the bar and the S1 modes slow down so that the resonance
overlap remains (see Fig.~\ref{nbody_power}). Furthermore, this
resonance overlap was the most common case in the simulations of
\citet{rautiainen1999}. Also, the S1 mode has a strong $m=1$
signal and a maximum near its corotation at 7.1 kpc. The bar mode
is also seen as a strong signal in the $m=4$ spectrum, but only inside
CR -- the spiral part seems to be almost pure $m=2$ mode. Altogether,
the signals with $m>2$ tend to be much weaker than features seen in
$m=1$ and $m=2$ amplitude spectra.
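For readers wishing to reproduce such spectra, the computation amounts to Fourier transforming the relative density perturbation first in azimuth and then in time \citep[cf.][]{masset1997,rautiainen1999}. A minimal Python sketch, with normalization conventions that are ours, is:
\begin{verbatim}
import numpy as np

def amplitude_spectrum(sigma, m, dt):
    # sigma: surface density on an (n_t, n_R, n_theta) polar grid,
    # sampled at uniform time steps dt and uniform azimuths
    n_t, n_th = sigma.shape[0], sigma.shape[2]
    theta = 2.0 * np.pi * np.arange(n_th) / n_th
    # complex m-component of sigma relative to its azimuthal average
    W = 2.0 * (sigma * np.exp(-1j * m * theta)).sum(axis=2) \
        / sigma.sum(axis=2)
    A = np.abs(np.fft.fft(W, axis=0)) / n_t     # amplitude, (n_t, n_R)
    Omega_p = 2.0 * np.pi * np.fft.fftfreq(n_t, dt) / m  # pattern speed
    return Omega_p, A
\end{verbatim}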
\begin{figure*}
\centering
\includegraphics{14646fg4.eps}
\caption{The reconstructed modes in the stellar component (see text)
for $T=50-60$ time interval. The enhanced density compared to the
azimuthally averaged profile at each radius is shown.The shades of
gray (darker corresponds to higher surface density) have been chosen
to emphasize the features. The circles in the bar mode indicate ILR
(1.4 kpc), CR (4.6 kpc), and OLR (8.1 kpc), whereas the inner 4/1 (4.6
kpc) and CR (7.1 kpc) are shown for the mode S1.}
\label{mode_shapes}
\end{figure*}
We have also tried to reconstruct the shapes of the modes seen in the
amplitude spectra. This was done by averaging the surface density in
coordinate frames rotating with the same angular velocities as the
modes. No assumptions were made about the shapes of the modes. On the
other hand, one should take these reconstructions with some caution,
because the evolution of the two modes, the effect of slower (but
weaker) modes, and short-lived waves may affect them. The results for
the bar and the S1 mode at the time interval $T=50-60$ are shown in
Fig.~\ref{mode_shapes}. The mode $\Omega_p=47 \ \mathrm{km
\ s}^{-1}\mathrm{kpc}^{-1}$ clearly shows the bar and symmetrical
spiral structure that forms an $R_1$ outer ring or pseudoring. By the
$T=50-60$ interval, the density amplitude of the bar mode is about
15-20 per cent in the outer ring region, where the maxima and minima
have roughly the same strength. On the other hand, by $T=50-60$, the
mode $\Omega_p=31 \ \mathrm{km \ s}^{-1}\mathrm{kpc}^{-1}$ is clearly
lopsided, which is not surprising considering the signal seen in the
$m=1$ amplitude spectrum. There is a minimum with an amplitude of
about 30\% and a maximum of about 15\% at $R \approx 7$ kpc, which
corresponds to the CR of the S1 mode. Earlier, at $T \approx 20$, the
S1 mode does not have the $m=1$ characteristic but exhibits a
multiple-armed structure beyond its CR, accompanied by a clear signal
in the $m=3$ amplitude spectrum.
\subsection{The morphological changes in the gas component}
The amplitude spectra for the gas component at the interval $T=50-60$
are also shown in Fig.~\ref{nbody_power}. Due to fewer particles, they
include more noise, but otherwise they are quite similar. In addition
to the bar mode, the S1 mode is also seen, but now it is more
conspicuous in the $m=1$ spectrum.
\begin{figure*}
\centering
\includegraphics{14646fg5.eps}
\caption{The gas morphology at selected times. The bar is vertical in
all frames, whose width is 20 kpc.}
\label{nbody_gasmorph}
\end{figure*}
The result of having several modes is a rather complicated
evolution of the model (see Fig.~\ref{nbody_gasmorph}): at different
times, the morphology of the outer gaseous disk can be described as
$R_1R_2'$, $R_2'$, $R_1'$ or just as open spiral arms, which can sometimes
be followed over 400 degrees. There is no evolutionary trend between
the morphological stages, since they all appear several times during the
model time span. The shape of the inner ring also changes, sometimes
being more elongated or even consisting of a tightly wound pair of
spiral arms. In a broader sense, the overall Hubble stage of the
model stays the same for several Gyr.
Although the slow modes in the stellar component can be clearly seen
outside the bar radius (about 4 kpc), they become pronounced in the
gas from $R \approx 6$ kpc. To study their effect on the gas
morphology, we selected gas particles located at the annulus
$7<R<10$ kpc and calculated their number within every 5$^\circ$-sector
along $\theta$. Such density profiles were built for 301 moments from
the interval T=30--60 ($T \approx 3$--6 Gyr) with a step $\Delta
T=0.1$ ($\sim 10$ Myr). Earlier stages were not considered, because
then the pattern speed of the bar was changing so fast that it
complicated the analysis. At every moment the distribution of gas
density along $\theta$ was approximated by one-fold ($m=1$), two-fold
($m=2$), and four-fold ($m=4$) sinusoidal waves:
\begin{equation}
\sigma=\sigma_0+A_m\cos(m \theta +\phi_m),
\label{sigma}
\end{equation}
\noindent where $\sigma$ is the gas density in a segment, $\sigma_0$
is the average density in the annulus, $\phi_m$ and $A_m$ are the
phase and amplitude of the corresponding sinusoidal approximation,
respectively.
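In practice, the fit of Eq.~(\ref{sigma}) is linear once it is rewritten as $\sigma_0 + a\cos m\theta + b\sin m\theta$ with $a=A_m\cos\phi_m$ and $b=-A_m\sin\phi_m$. A short Python sketch of this step (the variable names and binning are ours) is:
\begin{verbatim}
import numpy as np

def fit_mode(theta, sigma, m):
    # least-squares fit of sigma = sigma0 + A_m cos(m theta + phi_m)
    X = np.column_stack([np.ones_like(theta),
                         np.cos(m * theta), np.sin(m * theta)])
    sigma0, a, b = np.linalg.lstsq(X, sigma, rcond=None)[0]
    A_m = np.hypot(a, b)            # a = A cos(phi), b = -A sin(phi)
    phi_m = np.arctan2(-b, a) % (2.0 * np.pi)
    return sigma0, A_m, phi_m

# counts of gas particles per 5-degree sector at 7 < R < 10 kpc:
theta = np.deg2rad(np.arange(2.5, 360.0, 5.0))  # 72 sector centres
# sigma = np.histogram(theta_p, bins=72, range=(0, 2*np.pi))[0]
\end{verbatim}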
Figure ~\ref{humps} demonstrates the motion of maxima in the
distribution of gas particles along $\theta$. We made the density
profiles in the reference frame co-rotating with the bar, whose major
axis is always oriented in the direction $\theta=0^\circ$. Azimuthal
angle $\theta$ is increasing in the sense of the galactic rotation, so
the supposed position of the Sun is about $\theta=315^\circ$. To
illustrate the motion of density crests, we selected two intervals
T=35.5--37.5 and T=52.5--54.5 with a high amplitude of density
perturbation. These density profiles indicate the motion of density
maxima in the opposite direction to that of galactic rotation
(i.e. they actually rotate more slowly than the bar), which means an
increase in the phase $\phi_m$ of the sinusoidal wave
(Eq.~\ref{sigma}).
Figure ~\ref{phases} exhibits the variations in the phase $\phi_m$
and amplitude $A_m$ of the
sinusoidal wave at the time intervals T=30--40, 40--50, and 50--60.
The subscripts $1$ and $2$ are related to the one- and two-fold
sinusoids. Rotation of the density maxima causes
sharp changes in the phase when it reaches the value
$\phi=360^\circ$, at which point it falls back to zero on the next turn.
These jumps enable us to accurately calculate the mean values
of the periods for the propagation of the sinusoidal waves, which
appear to be $P_1=3.3\pm0.4$ and $P_2=1.5\pm0.4$. Remember that
we study the density oscillations in the reference frame
co-rotating with the bar, so the period $P$ of beating
oscillations between the bar and slow modes is determined by the
relation:
\begin{equation}
P_m=\frac{2\pi}{m(\Omega_b-\Omega_{sl})}.
\label{P}
\end{equation}
The periods, $P_1$ and $P_2$, appear to correspond to slow modes
rotating with the pattern speeds $\Omega=28\pm2$ km s$^{-1}$ kpc$^{-1}$
and $\Omega=26\pm6$ km s$^{-1}$ kpc$^{-1}$, respectively. It is
more convenient to use simulation units here. The transformation
coefficient between them and (km s$^{-1}$ kpc$^{-1}$) is
$k=9.77$, and the value of $\Omega_b$ is $\Omega_b=4.8$ s.u. The
$m=4$ wave manifested itself as two density maxima separated by
the angle $\Delta \theta\approx90^\circ$ (Fig.~\ref{humps}, right
panel). The analysis of the phase motion of the four-fold sinusoid
reveals the period $P_4=0.81\pm0.15$, which corresponds to a slow
mode rotating with the speed $\Omega_{sl}=28\pm4$ km s$^{-1}$
kpc$^{-1}$ (Eq.~\ref{P}). It is probably the mode with $\Omega=28\pm4$ km
s$^{-1}$ kpc$^{-1}$ that causes the strong variations in gas
density with the periods $P_1=3.3$, $P_2=1.5$, and $P_4=0.8$ when
it acts as $m=1$, $m=2$, and $m=4$ density perturbations,
respectively. This mode is well-defined in the gas and stellar
amplitude spectra built for the interval T=50--60
(Fig.~\ref{nbody_power}).
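Inverting Eq.~(\ref{P}) for the measured periods provides a quick consistency check; in Python, with the conversion factor and $\Omega_b$ quoted above:
\begin{verbatim}
import numpy as np

k = 9.77             # simulation units -> km/s/kpc
Omega_b = 4.8        # bar pattern speed in simulation units
for m, P in ((1, 3.3), (2, 1.5), (4, 0.81)):
    # invert the beat-period relation for the slow-mode speed
    Omega_sl = (Omega_b - 2.0 * np.pi / (m * P)) * k
    print(m, round(Omega_sl))  # m=1: 28, m=2: 26, m=4: 28 km/s/kpc
\end{verbatim}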
Let us have a look at the amplitude variations
(Fig.~\ref{phases}). The highest value of $A_2$,
equal to $A_2=200$ particles per 5$^\circ$-sector, is observed at
the time $T=36.0$ (left panel). On the other hand, $A_1$
reaches its highest value of $A_1=220$ at the time $T=56.5$
(right panel). Amplitude $A_4$ reaches its maximum value of
$A_4=180$ at the time interval $T=53-55$. Thus, the highest
values of the amplitudes $A_1$, $A_2$, and $A_4$ are nearly the
same.
Figure ~\ref{humps} (left panel) indicates the growth of the
amplitude of the $m=2$ perturbation at a specific orientation of the
density clumps. The amplitude of the sinusoidal wave is
at its maximum at the moments $T=36.0$ and 37.5 when the density clumps
are located near the bar's minor axis, $\theta=90^\circ$ and
270$^\circ$. This growth is also seen in Fig.~\ref{phases} (left
panel) for the interval T=30-40: the amplitude $A_2$ is at its maximum
at the moments when $\phi_2\approx 180^\circ$. This phase
corresponds to the location of maxima of $m=2$ sinusoid at
$\theta=90^\circ$ and $\theta=270^\circ$ (Eq.~\ref{sigma}).
Our analysis revealed slight variations in the speed of the strongest
slow mode, and they depend on its orientation with respect to the bar:
Fig.~\ref{phases} (left panel) shows that the tilt of the phase
curve, $\phi_2(t)$, is variable. We can see that the slow mode rotates
a bit faster when $\phi_2\approx 180^\circ$ (density clumps are near
the bar's minor axis) and more slowly when $\phi_2=0$ or 360$^\circ$ (the
clumps are near the bar's major axis). The variations in the speed
of the slow mode are probably connected with changes in the form of the
density crests due to the tidal interaction between the bar mode
(bar+$R_1$-ring) and the slow mode.
\begin{figure*}
\centering
\includegraphics{14646fg6.eps}
\caption{The perturbation in the density of gas particles,
$\sigma'=\sigma-\sigma_0$, located at the annulus $7<R<10$ kpc along
azimuthal angle $\theta$ built for different moments. It also shows
its approximation by two-fold (left panel) and one-fold (right
panel) sinusoids.}
\label{humps}
\end{figure*}
\begin{figure*}
\centering
\includegraphics{14646fg7.eps}
\caption{Variations in the phase $\phi$ (black curve) and
amplitude $A$ (gray curve) of the sinusoids that approximate the
distribution of gas particles located at the distances $7<R<10$
kpc along $\theta$. Subscripts $1$ and $2$ are related to the
one- and two-fold sinusoids, respectively.} \label{phases}
\end{figure*}
\section{Kinematics of gas particles. Comparison with observations}
\subsection{Momentary and average velocities}
We start our kinematical study with the interval T=50--60 (5--6 Gyr in
physical time). At this period the bar rotates with a nearly constant
pattern speed of $\Omega_b=47$ km s$^{-1}$ kpc$^{-1}$ which simplifies
the analysis. The interval T=50--60 also provides the best
agreement between the model and observed velocities.
We determined the positions and velocities of gas particles at
101 moments separated by the step $\Delta T=0.1$. For each
moment we selected gas particles located within the boundaries of
the stellar gas complexes and calculated their mean velocities
and velocity dispersions. To determine the positions of the complexes,
we need to choose the position angle of the Sun with respect to
the bar elongation, $\theta_b$. In this section we adopted the
value of $\theta_b=45^\circ$, which gives the best fit between the
model and observed velocities.
\begin{figure*}
\centering
\includegraphics{14646fg8.eps}
\caption{Variations in the mean velocities of gas particles
located within the boundaries of the stellar-gas complexes. The
left panel is related to the radial component $V_R$ and the
right one to the azimuthal one $V_\theta$.} \label{vel-var}
\end{figure*}
Figure ~\ref{vel-var} shows the variations in the mean residual
velocities, $V_R$ and $V_\theta$, calculated for five complexes at
different moments. The residual velocities were computed as
differences between the model velocities and the velocities due to the
rotation curve. It is clearly seen that the momentary velocities
oscillate near the average values within the limits of $\sim\pm20$ km
s$^{-1}$. Two processes are probably responsible for these
oscillations. The first is the slow modes, which cause a quasi-periodic
component in the velocity variations. The second process is likely
connected with short-lived perturbations, e.g. from transient
spiral waves in the stellar component. The averaging of velocities
over a long time interval reduces the influence of slow modes and
occasional perturbations.
Table~\ref{table-average1} presents the average values of the
momentary residual velocities, $\overline{V_R}$ and
$\overline{V_\theta}$, calculated over 101 moments. It also gives the
average values of velocity dispersions, $\overline{\sigma_R}$ and
$\overline{\sigma_\theta}$, and the average number of particles
$\overline{n}$ in the complexes. Since the bar has two tips, we
calculated velocities for two opposite position angles,
$\theta_b=45^\circ$ and $\theta_b=225^\circ$, and used their mean
values. The averaged residual velocities are determined with the
errors of 0.4--1.4 km s$^{-1}$. The relatively low level of the errors
is due to the large number of moments considered (N=101).
\begin{table}
\centering
\caption{Model residual velocities averaged on interval T=50-60}
\begin{tabular}{lccccc}
\hline
Region & $\overline{V_R}$ & $\overline{\sigma_R}$ & $\overline{V_\theta}$ &$\overline{\sigma_\theta}$ & $\overline{n}$\\
& km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$& km s$^{-1}$ & \\
\hline
Sagittarius & 8.5& 7.2& 0.1&5.9& 70\\
Carina & 7.5& 7.6&-2.0&6.6&158\\
Cygnus & 6.8&10.1& 8.2&6.5&108\\
Local System& 6.8&11.7& 4.8&6.7&112\\
Perseus &-12.5&11.9&-2.9&6.5& 70\\
\hline
\end{tabular}
\label{table-average1}
\end{table}
When comparing Tables ~\ref{table-average1} and
~\ref{observations}, one can see that our model reproduces the
directions of the radial and azimuthal components of the residual
velocities in the Perseus and Sagittarius regions and in the Local
System. We succeed best in the Sagittarius region, where our model
reproduces the observed velocities with an accuracy of 1.4 km s$^{-1}$.
Unfortunately, in the Perseus region the model residual velocity
$|\overline{V_R}|$ is too high, and the difference between the
model and observed velocities reaches 5.8 km s$^{-1}$ there. Our
model can also reproduce the positive $\overline{V_R}$ velocity in the
Local System, which deviates only 1.5 km s$^{-1}$ from the observed
one.
We now consider the mean difference between the model and
observed velocities $\Delta V$ calculated for the radial and
azimuthal components:
\begin{equation}
\Delta V^2=\frac{\sum^k_1 \left\{ (\overline{V_R}-V_{R\mbox{
obs}})^2+ (\overline{V_\theta}-V_{\theta\mbox{ obs}})^2 \right\}
}{2k},
\label{delta_v}
\end{equation}
\noindent where $k$ is a number of complexes. The value of
$\Delta V$ computed for three complexes (the Sagittarius and
Perseus regions and the Local System) equals $\Delta V=3.3$ km
s$^{-1}$. Another situation is observed in the Carina and Cygnus
regions where we cannot even reproduce the direction of the
observed residual velocities. The mean value of the velocity
deviations reaches $\Delta V=13.3$ km s$^{-1}$ there.
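As a cross-check, the $\Delta V$ formula can be evaluated directly from the mean velocities listed in Tables~\ref{observations} and \ref{table-average1}; a few lines of Python reproduce the quoted value:
\begin{verbatim}
import numpy as np

# (V_R, V_theta) in km/s: observed vs model averages
obs   = {"Sgr": ( 9.9, -1.0), "LS": (5.3, 0.6), "Per": ( -6.7, -5.9)}
model = {"Sgr": ( 8.5,  0.1), "LS": (6.8, 4.8), "Per": (-12.5, -2.9)}

sq = [(model[r][i] - obs[r][i]) ** 2 for r in obs for i in (0, 1)]
dV = np.sqrt(sum(sq) / (2.0 * len(obs)))   # Delta V with k = 3
print(round(dV, 1))                        # -> 3.3 km/s
\end{verbatim}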
To demonstrate the distribution of the average velocities on the
galactic plane, we divided the area ($-10<x<+10$, $-10<y<+10$
kpc) into small squares of a size $0.250\times0.250$ kpc. For
each square we calculated the average values of the residual
velocities of gas particles. Then we averaged residual velocities
over 101 moments for the interval T=50--60. The average residual
velocities in squares are shown in Fig.~\ref{vel-average}. We
depicted only squares that contain a sufficiently high number of
particles, $n>\overline{m}/2$, where $n$ is the number of particles
accumulated in a square over 101 moments and $\overline{m}$ is
their number averaged over all squares, $\overline{m}=463$.
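The averaging itself is a simple two-dimensional binning operation; a possible Python sketch (the function name and the treatment of the threshold are ours) is:
\begin{verbatim}
import numpy as np
from scipy.stats import binned_statistic_2d

def average_in_squares(x, y, v, size=0.25, half=10.0):
    # mean residual velocity and particle count in size x size (kpc)
    # squares; (x, y, v) are accumulated over all sampled moments
    edges = np.arange(-half, half + size, size)
    vmean, _, _, _ = binned_statistic_2d(x, y, v, "mean",
                                         bins=[edges, edges])
    count, _, _ = np.histogram2d(x, y, bins=[edges, edges])
    vmean[count < count.mean() / 2.0] = np.nan  # drop sparse squares
    return vmean, count
\end{verbatim}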
\begin{figure*}
\centering
\includegraphics{14646fg9.eps}
\caption{Distribution of the negative and positive average
residual velocities calculated in squares. The squares with
positive velocities are shown in black, while those with negative
ones are given in gray. Only squares that satisfy the condition
$n>\overline{m}/2$ are depicted. The left panel represents the
radial velocities, while the right one shows the azimuthal ones. It
also demonstrates the boundaries of the stellar-gas complexes.
The position angle of the Sun is supposed to be
$\theta_b=45^\circ$. The bar is oriented along the Y-axis, the
galaxy rotates clockwise, and a division on the $X$- and $Y$-axis
corresponds to 1 kpc.} \label{vel-average}
\end{figure*}
In Paper I we built similar figures for models with
analytical bars. Two different moments were considered:
when the broken rings (pseudorings) were observed and when they
transformed into pure rings. The pseudorings and pure rings
created different kinematical pictures. We connected the main
kinematical features of the pseudorings with the gas outflow and those
of the pure rings with the resonance. The distribution of the negative
and positive velocities obtained for N-body simulations
(Fig.~\ref{vel-average}) strongly resembles that of the pseudorings in
models with analytical bars, giving support to the ``averaging
process'' adopted here. This similarity suggests there is
gas outflow in the present model (see also Sect. 4.4).
\subsection{Velocities in the complexes under different values of the solar position angle $\theta_b$}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{14646fg10.eps}}
\caption{Dependence of the average residual velocities,
$\overline{V_R}$ (a) and $\overline{V_\theta}$ (b), and the
$\chi^2$-function (c) on the solar position angle $\theta_b$. The
curves calculated for different complexes are shown by different
lines. The $\chi^2$-function was computed for three complexes:
the Perseus and Sagittarius regions and the Local System.}
\label{v-theta_b}
\end{figure}
We studied the dependence of the average residual velocities
$\overline{V_R}$ and $\overline{V_\theta}$ on the solar position angle
$\theta_b$. Figure ~\ref{v-theta_b}ab shows 5 curves that demonstrate
the velocity changes in 5 complexes. The sharpest changes in the
radial velocity $\overline{V_R}$ are observed in the Local System and
in the Cygnus region, and the radial velocities in the other complexes
depend only weakly on the choice of $\theta_b$. As for the azimuthal
component, the strongest changes can be seen in the Sagittarius,
Carina, and Perseus regions, but the velocity changes are modest in
other complexes. Practically speaking, the optimal value of
$\theta_b$ providing the best agreement between the model and observed
velocities is determined by the radial velocity in the Local System
and by the azimuthal velocity in the Sagittarius region. These
velocities reach their observed values of $V_R=5.3$ and
$V_\theta=-1$ km s$^{-1}$ at $\theta_b=43^\circ$ and
$\theta_b=48^\circ$, respectively.
We now consider the sum of square differences between the model and
observed velocities, $\chi^2$, obtained for the radial and azimuthal
components under different values of $\theta_b$. Figure
~\ref{v-theta_b}c shows the $\chi^2$-function computed for three
complexes: the Perseus and Sagittarius regions and the Local
System. It is clearly seen that $\chi^2$ reaches its minimum values
in the interval $\theta_b >40^\circ$. We chose $\theta_b=45^\circ$ as
the optimal value because it reproduces the observational velocity
$V_\theta=-1$ km s$^{-1}$ well in the Sagittarius region. Models with
analytical bars in Paper I gave the same result.
\subsection{Analysis of periodicity in oscillations of the momentary velocities}
Now we approximate the oscillations in the radial and azimuthal
components of the momentary velocities, $V_R$ and $V_\theta$
(Fig.~\ref{vel-var}), by the sinusoidal law:
\begin{equation}
V_R (\textrm{or }V_\theta)=A_1\sin(2\pi T/P)+A_2\cos(2\pi T/P),
\end{equation}
\noindent where $P$ is a period of oscillations,
$A_0=\sqrt{A_1^2+A_2^2}$ is an amplitude of oscillations, and $T$ is
time counted from $T_0=50$.
We use the standard least-squares method to solve the system of 101
equations, which are linear in the parameters $A_1$ and $A_2$ for each
value of nonlinear parameter $P$. We then determine the value of $P$
that minimizes the sum of squared normalized residual velocities
$\chi^2$. Figure ~\ref{period} presents the $\chi^2$-curves built for
the oscillations of the radial velocity in 5 complexes, but the curves
made for the azimuthal velocities have no conspicuous minima. It is
clearly seen that $\chi^2$-curves demonstrate deep minima in the
Cygnus and Perseus regions and in the Local System. These minima
correspond to the best periods in approximating the velocity
oscillations that have the following values: $P=2.7\pm0.4$ in the
Cygnus region, $P=2.9\pm1.0$ in the Local System, and
$P=1.6\pm0.2$ in the Perseus region. We have already obtained period
$P=1.5$ when studying density oscillations on the galactic periphery
(Sect. 3.3). Probably, the strongest slow mode $\Omega=28$ km s$^{-1}$
kpc$^{-1}$ is also responsible for the velocity oscillations: the
beating oscillations between the bar mode and a two-armed pattern
rotating with the speed $\Omega=28$ km s$^{-1}$ kpc$^{-1}$ must have
a period of $P=1.6$, and those calculated for a one-armed perturbation
have a period of $P=3.2$ (Eq.~\ref{P}). Some of the small differences
between the pattern speeds derived from the amplitude spectra and
those obtained from kinematical analysis may be due to tidal
interaction in the stellar component between the bar and slow modes.
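The period search thus reduces to a one-dimensional scan over the nonlinear parameter $P$, with a linear least-squares fit of $A_1$ and $A_2$ at each trial value. A minimal Python sketch (a constant term could be added to absorb the mean residual velocity) is:
\begin{verbatim}
import numpy as np

def best_period(T, V, periods):
    # for each trial P, fit V = A1 sin(2 pi T/P) + A2 cos(2 pi T/P)
    # and return the P minimizing the sum of squared residuals
    chi2 = []
    for P in periods:
        X = np.column_stack([np.sin(2.0 * np.pi * T / P),
                             np.cos(2.0 * np.pi * T / P)])
        coef = np.linalg.lstsq(X, V, rcond=None)[0]
        chi2.append(((V - X @ coef) ** 2).sum())
    chi2 = np.asarray(chi2)
    return periods[np.argmin(chi2)], chi2

T = np.linspace(0.0, 10.0, 101)  # 101 moments, counted from T0 = 50
# P_best, chi2 = best_period(T, V_R, np.linspace(0.5, 5.0, 200))
\end{verbatim}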
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{14646fg11.eps}}
\caption{$\chi^2$-functions built for studying periodicity in
the oscillations of the radial velocities, $V_R$, in 5 complexes.
The minima on the curves must correspond to the best periods in
approximation of the velocity oscillations.} \label{period}
\end{figure}
\subsection{Evolutionary aspects of kinematics at the time interval
T=30-60}
Let us compare the average residual velocities calculated for
different time intervals T=30--40, 40--50, and 50--60
(Tables~\ref{table-average1}, \ref{table-average2}, and
\ref{table-average3}). Generally, most changes in the residual
velocities do not exceed 4.0 km s$^{-1}$ and are likely caused by
occasional perturbations. On the other hand, radial velocities
$\overline{V_R}$ in the Local System and in the Cygnus region
demonstrate the ongoing growth, which can be connected with the
evolution of the outer rings.
Figure ~\ref{density_30-60} shows the surface density of gas particles
averaged in squares at different time intervals. The average density
was calculated in the reference frame that rotates with the speed of
the bar. The light-gray, dark-gray, and black colors represent squares
containing an increasing number of particles, $n>\overline{m}/2$,
$n>\overline{m}$, and $n>2\overline{m}$, respectively, where $n$ is
the number of particles accumulated in a square over 101 moments and
$\overline{m}$ is their number averaged over all squares,
$\overline{m}=463$. It is clearly seen that the major axis of the
outer ring $R_2$ changes its orientation: it goes $\alpha\sim20^\circ$
ahead of the bar at the interval T=30--40, but this angle increases to
$\alpha\sim45^\circ$ at the intervals T=40--50 and T=50--60. Moreover,
the outer ring changes its morphology: we can identify two outer rings
of classes $R_1$ and $R_2$ at the interval T=30--40, while there is
only one outer ring with an intermediate orientation of
$\alpha\approx45^\circ$ at the intervals T=40--50 and 50--60. Its
shape becomes rounder at the interval T=50--60.
Let us consider more thoroughly the distribution of gas particles at
the interval T=50--60 (Fig.~\ref{density_30-60}). It is clearly seen
that the surface density of gas particles at the distance range of
$R=6-9$ kpc is nearly twice the average density all over the disk
$\overline{m}$. The density perturbation inside the outer ring can be
approximated by two spiral arms with a pitch angle of
$i=6\pm1^\circ$. The density perturbation inside them reaches
100 per cent with respect to the average gas density in the disk.
This is considerably larger than the density
perturbation seen in the stellar component (15-20 per cent).
Figure ~\ref{density-profile} shows the profiles of the surface
density of gas particles averaged at the different time
intervals. We can see the growth of the density hump at the
distance of $R\approx7$ kpc, which indicates the growth of the
outer ring. In contrast, the hump at $R\approx 3$ kpc is
decreasing, which reflects the weakening of the inner ring. At
the interval $T=50$--60, the maximum in the gas density
distribution is located at the distance $R=7.3$ kpc, which is just
in the middle between the outer 4/1 resonance (6.4 kpc) and the
OLR (8.1 kpc) of the bar.
Tables~\ref{table-average1}, \ref{table-average2},
and \ref{table-average3} also list the velocity dispersions of
gas particles in the stellar-gas complexes. We can see that their
average values stay at nearly the same level of
$\overline{\sigma_R}=9.7\pm0.1$ and
$\overline{\sigma_\theta}=6.3\pm0.2$ km s$^{-1}$ during the period
T=30--60. The maximum growth, which does not exceed $\sim 20$ per
cent, is observed in the Perseus region. The model velocity
dispersions somewhat exceed the observed values derived for
OB-associations in the stellar-gas complexes, $\sigma_{R\mbox{
obs}}=7.7$ and $\sigma_{\theta\mbox{ obs}}=5.2$ km s$^{-1}$, but
this difference is below 30 per cent.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{14646fg12.eps}}
\caption{The surface density of gas particles averaged in squares
at the time intervals T=30--40, 40--50, and 50--60. The
light-gray, dark-gray, and black colors represent squares
containing an increasing number of particles:
$n>\overline{m}/2$, $n>\overline{m}$, and $n>2\overline{m}$,
respectively. The position of the Sun is shown by the specific
symbol. The bar is oriented along the Y-axis, the galaxy rotates
clockwise, and a division on the $X$- and $Y$-axis corresponds to 1
kpc.} \label{density_30-60}
\end{figure*}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{14646fg13.eps}}
\caption{Profiles of the surface density of gas particles
averaged at the intervals T=30--40, 40--50, and 50--60.}
\label{density-profile}
\end{figure}
\begin{table}
\centering
\caption{Model residual velocities averaged on interval T=30-40}
\begin{tabular}{lccccc}
\hline
Region & $\overline{V_R}$ & $\overline{\sigma_R}$ & $\overline{V_\theta}$ &$\overline{\sigma_\theta}$ & $\overline{n}$\\
& km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$& km s$^{-1}$ & \\
\hline
Sagittarius & 10.2& 8.4& 1.8&6.9& 84\\
Carina & 10.4& 8.7& 1.2&7.8&184\\
Cygnus & -0.7& 9.9&10.8&5.4& 87\\
Local System& -1.0&11.6& 7.7&5.9& 90\\
Perseus &-11.3& 9.7&-2.5&5.6& 83\\
\hline
\end{tabular}
\label{table-average2}
\end{table}
\begin{table}
\centering
\caption{Model residual velocities averaged on interval T=40-50}
\begin{tabular}{lccccc}
\hline
Region & $\overline{V_R}$ & $\overline{\sigma_R}$ & $\overline{V_\theta}$ &$\overline{\sigma_\theta}$ & $\overline{n}$\\
& km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$& km s$^{-1}$ & \\
\hline
Sagittarius & 8.9& 7.4& 0.8&5.7& 69\\
Carina & 9.1& 7.4&-1.0&6.8&164\\
Cygnus & 3.9&10.4&11.4&5.7&126\\
Local System& 5.6&12.5& 8.3&6.7&121\\
Perseus &-15.2&11.1&-1.9&5.5& 50\\
\hline
\end{tabular}
\label{table-average3}
\end{table}
\section{Conclusions}
We have presented N-body simulations that reproduce the kinematics of
OB-associations in the Perseus and Sagittarius regions and in the
Local System. The velocities of gas particles averaged over large time
intervals (1 Gyr or 8 bar rotation periods) reproduce the directions
of the observed velocities in these regions. The mean difference
between the model and observed velocities calculated for the radial
and azimuthal components is $\Delta V=3.3$ km s$^{-1}$ there.
The galactic disk in our model includes two subsystems. The behavior
of the stellar subsystem is modeled by 8 million gravitating
collisionless particles. The stellar disk quickly forms a bar. Its
original pattern speed is quite high, but it first quickly decreases
and then moves to a slow decrease with $\Omega \approx 50 \ \mathrm{km
\ s}^{-1} \mathrm{kpc}^{-1}$ for several Gyrs. With our favored
value of the solar distance, $R_0=7.5 \ \mathrm{kpc}$, this sets us
close to the OLR ($R_{OLR}=8.1 \ \mathrm{kpc}$). This agrees with
studies of local stellar velocity distribution
\citep{dehnen2000,fux2001,minchev2009}, although they tend to set the
OLR slightly inside $R_0$. The optimal value of the solar position
angle $\theta_b$ providing the best agreement between the model and
observed velocities is $\theta_b=45\pm5^\circ$. The bar is quite long
($R_{bar} \approx 4.0$ kpc), but both its size and orientation are
consistent with the parameters derived from infrared observations
\citep{benjamin2005,cabrera-lavers2007}.
The stellar disk also creates an outer ring of class $R_1$ rotating
with the pattern speed of the bar, and the corresponding density
perturbation amounts to 15--20 per cent of the average density at the
same distance. Besides the bar, the stellar disk includes several slow
modes. The strongest of these rotates with the pattern speed of
$\Omega \approx 30 \ \mathrm{km \ s}^{-1} \mathrm{kpc}^{-1}$ and is
often clearly lopsided.
The gas subsystem is modeled by 40 000 massless particles that move in
the potential created by the stellar particles (and analytical bulge
and halo) and can collide with each other inelastically. The gas disk
forms an outer ring that exhibits quasi-periodic changes in its
morphology because it has several modes. One can identify elements of
$R_1$- and $R_2$-morphology, and the outer ring can often be
classified as $R_1R_2'$. The gas density perturbation inside the ring
can be approximated by two spiral arms with the pitch angle of
$i=6\pm1^\circ$.
The models with analytical bars (Paper I) reproduced the residual
velocities well in the Perseus and Sagittarius regions. We explained this
success by the resonance between the relative orbital rotation of the
bar and the epicyclic motion. The Sagittarius region must be located
slightly inside the OLR where resonance orbits are elongated
perpendicular to the bar, whereas the Perseus region must lie outside
the OLR where periodic orbits are oriented along the bar. However,
models with the analytical bar failed dramatically with the Local System
where they yielded only negative radial velocities $-24<V_R<-16
\ \mathrm{km \ s}^{-1}$, whereas the observed value is $+5.3
\ \mathrm{km \ s}^{-1}$. The success of N-body simulations with the Local
System is likely due to the gravity of the stellar $R_1$-ring, which
is omitted in models with analytical bars.
To study the effects of the gravity of the $R_1$-ring, we
constructed simpler models with a ``time-averaged bar
potential''. This was done by calculating the average density
distribution in the frame rotating with the bar. This process
should average out most of the effect of slower modes and leave
the bar and the $R_1$-ring that corotates with it. A
preliminary study shows that the momentary velocities in these models
are in good agreement with the average velocities in the
present N-body simulation. A detailed description of these models
will be given in our next paper.
To simplify the analysis, at this point we are forced to ignore a
number of processes that are important over a time interval as long as 6
Gyr. We do not consider the accumulation of gas at the galactic
center, transitions between the gas and stellar subsystems,
resonant interaction between the bar and halo, or minor mergers
and satellite accretion. The effects of these processes
may be considered at a later stage.
\begin{acknowledgements}
We want to
thank H. Salo who wrote the simulation code we have used in this
study. This work was partly supported by the Russian Foundation for Basic
Research (project no.~10\mbox{-}02\mbox{-}00489).
\end{acknowledgements}
|
2,869,038,155,890 | arxiv | \section{Introduction}
Cellular membranes are complex heterogeneous materials consisting of mixtures of lipids, proteins, and other small molecules~\cite{Alberts2007}. The individual and collective dynamics of these species are fine-tuned to carry out complex cellular processes ranging from cell signalling to shape regulation of organelles~\cite{Alberts2007,Voeltz2007,Groves2007,Powers2002,Muller2012,NelsonStatMechMem2004}. The effective two dimensional fluid-elastic nature of cell membranes results in interfacial phenomena and interesting geometric shapes effecting both molecular interactions and dynamics that can be very distinct from their bulk counter-parts. To gain a deeper understanding of cellular processes requires insights into the fundamental mechanics of fluid-elastic bilayer membranes.
Early theoretical investigations of the hydrodynamics of flat lipid bilayer membranes include~\cite{Saffman1975,Saffman1976} and more recently the related works~\cite{Oppenheimer2009,Camley2013,Camley2012,LevineMobilityExtendedBodies2004,Naji2007c,Powers2002,Muller2012}. In the now classic papers of Saffman and Delbr\"uck~\cite{Saffman1975,Saffman1976}, the bilayer is treated as a two dimensional fluid. The two dimensional fluid is coupled to a bulk three dimensional fluid accounting for the solvent surrounding the membrane on both sides. This description of the hydrodynamics is used to model a protein inclusion within a flat infinite membrane to derive the self-mobility
$M_{SD} = (1/4\pi\mu_m)\left(\log({2L_{SD}}/a) - \gamma \right)$. This asymptotic result assumes $a \ll L_{SD}$, where $a$ is the protein size and $\gamma \approx 0.577$ is the Euler-Mascheroni constant. The $L_{SD} = \mu_m/2\mu_f$ is the Saffman-Delbr\"uck length associated with how dissipation within the entrained bulk solvent fluid of viscosity $\mu_f$ regularizes the long-range two dimensional flow of viscosity $\mu_m$. These results highlight the importance of dissipation in the bulk solvent fluid, which if neglected would otherwise lead to the well-known Stokes paradox~\cite{Saffman1976,Lamb1895, HappelBrenner1983}. This shows that particle motion even within a flat interface has a very different character than its counterpart in a bulk fluid. From Stokes theory the bulk self-mobility of a particle scales like $M \sim 1/6\pi\mu_f a$~\cite{Acheson1990,HappelBrenner1983}. For curved membranes the topology and geometry can result in even more significant differences. These include a finite closed membrane surface and solvent trapped in a bounded interior domain, both of which augment the hydrodynamics and coupling.
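As a quick numerical illustration of these scalings, the following short Python sketch compares the two mobilities directly; the viscosities and particle size below are assumed, order-of-magnitude values for illustration only.
\begin{verbatim}
# Sketch: Saffman-Delbruck vs bulk Stokes self-mobility.
# All parameter values are illustrative assumptions.
import numpy as np

mu_m = 1e-9   # membrane (2D) viscosity [Pa s m], assumed
mu_f = 1e-3   # bulk solvent viscosity [Pa s], roughly water
a    = 2e-9   # inclusion radius [m], assumed

L_SD = mu_m/(2.0*mu_f)                 # Saffman-Delbruck length
gamma_EM = 0.5772156649015329          # Euler-Mascheroni constant
M_SD = (np.log(2.0*L_SD/a) - gamma_EM)/(4.0*np.pi*mu_m)
M_bulk = 1.0/(6.0*np.pi*mu_f*a)        # Stokes mobility in bulk

print("L_SD = %.3e m (a << L_SD holds)" % L_SD)
print("M_SD = %.3e, M_bulk = %.3e" % (M_SD, M_bulk))
\end{verbatim}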
More recent works explore the mechanics of membranes both through coarse-grained molecular models~\cite{Deserno2009, Reynwar2007, Cooke2005, DesernoJanuary2015, Farago2003, Tieleman1997, Tieleman2013,Ayton2006} and through continuum mechanics approaches~\cite{Seifert1993, Seifert1997, DesernoJanuary2015, Capovilla2002, Camley2012, AtzbergerSigurdsson2012, AtzbergerBassereau2014, LevineMobilityExtendedBodies2004, LevineViscoelastic2002, LevineDinsmoreHydroEffectTopology2008, LevineHenleHydroCurvedMembranes2010, ArroyoRelaxationDynamics2009, Oppenheimer2009, Vlahovska2011, Chou2008, Klug2006,Powers2002, Muller2012, Ayton2006, OsterSteigmann2013, Lowengrub2007,Du2004}. The particular works~\cite{LevineHenleHydroCurvedMembranes2010, LevineMobilityExtendedBodies2004, LevineDinsmoreHydroEffectTopology2008, Powers2002, NelsonStatMechMem2004, Noguchi2004} introduce a continuum mechanics description for the hydrodynamics of spherical vesicles and tubules. The self-mobility of an embedded particle is computed as the curvature is varied using a truncation of the series representation of the hydrodynamic flow. In~\cite{ArroyoRelaxationDynamics2009} an exterior calculus description of the continuum mechanics of a fluid-elastic membrane sheet is introduced and used to investigate lipid flow during processes such as budding, with an asymptotic model for the ambient fluid. Prior work in this area has primarily focused on single-particle mobility and on transport by hydrodynamics averaged over the two bilayer leaflets.
We introduce here further approaches to investigate the collective hydrodynamic coupling of multiple particle inclusions within leaflets of curved fluid lipid bilayer membranes. We consider the case of spherical bilayer membranes where inclusion particles are coupled through (i) intramembrane hydrodynamics, (ii) traction stresses with the external and trapped solvent fluid, and (iii) intermonolayer slip between the two leaflets of the bilayer. We formulate tractable descriptions of the continuum mechanics of curved fluid bilayers drawing on results from the exterior calculus of differential geometry. We develop a tractable description of the collective hydrodynamic coupling of the inclusion particles on curved manifolds, building on our prior work on immersed boundary approximations~\cite{Atzberger2006,Atzberger2007a,AtzbergerSELM2011,AtzbergerSigurdsson2012}.
We compute the translational and rotational mobilities of inclusion particles. Relative to infinite flat membranes, we show that spherical vesicles exhibit significant differences arising from the curvature and finite domain size.
In Section~\ref{sec:ContMechVesicle} we introduce our continuum mechanics description of the bilayer hydrodynamics expressed in terms of the operators of the exterior calculus of differential geometry. We use exterior calculus to help take a less coordinate-centric approach in our derivations and to obtain more concise expressions that often have a clearer geometric interpretation. We also show how the exterior calculus can be used to generalize many of the techniques used in fluid mechanics to the context of curved surfaces. In Section~\ref{sec:LambSol}, we use Lamb's solution for the fluid flow exterior and interior to a spherical shell to obtain the traction stresses arising from the surrounding solvent fluid and the trapped solvent fluid. In Section~\ref{sec:SphResponses}, we consider the hydrodynamic flow within the lipid bilayer membrane. We use a spherical harmonics representation to derive analytic results for the solutions of the coupled hydrodynamic equations. In Section~\ref{sec:curvatureAndShear}, we discuss some roles played by curvature in hydrodynamic flows within surfaces.
In Section~\ref{sec:particleBilayerCoupling}, we introduce immersed boundary approximations on manifolds to account for the coupling between the lipid flow and inclusion particles. We discuss some particular properties of this type of approximation. We then derive mobility tensors for the translational and rotational motions of inclusion particles within curved membranes.
In Section~\ref{sec:particleMobilities}, we investigate the self mobility and the coupled mobility of inclusion particles when varying (i) vesicle curvature, (ii) membrane viscosity vs solvent viscosity, and (iii) intermonolayer slip. In Section~\ref{sec:manyParticleDynamics}, we consider approaches for the collective dynamics of many coupled inclusion particles within spherical vesicles. We consider the collective mobility associated with an attracting cluster of particles and briefly discuss some of the interesting dynamics that can arise from the collective hydrodynamic coupling. In Appendix~\ref{sec:coord_charts}, we discuss briefly how we have addressed some of the issues that arise for spherical surfaces in practical numerical calculations to obtain our results.
In summary, the work presented here is meant as a starting point to understanding the basic features of the mobility of inclusion particles within curved bilayers. Overall, we expect our approaches introduced here to provide ways to investigate the general collective dynamics of inclusion particles within spherical lipid bilayers relevant in many applications.
\section{Continuum Mechanics of the Vesicle}
\label{sec:ContMechVesicle}
We formulate a continuum mechanics description of (i) the hydrodynamic flow of lipids within the two bilayer leaflets, (ii) intermonolayer slip, and (iii) coupling to the surrounding solvent fluid, see Figure~\ref{fig:hydroSchematic}. We derive a set of conservation laws on manifolds using tensor calculus and results from differential geometry similar to Marsden~\cite{Marsden1994}. We then use identities as in Arroyo and DeSimone~\cite{ArroyoRelaxationDynamics2009} to express our equations in a convenient covariant form that is geometrically invariant. To obtain analytic results for hydrodynamic flows on the curved surface, we use exterior calculus to generalize techniques often employed in fluid mechanics to 2-manifolds. We then use these exterior calculus approaches to perform numerical calculations. While our approaches provide rather general methods for working with hydrodynamics within manifolds, we focus in this paper on the sphere which is relevant to flow within the lipid bilayers of vesicles.
\subsection{Hydrodynamics of Bilayer Leaflets}
\label{sec:HydroBilayers}
We first consider the hydrodynamics within a single bilayer leaflet of the membrane. We treat the membrane as a two-dimensional embedded continuum in the case that the surface velocity $\mb{V} = \mb{v} + v_n \mb{n}$ has zero component in the direction of the surface normal, $v_n = 0$. The conservation of momentum and mass of such a deforming two-dimensional continuum is given by~\cite{Marsden1994}
\begin{eqnarray}
\label{equ_cont_mech_gen}
\rho\left( \partial_t \mb{v} + \mb{v}\cdot \nabla \mb{v}\right)
& = & {\mbox{div}} (\bs{\sigma}) + \mb{b} \\
\partial_t \rho + \rho {\mbox{div}} (\mb{v}) & = & 0.
\end{eqnarray}
The $\nabla$ denotes the covariant derivative, which when expressed in terms of tensor components is $\left(\nabla \mb{v}\right)_{b}^a = v^a_{|b} = \partial_{\mb{x}^b} v^a + \Gamma_{bc}^a v^c,$ where $\Gamma_{bc}^a$ denotes the Christoffel symbols~\cite{Pressley2001, Abraham1988}. The notation $ {\mbox{div}} (\cdot)$ and $ {\mbox{grad}} (\cdot)$ denotes the corresponding covariant operations, with divergence $ {\mbox{div}} (\mb{w}) = w_{|a}^a$ and gradient $ {\mbox{grad}} (\mb{w})^a_b = w^a_{|b}$. The $\rho$ denotes the mass density per unit surface area, the $\mb{v}$ the velocity components tangent to the surface, $\mb{b}$ the body force per unit surface area, and $\bs{\sigma}$ the surface stress tensor. We remark that while these equations look superficially similar to the Euclidean case owing to the convenient covariant derivative notation,
as we shall discuss, the geometry introduces important differences and additional terms.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{fig_vesicleHydroSchematic.png}
\caption{Vesicle Hydrodynamics. We take into account the hydrodynamics within each leaflet of the bilayer, the intermonolayer slip between leaflets, and the traction stresses for both the solvent fluid trapped interior to the vesicle and the solvent fluid exterior to the vesicle. We use a covariant formulation of the continuum mechanics.
\label{fig:hydroSchematic}}
\end{figure}
The constitutive law for an incompressible Newtonian fluid can be expressed in terms of the rate of deformation tensor of the surface
\begin{eqnarray}
\mb{D} = \nabla \mb{v} + \nabla \mb{v}^T,
\end{eqnarray}
which in terms of tensor components is $D^{ab} = v^{a|b} + v^{b|a}$.
The Newtonian stress is given by
\begin{eqnarray}
\bs{\sigma}^{\sharp} = \mu_m\mb{D}^{\sharp} + \mu_m' {\mbox{div}} (\mb{v})\mathcal{I}^{\sharp} - p \mathcal{I}^{\sharp}.
\end{eqnarray}
The $\mu_m$ and $\mu_m'$ are the first and second viscosities of the membrane. The $\mathcal{I}$ is the (1,1)-identity tensor with $(\mathcal{I})_b^a = \delta_b^a$, where $\delta_b^a$ denotes the Kronecker delta.
This has $\left(\mathcal{I}^{\sharp}\right)^{ab} = g^{ab} = \left(\mb{g}^{\sharp}\right)^{ab}$, where $\mb{g}$ is the metric tensor for the surface~\cite{Marsden1994}.
We have adopted the notation for raising and lowering indices corresponding to the isomorphisms between the tangent and co-tangent spaces of the surface given by
\begin{eqnarray}
{\flat} & : & v^j \partial_{\mb{x}^j} \rightarrow
v_i d\mb{x}^i \\
{\sharp} & : & v_i d\mb{x}^i \rightarrow v^j \partial_{\mb{x}^j}.
\end{eqnarray}
The $\partial_{\mb{x}^j}$ denotes the coordinate associated basis vectors of the tangent space and $d{\mb{x}^j}$ the one-form coordinate associated basis of the co-tangent space. The isomorphisms can also be expressed directly in terms of the components as $v_i = g_{ij} v^j$ and $v^i = g^{ij} v_j$, where we denote the metric tensor as $g_{ij}$ and its inverse as $g^{ij}$ ~\cite{Abraham1988}. This extends naturally to tensors.
Using these conventions, the steady-state Stokes equations corresponding to equation~\ref{equ_cont_mech_gen} can be expressed in tensor components as
\begin{eqnarray}
\mu_m D^{ab}_{|b} - g^{ab}p_{|b} + {b}^a & = & 0 \\
v^a_{|a} & = & 0.
\end{eqnarray}
We can express this in a more geometrically transparent manner by considering further the divergence of the rate of deformation tensor
\begin{eqnarray}
D^{ab}_{|b} & = & g^{ac}g^{bd} v_{c|d|b} + g^{ac}g^{bd} v_{d|c|b}.
\end{eqnarray}
We have that
\begin{eqnarray}
g^{ac}g^{bd} v_{c|d|b} = ({\Delta}^R\mb{v})^a = ({\Delta}^H\mb{v})^a + Kv^a,
\end{eqnarray}
where ${\Delta}^R \mb{v} := \mbox{rough-laplacian}( \mb{v}) = {\mbox{div}} \left( {\mbox{grad}} \left(\mb{v}\right)\right)$, the second equality is the Weitzenb\"ock identity on a surface relating ${\Delta}^R$ to the Hodge-de~Rham Laplacian ${\Delta}^H$ introduced below, and
$K$ is the Gaussian curvature of the surface.
We also have that
\begin{eqnarray}
g^{ac}g^{bd} v_{d|c|b} & = & \left[{\mbox{grad}} \left( {\mbox{div}} \left(\mb{v}\right)\right)\right]^a + Kv^a \\
\nonumber
& = & Kv^a.
\end{eqnarray}
This follows since $ {\mbox{div}} \left(\mb{v}\right) = 0$.
It is convenient to express the equations and differential operators in terms of the exterior calculus as follows. Let $\mb{d}$ denote the usual exterior derivative for a $k$-form $\alpha$ as
\begin{eqnarray}
\mb{d} \alpha =
\frac{1}{k!}
\frac{\partial \alpha_{i_1 \ldots i_k}}{\partial x^j} \mb{d}\mb{x}^j \wedge \mb{d}\mb{x}^{i_1} \wedge \cdots \wedge \mb{d}\mb{x}^{i_k}.
\end{eqnarray}
The $\bs{\delta}$ denotes the co-differential operator given by
$\bs{\delta} = -\star \mb{d} \star$ (with the sign convention on a surface for which $-\bs{\delta}\mb{d}$ is the Laplace-Beltrami operator with non-positive spectrum), where for a $k$-form $\alpha = \left(1/k!\right)\alpha_{i_1 \ldots i_k}\mb{d}\mb{x}^{i_1} \wedge \cdots \wedge \mb{d}\mb{x}^{i_{k}}$ the $\star$ denotes the Hodge star given by
\begin{eqnarray}
\star \alpha =
\frac{\sqrt{|g|}}{(n-k)!k!}
\alpha^{i_1 \ldots i_k}\epsilon_{i_1\ldots i_k j_1 \ldots j_{n-k}} \cdot \\
\hspace{1cm} \cdot \mb{d}\mb{x}^{j_1} \wedge \cdots \wedge \mb{d}\mb{x}^{j_{n-k}},
\end{eqnarray}
where
$\alpha^{i_1 \ldots i_k} = g^{i_1 \ell_1}\cdots g^{i_k \ell_k} \alpha_{\ell_1 \ldots \ell_k}$,
$\sqrt{|g|}$ is the square-root of the determinant of the metric tensor, and
$\epsilon_{i_1\ldots i_k j_1 \ldots j_{n-k}}$ denotes the
Levi-Civita tensor~\cite{Marsden1994}.
The generalization of the common differential operators of vector calculus to manifolds can be expressed in terms of exterior calculus as
\begin{eqnarray}
{\mbox{grad}} (f) & = & \lbrack \mb{d}f\rbrack^{\sharp} \\
{\mbox{div}} (\mb{F}) & = & \star \mb{d}\star \mb{F}^\flat = -\bs{\delta} \mb{F}^\flat \\
{\mbox{curl}} (\mb{F}) & = & \left\lbrack\star (\mb{d}\mb{F}^\flat) \right\rbrack^{\sharp}.
\end{eqnarray}
The $f$ is a smooth scalar function and the $\mb{F}$ is a smooth vector field.
There are different types of Laplacians that can be defined for manifolds
\begin{eqnarray}
\Delta^H (\mb{F})
& = & -\left\lbrack \left(\bs{\delta}\mb{d} + \mb{d}\bs{\delta} \right) \mb{F}^\flat\right\rbrack^{\sharp} \\
\Delta^S (\mb{F})
& = & -\left\lbrack \bs{\delta}\mb{d} \mb{F}^\flat\right\rbrack^{\sharp} \\
\Delta^H f & = & \Delta^R f = (\star \mb{d}\star )\mb{d} f = -\bs{\delta} \mb{d} f.
\end{eqnarray}
The $\Delta^R = {\mbox{div}} ( {\mbox{grad}} (\cdot))$ denotes the rough-Laplacian given by the usual divergence of the gradient. For vector fields, $\Delta^H (\mb{F})$ denotes the Hodge-de Rham Laplacian, which has similarities to taking the curl of the curl~\cite{Abraham1988}. In fact, in the case that $ {\mbox{div}} (\mb{F}) = -\bs{\delta} \mb{F}^{\flat} = 0$ we have $\Delta^H (\mb{F}) = \Delta^S (\mb{F}) = -\left[\bs{\delta} \mb{d} \mb{F}^{\flat}\right]^{\sharp}$.
Using these conventions, we have
\begin{eqnarray}
\left[{\mbox{div}} (\mb{D})\right]^{\flat} & = &
-\bs{\delta} \mb{d} \mb{v}^{\flat} + 2K\mb{v}^{\flat}
\end{eqnarray}
where we used that $ {\mbox{div}} (\mb{v}) = -\bs{\delta} \mb{v}^{\flat} = 0$.
This allows for the steady-state Stokes problem on the surface to be expressed using exterior calculus in the convenient form
\begin{eqnarray}
\label{equ_Stokes_geometric}
\left\{
\begin{array}{llll}
\mu_m \left(-\bs{\delta} \mb{d} \mb{v}^{\flat} + 2 K \mb{v}^{\flat} \right)
- \mb{d}p + \mb{b}^{\flat}
& = & 0 \\
-\bs{\delta} \mb{v}^{\flat} & = & 0.
\end{array}
\right.
\end{eqnarray}
As we shall discuss, this form provides a very convenient approach for analytic and numerical calculations.
\subsection{Coupling to External Solvent Fluid}
\label{sec:LambSol}
The solvent fluid surrounding the lipid bilayer membrane also exerts a traction stress on the inner and outer leaflets. We account for this using the Stokes equations
\begin{eqnarray}
\label{equ_stokes_1}
\mu\Delta \mb{u} - \nabla p & = & 0, \hspace{0.3cm} \mbox{$\mb{x} \in \Omega$} \\
\label{equ_stokes_2}
\nabla \cdot \mb{u} & = & 0, \hspace{0.3cm} \mbox{$\mb{x} \in \Omega$} \\
\label{equ_stokes_3}
\mb{u} & = & \mb{v}, \hspace{0.3cm} \mbox{$\mb{x} \in \partial \Omega$} \\
\label{equ_stokes_4}
\mb{u}_{\infty} & = & 0.
\end{eqnarray}
The $\Omega = \Omega^{\pm}$ denotes either the outside region $\Omega^{+}$ of fluid surrounding the vesicle or the domain
$\Omega^{-}$ of fluid trapped inside the vesicle.
The solution to the Stokes equations and traction stress can be conveniently expressed in terms of harmonic functions. This is most immediately seen for the pressure, which when taking the divergence of equation~\ref{equ_stokes_1}, yields
\begin{eqnarray}
\Delta p & = & 0.
\end{eqnarray}
For the spherical geometry, the pressure can be expanded as
\begin{eqnarray}
p = \sum_{n = -\infty}^{\infty} p_n
\end{eqnarray}
where $p_n$ is the \textit{solid spherical harmonic} of order $n$
\begin{eqnarray}
\label{equ_q_ssph}
p_n(r,\theta,\phi) &=&
r^n \sum_{m = -|n|}^{|n|} C_m^{n} Y_m^n(\theta,\phi)
\end{eqnarray}
where
\begin{eqnarray}
\label{equ_q_ssph_2}
Y_m^n(\theta,\phi) &=& e^{im\phi} P_n^{m}(\cos(\theta)).
\end{eqnarray}
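As a check of these conventions (our sketch; the chosen degree, order, and grid resolution are arbitrary assumptions), the unnormalized harmonics can be evaluated with \texttt{scipy} and the standard normalization integral verified by quadrature.
\begin{verbatim}
# Sketch: verify int |Y_m^n|^2 dOmega = 4 pi/(2n+1) (n+m)!/(n-m)!
import numpy as np
from math import factorial
from scipy.special import lpmv

def Y(m, n, theta, phi):
    # unnormalized convention above: exp(i m phi) P_n^m(cos(theta))
    return np.exp(1j*m*phi)*lpmv(m, n, np.cos(theta))

n, m = 3, 2
th = np.linspace(0.0, np.pi, 400)
ph = np.linspace(0.0, 2.0*np.pi, 400)
TH, PH = np.meshgrid(th, ph, indexing='ij')
val = np.abs(Y(m, n, TH, PH))**2*np.sin(TH)
quad = np.trapz(np.trapz(val, ph, axis=1), th)
exact = 4.0*np.pi/(2*n + 1)*factorial(n + m)/factorial(n - m)
print(quad, exact)   # the two agree to quadrature accuracy
\end{verbatim}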
For the solvent fluid velocity $\mb{u}^{-}$ in the domain $\Omega^{-}$ interior to the vesicle,
Lamb showed that the solution can be expressed as
~\cite{HappelBrenner1983,Lamb1895}
\begin{eqnarray}
\mb{u}^{-} = \sum_{n = 1}^{\infty} \mb{u}^{-}_n
\end{eqnarray}
where
\begin{eqnarray}
\mb{u}^{-}_n & = & \nabla \times \left(\mb{r}\chi_n\right) + \nabla \Phi_n \\
& + & \frac{(n + 3)}{ 2\mu (n + 1)(2n + 3)} r^2\nabla p_n \\
& - & \frac{n}{\mu(n + 1)(2n + 3)}\mb{r}p_n.
\end{eqnarray}
We shall refer to this as \textit{Lamb's solution}.
The surface flow $\mb{v}$ determines the solid spherical harmonic functions $\chi_n, \Phi_n, p_n$ by the following relations
\begin{eqnarray}
p_n & = & \frac{\mu(2n + 3)}{n}\frac{1}{R}
\left(\frac{r}{R}\right)^n \cdot \\
&\cdot &
\nonumber
\left[
Y_n - (n - 1)X_n
\right] \\
\Phi_n & = & \frac{1}{2n} R \left(\frac{r}{R}\right)^n
\left[
(n + 1)X_n - Y_n
\right] \\
\chi_n & = & \frac{1}{n(n + 1)}
\left(\frac{r}{R}\right)^n
Z_n.
\end{eqnarray}
The $R$ is the radius of the spherical surface.
For the surface fluid velocity $\mb{V} = \mb{v} + v_n \mb{n}$ of the membrane, the $X_n, Y_n, Z_n$ are combined surface spherical harmonics of degree $n$ obtained by expanding the following scalar fields on the surface
\begin{eqnarray}
\label{equ_X_n}
\frac{\mb{r}}{r}\cdot \mb{V} & = & \sum_{n = 0}^{\infty} X_n \\
\label{equ_Y_n}
r \nabla \cdot \mb{V} & = & \sum_{n = 0}^{\infty} Y_n \\
\label{equ_Z_n}
\mb{r}\cdot \nabla \times \mb{V} & = & \sum_{n = 0}^{\infty} Z_n.
\end{eqnarray}
For the solvent fluid velocity $\mb{u}^{+}$ in the domain $\Omega^{+}$ exterior to the vesicle,
Lamb's solution is
~\cite{HappelBrenner1983,Lamb1895}
\begin{eqnarray}
\mb{u}^{+} = \sum_{n = 0}^{\infty} \mb{u}^{+}_n
\end{eqnarray}
where
\begin{eqnarray}
\mb{u}^{+}_n & = &
\nabla \times (\mb{r} \chi_{-(n+1)}) + \nabla \Phi_{-(n+1)} \\
& - & \frac{(n-2)}{\mu 2n (2n - 1)} r^2 \nabla p_{-(n+1)} \\
& + & \frac{(n + 1)}{\mu n (2n - 1)}\mb{r} p_{-(n+1)}.
\end{eqnarray}
The surface fluid velocity $\mb{V} = \mb{v} + v_n \mb{n}$ of the membrane determines the harmonic functions
$\chi_{-(n+1)}, \Phi_{-(n+1)}, p_{-(n+1)}$ giving
\begin{eqnarray}
p_{-(n+1)} & = & \frac{\mu(2n - 1)}{n + 1}\frac{1}{R}
\left(\frac{R}{r}\right)^{n +1} \cdot \\
\nonumber
&& \cdot \left[
(n + 2)X_n + Y_{n}
\right] \\
\Phi_{-(n+1)} & = & \frac{1}{2(n + 1)} R \left(\frac{R}{r}\right)^{n + 1} \cdot \\
\nonumber
&& \cdot \left[
nX_n - Y_n
\right] \\
\chi_{-(n+1)} & = & \frac{1}{n(n + 1)}
\left(\frac{R}{r}\right)^{n+1}
Z_{n}.
\end{eqnarray}
In the special case of $v_n = 0$ and $ {\mbox{div}} (\mb{v}) = 0$ we have that $X_n = Y_n = 0$ and that only the $Z_n$ term is non-trivial. The Lamb's solution simplifies to
\begin{eqnarray}
\label{equ_lambs_sol}
\mb{u}^{+}_n & = & \nabla \times \left(\mb{r}\chi_{-(n+1)}\right) \\
\mb{u}^{-}_n & = & \nabla \times \left(\mb{r}\chi_n\right).
\end{eqnarray}
In this case, the traction stress of the external fluid on the lipid bilayer membrane is
\begin{eqnarray}
\label{equ_tract_stress_tplus}
\mb{t}^{+} & = & \bs{\sigma}^{+}\cdot \mb{n}^{+} \\
\nonumber
& = & \left[\mu_{+}
\left(
\nabla \mb{u}^{+} + \nabla \mb{u}^{+T}
\right)
- p^{+}\mathcal{I} \right]\cdot \mb{n}^{+} \\
\nonumber
& = &
\mu_{+}
\frac{\partial \mb{u}^{+}}{\partial r}
+
\mu_{+}
\nabla
\left(\mb{u}^{+}\cdot \mb{n}^{+}\right) \\
\label{equ_tract_stress_tminus}
\mb{t}^{-} & = & \bs{\sigma}^{-}\cdot \mb{n}^{-} \\
\nonumber
& = & \left[\mu_{-}
\left(
\nabla \mb{u}^{-} + \nabla \mb{u}^{-T}
\right)
- p^{-}\mathcal{I} \right]\cdot \mb{n}^{-} \\
\nonumber
& = &
-\mu_{-}
\frac{\partial \mb{u}^{-}}{\partial r}
+
\mu_{-}
\nabla
\left(\mb{u}^{-}\cdot \mb{n}^{-}\right).
\end{eqnarray}
The $\mb{n}^{\pm}$ denotes the unit normal on the surface $\partial \Omega^{\pm}$ in the direction pointing into the domain. In these expressions, we emphasize that $\mb{n}^{\pm}$ is to be held fixed during differentiation.
From equation~\ref{equ_tract_stress_tplus} and~\ref{equ_tract_stress_tminus} and the properties of solid spherical harmonics, we have that the traction stress on the membrane leaflets can be expressed as
\begin{eqnarray}
\label{equ_tract_stress_u_pm}
\mb{t}^{+} & = & \sum_{n = 0}^{\infty} -\frac{\mu_{+}(n + 2)}{R^{+}} \mb{u}_n^{+} \\
\nonumber
\mb{t}^{-} & = & \sum_{n = 1}^{\infty} -\frac{\mu_{-}(n - 1)}{R^{-}} \mb{u}_n^{-}.
\end{eqnarray}
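The per-mode coefficients in equation~\ref{equ_tract_stress_u_pm} can be tabulated directly. In the short sketch below (radii and viscosities are illustrative assumptions), the exterior fluid damps every mode, while the interior coefficient vanishes at $n = 1$ since the trapped fluid then co-rotates rigidly and exerts no stress.
\begin{verbatim}
# Sketch: traction per unit modal velocity, t_n = c_n u_n.
import numpy as np
R_plus, R_minus = 1.0, 0.96          # leaflet radii (assumed)
mu_plus = mu_minus = 1e-3            # solvent viscosities [Pa s] (assumed)
n = np.arange(1, 8)
c_out = -mu_plus*(n + 2)/R_plus      # exterior coefficient
c_in  = -mu_minus*(n - 1)/R_minus    # interior coefficient (zero at n = 1)
for row in zip(n, c_out, c_in):
    print("n=%d  exterior=%+.2e  interior=%+.2e" % row)
\end{verbatim}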
\subsection{Intermonolayer Slip}
We account for the two bilayer leaflets of the membrane by considering two surface velocity fields $\mb{v}_+$ and $\mb{v}_-$. We model the intermonolayer slip between these two leaflets by the traction term proportional to the difference in lipid velocity
\begin{eqnarray}
\mb{s}^{\pm} &=& \pm \gamma\left(\mb{v}_- - \mb{v}_+ \right).
\end{eqnarray}
\subsection{Full Membrane Hydrodynamics}
\label{sec:fullMemHydro}
Using an approach similar to Section~\ref{sec:HydroBilayers}, we obtain for a two-leaflet membrane the following hydrodynamic equations.
\begin{align}
\label{equ_full_hydro_first}
\\
\nonumber
\left\{
\begin{array}{ll}
\mu_m
\left[
-\bs{\delta} \mb{d} \mb{v}_{+}^{\flat}
+
2K_{+} \mb{v}_{+}^{\flat}
\right]
+
\mb{t}_{+}^{\flat}
-\gamma
\left(
\mb{v}_{+}^{\flat} - \mb{v}_{-}^{\flat}
\right) \\
\hspace{0.5cm}
=
\mb{d}p_{+} - \mb{b}_{+}^{\flat}
=
-\mb{c}_{+}^{\flat},
\hspace{0.25cm}
\mb{x} \in \mathcal{M}_{+} \\
\bs{\delta} \mb{v}_{+}^{\flat} = 0,
\hspace{2.64cm}
\mb{x} \in \mathcal{M}_{+}, \\
\nonumber
\\
\nonumber
\mu_m
\left[
-\bs{\delta}\mb{d} \mb{v}_{-}^{\flat}
+
2K_{-} \mb{v}_{-}^{\flat}
\right]
+
\mb{t}_{-}^{\flat}
-\gamma
\left(
\mb{v}_{-}^{\flat} - \mb{v}_{+}^{\flat}
\right) \\
\hspace{0.5cm}
=
\mb{d}p_{-} - \mb{b}_{-}^{\flat}
=
-\mb{c}_{-}^{\flat},
\hspace{0.25cm}
\mb{x} \in \mathcal{M}_{-} \\
\bs{\delta} \mb{v}_{-}^{\flat} = 0,
\hspace{2.64cm}
\mb{x} \in \mathcal{M}_{-}.
\end{array}
\right.
\label{equ_full_hydro_last}
\end{align}
The $\mathcal{M}_{\pm}$ denotes the two surfaces representing the inner and outer bilayer leaflets. These equations take into account the internal membrane hydrodynamics of each leaflet of viscosity $\mu_m$, the intermonolayer slip $\gamma$, and the traction stresses with the bulk solvent fluids of viscosity $\mu^{\pm}$ trapped within and external to the vesicle. To obtain the coupling in the collective dynamics of inclusions embedded in such bilayer membranes, we must solve these equations for the hydrodynamic flow.
\subsection{Membrane Hydrodynamics and Modal Responses}
\label{sec:SphResponses}
We use exterior calculus methods to derive solutions to the membrane hydrodynamic equation~\ref{equ_full_hydro_first}. For analytic and numerical calculations of flow within surfaces, the exterior calculus provides a number of advantages over more coordinate-centric approaches such as tensor calculus~\cite{Abraham1988}. As already seen in our expressions of the hydrodynamic equations, there are fewer explicit references to the metric tensor; instead, more geometrically intrinsic operations appear, such as the exterior derivative and the Hodge star~\cite{Abraham1988}. In analytic calculations, we take advantage of this to develop succinct methods for curved manifolds that generalize many of the vector calculus based techniques often employed in fluid mechanics. From the exterior calculus formulation of the Stokes equations we can readily show that the incompressible surface flow can be expressed in terms of a scalar velocity potential $\Phi$ as
\begin{eqnarray}
\label{equ_gen_curl_2}
\mb{v}^{\flat} = -\star \mb{d} \Phi.
\end{eqnarray}
This provides a generalization for the surface geometry of the usual velocity potential used in fluid mechanics. The equation~\ref{equ_gen_curl_2} generalizes to surfaces the operation in Euclidean space of taking the curl to obtain an incompressible flow~\cite{HappelBrenner1983,Acheson1990}. The exterior calculus allows us to readily verify that the generated velocity field on the surface is incompressible
\begin{eqnarray}
-\bs{\delta} \mb{v}^{\flat} = (\star \mb{d} \star)(-\star \mb{d} \Phi) = \star \mb{d}^2 \Phi = 0.
\end{eqnarray}
This follows since $\star\star = -1$ on a surface (2-manifold) and $\mb{d}^2 = 0$ holds~\cite{Abraham1988}.
\subsubsection{Modal Response for Intramembrane Hydrodynamics}
\label{equ_modal_response_intramembrane}
To obtain equations for $\Phi$, we use the exterior calculus to determine the eigenfunctions of the operator in the Stokes equations. This can then be used to rigorously derive expressions for the modal responses of the hydrodynamics when acted upon by an applied force in a manner similar to~\cite{LevineHenleHydroCurvedMembranes2010}. For this purpose, we consider the eigenproblem
\begin{eqnarray}
\label{equ_eigenproblem}
\mu\left[
-\bs{\delta}\mb{d} \mb{v}_s^{\flat}
+ 2K\mb{v}_s^{\flat}
\right]
& = & \lambda_s \mb{v}_s^{\flat}.
\end{eqnarray}
Let $\Phi_s$ be a function such that
$\mb{v}_s^{\flat} = -\star \mb{d} {\Phi}_s$. The operator
$-\star \mb{d}$ commutes with $-\bs{\delta}\mb{d}$ since
$-\bs{\delta}\mb{d} \mb{v}_s^{\flat} = -\bs{\delta}\mb{d}(-\star \mb{d}) {\Phi}_s
= -\star \mb{d} \star \mb{d} \star \mb{d} \Phi_s
= -\star \mb{d}\left(-\bs{\delta}\mb{d} \right){\Phi}_s$. The eigenproblem becomes
\begin{align}
(-\star \mb{d})
&\mu\left[
-\bs{\delta}\mb{d} \Phi_s
+ 2K\Phi_s
\right] \\
\nonumber
&=
(-\star \mb{d}) (\lambda_s \Phi_s).
\end{align}
This can be satisfied if $\Phi_s$ is a solution of
\begin{align}
\mu\left[
-\bs{\delta}\mb{d} \Phi_s
+ 2K\Phi_s
\right]
=
\lambda_s \Phi_s.
\end{align}
This can also be expressed as
\begin{eqnarray}
\label{equ_Laplace_Beltrami}
-\bs{\delta}\mb{d} \Phi_s
& = &
\gamma_s \Phi_s,
\end{eqnarray}
where
$\gamma_s =
\left(
{\lambda_s}/{\mu}
- 2K
\right)
$.
For scalar fields, the operator $-\bs{\delta}\mb{d}$ is the Laplace-Beltrami operator of the surface. In the special case of the sphere, the solutions are surface spherical harmonics of the form
\begin{eqnarray}
\Phi_s = Y_m^\ell(\theta,\phi) = e^{im\phi} P_{\ell}^{m}(\cos(\theta))
\end{eqnarray}
where $s = (\ell,m)$ subject to the restriction $|m| \leq \ell$. The eigenvalues are $\gamma_s = -\ell (\ell + 1)/R^2$
and $\lambda_s =\mu\left(-\ell (\ell + 1)/R^2 + 2K \right)$.
We can express the solution of the Stokes equations~\ref{equ_Stokes_geometric} by expanding the velocity field as
\begin{eqnarray}
\mb{v}^{\flat} =
\sum_s a_s \mb{v}_s^{\flat}
= -\star \mb{d}
\sum_s a_s \Phi_s.
\end{eqnarray}
We can also represent the solution with $\Phi = \sum_s a_s \Phi_s$. In a similar manner, the applied surface force can be expanded with coefficients $c_s$ as $\mb{c}^{\flat} = \mb{b}^{\flat} - \mb{d}p = -\star \mb{d}
\sum_s c_s \Phi_s$. The problem now becomes to find the coefficients $a_s$ for the flow when given an applied force with coefficients $c_s$.
As a demonstration of the utility of this exterior calculus approach, consider the
Stokes equations~\ref{equ_Stokes_geometric} for the surface flow on a sphere associated with a single leaflet, without yet the intermonolayer slip or traction stress. We treat the Stokes problem in equation~\ref{equ_Stokes_geometric} by taking $-\star \mb{d}$ of both sides to eliminate the pressure term. This yields
\begin{eqnarray}
-\star \mb{d}\mu\left[
-\bs{\delta}\mb{d} \mb{v}^{\flat}
+ 2K\mb{v}^{\flat}
\right]
+\star \mb{d}\mb{d}p
=
\star \mb{d} \mb{b}^{\flat}.
\end{eqnarray}
Using the expansion for $\mb{v}^{\flat}$ in terms of $\mb{v}_s^{\flat} = -\star \mb{d} \Phi_s$ and that $\Phi_s$ was chosen to solve the eigenproblem in equation~\ref{equ_eigenproblem}, we have
\begin{align}
-\star \mb{d}&\mu\left[
-\bs{\delta}\mb{d} \mb{v}^{\flat}
+ 2K\mb{v}^{\flat}
\right]
+ 0 \\
\nonumber
& =
-\star \mb{d} \sum_s a_s \lambda_s ( -\star\mb{d} \Phi_s)
=
\sum_s a_s \lambda_s (-\bs{\delta}\mb{d} \Phi_s) \\
\nonumber
&=
\sum_s a_s \lambda_s \gamma_s \Phi_s,
\\
\nonumber
\star \mb{d} \mb{c}^{\flat}
&=
\star \mb{d} \left(-\star\mb{d}\right) \sum_s c_s \Phi_s
=
\sum_s c_s \bs{\delta}\mb{d} \Phi_s \\
\nonumber
&=
-\sum_s c_s \gamma_s \Phi_s.
\end{align}
We remark that we use $\mb{c}^{\flat}$ as opposed to $\mb{b}^{\flat}$ throughout our calculations to emphasize that only the solenoidal component of the applied force affects the flow. This is further exhibited in the identity $\star \mb{d} \mb{c}^{\flat} = \star \mb{d} \mb{b}^{\flat}$.
For mode $s$, equating the two expansions gives $\lambda_s a_s = -c_s$ with $K = 1/R^2$, so that
\begin{eqnarray}
a_s = \left[ \frac{\mu \left(\ell (\ell + 1) - 2\right)}{R^2} \right]^{-1} c_s.
\end{eqnarray}
This applies for $\ell \geq 2$, where $\ell(\ell+1) - 2 > 0$ so that the modal response aligns with the applied force. For the Stokes flow on the membrane surface this gives the modal response to an applied force.
We have assumed for this solution that the applied force has net-zero torque. The mode $\ell = 1$ does not introduce an internal shear stress within the membrane since this mode corresponds to a rigid-body motion of the spherical shell. Since we have not yet included the intermonolayer slip or the external fluid traction stress there would be no stresses to balance a force having non-zero net torque.
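As a minimal numerical sketch of this single-leaflet response (unit parameters and unit force coefficients $c_s = 1$ are assumed), the modal amplitudes can be evaluated as follows; the $\sim 1/\ell^2$ decay shows that fine-scale forcing is strongly damped.
\begin{verbatim}
# Sketch: modal response a_s = c_s/[mu (l(l+1) - 2)/R^2] for l >= 2.
import numpy as np
mu_m, R = 1.0, 1.0                      # assumed membrane viscosity, radius
ell = np.arange(2, 10)
lam = mu_m*(2.0 - ell*(ell + 1))/R**2   # eigenvalues lambda_s (negative)
a = -1.0/lam                            # response for c_s = 1
print(np.round(a, 4))                   # decays ~ 1/ell^2
\end{verbatim}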
We remark that the membrane velocity field is obtained from these calculations by
\begin{eqnarray}
\label{equ_vel_rep}
\mb{v} & = & \left(\mb{v}^{\flat}\right)^{\sharp} = \sum_s a_s \left(-\star \mb{d} \Phi_s\right)^{\sharp} \\
\nonumber
& = & \sum_s a_s
\left[ \frac{\epsilon_{i\ell}}{\sqrt{|g|}} \frac{\partial \Phi_s}{\partial x^{\ell}} \right]\partial_{\mb{x}^{i}}.
\end{eqnarray}
The $|g| = \mbox{det}(\mb{g})$ is the determinant of the metric tensor and $\epsilon_{i\ell}$ is the Levi-Civita tensor (slight abuse of notation). The $x^\ell$ denotes the coordinates.
We remark that if the standard spherical coordinates are used then $x^1 = \theta$ and $x^2 = \phi$ with the polar angle $\theta$ and azimuthal angle $\phi$. However, this has singularities at the north and south poles of the sphere. To use the velocity representation~\ref{equ_vel_rep} robustly for numerical calculations at each location on the sphere, we need to use different coordinate charts depending on location. The velocity field can be computed robustly by using either the standard spherical coordinates or a rotated set of spherical coordinates whose polar singularities lie instead at the east and west poles, for details see our discussion in Appendix~\ref{sec:coord_charts}.
\subsubsection{Modal Response when Coupled to External Solvent Fluid and with Intermonolayer Slip}
Using this approach, we can also incorporate, for a two-leaflet lipid bilayer membrane, the additional contributions of the traction stress with the external solvent fluid and the intermonolayer slip. We consider the case when the outer bilayer leaflet of the membrane vesicle has radius $R_{+}$ and the inner bilayer leaflet has radius $R_{-}$. This requires us to derive the modal response to the induced bulk external solvent flow. The solvent flow satisfies the Stokes equations~\ref{equ_stokes_1}--\ref{equ_stokes_4} with no-slip with respect to the flow within the membrane surface. These Stokes equations must be solved twice, once in the domain $\Omega^{+}$ exterior to $\mathcal{M}^{+}$ and once in the domain $\Omega^{-}$ interior to $\mathcal{M}^{-}$. We obtain a representation for these solutions using Lamb's solution~\cite{HappelBrenner1983}, see Section~\ref{sec:LambSol}.
We represent the fluid velocity $\mb{v}_{\pm}$ within each leaflet of the membrane using the velocity potential $\Phi^{\pm}$. As in Section~\ref{equ_modal_response_intramembrane}, we expand the velocity potential in spherical harmonics $\Phi_s$ as $\Phi^{\pm} = \sum_s a_s^{\pm} \Phi_s$. This allows us to express the membrane velocity as
\begin{eqnarray}
\mb{v}_{\pm}^{\flat} =
-\star \mb{d}
\sum_s a_s^{\pm} \Phi_s.
\end{eqnarray}
From this representation and equation~\ref{equ_tract_stress_u_pm}, we can express the traction stress from the external solvent fluid on the membrane leaflet as
\begin{eqnarray}
\mb{t}_{+}^{\flat} & = & \sum_{\ell = 1}^{\infty} -\frac{\mu_{+}(\ell + 2)}{R_{+}}
\left(-\star\mb{d}\, \tilde{\Phi}_\ell^{+}\right)\\
\nonumber
\mb{t}_{-}^{\flat} & = & \sum_{\ell = 1}^{\infty} -\frac{\mu_{-}(\ell - 1)}{R_{-}}
\left(-\star\mb{d}\, \tilde{\Phi}_\ell^{-}\right).
\end{eqnarray}
The $\tilde{\Phi}_\ell^{\pm}$ denotes the linear combination of modes of degree $\ell$. In particular, $\tilde{\Phi}_\ell^{\pm} = \sum_{s', \ell' = \ell} a_{s'}^{\pm} \Phi_{s'}$ where $s' = (m',\ell')$.
By applying $-\star \mb{d}$ we have
\begin{eqnarray}
\label{equ_star_d_t}
-\star\mb{d}\, \mb{t}_{+}^{\flat} & = & -\frac{\mu_{+}}{R_{+}}
\sum_{s} (\ell + 2)
a_s^{+} \gamma_s^{+} \Phi_s^{+} \\
\nonumber
-\star\mb{d}\, \mb{t}_{-}^{\flat} & = & -\frac{\mu_{-}}{R_{-}}
\sum_{s} (\ell - 1)
a_s^{-} \gamma_s^{-} \Phi_s^{-}.
\end{eqnarray}
We use that $-\star\mb{d}\left(-\star\mb{d}\right) = -\bs{\delta}\mb{d}$ is the Laplace-Beltrami operator and that $-\bs{\delta}\mb{d} \Phi_s^{\pm} = \gamma_s^{\pm} \Phi_s^{\pm}$, see equation~\ref{equ_eigenproblem} and equation~\ref{equ_Laplace_Beltrami}.
Now we apply the operator $-\star \mb{d}$ to equation~\ref{equ_full_hydro_first}. By using equation~\ref{equ_star_d_t} and equation~\ref{equ_eigenproblem}, we obtain for the coefficients $a_s^{\pm}$ of the velocity fields of the leaflets
\begin{eqnarray}
\lambda_s^{+} \gamma_s^{+} a_{s}^{+}
& - &\frac{\mu_{+}}{R_{+}}(\ell + 2) \gamma_s^{+} a_{s}^{+} \\
\nonumber
& - & \gamma \left(\gamma_s^{+} a_{s}^{+} - \gamma_s^{+} a_{s}^{-} \right)
= -\gamma_s^{+} c_{s}^{+} \\
\lambda_s^{-} \gamma_s^{-} a_{s}^{-}
& - &\frac{\mu_{-}}{R_{-}}(\ell - 1) \gamma_s^{-} a_{s}^{-} \\
\nonumber
& - & \gamma \left(\gamma_s^{-} a_{s}^{-} - \gamma_s^{-} a_{s}^{+}
\right) = -\gamma_s^{-} c_{s}^{-}.
\end{eqnarray}
The solution coefficients for $\mb{v}_{+}^{\flat}$ and $\mb{v}_{-}^{\flat}$ for the full two-leaflet membrane hydrodynamics in equation~\ref{equ_full_hydro_first} can be expressed as
\begin{eqnarray}
\label{equ_Stokes_full_1}
\left[
\begin{array}{l}
a^{+}_s \\
a^{-}_s \\
\end{array}
\right]
& = & \mathcal{A}_s^{-1}
\left[
\begin{array}{l}
-c_s^{+} \\
-c_s^{-} \\
\end{array}
\right]
\end{eqnarray}
where
\begin{eqnarray}
\label{equ_Stokes_SPH_sol2_defAs}
\mathcal{A}_s & = &
\left[
\begin{array}{ll}
A_1^{\ell}
- \gamma & \gamma \\
\gamma &
A_2^{\ell} - \gamma \\
\end{array}
\right]
\end{eqnarray}
with
\begin{eqnarray}
\label{equ_Stokes_SPH_sol2_defA_ells}
\\
\nonumber
A_1^{\ell} & = &
\frac{\mu_m}{R_{+}^2}
\left(
2 - \ell(\ell + 1) - \frac{R_{+}}{L^{+}}(\ell + 2)
\right) \\
\nonumber
A_2^{\ell} & = &
\frac{\mu_m}{R_{-}^2}
\left(
2 - \ell(\ell + 1) - \frac{R_{-}}{L^{-}}(\ell - 1)
\right).
\end{eqnarray}
Associated with the inner and outer solvent fluids, we define the length-scales
$L^{-} = \mu_m/\mu_{-}$ and $L^{+} = \mu_m/\mu_{+}$; the associated Saffman-Delbr\"uck length-scales~\cite{Saffman1975,Saffman1976} are discussed in Section~\ref{sec:charScales}.
In summary, equations~\ref{equ_Stokes_full_1}--\ref{equ_Stokes_SPH_sol2_defA_ells} provide the modal responses for the hydrodynamic flow satisfying the two-leaflet Stokes problem in equation~\ref{equ_full_hydro_first}. The model captures the hydrodynamic flow of lipids within the two curved bilayer leaflets that are coupled to one another by intermonolayer slip and that are coupled to the flow of the external solvent fluid. The key parameters are given in Table~\ref{table:descrParams}.
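In practice the modal solve is a $2\times 2$ linear system per mode. The Python sketch below assembles $\mathcal{A}_s$ and computes $a_s^{\pm}$; all parameter values are illustrative assumptions, and we force only the outer leaflet to exhibit the slip-mediated coupling to the inner leaflet.
\begin{verbatim}
# Sketch: two-leaflet modal solve; parameters are assumed values.
import numpy as np
mu_m = 1.0; mu_p = 0.1; mu_n = 0.1   # membrane, outer/inner solvent viscosity
slip = 0.5                            # intermonolayer slip gamma (assumed)
Rp, Rn = 1.0, 0.96                    # leaflet radii R_+, R_- (assumed)
Lp, Ln = mu_m/mu_p, mu_m/mu_n         # length-scales L^+, L^-

def leaflet_coeffs(ell, c_plus, c_minus):
    # assemble A_s and return [a_s^+, a_s^-] = A_s^{-1} [-c^+, -c^-]
    A1 = (mu_m/Rp**2)*(2 - ell*(ell + 1) - (Rp/Lp)*(ell + 2))
    A2 = (mu_m/Rn**2)*(2 - ell*(ell + 1) - (Rn/Ln)*(ell - 1))
    A  = np.array([[A1 - slip, slip],
                   [slip,      A2 - slip]])
    return np.linalg.solve(A, [-c_plus, -c_minus])

for ell in range(1, 6):               # force only the outer leaflet
    print(ell, leaflet_coeffs(ell, 1.0, 0.0))
\end{verbatim}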
We remark that the membrane fluid velocity fields are obtained from these calculations by
\begin{eqnarray}
\label{equ_velFieldRep}
\\
\nonumber
\mb{v}_{-} & = & \left(\mb{v}_{-}^{\flat}\right)^{\sharp} = \sum_s a_s^{-} \left(-\star \mb{d} \Phi_s\right)^{\sharp} \\
& = &
\nonumber
\sum_s a_s^{-}
\left[ \frac{\epsilon_{i\ell}}{\sqrt{|g_{-}|}} \frac{\partial \Phi_s}{\partial x^{\ell}} \right]\partial_{\mb{x}^{i}} \\
\nonumber
\mb{v}_{+} & = & \left(\mb{v}_{+}^{\flat}\right)^{\sharp} =
\sum_s a_s^{+} \left(-\star \mb{d} \Phi_s \right)^{\sharp} \\
& = &
\nonumber
\sum_s a_s^{+}
\left[ \frac{\epsilon_{i\ell}}{\sqrt{|g_{+}|}} \frac{\partial \Phi_s}{\partial x^{\ell}} \right]\partial_{\mb{x}^{i}}.
\end{eqnarray}
The $|g_{\pm}|$ denotes the determinant of each of the metric tensors $\mb{g}_{\pm}$ associated with the leaflet surfaces $\mathcal{M}^{\pm}$ and the $\epsilon_{i\ell}$ denotes the Levi-Civita tensor. We remark that given coordinate singularities on the sphere to use robustly this velocity representation for numerical calculations, we need to use different coordinate charts. For details see our discussion in Appendix~\ref{sec:coord_charts}.
\begin{table}[H]
\centering
\begin{tabular}{|l|l|}
\hline
\rowcolor{LightGrey}
Notation & Description \\
\hline
$\mu_m$ & \small Intramembrane viscosity. \\
$\gamma$ & \small Intermonolayer slip. \\
$\mu_{\pm}$ & \small External bulk solvent viscosity. \\
$R_\subtxt{+}$ & \small Radius of the outer leaflet. \\
$R_\subtxt{-}$ & \small Radius of the inner leaflet. \\
$R$ & \small Average radius of the vesicle. \\
\hline
\end{tabular}
\caption{Description of notation and parameters.
\label{table:descrParams}}
\end{table}
\subsection{Characteristic Physical Scales}
\label{sec:charScales}
To characterize the hydrodynamic responses, we discuss a few useful non-dimensional groups. We first consider how the bulk solvent fluid regularizes the two dimensional membrane hydrodynamics. This can be characterized by considering a disk-shaped patch of a flat membrane of radius $r$. An interesting length-scale is the radius $r$ where the bulk three dimensional traction stress acting on the patch of area $A = \pi r^2$ is comparable in magnitude to the intramembrane stresses acting on the perimeter of the patch of length $\tilde{\ell} = 2\pi r$. This occurs for the inner and outer leaflets on length-scales scaling respectively like $L^{-} = \mu_m/\mu_{-}$ and $L^{+} = \mu_m/\mu_{+}$. The Saffman-Delbr\"uck length-scale~\cite{Saffman1975,Saffman1976} associated with each leaflet is
$L_{SD}^{-} = \frac{1}{2}L^{-}$ and $L_{SD}^{+} = \frac{1}{2}L^{+}$ with average $L_{SD} = \frac{1}{2}\left(L_{SD}^{-} + L_{SD}^{+} \right)$. For a vesicle, it is natural to consider these length-scales relative to the radius of the vesicle $R$. We introduce the non-dimensional groups $\Pi_1^+ = L^{+}/R_{+}$ and $\Pi_1^- = L^{-}/R_{-}$.
For the intermonolayer slip, we consider for the flow the response of the leading order modes with $\ell = 1$. These correspond to the rigid rotations of the spherical shell. For a velocity difference between the layers, the drag is given by $\gamma$. For the leading order modes with $\ell = 1$, the traction stresses arising from the entrained surrounding bulk solvent fluid give an effective drag that scales like ${\mu_{+}}/{R_{+}}$, see equations~\ref{equ_Stokes_SPH_sol2_defAs} and~\ref{equ_Stokes_SPH_sol2_defA_ells}. To characterize for a leaflet the strength of the intermonolayer slip relative to the traction stress exerted by the surrounding solvent fluid, we introduce the
non-dimensional groups $\Pi_2^+ = \gamma R_{+}/\mu_{+}$ and $\Pi_2^- = \gamma R_{-}/\mu_{-}$. For convenience, we also introduce the notation $\gamma^{\pm}_0 = \mu_{\pm}/R_{\pm}$, so that we can express $\Pi_2^{\pm} = \gamma/\gamma^{\pm}_0$.
We remark that $\Pi_2^{\pm}$ can be expressed in the more familiar terms of a ratio of rotational drag coefficients. We have $\Pi_2^+ = \left[8 \pi (\gamma R_{+}) R_{+}^3\right] / \left[8 \pi \mu_{+} R_{+}^3\right]$. For a rigid spherical particle subject to torque $\tau$ in a fluid with viscosity $\bar{\mu}$, the angular velocity $\omega$ is given by $\omega = \left[8\pi\bar{\mu} R_{+}^3\right]^{-1} \tau$~\cite{HappelBrenner1983}. This shows that, at leading order, the intermonolayer slip contributes like a bulk solvent fluid of viscosity $\gamma R_{+}$. The group $\Pi_2^-$ can be expressed similarly.
These four non-dimensional groups $\Pi_1^+$, $\Pi_1^-$, $\Pi_2^+$, $\Pi_2^-$ serve to characterize the relative contributions of the vesicle geometry, shear viscosity within the bilayer leaflets, the shear viscosity of the bulk solvent fluid, and the intermonolayer slip. To simplify our notation, we drop the $\pm$ when the same values are used for each leaflet and denote $\Pi_1 = \Pi_1^+ = \Pi_1^-$ and $\Pi_2 = \Pi_2^+ = \Pi_2^-$.
We can non-dimensionalize equations~\ref{equ_Stokes_full_1}--\ref{equ_Stokes_SPH_sol2_defA_ells} by introducing a characteristic velocity $v_0^{\pm}$ and force density $f_0^{\pm}$. We find it convenient to consider the rigid rotation at angular velocity $\omega_0$ of the spherical shell in the solvent fluid. This motivates the choice of characteristic force density $f_0^{\pm} = \mu_f \omega_0$ and velocity $v_0^{\pm} = R \omega_0$. For simplicity, we consider only the case with $\mu^{\pm} = \mu_f$ and $R_{\pm} = R$. The non-dimensional velocity is $\tilde{\mb{v}}_{\pm}^{\flat} = {\mb{v}}_{\pm}^{\flat}/v_0$ and force density $\tilde{\mb{c}}_{\pm}^{\flat} = {\mb{c}}_{\pm}^{\flat}/f_0$
with coefficients $\tilde{a}$ and $\tilde{c}$.
With this choice, we can express the non-dimensional problem for the full two-leaflet membrane hydrodynamics in equation~\ref{equ_full_hydro_first} as
\begin{eqnarray}
\label{equ_Stokes_full_1_nonDim}
\left[
\begin{array}{l}
\tilde{a}^{+}_s \\
\tilde{a}^{-}_s \\
\end{array}
\right]
& = & \tilde{\mathcal{A}}_s^{-1}
\left[
\begin{array}{l}
-\tilde{c}_s^{+} \\
-\tilde{c}_s^{-} \\
\end{array}
\right]
\end{eqnarray}
where
\begin{eqnarray}
\label{equ_Stokes_SPH_sol2_defAs_nonDim}
\tilde{\mathcal{A}}_s & = &
\Pi_2
\left[
\begin{array}{ll}
\tilde{A}_1^{\ell}
- 1 & 1 \\
1 &
\tilde{A}_2^{\ell} - 1 \\
\end{array}
\right]
\end{eqnarray}
with
\begin{eqnarray}
\label{equ_Stokes_SPH_sol2_defA_ells_nonDim}
\\
\nonumber
\tilde{A}_1^{\ell} & = &
\Pi_2^{-1}\Pi_1
\left(
2 - \ell(\ell + 1) - \Pi_1^{-1}(\ell + 2)
\right) \\
\nonumber
\tilde{A}_2^{\ell} & = &
\Pi_2^{-1}\Pi_1
\left(
2 - \ell(\ell + 1) - \Pi_1^{-1}(\ell - 1)
\right).
\end{eqnarray}
As we shall discuss, this analysis will be useful when considering the relative contributions of the solvent traction stress, intramembrane viscosity, and the intermonolayer slip. Other non-dimensional scalings can also be considered using a similar approach.
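A minimal sketch of the non-dimensional form (the values of $\Pi_1$ and $\Pi_2$ below are arbitrary assumptions) is given by the following.
\begin{verbatim}
# Sketch: assemble the non-dimensional modal matrix A~_s.
import numpy as np
Pi1, Pi2 = 10.0, 2.0                  # L/R and gamma R/mu_f (assumed)

def A_tilde(ell):
    A1 = (Pi1/Pi2)*(2 - ell*(ell + 1) - (ell + 2)/Pi1)
    A2 = (Pi1/Pi2)*(2 - ell*(ell + 1) - (ell - 1)/Pi1)
    return Pi2*np.array([[A1 - 1.0, 1.0],
                         [1.0, A2 - 1.0]])

print(A_tilde(2))
\end{verbatim}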
\subsection{Curvature and Shear}
\label{sec:curvatureAndShear}
In contrast to a flat membrane, material transported on a curved membrane can undergo shear even by a flow having an effectively constant velocity field on the surface. To investigate the role of intrinsic curvature of the surface, we consider flow on the sphere which has constant positive Gaussian curvature $K > 0$ and the pseudosphere which has constant negative Gaussian curvature $K < 0$~\cite{Pressley2001}, see Figure~\ref{fig:curvatureAndShear}.
For concreteness, we parametrize the sphere having Gaussian curvature $K = 1$ with the coordinates $(\theta,\phi)$, where $\theta$ is the latitude, with $\mb{x} = \psi(\theta,\phi) = \left[\cos(\theta)\cos(\phi),\cos(\theta)\sin(\phi),\sin(\theta)\right]$. We parametrize the pseudosphere having Gaussian curvature $K = -1$ with the coordinates
$(\theta,\phi)$ with $\mb{x} = \psi(\theta,\phi) = \left[\sech(\theta)\cos(\phi),\sech(\theta)\sin(\phi),\theta - \tanh(\theta)\right]$.
We first consider a flow having a velocity field $\mb{v}$ with zero co-variant derivative $\nabla \mb{v} = 0$ on the surface (constant tangent vector). On both the sphere and pseudo-sphere a velocity having this property is given by $\mb{v} = \left[-\sin(\phi),\cos(\phi),0\right]$. We remark that it is convenient here to express the velocity in terms of the $xyz$-components in $\mathbb{R}^3$ given by the embedding from the parametrization above. For a curved surface, this provides the analogue to a flat surface of having a flow with constant velocity. We find that the curvature results in shearing of the transported material. Intuitively, this arises relative to the flat surface by the way
intrinsic curvature requires distortion of the distance relationships between points on the surface. More precisely, consider two points located at $(\theta_1,\phi_0)$ and $(\theta_2,\phi_0)$ with $\theta_2 > \theta_1 \geq 0$ in the upper hemisphere. While both points travel at exactly the same speed, the point $(\theta_2,\phi_0)$ which starts closer to the north pole will take less time to traverse fully around the $xy$-circular cross-section of the surface. This curvature associated distortion of the distances results in shearing of the transported material. This is illustrated in the panel on the left in Figure~\ref{fig:curvatureAndShear}.
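A small calculation makes this concrete. In the sketch below (latitudes chosen arbitrarily; unit sphere assumed), two material points advected by the unit-speed azimuthal flow accumulate azimuth at rates $1/\cos(\theta)$, and the growing phase difference between them is the shear.
\begin{verbatim}
# Sketch: circuit times and accumulated azimuth at two latitudes.
import numpy as np
th1, th2 = 0.2, 0.6                   # latitudes, th2 nearer the pole
T1 = 2.0*np.pi*np.cos(th1)            # circuit time at th1
T2 = 2.0*np.pi*np.cos(th2)            # circuit time at th2 (shorter)
t = np.linspace(0.0, T2, 5)
for th, name in [(th1, "th1"), (th2, "th2")]:
    phi = t/np.cos(th)                # accumulated azimuth phi(t)
    print(name, np.round(phi, 3))     # growing phase lag = shear
\end{verbatim}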
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{fig_curvatureAndShear.png}
\caption{Curvature and Shear. In contrast to a flat membrane, material transported on a curved membrane can exhibit shear even by a flow having an effectively constant surface velocity. We consider two surfaces (i) the sphere with constant Gaussian curvature $K > 0$ and (ii) the pseudosphere with constant Gaussian curvature $K < 0$. For a rectangular patch of material on the surface (beige), we show how transport changes its shape over time (red). On the left we show the transport for a velocity field with zero co-variant derivative $\nabla \mb{v} = 0$ (tangent vectors are constant). On the right we show transport for a velocity field with zero dual exterior derivative $d\mb{v}^{\flat} = 0$ (co-tangent vectors are constant). For either type of velocity field on the surface, in contrast to a flat surface, we see that the intrinsic curvature can result in shearing of the transported material. This effect is captured in our hydrodynamic model by the Gaussian curvature term and exterior calculus in equations~\ref{equ_Stokes_geometric}.
\label{fig:curvatureAndShear}}
\end{figure}
We can also consider a flow having a velocity field $\mb{v}$ with dual field $\mb{v}^{\flat}$ having zero exterior derivative $d\mb{v}^{\flat} = 0$ (constant co-tangent vector field).
The constant co-tangent case is motivated by the exterior calculus formulation of the fluid equations where for such an incompressible field the flow is determined only from the Gaussian curvature term, see equation~\ref{equ_Stokes_geometric}.
We remark that while the co-tangent vector field $\mb{v}^{\flat} = v_b d\mb{x}^b$ is constant on both the sphere and pseudosphere, the velocity field $\mb{v} = v^a \partial_{x^a}$ on each surface is modulated by the local components of the inverse metric factor as $v^{a} = g^{ab} v_b$. For any incompressible velocity field with zero exterior derivative $\mb{d}\mb{v}^{\flat} = 0$, according to equation~\ref{equ_Stokes_geometric} on any constant Gaussian curvature surface the force density $\mb{b}^{\flat}$ must also have zero exterior derivative $\mb{d}\mb{b}^{\flat} = 0$.
To construct such a flow, we consider for the sphere $\mb{v}^{\flat} = -\mb{b}^{\flat}/2\mu_m K = +d\phi$ and for the
pseudosphere $\mb{v}^{\flat} = -\mb{b}^{\flat}/2\mu_m K = -d\phi$. The sign change in the fluid velocity arises from the way in which the Gaussian curvature affects the flow response to the force density, see equation~\ref{equ_Stokes_geometric}. For the velocity field on the sphere, the inverse metric term $g^{\phi\phi} = 1/\cos^2(\theta)$ yields $\mb{v} = \left[-\sin(\phi)/\cos(\theta),\cos(\phi)/\cos(\theta),0\right]$. For the pseudosphere, the inverse metric term $g^{\phi\phi} = 1/\sech^2(\theta)$ yields the velocity $\mb{v} = \left[\sin(\phi)/\sech(\theta),-\cos(\phi)/\sech(\theta),0\right]$.
We see that for both the sphere and pseudosphere the material transported under such a flow is sheared. For the sphere and pseudosphere the shear is in opposite directions, arising from the opposite signs of the Gaussian curvature $K$ of the surface. This is illustrated in the panel on the right in Figure~\ref{fig:curvatureAndShear}. We remark that similar types of geometry and shear effects can be used for performing rheological experiments as was done in~\cite{Calvo1990}.
\section{Particle-Bilayer Coupling: Immersed Boundary Methods for Manifolds}
\label{sec:particleBilayerCoupling}
To model the motions of particles within the membrane, we compute a mobility tensor using approximations closely related to the \textit{Immersed Boundary Method} (IB)~\cite{Peskin2002,AtzbergerSELM2011,Atzberger2007a,Atzberger2006,Atzberger2007c,AtzbergerTabak2015}. In IB the fluid-structure interactions are approximated by coupling operators that average the surrounding flow field to determine the particle velocity and that spread particle forces into a force density field~\cite{Peskin2002,AtzbergerSELM2011}. We introduce IB approaches for manifolds to capture both the translational and rotational responses of inclusion particles to applied forces and torques when subject to coupling through the membrane's hydrodynamics. We show how our IB approaches can be used to compute an effective mobility tensor for these responses.
\subsection{Mobility Tensor}
\label{sec:mobilityTensor}
We express the mobility tensor $M$ for the velocity response of a collection of particles as
\begin{eqnarray}
\mb{V} = M \mb{F}.
\end{eqnarray}
The $\mb{V}$ is the collective vector of velocities and angular velocities of the particles and $\mb{F}$ is the collective vector of forces and torques applied to the particles. For particle $i$, the velocity is given by
$\mb{V}_i = \left[\mb{V} \right]_i$ and the particle force by
$\mb{F}_i = \left[\mb{F} \right]_i$. It is also convenient to decompose the mobility tensor into the components $M_{ij}$ where
\begin{eqnarray}
M =
\left[
\begin{array}{llll}
M_{11} & M_{12} & \ldots & M_{1N} \\
\vdots & \vdots & \vdots & \vdots \\
M_{N1} & M_{N2} & \ldots & M_{NN} \\
\end{array}
\right].
\end{eqnarray}
The response of a single particle to a force applied directly to that particle is given by the diagonal block components $M_{ii}$. The two-particle response associated with the velocity of particle $i$ in response to a force applied to particle $j$ is given by the off-diagonal block component $M_{ij}$.
\subsection{Coupling Operators $\Gamma$ and $\Lambda$ for Curved Surfaces}
\label{sec:mobilityOperators}
The mobility tensor for the interactions between the $i^{th}$ and $j^{th}$ particle is given by
\begin{eqnarray}
\label{def_mob_S}
M_{ij} = \Gamma_i \mathcal{S} \Lambda_j,
\end{eqnarray}
where we have the operators $\Gamma_i = \Gamma\left[\mb{X}^{i}\right]$ and $\Lambda_j = \Lambda\left[\mb{X}^{j}\right]$. In the notation, we denote by $\mathcal{S}$ the solution operator for the hydrodynamic equations~\ref{equ_full_hydro_first}. The velocity field for the hydrodynamics $\mb{v}(\mb{x})$ under the applied force density $\mb{f}(\mb{x})$ is given by $\mb{v} = \mathcal{S} \mb{f}$. The operators $\Gamma,\Lambda$ approximate the fluid-structure interaction by modelling the velocity response and forces of the particles. The force density generated by an applied force $\mb{F}$ on particle $j$ is given by $\mb{f} = \Lambda_j \mb{F}$. In response, the velocity $\mb{V}$ of particle $i$ is given by $\mb{V} = \Gamma_i \mb{v}$.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{fig_GammaLambdaCurved2.png}
\caption{For curved surfaces the coupling operators $\Gamma$ and $\Lambda$ must be consistent with the tangent bundle of the surface. We use reference vector fields on the surface to construct the coupling operators $\Lambda$ and $\Gamma$. We derive operators
$\Gamma$ and $\Lambda$ for translational and rotational motions
using the adjoint conditions in equations~\ref{equ_adjoint_cond} and ~\ref{equ_adjoint_discrete}. On the left we show the reference vector field for translational responses (green). On the right we show the reference vector field for rotational responses (green).
\label{fig:gammaLambdaSurface}}
\end{figure}
Many choices can be made for the operators $\Gamma$ and $\Lambda$. This can be used for both translational and rotational responses~\cite{AtzbergerSELM2011}. To ensure that the approximate fluid-structure coupling is non-dissipative, it has been shown that the operators must be adjoints~\cite{AtzbergerSELM2011,Peskin2002,AtzbergerTabak2015}. We require for any choice of field $\mb{v}$ and vector $\mb{F}$ that the operators satisfy the adjoint condition
\begin{eqnarray}
\label{equ_adjoint_cond}
\langle \Gamma \mb{v} , \mb{F} \rangle & = &
\langle \mb{v} , \Lambda \mb{F} \rangle.
\end{eqnarray}
The inner-products are defined as
\begin{eqnarray}
\langle \Gamma \mb{v} , \mb{F} \rangle & = &
\sum_i \left[ \Gamma \mb{v} \right]_i \cdot \left[\mb{F}\right]_i \\
\langle \mb{v} , \Lambda \mb{F} \rangle & = & \int_{\Omega}
\mb{v}(\mb{x})\cdot \left(\Lambda \mb{F}\right)(\mb{x}) d\mb{x}
\end{eqnarray}
where $\cdot$ denotes the dot-product in the embedding space $\mathbb{R}^3$.
We use the notation $\Gamma^T = \Lambda$ to denote succinctly the adjoint condition~\ref{equ_adjoint_cond}. We remark that from equation~\ref{def_mob_S} this condition has the desirable consequence of yielding a symmetric mobility tensor $M$ when $\mathcal{S}$ is symmetric.
To obtain the translation and rotational responses of the particles, we introduce the operators
\begin{eqnarray}
\Gamma \mb{v} & = & \int_{\Omega} \mb{W}\left[\mb{v}\right] (\mb{y}) d\mb{y} \\
\Lambda \mb{F} & = & \mb{W}^*\left[ \mb{F} \right](\mb{x}).
\end{eqnarray}
The $\mb{X}$ denotes the collective vector of particle locations. The $i^{th}$ particle is at location $\left[\mb{X}\right]_i$.
To obtain the particle velocity in response to the hydrodynamic flow, the tensor $\mb{W}$ serves to average by sampling and weighting the velocity values on the surface. For a particle force, the adjoint tensor $\mb{W}^*$ serves to produce a force density field.
For a curved surface, $\mb{W}$ must be chosen carefully. A simple form which is widely used in IB is to use a scalar weight $\mb{W}^*[\mb{F}] = \eta(\mb{y} -\mb{X}) \mb{F}$ where $\eta$ is the Peskin $\delta$-function~\cite{AtzbergerSELM2011,Peskin2002}. However, for a curved surface this is not a good choice, since the force density field produced by $\Lambda\mb{F} = \eta(\mb{y} -\mb{X}) \mb{F}$ does not lie in the tangent space of the surface. Similarly, the averaging procedure $\Gamma$ may produce a particle velocity which is not in the tangent space.
For curved surfaces, we use a more geometrically motivated operator of the form
\begin{eqnarray}
\mb{W}\left[\mb{v}\right] & = &
\sum_i \mb{w}^{[i]}\left[\mb{v}\right](\mb{x}) \\
& = & \sum_i \mb{w}^{[i],\alpha}\left[\mb{v}\right] \partial_{\alpha}|_{\mb{X}^{[i]}} \\
\nonumber
& = & \sum_i
\left(\left(w^{[i]}\right)^{\alpha}_{\beta}v^{\beta}\right) \partial_{\alpha}|_{\mb{X}^{[i]}}.
\end{eqnarray}
The sum $i$ runs over the particle indices and the $\partial_{\alpha}|_{\mb{X}^{[i]}}$ denotes the tangent basis vector in direction $\alpha$ at location $\mb{X}^{[i]}$. The square brackets $[i]$ are used to help distinguish entries not involved in the Einstein summation convention. This can be interpreted as the procedure of obtaining the average velocity for particle $i$ by using for each coordinate direction $\alpha$ the inner-product of the velocity field $\mb{v}$ with the reference vector field $\mb{q}^{\alpha} = \left(\mb{w}^{[i],\alpha}\right)^{\sharp} = \left(w^{[i],\alpha}\right)^{\gamma}\partial_{\gamma}$.
The adjoint tensor yielding the local force density is given by
\begin{eqnarray}
\mb{W}^*\left[\mb{F}\right] & = & \sum_i
\left(\mb{w}^{[i],\alpha}\right)^{\sharp} F^{\alpha} \\
& = & \sum_i
\left(w^{[i],\alpha}\right)^{\gamma} F^{\alpha} \partial_{\gamma}.
\end{eqnarray}
For translational motions we use the reference vector fields of the form $\mb{q}^{\theta} = \psi(\mb{x} - \mb{X}^{[i]}) \partial_{\theta}$
and $\mb{q}^{\phi} = \psi(\mb{x} - \mb{X}^{[i]})/\cos(\theta) \partial_{\phi}$, where $\psi(r) = C \exp(-r^2/2\sigma^2)$.
For rotational responses we use the reference vector field on the surface
$\mb{q}^{n} = \psi(\mb{x} - \mb{X}^{[i]})\left( \mb{n} \times (\mb{x} - \mb{X}^{[i]})\right)$. We emphasize that we only use these expressions for a coordinate chart chosen so that the particle location $\mb{X}^{[i]}$ is away from a polar singularity, see Appendix~\ref{sec:coord_charts}. The reference velocity fields are illustrated in Figure~\ref{fig:gammaLambdaSurface}.
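As an illustration of this construction (our sketch; the kernel width $\sigma$, patch size, and lattice resolution are arbitrary assumptions), the translational reference field $\mb{q}^{\theta}$ can be sampled on a surface patch as follows, using latitude coordinates on the unit sphere.
\begin{verbatim}
# Sketch: sample the translational kernel field q^theta near a particle.
import numpy as np
sigma = 0.15                           # Gaussian width (assumed); C = 1
th0, ph0 = 0.0, 0.0                    # particle at the equator (latitude)
X0 = np.array([np.cos(th0)*np.cos(ph0),
               np.cos(th0)*np.sin(ph0), np.sin(th0)])
th = np.linspace(th0 - 0.5, th0 + 0.5, 21)
ph = np.linspace(ph0 - 0.5, ph0 + 0.5, 21)
TH, PH = np.meshgrid(th, ph, indexing='ij')
P = np.stack([np.cos(TH)*np.cos(PH),
              np.cos(TH)*np.sin(PH), np.sin(TH)], -1)
psi = np.exp(-np.sum((P - X0)**2, -1)/(2.0*sigma**2))  # envelope psi
e_th = np.stack([-np.sin(TH)*np.cos(PH),
                 -np.sin(TH)*np.sin(PH), np.cos(TH)], -1)
q_theta = psi[..., None]*e_th          # tangent-valued kernel samples
print(q_theta.shape)                   # (21, 21, 3)
\end{verbatim}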
In practice for the spatially discretized system, the operators and the associated fields they generate can be expressed conveniently as
\begin{eqnarray}
V^{i} & = & \Gamma_{m}^{i} v^{m} \\
f^m & = & \Lambda^{m}_j {F}^{j}.
\end{eqnarray}
The index $m$ corresponds to the discrete degrees of freedom, such as the index of a lattice point or harmonic mode, and the $i$ and $j$ index the components of the vector. We have
\begin{eqnarray}
V^k F^k = \Gamma_m^{k} v^m {F}^{k} = v^m \Lambda^m_{k} {F}^{k} = v^m f^m.
\end{eqnarray}
The adjoint condition can be expressed as
\begin{eqnarray}
\label{equ_adjoint_discrete}
\Gamma_m^{k} = \Lambda_{k}^{m}.
\end{eqnarray}
In practice, we define the operator $\Lambda$ in numerical calculations using the specified reference velocity fields $\mb{q}^{\alpha}$ above to generate the force density at lattice sites on the sphere surface. Using the sparse matrix representation of this operation for $\Lambda$, equation~\ref{equ_adjoint_discrete} provides the adjoint velocity averaging operator $\Gamma$. This approach for developing consistent operators $\Lambda$ and $\Gamma$ on the sphere also extends straightforwardly to immersed boundary approximations on more general curved surfaces and manifolds.
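We illustrate this below with a minimal Python sketch of how $\Lambda$ could be assembled as a sparse matrix from the Gaussian reference fields, with $\Gamma$ then obtained from the discrete adjoint condition. The sketch is ours and is simplified for exposition; in particular, it assumes precomputed orthonormal tangent bases at the lattice sites and works in a single coordinate chart away from the polar singularities.
\begin{verbatim}
import numpy as np
from scipy import sparse

def psi(r2, sigma=1.0, C=1.0):
    # Gaussian weight psi(r) = C exp(-r^2 / (2 sigma^2)).
    return C * np.exp(-r2 / (2.0 * sigma**2))

def build_Lambda(sites, tangents, X, sigma=1.0):
    # sites:    (M, 3) lattice points on the sphere surface
    # tangents: (M, 2, 3) orthonormal tangent basis at each site
    # X:        (3,) particle location, away from a chart singularity
    # Returns the (3M x 2) sparse spreading operator mapping an in-chart
    # force (F_theta, F_phi) to a tangential force density at the sites.
    w = psi(np.sum((sites - X)**2, axis=1), sigma)
    rows, cols, vals = [], [], []
    for m in range(sites.shape[0]):
        if w[m] < 1e-14:
            continue  # the Gaussian is effectively locally supported
        for alpha in range(2):
            q = w[m] * tangents[m, alpha]  # reference field q^alpha
            for c in range(3):
                rows.append(3*m + c); cols.append(alpha); vals.append(q[c])
    return sparse.csr_matrix((vals, (rows, cols)),
                             shape=(3*sites.shape[0], 2))

# Discrete adjoint condition: Gamma = Lambda.T, so the averaged particle
# velocity is V = Gamma @ v for a flattened lattice velocity field v.
\end{verbatim}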
\section{Dynamics of Inclusion Particles Embedded in Spherical Bilayers}
\label{sec:particleMobilities}
For particle inclusions embedded within spherical lipid bilayer membranes, we investigate their translational and rotational motions in response to applied forces and torques. We consider the case when each embedded inclusion particle only spans one of the fluid bilayer leaflets, see Figure~\ref{fig:inclusionSchematic}. We investigate the mobility of these inclusions when varying the (i) vesicle radius, (ii) membrane viscosity, (iii) solvent viscosity, and (iv) intermonolayer slip. We investigate the four interaction cases (i) outer-outer, (ii) outer-inner, (iii) inner-outer, and (iv) inner-inner. We also investigate the coupled motions for the four cases (i) translation-translation, (ii) translation-rotation, (iii) rotation-translation, and (iv) rotation-rotation.
We express the translational and rotational responses as
\begin{eqnarray}
\left[
\begin{array}{l}
\mb{V} \\
\boldsymbol{\omega}
\end{array}
\right]
=
\mb{M}
\left[
\begin{array}{l}
\mb{F} \\
\boldsymbol{\tau}
\end{array}
\right]
\end{eqnarray}
where we decompose the mobility tensor into the blocks
\begin{eqnarray}
\mb{M} =
\left[
\begin{array}{ll}
M_{tt} & M_{tr} \\
M_{rt} & M_{rr} \\
\end{array}
\right].
\end{eqnarray}
In the notation, the $\mb{V}$ denotes the collective translational velocities and $\boldsymbol{\omega}$ the collective rotational angular velocities. The $\mb{F}$ denotes the collective forces applied to particles within the inner and outer leaflets. The $\boldsymbol{\tau}$ denotes the collective torques applied to particles within the inner and outer leaflets.
We denote the different ways in which the forces $\mb{F}$ and torques $\boldsymbol{\tau}$ couple to the particle translational motions $\mb{V}$ and rotational motions $\boldsymbol{\omega}$ using the notation $M_{XY}$. The $X$ denotes the response as either translational (t) or rotational (r). The $Y$ denotes the type of applied forcing as either standard force (t) or torque (r). The mobility components can be further decomposed into $M_{XY,i_1,\ell_1,i_2,\ell_2}$, where $i_k$ denotes the index of the $k^{th}$ particle. The leaflet in which each inclusion is embedded is denoted by $\ell_k \in \{\mbox{inner}, \mbox{outer}\}$.
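To make the construction concrete, we sketch below one way the blocks of $\mb{M}$ can be tabulated numerically by probing: a unit force or torque is applied to each particle in turn and the averaged responses of all particles are recorded column-by-column. This Python sketch is illustrative only; \texttt{solve\_flow} and \texttt{average\_motion} stand in for the hydrodynamic solver and averaging operators of the preceding sections and are assumed interfaces, not existing library calls.
\begin{verbatim}
import numpy as np

def assemble_mobility(n, solve_flow, average_motion):
    # n: number of inclusion particles.
    # solve_flow(j, e, kind): flow for a unit force (kind='t') or unit
    #   torque (kind='r') in direction e applied to particle j.
    # average_motion(flow, i): (V_i, omega_i) of particle i in that flow.
    M = np.zeros((6*n, 6*n))
    for j in range(n):
        for kind, off in (('t', 0), ('r', 3)):
            for a in range(3):
                e = np.zeros(3); e[a] = 1.0
                flow = solve_flow(j, e, kind)
                col = 6*j + off + a
                for i in range(n):
                    V, omega = average_motion(flow, i)
                    M[6*i:6*i+3, col] = V        # translational response
                    M[6*i+3:6*i+6, col] = omega  # rotational response
    return M
\end{verbatim}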
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{fig_hydroInclusionModality3.png}
\caption{We consider the case when each embedded inclusion particle only spans one of the bilayer leaflets. We consider the interaction cases when particles are both in the same leaflet or are in different leaflets. \label{fig:inclusionSchematic}}
\end{figure}
There are a few notable differences between spherical fluid membranes and flat fluid membranes. In the flat case, the membrane domain is often treated as effectively infinite and, for theoretical convenience, often as having periodic boundary conditions. In the spherical case, the membrane is intrinsically of finite area. Also, for a sphere, as a consequence of the topology, any in-plane hydrodynamic flow must have a singularity~\cite{Jarvis2004}. For the solvent fluid, flat membranes have fluid extending over an infinite domain symmetrically on both sides.
In the spherical case, this symmetry is broken, with solvent fluid trapped within the interior in a region of finite volume and solvent fluid extending over an infinite exterior domain. The curvature of the membrane surface can also play an important role in the hydrodynamics. This is particularly apparent from the Gaussian term that appears in equation~\ref{equ_full_hydro_first} and the effects we discussed in Section~\ref{sec:curvatureAndShear}.
We investigate the mobility of inclusion particles within spherical bilayers in a few different regimes. We consider the characteristic scales introduced in Section~\ref{sec:charScales}. The regime with $\Pi_1 = L/R \gg 1$ and $\Pi_2 = \gamma R/\mu_f$ satisfying $\Pi_2^{-1} \Pi_1 = 1$ corresponds to the case when the hydrodynamic flow is dominated by the intramembrane viscosity and intermonolayer slip. In this regime, for a force density having a non-zero net torque, the flow is approximated well by the leading order spherical harmonic modes with $\ell = 1$, see equation~\ref{equ_Stokes_SPH_sol2_defA_ells_nonDim}. The intramembrane viscosity strongly couples the surface fluid, resulting in a flow that is a rigid body rotation of the entire spherical shell, see Figure~\ref{fig:singleParticleFlowResponse}. Parameter values are given in Table~\ref{table:defaultParams}.
Mathematically, this arises from the dominant spherical harmonic modes with degree index $\ell = 1$ and order index $m = -1,0,1$. Using the exterior calculus formulation, we apply the generalized $\mbox{curl} = -\star d$ on the surface to the vector potential $\Phi$ given by a linear combination of the harmonic modes of degree $\ell = 1$. This yields for the velocity field on the surface that of a rigid body rotation, see equation~\ref{equ_velFieldRep}. In the case when the surface force has zero net torque in the regime $\Pi_1 \gg 1$, $\Pi_2^{-1} \Pi_1 = 1$, the leading order flow is determined by the intramembrane viscosity and intermonolayer slip and depends on the higher-order moments of the torque of degree $\ell > 1$.
The regime with $\Pi_1 \ll 1$ and $\Pi_2 \ll 1$ corresponds to the case when the traction stress from the entrained external solvent fluid dominates the hydrodynamic response relative to the intramembrane viscosity and intermonolayer slip. This results in more localized flow within the surface, see Figure~\ref{fig:singleParticleFlowResponse}.
We remark that the regime when $\Pi_2 \gg 1$ corresponds to the case when the intermonolayer slip strongly couples the hydrodynamic flow between the two leaflets to make them nearly identical. This effectively doubles the intramembrane viscosity.
We have presented a few regimes indicating the contributing factors in the hydrodynamic responses and the interplay between the entrained solvent fluid, intramembrane viscosity, and intermonolayer slip. We now discuss some features of the hydrodynamic response that arise from the geometry of the spherical membrane.
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{fig_forceHydroResponse.png}
\caption{Hydrodynamic flow in response to a force acting on an inclusion particle. The $L/R = \Pi_1$ is the Saffman-Delbr\"uck length-scale scaled by the vesicle radius. For small intramembrane viscosity the force produces a localized hydrodynamic flow on the surface. As the membrane viscosity increases, the hydrodynamic flow becomes less localized and eventually approaches the velocity field of a rigid body rotation of the sphere. The flow exhibits two vortices with locations that migrate toward the equator as the viscosity increases. Parameter values in Table~\ref{table:defaultParams}.
\label{fig:singleParticleFlowResponse}}
\end{figure}
}
\subsection{Vortices and Membrane Viscosity}
\label{sec:vortices}
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{fig_vortexLocation_newFlow.png}
\caption{Vortex Location and Membrane Viscosity. For a force applied to a particle in the outer leaflet located at the north pole, we show how the vortex location changes in the outer and inner leaflets as the shear viscosity is varied. In the nomenclature X-Y in the legend, X refers to the leaflet of the applied force and Y to the leaflet of the flow response. For low viscosity the vortices are near the north pole $\theta = 0$. As viscosity increases, the vortices migrate toward the equator $\theta = \pi/2$. The insets show, left to right, the typical progression of the vortex location in the flows. The intermonolayer slip $\Pi_2 = \gamma/\gamma_0 = 4$ moderately couples the inner leaflet to the outer leaflet. We find this results in a flow within the inner leaflet with a vortex location closer toward the equator. Parameter values in Table~\ref{table:defaultParams}.
\label{fig:vortexLocation}}
\end{figure}
As a consequence of the spherical topology of the membrane, any hydrodynamic flow within the surface must contain a singularity~\cite{Jarvis2004}. We consider the case of an inclusion particle located at the north pole of the sphere and subjected to a force. These singularities manifest in the flow as two vortices of opposite sign, see Figure~\ref{fig:singleParticleFlowResponse}. The location of these vortices depends on $\Pi_1 = L/R$, characterizing the relative strength of the intramembrane shear viscosity vs the solvent traction stress. For small $\Pi_1$ the vortices start near the north pole and as $\Pi_1$ increases they migrate toward the equator, see Figure~\ref{fig:singleParticleFlowResponse}. For a force applied to a particle in either the outer leaflet or inner leaflet, we consider how the vortex location changes as the viscosity of the membrane is varied. We show the vortex locations in the outer and inner leaflets in Figure~\ref{fig:vortexLocation}. In this case we vary $\Pi_1 = L/R$ and keep fixed $\Pi_2 = \gamma/\gamma_0$ where $\gamma_0 = \mu_f/R$, with parameters in Table~\ref{table:defaultParams}. We remark that these results can be used as a reference to estimate the membrane shear viscosity by making observations of the vortex locations of the fluid flow within the leaflets. Some recent experimental work to estimate the membrane viscosity of vesicles using vortex locations can be found in~\cite{Woodhouse2012,HonerkampSmith2013,Dimova1999}.
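As a simple illustration of how such an estimate could proceed numerically, the sketch below (ours, under the assumption that the flow speed has been sampled along the meridian containing a vortex) locates the vortex colatitude as the interior minimum of the sampled speed, which can then be compared against reference curves such as those in Figure~\ref{fig:vortexLocation}.
\begin{verbatim}
import numpy as np

def vortex_colatitude(theta, speed):
    # theta: colatitude samples along the meridian through the vortex;
    # speed: |u|(theta) sampled at those colatitudes.  The vortex center
    # is taken as the interior minimum of the speed (endpoints excluded).
    k = np.argmin(speed[1:-1]) + 1
    return theta[k]
\end{verbatim}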
\subsection{Self-Mobility and Coupled-Mobility}
\label{sec:mobilityResults}
We next consider the hydrodynamic responses when a force or torque is applied to an inclusion particle embedded in the outer leaflet when the center of the sphere is held fixed. We take as our convention that this particle is embedded at a pole, where we parametrize the sphere with $(\theta,\phi) = 0$. We then consider how the resulting hydrodynamic flows within the inner or outer leaflets of the spherical bilayer couple the translational motions and rotational motions of inclusion particles at other locations. Throughout, we use the base-line parameters given in Table~\ref{table:defaultParams}. These parameters correspond to the non-dimensional regime with $\Pi_1 = 0.65$ and $\Pi_2 = 4.0$.
\begin{table}[H]
\footnotesize
\centering
\begin{tabular}{|l|l||l|l|}
\hline
\rowcolor{LightGrey}
Parameter & Value & Parameter & Value \\
\hline
$R_{-}$ & 14 $\sigma$ & $\mu^{\pm} = \mu_f$ & 383 $m_0/\tau\sigma$ \\
\hline
$R_{+}$ & 16.6 $\sigma$ & $\mu_m$ & 3830 $m_0/\tau$ \\
\hline
$R$ & 15.3 $\sigma$ & $\gamma$ & 100 $m_0/\tau\sigma^2$ \\
\hline
$\sigma$ & 1 nm & $m_0$ & 1 amu \\
\hline
$\tau$ & 0.64 ps & $\epsilon$ & 2.5 amu$\cdot$nm$^2$/ps$^2$ \\
\hline
\end{tabular}
\caption{Vesicle Parameters. We use these default parameters throughout our discussions unless specified otherwise. These parameters correspond to the non-dimensional regime with $\Pi_1 = L/R = 0.65$ and $\Pi_2 = \gamma R/\mu_f = 4.0$.
\label{table:defaultParams}}
\end{table}
We investigate the roles played by the bulk solvent fluid, the intramembrane viscosity, and the intermonolayer slip. We use our methods to compute profiles of the mobility responses at different locations when varying the intramembrane viscosity and intermonolayer slip in Figure~\ref{fig:singleMobilityProfiles}. We show how the mobility varies when changing the intermonolayer slip and membrane viscosity in Figures~\ref{fig:viscAll} and~\ref{fig:slipAll}. For comparison we also compute the mobility responses within a flat membrane shown in Figure~\ref{fig:flatMembrane}.
Before discussing these results in more detail, we make a few remarks concerning how the mobility results are reported. The responses are shown along the two great circles on the sphere corresponding to the intersection with the $xy$-plane and the $xz$-plane.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{fig_diagram_crossSections2.png}
\caption{Cross Sections of the Sphere and Conventions. We consider the hydrodynamic responses when a force or torque is applied to an inclusion particle. For convenience in our calculations, we use by convention the coordinates for the inclusion particle $\mb{X} = (x,y,z) = (1,0,0)$ and we apply force to the inclusion particle in the direction $\mb{F} = (f_x,f_y,f_z) = (0,1,0)$. To characterize the hydrodynamic responses, we consider the cross-sections of the sphere in the $xy$-plane and the $xz$-plane. This gives two great circles of the sphere. We consider the velocity in the directions parallel and perpendicular to the tangents of each of the respective great circles.
\label{fig:diagramCrossSections}}
\end{figure}
We consider the velocity responses in the parallel $\parallel$ and perpendicular $\perp$ directions along each of these curves. We normalize all of the mobility results by comparing to the case of large intramembrane viscosity $\Pi_1 = L/R = 48$ for the leaflet or large intermonolayer slip corresponding to $\Pi_2 = \gamma R/\mu_f = 32$. This regime provides a reference case corresponding to the situation when the large intramembrane viscosity yields an effective rigid body rotation of the spherical shell within the bulk solvent fluid or when the two leaflets are tightly coupled.
We remark that this is in contrast to the flat membrane case where the mobility tends to zero as $\Pi_1 = L/R$ becomes large. In the flat membrane case, we normalize instead our reported results by the self-mobility when $\Pi_1 = L/R = 0.1$. Given the mobility model for the hydrodynamic responses discussed in Section~\ref{sec:mobilityTensor}, the self-mobility on the sphere for each type of coupling is given in our model by the results reported at location $(\theta, \phi) = 0$.
The mobility profiles reveal a number of interesting aspects of the hydrodynamic coupling between inclusion particles and leaflets. We find that the intermonolayer slip and curvature yield coupling for particles embedded in the inner leaflet significantly different from that for particles embedded in the outer leaflet. For a force or torque applied to a particle embedded in the outer leaflet, the intermonolayer slip yields a flow for the inner leaflet with recirculation over a larger scale. This is seen when looking at the vortex locations when applying force at the north pole, where the intermonolayer slip plays a role pushing the vortex location of the inner leaflet closer to the equator, see Figure~\ref{fig:vortexLocation}.
We see this can result in both the translational motions and rotational motions of a particle within the inner leaflet being in the opposite direction of those of an inclusion particle within the outer leaflet at the same location. This is seen for the smallest viscosities and intermonolayer slips for the translation-translation responses at location $xz$ with $\phi = \pi/4$ and for the rotation-rotation responses at location $xy$ with $\theta = \pi/4$, see Figures~\ref{fig:viscAll} and~\ref{fig:slipAll}.
\newpage
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{fig_mobility_single_mobility_profile_combined.png}
\caption{Mobility profiles of inclusion particles when varying the
membrane viscosity and intermonolayer slip. In each case a force or a torque is applied to a single inclusion particle within the outer leaflet located at $(\theta,\phi) = 0$. The resulting inclusion particle hydrodynamic response within the outer leaflet or inner leaflet is shown in terms of the mobility $M$ defined in Section~\ref{sec:mobilityTensor}. We use the nomenclature X-Y in the titles to indicate a forcing of type X and a response of type Y. We normalize the mobility by the self-mobility response obtained in the case when $\Pi_1 = L/R = 48$ and $\Pi_2 = \gamma R/\mu_f = \gamma/\gamma_0 = 32$. The intramembrane viscosity or intermonolayer slip is held fixed in the respective panels at $\Pi_1 = L/R = 0.13$ or $\Pi_2 = \gamma/\gamma_0 = 4$. All panels show the outer leaflet response with the exception of the upper-right panel for the translation-translation response, which shows how the inner leaflet responds to increasing intermonolayer slip. The other panels show the dependence of the mobility response of inclusion particles embedded within the outer leaflet when increasing the membrane viscosity as $\Pi_1 = L/R = 0.13, 0.26, 0.65, 1.3, 2.6, 6.5, 13, 26, 52$.
The curve with largest amplitude at $\theta = 0$ corresponds to the largest local mobility response, which occurs for the smallest membrane viscosity. The panels also show the dependence of the mobility response of inclusion particles embedded within the inner leaflet when increasing the intermonolayer slip as $\Pi_2 = \gamma/\gamma_0 = 0.040, 0.10, 0.40, 1.0, 4.0, 8.0, 16, 32$.
The curve with smallest amplitude at $\theta = 0$, showing the smallest mobility response, corresponds in each case to the smallest intermonolayer slip.
\label{fig:singleMobilityProfiles}}
\end{figure}
}
\newpage
For the translational and rotational response to forces in the outer leaflet, we find that the intermonolayer coupling smooths the flow over a larger scale within the inner leaflet.
We next consider, for fixed intramembrane viscosity, how the intermonolayer slip affects the flow. We see for a force acting on the outer leaflet that as the intermonolayer slip becomes small the flow within the inner leaflet approaches a rigid body rotation, see the bottom curve in the upper-right panel of Figure~\ref{fig:singleParticleFlowResponse}.
From an analysis of the hydrodynamic response equation~\ref{equ_Stokes_SPH_sol2_defA_ells}, we have two interesting cases for the modes of the inner leaflet: (i) $\ell = 1$ and (ii) $\ell > 1$. In the first case, the inner leaflet rotates as a rigid spherical shell and entrains the fluid trapped within to a rigid body motion. As a consequence, there is no traction stress with the external solvent fluid for the inner leaflet and no intramembrane shear stress. This means there are no other stresses acting against the intermonolayer drag, so $-\gamma(a_{\ell}^{-} - a_{\ell}^{+}) = 0$ and the inner leaflet matches the outer leaflet's rotation with $a_{\ell}^{-} = a_{\ell}^{+}$ for $\ell = 1$.
In the second case with $\ell > 1$, the intramembrane stress and traction stress balance the intermonolayer drag. In this case, the hydrodynamic modes of the inner leaflet scale in proportion to the intermonolayer slip and the modes of the outer leaflet. As the intermonolayer slip decreases, the modes $a_{\ell}^{-}$ of the inner leaflet become small for $\ell > 1$.
This can be seen mathematically from equation~\ref{equ_Stokes_SPH_sol2_defA_ells} where the inner leaflet modes satisfy $a_{\ell}^{-} = -\left(\left(A_2^{\ell}/\gamma\right) - 1 \right)^{-1} a_{\ell}^{+}$. This can be expressed as
\begin{eqnarray}
\nonumber
a_{\ell}^{-} = -\Pi_3\left(2 - \ell(\ell + 1) - \Pi_1^{-1}(\ell - 1) - \Pi_3 \right)^{-1} a_{\ell}^{+}
\end{eqnarray}
where for convenience we denote $\Pi_3 = \Pi_2^{-}/\Pi_1^{-}$. For $\ell = 1$ this shows $a_{\ell}^{-} = a_{\ell}^{+}$ independent of the magnitude of $\Pi_3 \neq 0$. For $\ell > 1$, as the intermonolayer slip becomes small with $\Pi_3 \ll 1$, the hydrodynamic response of the inner leaflet modes becomes small with $a_{\ell}^{-} \ll 1$. This shows that the resulting hydrodynamic responses in the inner leaflet become dominated by the rigid rotation mode $\ell = 1$ for small intermonolayer slip. This can be seen in the upper-right panel of Figure~\ref{fig:singleMobilityProfiles}.
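This behaviour is easy to check numerically; a small sketch (ours, writing the nondimensional groups simply as \texttt{Pi1} and \texttt{Pi3}):
\begin{verbatim}
def inner_mode_ratio(l, Pi1, Pi3):
    # Ratio a_l^- / a_l^+ from the displayed relation above.
    return -Pi3 / (2.0 - l*(l + 1) - (l - 1)/Pi1 - Pi3)

assert inner_mode_ratio(1, 0.65, 1e-6) == 1.0  # rigid rotation mode
# inner_mode_ratio(5, 0.65, 0.01) is about 2.9e-4: for small slip the
# higher modes of the inner leaflet are strongly suppressed.
\end{verbatim}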
This has a number of interesting consequences for the motions of inclusion particles embedded within leaflets of spherical bilayers. From the different hydrodynamics of the two spherical shells, we have that for small intermonolayer slip the self-mobility and coupled-mobilities can result in large motions when forces or torques act on an inclusion particle within the inner leaflet. For small intermonolayer slip this arises since the rigid body mode $\ell = 1$ of the hydrodynamic response for the inner leaflet is not damped by the trapped solvent fluid but only by the weak intermonolayer coupling. This manifests in a near rigid rotation of the inner leaflet and a large self-mobility and coupled-mobility in response to an applied force or torque, see Figure~\ref{fig:slipAll}.
We remark that it is important to keep in mind that this behaviour arises when the forces applied to inclusion particles yield, for the inner leaflet, a force moment with non-zero net torque. This is what drives a significant hydrodynamic response for the rotation mode $\ell = 1$. In contrast, for a collection of inclusion particles with total force acting on the inner leaflet that yields zero net torque, the superposition of the particle hydrodynamic responses cancels for the $\ell = 1$ mode and the large motions of inclusions from the rigid shell rotation are suppressed. This means for inclusion particles embedded within spherical bilayers it is important to consider carefully the different ways forces and net torque act on the leaflets.
As the intermonolayer slip becomes large, the hydrodynamic flows within each of the two leaflets approach a common velocity. The self-mobilities of inclusion particles embedded in the inner and outer leaflets also approach a common value. It is interesting to note that the common value is not simply $1/2$ of the self-mobility for the uncoupled leaflets, see Figure~\ref{fig:slipAll}. This arises from the asymmetric way in which the leaflets couple to the external solvent. For the outer leaflet the solvent is within an unbounded domain exterior to the spherical shell. For the inner leaflet the solvent is within a bounded domain trapped interior to the spherical shell. As a consequence, we see there are different tractions acting on the inner and outer leaflets, see equation~\ref{equ_Stokes_SPH_sol2_defA_ells}. As we saw for the rigid rotation mode $\ell = 1$, there is no traction stress on the inner leaflet, since the solvent fluid rigidly rotates within the spherical shell, but there is traction stress from the solvent on the outer leaflet. For the other modes $\ell > 1$ there continue to be asymmetries in the strength of the traction stress. As a result, the mobility of inclusion particles depends on the particular leaflet in which they are embedded. In the large intermonolayer slip limit, the mobility is determined by a combination of these different solvent tractions from each of the leaflets.
When investigating the mobility of membrane inclusion particles, the finite spatial extent and curved geometry of the bilayer can result in important hydrodynamic effects not captured when treating the membrane as an infinite flat sheet. We remark that the key consideration is how large the spatial extent or curvature is relative to the Saffman-Delbr\"uck length $L_{SD}$. For a very large vesicle radius or small curvature, we of course expect to recover behaviours similar to the case of an infinite flat sheet. The interesting case is when the vesicle radius or membrane curvature yields a scale comparable to or smaller than the Saffman-Delbr\"uck length $L_{SD}$.
We show the self-mobility of an inclusion particle embedded in a membrane treated as an infinite flat sheet in Figure~\ref{fig:flatMembrane}. These results were obtained by solving in Fourier space the hydrodynamic flow in response to an applied force density following closely the analytic approach presented in~\cite{Saffman1976, Oppenheimer2009} and our method for computing the mobility tensor discussed in Section~\ref{sec:mobilityOperators}. We see significant differences compared to the mobility responses in spherical bilayers.
In the regime of a vesicle radius comparable to the Saffman-Delbr\"uck length $L_{SD}$, the finite spatial extent of the membrane and the topology can play an important role. For spherical leaflets, the mobility responses necessarily result in recirculation flows of the material within the finite leaflet. As we have seen, this can yield non-trivial behaviours in the coupling and provide possibly useful flow features for estimating viscosity, as discussed in Section~\ref{sec:vortices}.
In contrast, when treating the membrane as an infinite flat sheet, no vortex arises in the flows generated from single particle responses. The infinite flat sheet also no longer results in trapped fluid within an interior domain. The bulk solvent fluid is treated as occupying an effectively infinite domain on both sides of the bilayer. This results in more traction stress acting on the infinite flat sheet relative to the spherical shell, which reduces the self-mobility and the strength of the coupled mobilities. In particular, as the intramembrane viscosity increases, the rotational mode of the spherical case is no longer available and the self-mobility decays to zero, see Figure~\ref{fig:flatMembrane}. Our results show that significant differences can arise when treating the bilayer as an infinite flat sheet; the finite domain size and curved geometry of the bilayer must be taken into account when these length scales are comparable to the Saffman-Delbr\"uck length $L_{SD}$.
\clearpage
\newpage
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{fig_mobility_combined_visc_all.png}
\caption{Membrane Viscosity and Particle Mobility. For a torque applied to a particle embedded within either the inner or outer leaflet, we show as the membrane viscosity is varied the translational and rotational responses of inclusion particles embedded within the inner or outer leaflet. The intermonolayer slip is kept fixed at $\Pi_2 = \gamma/\gamma_0 = 4$. This is discussed in more detail in Section ~\ref{sec:mobilityResults}.
\label{fig:viscAll}}
\end{figure}
}
\clearpage
\newpage
\clearpage
\newpage
\multicolinterrupt{
\begin{figure}[H]
\centering
\includegraphics[width=0.99\columnwidth]{fig_mobility_combined_slip_all_sideSets.png}
\caption{Intermonolayer Slip and Particle Mobility. For a torque applied to a particle embedded within either the inner or outer leaflet, we show as the intermonolayer slip is varied the translational and rotational responses of inclusion particles embedded within the inner or outer leaflet. The membrane viscosity is kept fixed at $\Pi_1 = L/R = 0.13$. This is discussed in more detail in Section ~\ref{sec:mobilityResults}.
\label{fig:slipAll}}
\end{figure}
}
\clearpage
\newpage
\clearpage
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{fig_mobility_flatMembrane.png}
\caption{Mobility for Flat Membranes. For a force or torque applied to a particle embedded within a large flat membrane, we show the translational and rotational responses of inclusion particles as the intramembrane viscosity is varied. The results are normalized by the mobility response when $\Pi_1 = L/R = 0.13$. These results were obtained by solving in Fourier space the hydrodynamic flow in response to an applied force density, following closely the analytic approach presented in~\cite{Saffman1976, Oppenheimer2009} and our method for computing the mobility tensor discussed in Section~\ref{sec:mobilityOperators}.
\label{fig:flatMembrane}}
\end{figure}
\clearpage
\newpage
\subsection{Many-Particle Dynamics : Hydrodynamic Coupling and Diffusion}
\label{sec:manyParticleDynamics}
The collective dynamics of multiple particles within a spherical membrane can be modelled as
\begin{eqnarray}
\label{equ_full_BD_model}
&& \frac{d\mb{X}}{dt} = \mb{M} \mb{F} + k_B{T} \nabla\cdot \mb{M} + \mb{F}_{thm} \\
\nonumber
&& \langle \mb{F}_{thm}(s) \mb{F}_{thm}(t)^T\rangle = 2k_B{T}\mb{M}\delta(t - s).
\end{eqnarray}
The $\mb{X}$ denotes the collective particle configuration and $\mb{F}$ the collective forces acting on the particles. The mobility $\mb{M}$ is obtained from the hydrodynamic-coupling methods introduced in Section~\ref{sec:mobilityTensor}. The thermal fluctuations driving diffusion are accounted for by the drift $k_B{T} \nabla\cdot \mb{M}$ and the Gaussian random force $\mb{F}_{thm}(t)$ which is $\delta$-correlated in time with mean zero and covariance $\langle \mb{F}_{thm}(s) \mb{F}_{thm}(t)^T\rangle = 2k_B{T}\mb{M}\delta(t - s)$~\cite{AtzbergerTabak2015,AtzbergerSELM2011,Gardiner1985}. The equations for the particles are to be interpreted in the sense of Ito Calculus~\cite{Oksendal2000a,Gardiner1985}. The thermal drift term arises from the configuration-dependent correlations of the thermal fluctuations~\cite{AtzbergerTabak2015,AtzbergerSELM2011}.
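For concreteness, we give below a minimal Euler-Maruyama sketch in Python for integrating equation~\ref{equ_full_BD_model}. This is an illustration only and not the integrator used in our framework; the thermal drift $k_B{T}\nabla\cdot\mb{M}$ is approximated by centered finite differences, and \texttt{mobility(X)} is an assumed callable returning the dense mobility matrix at configuration \texttt{X}.
\begin{verbatim}
import numpy as np

def divergence_M(mobility, X, h=1e-5):
    # (div M)_i = sum_j dM_ij / dX_j, by centered finite differences.
    n = X.size
    div = np.zeros(n)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        div += (mobility(X + e)[:, j] - mobility(X - e)[:, j]) / (2.0*h)
    return div

def euler_maruyama_step(X, force, mobility, kT, dt, rng):
    M = mobility(X)
    drift = M @ force(X) + kT * divergence_M(mobility, X)
    # Thermal fluctuations with covariance 2 kT M dt via a Cholesky factor;
    # the small diagonal shift guards against round-off loss of definiteness.
    L = np.linalg.cholesky(M + 1e-12 * np.eye(M.shape[0]))
    noise = np.sqrt(2.0 * kT * dt) * (L @ rng.standard_normal(X.size))
    return X + dt * drift + noise
\end{verbatim}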
In this paper, we focus on how our approaches can be used for the collective hydrodynamics of particles within the membranes of spherical vesicles. We defer to a future paper the full use of our introduced methods for the entire stochastic Brownian-hydrodynamic model in equation~\ref{equ_full_BD_model}. As a demonstration of the introduced methods, we consider the specific case of $4$ particles that are actively attracted to a central particle located on the positive x-axis at the east pole. We consider the hydrodynamic flow and particle dynamics within the outer leaflet of the curved spherical bilayer. In addition to the $4$ attracting particles, we also consider $195$ passive tracer particles that are advected by the flow, see Figure~\ref{fig:multiparticleConfig}.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\columnwidth]{fig_multiparticleApp.png}
\caption{Many-particle Dynamics within a Spherical Lipid Bilayer Membrane. The inclusion particles are coupled through hydrodynamic flow both within the membrane bilayer and through the external solvent fluid. We show the hydrodynamic response in the case of a group of four inclusion particles attracted to a central particle. We also show the velocities of the other particles passively advected by the flow, which either move in the opposite direction or are swept along depending on their location relative to the attracting particles.
\label{fig:multiparticleConfig}}
\end{figure}
We see that the hydrodynamic coupling can result in interesting dynamics with the passive particles either moving in the opposite direction of the attracting particles or swept along depending on their relative location. This can be characterized by looking at the coupled mobility $M$ of the passively advected particles defined by $M = V/F_T$. The $V$ is the passive particle velocity, $F_T$ the total force acting on the attracted particles. We consider the responses in the circular section in the $yz$-plane of radius $r_0 = 0.5R$ centred about the x-axis near the east pole and in the circular section in the $xz$-plane of radius $r_0 = R$ about the center of the sphere, see Figure~\ref{fig:multiparticleConfig} and Figure~\ref{fig:diagramCrossSectionsCircle}. The parameters in these calculations are taken to be the same as in Table~\ref{table:defaultParams}.
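A small sketch (ours, assuming the passive particle positions and velocities have already been computed) of how the components $M_{\parallel}$ and $M_{\perp}$ can be extracted along such a circular section:
\begin{verbatim}
import numpy as np

def section_mobilities(positions, velocities, F_T,
                       axis=np.array([1.0, 0.0, 0.0])):
    # positions: (P, 3) passive particle locations on a circular section
    #   about `axis` (none lying on the axis itself);
    # velocities: (P, 3) their velocities; F_T: total force magnitude.
    t = np.cross(axis, positions)             # tangent of the circle
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    nrm = positions / np.linalg.norm(positions, axis=1, keepdims=True)
    p = np.cross(nrm, t)                      # in-surface perpendicular
    M_par = np.sum(velocities * t, axis=1) / F_T
    M_perp = np.sum(velocities * p, axis=1) / F_T
    return M_par, M_perp
\end{verbatim}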
We see from the $yz$-responses $M_x$ that for locations half-way between the attracted particles, the passive particles move in the opposite direction to the attracted particles. This change in direction is a consequence of the incompressibility of the fluid, which results in an out-flow to compensate for the in-flow toward the east pole generated by the attracted particles, see Figure~\ref{fig:multiparticleApp}.
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{fig_diagram_crossSections_circle2.png}
\caption{Cross Sections of the Sphere and Conventions. We consider the hydrodynamic responses of the inclusion particles on two cross-sections of the sphere. The first is the great circle obtained by intersecting the sphere with the $xz$-plane. The second is the circle of radius $r_0$ on the sphere surface parallel to the $yz$-plane. For forces applied to the four attracting inclusion particles, we consider the motions of the other inclusion particles as characterized by the mobility components parallel and perpendicular to the tangent of the respective cross-section curves. We parametrise the $xz$-section using angle $\theta$ with $0$ corresponding to the location $(x,y,z) = (1,0,0)$ and the $yz$-section using angle $\theta$ with $0$ for location $(x,y,z) = (0,0,r_0)$.
\label{fig:diagramCrossSectionsCircle}}
\end{figure}
We also see this manifest in the $yz$-responses $M_{\parallel}$ which are out of phase with $M_x$ reflecting that the passive particles move laterally toward the out-flow half-way between the attracted particles. The $xz$-responses correspond to passive particle motions when located on the same great circle in the $xz$-plane as two of the attracted particles. We see in these responses that the passive particles always move toward the attracting particle at the east pole, see bottom panel of Figure~\ref{fig:multiparticleApp}.
These results indicate some of the rich dynamics that can arise from hydrodynamic coupling even for relatively simple configurations of particles and force laws. The analytic approaches and computational methods we have introduced for the collective mobility tensor $M$ allow for incorporating such effects into simulations of many-particle dynamics within spherical lipid bilayers. Many of the approaches we have introduced can also be extended for more general curved bilayers.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{fig_multiParticleMobility.png}
\caption{Multi-particle Mobility. We show the location-dependent mobility of the passively advected particles in response to the hydrodynamic coupling to the four attracting particles. We show $M = V/F_T$, where $V$ is the particle velocity and $F_T$ the total force;
$M_\parallel$ denotes the mobility component tangent to the circular section. The $yz$ indicates the circular section in the $yz$-plane of radius $r_0 = 0.5R$ about the east pole, and $xz$ indicates the circular section in the $xz$-plane of radius $r_0 = R$ about the sphere center, see Figure~\ref{fig:diagramCrossSectionsCircle}. In the response, depending on the position, the passive particles either move in the opposite direction or are swept along with the attracting particles. The maximum response $M_0$ corresponds to the self-mobility of each of the attracting particles.
\label{fig:multiparticleApp}}
\end{figure}
\section{Conclusions}
We have investigated the hydrodynamics of inclusion particles embedded within curved lipid bilayers. We have performed an extensive study of the hydrodynamic flows and mobility responses within spherical bilayers. We have studied both forces and torques applied to inclusion particles embedded within the distinct inner and outer leaflets of the bilayer. We have found significant differences relative to the case when the membrane is treated as a single infinite flat sheet. We have found that the interplay between curvature, topology, and the two-leaflet bilayer structure can yield interesting effects in the hydrodynamics. We found for spherical bilayers that the difference between the infinite exterior solvent domain and the finite interior solvent domain results in different traction stresses acting on each of the leaflets. We found this can have significant consequences for the mobility of inclusion particles, especially when embedded within the inner leaflet. We further found that the intermonolayer slip can play an interesting role, with flow regimes where inclusion particles at the same location but in different leaflets can move in opposing directions in response to the lipid flows generated by other inclusion particles. Our results show the rich individual and collective dynamics that can arise for inclusions within bilayers of spherical vesicles. Many of our analytic approaches and computational methods can also be extended to study inclusions embedded within more general curved lipid bilayer membranes.
\section{Acknowledgements}
The authors P.J.A. and J.K.S. acknowledge support from research grants
NSF CAREER DMS-0956210 and DOE ASCR CM4 DE-SC0009254.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
\setcounter{equation}{0}
It is well-known that Gromov-Witten theory can be used to answer questions in enumerative geometry. This dates back to Kontsevich's celebrated recursion for the number $N_d$ of rational plane curves passing through $3d-1$ general points, valid for $d\ge 2$ (the sum runs over $k,\ell\ge 1$):
$$N_d=\sum_{k+\ell=d} N_kN_{\ell} k^2\ell\left[\ell\ch{3d-4}{3k-2}-k\ch{3d-4}{3k-1}\right].$$
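The recursion is straightforward to implement. As an illustration (the code is ours and merely tabulates the formula), the following short Python sketch reproduces the familiar values $N_1=1$, $N_2=1$, $N_3=12$, $N_4=620$, $N_5=87304$.
\begin{verbatim}
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(d):
    # Rational plane curves of degree d through 3d-1 general points.
    if d == 1:
        return 1  # a unique line through two general points
    return sum(N(k) * N(d - k) * k**2 * (d - k)
               * ((d - k) * comb(3*d - 4, 3*k - 2)
                  - k * comb(3*d - 4, 3*k - 1))
               for k in range(1, d))
\end{verbatim}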
In this paper, we follow up on the results of \cite{stacks,GWinvs} by showing that the Gromov-Witten invariants of a certain Deligne-Mumford stack, $\mathbb{P}^2_{E,2}$, are enumerative. Here $E\subset\mathbb{P}^2$ is a smooth cubic, and $\mathbb{P}^2_{E,2}$ is obtained from $\mathbb{P}^2$ by applying the square root construction along $E$, which was defined in \cite{stacks}. The main property of this stack which makes it interesting for enumerative geometry is that a morphism $f:C\to\mathbb{P}^2$ from a smooth connected curve $C$ such that $f(C)\not\subset E$ lifts to a morphism $C\to\mathbb{P}^2_{E,2}$ if and only if $f^*E$ has even multiplicity at every point. This leads to the idea that the stack of twisted stable maps to $\mathbb{P}^2_{E,2}$ looks like a stack of maps to $\mathbb{P}^2$ with tangency conditions imposed. This was verified in \cite{stacks}.
In \cite{CH}, Caporaso and Harris found a recursion which computes the number of plane curves of genus $g$ passing through a specified number of points and having certain contact conditions with respect to a line. These contact conditions come in the form of $k$th order contacts at specified points of the line or at unspecified points. This was generalized by Vakil \cite{Va}, who solved the corresponding problem with the line replaced by a smooth rational curve on a rational surface (see [ibid] for hypotheses). In particular, he solved the problem for contacts with a smooth plane conic. This paper is a step towards solving the problem for a smooth plane cubic $E$.
We answer the following question: given integers $a,b,c,d$ satisfying appropriate conditions for the question to be sensible, how many rational degree $d$ curves meet $E$ in $a$ specified order $1$ contacts, $b$ specified order $2$ contacts, and $c$ unspecified order $2$ contacts, and pass through $3d-1-a-2b-c$ general points in $\mathbb{P}^2$? Here by a specified contact, we mean that we have chosen a general point of $E$ and require the curve to pass through this point and have a certain order of contact there. By an unspecified contact, we mean that the contact occurs at an arbitrary point distinct from the specified points. We require all these contacts to occur at smooth points of the rational curve. We show that for general choices of points, the rational curves satisfying these conditions have at worst nodal singularities, none of which lie on $E$. In section \ref{sec:kontsevich} we denote this number by $N_d(a,b,c)$, and the solution is given in equations \ref{eq:main_1}-\ref{eq:main_3}. A Maple program implementing this recursion can be downloaded from the author's homepage.
In section \ref{sec:def}, we begin by recalling facts that we proved elsewhere. Then we use some results of Caporaso and Harris to prove some nice facts about the stack of twisted stable maps into $\mathbb{P}^2_{D,r}$. Finally, we discuss the relation between deformations of a map from a twisted curve having a separating node with deformations of the maps from the two curves obtained by normalizing the node. This is essential to prove enumerativity of the Gromov-Witten invariants.
In section \ref{sec:cap_harris}, we show that the number of curves with specified contacts to a smooth curve in $\mathbb{P}^2$ equals an intersection number on the stack of twisted stable maps. Then we give a relation between these numbers which was inspired by the recursion of Caporaso and Harris \cite{CH}. In fact, it is the formula one would expect to get if the curves being deformed were not allowed to break into pieces. In our case they cannot, because a rational curve cannot map onto a curve of positive genus.
In section \ref{sec:kontsevich}, we derive the main result modulo the proof of enumerativity, which is given in section \ref{sec:enumerative}.
Several results in this paper can easily be generalized, but in the interest of efficiency we restrict attention to plane curves.
\smallskip
\noindent{\bf Notation and Conventions}
As is common in enumerative geometry, we work over $\mathbb{C}$. Throughout this paper, $D$ denotes a smooth plane curve of degree $\delta\ge 3$.
\smallskip
\noindent{\bf Acknowledgements}
The fact that certain twisted Gromov-Witten invariants are enumerative is part of my Ph.D. thesis at Columbia University under the supervision of Michael Thaddeus. This paper extends the enumerative result from my thesis by allowing tangencies at arbitrary points, and also verifies a conjecture contained therein.
\section{Deforming morphisms from twisted curves into $\mathbb{P}^2_{D,r}$}
\label{sec:def}
\setcounter{equation}{0}
In this section we review some facts about twisted stable maps into $X_{D,r}$ and then study their deformations. When the domain is smooth and does not map into $D$, a twisted stable map is equivalent to an ordinary map with tangency conditions to $D$ imposed. Much of what we need to understand deformations in this case is already worked out in \cite{CH}, and we review their results. We also study deformations of a twisted stable map preserving a node.
\subsection{Review of twisted stable maps}
Twisted stable maps were defined by Abramovich and Vistoli in \cite{AV}. We are interested in twisted stable maps into a particular kind of stack. Let $D\subset\mathbb{P}^2$ be a smooth curve of degree $\delta\ge 3$ and let $r$ be a positive integer. In \cite{stacks}, we introduced the $r$th root construction, which produces a Deligne-Mumford stack $\bP^2_{D,r}$. Locally it is the quotient of a cyclic $r$ sheeted covering of $\mathbb{P}^2$ which is totally ramified along $D$ by the $\mu_r$-action \cite[2.15]{stacks}. In \cite[\S 2.1]{GWinvs}, it is shown that there is a discrete invariant on the stack of twisted stable maps into $\bP^2_{D,r}$ called the contact type. If $n$ is the number of marked points, then the contact type is an $n$-tuple of integers between $0$ and $r-1$. The contact type allows for a nice characterization of maps from smooth twisted curves into $\bP^2_{D,r}$.
The following proposition is a consequence of \cite[3.9]{stacks}. Note that since we assumed $\delta\ge 3$, no rational curve can map onto $D$.
\begin{contact_type}
\label{th:contact_type}
Let $\mathfrak{C}$ be a smooth, $n$-marked, genus $0$ twisted curve over a scheme $S$ and let $\mathfrak{f}:\mathfrak{C}\to\bP^2_{D,r}$ be a twisted stable map of positive degree and contact type $\vec{\varrho}$. Let $C$ be the coarse moduli space of $\mathfrak{C}$ with induced markings $\sigma_i\subset C$, and let $f:C\to\mathbb{P}^2$ be induced by $\mathfrak{f}$. Then there is an effective Cartier divisor $Z\subset C$ such that
\begin{equation}
\label{eq:contact_cond}
f^*D=rZ+\sum_{i=1}^n\varrho_i\sigma_i.
\end{equation}
Moreover, given a morphism $f:C\to\mathbb{P}^2$ and an effective Cartier divisor $Z\subset C$, there is a unique (up to isomorphism) twisted curve $\mathfrak{C}$ with coarse moduli space $C$ and a unique twisted stable map $\mathfrak{f}:\mathfrak{C}\to\bP^2_{D,r}$ with contact type $\vec{\varrho}$ which induces $f$.
\end{contact_type}
The expected dimension of the stack $\mathscr{K}_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ is
\begin{equation}
\label{eq:edim}
d(3-\delta)+\frac{1}{r}(d\delta - \sum \varrho_i) + n - 1.
\end{equation}
This was computed in equation 3.7 of \cite{GWinvs}.
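This dimension count is elementary to tabulate; for bookkeeping purposes, a small Python sketch (ours):
\begin{verbatim}
from fractions import Fraction

def expected_dimension(d, delta, r, rho):
    # d(3 - delta) + (d*delta - sum(rho))/r + n - 1, where n = len(rho).
    # The result is an integer whenever d*delta = sum(rho) mod r.
    return d*(3 - delta) + Fraction(d*delta - sum(rho), r) + len(rho) - 1
\end{verbatim}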
\begin{nice_maps}
\label{th:nice_maps}
We define $\mathscr{K}^{*}_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ as follows. Define an irreducible component of $\mathscr{K}_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ to be \emph{good} if the general map in that component has a smooth source curve and maps birationally onto its image. Let $\mathscr{U}\subset\mathscr{K}_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ be the open substack obtained by removing all the bad irreducible components, and let $\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ be its stack-theoretic closure.
\end{nice_maps}
Note that $\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ has an open dense substack which is a scheme. Indeed, the general map in each irreducible component has no automorphisms, and $\mathscr{K}_{0,n}(\bP^2_{D,r},d)$ has a projective coarse moduli scheme \cite[1.4.1]{AV}.
\subsection{Maps from smooth twisted curves}
We begin with an easy lemma about certain types of infinitesimal deformations. Let $X$ be a nonsingular curve, let $Y$ be a nonsingular surface, let $f:X\to Y$ be a morphism whose differential $df:T_X\to f^*T_Y$ is an injection of sheaves, and let $D\subset Y$ be a nonsingular curve. Let $\mathcal{N}$ be the cokernel of $df$, and let $f^*D=\sum_{i=1}^n \varrho_ip_i$, where $p_i\in X$ are distinct points and $\varrho_i>0$. Assume that $df$ is injective on fibers at each $p_i$.
\begin{inf_def}
\label{th:inf_def}
The first order infinitesimal deformations of $f$ which fix $Y$ (but not necessarily $X$) and preserve the multiplicities $\varrho_i$ (but not the points $p_i$) are naturally in bijection with $H^0(X,\mathcal{N}(-\sum (\varrho_i-1)p_i))$.
\end{inf_def}
\emph{Proof:} First we recall the construction which identifies first order deformations of $f$ fixing $Y$ with $H^0(X,\mathcal{N})$. Let $\mathbb{I}=\mathrm{Spec}\; k[\epsilon]/(\epsilon^2)$, and suppose we have the following commutative diagram.
$$\xymatrix{
& Y \\
X \ar[ur]^f \ar@{^{(}->}[r] \ar[d] \ar@{}[dr]|{\Box} & \mathcal{X} \ar[d] \ar[u]_F \\
\mathrm{Spec}\; k \ar@{^{(}->}[r] & \mathbb{I}}$$
Cover $X$ by affines $U_i$. Since $X$ and $\mathcal{X}$ have the same underlying topological space, we get an open covering of $\mathcal{X}$ by affines $\mathcal{U}_i$ so that $U_i$ embeds into $\mathcal{U}_i$ as $(\mathcal{U}_i)_{\mathrm{red}}$. Since nonsingular affine varieties have no nontrivial first order deformations, there are isomorphisms $\varphi_i:U_i\times\mathbb{I}\to\mathcal{U}_i$. On the overlaps $U_{ij}$, $\varphi_j^{-1}\circ\varphi_i$ determines a derivation $\alpha_{ij}\in H^0(U_{ij},T_{U_{ij}})$. These form a 1-cocycle, and hence determine an element $\alpha\in H^1(X,T_{X})$.
On $U_i$, the morphisms $F$ and $\varphi_i$ determine a morphism $\psi_i:U_i\times\mathbb{I}\to Y$, and hence a derivation $\beta_i\in H^0(U_i,f^*T_Y\vert_{U_i})$. The commutative diagram
$$\xymatrix{
U_{ij}\times\mathbb{I} \ar[r]^{\varphi_i} \ar[dr]_{\psi_i} & \mathcal{U}_{ij} \ar[r]^{\varphi_j^{-1}} & U_{ij}\times\mathbb{I} \ar[dl]^{\psi_j} \\
& Y &}$$
shows that on $U_{ij}$ we have $\beta_i=df(\alpha_{ij})+\beta_j$. It follows that the $\beta_i$ glue to give an element $\beta\in H^0(X,\mathcal{N})$. This is independent of the choices and identifies the first order deformations with $H^0(X,\mathcal{N})$.
We need a necessary and sufficient condition for the first order deformation corresponding to $\beta$ to preserve the multiplicities of $f^*D$. For this we can reduce to the affine situation $U_i=\mathrm{Spec}\; S\to \mathrm{Spec}\; R\subset Y$ and assume that $f^*D=\varrho p$ for some $p\in U_i$. Let $r\in R$ be a local equation for $D$ and $s\in S$ a local equation for $p$ so that $f^*r=us^{\varrho}$ with $u\in S$ a unit. The derivation $\beta_i\in\mathrm{Der}_{\mathbb{C}}(R,S)$ sends $r$ to $f^*r+\beta_i(r)\epsilon$. This defines a multiplicity $\varrho$ divisor if and only if there are elements $v,w,x,y\in S$ with $v$ a unit such that $$us^{\varrho}+\beta_i(r)\epsilon = (v+w\epsilon)(x+y\epsilon)^{\varrho}.$$ Expanding the right side gives $vx^{\varrho}+(wx^{\varrho}+\varrho vx^{\varrho-1}y)\epsilon$; matching the $\epsilon^0$ terms shows $x$ is a unit multiple of $s$, and then matching the $\epsilon$ terms shows this is equivalent to $\beta_i(r)$ being an element of the ideal generated by $s^{\varrho-1}$. Now assume $\varrho>1$ since otherwise no conditions are being imposed. Since $X$ is a curve, $Y$ is a surface, and $df$ is injective at $p$, the condition on $\beta_i$ says precisely that the image of $\beta_i$ in $H^0(U_i,\mathcal{N}\vert_{U_i})$ vanishes to order at least $\varrho-1$ at $p$. This finishes the proof.\hfill $\Box$\smallskip
Proposition \ref{th:contact_type} implies that the deformation theory of twisted stable maps $\mathfrak{C}\to\bP^2_{D,r}$ from smooth twisted genus $0$ curves $\mathfrak{C}$ is equivalent to the deformation theory of maps $C\to\mathbb{P}^2$ from smooth rational marked curves $C$ with contact conditions imposed at the markings. This explains the importance of the above lemma, as well as the following two results of Caporaso and Harris.
Let $\pi:C\to B$ be a smooth, proper family of connected curves over a smooth base $B$, let $f:C\to\mathbb{P}^2$ be a morphism, and let $b\in B$ be a general point. Assume that no fiber of $\pi$ maps to a point under $f$. Let $\mathcal{N}_b$ be the cokernel of the differential $df_b:T_{C_b}\to f_b^*T_{\mathbb{P}^2}$, which is injective by hypothesis. We have a morphism $\kappa_b:T_b B\to H^0(C_b,\mathcal{N}_b)$ induced by the family of morphisms $C\to B\times\mathbb{P}^2$, and this is often called the Horikawa map in light of Horikawa's foundational work \cite{Ho}.
\begin{Cap_Harris_1}
\label{th:Cap_Harris_1}
{\bf \cite[2.3]{CH}} Let $b\in B$ be a general point and assume that $f_b$ maps $C_b$ birationally onto its image. Then $$\mathrm{Im}(\kappa_b)\cap H^0(C_b,(\mathcal{N}_b)_\mathrm{tors})=0.$$
\end{Cap_Harris_1}
Let $Q\subset C$ be the image of a section of $\pi$ such that $f^*D$ has multiplicity $m$ along $Q$. Let $q=Q\cap C_b$. Let $\ell-1$ be the order of vanishing of $df_b$ at $q$.
\begin{Cap_Harris_2}
\label{th:Cap_Harris_2}
{\bf \cite[2.6]{CH}} For any $v\in T_bB$, the image of $\kappa_b(v)$ in $H^0(C_b,\mathcal{N}_b/(\mathcal{N}_b)_\mathrm{tors})$ vanishes to at least order $m-\ell$ at $q$ and cannot vanish to any order $k$ with $m-\ell<k<m$. If $f(Q)$ is a point, then it vanishes to order at least $m$ at $q$.
\end{Cap_Harris_2}
We combine these results with the existence of a perfect obstruction theory \cite[\S 3.1]{GWinvs} to prove the following.
\begin{gen_smooth}
\label{th:gen_smooth}
Let $e$ be the expected dimension of $\mathscr{K}_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$, which is given by equation \ref{eq:edim}, and assume $e>0$, $d>0$, and $\varrho_i>0$ for all $i$. The substack $\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ is generically smooth of dimension $e$. At a general point of any irreducible component of $\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$, the corresponding map $f:C\to\mathbb{P}^2$ of coarse moduli spaces satisfies the following.
\begin{enumerate}
\item The multiplicity of $f^*D$ at the $i$th marked point is $\varrho_i$.
\item There are precisely $(d\delta-\sum\varrho_i)/r$ points where $f^*D$ has multiplicity $r$.
\item If $e\ge 3$, then $f(C)$ has at worst nodal singularities, none of which lie on $D$.
\end{enumerate}
Moreover, for any irreducible component $\mathscr{K}\subset\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$, every evaluation map $\mathscr{K}\to D$ corresponding to a twisted marking is surjective.
\end{gen_smooth}
\emph{Proof:} Let $\mathscr{K}\subset\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ be an irreducible component, given the reduced induced structure. Let $B\subset\mathscr{K}$ be a representable, smooth, dense open substack. Such a substack exists because the general stable map in $\mathscr{K}$ has no automorphisms. Let $b\in B$ be a general point. Let $\pi:C\to B$ be the coarse moduli space of the universal twisted curve over $B$, and let $f:C\to\mathbb{P}^2$ be the morphism induced by the universal morphism into $\bP^2_{D,r}$. Let $f_b:C_b\to\mathbb{P}^2$ be the restriction to the fiber over $b$. Let $t$ be the number of points in the support of $f_b^*D$ and let their multiplicities be $m_i$.
We have the following.
\begin{equation}
\label{eq:inequalities}
e\le\dim T_bB\le 3d-1-\sum(m_i-1)=3d-1-d\delta+t\le e
\end{equation}
The first inequality follows from the general fact that the dimension of a stack is greater than or equal to its expected dimension under a perfect obstruction theory. For the second inequality, note that $T_bB$ injects into $H^0(C_b,\mathcal{N}_b)$, that $C_b$ is rational, and that the degree of $\mathcal{N}_b$ is $3d-2$. Then apply Lemmas \ref{th:Cap_Harris_1} and \ref{th:Cap_Harris_2} (note that we use $e>0$ here). The final inequality follows from equation \ref{eq:edim} since the multiplicities of $f^*D$ are bounded below by either the contact types $\varrho_i$ or by $r$.
So each inequality in equation \ref{eq:inequalities} must be an equality. From the third inequality, we deduce that $t=n+(d\delta-\sum\varrho_i)/r$ which implies statements 1 and 2. From the second inequality, we deduce that $(\mathcal{N}_b)_{\mathrm{tors}}=0$. Combining these facts with Lemma \ref{th:inf_def}, it follows that $\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$ is generically smooth of dimension $e$.
We have shown that the first order deformations of $f_b:C_b\to\mathbb{P}^2$ corresponding to vectors in $T_bB$ are the sections of $\mathcal{N}_b$ which vanish to order $\varrho_i-1$ at a marked point and to order $r-1$ at an unmarked point in $f_b^{-1}D$ (we call these sections \emph{admissible}) and that any such first order deformation extends to a deformation over a curve. Surjectivity of the evaluation maps $\mathscr{K}\to D$ now follows from the fact that a section of $\mathcal{N}_b$ vanishing to order $\varrho_i-1$ at the $i$th marked point cannot fix its image by Lemma \ref{th:Cap_Harris_2}.
For statement 3, since $\mathcal{N}_b$ has no torsion it suffices to show that at most two points of $C_b$ map to the same point in $\mathbb{P}^2$, that their tangent directions are distinct, and that at most one point maps to the same point of $D$. For the latter, if two points map to the same point of $D$, with orders of contact $m_1$ and $m_2$, take any admissible section of $\mathcal{N}_b$ vanishing to order $m_1-1$ at the first and $m_2$ at the second (we use $e\ge 2$). The corresponding deformation separates the two points by Lemma \ref{th:Cap_Harris_2}. For the former, if three points map to the same point of $\mathbb{P}^2$, then since $e\ge 3$ there is an admissible section vanishing at two of the points and not the third. Finally, if two points map with the same tangent direction, take any admissible section which vanishes at one of the points but not the other.\hfill $\Box$\smallskip
\subsection{Maps from nodal twisted curves}
\label{sec:nodal}
In this subsection, we discuss deformations of twisted stable maps which preserve a node. This is only needed for section \ref{sec:enumerative}, where it is shown that the Gromov-Witten invariants of $\mathbb{P}^2_{D,2}$, which were computed in \cite{GWinvs}, are enumerative if $D$ is a smooth cubic.
Recall some facts about stable maps into a scheme. An arbitrary source curve can be described as the result of identifying pairs of points on a smooth proper curve $C$ such that the resulting curve $C'$ is connected. For any scheme $X$, a morphism $C'\to X$ is the same as a morphism $C\to X$ which sends each pair of identified points to the same point. In other words, $C'$ satisfies a universal property with respect to $C$. For curves $C$ over an arbitrary base scheme $S$ together with two disjoint sections $s_1,s_2:S\to C$ in the smooth locus of $C\to S$, there is a clutching construction which produces a curve $C'$ over $S$ and an $S$-morphism $p:C\to C'$ which is universal for morphisms of schemes $f:C\to X$ such that $f\circ s_1=f\circ s_2$ \cite[\S 3]{Kn}. Because of this, it is clear how to deform a stable map while preserving a node in the source curve.
Such a construction is required for twisted stable maps. Given a (possibly disconnected) twisted curve $\mathfrak{C}\to S$, an \'etale gerbe $\Sigma\to S$ and two closed embeddings $s_1,s_2:\Sigma\to\mathfrak{C}$ in the smooth locus of $\mathfrak{C}\to S$, it would produce (at least \'etale locally on $S$) a twisted curve $\mathfrak{C}'$ over $S$, an $S$-morphism $p:\mathfrak{C}\to\mathfrak{C}'$, and a 2-morphism $\alpha:p\circ s_1\Rightarrow p\circ s_2$ such that the pair $(p,\alpha)$ satisfies a universal property. To construct $\mathfrak{C}'$, one can use Olsson's description of twisted curves \cite{Ol_twisted}. So it remains to prove the universal property in some 2-category (which hopefully contains smooth Deligne-Mumford stacks of finite type over $\mathbb{C}$).
While this is undoubtedly the ``right'' way to approach deformations of twisted curves preserving a node, we adopt a more economical approach here. Since our application is to genus $0$ stable maps, we only treat disconnecting nodes, and we use the fact that $\bP^2_{D,r}$ has an open cover by substacks which are global quotients of a scheme by $\boldsymbol{\mu}_r$ \cite[2.15]{stacks}.
\smallskip
\noindent{\bf Classification of twisted curves.\ } An $n$-marked twisted curve over $\mathbb{C}$ is determined up to isomorphism by a projective, connected curve $C$ having at worst nodal singularities, a finite set of distinct smooth points $p_1,\ldots,p_n\in C$ (the markings), and a labelling of each marked point and node by a positive integer. One could construct the corresponding twisted curve by gluing open substacks which \'etale locally look like
\begin{enumerate}
\item $[\mathrm{Spec}\; \mathbb{C}[x]/\boldsymbol{\mu}_r]$, $t\cdot x=t^{-1}x$, at a marked point and
\item $[\mathrm{Spec}\; \mathbb{C}[x,y]/(xy)/\boldsymbol{\mu}_r]$, $t\cdot x=t^{-1}x$, $t\cdot y=ty$, at a node \cite[\S 2.1]{AV}.
\end{enumerate}
This classification also follows from \cite[1.8]{Ol_twisted}.
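For instance (to sketch the simplest case), for $r=2$ the local model at a marked point is $[\mathrm{Spec}\; \mathbb{C}[x]/\boldsymbol{\mu}_2]$, where the nontrivial element acts by $x\mapsto -x$; its coarse moduli space is $\mathrm{Spec}\; \mathbb{C}[x^2]\cong\mathbb{A}^1$, and the marking is the image of the fixed point $x=0$.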
\smallskip
\noindent{\bf Normalization of a node.\ } Let $\mathfrak{C}$ be a twisted curve over $\mathbb{C}$ with coarse moduli space $C$. For any node $x\in C$, there is a twisted curve $\tilde{\mathfrak{C}}$ which normalizes $x$. In general, the normalization of a reduced stack is defined by taking a groupoid presentation $R\rightrightarrows U$ and normalizing both $R$ and $U$ \cite[1.18]{Vi}. If $R\rightrightarrows U$ is a presentation of $\mathfrak{C}$, then $\tilde{\mathfrak{C}}$ is obtained by normalizing only the preimages of $x$ in $R$ and $U$.
\smallskip
We recall some facts about ordinary prestable curves. Let $B$ be a variety and let $\pi_i:C_i\to B$, for $i=1,2$, be two prestable curves over $B$ with sections $s_i:B\to C_i$ which don't meet the singular locus of the projections $\pi_i$. Then there is a prestable curve $\pi:C\to B$ and $B$-morphisms $p_i:C_i\to C$ such that
\begin{enumerate}
\item $p_1\circ s_1=p_2\circ s_2$,
\item if $X$ is any scheme and $f_i:C_i\to X$ are any morphisms such that $f_1\circ s_1=f_2\circ s_2$, then there is a unique morphism $f:C\to X$ such that $f\circ p_i=f_i$,
\item and for every geometric point $b\in B$, $\pi_1^{-1}(b)\sqcup\pi_2^{-1}(b)\to\pi^{-1}(b)$ is the normalization of the node $p_1(s_1(b))$.
\end{enumerate}
Moreover, $C$ is unique up to isomorphism, and we say it is obtained by gluing $C_1$ and $C_2$ along $s_1$ and $s_2$. This follows from \cite[3.4]{Kn}. Conversely, if $\pi:C\to B$ is a prestable curve such that the singular locus of $\pi$ contains a connected closed subscheme $\Delta\subset C$ which dominates $B$, then there is a finite surjective morphism $\tilde{B}\to B$, two prestable curves $C_1$ and $C_2$ over $\tilde{B}$ and sections $s_i:\tilde{B}\to C_i$ in the smooth loci of the projections, such that $C\times_B\tilde{B}$ is isomorphic to the curve obtained by gluing $C_1$ and $C_2$ along $s_1$ and $s_2$. This follows from \cite[3.7]{Kn}.
\smallskip
Using these results, we can classify deformations of a twisted stable map that preserve a node. First, a word about what is meant by this. Ultimately, we want to know the dimension of an irreducible component of the space of twisted stable maps whose general source curve has a node, in terms of the dimension of the two irreducible components one obtains by separating this node. So we do not make things very precise, and in fact one must often take a base change or an open covering to go between the two types of deformations.
Let $\mathfrak{C}$ be a connected balanced twisted curve over $\mathbb{C}$, let $C$ be its coarse moduli space, and let $x\in C$ be a separating node. Let $F:\mathfrak{C}\to\bP^2_{D,r}$ be a representable morphism and let $f:C\to\mathbb{P}^2$ be the induced morphism. Let $\mathfrak{C}_1$ and $\mathfrak{C}_2$ be the connected components of the normalization of $\mathfrak{C}$ at $x$, let $C_i$ be the coarse moduli space of $\mathfrak{C}_i$, let $x_i\in C_i$ be the preimage of $x$ (viewed as a marked point), let $F_i:\mathfrak{C}_i\to\bP^2_{D,r}$ and $f_i:C_i\to\mathbb{P}^2$ be the induced morphisms, and let $\varrho_i$ be the contact type of $F_i$ at $x_i$. If $f(x)\not\in D$, then the node is untwisted in $\mathfrak{C}$, and it follows from Knudsen's results that a deformation of $F$ which preserves the node $x$ and the condition that $f(x)\not\in D$ is equivalent to a pair of deformations of $F_1$ and $F_2$ which preserve the condition $f_1(x_1)=f_2(x_2)\not\in D$.
Now suppose that $f(x)\in D$. Then the contact types must be complementary---$\varrho_1+\varrho_2\equiv 0\;(\mathrm{mod\ } r)$---because of the balanced condition. We claim that a deformation of $F$ which preserves the node $x$ and the condition $f(x)\in D$ is equivalent to a pair of deformations of $F_1$ and $F_2$ which preserve the condition $f_1(x_1)=f_2(x_2)\in D$. To see this, let $U\subset\mathbb{P}^2$ be an open set containing $f(x)$ such that $\mathcal{O}_{\mathbb{P}^2}(D)$ is trivial on $U$ and let $P\to U$ be an $r$-sheeted cyclic covering which is totally ramified along $D\cap U$ (this is obtained by taking the $r$th root multisection of the tautological section vanishing on $D$). Then $\boldsymbol{\mu}_r$ acts on $P$ and the stack quotient $[P/\boldsymbol{\mu}_r]$ is canonically identified with $\bP^2_{D,r}\times_{\mathbb{P}^2} U$ (see \cite[2.4,2.15]{stacks}). Given a deformation of $F$ to a family of maps $$(\pi,\tilde{F}):\tilde{\mathfrak{C}}\to B\times\bP^2_{D,r}$$ which has a closed substack $\Delta\subset\tilde{\mathfrak{C}}$ contained in the singular locus of $\pi$ and containing $x$, let $V\subset\tilde{\mathfrak{C}}$ be the preimage of $U$. The fiber product $V\times_{\bP^2_{D,r}}P$ is a curve over $B$ since $\tilde{F}$ is representable. One obtains the deformations of $F_1$ and $F_2$ by normalizing the preimage of $\Delta$ in $V\times_{\bP^2_{D,r}}P$ and taking stack quotients by $\boldsymbol{\mu}_r$.
Conversely, if we are given deformations of $F_1$ and $F_2$ over a variety $B$, then we take analogous fiber products and simply need to glue the resulting curves to form nodes. So we have curves $\tilde{C}_i$ over $B$ and subschemes $\sigma_i\subset\tilde{C}_i$. These subschemes are finite and \'etale over $B$ of degree equal to the greatest common divisor of $r$ and $\varrho_1$. In fact, they are the total spaces of principal bundles over $B$ having cyclic structure group. So after an \'etale base change on $B$ we can assume they are trivial. On the fibers over the special point $b\in B$ (corresponding to the original map $F$) we are given an identification of $\sigma_1$ with $\sigma_2$. This extends uniquely to an isomorphism $\sigma_1\to\sigma_2$ over $B$. Thus the curves can be glued to a curve $\tilde{C}$ using Knudsen's construction. Since we assumed that the condition $f_1(x_1)\in D$ is preserved by the deformation, and since $P\to U$ is totally ramified over $D$, the morphisms $\tilde{C}_i\to P$ extend to a morphism $\tilde{C}\to P$. Now taking the stack quotient of $\tilde{C}$ yields the required deformation of $F$.
\section{Intersection numbers on \boldmath $\mathscr{K}_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$}
\label{sec:cap_harris}
\setcounter{equation}{0}
In this section we prove an analogue of a formula of Caporaso and Harris for rational plane curves having prescribed tangencies with a smooth plane curve of positive genus. Throughout this section we fix a plane curve $D$ of degree $\delta$, positive integers $r$ and $d$, and an $n$-tuple of integers $\varrho_1,\ldots,\varrho_n$ such that $1\le\varrho_i\le r$ and $\sum\varrho_i = d\delta$. Let $\mathscr{K}=\mathscr{K}_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$.
Let $A^*(\mathscr{K})$ denote the operational Chow ring of $\mathscr{K}$ \cite{Vi}. We define $N^1(\mathscr{K})$ to be $A^1(\mathscr{K})$ modulo the equivalence relation $a_1\equiv a_2$ if for all $b\in A_1(\mathscr{K})$, $$\int_{\mathscr{K}}a_1\cap b=\int_{\mathscr{K}}a_2\cap b.$$ Let $\mathscr{C}$ be the universal coarse curve over $\mathscr{K}$. By this we mean that we have a representable morphism $\mathscr{C}\to\mathscr{K}$ such that for every morphism $B\to\mathscr{K}$ from a scheme $B$, the pullback of $\mathscr{C}$ to $B$ is canonically isomorphic to the coarse moduli space of the pullback of the universal twisted curve to $B$. So we have the following diagram.
$$\xymatrix{
\mathscr{C} \ar[r]^f \ar[d]^{\pi} & \mathbb{P}^2 \\
\mathscr{K}}$$
We define classes $h,\chi_1,\ldots,\chi_n\in N^1(\mathscr{K})$ as follows. Let $h=\pi_*(f^*p)$, where $p\in A^2(\mathbb{P}^2)$ is the class of a point and $\pi_*$ is the proper, flat pushforward. In the notation of \cite[\S 17]{Fu}, this equals $\pi_*(f^*p\cdot [\pi])$, where $[\pi]$ is the orientation class. Let $\chi_i=e_i^*\tilde{p}$, where $e_i:\mathscr{K}\to D$ is the $i$th evaluation map and $\tilde{p}\in N^1(D)$ is the class of a point.
The following relation between these classes leads to the analogue of Caporaso and Harris's formula.
\begin{relations}
\label{th:relations}
If $r>d\delta$, then $$h=\sum_{i=1}^n \varrho_i\chi_i.$$
\end{relations}
\emph{Proof:} First we derive the key consequence of the hypothesis $r>d\delta$ and in doing so fix some notation. Let $F:\mathfrak{C}\to\bP^2_{D,r}$ be a twisted stable map over $\mathbb{C}$ which lies in $\mathscr{K}$, let $C$ be the coarse curve of $\mathfrak{C}$, and let $g:C\to\mathbb{P}^2$ be the induced morphism. Let $x_1,\ldots,x_n\in\mathfrak{C}$ be the marked points and let $y_1,\ldots,y_m$ be the twisted nodes of $\mathfrak{C}$ which lie on at least one component of $\mathfrak{C}$ which maps with positive degree. By renumbering the markings if necessary, assume that $x_1,\ldots,x_k$ lie on components of $\mathfrak{C}$ which map with positive degree and assume that $x_{k+1},\ldots,x_n$ lie on components which map with degree $0$.
First note that no twisted node $y_i$ can lie on two components which map with positive degree. Indeed, since the contact types of $y_i$ on the two components must add to $r$, it would follow that the pullback of $D$ to the normalization of $C$ has degree at least $r$, contradicting $r>d\delta$. So each $y_i$ lies on a unique component mapping with positive degree. Let $\sigma_i$ be the contact type of $y_i$ on this component. For the same reason, the contact types $\sigma_i$ for all $i$ and $\varrho_j$ for $1\le j\le k$ are equal to the intersection number between $D$ and the component of $C$ at the given point.
Suppose that $A\subset\{k+1,\ldots,n\}$ is the set of markings lying in a fixed connected component of $g^{-1}(D)$ and suppose that $B\subset\{1,\ldots,m\}$ is the set of nodes lying in the same component. We claim that
\begin{equation}
\label{eq:multiplicities}
\sum_{i\in B}\sigma_i=\sum_{j\in A}\varrho_j.
\end{equation}
That they are congruent modulo $r$ follows from the fact that the sum of contact types at all twisted points lying on an irreducible component mapping with degree $0$ must be a multiple of $r$, together with the fact that the two contact types at a node sum to a multiple of $r$. From this congruence, equality is deduced from the fact that both sides are between $0$ and $r-1$.
To show that $h=\sum\varrho_i\chi_i$, it suffices to show that for any one dimensional integral closed substack $V\subset\mathscr{K}$, $\int_V h=\sum\varrho_i\int_V\chi_i$. It also suffices to replace $V$ with its normalization. Let $\mathscr{C}_V$ be the restriction of the universal curve to $V$. Then $\int_V h=\int_{\mathscr{C}_V} f_V^* p = \int_{\mathbb{P}^2} p\cap (f_V)_*[\mathscr{C}_V]$ which is the degree of $f_V$. There exists a finite, flat base change $W\to V$ such that every irreducible component of $\mathscr{C}_W$ is generically irreducible over $W$ (hence generically smooth). Replace $V$ with $W$.
Now we use the notation introduced at the beginning of the proof, where we take $F:\mathfrak{C}\to\bP^2_{D,r}$ to be the map corresponding to a general point of $V$. For each node $y_i$, there are two irreducible components of $\mathscr{C}_V$ which contain $y_i$, and the intersection between these components is a section $t_i:V\to\mathscr{C}_V$. Let $s_i:V\to\mathscr{C}_V$ be the section corresponding to the $i$th marking. The degree of $f_V$ can be computed by $(f_V)_*(f_V)^*[D]=(\deg(f_V))[D]$. This shows that $$\deg(f_V)=\sum_{i=1}^k\varrho_i\deg(f_V\circ s_i)+\sum_{i=1}^m\sigma_i\deg(f_V\circ t_i).$$ Here the morphisms on the right hand side are viewed as going from $V$ to $D$. The first summation is $\sum_{i=1}^k\varrho_i\int_V\chi_i$. It follows from equation \ref{eq:multiplicities} that the second summation is $\sum_{i=k+1}^n\varrho_i\int_V\chi_i$.\hfill $\Box$\smallskip
Now we define intersection numbers on the stack $\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})$. First we introduce some notation. If $\alpha=(\alpha_1,\ldots)$ is a sequence of nonnegative integers with all but finitely many equal to $0$, then let $\vert\alpha\vert=\sum\alpha_i$, $I\alpha=\sum i\alpha_i$, and $\alpha!=\prod\alpha_i!$.
\begin{CH_numbers}
\label{th:CH_numbers}
Let $\alpha=(\alpha_1,\ldots)$ and $\beta=(\beta_1,\ldots)$ be sequences of nonnegative integers with all but finitely many equal to $0$. Let $n=\vert\alpha\vert + \vert\beta\vert$, let $\vec{\varrho}$ be an $n$-tuple of positive integers such that $\varrho_j=i$ for $\alpha_1+\ldots+\alpha_{i-1}<j\le\alpha_1+\ldots+\alpha_i$ and for $I\alpha+\beta_1+\ldots+\beta_{i-1}<j\le I\alpha+\beta_1+\ldots+\beta_i$, and choose $r$ so that $r>d\delta$. Let $D$ be a smooth curve of degree $\delta$ such that $d\delta=I\alpha+I\beta$. If the expected dimension of $\mathscr{K}^*_{0,n}(\mathbb{P}^2_{D,r},d,\vec{\varrho})$ equals $e\ge\max(1,\vert\alpha\vert)$, then we define $$N_d^D(\alpha,\beta)=\frac{1}{\beta!}\int_{\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})} h^{e-\vert\alpha\vert}\cdot\prod_{i=1}^{\vert\alpha\vert}\chi_i.$$
\end{CH_numbers}
These numbers all have enumerative significance according to the following proposition (hence are independent of $r$). When we say that a curve has a specified $i$th order contact with $D$, we mean that we have chosen a general point of $D$ and require the curve to have an $i$th order contact with $D$ at this point.
\begin{CH_enum}
\label{th:CH_enum}
$N_d^D(\alpha,\beta)$ is the number of rational degree $d$ curves passing through $e-I\alpha$ general points, having $\alpha_i$ specified $i$th order contacts with $D$, and having $\beta_i$ unspecified $i$th order contacts with $D$. If $e\ge 3$ or $d\le 2$, then these curves have at worst nodal singularities, none of which lie on $D$.
\end{CH_enum}
\emph{Proof:} Let $\mathscr{C}^{e-I\alpha}$ be the $(e-I\alpha)$-fold product of the universal curve with itself. We have a morphism $$F:\mathscr{C}^{e-I\alpha}\to (\mathbb{P}^2)^{e-I\alpha}\times D^{\vert\alpha\vert}$$ given by evaluation at marked points. The integral in Definition \ref{th:CH_numbers} is equal to the degree of $F$. We can throw away the boundary of $\mathscr{C}^{e-I\alpha}$ to get a proper morphism of smooth schemes $U\to V$, whose degree equals the integral in question. Given a curve satisfying the stated conditions, there are $\beta!$ such twisted stable maps, because the marked points corresponding to unspecified contacts can be relabeled arbitrarily. Now the proposition follows from generic smoothness, together with Theorem \ref{th:gen_smooth}.\hfill $\Box$\smallskip
Now we come to the main theorem of this section. Let $e_k$ be the sequence with all but the $k$th entry $0$, and the $k$th entry equal to $1$.
\begin{CH_recursion}
\label{th:CH_recursion}
If $e-I\alpha>0$, then $$N_d^D(\alpha,\beta)=\sum_{k:\beta_k>0} kN_d^D(\alpha+e_k,\beta-e_k).$$
\end{CH_recursion}
\emph{Proof:} If $j$ is chosen so that $\varrho_j=k$ and $j>I\alpha$, then $$N_d^D(\alpha+e_k,\beta-e_k)=\frac{\beta_k}{\beta!}\int_{\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})}h^{e-I\alpha-1}\cdot\prod_{i=1}^{I\alpha}\chi_i\cdot\chi_j.$$ Therefore, it suffices to show that in $N^1(\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho}))$, we have $h=\sum_{i=1}^n \varrho_i\chi_i$ and $\chi_i^2=0$. The latter equality is obvious, because if one chooses two distinct points of $D$, then their preimages under $e_i:\mathscr{K}^*_{0,n}(\bP^2_{D,r},d,\vec{\varrho})\to D$ are disjoint. The former equality follows from Proposition \ref{th:relations}.\hfill $\Box$\smallskip
\section{Derivation of the main result}
\label{sec:kontsevich}
\setcounter{equation}{0}
Let $E\subset\mathbb{P}^2$ be a smooth cubic. Let $d$ be a positive integer and let $a,b,c$ be nonnegative integers such that $a+2b+2c\le 3d$ and $a+2b+c\le 3d-1$. We define $N_d(a,b,c)$ to be $N_d^E((a,b),(3d-a-2b-2c,c))$. As a convention, we say that $N_d(a,b,c)=0$ if any of the above hypotheses is not satisfied. Our recursion for these numbers will show that they are independent of $E$.
From Theorem \ref{th:CH_recursion} we have
\begin{equation}
\label{eq:CH}
N_d(a,b,c)=N_d(a+1,b,c)+2N_d(a,b+1,c-1) \mbox{ if } a+2b+c<3d-1 \mbox{ and } a,b\ge 0.
\end{equation}
To apply this formula, we define a family of functions $P_d^{b,c}:\mathbb{Z}\to\mathbb{Z}$ by $$P_d^{b,c}(t)=N_d(3d-1-2b-c-t,b,c).$$ The parameter $t$ is equal to the exponent of $h$ in Definition \ref{th:CH_numbers}, so it equals the number of general points in $\mathbb{P}^2$ through which the curves are required to pass. This function vanishes outside of the interval $\max(0,c-1)\le t\le 3d-1-2b-c$.
Let $\Delta$ be the difference operator defined by $\Delta P(t) = P(t+1)-P(t)$. Then equation \ref{eq:CH} can be rewritten as
\begin{equation}
\label{eq:CH2}
\Delta P_d^{b,c}(t) = 2P_d^{b+1,c-1}(t) \mbox{ if } 0\le t \le 3d-2-2b-c \mbox{ and } b\ge 0.
\end{equation}
This equation implies that the restriction of $P_d^{b,c}$ to the interval $0\le t\le 3d-1-2b-c$ agrees with an integer polynomial $Q_d^{b,c}$ of degree at most $c$. If $2b+2c\neq 3d$, then there are at least $c+1$ integers in this interval, so $Q_d^{b,c}$ is uniquely determined by $P_d^{b,c}$. If $2b+2c=3d$, then equation \ref{eq:CH} becomes $N_d(0,b,c)=2N_d(0,b+1,c-1)$, so $N_d(0,b,c)=2^{c-1}N_d(0,b+c-1,1)$, which translates into $$Q_d^{b,c}(c-1)=2^{c-1}Q_d^{b+c-1,1}(0).$$ The general solution for $Q_d^{b,c}(t)$ below satisfies this equality.
Since $Q_d^{b,c}(t)$ has degree at most $c$, we can write
$$Q_d^{b,c}(t)=\alpha_0\ch{t}{c} + \alpha_1\ch{t}{c-1} +\cdots + \alpha_c.$$ Since $Q_d^{b,c}(t)=0$ for $0\le t\le c-2$, we have $\alpha_i=0$ for $i\ge 2$. We claim that $\alpha_1=2\alpha_0$. Since $\Delta\ch{t}{c}=\ch{t}{c-1}$, equation \ref{eq:CH2} reduces to the case $c=1$. In that case $Q_d^{b,1}(t)=\alpha_0t+\alpha_1$ and $Q_d^{b+1,0}(t)=\alpha_0/2$. So we have reduced to showing that $Q_d^{b,1}(0)=4Q_d^{b+1,0}(0)$. Translating into the notation of Definition \ref{th:CH_numbers} and replacing $b$ with $b+1$, this says the following.
\begin{key_relation}
\label{th:key_relation}
If $a+2b=3d$, then $$N_d^E((a,b-1),(0,1))=4N_d^E((a-1,b),(1,0)).$$
\end{key_relation}
\emph{Proof:} Let $E^{a+b}\to\mathrm{Pic}^{3d}E$ be the morphism $$(x_1,\ldots,x_a,y_1,\ldots,y_b) \mapsto \mathcal{O}_E(\sum x_i+2\sum y_i).$$ Let $S\subset E^{a+b}$ be the preimage of $[\mathcal{O}_E(d)]$, where $\mathcal{O}_E(1)$ is the restriction of $\mathcal{O}_{\mathbb{P}^2}(1)$ to $E$. Let $p_i:S\to E$, $1\le i\le a$ and $q_i:S\to E$, $1\le i\le b$ be the projections onto the coordinates $x_i$ and $y_i$ respectively. Let $\hat{p}_i:S\to E^{a+b-1}$ and $\hat{q}_i:S\to E^{a+b-1}$ be the projections onto the complementary factors. Then the following diagrams are fiber squares.
$$\xymatrix{
S \ar[r]^{\hat{p}_a} \ar[d]_{p_a} \ar@{}[dr]|{\Box} & E^{a+b-1} \ar[d] \\
E \ar[r]^{\cong} & \mathrm{Pic}^1(E)}$$
Here the right arrow is the composition of $E^{a+b-1}\to\mathrm{Pic}^{3d-1}(E)$ with the morphism $\mathrm{Pic}^{3d-1}E\to\mathrm{Pic}^1(E)$ sending $[\mathcal{L}]$ to $[\mathcal{O}_E(d)\otimes\mathcal{L}^{-1}]$.
$$\xymatrix{
S \ar[r]^{\hat{q}_b} \ar[d]_{q_b} \ar@{}[dr]|{\Box} & E^{a+b-1} \ar[d] \\
E \ar[r] & \mathrm{Pic}^2(E)}$$
Here the right arrow is defined similarly to above and the bottom arrow is the squaring map. Viewing a point of $S$ as the divisor $\sum x_i+2\sum y_i$ on $E$, it is now clear that $N_d^E((a-1,b),(1,0))$ equals the number of rational degree $d$ curves passing through a general divisor parametrized by $S$. Since we have shown that $\hat{q}_b$ is an \'etale degree 4 cover, it follows that $N_d^E((a,b-1),(0,1))$ is the number passing through any of 4 such divisors (while the divisors are not general with respect to each other, they can all be chosen to lie in a given dense open subset of $S$).\hfill $\Box$\smallskip
From all this, it follows that there exist numbers $K^{\lambda}_d$ such that
\begin{equation}
\label{eq:main_1}
N_d(3d-1-2b-c-t,b,c)=2^cK^{b+c}_d\left[\ch{t}{c}+2\ch{t}{c-1}\right].
\end{equation}
In fact, $K^{b+c}_d$ is an integer whenever $2b+2c\neq 3d$, since then $K^{b+c}_d=N_d(0,b+c,0)$. If $2b+2c=3d$, then $K^{b+c}_d$ must be in $\frac{1}{4}\mathbb{Z}$.
In the next section we show that the numbers $N_d(a,0,c)$ equal the Gromov-Witten invariants of $\mathbb{P}^2_{E,2}$ which were computed in \cite{GWinvs}. From that computation we obtain the base cases $$K_1^0=K_1^1=1,\; K_2^3=\frac{3}{4},$$ and recursion 5.7 from [ibid] leads to the following recursion for the coefficients $K_d^b$.
\footnotesize
\begin{eqnarray}
\lefteqn{K_d^{(b)}f^{(b)}_d = \sum_{\stackrel{\scriptstyle b_1+b_2=b}{d_1+d_2=d}}K_{d_1}^{(b_1)}K_{d_2}^{(b_2)}f^{(b_1)}_{d_1}f^{(b_2)}_{d_2}\left[d_1^2d_2^2\ch{3d-4-b}{3d_1-2-b_1}-d_1^3d_2\ch{3d-4-b}{3d_1-1-b_1}\right] +} \label{eq:main_2} \\
& & \sum_{\stackrel{\scriptstyle b_1+b_2=b-1}{d_1+d_2=d}}K_{d_1}^{(b_1)}K_{d_2}^{(b_2)}f^{(b_1)}_{d_1}f^{(b_2)}_{d_2}\alpha^{\vec{b}}_{\vec{d}}\left[2d_1d_2\ch{3d-4-b}{3d_1-2-b_1}-d_1^2\ch{3d-4-b}{3d_1-1-b_1}-d_2^2\ch{3d-4-b}{3d_1-3-b_1}\right], \nonumber
\end{eqnarray}
\normalsize
where
\begin{equation}
\label{eq:main_3}
\alpha^{\vec{b}}_{\vec{d}}=\frac{(3d_1-2b_1)(3d_2-2b_2)(3d_2-1)}{3d_2(3d_2-1-b_2)}.
\end{equation}
In the sums, $d_i$ are positive integers and $b_i$ are nonnegative integers. The recursion is only valid for values of $b$ and $d$ not covered by the base cases, and we set $f_d^{(b)}$ and $\alpha_{\vec{d}}^{\vec{b}}$ to be 0 whenever their denominators would make them undefined.
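As a sanity check on equation \ref{eq:main_1}, the following minimal numerical sketch (in Python; the dictionary \texttt{K} and the function \texttt{N} are our ad hoc names, not notation from the text) evaluates its right-hand side from the base cases $K_1^0=K_1^1=1$ and recovers two classical counts: one line passes through two general points, and there are six tangent lines to a smooth cubic through a general point of $\mathbb{P}^2$.
\begin{verbatim}
from math import comb

# Base cases stated above: K_1^0 = K_1^1 = 1.
K = {(1, 0): 1, (1, 1): 1}

def N(d, a, b, c):
    # Evaluate N_d(a,b,c) via
    #   N_d(3d-1-2b-c-t, b, c) = 2^c K_d^{b+c} [ C(t,c) + 2 C(t,c-1) ],
    # solving for t in terms of a.
    t = 3*d - 1 - 2*b - c - a
    ch = lambda n, k: comb(n, k) if 0 <= k <= n else 0
    return 2**c * K[(d, b + c)] * (ch(t, c) + 2 * ch(t, c - 1))

print(N(1, 0, 0, 0))  # 1: one line through two general points
print(N(1, 0, 0, 1))  # 6: tangent lines to E through a general point
\end{verbatim}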
\section{Enumerativity of certain Gromov-Witten invariants}
\label{sec:enumerative}
\setcounter{equation}{0}
Let $p\in N^2(\mathbb{P}^2)$ be the class of a point, $\alpha\in N^0(E)$ be the fundamental class, and $\beta\in N^1(E)$ be the class of a point. We use the notation $I_d(p^k\alpha^{\ell}\beta^m)$ to denote genus $0$ Gromov-Witten invariants of $\mathbb{P}^2_{E,2}$, as in \cite{GWinvs}. This section is devoted to proving the following.
\begin{enumerative}
\label{th:enumerative}
Given integers $a,c,d$ such that $d>0$, $a,c\ge 0$, $a+2c\le 3d$, and $a+c\le 3d-1$, $$N_d(a,0,c)=\frac{1}{(3d-a-2c)!}I_d(p^{3d-1-a-c}\alpha^{3d-a-2c}\beta^a).$$ Moreover, this is the number of rational degree $d$ plane curves passing through $3d-1-a-c$ general points of $\mathbb{P}^2$, passing through $a$ general points of $E$, and having $c$ tangencies to $E$. These curves have at worst nodal singularities, none of which lie on $E$.
\end{enumerative}
It is convenient to replace the contact type $\vec{\varrho}$ with the number of twisted marked points, and assume the markings are ordered so that the twisted markings come before the untwisted ones. So let $\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)=\mathscr{K}_{0,n}(\bP^2_{E,2},d,(1,\ldots,1,0,\ldots,0))$, where there are $m$ ones and $n-m$ zeros. The expected dimension of $\mathscr{K}_{0,n}(\bP^2_{E,2},d,n)$ is $$n+\frac{3d-n}{2}-1.$$ The number of tangencies being imposed is $(3d-n)/2$, and must be an integer. Therefore, the expected dimension is at least equal to $1$ and is at least $3$ if $d\ge 3$.
Recall that Gromov-Witten invariants are defined to be integrals over $\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)$ \cite[\S 2.3]{GWinvs}. By Theorem \ref{th:gen_smooth}, the closed substack $\mathscr{K}^*_{0,n}(\bP^2_{E,2},d,m)$ has the expected dimension, and therefore the restriction of the virtual fundamental class to $\mathscr{K}^*_{0,n}(\bP^2_{E,2},d,m)$ is the ordinary fundamental class. It follows from the definitions that the contribution of $\mathscr{K}^*_{0,n}(\bP^2_{E,2},d,m)$ to the Gromov-Witten invariants gives precisely $N_d(a,0,c)$. In light of Proposition \ref{th:CH_enum}, it remains only to show that the remaining irreducible components of $\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)$ do not contribute to the invariants. For this we need the following lemmas.
\begin{smooth_source}
\label{th:smooth_source}
If $\mathscr{K}\subset\mathscr{K}_{0,n}(\bP^2_{E,2},d,n)$ is an irreducible component whose general point corresponds to a stable map $\mathfrak{f}:\mathfrak{C}\to\bP^2_{E,2}$ with $\mathfrak{C}$ smooth, then the associated map of coarse moduli spaces $f:C\to\mathbb{P}^2$ maps $C$ birationally onto its image.
\end{smooth_source}
\emph{Proof:} Suppose that $C$ does not map birationally onto its image. Let $\bar{C}$ be the normalization of the image of $f$, so that $f$ decomposes into
$$\xymatrix{
C \ar[r]^g & \bar{C} \ar[r]^h & \mathbb{P}^2.}$$
The morphism $h$ corresponds to a representable morphism $\bar{\mathfrak{C}}\to\bP^2_{E,2}$ from a smooth twisted curve $\bar{\mathfrak{C}}$ which on the level of coarse moduli spaces equals $h$. The point of $\mathfrak{C}$ lying over a point $x\in C$ is twisted if and only if $k:=\mathrm{mult}_x f^*E$ is odd. Likewise, the point of $\bar{\mathfrak{C}}$ lying over $g(x)$ is twisted if and only if $\ell:=\mathrm{mult}_{g(x)}h^*E$ is odd. Since $g$ is ramified to order $k/\ell$ at $x$, it follows that $g$ lifts uniquely to a representable morphism $\mathfrak{C}\to\bar{\mathfrak{C}}$ which factors $\mathfrak{f}$.
Let $e$ be the degree of $h$ and let $m$ be the number of twisted marked points on $\bar{\mathfrak{C}}$. We have shown that $h:\bar{\mathfrak{C}}\to\bP^2_{E,2}$ lies on an irreducible component of $\mathscr{K}_{0,m}(\bP^2_{E,2},e,m)$ which has the expected dimension $(3e+m)/2-1$. Moreover, the dimension of $\mathscr{K}$ is at least equal to the expected dimension, which is $(3d+n)/2-1$. Since we assumed that $\mathfrak{f}:\mathfrak{C}\to\bP^2_{E,2}$ was chosen generally, we must have $3e+m\ge 3d+n$. Since $m\le 3e$, we have $6e\ge 3d$, so $g$ must be a double cover and $6e=3d$, so $m=3e$ and $n=0$. Since $C$ and $\bar{C}$ are rational, $g$ is ramified at exactly 2 points. But $g$ must be ramified over every twisted point of $\bar{\mathfrak{C}}$, which is impossible.\hfill $\Box$\smallskip
For each untwisted marking we have an evaluation map $\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)\to\mathbb{P}^2$, and for each twisted marking an evaluation map $\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)\to E$. In the following proof, we use the result of section \ref{sec:nodal}, which compares deformations of a twisted stable map to deformations of the pair of maps obtained by normalizing a separating node.
\begin{gen_elmt}
\label{th:gen_elmt}
Let $\mathscr{K}\subset\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)$ be an irreducible component. Let $f:\mathfrak{C}\to\bP^2_{E,2}$ be a general map in $\mathscr{K}$ and let $\tau$ be the number of nodes of $\mathfrak{C}$. Let $ev:\mathscr{K}\to (\mathbb{P}^2)^{n-m}\times E^m$ be the product of all evaluation maps. Then $$\dim(ev(\mathscr{K}))+\tau\le\mathrm{edim}(\mathscr{K}).$$ Moreover, each evaluation map is surjective, except possibly for evaluation maps corresponding to untwisted markings which lie on a component of $\mathfrak{C}$ mapping into $E$ with degree $0$. In the latter case, the evaluation map surjects onto $E$.
\end{gen_elmt}
\emph{Proof:} We prove this by induction on $\tau$. First assume $\tau=0$. If $d=0$, then a simple calculation shows that $\mathrm{edim}(\mathscr{K})\ge 1$, and that $\mathrm{edim}(\mathscr{K})\ge 2$ if there are no twisted markings. The surjectivity of evaluation maps in this case is clear. If $d>0$, then Lemma \ref{th:smooth_source} implies that $\mathscr{K}\subset\mathscr{K}^*_{0,n}(\bP^2_{E,2},d,m)$. So the result follows from Theorem \ref{th:gen_smooth} if it is shown that untwisted evaluation maps $\mathscr{K}\to\mathbb{P}^2$ surject. If not, then the image is at most 1-dimensional, so by forgetting all untwisted markings we would have a component of $\mathscr{K}^*_{0,m}(\bP^2_{E,2},d,m)$ of dimension $0$. But this contradicts the fact that the expected dimension is $(3d+m)/2-1\ge 1$.
For the general case, note first that $\mathfrak{C}$ has no nodes between components which map with degree $0$. If such a node maps to a point not in $E$, then this follows from the irreducibility of $\bar{M}_{0,n}$. If the node maps into $E$, then it follows from the fact that $\mathscr{K}_{0,n}(B\mu_2)$ is flat over $\bar{M}_{0,n}$ \cite[3.0.5]{ACV}.
Suppose that $\mathfrak{C}$ has a node $x$ lying on two components which map with positive degree. Let $\mathfrak{C}_1$ and $\mathfrak{C}_2$ be the twisted curves resulting from normalizing $x$, and let $\mathscr{K}_1$ and $\mathscr{K}_2$ be the irreducible components of the stack of twisted stable maps in which $f\vert_{\mathfrak{C}_1}$ and $f\vert_{\mathfrak{C}_2}$ lie (we take the preimages of $x$ to be marked points). By induction, the result holds for $\mathscr{K}_1$ and $\mathscr{K}_2$. If $x$ is untwisted, then $ev(\mathscr{K})=ev(\mathscr{K}_1)\times_{\mathbb{P}^2}ev(\mathscr{K}_2)$, where the morphisms to $\mathbb{P}^2$ are projections onto the factor corresponding to the preimage of $x$. Since these evaluation maps are surjective, it follows that $\dim(ev(\mathscr{K}))=\dim(ev(\mathscr{K}_1))+\dim(ev(\mathscr{K}_2))-2$. Moreover, $\mathrm{edim}(\mathscr{K})=\mathrm{edim}(\mathscr{K}_1)+\mathrm{edim}(\mathscr{K}_2)-1$, so it follows that $\dim(ev(\mathscr{K}))+\tau\le\mathrm{edim}(\mathscr{K})$. The surjectivity of evaluation maps also follows by induction.
If $x$ is twisted, then replacing $\mathbb{P}^2$ with $E$ we find that $\dim(ev(\mathscr{K}))=\dim(ev(\mathscr{K}_1))+\dim(ev(\mathscr{K}_2))-1$, and a calculation shows that $\mathrm{edim}(\mathscr{K})=\mathrm{edim}(\mathscr{K}_1)+\mathrm{edim}(\mathscr{K}_2)$. So the result follows in the same way.
The other case to consider is when the node $x$ joins a positive degree component with a degree $0$ component. Then the above argument will fail if the degree $0$ component maps into $E$ and the node is untwisted. However, in this case the condition that the preimage of $x$ in $\mathfrak{C}_1$ maps to $E$ imposes a nontrivial condition on the space $\mathscr{K}_1$, and since the evaluation maps at least surject onto $E$, it follows that $\dim(ev(\mathscr{K}))=\dim(ev(\mathscr{K}_1))+\dim(ev(\mathscr{K}_2))-2$ anyway. Everything else works as before.\hfill $\Box$\smallskip
We now return to the proof of Theorem \ref{th:enumerative}. If $\mathscr{K}\subset\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)$ is any irreducible component which is not in $\mathscr{K}^*_{0,n}(\bP^2_{E,2},d,m)$, then by Lemma \ref{th:smooth_source} we have $\tau>0$ in the notation of Lemma \ref{th:gen_elmt}. Therefore, $\dim(ev(\mathscr{K}))<\mathrm{edim}(\mathscr{K})$ for all irreducible components not in $\mathscr{K}^*_{0,n}(\bP^2_{E,2},d,m)$. But this implies that the pushforward of the virtual fundamental class of $\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)$ under $ev$ equals the pushforward of its restriction to $\mathscr{K}^*_{0,n}(\bP^2_{E,2},d,m)$ (which is well-defined, since the latter has the expected dimension). This implies that irreducible components not in $\mathscr{K}^*_{0,n}(\bP^2_{E,2},d,m)$ do not contribute to the Gromov-Witten invariants.
\noindent{\bf Remark} There exist irreducible components of $\mathscr{K}_{0,n}(\bP^2_{E,2},d,m)$ having greater than the expected dimension. One way to see this is to consider maps from a twisted curve with two irreducible components, one of which maps with degree $0$ and contains at least $5$ twisted points.
\section{Introduction}
In the study of complex systems it is of interest to synthesize the information coming from different sources into a single output, which is either numerical or represented by a suitable function, graph, etc. For instance, the copula representation has proved to be a suitable tool to describe uncertain inputs in a probabilistic framework (see, e.g., \cite{DuSe,Nels}) as well as in an imprecise setting (see, e.g., \cite{DuSp,OmSt2}). When seeking a copula that fits given data best, a practitioner would perform what is called an \textit{Exploratory Data Analysis}. A possible way to do that would be to go through the following steps: (1) starting with a rank plot, (2) measuring association, (3) testing exchangeability, (4) testing for independence; and of course, performing any additional test necessary depending on the situation. The motivation is given in the overview paper by Genest and Favre \cite{GeFa}, cf. also\ \cite{GeNeRe,GeRe}.
Step (2) may be seen as one of the reasons why it has recently become so popular to study local Fr\'echet-Hoeffding bounds of families of copulas with mutually related measures of concordance. Some history of investigations connected to this kind of bounds can be found in \cite[Sections 3\&4]{KoBuKoMoOm2}. In \cite{OmSt3} theoretical aspects of local bounds, there called \textit{constrained bounds}, are given and studied in more detail. In particular, the local bounds of the sets of copulas having a fixed value of Spearman's rho, Kendall's tau, respectively Blomqvist's beta, have been worked out \cite{NQRU,NeUbFl}. On the other hand, the analogous question is still open for Spearman's footrule and Gini's gamma, two measures of association whose importance has recently been brought up in \cite{GeNeGh}.
We denote by $\mathcal C$ the set of all bivariate copulas and by $\mathds I$ the interval $[0,1]\subseteq\mathds R$. Some transformations are naturally defined on $\mathcal C$:
The transpose of copula $C$ will be denoted by $C^t$, i.e., $C^t(u,v)= C(v,u)$. We denote by $C^{\sigma_1}$ and $C^{\sigma_2}$ the two reflections of a copula $C$ defined by $C^{\sigma_1}(u,v) =v-C(1-u,v)$ and $C^{\sigma_2}(u,v)=u-C(u,1-v)$ (see \cite[{\S}1.7.3]{DuSe}), and by $\widehat{C}=\left(C^{\sigma_1}\right)^{\sigma_2}$ the survival copula of $C$.
Several orders can be introduced on $\mathcal C$. Copula $C$ precedes $D$ in the \textit{concordance order} if $C(u,v)\leqslant D(u,v)$ and $\widehat{C}(u,v)\leqslant \widehat{D}(u,v)$ for all $(u,v)\in\mathds I^2$ \cite[Definition 2.4]{Joe}. Copula $C$ precedes $D$ in the \textit{pointwise order} if merely $C(u,v)\leqslant D(u,v)$ for all $(u,v)\in\mathds I^2$. (See \cite[Definition 2.8.1]{Nels} and \cite[\S{2.11}]{Joe} for further details.) The concordance order and the pointwise order coincide on the set of two-dimensional copulas. (See \cite[\S{2.2.1}]{Joe97} for a proof of this statement.) Hence, we will simply refer to them as the order, and write $C\leqslant D$ for $C,D\in\mathcal C$ if $C(u,v)\leqslant D(u,v)$ for all $(u,v)\in\mathds I^2$.
It is well known that $\mathcal C$ is a partially ordered set with respect to the order, but not a lattice \cite[Theorem 2.1]{NeUbF2}, and that $W(u,v)=\max\{0,u+v-1\}$ and $M(u,v)=\min\{u,v\}$ are the lower and the upper bound of all copulas, respectively. Copulas $W$ and $M$ are called the \textit{Fr\'echet-Hoeffding lower and upper bound}, respectively. It was proved in \cite{NeUbF2} that the set of all bivariate quasi-copulas is a complete lattice that is order isomorphic to the Dedekind--MacNeille completion of the set of all bivariate copulas.
The paper is organized as follows. Sections 2 and 3 present preliminaries on measures of concordance and on local bounds. Local bounds of the set of all copulas corresponding to a fixed value of Spearman's footrule $\phi\in[-\frac12,1]$ are given in Section 4. Thus, Theorem 5 is one of the two main results of the paper. The second main result is Theorem 7 presented in Section 5; the local bounds of the set of copulas that have the value of Gini's gamma $\gamma\in[-1, 1]$ fixed are determined there. Section 6 is devoted to comparison of these local bounds and Section 7 to the study of relations between the two measures of concordance and a third one, namely Blomqvist's beta.
\section{Preliminaries on measures of concordance}\label{sec:prelim}
A mapping $\kappa:\mathcal C\to [-1,1]$ is called a \textit{measure of concordance} if it satisfies the following properties (see \cite[Definition 2.4.7]{DuSe}):
\begin{enumerate}[(C1)]
\item $\kappa(C)=\kappa(C^t)$ for every $C\in\mathcal C$.
\item $\kappa(C)\leqslant\kappa(D)$ when $C\leqslant D$. \label{monotone}
\item $\kappa(M)=1$.
\item $\kappa(C^{\sigma_1})=\kappa(C^{\sigma_2})=-\kappa(C)$.
\item If a sequence of copulas $C_n$, $n\in\mathbb{N}$, converges uniformly to $C\in\mathcal C$, then $\lim_{n\to\infty}\kappa(C_n)=\kappa(C)$.
\end{enumerate}
We will refer to property (C\ref{monotone}) above simply by saying that a measure of concordance under consideration is \textit{monotone}.
Certain properties that are sometimes stated in definitions of a measure of concordance follow from the properties listed above. Namely, a measure of concordance also satisfies the following (see \cite[\S{3}]{KoBuKoMoOm2} for further details):
\begin{enumerate}
\item[(C{6})] $\kappa(\Pi)=0$, where $\Pi$ is the independence copula $\Pi(u,v)=uv$. \label{kappa_Pi}
\item[(C{7})] $\kappa(W)=-1$.\label{kappaW}
\item[(C{8})] $\kappa(C)=\kappa(\widehat{C})$ for every $C\in\mathcal C$.
\end{enumerate}
Because of their significance in statistical analysis, measures of concordance and their relatives, measures of association and measures of dependence, are a classical topic of research. It was Scarsini \cite{Scar} who introduced formal axioms of a measure of concordance. Some of the more recent references on bivariate measures of concordance include \cite{EdTa,FrNe,FuSch,Lieb,NQRU,NQRU2}. Their multivariate generalization was studied e.g. in \cite{BeDoUF2,DuFu,Tayl,UbFl}. For bivariate copulas the main measures of concordance are naturally studied through symmetries that are a consequence of properties of the \textit{concordance function} $\mathcal{Q}$ (see for instance \cite{BeDoUF,EdMiTa,EdMiTa2}). The concordance function is defined for a pair of random vectors $(X_1,Y_1)$ and $(X_2,Y_2)$. If the corresponding copulas are $C_1$ and $C_2$ and if the distribution functions of $X_1$, $X_2$, $Y_1$ and $Y_2$ are continuous, then we have
\begin{equation}\label{concordance}
\mathcal{Q}=\mathcal{Q}(C_1,C_2)= 4 \int_{\mathds I^2} C_2(u,v)dC_1(u,v) -1.
\end{equation}
(See \cite[Theorem 5.1.1]{Nels}.)
The concordance function was introduced by Kruskal \cite{Krus} and it has a number of useful properties \cite[Corollary 5.1.2]{Nels} and \cite[\S{3}]{KoBuKoMoOm2}. In the sequel, we use the following two:
\begin{enumerate}[(Q1)]
\item It remains unchanged when both copulas are replaced by their survival copulas: \\ {$\mathcal{Q}\left(C_1,C_2\right)=\mathcal{Q}\left(\widehat{C}_1,\widehat{C}_2\right)$.}
\item When both copulas are replaced by their reflected copulas the sign changes: \\ $\mathcal{Q}\left(C_1^{\sigma_j},C_2^{\sigma_j}\right)=-\mathcal{Q}\left(C_1,C_2\right)$ for $j=1,2$.
\end{enumerate}
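As a quick numerical illustration of \eqref{concordance}: for $C_1=\Pi$ one has $dC_1=du\,dv$, so $\mathcal{Q}(\Pi,C_2)$ equals $4$ times the mean of $C_2(U,V)$ over independent uniform $U,V$, minus $1$. A minimal Monte Carlo sketch (in Python; the name \texttt{Q\_pi} is ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def Q_pi(C2, n=10**6):
    # Q(Pi, C2) = 4*E[C2(U,V)] - 1 for independent uniform U, V,
    # since dC_1 = du dv when C_1 is the independence copula Pi.
    u, v = rng.random(n), rng.random(n)
    return 4 * np.mean(C2(u, v)) - 1

print(Q_pi(lambda u, v: u * v))             # Q(Pi, Pi) ~ 0
print(Q_pi(lambda u, v: np.minimum(u, v)))  # Q(Pi, M) ~ 1/3
\end{verbatim}
The second value is consistent with $4\int_{\mathds I^2}\min\{u,v\}\,du\,dv-1=\frac13$.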
The four most commonly used measures of concordance of a copula $C$ are Kendall's tau, Spearman's rho, Gini's gamma and Blomqvist's beta. We refer to \cite{Lieb} for an extended definition of a measure of concordance. If we replace Property (C4) by Property (C6) in the definition of a measure of concordance, we get what Liebscher in \cite{Lieb} calls a \textit{weak measure of concordance}. Spearman's footrule is an example of such a weak measure of concordance. The range of a measure of concordance is the interval $[-1,1]$, while the range of Spearman's footrule is equal to $\left[-\frac12,1\right]$ (see \cite[\S4]{UbFl}). The sets of all copulas where the bounds $-\frac12$ and $1$ for Spearman's footrule are attained are given in \cite{FuMcC}, together with the generalization to the multidimensional setting $d\geqslant 3$.
To simplify the discussion from now on, we include Spearman's footrule when we talk about measures of concordance in general, thus omitting the word `weak'. In formal statements of our results we include the word `weak' for precision.
Statistical significance of all five measures of concordance is already well established. See e.g. \cite{DiaGra,CoNi,GeNeGh,Nels1998,UbFl,SSQ} for Gini's gamma, Blomqvist's beta and Spearman's footrule. Nelsen \cite{Nels1998} discusses the $l_1$ nature of Gini's gamma and Spearman's footrule as compared to $l_2$ nature of Spearman's rho. Spearman's footrule depends only on $l_1$ distance of copula $C$ to the upper bound $M$, while Gini's gamma depends on $l_1$ distances to both bounds $W$ and $M$ \cite{Nels1998}.
All three measures of concordance that we study are of degree one \cite{EdTa}.
\\
In this paper, we focus on Spearman's footrule, Gini's gamma, and, in the last section, on their relation with Blomqvist's beta. The first two of them may be defined in terms of the concordance function $\mathcal{Q}$. The \textit{Spearman's footrule} is defined by
\begin{equation}\label{phi}
\phi(C) = \frac12\left(3\mathcal{Q}(C,M)-1\right)=6\int_0^1 C(t,t)\, dt - 2
\end{equation}
and \textit{Gini's gamma} by
\begin{equation}\label{gamma}
\gamma(C)=\mathcal{Q}(C,M)+\mathcal{Q}(C,W) = 4\int_0^1 \left(C(t,t)+C(t,1-t)\right) dt - 2.
\end{equation}
On the other hand, \textit{Blomqvist's beta} is defined by
\begin{equation}\label{beta}
\beta(C) = 4\,C\left(\frac12,\frac12\right)-1.
\end{equation}
(See \cite[{\S}2.4]{DuSe} and \cite[Ch. 5]{Nels}.) Note that Gini's gamma and Blomqvist's beta are measures of concordance, so (C1)-(C8) hold for them. On the other hand, Properties (C4) and (C7) are the only ones that fail for Spearman's footrule. Property (C8) holds for it since
\begin{equation}\label{C8 for phi}
\phi(C)=\frac12\left(3\mathcal{Q}(C,M)-1\right)=\frac12\left(3\mathcal{Q}(\widehat{C},\widehat{M})-1\right)=
\frac12\left(3\mathcal{Q}(\widehat{C},{M})-1\right)=\phi(\widehat{C}).
\end{equation}
Here we used (Q1) and the fact that $\widehat{M}=M$.
Gini's gamma and Spearman's footrule are related. In fact, Gini's gamma is a symmetrized version of Spearman's footrule \cite{Nels2004}, since
\begin{equation}\label{symm_gama}
\gamma(C)=\frac23\left(\phi(C)-\phi(C^{\sigma_i})\right)
\end{equation}
for either $i=1$ or $i=2$. To show \eqref{symm_gama} we use (Q2) and the fact that $W^{\sigma_i}=M$:
$$\gamma(C)=\mathcal{Q}(C,M)+\mathcal{Q}(C,W)=\mathcal{Q}(C,M)-\mathcal{Q}(C^{\sigma_i},M)=\frac23\left(\phi(C)-\phi(C^{\sigma_i})\right).$$
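Each of \eqref{phi}, \eqref{gamma} and \eqref{beta} is straightforward to evaluate numerically for a given copula. The following minimal sketch (in Python; the function names are ours) approximates the one-dimensional integrals by a midpoint rule and recovers the values at $\Pi$, $M$ and $W$ that follow from the properties above, e.g.\ $\phi(W)=-\frac12$ and $\gamma(W)=\beta(W)=-1$:
\begin{verbatim}
import numpy as np

t = (np.arange(20000) + 0.5) / 20000   # midpoint grid on [0,1]

def footrule(C):   # phi(C) = 6 * int_0^1 C(t,t) dt - 2
    return 6 * np.mean(C(t, t)) - 2

def gini(C):       # gamma(C) = 4 * int_0^1 (C(t,t) + C(t,1-t)) dt - 2
    return 4 * np.mean(C(t, t) + C(t, 1 - t)) - 2

def blomqvist(C):  # beta(C) = 4*C(1/2,1/2) - 1
    return 4 * C(0.5, 0.5) - 1

Pi = lambda u, v: u * v
M  = lambda u, v: np.minimum(u, v)
W  = lambda u, v: np.maximum(u + v - 1, 0)

for C in (Pi, M, W):
    print(footrule(C), gini(C), blomqvist(C))
# ~ (0, 0, 0), (1, 1, 1), (-1/2, -1, -1)
\end{verbatim}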
\section{Preliminaries on local bounds}\label{sec:local}
Besides the Fr\'{e}chet-Hoeffding upper and lower bound, which are global bounds for the ordered set of copulas, one often studies local bounds of certain subsets. Perhaps among the first known examples of the kind is given in Theorem 3.2.3 of Nelsen's book \cite{Nels} (cf.\ also \cite[Theorem 1]{NQRU}), where the bounds of the set of copulas $C\in\mathcal C$ with $C(a,b)=d$ for fixed $a,b\in\mathds I$ and $d\in[W(a,b),M(a,b)]$ are given. In general, if $\mathcal{C}_0$ is a set of copulas, we let
\begin{equation}\label{eq:inf:sup}
\underline{C}=\inf\mathcal{C}_0\quad\text{and}\quad\overline{C} =\sup\mathcal{C}_0,
\end{equation}
where the infimum and the supremum are taken point-wise. In \cite{NQRU} the authors study the bounds for the set of copulas whose Kendall's tau equals a given number $t\in [-1,1]$ and for the set of copulas whose Spearman's rho equals a given number $t\in [-1,1]$. In both cases the bounds are copulas that do not belong to the set. Similar bounds for the set of copulas having a fixed value of Blomqvist's beta were found in \cite{NeUbFl}. In \cite{BBMNU14} the authors present the local bounds for the set of copulas having a fixed value of the degree of non-exchangeability. In all these cases the bounds are again copulas. We know that this is not true in general since the bound of a set of copulas may be a proper quasi-copula. This is true in several cases considered in our paper.\\
Suppose that $\kappa:\mathcal{C}\to[-1,1]$ is a given measure of concordance and that $k\in [-1,1]$ is a fixed value in the range of $\kappa$. Then we write
\begin{equation}\label{eq:kappa}
\mathcal{K}_{k}:=\{C\in\mathcal{C}\,|\,\kappa(C)=k\}.
\end{equation}
We denote by $\underline{K}_{k}=\inf\mathcal{K}_{k}$ and $\overline{K}_{k} =\sup\mathcal{K}_{k}$ the lower and the upper bound of \eqref{eq:kappa}, respectively. The symmetries that the concordance function and measures of concordance possess imply symmetries on the bounds \cite{BeDoUF,EdMiTa,EdMiTa2}.
\begin{lemma}\label{lem:symm}
\textit{(a)} Suppose that $\kappa$ is at least a weak measure of concordance. Then the lower and the upper bounds $\underline{K}_{k}$ and $\overline{K}_{k}$ are symmetric and radially symmetric: $$\underline{K}_{k}(a,b)=\underline{K}_{k}(b,a)\ \text{and}\ \underline{K}_{k}(a,b)=\widehat{\underline{K}}_{k}(a,b)$$
and
$$\overline{K}_{k}(a,b)=\overline{K}_{k}(b,a)\ \text{and}\ \overline{K}_{k}(a,b)=\widehat{\overline{K}}_{k}(a,b).$$
\textit{(b)} If $\kappa$ is a (proper) measure of concordance then also
$$\underline{K}_{k}^{\sigma_i}(a,b)=\overline{K}_{-k}(a,b)\ \text{and}\ \overline{K}_{k}^{\sigma_i}(a,b)={\underline{K}}_{\, -k}(a,b)$$
for $i=1,2.$
\end{lemma}
\begin{proof}
Suppose that $\kappa$ satisfies Properties (C1) and (C8). Then for a copula $C\in\mathcal{C}$ we have $C\in\mathcal{K}_{k}$ if and only if $C^t\in\mathcal{K}_k$, and $C\in\mathcal{K}_{k}$ if and only if $\widehat{C}\in\mathcal{K}_{k}$. So we conclude that
\begin{equation*}
\underline{K}_{k}(a,b) = \inf_{C\in \mathcal{K}_{k}}C(a,b) = \inf_{C\in \mathcal{K}_{k}}C^t(a,b) = \inf_{C\in \mathcal{K}_{k}}C(b,a) = \underline{K}_{k}(b,a)
\end{equation*}
and
\begin{equation*}
\underline{K}_{k}(a,b)=\inf_{C\in \mathcal{K}_{k}}C(a,b)
=a+b-1+\inf_{C\in \mathcal{K}_{k}}C(1-a,1-b)
=\widehat{\underline{K}}_{\, k}(a,b).
\end{equation*}
The equalities for the upper bound are proved analogously.
Suppose now that $\kappa$ satisfies property (C4). Then for each $i$ we have $C\in \mathcal{K}_{k}$ if and only if $C^{\sigma_i}\in \mathcal{K}_{-k}$. This implies that
\begin{equation*}
\underline{K}_{k}(a,b)=\inf_{C\in \mathcal{K}_{k}}C(a,b)
=\inf_{C\in \mathcal{K}_{-k}}C^{\sigma_1}(a,b)
=b-\sup_{C\in \mathcal{K}_{-k}}C(1-a,b)
=\overline{K}_{-k}^{\sigma_1}(a,b).
\end{equation*}
The other equalities follow analogously.
\end{proof}
The notion of the \textit{maximal asymmetry function} was introduced in \cite[\S 2]{KoBuKoMoOm1} following the ideas of \cite{KlMe}; its value at a fixed point $(u, v)\in\mathds I^2$ is given by
\[
d^*_\mathcal F (u,v) = \sup_{C\in\mathcal F} \{|C(u,v)-C(v,u)|\},
\]
where $\mathcal F\subseteq\mathcal C$ is an arbitrary family of copulas. If $\mathcal F=\mathcal C$, this supremum is attained since $\mathcal C$ is a compact set by \cite[Theorem 1.7.7]{DuSe}. Klement and Mesiar \cite{KlMe} and Nelsen \cite{N} showed that
\begin{equation}\label{eq:kle mes}
d_\mathcal C^*(u,v)=\min\{u,v,1-u,1-v,|v-u|\}.
\end{equation}
In \cite{KoBuKoMoOm2}, extremal copulas where the asymmetry bounds are attained were introduced. Choose $(a,b)\in\mathds I^2$ and a $c\in\mathds I$ such that $0\leqslant c\leqslant d_\mathcal C^*(a,b)$. Define $\mathcal C_0$ to be the set of all $C$ such that
\begin{equation}\label{eq:asym_point}
C(a,b)-C(b,a) = c.
\end{equation}
Note that this set is nonempty since the set $\mathcal C$ is convex by
\cite[Theorem 1.4.5]{DuSe}. The local bounds $\underline{C}$ and $\overline{C}$ of this set were computed in \cite[Theorem 1]{KoBuKoMoOm2}
\begin{equation*}
\underline{C}^{(a,b)}_{c}(u,v)= \max\{W(u,v),\min\{d_1,u-a+d_1,v-b+d_1,u+v-a-b+d_1\}\},
\end{equation*}
and
\begin{equation*}
\overline{C}^{(a,b)}_{c}(u,v)=\min\{M(u,v),\max\{d_2,u-b+d_2,v-a+d_2,u+v-a-b+d_2\}\},
\end{equation*}
where
\begin{equation*}
d_1=W(a,b)+c
\quad\text{and}\quad
d_2=M(a,b)-c
\end{equation*}
for $0 \leqslant c \leqslant d^*_C(a,b)$. Observe that $c$ is small enough so that everywhere close to the boundary of the square $\mathds I^2$ copula $W$ prevails in the definition of $\underline{C}^{(a,b)}_{c}$, and that copula $M$ prevails close to the boundary of the square $\mathds I^2$ in the definition of $\overline{C}^{(a,b)}_{c}$.
Copulas $\underline{C}^{(a,b)}_{c}$ and $\overline{C}^{(a,b)}_{c}$ can be considered for any $c$ such that $0\leqslant c\leqslant\min\{a,b,1-a,1-b\}$, not necessarily $c \leqslant |b-a|$. It turns out that they are exactly the minimal and the maximal copulas with the property $C(a,b) = d_1$ and $C(b,a) = d_2$, respectively \cite[Theorem 3.2.3]{Nels}.
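A direct pointwise implementation of these formulas provides a quick check of the stated properties. The following sketch (in Python; the function names are ours) verifies, for one admissible choice of $(a,b)$ and $c$, that both copulas have asymmetry $c$ at the point $(a,b)$, and that $\underline{C}^{(a,b)}_{c}(a,b)=d_1$ and $\overline{C}^{(a,b)}_{c}(b,a)=d_2$:
\begin{verbatim}
def C_lower(a, b, c):
    # lower bound copula from the display above;
    # assumes 0 <= c <= min(a, b, 1-a, 1-b)
    d1 = max(a + b - 1, 0) + c
    return lambda u, v: max(max(u + v - 1, 0),
        min(d1, u - a + d1, v - b + d1, u + v - a - b + d1))

def C_upper(a, b, c):
    # upper bound copula from the display above; same range of c
    d2 = min(a, b) - c
    return lambda u, v: min(min(u, v),
        max(d2, u - b + d2, v - a + d2, u + v - a - b + d2))

a, b, c = 0.3, 0.6, 0.2   # c <= d*(a,b) = min{a,b,1-a,1-b,|b-a|} = 0.3
lo, hi = C_lower(a, b, c), C_upper(a, b, c)
print(lo(a, b) - lo(b, a), hi(a, b) - hi(b, a))  # both ~ 0.2 = c
print(lo(a, b), hi(b, a))                        # 0.2 = d1, 0.1 = d2
\end{verbatim}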
Note that $\underline{C}^{(a,b)}_{c}$ and $\overline{C}^{(a,b)}_{c}$ are shuffles of $M$, compare \cite[{\S}3.2.3]{Nels} and \cite[\S3.6]{DuSe} (cf.\ also \cite{N}), so they are automatically copulas. More precisely, as shuffles of $M$ they are rewritten as
\begin{equation}\label{eq:C shuffle}
\begin{split}
\underline{C}^{(a,b)}_{c} & =M(4,\{[0,a-d_1],[a-d_1,a],[a, 1-b+d_1], [1-b+d_1,1]\},(4,2,3,1),-1)\\
\overline{C}^{(a,b)}_{c} & =M(4,\{[0,d_2],[d_2,b],[b,a+b-d_2], [a+b-d_2,1]\},(1,3,2,4),1),
\end{split}
\end{equation}
for $0 \leqslant c \leqslant \min\{a, b, 1-a, 1-b\}$, where the last parameter on the right-hand side in the above expressions is a function $f:\{1,2,\ldots,n\}\to\{-1,1\}$, which in the first line of Equation \eqref{eq:C shuffle} is identically equal to $-1$ and in the second one identically equal to $1$.
To compute the values of various measures of concordance of these copulas we need the values of the concordance function $\mathcal{Q}$ introduced in Section \ref{sec:prelim} for various copulas such as $W$, $M$, and $\underline{C}^{(a,b)}_{c}$, respectively $\overline{C}^{(a,b)}_{c}$.
The following proposition is proved in \cite[Propositions 4\&5]{KoBuKoMoOm2}. It was also pointed out there that these results are symmetric with respect to the main diagonal and to the counter-diagonal \cite[Proposition 6]{KoBuKoMoOm2}.
\begin{proposition} \label{prop1}
Let $(a,b)\in\mathds I^2$ and $0\leqslant c\leqslant\min\{a,b,1-a,1-b\}$. For copulas $\underline{C}^{(a,b)}_{c}$ and $\overline{C}^{(a,b)}_{c}$ it holds:
\begin{enumerate}[(a)]
\item $\mathcal{Q}(M, \underline{C}^{(a,b)}_{c}) =$ \\ $=\left\{ \begin{array}{ll}
0; & \text{if } b \geqslant d_1 + \frac12, \vspace{1mm}\\
(2d_1+1-2b)^2; & \text{if } \frac12(1+d_1) \leqslant b \leqslant d_1 + \frac12, a \leqslant b-d_1, \vspace{1mm}\\
(1+d_1-a-b)(1+3d_1+a-3b); & \text{if } \frac12(1+d_1) \leqslant b \leqslant d_1 + \frac12, a \geqslant b-d_1, \vspace{1mm}\\
d_1(2+3d_1-4b); & \text{if } b \leqslant \frac12(1+d_1), a \leqslant b-d_1, \vspace{1mm}\\
2d_1(1+d_1-a-b)-(a-b)^2; & \text{if } d_1 \geqslant 2a-1, d_1 \geqslant 2b-1, d_1 \geqslant a-b,d_1 \geqslant b-a, \vspace{1mm}\\
d_1(2+3d_1-4a); & \text{if } a \leqslant \frac12(1+d_1), b \leqslant a-d_1, \vspace{1mm}\\
(1+d_1-a-b)(1+3d_1-3a+b); & \text{if } \frac12(1+d_1) \leqslant a \leqslant d_1 + \frac12, b \geqslant a-d_1, \vspace{1mm}\\
(2d_1+1-2a)^2; & \text{if } \frac12(1+d_1) \leqslant a \leqslant d_1 + \frac12, b \leqslant a-d_1, \vspace{1mm}\\
0; & \text{if } a \geqslant d_1 + \frac12,
\end{array} \right.$
\item $\mathcal{Q}(M, \overline{C}^{(a,b)}_{c}) = 1 - 4(a-d_2)(b-d_2),$
\item $\mathcal{Q}(W, \underline{C}^{(a,b)}_{c}) = 4d_1(1-a-b+d_1)-1.$
\end{enumerate}
\end{proposition}
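Proposition \ref{prop1} is also easy to test numerically. By \eqref{phi} and the symmetry of $\mathcal{Q}$ in its arguments (see \cite[Corollary 5.1.2]{Nels}), part \textit{(b)} predicts $\phi\big(\overline{C}^{(a,b)}_{c}\big)=\frac12\big(3\,\mathcal{Q}(M,\overline{C}^{(a,b)}_{c})-1\big)=1-6(a-d_2)(b-d_2)$. A minimal sketch (in Python; the names are ours) confirms this on an example by integrating along the diagonal:
\begin{verbatim}
import numpy as np

a, b, c = 0.3, 0.6, 0.2
d2 = min(a, b) - c

def C_up(u, v):
    # the upper bound copula defined above
    return min(min(u, v),
               max(d2, u - b + d2, v - a + d2, u + v - a - b + d2))

t = (np.arange(20000) + 0.5) / 20000
phi_num  = 6 * np.mean([C_up(s, s) for s in t]) - 2
phi_pred = 1 - 6 * (a - d2) * (b - d2)
print(phi_num, phi_pred)   # both ~ 0.4
\end{verbatim}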
\section{Local bounds for Spearman's footrule}\label{sec:footrule}
In this section we compute local bounds of the set of all copulas corresponding to a fixed value of Spearman's footrule $\phi\in[-\frac12,1]$~:
\begin{equation}\label{eq:phi}
\mathcal F_{\phi}:=\{C\in\mathcal C\,|\,\phi(C)=\phi\}.
\end{equation}
We choose a point $(a,b)$ in the interior of the unit square $\mathds I^2$. We need to find the minimal and maximal value of $C(a,b)$ for all copulas in $\mathcal F_{\phi}$.
Suppose now that $C\in \mathcal F_{\phi}$ and $C(a,b)=d$. By \cite[Theorem 3.2.3]{Nels} it follows that
$$\underline{C}^{(a,b)}_{c_1} \leqslant C \leqslant \overline{C}^{(b,a)}_{c_2},$$
where $d=\underline{C}^{(a,b)}_{c_1}(a,b)=W(a,b)+c_1$ and $d=\overline{C}^{(b,a)}_{c_2}(a,b)=M(a,b)-c_2$, so that
\begin{equation}\label{c_i}
c_1=d-W(a,b)\ \text{and}\ c_2=M(a,b)-d.
\end{equation}
Here we prefer to view $c=c(d)$ as a function of $d=C(a,b)$. Note that $c$ takes values on $[c_1,c_2]$.
Since Spearman's footrule is monotone (Property (C2)), it follows that
$$\underline{f}_{a,b}(d)\leqslant \phi(C)\leqslant \overline{f}_{a,b}(d),$$
where we write
\begin{equation}
\label{f-lower}\underline{f}_{a,b}(d)=\phi\left(\underline{C}^{(a,b)}_{c_1}\right)=\phi\left(\underline{C}^{(a,b)}_{d-W(a,b)}\right)
\end{equation}
and
\begin{equation}\label{f-upper}
\overline{f}_{a,b}(d)=\phi\left(\overline{C}^{(b,a)}_{c_2}\right)=\phi\left(\overline{C}^{(b,a)}_{M(a,b)-d}\right).
\end{equation}
\begin{theorem}\label{thm_phi}\label{thm_phi_low}
The pointwise infimum $\underline{F}_{\phi}$
of $\mathcal F_{\phi}$ for $\phi\in[-\frac12,1]$ is given by
\begin{equation}\label{F_lower}
\underline{F}_{\phi}(a,b)=\left\{ \begin{array}{ll}
\frac12\left(a+b-\sqrt{\frac{2}{3}(1-\phi)+(b-a)^2}\right); & \text{if } b\notin\{0,1\}, \text{ and }\frac{1-\phi}{6b} \leqslant a \leqslant 1- \frac{1-\phi}{6(1-b)}, \vspace{1mm}\\
W(a,b); & \text{otherwise, }
\end{array} \right.
\end{equation}
for any $(a,b)\in\mathds I^2$.
\end{theorem}
\begin{proof}
Proposition \ref{prop1}\textit{(b)} implies that function $\overline{f}_{a,b}$ of \eqref{f-upper} is given by
\begin{equation}\label{f_ab_dep_on_d_2}
\overline{f}_{a,b}(d)=\frac32\mathcal{Q}\left(M,\overline{C}^{(b,a)}_{c_2}\right)-\frac12=1-6(a-d)(b-d).
\end{equation}
Since
\begin{equation}\label{d_two}
d=\overline{C}^{(b,a)}_{c_2}(a,b)
\end{equation}
it follows that $W(a,b)\leqslant d\leqslant M(a,b)$. For such values of $d$, the expression on the right-hand side of \eqref{f_ab_dep_on_d_2} is increasing in $d$ since its maximum is achieved at $d=\frac12(a+b)$, which is greater than or equal to $M(a,b)$. Thus, the minimal possible value of $\overline{f}_{a,b}(d)$ is achieved when $d=W(a,b)$. Then, we have
$$\overline{f}_{a,b}\left(W(a,b)\right)=\left\{ \begin{array}{ll}
1-6ab; & \text{if } a+b \leqslant 1, \vspace{1mm}\\
1-6(1-a)(1-b); & \text{if } a+b \geqslant 1.
\end{array} \right.$$
We need to find the inverse of the function $\overline{f}_{a,b}:[W(a,b), M(a,b)]\to [\overline{f}_{a,b}\left(W(a,b)\right),1]$.
If for a given value $\phi \in \left[ -\frac12, 1 \right]$ it holds that $\phi \leqslant \overline{f}_{a,b}\left(W(a,b)\right)$, then we take $d=W(a,b)$.
Otherwise, we take the inverse of the expression \eqref{f_ab_dep_on_d_2}, i.e.
\begin{equation}\label{bound for d_2}
d = \frac12\left(a+b-\sqrt{\frac{2}{3}(1-\phi)+(b-a)^2}\right).
\end{equation}
The inequality $\phi \geqslant \overline{f}_{a,b}\left(W(a,b)\right)$ gives us the condition
$$b\notin\{0,1\}, \text{ and }\frac{1-\phi}{6b} \leqslant a \leqslant 1- \frac{1-\phi}{6(1-b)}.$$
We conclude that the required lower bound is given by \eqref{F_lower}.
\end{proof}
\begin{corollary}\label{cor_phi}
Suppose that $\underline{F}_{\phi}$ is the infimum given in Theorem \ref{thm_phi_low}. Then:
\begin{enumerate}[(i)]
\item $\underline{F}_{\phi}$ is a copula for every $\phi \in \left[ -\frac12, 1\right]$.
\item We have $\underline{F}\,_{-\frac12}=W$ and $\underline{F}_{1}=M$, while for every $\phi \in \left( -\frac12, 1 \right)$ the copula $\underline{F}_{\phi}$ is different from Fr\'echet-Hoeffding lower and upper bounds $W$ and $M$. It has a singular component distributed on graphs of hyperbolas
$$ab=\frac{1-\phi}{6}\ \text{ and }\ (1-a)(1-b)=\frac{1-\phi}{6}$$
for $a\in [\frac12-\ell(\phi),\frac12+\ell(\phi)]$ and on the anti-diagonal $a+b=1$ for other $a\in\mathds I$. Here, we write $\ell(\phi)=\frac16{\sqrt{3(1+2\phi)}}$. The absolutely continuous part of $\underline{F}_{\phi}$ is distributed inside the region enclosed by both hyperbolas. (See Figure \ref{fig Fspodaj}.)
\item $\underline{F}_{\phi}$ is increasing in $\phi$ (in the concordance order).
\item $\underline{F}_{\phi}$ is symmetric and radially symmetric: $\underline{F}_{\phi}(a,b)=\underline{F}_{\phi}(b,a)$ and $\underline{F}_{\phi}(a,b)=\widehat{\underline{F}}_{\phi}(a,b)$.
\item For $\phi\in\left( -\frac12, 1 \right)$, copula $\underline{F}_{\phi}$ is not a member of $\mathcal F_{\phi}$, but it holds that $\phi\left(\underline{F}_{\phi}\right)<\phi$. (See Figure \ref{phi(phi)}.)
\end{enumerate}
\end{corollary}
\begin{figure}[h]
\includegraphics[width=5cm]{f_spodaj_-frac14.pdf} \hfil \includegraphics[width=5cm]{f_spodaj_frac14.pdf} \\
\includegraphics[width=5cm]{f_spodaj_frac34.pdf} \hfil \includegraphics[width=5cm]{f_spodaj_099.pdf}
\caption{ Scatterplots of $\underline{F}_{\phi}$ for $\phi = -\frac14, \frac14$, (first row), and $\phi= \frac34, 0.99$ (second row).} \label{fig Fspodaj}
\end{figure}
\begin{proof}
From \eqref{F_lower} we see that $\underline{F}\,_{-\frac12}=W$ and $\underline{F}_{1}=M$. Assume now that $\phi \in \left( -\frac12, 1 \right)$. To prove that $\underline{F}_{\phi}$ is a copula we use \cite[Theorem 2.1]{DuJa}. (See \cite{DuJa} for the definitions of Dini's derivatives as well.) For fixed $b$, the right-hand upper Dini derivative of $\underline{F}_{\phi}(a,b)$ is
\begin{equation}\label{F_lower-a}
D^+\underline{F}_{\phi}(a,b)=\left\{ \begin{array}{ll}
0; & \text{if } b=0 \text{ or }b>0 \text{ and }a < \min\left\{1-b, \frac{1-\phi}{6b}\right\}, \vspace{1mm}\\
\frac12\left(1+\frac{b-a}{\sqrt{\frac{2}{3}(1-\phi)+(b-a)^2}}\right); & \text{if } b\notin\{0,1\}, \text{ and }\frac{1-\phi}{6b} \leqslant a < 1- \frac{1-\phi}{6(1-b)}, \vspace{1mm}\\
1; & \text{otherwise. }
\end{array} \right.
\end{equation}
For $(a,b)\in\mathds I^2$ such that $\frac{1-\phi}{6b} < a < 1- \frac{1-\phi}{6(1-b)}$, we have
\begin{equation}\label{F_lower-ab}
\frac{\partial^2}{\partial a\partial b}\underline{F}_{\phi}(a,b)=
\frac{\frac{2}{3}(1-\phi)}{\left(\frac{2}{3}(1-\phi)+(b-a)^2\right)^{\frac32}}.
\end{equation}
Since the second derivative in \eqref{F_lower-ab} is positive and the Dini derivative in \eqref{F_lower-a} has a positive jump at points on the graphs of hyperbolas
$$ab=\frac{1-\phi}{6}\text{ and }(1-a)(1-b)=\frac{1-\phi}{6}$$
for $a\in [\frac12-\ell(\phi),\frac12+\ell(\phi)]$ or on the anti-diagonal $a+b=1$ for other $a\in\mathds I$, it follows that statements in \textit{(i)} and \textit{(ii)} hold.
The derivative with respect to $\phi$ of the expression on the righthand side of \eqref{bound for d_2} is positive for $\phi \in \left( -\frac12, 1 \right)$. Thus \textit{(iii)} follows.
Statement \textit{(iv)} is a special case of Lemma \ref{lem:symm}.
Finally, we have $\phi(\underline{F}_{\phi})=6\int_0^1 \underline{F}_{\phi}(t,t)\, dt-2=2-\phi-\sqrt{6(1-\phi)}$, which is less than $\phi$ for $\phi\in\left(-\frac12,1\right)$. So, \textit{(v)} holds as well.
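Indeed, writing $s=\sqrt{\frac{1-\phi}{6}}\in\left(0,\frac12\right)$, the diagonal section of $\underline{F}_{\phi}$ equals $0$ for $t<s$, $t-s$ for $t\in[s,1-s]$, and $2t-1$ for $t>1-s$, so that
$$\int_0^1 \underline{F}_{\phi}(t,t)\, dt=\int_s^{1-s}(t-s)\, dt+\int_{1-s}^1 (2t-1)\, dt=\frac12-s+s^2,$$
and hence $\phi(\underline{F}_{\phi})=6\left(\frac12-s+s^2\right)-2=1-6s+6s^2=2-\phi-\sqrt{6(1-\phi)}$, as displayed above.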
\end{proof}
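The identity $\phi(\underline{F}_{\phi})=2-\phi-\sqrt{6(1-\phi)}$ is also easy to confirm numerically. The following Python sketch (a supplementary check only; it assumes that NumPy and SciPy are available) integrates the diagonal section of \eqref{F_lower} by quadrature and compares the result with the closed form:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def F_lower(a, b, phi):
    # Pointwise infimum from the theorem; W is the
    # Frechet-Hoeffding lower bound.
    W = max(a + b - 1.0, 0.0)
    if 0.0 < b < 1.0 and (1 - phi)/(6*b) <= a <= 1 - (1 - phi)/(6*(1 - b)):
        return 0.5*(a + b - np.sqrt(2.0/3.0*(1 - phi) + (b - a)**2))
    return W

for phi in [-0.25, 0.25, 0.75]:
    I, _ = quad(lambda t: F_lower(t, t, phi), 0.0, 1.0)
    print(phi, 6*I - 2, 2 - phi - np.sqrt(6*(1 - phi)))
\end{verbatim}
For each tested $\phi$ the two printed values agree up to quadrature precision.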
The computation of the upper bound $\overline{F}_{\phi}$ is done using the opposite bounds to the ones used in the above proof. However, the computation is much more involved and requires a careful analysis of several cases depending on $\phi$. We divide the unit square into several areas. As $\phi$ increases, their shapes evolve and they disappear one after another. (See Figure \ref{fig obmocja phi}.)
We define the areas in the unit square as
\begin{align*}
\Delta_{\phi}^1 = \bigg\{ (a,b)\in \mathds I^2; \, & a \leqslant \frac12 \left( 1- \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right), \, b \geqslant \frac12 \left( 1+ \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right), \\ & b \leqslant a+\frac12 \left( 1- \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right) \bigg\} \\
\Delta_{\phi}^2 = \bigg\{ (a,b)\in \mathds I^2; \, & b \leqslant \frac12 \left( 1+ \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right), \\ & \frac13 \left( 2b-1 +\sqrt{(2b-1)^2+1+2\phi} \right) \leqslant a \leqslant \frac13 \left( b+1 -\sqrt{(2b-1)^2+1+2\phi} \right) \bigg\}
\\
\Delta_{\phi}^3 = \bigg\{ (a,b)\in \mathds I^2; \, & a \geqslant \frac12 \left( 1- \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right), \\ & \frac13 \left( a+1 +\sqrt{(2a-1)^2+1+2\phi} \right) \leqslant b \leqslant \frac13 \left(2a+2 -\sqrt{(2a-1)^2+1+2\phi} \right) \bigg\} \\
\Delta_{\phi}^4 = \bigg\{ (a,b)\in \mathds I^2; \, & \frac13 \left(b+1 -\sqrt{(2b-1)^2+1+2\phi} \right) \leqslant a \leqslant \frac13 \left( b+1 +\sqrt{(2b-1)^2+1+2\phi} \right),\\
& \frac13 \left( a+1 -\sqrt{(2a-1)^2+1+2\phi} \right) \leqslant b \leqslant \frac13 \left(a+1 +\sqrt{(2a-1)^2+1+2\phi} \right), \\
& a \leqslant \sqrt{\frac23 (1-\phi)-(b-1)^2}, \, b \leqslant \sqrt{\frac23 (1-\phi)-(a-1)^2} \bigg\} \\
\Delta_{\phi}^5 = \bigg\{ (a,b)\in \mathds I^2; \, & b \geqslant \frac12 \left( 1- \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right), \\ & \frac13 \left( b+1 +\sqrt{(2b-1)^2+1+2\phi} \right) \leqslant a \leqslant \frac13 \left(2b+2 -\sqrt{(2b-1)^2+1+2\phi} \right) \bigg\}
\end{align*}
\begin{align*}
\Delta_{\phi}^6 = \bigg\{ (a,b)\in \mathds I^2; \, & a \leqslant \frac12 \left( 1+ \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right), \\ & \frac13 \left( 2a-1 +\sqrt{(2a-1)^2+1+2\phi} \right) \leqslant b \leqslant \frac13 \left( a+1 -\sqrt{(2a-1)^2+1+2\phi} \right) \bigg\}
\\
\Delta_{\phi}^7 = \bigg\{ (a,b)\in \mathds I^2; \, & b \leqslant \frac12 \left( 1- \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right), \, a \geqslant \frac12 \left( 1+ \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right), \\ & b \geqslant a-\frac12 \left( 1- \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right) \bigg\}
\end{align*}
Notice that for $\phi$ close to $-\frac12$ all these areas are nonempty. When $\phi$ increases some of the areas vanish. More precisely, all the areas are nonempty for $\phi \in [-\frac12, \, -\frac13]$. For $\phi=-\frac12$ area $\Delta_\phi^4$ is reduced to the main diagonal. For $\phi \in \left( -\frac13,\, -\frac15 \right]$ only areas $\Delta_\phi^1$ and $\Delta_\phi^7$ are empty. For $\phi \in \left( -\frac15, \frac14 \right]$ only area $\Delta_\phi^4$ is nonempty. For $\phi \in \left( \frac14, 1\right]$ all the areas are empty.
In Figure \ref{fig obmocja phi} this evolution is illustrated by region plots of the areas for $\phi = -\frac12, -\frac25$ (first row), and $\phi= -\frac{32}{100}, 0$ (second row).
\begin{figure}[h]
\includegraphics[width=5cm]{obmocja_phi_-frac12.pdf} \hfil \includegraphics[width=5cm]{obmocja_phi_-frac25.pdf} \\
\includegraphics[width=5cm]{obmocja_phi_-032.pdf} \hfil \includegraphics[width=5cm]{obmocja_phi_0.pdf}
\caption{ Regionplots of the areas $\Delta_\phi^1, \ldots, \Delta_\phi^7$ for $\phi = -\frac12, -\frac25, -\frac{32}{100}, 0$.} \label{fig obmocja phi}
\end{figure}
Next, we define functions of $\phi$, depending on $(a,b)$, on these areas. Since, for a fixed value of $(a,b)$, they are inverses of $\underline{f}_{a,b}$, we choose to denote them by $\delta^i_{a,b}(\phi)$. They are
\begin{align*}\label{delta_ab^i}
\delta_{a,b}^1(\phi)&=\frac12 \left(2b-1+\frac{\sqrt{3}}{3}\sqrt{1+2\phi}\right),\\
\delta_{a,b}^2(\phi)&=\frac13 \left(2b-1+\sqrt{(1-2b)^2+1+2\phi}\right),
\\
\delta_{a,b}^3(\phi)&=\frac13 \left(a+3b-2+\sqrt{(1-2a)^2+1+2\phi}\right),\\
\delta_{a,b}^4(\phi)&=\frac12 \left(a+b-1+\sqrt{3(b-a)^2+(1-2a)(1-2b)+\frac23\left(1+2\phi\right)}\right),\\
\delta_{a,b}^5(\phi)&=\frac13 \left(3a+b-2+\sqrt{(1-2b)^2+1+2\phi}\right),\\
\delta_{a,b}^6(\phi)&=\frac13 \left(2a-1+\sqrt{(1-2a)^2+1+2\phi}\right),\\
\delta_{a,b}^7(\phi)&=\frac12 \left(2a-1+\frac{\sqrt{3}}{3}\sqrt{1+2\phi}\right).
\end{align*}
We are now ready to state one of our main results.
\begin{theorem}\label{thm_phi_upp1}
The pointwise supremum $\overline{F}_{\phi}$
of $\mathcal F_{\phi}$ for any $\phi\in[-\frac12, 1]$ and for any $(a,b)\in\mathds I^2$
is given by
\begin{equation}\label{F_upper1}
\overline{F}_{\phi}(a,b)=\left\{ \begin{array}{ll}
\delta_{a,b}^1(\phi); & \text{if } (a,b) \in \Delta_{\phi}^1 , \vspace{1mm}\\
\delta_{a,b}^2(\phi); & \text{if } (a,b) \in \Delta_{\phi}^2 , \vspace{1mm}\\
\delta_{a,b}^3(\phi); & \text{if } (a,b) \in \Delta_{\phi}^3 , \vspace{1mm}\\
\delta_{a,b}^4(\phi); & \text{if } (a,b) \in \Delta_{\phi}^4 , \vspace{1mm}\\
\delta_{a,b}^5(\phi); & \text{if } (a,b) \in \Delta_{\phi}^5 , \vspace{1mm}\\
\delta_{a,b}^6(\phi); & \text{if } (a,b) \in \Delta_{\phi}^6 , \vspace{1mm}\\
\delta_{a,b}^7(\phi); & \text{if } (a,b) \in \Delta_{\phi}^7 , \vspace{1mm}\\
M(a,b); & \text{otherwise. }
\end{array} \right.
\end{equation}
\end{theorem}
Before we give the proof we gather some observations in the following corollary:
\begin{corollary}\label{cor phi2}
Suppose that $\overline{F}_{\phi}$ is the supremum given in Theorem \ref{thm_phi_upp1}. Then:
\begin{enumerate}[(i)]
\item The bound $\overline{F}_{-\frac12}$ is a shuffle of $M$, namely $\overline{F}_{-\frac12} = M(2,\{[0,\frac12],[\frac12,1]\},(2,1),1)$, and $\overline{F}_{\phi}=M$ for $\phi \in \left[ \frac14, 1 \right]$.
\item For $\phi \in \left( -\frac12, \frac14\right)$ the bound $\overline{F}_{\phi}$ is not a copula, but a proper quasi-copula.
\item $\overline{F}_{\phi}$ is increasing in $\phi$ (in the concordance order on quasi-copulas).
\item $\overline{F}_{\phi}$ is symmetric and radially symmetric: $\overline{F}_{\phi}(a,b)=\overline{F}_{\phi}(b,a)$ and $\overline{F}_{\phi}(a,b)=\widehat{\overline{F}}_{\phi}(a,b)$.
\item If we extend the weak measure of concordance $\phi$ to any quasi-copula $Q$ by defining $$\phi(Q)=6\int_0^1 Q(t,t)\, dt - 2$$
then
we have $\phi\left(\overline{F}_{\phi}\right)>\phi$ for all $\phi\in\left(-\frac12,1\right)$. (See Figure \ref{phi(phi)}.)
\end{enumerate}
\end{corollary}
\begin{figure}[h]
\includegraphics[width=6cm]{fi_F_fi__.pdf}
\caption{Graphs of values of $\phi(\underline{F}_{\phi})$ (orange) and $\phi(\overline{F}_{\phi})$ (green). } \label{phi(phi)}
\end{figure}
\begin{proof} First notice that $\overline{F}_{\phi}$ equals the Fr\'echet-Hoeffding upper bound $M$ for any $\phi \in [\frac14, 1]$, since then all the regions $\Delta_{\phi}^i$ disappear. If $\phi = -\frac12$, then $\overline{F}_{\phi}$ is a shuffle of $M$, namely
$\overline{F}_{-\frac12} = M(2,\{[0,\frac12],[\frac12,1]\},(2,1),1)$. For any $\phi \in (-\frac12, \frac14)$, the point $(\frac12,\frac12)$ lies inside the area $\Delta_\phi^4$ and the second derivative $\frac{\partial^2 \delta_{a,b}^4(\phi)}{\partial a \partial b}$ at the point $(\frac12,\frac12)$ equals $-\frac{\sqrt{3}}{2 \sqrt{4 \phi +2}}<0$. Therefore, \cite[Theorem 2.1]{DuSe} implies that $\overline{F}_{\phi}$ is not a copula. So, \textit{(i)} and \textit{(ii)} hold.
A careful analysis that we omit shows that $\overline{F}_{\phi}$ is an increasing function of $\phi$, as \textit{(iii)} asserts. Statement \textit{(iv)} is a consequence of Lemma \ref{lem:symm}.
A rather technical calculation shows that
$$
\phi(\overline{F}_{\phi})=\left\{ \begin{array}{ll}
\frac12\left(2-\sqrt{3-12\phi}+(1+2\phi)\log\left(\frac{3+\sqrt{3-12\phi}}{3-\sqrt{3-12\phi}}\right)\right); & \text{if } \ -\frac12\leqslant\phi\leqslant\frac14, \vspace{1mm}\\
1; & \text{otherwise. }
\end{array} \right.
$$
So \textit{(v)} holds (see Figure \ref{phi(phi)}).
\end{proof}
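On the diagonal only the region $\Delta_{\phi}^4$ is active: its defining conditions at $a=b=t$ reduce to $t^2+(t-1)^2\leqslant\frac23(1-\phi)$, and outside this window $\overline{F}_{\phi}(t,t)=M(t,t)=t$. This makes the closed form for $\phi(\overline{F}_{\phi})$ straightforward to check numerically; a short Python sketch (again a supplementary check only, assuming NumPy and SciPy are available):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def F_upper_diag(t, phi):
    # Diagonal section of the supremum: delta^4 inside
    # Delta_phi^4, and M(t,t) = t elsewhere.
    if 2*t*t - 2*t + 1 <= 2.0/3.0*(1 - phi):
        return 0.5*(2*t - 1 + np.sqrt((1 - 2*t)**2 + 2.0/3.0*(1 + 2*phi)))
    return t

def closed_form(phi):
    s = np.sqrt(3 - 12*phi)
    return 0.5*(2 - s + (1 + 2*phi)*np.log((3 + s)/(3 - s)))

for phi in [-0.4, -0.1, 0.2]:
    I, _ = quad(lambda t: F_upper_diag(t, phi), 0.0, 1.0)
    print(phi, 6*I - 2, closed_form(phi))
\end{verbatim}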
In Figure \ref{f zgoraj} we give 3D plots of the quasi-copulas $\overline{F}_{\phi}$ for $\phi = -\frac25$, $-\frac{32}{100}$, and $0$.
We remark that the set $\mathcal F_{1}$ consists of $M$ only, while $\mathcal F_{-\frac12}$ consists of all copulas $C$ such that $C\leqslant\overline{F}_{-\frac12}$, since $\phi$ is monotone. This is a special case of a result of Fuchs and McCord \cite{FuMcC}.
\begin{figure}[h]
\includegraphics[width=6cm]{kvazikopula_F_zgoraj_-frac25.pdf} \hfil \includegraphics[width=6cm]{kvazikopula_F_zgoraj_-032.pdf}\\
\includegraphics[width=6cm]{kvazikopula_F_zgoraj_0.pdf}
\caption{ Graphs of quasicopulas $\overline{F}_{\phi}$ for $\phi = -\frac25$, $-\frac{32}{100}$ and 0.} \label{f zgoraj}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thm_phi_upp1}]
The pointwise supremum $\overline{F}_{\phi}$ is symmetric and radially symmetric by Lemma \ref{lem:symm}.
Thus we may assume that the point $(a,b)$ lies in the triangle $\Delta= \{ (a,b) \in \mathds I^2; \, a \leqslant b, \, a+b \leqslant 1 \}$.
Now, we use Proposition \ref{prop1} to show that for $(a,b)\in\Delta$ we have
\begin{equation}\label{f underline phi}
\begin{split}
\underline{f}_{a,b}(d)&=\phi\left(\underline{C}^{(a,b)}_{d-W(a,b)}\right)=\frac32\mathcal{Q}\left(M,\underline{C}^{(a,b)}_{c_1}\right)-\frac12= \\
&=\left\{ \begin{array}{ll}
-\frac12; & \text{if } b \geqslant d + \frac12, \vspace{1mm}\\
f_{a,b}^1(d); & \text{if } \frac12(1+d) \leqslant b \leqslant d + \frac12, \vspace{1mm}\\
f_{a,b}^2(d); & \text{if } a+d \leqslant b \leqslant \frac12(1+d), \vspace{1mm}\\
f_{a,b}^4(d); & \text{if } b \leqslant a+d.
\end{array} \right. ,
\end{split}
\end{equation}
where
\begin{align*}
f_{a,b}^1(d)&=\frac32\left(2d+1-2b\right)^2-\frac12,\\
f_{a,b}^2(d)&=\frac32 d\left(2+3d-4b\right)-\frac12,\\
f_{a,b}^4(d)&=3d\left(1+d-a-b\right)-\frac32 \left(b-a\right)^2-\frac12.
\end{align*}
Since $d=\underline{C}^{(a,b)}_{c_1}(a,b)$
it follows that $W(a,b)=0\leqslant d\leqslant M(a,b)=a$. For such values of $d$ the expression on the right-hand side of \eqref{f underline phi} is increasing in $d$ and thus, the maximal possible value of $\underline{f}_{a,b}(d)$ is achieved when $d=a$. Then, we have
$$\underline{f}_{a,b}(a)=\left\{ \begin{array}{ll}
-\frac12; & \text{if } b \geqslant a + \frac12, \vspace{1mm}\\
6(b-a)^2-6(b-a)+1; & \text{if } \frac12(1+a) \leqslant b \leqslant a + \frac12, \vspace{1mm}\\
\frac32 a\left(3a-4b+2\right)-\frac12; & \text{if } 2a \leqslant b \leqslant \frac12(1+a), \vspace{1mm}\\
1-\frac32\left((a-1)^2+b^2\right); & \text{if } b \leqslant 2a.
\end{array} \right. $$
For the function $\underline{f}_{a,b}:[0, a]\to [-\frac12,
\underline{f}_{a,b}(a)]$ we need to find its inverse.
If for a given value $\phi \in \left[ -\frac12, 1 \right]$ it holds that $\phi \geqslant \underline{f}_{a,b}(a)$, we take $d=a=M(a,b)$.
Otherwise, we take the inverses of the expressions for $f_{a,b}^i$ which are $\delta_{a,b}^i(\phi)$ for $i=1, 2, 4$.
The inequality $\phi \geqslant \underline{f}_{a,b}(a)$ gives us the area
\begin{align*}
\bigg\{ (a,b)\in \Delta; \,
& \left( b \geqslant a+\frac12 \left( 1- \frac{\sqrt{3}}{3} \sqrt{1+2\phi} \right) \textrm{ and } b \geqslant \frac12(1+a) \right) \textrm{ or } \\
& \left( a \leqslant \frac13 \left( 2b-1 +\sqrt{(2b-1)^2+1+2\phi} \right) \textrm{ and } 2a \leqslant b \leqslant \frac12(1+a) \right) \textrm{ or } \\
& \left( b \geqslant \sqrt{\frac23 (1-\phi)-(a-1)^2} \textrm{ and } b\leqslant 2a \right)
\bigg\}
\end{align*}
which is equal to the area $\Delta \setminus (\Delta^1_\phi \cup \Delta^2_\phi\cup \Delta^4_\phi)$. We continue by considering the inequalities $\phi \geqslant f^i_{a,b}(a)$ for $i=1,2,4$. Their
careful consideration yields the areas where each of the expressions $\delta_{a,b}^i(\phi)$ is valid, and these are exactly the areas $\Delta^i_\phi \cap \Delta$ for $i=1,2,4$. Now, we reflect the expressions $\delta_{a,b}^1(\phi), \delta_{a,b}^2(\phi), \delta_{a,b}^4(\phi)$ over the main diagonal and over the anti-diagonal to obtain the expressions $ \delta_{a,b}^1(\phi), \ldots, \delta_{a,b}^7(\phi)$. The areas where they are valid are the reflections of the areas $\Delta^1_\phi\cap \Delta, \Delta^2_\phi\cap \Delta, \Delta^4_\phi\cap \Delta$ over the main diagonal and over the anti-diagonal, i.e., the areas $\Delta^1_\phi, \ldots, \Delta^7_\phi$.
We conclude that the required upper bound is given by \eqref{F_upper1}. The detailed calculations of the functions and their domains were done with the help of the Wolfram Mathematica software \cite{Mathematica}.
\end{proof}
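The inversion step in the proof can also be verified symbolically. The following Python sketch (a supplementary check of the algebra; it assumes SymPy is available) confirms that each $\delta^i_{a,b}$, $i=1,2,4$, inverts the corresponding branch $f^i_{a,b}$ of \eqref{f underline phi}:
\begin{verbatim}
import sympy as sp

a, b, d, phi = sp.symbols('a b d phi', real=True)
R = sp.Rational

f = {1: R(3, 2)*(2*d + 1 - 2*b)**2 - R(1, 2),
     2: R(3, 2)*d*(2 + 3*d - 4*b) - R(1, 2),
     4: 3*d*(1 + d - a - b) - R(3, 2)*(b - a)**2 - R(1, 2)}

delta = {1: (2*b - 1 + sp.sqrt(R(1, 3)*(1 + 2*phi)))/2,
         2: (2*b - 1 + sp.sqrt((1 - 2*b)**2 + 1 + 2*phi))/3,
         4: (a + b - 1 + sp.sqrt(3*(b - a)**2 + (1 - 2*a)*(1 - 2*b)
             + R(2, 3)*(1 + 2*phi)))/2}

for i in f:
    # Composing f^i with its claimed inverse must return phi itself.
    print(i, sp.simplify(f[i].subs(d, delta[i]) - phi))  # expected: 0, 0, 0
\end{verbatim}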
\section{Local bounds for Gini's gamma}\label{sec:gamma}
In this section we compute the local bounds for Gini's gamma. For each value $\gamma\in[-1,1]$ we write
\begin{equation}\label{eq:gamma}
{\mathcal G}_{\gamma}:=\{C\in\mathcal C\,|\,\gamma(C)=\gamma\}.
\end{equation}
Our aim is to find the upper and lower bounds of ${\mathcal G}_{\gamma}$. We denote by $\underline{G}_{\gamma}(a,b)$ the pointwise infimum of ${\mathcal G}_{\gamma}$ and by $\overline{G}_{\gamma}(a,b)$ the pointwise supremum of ${\mathcal G}_{\gamma}$. The computation of the upper bound of ${\mathcal G}_{\gamma}$ is done in a way similar to the one used for the upper bound of $\mathcal F_{\phi}$ in the previous section. The lower bound of ${\mathcal G}_{\gamma}$ is then obtained using the symmetries that hold for Gini's gamma, proved in Lemma \ref{lem:symm}. Part (b) of Lemma \ref{lem:symm} does not hold for Spearman's footrule, so the argument there had to be different. The reason is that Spearman's footrule is only a weak measure of concordance, while Gini's gamma is a measure of concordance.
Now, we fix a value $\gamma\in[-1,1]$ and we choose a point $(a,b)$ in the interior of the unit square $\mathds I^2$. We will find the minimal and maximal value of $C(a,b)$ for all copulas in ${\mathcal G}_{\gamma}$.
Suppose that $C\in {\mathcal G}_{\gamma}$ and $C(a,b)=d$. By results of \cite{KoBuKoMoOm2} it follows that
$$\underline{C}^{(a,b)}_{c_1} \leqslant C \leqslant \overline{C}^{(b,a)}_{c_2}.$$
Recall that $d=\underline{C}^{(a,b)}_{c_1}(a,b)=W(a,b)+c_1$ and $d=\overline{C}^{(b,a)}_{c_2}(a,b)=M(a,b)-c_2$ and so \eqref{c_i} holds. Since concordance functions are monotone it follows that
$$\underline{g}_{a,b}(d)\leqslant \gamma(C)\leqslant \overline{g}_{a,b}(d),$$
where we write
\begin{equation}
\label{g-lower}\underline{g}_{a,b}(d)=\gamma\left(\underline{C}^{(a,b)}_{c_1}\right)=\gamma\left(\underline{C}^{(a,b)}_{d-W(a,b)}\right)
\end{equation}
and
\begin{equation}\label{g-upper}
\overline{g}_{a,b}(d)=\gamma\left(\overline{C}^{(b,a)}_{c_2}\right)=\gamma\left(\overline{C}^{(b,a)}_{M(a,b)-d}\right).
\end{equation}
First, we compute the upper bound $\overline{G}_{\gamma}(a,b)$. To simplify the expressions we introduce some new notation. We divide the unit square in several areas depending on the value of $\gamma\in [-1,1]$.
The shapes of these regions evolve, and they vanish one after another, as $\gamma$ increases. The dynamics can be observed in Figure \ref{fig obmocja gamma}. We define the areas in the unit square as
\begin{align*}
\Omega_{\gamma}^1 = \bigg\{ (a,b)\in \mathds I^2; \, & a\leqslant \frac12, \,\frac12 \left( 1+\frac{1+\gamma}{1-2a} \right) \leqslant b \leqslant 1 -\frac{1+\gamma}{4a}\bigg\}
\\
\Omega_{\gamma}^2 = \bigg\{ (a,b)\in \mathds I^2; \, & b \leqslant \frac12 \left(1+ \frac{1+\gamma}{1-2a} \right), (1+2a-2b)^2+4a(1-b) \geqslant 1+\gamma, \\
& b \geqslant \frac13 \left(a+1 +\frac12 \sqrt{(2a-1)^2+3(1+\gamma)} \right), b \geqslant \frac14 \left(6a-1+\frac{1+\gamma}{1-2a} \right)
\bigg\}
\end{align*}
\begin{align*}
\Omega_{\gamma}^3 = \bigg\{ (a,b)\in \mathds I^2; \, &
b \leqslant \frac13 \left(a+1 +\frac12 \sqrt{(2a-1)^2+3(1+\gamma)} \right), b \leqslant \frac18 \left(3a+6- \frac{1+\gamma}{a} \right), \\
& a \leqslant \frac{1}{11} \left(3+5b- \sqrt{9(2b-1)^2+11(1+\gamma)} \right)
\bigg\} \\
\Omega_{\gamma}^4 = \bigg\{ (a,b)\in \mathds I^2; \, & a \geqslant \frac13 \left(b+1 - \frac12 \sqrt{(2b-1)^2+3(1+\gamma)} \right),
a \geqslant \frac18 \left(3b-1+ \frac{1+\gamma}{1-b} \right), \\
& b \geqslant \frac{1}{11} \left(3+5a + \sqrt{9(2a-1)^2+11(1+\gamma)} \right)
\bigg\}\\
\Omega_{\gamma}^5 = \bigg\{ (a,b)\in \mathds I^2; \, & \frac{1}{11} \left(3+5b- \sqrt{9(2b-1)^2+11(1+\gamma)} \right) \leqslant a \leqslant \frac{1}{11} \left(3+5b + \sqrt{9(2b-1)^2+11(1+\gamma)} \right), \\
& \frac{1}{11} \left(3+5a- \sqrt{9(2a-1)^2+11(1+\gamma)} \right) \leqslant b \leqslant \frac{1}{11} \left(3+5a + \sqrt{9(2a-1)^2+11(1+\gamma)} \right), \\
& b \leqslant -2a+ \sqrt{3a(a+2)-(1+\gamma)}, a \leqslant -2b+ \sqrt{3b(b+2)-(1+\gamma)}
\bigg\} \\
\Omega_{\gamma}^6 =\bigg\{ (a,b)\in \mathds I^2; \, & b \geqslant \frac13 \left(a+1 - \frac12 \sqrt{(2a-1)^2+3(1+\gamma)} \right),
b \geqslant \frac18 \left(3a-1+ \frac{1+\gamma}{1-a} \right), \\
& a \geqslant \frac{1}{11} \left(3+5b + \sqrt{9(2b-1)^2+11(1+\gamma)} \right)
\bigg\}
\\
\Omega_{\gamma}^7 = \bigg\{ (a,b)\in \mathds I^2; \, & a \leqslant \frac13 \left(b+1 +\frac12 \sqrt{(2b-1)^2+3(1+\gamma)} \right), a \leqslant \frac18 \left(3b+6- \frac{1+\gamma}{b} \right), \\
& b \leqslant \frac{1}{11} \left(3+5a- \sqrt{9(2a-1)^2+11(1+\gamma)} \right)
\bigg\} \\
\Omega_{\gamma}^8 = \bigg\{ (a,b)\in \mathds I^2; \, & a \leqslant \frac12 \left(1+ \frac{1+\gamma}{1-2b} \right), (1-2a+2b)^2+4b(1-a) \geqslant 1+\gamma, \\
& a \geqslant \frac13 \left(b+1 +\frac12 \sqrt{(2b-1)^2+3(1+\gamma)} \right), a \geqslant \frac14 \left(6b-1+\frac{1+\gamma}{1-2b} \right)
\bigg\} \\
\Omega_{\gamma}^9 = \bigg\{ (a,b)\in \mathds I^2; \, & b\leqslant \frac12, \,\frac12 \left( 1+\frac{1+\gamma}{1-2b} \right) \leqslant a \leqslant 1 -\frac{1+\gamma}{4b}\bigg\}
\end{align*}
Notice that for $\gamma$ close to $-1$ all these areas are nonempty. When $\gamma$ increases some of the areas vanish. More precisely, all the areas are nonempty for $\gamma \in [-1, \, -\frac34]$. For $\gamma=-1$ area $\Omega_\gamma^5$ is reduced to the main diagonal and the areas $\Omega_\gamma^2$ and $\Omega_\gamma^8$ are reduced to unions of two perpendicular line segments on the lines $a=\frac12$, $b=\frac12$. For $\gamma \in \left( -\frac34,\, -\frac49 \right]$ only areas $\Omega_\gamma^1$ and $\Omega_\gamma^9$ are empty. For $\gamma \in \left( -\frac49,\, -\frac{4}{13} \right]$ the areas $\Omega_\gamma^1$, $\Omega_\gamma^2$, $\Omega_\gamma^8$ and $\Omega_\gamma^9$ are empty. For $\gamma \in \left( -\frac{4}{13}, \frac12 \right]$ only area $\Omega_\gamma^5$ is nonempty. For $\gamma \in \left( \frac12, 1\right]$ all the areas are empty.
In Figure \ref{fig obmocja gamma} we give the regionplots of the areas for $\gamma = -1, -\frac{24}{25}, -\frac45$, (first row), and $\gamma= -\frac{7}{10}, -\frac{43}{100}, \, 0$ (second row).
\begin{figure}[h]
\includegraphics[width=5cm]{obmocja_gamma_-1.pdf} \hfil
\includegraphics[width=5cm]{obmocja_gamma_-096.pdf} \hfil
\includegraphics[width=5cm]{obmocja_gamma_-frac45.pdf} \\
\includegraphics[width=5cm]{obmocja_gamma_-07.pdf} \hfil
\includegraphics[width=5cm]{obmocja_gamma_-043.pdf} \hfil \includegraphics[width=5cm]{obmocja_gamma_0.pdf}
\caption{ Regionplots of the areas $\Omega_\gamma^1, \ldots, \Omega_\gamma^9$ for $\gamma = -1, -\frac{24}{25},-\frac45, -\frac{7}{10}, -\frac{43}{100}, 0$.} \label{fig obmocja gamma}
\end{figure}
On these areas we also define functions of $\gamma$ depending on $(a,b)$. Note that for a fixed value of $(a,b)$ they are inverses of $\underline{g}_{a,b}$. They are
\begin{align*}\label{omega_ab^i}
\omega_{a,b}^1(\gamma)&=\frac12 \left(a+b-1+\sqrt{(a+b-1)^2+1+\gamma}\right),\\
\omega_{a,b}^2(\gamma)&=\frac14 \left(a+3b-2+\sqrt{(a+b-1)^2+(1-2a)(1-2b)+2(1+\gamma)}\right),\\
\omega_{a,b}^3(\gamma)&=\frac17 \left(2a+4b-3+\sqrt{(2a+4b-3)^2+7(1+\gamma)}\right),\\
\omega_{a,b}^4(\gamma)&=\frac17 \left(3a+5b-4+\sqrt{(4a+2b-3)^2+7(1+\gamma)}\right),\\
\omega_{a,b}^5(\gamma)&=\frac12 \left(a+b-1+\frac{\sqrt{3}}{3}\sqrt{5(a+b-1)^2-2(1-2a)(1-2b)+2(1+\gamma)}\right),\\
\omega_{a,b}^6(\gamma)&=\frac17 \left(5a+3b-4+\sqrt{(2a+4b-3)^2+7(1+\gamma)}\right),\\
\omega_{a,b}^7(\gamma)&=\frac17 \left(4a+2b-3+\sqrt{(4a+2b-3)^2+7(1+\gamma)}\right),\\
\omega_{a,b}^8(\gamma)&=\frac14 \left(3a+b-2+\sqrt{(a+b-1)^2+(1-2a)(1-2b)+2(1+\gamma)}\right).
\end{align*}
We are now ready to state one of our main results.
\begin{theorem}\label{thm_gamma_upp}
The pointwise supremum $\overline{G}_{\gamma}$
of ${\mathcal G}_{\gamma}$ for any $\gamma\in[-1, 1]$ and for any $(a,b)\in\mathds I^2$
is given by
\begin{equation}\label{G_upper}
\overline{G}_{\gamma}(a,b)=\left\{ \begin{array}{ll}
\omega_{a,b}^1(\gamma); & \text{if } (a,b) \in \Omega_{\gamma}^1 \cup \Omega_{\gamma}^9 , \vspace{1mm}\\
\omega_{a,b}^2(\gamma); & \text{if } (a,b) \in \Omega_{\gamma}^2 , \vspace{1mm}\\
\omega_{a,b}^3(\gamma); & \text{if } (a,b) \in \Omega_{\gamma}^3 , \vspace{1mm}\\
\omega_{a,b}^4(\gamma); & \text{if } (a,b) \in \Omega_{\gamma}^4 , \vspace{1mm}\\
\omega_{a,b}^5(\gamma); & \text{if } (a,b) \in \Omega_{\gamma}^5 , \vspace{1mm}\\
\omega_{a,b}^6(\gamma); & \text{if } (a,b) \in \Omega_{\gamma}^6 , \vspace{1mm}\\
\omega_{a,b}^7(\gamma); & \text{if } (a,b) \in \Omega_{\gamma}^7 , \vspace{1mm}\\
\omega_{a,b}^8(\gamma); & \text{if } (a,b) \in \Omega_{\gamma}^8 , \vspace{1mm}\\
M(a,b); & \text{otherwise. }
\end{array} \right.
\end{equation}
\end{theorem}
Before we give the proof we gather some observations:
\begin{corollary}\label{cor gamma1}
Suppose that $\overline{G}_{\gamma}$ is the supremum given in Theorem \ref{thm_gamma_upp}. Then:
\begin{enumerate}[(i)]
\item We have $\overline{G}_{-1}=W$ and $\overline{G}_{\gamma}=M$ for $\gamma \in [\frac12, 1]$.
\item For any $\gamma \in (-1, 0)$ the supremum $\overline{G}_{\gamma}$ is not a copula, but a proper quasi-copula.
\item For any $\gamma\in [0,\frac12)$ the supremum $\overline{G}_{\gamma}$ is a copula that is different from the Fr\'echet-Hoeffding lower and upper bounds $W$ and $M$. It has a singular component. Its absolutely continuous part is distributed inside the bounded region enclosed by the graphs of hyperbolas $\omega^5_{a,b}=a$ and $\omega^5_{a,b}=b$ (as functions of $a$ and $b$). Its singular component is distributed on the boundary of the region and on the two segments of the diagonal $a=b$ outside the region. (See Figure \ref{g zgoraj1}.)
\item $\overline{G}_{\gamma}$ is increasing in $\gamma$ (in the concordance order on quasi-copulas).
\item $\overline{G}_{\gamma}$ is symmetric and radially symmetric.
\item If we extend the measure of concordance $\gamma$ to any quasi-copula $Q$ by defining \begin{equation}\label{ext_gam}
\gamma(Q)=4\int_0^1 \left(Q(t,t)+Q(t,1-t)\right) dt - 2
\end{equation}
then
$\gamma\left(\overline{G}_{\gamma}\right)>\gamma$ for all $\gamma\in(-1,1)$. (See Figure \ref{gamma(gamma)}.)
\end{enumerate}
\end{corollary}
\begin{proof} First notice that $\overline{G}_{\gamma}$ equals the Fr\'echet-Hoeffding upper bound $M$ for any $\gamma \in [\frac12, 1]$, since then all the regions $\Omega_{\gamma}^i$ disappear. If $\gamma = -1$, then $\overline{G}_{-1}=W$. For any $\gamma \in (-1, 0)$ the point
$$\left(\frac12 \left(1- \frac{\sqrt{3}}{3} \sqrt{1-2 \gamma} \right), \frac12 \left(1- \frac{\sqrt{3}}{3} \sqrt{1-2 \gamma} \right)\right)$$
lies in the lower left corner of the area $\Omega_{\gamma}^5$, and the mixed second derivative $\frac{\partial^2 \omega_{a,b}^5(\gamma)}{\partial a \partial b}$ at this point equals $\frac{\gamma}{3}<0$; by continuity, the mixed second derivative is then negative also at nearby points inside the area $\Omega_{\gamma}^5$. Therefore, \cite[Theorem 2.1]{DuSe} implies that $\overline{G}_{\gamma}$ is not a copula for $\gamma \in (-1, 0)$. So, \textit{(i)} and \textit{(ii)} hold.
Next, assume that $\gamma\in[0,\frac12)$. Observe that only the area $\Omega^5_{\gamma}$ has nonempty interior for such $\gamma$ and so
$$ \overline{G}_{\gamma}(a,b)=\left\{ \begin{array}{ll}
\omega_{a,b}^5(\gamma)& \text{if } (a,b) \in \Omega_{\gamma}^5 , \vspace{1mm}\\
M(a,b); & \text{otherwise. }
\end{array} \right.$$
Also, note that $\Omega^5_{\gamma}$ is the area bounded by the graphs of hyperbolas $\omega^5_{a,b}=a$ and $\omega^5_{a,b}=b$. To prove that $\overline{G}_{\gamma}$ is a copula we use \cite[Theorem 2.1]{DuJa}. (See \cite{DuJa} for the definitions of Dini's derivatives as well.) For fixed $b$, the right-hand upper Dini derivative of $\overline{G}_{\gamma}(a,b)$ is
\begin{equation}\label{GG_lower-a}
D^+\overline{G}_{\gamma}(a,b)=\left\{ \begin{array}{ll}
0; & \text{if } 0\leqslant a\leqslant \min\{b,\omega^5_{a,b}(\gamma)\}, \vspace{1mm}\\
\frac12 \left(1+\frac{\sqrt{3}\left(5a+b-3\right)}{3\sqrt{5(a+b-1)^2-2(1-2a)(1-2b)+2(1+\gamma)}}\right); & \text{if } a \text{ is such that } (a,b) \in \Omega_{\gamma}^5, \vspace{1mm}\\
1; & \text{otherwise. }
\end{array} \right.
\end{equation}
For $a\in\mathds I$ such that $(a,b)$ is in the interior of $\Omega_{\gamma}^5$, we have
\begin{equation}\label{GG_lower-ab}
\frac{\partial^2}{\partial a\partial b}\overline{G}_{\gamma}(a,b)=
\frac{\sqrt{3}(6a+6b-12ab-2+\gamma)}{{3}\left(5(a+b-1)^2-2(1-2a)(1-2b)+2(1+\gamma)\right)^{\frac32}}.
\end{equation}
The diagonal points of $\Omega_{\gamma}^5$ have coordinates $\frac12\pm \frac{\sqrt{3}}{6}\sqrt{1-2\gamma}$. The second derivative in \eqref{GG_lower-ab} is positive on the square $[\frac12- \frac{\sqrt{3}}{6}\sqrt{1-2\gamma},\frac12+ \frac{\sqrt{3}}{6}\sqrt{1-2\gamma}]^2$ since the value of $6a+6b-12ab-2+\gamma$ is positive there and $\Omega_{\gamma}^5$ is contained in the square. The Dini derivative \eqref{GG_lower-a} has a positive jump at points on the boundary of the region that is enclosed by the graphs of hyperbolas $\omega^5_{a,b}=a$ and $\omega^5_{a,b}=b$ and on the two segments of the diagonal $a=b$ outside the region. So, it follows that statement \textit{(iii)} holds.
A careful analysis that we omit shows that $\overline{G}_{\gamma}$ is an increasing function of $\gamma$, as \textit{(iv)} asserts. Statement \textit{(v)} is a consequence of Lemma \ref{lem:symm}.
A technical calculation shows that \textit{(vi)} holds (see Figure \ref{gamma(gamma)} that was drawn using the Mathematica software \cite{Mathematica}).
\end{proof}
In Figure \ref{g zgoraj} we give 3D plots of the quasi-copulas $\overline{G}_{\gamma}$ for $\gamma = -\frac78, -\frac35, -\frac{2}{13}$. In Figure \ref{g zgoraj1} we give scatterplots of the copulas $\overline{G}_{\gamma}$ for $\gamma = 0, \frac14$.
\begin{figure}[h]
\includegraphics[width=6cm]{G_zgoraj_-frac78.pdf}
\includegraphics[width=5.3cm]{G_zgoraj_-frac35.pdf}
\includegraphics[width=6cm]{G_zgoraj_-frac213.pdf}
\caption{Graphs of quasicopulas $\overline{G}_{\gamma}$ for $\gamma = -\frac78, -\frac35, -\frac{2}{13}$} \label{g zgoraj}
\end{figure}
\begin{figure}[h]
\includegraphics[width=6cm]{G_zgoraj_0.pdf} \hfil \includegraphics[width=6cm]{G_zgoraj_frac14.pdf}
\caption{ Scatterplots of copulas $\overline{G}_{\gamma}$ for $\gamma = 0, \frac14$} \label{g zgoraj1}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thm_gamma_upp}]
Due to Lemma \ref{lem:symm}~(a) we may assume that the point $(a,b)$ lies in the triangle $\Delta= \{ (a,b) \in \mathds I^2; \, a \leqslant b, \, a+b \leqslant 1 \}$.
We use Proposition \ref{prop1} to show that for $(a,b)\in\Delta$ we have
\begin{equation}\label{g underline gamma}
\begin{split}
\underline{g}_{a,b}(d)&=\gamma\left(\underline{C}^{(a,b)}_{d-W(a,b)}\right)=\mathcal{Q}\left(M,\underline{C}^{(a,b)}_{c_1}\right)+\mathcal{Q}\left(W,\underline{C}^{(a,b)}_{c_1}\right)= \\
&=\left\{ \begin{array}{ll}
g_{a,b}^1(d); & \text{if } b \geqslant d + \frac12, \vspace{1mm}\\
g_{a,b}^2(d); & \text{if } \frac12(1+d) \leqslant b \leqslant d + \frac12, \vspace{1mm}\\
g_{a,b}^3(d); & \text{if } a+d \leqslant b \leqslant \frac12(1+d), \vspace{1mm}\\
g_{a,b}^5(d); & \text{if } b \leqslant a+d.
\end{array} \right. ,
\end{split}
\end{equation}
where
\begin{align*}
g_{a,b}^1(d)&=4d \left(1-a-b+d\right)-1,\\
g_{a,b}^2(d)&=\left(1-2b+2d\right)^2+4d (1-a-b+d)-1,\\
g_{a,b}^3(d)&= d\left(6-4a-8b+7d\right)-1, \\
g_{a,b}^5(d)&= 6d\left(1-a-b+d\right)- \left(a-b\right)^2-1.
\end{align*}
\begin{figure}[h]
\includegraphics[width=6cm]{G_spodaj_frac78.pdf}
\includegraphics[width=5.3cm]{G_spodaj_frac35.pdf}
\includegraphics[width=6cm]{G_spodaj_frac213.pdf}
\caption{Graphs of quasicopulas $\underline{G}_{\gamma}$ for $\gamma = \frac{2}{13}, \frac35, \frac78$.} \label{g spodaj}
\end{figure}
Since $d=\underline{C}^{(a,b)}_{c_1}(a,b)$
it follows that $W(a,b)=0\leqslant d\leqslant M(a,b)=a$. For such values of $d$ the expression on the right-hand side of \eqref{g underline gamma} is increasing in $d$ and thus, the maximal possible value of $\underline{g}_{a,b}(d)$ is achieved when $d=a$. Then, we have
$$\underline{g}_{a,b}(a)=\left\{ \begin{array}{ll}
4a(1-b)-1; & \text{if } b \geqslant a + \frac12, \vspace{1mm}\\
(1+2a-2b)^2+4a(1-b)-1; & \text{if } \frac12(1+a) \leqslant b \leqslant a + \frac12, \vspace{1mm}\\
a(6+3a-8b)-1; & \text{if } 2a \leqslant b \leqslant \frac12(1+a), \vspace{1mm}\\
6a(1-b)-(a-b)^2-1; & \text{if } b \leqslant 2a.
\end{array} \right. $$
For the function $\underline{g}_{a,b}:[0, a]\to [-1, \underline{g}_{a,b}(a)]$ we need to find its inverse.
If for a given value $\gamma \in \left[ -1, 1 \right]$ it holds that $\gamma \geqslant \underline{g}_{a,b}(a)$, we take $d=a=M(a,b)$.
Otherwise, we take the inverses of the expressions for $g_{a,b}^i$, which are $\omega_{a,b}^i(\gamma)$ for $i=1, 2, 3, 5$.
The inequality $\gamma \geqslant \underline{g}_{a,b}(a)$ gives us the area
\begin{align*}
\bigg\{ (a,b)\in \Delta; \,
& \left( b \geqslant 1- \frac{1+\gamma}{4a} \textrm{ and } b \geqslant a+\frac12 \right) \textrm { or } \\
& \left( (1+2a-2b)^2+4a(1-b) \leqslant 1+\gamma \textrm{ and } \frac12(1+a) \leqslant b \leqslant a + \frac12 \right) \textrm{ or } \\
& \left( b \geqslant \frac18 \left( 3a+6- \frac{1+\gamma}{a} \right) \textrm{ and } 2a \leqslant b \leqslant \frac12(1+a) \right) \textrm{ or } \\
& \left( b \geqslant -2a+\sqrt{3a(a+2)-(1+\gamma)} \textrm{ and } b\leqslant 2a \right)
\bigg\}
\end{align*}
which is equal to the area $\Delta \setminus (\Omega^1_\gamma \cup \Omega^2_\gamma\cup \Omega^3_\gamma\cup \Omega^5_\gamma)$. We continue by considering the inequalities $\gamma \geqslant \underline{g}^i_{a,b}(a)$ for $i=1,2,3,5$. Their careful consideration yields areas where each of the expressions $\omega_{a,b}^i(\gamma)$ is valid. These are the areas $\Omega^i_\gamma \cap \Delta$ for $i=1,2,3,5$. Now, we reflect the expressions $\omega_{a,b}^1(\gamma), \omega_{a,b}^2(\gamma),\omega_{a,b}^3(\gamma),\omega_{a,b}^5(\gamma) $ over the main diagonal and over the counter-diagonal to obtain the expressions $ \omega_{a,b}^1(\gamma), \ldots, \omega_{a,b}^8(\gamma)$. The areas where they are valid are the reflections of the areas $\Omega^1_\gamma\cap \Delta, \Omega^2_\gamma\cap \Delta, \Omega^3_\gamma\cap \Delta, \Omega^5_\gamma\cap \Delta$ over the main diagonal and over the counter-diagonal, i.e., the areas $\Omega^1_\gamma, \ldots, \Omega^9_\gamma$.
\begin{figure}[h]
\includegraphics[width=6cm]{G_spodaj_-frac14.pdf} \hfil \includegraphics[width=6cm]{G_spodaj_0.pdf}
\caption{ Scatterplots of copulas $\underline{G}_{\gamma}$ for $\gamma = -\frac14$, $0$.} \label{g spodaj1}
\end{figure}
We conclude that the required upper bound is given by \eqref{G_upper}. The detailed calculations of the functions and their domains were done with the help of the Wolfram Mathematica software \cite{Mathematica}.
\end{proof}
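As in the previous section, the inversion step can be verified symbolically; the following Python/SymPy sketch (a supplementary check of the algebra only) confirms that each $\omega^i_{a,b}$, $i=1,2,3,5$, inverts the corresponding branch $g^i_{a,b}$ of \eqref{g underline gamma}:
\begin{verbatim}
import sympy as sp

a, b, d, g = sp.symbols('a b d g', real=True)
R = sp.Rational

gfun = {1: 4*d*(1 - a - b + d) - 1,
        2: (1 - 2*b + 2*d)**2 + 4*d*(1 - a - b + d) - 1,
        3: d*(6 - 4*a - 8*b + 7*d) - 1,
        5: 6*d*(1 - a - b + d) - (a - b)**2 - 1}

omega = {1: (a + b - 1 + sp.sqrt((a + b - 1)**2 + 1 + g))/2,
         2: (a + 3*b - 2 + sp.sqrt((a + b - 1)**2
             + (1 - 2*a)*(1 - 2*b) + 2*(1 + g)))/4,
         3: (2*a + 4*b - 3 + sp.sqrt((2*a + 4*b - 3)**2 + 7*(1 + g)))/7,
         5: (a + b - 1 + sp.sqrt(R(1, 3)*(5*(a + b - 1)**2
             - 2*(1 - 2*a)*(1 - 2*b) + 2*(1 + g))))/2}

for i in gfun:
    # Composing g^i with its claimed inverse must return gamma itself.
    print(i, sp.simplify(gfun[i].subs(d, omega[i]) - g))  # expected: 0 each
\end{verbatim}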
A direct consequence of Lemma \ref{lem:symm} (b) is the following corollary.
\begin{corollary}\label{thm_gamma_low}
The pointwise infimum $\underline{G}_{\gamma}$
of ${\mathcal G}_{\gamma}$ for any $\gamma\in[-1, 1]$ is obtained by reflecting the pointwise supremum $\overline{G}_{-\gamma}$ with respect to either variable, i.e.,
$$\underline{G}_{\gamma}(a, b) = a- \overline{G}_{-\gamma}(a, 1-b) = b - \overline{G}_{-\gamma}(1-a, b).$$
\end{corollary}
\begin{figure}[h]
\includegraphics[width=6cm]{gama_G_gama__.pdf}
\caption{Graphs of values of $\gamma(\underline{G}_{\gamma})$ (orange) and $\gamma(\overline{G}_{\gamma})$ (green). } \label{gamma(gamma)}
\end{figure}
In Figure \ref{g spodaj} we give 3D plots of the quasi-copulas $\underline{G}_{\gamma}$ for $\gamma = \frac{2}{13}, \frac35, \frac78$, and in Figure \ref{g spodaj1} we give scatterplots of the copulas $\underline{G}_{\gamma}$ for $\gamma = -\frac14, 0$.
Further observations then follow directly by Corollary \ref{cor gamma1}.
\begin{corollary}\label{cor gamma2}
Suppose that $\underline{G}_{\gamma}$ is the infimum given in Corollary \ref{thm_gamma_low}. Then:
\begin{enumerate}[(i)]
\item We have $\underline{G}_{1}=M$ and $\underline{G}_{\gamma}=W$ for $\gamma \in [-1,-\frac12]$.
\item For any $\gamma \in (0,1)$ the infimum $\underline{G}_{\gamma}$ is not a copula, but a proper quasi-copula.
\item For any $\gamma\in (-\frac12,0]$ the infimum $\underline{G}_{\gamma}$ is a copula that is different from the Fr\'echet-Hoeffding lower and upper bounds $W$ and $M$. It has a singular component. Its absolutely continuous part is distributed inside the bounded region enclosed by the graphs of hyperbolas $\omega^5_{1-a,b}=1-a$ and $\omega^5_{a,1-b}=1-b$ (as functions of $a$ and $b$). Its singular component is distributed on the boundary of the region and on the two segments of the anti-diagonal $a+b=1$ outside the region. (See Figure \ref{g spodaj1}.)
\item $\underline{G}_{\gamma}$ is increasing in $\gamma$ (in the concordance order on quasi-copulas).
\item $\underline{G}_{\gamma}$ is symmetric and radially symmetric.
\item If we extend the measure of concordance $\gamma$ to any quasi-copula $Q$ by \eqref{ext_gam}
then
$\gamma\left(\underline{G}_{\gamma}\right)<\gamma$ for all $\gamma\in(-1,1)$. (See Figure \ref{gamma(gamma)}.)
\end{enumerate}
\end{corollary}
\section{Comparison of local bounds}
In this section we compare the effectiveness of the bounds for Spearman's footrule and Gini's gamma with that of the analogous bounds for Kendall's tau, Spearman's rho, and Blomqvist's beta given by Nelsen and \'Ubeda-Flores in \cite{NeUbFl}.
Suppose that $\kappa:\mathcal{C}\to[-1,1]$ is a given measure of concordance, that $k$ is a fixed value in the range of $\kappa$, and that $\underline{K}_{k}$ and $\overline{K}_{k}$ are the lower and the upper bound of \eqref{eq:kappa}, respectively. Then the effectiveness of $\kappa$ is measured by the function \cite{NeUbFl}
\begin{equation}
m_{\kappa}(k)=1-6\iint_{\mathds I^2} \left|\overline{K}_{k}-\underline{K}_{k}\right|\,du\, dv.
\end{equation}
Here, the double integral represents the volume between the upper and lower local bounds, scaled so that $m_{\kappa}(k)=0$ means there is no improvement on the Fr\'echet-Hoeffding bounds, i.e., $\overline{K}_{k}=M$ and $\underline{K}_{k}=W$, and $m_{\kappa}(k)=1$ means that the two bounds coincide. In Table 1 we give the values of $m_{\kappa}(k)$ for $\kappa=\phi$ and $\kappa=\gamma$. For Gini's gamma the effectiveness function $m_{\gamma}$ is an even function by Corollary \ref{thm_gamma_low}, so we give its values for $k\in[0,1]$ only. The graphs of both functions are presented in Figure \ref{ff spodaj1}.
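The normalization constant $6$ reflects the fact that the largest possible volume between the local bounds is the one between the two Fr\'echet-Hoeffding bounds:
$$\iint_{\mathds I^2} \left(M(u,v)-W(u,v)\right)du\, dv=\frac13-\frac16=\frac16,$$
so that $m_{\kappa}(k)=0$ precisely when $\overline{K}_{k}=M$ and $\underline{K}_{k}=W$.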
\vskip 15pt
\begin{center}
\begin{tabular}{|c|c|}
\hline
\parbox[c]{20mm}{\centering value of $k$} &
\parbox[c]{20mm}{\centering $m_{\phi}(k)$} \\
\hline\hline
-0.5 & 0.7500\\
-0.4 & 0.3718\\
-0.3 & 0.2244\\
-0.2 & 0.1352\\
-0.1 & 0.0820\\
0.0 & 0.0574\\
0.1 & 0.0569\\
0.2 & 0.0763\\
0.3 & 0.1108\\
0.4 & 0.1562\\
0.5 & 0.2146\\
0.6 & 0.2895\\
0.7 & 0.3860\\
0.8 & 0.5130\\
0.9 & 0.6889\\
1.0 & 1.0000\\
\hline
\end{tabular}
\quad\quad\quad
\begin{tabular}{|c|c|}
\hline
\parbox[c]{20mm}{\centering value of $k$} &
\parbox[c]{20mm}{\centering $m_{\gamma}(k)$} \\
\hline\hline
0.0& 0.0581\\
0.1& 0.0633\\
0.2& 0.0792\\
0.3& 0.1059 \\
0.4& 0.1438\\
0.5& 0.1942\\
0.6& 0.2587\\
0.7& 0.3422 \\
0.8& 0.4565\\
0.9& 0.6320\\
1.0& 1.0000\\
\hline
\end{tabular}
\vskip 10pt
{\sc Table 1.} Values of the effectiveness function for \\ Spearman's footrule (left) and Gini's gamma (right).
\end{center}
\vskip 15pt
A comparison of the values in Table 1 with those given in \cite[Table 1]{NeUbFl} shows that Spearman's footrule and Gini's gamma have higher values of the effectiveness function than Spearman's rho and Kendall's tau, so their bounds are stricter. Blomqvist's beta, however, has the highest values for all $k$ considered, except for values of $k$ very close to $1$.
\vskip 15pt
\begin{figure}[h]
\includegraphics[width=4.8cm]{mera_fi.pdf} \hfil \includegraphics[width=6cm]{mera_gama.pdf}
\caption{Graphs of effectiveness functions of Spearman's footrule (left) and Gini's gamma (right).} \label{ff spodaj1}
\end{figure}
\section{Spearman's footrule and Gini's gamma vs. Blomqvist's beta}\label{sec:beta}
In this section we give the exact regions of possible pairs of values $\left(\kappa(C),\beta(C)\right)$, $C\in \mathcal{C}$, where $\kappa$ is either Spearman's footrule or Gini's gamma. Historically, the possible region of pairs of values of a pair of measures of concordance was first studied for Spearman's rho and Kendall's tau. In the 1950s it was considered by Daniels, Durbin and Stuart, and Kruskal \cite{Dani,DuSt,Krus} (see also \cite[\S5.1.3]{Nels}), and later by other authors, e.g., in \cite{Dani,DuSt,FrNe,GeNe2,Krus}. The exact region of possible pairs of values $(\rho(C),\tau(C))$ was given recently in \cite{ScPaTr}. For other pairs of measures of concordance we are aware only of the exact regions of possible pairs $\left(\kappa(C),\beta(C)\right)$ for $\kappa\in\{\rho,\tau,\gamma\}$, which are stated by Nelsen as Exercise 5.17 of \cite{Nels} with a hint of a proof. Our proof for the exact region between Gini's gamma and Blomqvist's beta uses the results of the previous section and is different from the one suggested in \cite{Nels}. The exact region for Spearman's footrule and Blomqvist's beta seems to be new.
\begin{theorem}\label{thm:phi} Let $C$ be any copula.
\begin{enumerate}[(a)]
\item If $\phi(C)=\phi$ for some $\phi\in\left[-\dfrac12,1\right]$, then
\[
1-2\sqrt{\frac{2}{3}(1-\phi)}\
\leqslant \beta(C)\leqslant
\begin{cases} -1+2\sqrt{\frac23\left(1+2\phi\right)}, &
\mbox{if } -\dfrac12\leqslant\phi\leqslant \dfrac14\\ 1, &
\mbox{if } \dfrac14\leqslant\phi\leqslant 1. \end{cases}
\]
\item If $\beta(C)=\beta$ for some $\beta\in\left[-1, 1\right]$, then
\[
\dfrac{3(1+\beta)^2}{16}-\dfrac12\
\leqslant \phi(C)\leqslant
1-\dfrac{3(1-\beta)^2}{8} .
\]
\end{enumerate}
The bounds are attained.
\end{theorem}
\begin{proof}
Suppose that $\phi(C)=\phi$. Then $\underline{F}_{\phi}(\frac12,
\frac12) \leqslant C(\frac12, \frac12)\leqslant
\overline{F}_{\phi}(\frac12, \frac12)$.
From Theorem \ref{thm_phi_low} we have $\underline{F}_{\phi}(\frac12,
\frac12) = \frac12\left(1-\sqrt{\frac{2}{3}(1-\phi)}\right)$. Since for
$\phi \leqslant \frac14$ the point $(\frac12, \frac12)$ lies in the area
$\Delta_{\phi}^4$ we have $\overline{F}_{\phi}(\frac12, \frac12) =
\delta_{\frac12, \frac12}^4(\phi) =
\frac12\sqrt{\frac23\left(1+2\phi\right)}$. For $\phi \geqslant \frac14$
we have $\overline{F}_{\phi}(\frac12, \frac12)= M(\frac12, \frac12) =
\frac12$. Now, $\beta(C) = 4C(\frac12, \frac12) - 1$ gives us point (a).
The upper and lower bounds are attained by the copulas $\overline{C}^{(\frac12,
\frac12)}_{c_2}$ and $\underline{C}^{(\frac12, \frac12)}_{c_1}$,
respectively. Next, point (b) is obtained from (a) by inverting the
functions.
\end{proof}
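For instance, for $\phi(C)=0$ part (a) gives $1-2\sqrt{2/3}\leqslant \beta(C)\leqslant -1+2\sqrt{2/3}$, i.e., approximately $\beta(C)\in[-0.633,\,0.633]$, while for $\beta(C)=0$ part (b) gives $\phi(C)\in\left[-\frac{5}{16},\frac58\right]$.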
In Figure \ref{fig3} we display the set of all possible pairs
$(\phi(C), \beta(C))$ for a copula $C$. The expressions for the bounds
of the shaded regions are given in Theorem \ref{thm:phi}.
\begin{figure}[h]
\includegraphics[width=5cm]{Slika6-kosir.pdf} \hfil
\includegraphics[width=6.7cm]{Slika5-kosir.pdf}
\caption{ Spearman's footrule vs.\ Blomqvist's beta
}\label{fig3}
\end{figure}
\begin{theorem}\label{thm:gamma} Let $C$ be any copula.
\begin{enumerate}[(a)]
\item If $\gamma(C)=\gamma$ for some $\gamma\in\left[-1, 1\right]$, then
\[
\left.\begin{array}{ll}
-1, & \mbox{if } -1\leqslant\gamma\leqslant-\dfrac12 \\
1-2\sqrt{\frac23(1-\gamma)}, & \mbox{if }
-\dfrac12\leqslant\gamma\leqslant1
\end{array}\right\}
\leqslant \beta(C)\leqslant
\begin{cases} -1+2\sqrt{\frac23(1+\gamma)}, & \mbox{if }
-1\leqslant\gamma\leqslant\dfrac12 \\ 1, & \mbox{if }
\dfrac12\leqslant\gamma\leqslant1 . \end{cases}
\]
\item If $\beta(C)=\beta$ for some $\beta\in\left[-1, 1\right]$, then
\[
\dfrac{3(1+\beta)^2}{8}-1\
\leqslant \gamma(C)\leqslant
1-\dfrac{3(1-\beta)^2}{8} .
\]
\end{enumerate}
The bounds are attained.
\end{theorem}
\begin{proof}
Suppose that $\gamma(C)=\gamma$. Then $\underline{G}_{\gamma}(\frac12,
\frac12) \leqslant C(\frac12, \frac12)\leqslant
\overline{G}_{\gamma}(\frac12, \frac12)$.
For $\gamma \leqslant \frac12$ the point $(\frac12, \frac12)$ lies in
the area $\Omega_{\gamma}^5$ so we have $\overline{G}_{\gamma}(\frac12,
\frac12) = \omega_{\frac12, \frac12}^5(\gamma) =
\frac12\sqrt{\frac23(1+\gamma)}$. For $\gamma \geqslant \frac12$ we have
$\overline{G}_{\gamma}(\frac12, \frac12)= M(\frac12, \frac12) =
\frac12$. Next, $\underline{G}_{\gamma}(\frac12, \frac12) = \frac12 -
\overline{G}_{-\gamma}(\frac12, \frac12) =
\frac12\left(1-\sqrt{\frac23(1-\gamma)}\right)$ for $\gamma \geqslant
-\frac12$ and $\underline{G}_{\gamma}(\frac12, \frac12) = 0$ otherwise.
Now, $\beta(C) = 4C(\frac12, \frac12) - 1$ gives us point (a). The upper and lower
bounds are attained by the copulas $\overline{C}^{(\frac12, \frac12)}_{c_2}$
and $\underline{C}^{(\frac12, \frac12)}_{c_1}$, respectively. Finally,
point (b) is obtained from (a) by inverting the functions.
\end{proof}
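For instance, for $\gamma(C)=0$ part (a) gives the same range $1-2\sqrt{2/3}\leqslant \beta(C)\leqslant -1+2\sqrt{2/3}$ as for Spearman's footrule, while for $\beta(C)=0$ part (b) gives $\gamma(C)\in\left[-\frac58,\frac58\right]$; the symmetry of the latter interval is in accordance with Corollary \ref{thm_gamma_low}.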
In Figure \ref{fig4} we display the set of all possible pairs
$(\gamma(C), \beta(C))$ for a copula $C$. The expressions for the bounds
of the shaded regions are given in Theorem \ref{thm:gamma}.
\begin{figure}[h]
\includegraphics[width=5cm]{Slika8-kosir.pdf} \hfil
\includegraphics[width=5cm]{Slika7-kosir.pdf}
\caption{ Gini's gamma vs.\ Blomqvist's beta }\label{fig4}
\end{figure}
\noindent\textbf{Acknowledgement.} The authors are thankful to the referees. Their suggestions helped us to improve the paper. The figures in the paper were drawn using the Mathematica software \cite{Mathematica}.
It would appear that if a firm has no cost of staying open to accept orders, it will open as early as possible. But we shall see that this is not necessarily true. Suppose that all consumers prefer consuming the good in period 1. But if the store opens in period 0, and only one unit of the good is available, some consumers, fearing that the good may be unavailable in period 1, may go to the store in period 0 and buy the good early. By opening the store only in period 1, the firm precludes such wasteful behavior. The paper shows that the monopolist himself does not gain by offering the good early.
Of special interest to us is rent-seeking behavior: will consumers, fearing a stockout, buy the good earlier than at the time they most value consuming it? Such rent-seeking behavior is common. People have been known to wait in line for hours to buy a new model of iPhone, before others do.\footnote
{
See
\url{https://www.cnbc.com/2019/09/20/apple-iphone-11-goes-on-sale-with-lines-outside-major-stores-around-the-world.html}
}
They have also camped out for days and nights in the hope of buying a
house\footnote
{
See ``House buyers sleep on street for NINE nights to buy homes.'' \url{https://www.mirror.co.uk/news/uk-news/people-camping-out-cars-first-16230962}
}.
Perhaps firms create such artificial shortages, or set prices that generate excess demand, to publicize the good, or to create a buzz about it. We do not examine such motives, but our model does examine how the costs consumers incur in seeking the good affect demand for the good and the price the firm can charge.
A contribution of this paper is to shed light on inventory problems. If the firm is known to offer the good only in one period, it never holds any inventory. So examining inventory issues requires asking why the firm offers the good in multiple periods. Related issues are examined by Antoniou and Fiocco (2019), who investigate the inventory behavior of a firm faced with forward-looking consumers who can store a good in anticipation of higher future prices. They show that a seller who cannot commit to future prices can profit from holding inventory when buyers may stockpile. Other work, well surveyed by Antoniou and Fiocco (2019), documents buyer stockpiling in anticipation of higher future prices. But in that work, unlike in ours, a consumer who buys early does not affect the welfare of a consumer who intends to buy later.
A further contribution of the model is to examine a seller's decision of when to offer the good for sale, and thus to allow examination of how governmental regulations (such as blue laws, or hour restrictions on stores) affect prices or sales.
\section{Literature}
A large literature considers dynamic pricing, in which a seller changes the price over time for the purpose of price discrimination: high-valuers buy the good early at a high price, whereas low-valuers buy the good later at a low price. Under some conditions profit maximization may require prices to decline over time, as in Su (2007), where consumers differ both in willingness to pay and in willingness to wait. Other work also considers a firm which has a fixed inventory to sell over an infinite horizon rather than over two periods (Gallien 2006). A firm may also profit by creating shortages in future periods: the possible shortage induces consumers who highly value the good to buy it early at a high price. Analyses have considered such strategies both when consumers observe the stock of inventory at the time they may buy, and when they do not. Papers considering observable stocks, and sales over two periods, include Liu and van Ryzin (2008), and Zhang and Cooper (2008). A similar model, but with customers not observing the seller's stock, is Gallego, Phillips, and Sahin (2008). Advance sales, that is, sales made before the item is delivered, with the possibility of resale, are studied by Cachon and Feldman (2018). Their model focuses on the different prices the firm may charge in different periods, and on a consumer who learns his valuation of the good over time. Like us, they allow for the possibility that a consumer who does not buy in an early period may find that the good has been sold to someone else in an earlier period. But, in contrast to our analysis, they do not consider consumers who differ in the period at which they most value the good, and do not
consider how one consumer may change his behavior depending on what other consumers do.
Little work considers a firm's strategy when it wants customers to hold inventory instead of having the firm incur the expense of holding it. An important, and early, paper is Blattberg, Eppen, and Lieberman (1981). They describe an inventory model in which both consumers and the retailer minimize their own costs, with the variations in price inducing customers to buy early. Glazer and Hassin (1986) consider shifting consumer demand when consumers' holding costs are higher or lower than the firm's. Anily and Hassin (2013) extended Glazer and Hassin's model to heterogeneous consumers.
\section{Assumptions}
The monopolist can sell one unit of the good, either in period 0 or in period 1. All consumers most value the good in period 1; nevertheless, a consumer can buy the good in period 0 and consume it then. Alternatively, we can think of a consumer buying the good in period 0, holding it until period 1, consuming it then, but incurring holding costs. Such early buying reduces the consumer's utility from $V$ to $V-K$. This cost $K$ can arise because early buying causes the consumer to incur a holding cost. Or, as with ice cream, the store can keep the item cold but the consumer cannot, so that a consumer must consume the good in the period he buys it.
If several consumers arrive simultaneously, the good is allocated randomly to one of them. The good's price is $P$. That price is fixed over the two periods, perhaps because of menu costs, perhaps because consumers would get angry if charged more than the price they remembered from an advertisement issued by the store, or perhaps because, were the firm to post a higher price in one period than in another, some consumers would recall only the higher price and so be unwilling to go to the store. Going to the store costs a consumer a search cost $c$, whether he gets the good or not.
Let the equilibrium probability that a consumer chooses to arrive in period $t$ be $q_t$, $t=0,1$. Let $U_{t}$ be the corresponding expected utility. A consumer who never comes to the store obtains zero utility.
In each period, a consumer may decide to arrive with certainty, to never arrive, or to arrive with positive probability less than 1. And if each consumer arrives with positive probability in any period, the aggregate probability that a consumer arrives may be $1$ or less than that. Some of these combinations, however, are infeasible or irrational (for example, a consumer cannot arrive with certainty in period 0 and also arrive with certainty in period 1). That leaves seven possible outcomes. We list them as follows, where $q_t$ is the probability a consumer arrives in period $t$.
\section{Equilibrium behavior}
For the probabilities $q_{t}$ to constitute an equilibrium, the following conditions must be satisfied:
\begin{enumerate}
\item A consumer who chooses never to arrive would have non-positive expected utility were he to arrive at either period. That is, $q_0=q_1=0 \Longrightarrow U_{0},U_{1}\le 0$.
\item If a consumer chooses not to arrive in period 0, but chooses to arrive in period 1 with probability less than 1, then arriving in period 1 must generate zero expected utility, whereas arriving in period 0 generates non-positive expected utility. That is, $0<q_1<1$ and $q_0=0 \Longrightarrow U_{1}=0$ and $U_{0}\le 0$.
\item If, however, a consumer chooses to arrive in period 0, or in period 1, or not to arrive at all, all with positive probabilities, then his expected utility arriving at any period must be zero. That is, $0<q_1,q_0<1$ and $q_0+q_1<1 \Longrightarrow U_{0}=U_{1}=0$.
\item If a consumer chooses to arrive in period $0$ with probability less than $1$, but never arrives in period 1, then his expected utility when arriving in period $0$ must be zero, whereas his expected utility when arriving in period $1$ must be non-positive. That is, $0<q_0<1$ and $q_1=0 \Longrightarrow U_{0}=0$ and $U_{1}\le 0$.
\item A consumer who arrives with positive probability at either period $0$ or period $1$ must be indifferent between arriving at these periods. And if he always wants to arrive at some period, his expected utility must be non-negative. That is, $0<q_1,q_0<1$ and $q_0+q_1=1 \Longrightarrow U_{0}=U_{1} \ge 0$.
\item A consumer who chooses to arrive in period $0$ must expect higher utility than arriving in period $1$ or of never arriving. That is,
$q_0=1 \Longrightarrow U_{0}\ge\max\{U_{1},0\}$.
\item Lastly, a consumer who chooses to arrive in period $1$ must enjoy utility at least as large as when arriving in period $0$ or of never arriving. That is, $q_1=1 \Longrightarrow U_{1} \ge \max\{U_{0},0\}$.
\end{enumerate}
Let the number of potential consumers, or the number of consumers under consideration, have a Poisson distribution with intensity $\lambda$. This assumption fits the situation with a large population of potential consumers, each person wanting the good with a small probability; the Poisson distribution is then obtained as the limit of the Binomial distribution.
Although all consumers more highly value the good in period 1, if a fraction $q_0>0$ nevertheless arrives in period 0, then, by the thinning property of the Poisson distribution, the number of consumers arriving in period 0 has a Poisson distribution with intensity $\lambda_0 \equiv \lambda q_0$.
The number of consumers arriving in period 1 has a Poisson distribution with intensity $\lambda_1 \equiv \lambda q_1$. The expected utilities are
\begin{equation}\label{E9000}
U_{0}= -c+\sum_{j=0}^{\infty} \frac{V-K-P}{j+1}\cdot\frac{\lambda_0^j e^{-\lambda_0}}{
j!}=
-c+{V-K-P\over\lambda_0}\left(1-e^{-\lambda_0}\right),
\end{equation}
\begin{equation}\label{E9001}
U_{1}=-c+e^{-\lambda_0}{V-P\over\lambda_1}\left(1-e^{-\lambda_1}\right).
\end{equation}
Note that the expression for $U_{1}$ is multiplied by the probability $e^{-\lambda_0}$ that no consumer arrives in period 0, and hence the good is still available in period 1.
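The closed forms in \eqref{E9000} and \eqref{E9001} follow from the elementary computation
$$\sum_{j=0}^{\infty} \frac{1}{j+1}\cdot\frac{\lambda^j e^{-\lambda}}{j!}=\frac{e^{-\lambda}}{\lambda}\sum_{j=0}^{\infty} \frac{\lambda^{j+1}}{(j+1)!}=\frac{e^{-\lambda}}{\lambda}\left(e^{\lambda}-1\right)=\frac{1-e^{-\lambda}}{\lambda},$$
where $\frac{1}{j+1}$ is the probability that the consumer obtains the good when $j$ other consumers arrive simultaneously with him.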
By L'H\^{o}pital's rule, $\lim_{\lambda\to0}{1-e^{-\lambda}\over\lambda}=1$. So, as $\lambda$ increases from 0 to 1, the function ${1-e^{-\lambda}\over\lambda}$ decreases from 1 to $1-e^{-1}$.
The firm's expected profit is $\Pi=P(1-e^{-\lambda_0-\lambda_1})$.
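For numerical exploration of the model, the expected utilities \eqref{E9000}--\eqref{E9001} and the firm's expected profit are straightforward to code. A minimal Python sketch (the parameter values in the last line are illustrative only):
\begin{verbatim}
import numpy as np

def ratio(x):
    # (1 - e^{-x})/x, extended by continuity to 1 at x = 0.
    return 1.0 if x == 0 else (1.0 - np.exp(-x))/x

def U0(lam0, V, K, P, c):
    # Expected utility of arriving in period 0, eq. (E9000).
    return -c + (V - K - P)*ratio(lam0)

def U1(lam0, lam1, V, K, P, c):
    # Expected utility of arriving in period 1, eq. (E9001); exp(-lam0)
    # is the probability that no consumer arrived in period 0.
    return -c + np.exp(-lam0)*(V - P)*ratio(lam1)

def profit(lam0, lam1, P):
    # The unit is sold unless no consumer ever arrives.
    return P*(1.0 - np.exp(-lam0 - lam1))

print(U0(0.5, V=1.5, K=0.4, P=0.5, c=0.2),
      U1(0.5, 0.5, V=1.5, K=0.4, P=0.5, c=0.2),
      profit(0.5, 0.5, P=0.5))
\end{verbatim}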
\subsection{Characterizing the different equilibria }
\noindent
\begin{lemma} There is at most one equilibrium of each type.
\end{lemma}
\begin{proof} The claim is straightforward for equilibrium types 7, 6, and 1, where $q_0$ and $q_1$ are explicitly defined.
The type-3 equilibrium (where a consumer may choose never to arrive) requires that $U_{0}=0$, which uniquely defines $\lambda_0$. Substituting the resulting value in $U_{1}=0$ uniquely defines the candidate $\lambda_1$. For the type-2 equilibrium, which assumes $\lambda_0=q_0=0$, $\lambda_1$ follows as above. Similarly, the type-4 equilibrium already assumes that $q_1=0$, leading to $\lambda_{1}=0$, with $\lambda_0$ uniquely determined by $U_{0}=0$.
The type-5 equilibrium (where a consumer is indifferent between arriving in periods 0 and 1) requires that $q_0+q_1=1$, or equivalently that $\lambda_0+\lambda_1=\lambda$. Substituting this condition into the other requirement that $U_{0}=U_{1}$ gives, with the normalization $\lambda=1$ used below, ${V-P\over V-K-P}={e^{\lambda_0}-1\over\lambda_0}{1-\lambda_0\over1-e^{-(1-\lambda_0)}}$. The right-hand side is a monotonic function of $\lambda_0$. Hence, if $\lambda_0$ is in $[0,1]$ it is uniquely defined (and corresponds to an equilibrium).
\end{proof}
We now analyse the existence of each equilibrium type according to the input parameters. We normalize $V-P=1$ and $\lambda=1$. To simplify the notation, let $x \equiv \lambda_0$ denote the expected number of consumers arriving in period 0, and let $y \equiv \lambda_1$ denote the expected number of consumers arriving at the store in period 1. For each type of equilibrium we present the conditions on $c$ and $K$ for having an equilibrium of that type. Note that we describe the feasibility regions for equilibria of types 3-5 using a parametric representation of the conditions on $c$ and $K$.
\begin{enumerate}
\item
A type-1 equilibrium (where no consumer ever arrives) has $\lambda_0=\lambda_1=0$, $U_{0}\le 0$, and $U_{1}\le 0$, requiring that $1-K \le c$ and that $c \ge 1$.
\item
A type-2 equilibrium (where no consumer arrives in period 0, but may arrive in period 1) has $\lambda_0=0$. The condition $U_{1}=0$ is equivalent to $c={1-e^{-y}\over y}$, implying that $c\in(1-1/e,1)$. The condition $U_{0}\le 0$ means that $K+c \ge 1$.
\item
For the type-3 equilibria (where a consumer with ideal period 1 chooses to arrive in period 0, or in period 1, or not to arrive at all, all with positive probabilities), for $0\le x\le 1$ and $0\le y\le1-x$, the condition $U_{0}=0$ reduces to
$c=(1-K){1-e^{-x}\over x}$. The condition $U_{1}=0$ amounts to
$c=e^{-x}{1-e^{-y}\over y}\in\left(e^{-x}{1-e^{-(1-x)}\over1-x},e^{-x}\right)$.
\item
For the type-4 equilibrium (where no consumer arrives in period 1, but consumers may arrive in period 0), for $0 \le x \le 1$, the condition $U_{0}=0$ implies that $K=1-c{x \over 1-e^{-x}}$, and $U_{1} \le 0$ with $q_1=0$ implies that $c \ge e^{-x}$.
\item
In the type-5 equilibrium (where a consumer is indifferent about when to arrive), $\lambda_0+\lambda_1=1$. Define $x \equiv \lambda_0$. For $0 \le \lambda_0 \le 1$, the condition $U_{0}=U_{1}$ reduces to $K=1-{x\over1-x}{1-e^{-(1-x)}\over e^x-1}$. And the non-negativity of $U_{1}$ reduces to $c\le e^{-x}{1-e^{-(1-x)}\over1-x}$. This condition also implies that $c \le 1-{1\over e}$.
\item
In a type-6 equilibrium (where consumers may arrive in period 0), $\lambda_0=1$ and $\lambda_1=0$. The conditions $U_{0} \ge U_{1}$ and $U_{0} \ge 0$ reduce to $(1-K)(1-{1\over e})\ge {1\over e}$, or $K\le {e-2\over e-1}$, and $c \le (1-K)(1-{1\over e})$.
\item
In a type-7 equilibrium (where consumers may arrive in period 1), $\lambda_0=0$ and $\lambda_1=1$. The conditions $U_{1}\ge U_{0}$ and $U_{1}\ge0$ reduce to $1-{1\over e}\ge 1-K$, or $K \ge{1\over e}$ and $c \le 1-{1\over e}$.
\end{enumerate}
Figure \ref{f1} describes the regions corresponding to the different types of equilibria. The point at the intersection of types 7, 5, 3, and 2 has $c=1-1/e$ and $K=1/e$. The point at the intersection of types 7, 6, 5, 4, and 3 has $c=1/e$ and $K=(e-2)/(e-1)$.
\begin{figure}
\centering
\includegraphics[scale=0.6]{e5.pdf}
\vspace{-3cm}
\caption{Types of equilibria in the $(c,K)$ space.}\label{f1}
\end{figure}
From Figure \ref{f1} we conclude that for most combinations of $c$ and $K$ a unique equilibrium exists. However, in the narrow region where a type-5 equilibrium exists, other equilibria also exist.
In the following, denote $\lambda_e \equiv (\lambda_0,\lambda_1)$.
\medskip
\noindent{\bf Example 1} Let a consumer's cost of going to the store be $c=0.2$. Let a consumer's penalty for buying the good earlier than at his ideal period be $K=0.4$. Then the pair of arrival rates at the store $\lambda_0=0$ and $\lambda_1=1$ is a type-7 equilibrium (where consumers may arrive in period 1). The pair $\lambda_0=1$ and $\lambda_1=0$ (denoted by $\lambda_e=(1,0)$) is a type-6 equilibrium (an equilibrium where consumers may arrive in period 0). And the pair $\lambda_0=0.6305$ and $\lambda_1=0.3695$ is a type-5 equilibrium. Note that the type-7 equilibrium is efficient: a consumer who wants to consume the good in period 1 arrives in period 1. A type-6 equilibrium suggests inefficiency. Although consumers want the good in period 1, they arrive in period 0. So if the waiting cost is positive, a consumer who arrives early may incur a cost without increasing the benefit he gets from the good. Early arrival reflects rent-seeking behavior. The same applies to a type-5 equilibrium. In terms of social welfare (the aggregated utilities), note that the social welfare function, denoted as $SW$, satisfies
$$SW=(1-e^{-\lambda_0})(V-P-K)+e^{-\lambda_0}(1-e^{-\lambda_1})(V-P)-c(\lambda_0+\lambda_1)=$$
$$(1-e^{-\lambda_0})0.6+e^{-\lambda_0}(1-e^{-\lambda_1})-0.2(\lambda_0+\lambda_1).$$
Thus social welfare in the type-7, type-6, and type-5 equilibria is 0.432, 0.179, and 0.245, respectively. Put differently, if the store refused to sell in period 0, consumers would be better off.
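The equilibrium conditions and the welfare values quoted in Example 1 can be spot-checked numerically; the sketch below is ours and assumes the normalization $V-P=1$, $\lambda=1$.
\begin{verbatim}
# Spot-check of Example 1 (c = 0.2, K = 0.4, V - P = 1, lambda = 1).
import numpy as np

c, K = 0.2, 0.4
g = lambda l: (1.0 - np.exp(-l)) / l if l > 0 else 1.0

def utilities(l0, l1):
    return (-c + (1 - K) * g(l0), -c + np.exp(-l0) * g(l1))

def SW(l0, l1):
    return ((1 - np.exp(-l0)) * (1 - K)
            + np.exp(-l0) * (1 - np.exp(-l1)) - c * (l0 + l1))

for l0, l1 in [(0, 1), (1, 0), (0.6305, 0.3695)]:
    print((l0, l1), utilities(l0, l1), round(SW(l0, l1), 3))
# Type 7: U1 >= max(U0, 0); type 6: U0 >= max(U1, 0); type 5: U0 = U1 > 0.
\end{verbatim}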
Early arrival reflects behavior of {\it strategic complements} or {\it follow the crowd} (see Hassin and Haviv 2003), where a consumer's best response tends to follow the strategy of the others. Typically in such situations there are two extreme pure-equilibrium strategies and one mixed strategy.
\medskip
\noindent{\bf Example 2} Let $c=0.4$ and $K=0.37$. Then $\lambda_e=(0,1)$ is a type-7 equilibrium, $\lambda_e=(0.041,0.959)$ is a type-5 equilibrium, and $\lambda_e=(0.989,0)$ is a type-4 equilibrium.
\medskip
\noindent{\bf Example 3} Let $c=0.4$ and $K=0.4$. Then $\lambda_e=(0,1)$ is a type-7 equilibrium, $\lambda_e=(0.63,0.37)$ is a type-5 equilibrium; $\lambda_e=(0.8742,0.085)$ is a type-3 equilibrium.
\bigskip
We now change the way we normalize the parameters and assume (instead of $V-P=1$) that $c=1$. To simplify the presentation, we use $V$ to represent $V-P$.
\begin{enumerate}
\item
In a type-1 equilibrium $\lambda_0=\lambda_1=0$. The utilities $U_{0}$ and $U_{1}$ are both non-positive iff $V-K \le 1$ and $V \le 1$.
\item
A type-2 equilibrium has $\lambda_0=0$. The condition $U_{1}=0$ is equivalent to $V={y\over1-e^{-y}}$, implying that $V \ge 1$. The inequality $U_{0}\le 0$ means that $V-K \le 1$.
\item
For the type-3 equilibrium with $0 \le \lambda_0 \le 1$ and $0 \le \lambda_1 \le 1$, $U_{0}=0$ implies that $V=K+{x\over 1-e^{-x}}$ (with $x \equiv \lambda_0$ and $y \equiv \lambda_1$). And $U_{1}=0$ amounts to $V=e^x{y\over1-e^{-y}}$.
\item
For the type-4 equilibrium, with $0 \le \lambda_0 \le 1$, $U_{0}=0$ implies that $V=K+{x\over 1-e^{-x}}$. And $U_{1} \le 0$ (with $q_1=0$) amounts to $V \le e^x$.
\item
For the type-5 equilibrium with $0 \le \lambda_0 \le 1$,
$U_{0}=U_{1}$ reduces to
${K\over V}=1-{x\over 1-x}{1-e^{-(1-x)}\over e^x-1}$.
The non-negativity of $U_{1}$ reduces to $V\ge e^{x}{1-x\over1-e^{-(1-x)}}$.
\item
In a type-6 equilibrium, $\lambda_0=1$ and $\lambda_1=0$. The conditions
$U_{0}\ge U_{1}$ and $U_{0}\ge0$ reduce to $V \ge {e-1\over e-2}K$, and $V \ge K+{e\over e-1}$.
\item
In a type-7 equilibrium, $\lambda_0=0$ and $\lambda_1=1$. The conditions
$U_{1}\ge U_{0}$ and $U_{1}\ge0$ reduce to $K\ge{V\over e}$ and $V\ge{e\over e-1}$.
\end{enumerate}
Figure \ref{f2} describes the regions corresponding to the different types of equilibria. The point at the intersection of types 7, 5, 3, and 2 has $V=e/(e-1)$ and $K=1/(e-1)$. The point at the intersection of types 7, 6, 5, 4, and 3 has $V=e$ and $K=e(e-2)/(e-1)$.
\begin{figure}
\centering
\includegraphics[scale=0.6]{e7.pdf}
\vspace{-3cm}
\caption{Types of equilibria in the $(V,K)$ space.}\label{f2}
\end{figure}
\section{Unbounded potential demand}\label{SU}
The model with finite potential demand is too difficult to solve analytically. Moreover, we also showed that the equilibrium is not always unique. Therefore we consider the simpler situation with infinite potential demand ($\lambda\to\infty$). Then the only possible equilibrium types are 1-4. The conditions for types 1 and 2 are as before. Assume first that the price $P$ is zero. The line separating Regions 3 and 4 satisfies $U_{0}=U_{1}=0$ and $\lambda_1=0$. Thus from~(\ref{E9000}) we have
\begin{equation}\label{e99}
U_{0}=(V-K){1-e^{-\lambda_0}\over \lambda_0}-c=0.
\end{equation}
Recalling that $\lim_{\lambda_1 \to 0}{1-e^{-\lambda_1}\over\lambda_1}=1$, setting $U_{1}=0$ in (\ref{E9001}) with $\lambda_1\to 0$ gives $V e^{-\lambda_0}=c$. That is,
$\lambda_0=\ln{\left(V\over c\right)}$. Substituting this $\lambda_0$ in (\ref{e99}) gives
$$
K=V-{c\ln{\left(V \over c \right)} \over 1-{c \over V}}.
$$
We find it convenient to normalize all monetary values by considering $c$ as the unit value. Thus we define
$v \equiv \frac{V}{c}$, $k \equiv \frac{K}{c}$, $p \equiv \frac{P}{c}$, and $\pi \equiv \frac{\Pi}{c}$. In particular, from the equation above we get
\begin{equation} \label{E1000}
k=v-\frac{\ln{v}}{1-\frac{1}{v}}.
\end{equation}
We can verify that for $v>1$, the right-hand side of (\ref{E1000}) strictly increases from $0$ to infinity. Thus, for every $k>0$, Equation (\ref{E1000}) has a unique solution $u(k)$. Hence we define
\begin{definition}\label{D1}
Let $(u(k),k)$ be the borderline between Regions 3 and 4; then $u=u(k)$ uniquely solves
$$
k=u-\frac{\ln{u}}{1-\frac{1}{u}}.
$$
\end{definition}
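Numerically, $u(k)$ is easy to compute by root bracketing, because the right-hand side of (\ref{E1000}) is strictly increasing in $v$; a short sketch of ours (assuming SciPy is available):
\begin{verbatim}
# Solve k = u - ln(u)/(1 - 1/u) for the borderline u(k)
# between Regions 3 and 4.
import numpy as np
from scipy.optimize import brentq

def u_of_k(k):
    # The right-hand side increases strictly from 0 (as u -> 1) to infinity.
    g = lambda u: u - np.log(u) / (1.0 - 1.0 / u) - k
    return brentq(g, 1.0 + 1e-9, 1e6)

print(u_of_k(2.0))  # about 3.81449, the value quoted for u(2) later on
\end{verbatim}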
The partition of the $(v,k)$ plane is shown in Figure \ref{f3}.
\begin{figure}
\centering
\includegraphics[scale=0.6]{e9.pdf}
\vspace{-3cm}
\caption{Types of equilibria in the $(v,k)$ space - infinite potential demand.}\label{f3}
\end{figure}
Recall that $v=\frac{V}{c}$. Thus, for example, the condition $v \leq 1$, which defines Region 1, corresponds to $V \leq c$, etc.
Summarizing: in Region 1 (where $0 <v \leq 1$), because the search cost exceeds the value of consuming the good, consumers never arrive. In Region 2 (where $1< v \leq k+1$), the value of an early purchase is smaller than the search cost, so consumers arrive only in period 1. In Region 3 (where $k+1 < v< u(k)$, and $u(k)$ was defined in Definition~\ref{D1}), the arrival rate in period 0 is such that the expected utility of a consumer arriving then is zero. But the added benefit, $k$, of buying in period 1 rather than in period 0 is sufficiently large to compensate for the reduced probability of getting the good in period 1. Thus, consumers arrive in both periods. In Region 4 (where $v\geq u(k)$), the small value of $k$ means that a large arrival rate in period 0 discourages consumers from arriving in period 1.
When the firm sets a price $p=\frac{P}{c}$, the net benefit of the purchase reduces to $v-p$. Thus, an increase in $p$ means moving to the left in the $(v,k)$ space. The equilibrium, and therefore the profit, depend on the region where we end up.
Summarizing the results on the four regions:
\begin{itemize}
\item In Region 1, \ $\lambda_0=\lambda_1=0,\ U_0, U_1\leq 0$, and it is reached iff \ $0\leq v-p< 1$.
\item In Region 2, $\lambda_0=0,\ 0<\lambda_1<\lambda,\ U_1=0, \ U_0\leq 0$, and it is reached iff \ $1\leq v-p\leq k+1$.
\item In Region 3,\ $\lambda_0,\lambda_1>0, \lambda_0+\lambda_1<\lambda, \ U_0=U_1=0$, and it is reached iff \ $k+1<v-p< u(k)$, where $u$ was defined in Definition~\ref{D1}.
\item In Region 4, $0<\lambda_0<\lambda, \lambda_1=0, \ U_0=0, \ U_1\leq 0$, and it is reached iff \ $u(k)\leq v-p\leq v$,
\end{itemize}
where $u(k)$ satisfies $k=u-\frac{\ln{u}}{1-\frac{1}{u}}$ (see Definition~\ref{D1}).
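This partition is easy to implement; the following classifier (an illustrative sketch of ours, reusing the numerical $u(k)$ above) returns the region containing a given triple $(v,p,k)$.
\begin{verbatim}
# Classify (v, p, k) into Regions 1-4 by the conditions listed above.
import numpy as np
from scipy.optimize import brentq

def u_of_k(k):
    return brentq(lambda u: u - np.log(u) / (1 - 1 / u) - k, 1 + 1e-9, 1e6)

def region(v, p, k):
    s = v - p                  # net benefit of a period-1 purchase
    if s < 1:
        return 1               # no arrivals
    if s <= k + 1:
        return 2               # arrivals only in period 1
    return 3 if s < u_of_k(k) else 4

print(region(10, 8, 2), region(10, 6.5, 2), region(10, 0, 2))  # 2, 3, 4
\end{verbatim}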
Two equations are central for the model. First, the equation $U_0=0$, which is satisfied in Regions 3,4, and by~(\ref{e99}) gives
\begin{equation}\label{E1}
v-p-k=\frac{\lambda_0}{1-e^{-\lambda_0}}.
\end{equation}
Second, the equation $U_1=0$ which is satisfied in Regions 2,3, and by~(\ref{E9001}) gives
\begin{equation}\label{E2}
v-p=\frac{\lambda_1e^{\lambda_0}}{1-e^{-\lambda_1}}.
\end{equation}
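Since (\ref{E1}) involves $\lambda_0$ alone and its right-hand side is increasing in $\lambda_0$ (and likewise (\ref{E2}) in $\lambda_1$ once $\lambda_0$ is known), the system can be solved by two one-dimensional root searches; a sketch of ours at a hypothetical Region-3 point:
\begin{verbatim}
# Solve (E1) for lambda0 and then (E2) for lambda1 at v=10, p=6.5, k=2
# (so that k+1 < v-p < u(k), i.e., the point lies in Region 3).
import numpy as np
from scipy.optimize import brentq

v, p, k = 10.0, 6.5, 2.0
lam0 = brentq(lambda l: l / (1 - np.exp(-l)) - (v - p - k), 1e-9, 50.0)
lam1 = brentq(lambda l: l * np.exp(lam0) / (1 - np.exp(-l)) - (v - p),
              1e-9, 50.0)
print(lam0, lam1)   # roughly 0.874 and 0.813
\end{verbatim}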
\subsection{Arrival rates}\label{SAR}
We consider the behavior of $\lambda_0,\lambda_1$ and $\lambda_0+\lambda_1$ in each of the regions.
\begin{proposition}\label{P1}
Let $v$ be the value of consuming the good at the ideal period, and $p$ the price of the good. The arrival rates (namely $\lambda_0$ and $\lambda_1$) in period 0 and 1 satisfy:
\begin{itemize}
\item In Region 1, $\lambda_0=\lambda_1=0$.
\item In Region 2, $\lambda_0=0$, and $\lambda_1$ increases with $v-p$.
\item In Region 3, $\lambda_0,\lambda_1>0$, \ $\lambda_0$ increases with $v-p$, and $\lambda_1$ declines with $v-p$.
\item In Region 4, $\lambda_0$ increases with $v-p,$ and $\lambda_1=0.$
\end{itemize}
\end{proposition}
As expected, the arrival rates $\lambda_0$ and $\lambda_1$ usually decline with the price. Nevertheless, an exception appears in Region 3, where $\lambda_1$ declines with $v-p$, i.e., increases with the price. An explanation is that a higher price induces fewer consumers to arrive in period 0. This makes it more likely that the good is still available in period 1, and so more consumers arrive in period 1, even though the price is higher. The proof of Proposition~\ref{P1} appears in the Appendix (see Section~\ref{PP1}).
Moreover, the following theorem states that in Region 3 (where consumers may arrive in both periods and expected consumer utility is zero) the arrival rate in period $1$ increases with the price, and this increase exceeds the decrease in the arrival rate in period $0$.
\begin{theorem}\label{T1}
Let a consumer value consuming the good more in period 1 than in period 0 (that is, $k>0$). Let the value of consuming the good at the ideal period be $v$, and the price of the good $p$. Then in Region 3 (where consumers may arrive in both periods and expected consumer utility is zero) the aggregate arrival rate $\lambda_0+\lambda_1$ decreases in $v-p$.
\end{theorem}
Our central results involve the Lambert function $W[a]$ (also called the omega function or product logarithm). The Lambert function is actually a set of functions, namely the branches of the inverse relation of the function $ze^z$. In other words, if $a=ze^z$, then $W[a]=z$. Because the function $ze^z$ is not injective, $W[a]$ is multivalued (except at 0). Note that the relation is defined only for $a \geq -e^{-1}$, and is double-valued on $(-1/e, 0)$. The additional constraint that $W[a]\geq -1$ defines a single-valued function, which is presented in Figure~\ref{F10}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{W1.pdf}
\vspace{2cm}
\caption{The graph of $W[a]$ for $-\frac{1}{e}\leq a\leq 1$.}\label{F10}
\end{center}
\end{figure}
\begin{definition}\label{DW}
For all $a\geq -e^{-1}$, $W[\cdot]$ is the inverse function of $ae^{a}$ that satisfies $W[a]\geq -1$.
\end{definition}
\end{definition}
The following properties of $W$ will be used below. For proofs see Corless et al. (1996).
\begin{itemize}
\item W1. $W[\cdot]$ is an increasing function for all $a\geq -e^{-1}$.
\item W2. $W[-e^{-1}]=-1$.
\item W3. $W'[a]=\frac{W[a]}{a(W[a]+1)}$.
\item W4. $W[0]=0.$
\item W5. $e^{-W[a]}=\frac{W[a]}{a}$.
\item W6. $W[ae^a]=a$ for all $a\geq -1$.
\end{itemize}
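These properties can be spot-checked numerically. In the sketch below (ours) we assume that SciPy's principal branch \texttt{lambertw(a, 0)} coincides with the single-valued branch of Definition~\ref{DW} for $a\geq -e^{-1}$.
\begin{verbatim}
# Spot-checks of W2, W5, and W6 using SciPy's principal branch.
import numpy as np
from scipy.special import lambertw

W = lambda a: lambertw(a, 0).real      # the branch with W[a] >= -1
a = 0.3
print(np.isclose(W(-np.exp(-1.0)), -1.0))      # W2: W[-e^{-1}] = -1
print(np.isclose(np.exp(-W(a)), W(a) / a))     # W5: e^{-W[a]} = W[a]/a
print(np.isclose(W(a * np.exp(a)), a))         # W6: W[a e^a] = a, a >= -1
\end{verbatim}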
To prove Theorem~\ref{T1}, we first need to establish several results.
\begin{definition}\label{D2}
For all $a,$
$$R(a)=W[-ae^{-a}].$$
\end{definition}
Note that by W6, for all $a \leq 1$ (which is equivalent to $-a\geq -1$), $R(a)=-a$. In Regions 2-4, however, where $v-p> 1$, we cannot exploit this attractive property.
\begin{proposition}\label{P2}
Let $v$ be the value of consuming the good at the ideal period $1$, $p$ the price of the good, and $k$ the reduced utility if a person with ideal period 1 buys the good in period $0$. Then in Regions 3 and 4, the arrival rate $\lambda_0=v-p-k+R(v-p-k)$ uniquely solves (\ref{E1}).
\end{proposition}
The proof of Proposition~\ref{P2} appears in the Appendix (see Section~\ref{PP2}).
The proof of the following Corollary is very similar to the proof of Proposition~\ref{P2}.
\begin{corollary}\label{C333}
Let $v$ be the value of consuming the good at the ideal period $1$, $p$ be the price of the good, $k$ be the reduced utility if a person with ideal period 1 buys the good in period $0$, and $R(v-p)=W[-(v-p)e^{-(v-p)}]$.
In Region 2 (where $1< v\leq k+1$), the unique arrival rate that solves (\ref{E2}) is
$$\lambda_1=v-p+R(v-p).$$
\end{corollary}
\begin{definition}\label{D10}
For all $a,$
$$A(a)=-(a+k)e^{-a+\frac{kR(a)}{a}}.$$
\end{definition}
Denote $A=A(v-p-k)$.
\begin{proposition}\label{P3}
Let $v$ be the value of consuming the good at the ideal period $1$, $p$ be the price of the good, and $k$ be the reduced utility if a person with ideal period 1 buys the good in period $0$. Then in Region 3 (where consumers may arrive in both periods and expected consumer utility is zero) the unique solution to the equation system of (\ref{E1}) and (\ref{E2}) is
$$\lambda_0=v-p-k+R(v-p-k),$$
$$\lambda_1=W[A]-\frac{(v-p)R(v-p-k)}{v-p-k}.$$
\end{proposition}
The proof of Proposition~\ref{P3} appears in the Appendix (see Section~\ref{PP3}). Several lemmas needed for the proof of Theorem~\ref{T1} are presented and proved in Section~\ref{SL}, and the proof of Theorem~\ref{T1} itself appears in Section~\ref{PPT1}.
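As a numerical illustration (ours, with hypothetical parameter values), the closed forms of Propositions~\ref{P2} and~\ref{P3} are straightforward to evaluate, and the monotonicity asserted in Theorem~\ref{T1} is visible immediately:
\begin{verbatim}
# Closed-form Region-3 rates; the totals printed below decrease as
# s = v - p grows, in line with the theorem. (k = 2, so Region 3 is
# 3 < s < u(2) = 3.81449; the chosen s values lie inside it.)
import numpy as np
from scipy.special import lambertw

W = lambda a: lambertw(a, 0).real
R = lambda a: W(-a * np.exp(-a))

def rates_region3(v, p, k):
    v0, v1 = v - p - k, v - p
    lam0 = v0 + R(v0)
    A = -(v0 + k) * np.exp(-v0 + k * R(v0) / v0)
    lam1 = W(A) - v1 * R(v0) / v0
    return lam0, lam1

k = 2.0
for s in (3.2, 3.4, 3.6):
    l0, l1 = rates_region3(s, 0.0, k)
    print(round(s, 1), round(l0, 4), round(l1, 4), round(l0 + l1, 4))
\end{verbatim}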
\subsection{Profit maximization}\label{PM}
Consider the profit-maximizing price.
The seller's expected profit (in units of $c$) is the price $p>0$, multiplied by the probability that the good is sold (which is the same as the probability of at least one arrival), thus
\begin{equation}\label{E5000}
\pi=p\left(1-e^{-(\lambda_0+\lambda_1)}\right).
\end{equation}
For any given pair $(v,k)$, we need to find the price that maximizes the seller's profit. As explained, raising the price is equivalent to moving to the left of $v$. The first step is to find the expression for $\pi$ in each region and then the local maximum of each of these expressions. However, a local maximum may be attained outside the region in which the corresponding expression for $\pi$ is valid, and in that case it is irrelevant. In other words, the relevant prices are the local maximum points $p$ for which $(v-p,k)$ still belongs to the original region of $(v,k)$. If a region has no relevant local maximum, we need to check its end points. Among these candidate values of $p$, the $p^*$ that attains the maximal value of $\pi(p)$ is the price that maximizes the seller's profit.
For any given pair $(v,k)$ we now express the four regions in terms of $p$. At the beginning of Section~\ref{SU} we defined the regions in terms of $v-p$; the definitions of the regions in terms of $p$ are:
\begin{itemize}
\item Region 1: \ $v-1<p\leq v$.
\item Region 2: \ $v-k-1\leq p< v-1$.
\item Region 3: \ $v-u<p< v-k-1$.
\item Region 4: \ $0\leq p\leq v-u$,
\end{itemize}
where $u$ was defined as the unique solution to $k=u-\frac{\ln{u}}{1-\frac{1}{u}}$ (see Definition~\ref{D1}).
Note that not all four regions exist for all pairs $(v,k)$. For example, if $v<1$, then (because $p \geq 0$) all regions except Region 1 are empty, and if $v-k-1<0$, then Regions 3 and 4 are empty.
See Figure~\ref{F300}, which presents $\pi$ for $v=10$ and $k=2.$ The numbered regions in Figure \ref{F300} correspond to the regions in Figure \ref{f3}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{pi.pdf}
\vspace{2cm}
\caption{$\pi$ as a function of $p.$}\label{F300}
\end{center}
\end{figure}
In this example, Regions 2 and 4 each have a relevant local maximum. But in Region 3, the expression for $\pi$ is monotonically increasing in $p$, and thus the right end of Region 3 is also a candidate. As seen, in this example the local maximum in Region 2 is the price that maximizes the seller's profit. The monotonicity of $\pi$ seen in Region 3 in this example also holds in general for all $(v,k)$, as stated in the following proposition.
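The shape of the profit curve in Figure~\ref{F300} can be reproduced with a direct sweep over $p$, piecing the equilibrium rates together region by region (a sketch of ours; the clipping of $A$ merely guards against rounding past the Lambert branch point):
\begin{verbatim}
# Sweep p for v = 10, k = 2 and locate the profit-maximizing price.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

v, k = 10.0, 2.0
W = lambda a: lambertw(a, 0).real
R = lambda a: W(-a * np.exp(-a))
u = brentq(lambda t: t - np.log(t) / (1 - 1 / t) - k, 1 + 1e-9, 1e3)

def rates(s):                      # s = v - p
    if s < 1:
        return 0.0, 0.0            # Region 1
    if s <= k + 1:
        return 0.0, s + R(s)       # Region 2
    v0 = s - k
    lam0 = v0 + R(v0)              # Regions 3 and 4
    if s >= u:
        return lam0, 0.0           # Region 4
    A = max(-(v0 + k) * np.exp(-v0 + k * R(v0) / v0), -np.exp(-1.0))
    return lam0, W(A) - s * R(v0) / v0   # Region 3

ps = np.linspace(0.01, v - 0.01, 2000)
pis = [p * (1 - np.exp(-sum(rates(v - p)))) for p in ps]
print(ps[int(np.argmax(pis))])  # near p2 = v - ln(v)/(1 - 1/v) ~ 7.44
\end{verbatim}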
\begin{proposition}\label{P666}
Let $v$ be the value of consuming the good at the ideal period, and $k$ the reduced utility of consuming the good at an earlier period. For given values of $v$ and $k$, the firm's profit $\pi$ increases in $p$ for all $p$ such that $(v-p,k)$ belongs to Region 3 (where consumers may arrive in both periods and expected consumer utility is zero).
\end{proposition}
\begin{proof}
By~(\ref{E5000}), \ $\pi=p\left(1-e^{-(\lambda_0+\lambda_1)}\right).$ Now, by Theorem~\ref{T1}, in Region 3, $\lambda_0+\lambda_1$ declines in $v-p,$ hence\ $\lambda_0+\lambda_1$ increases in $p.$ This implies that $\pi=p\left(1-e^{-(\lambda_0+\lambda_1)}\right)$ also increases in $p$.
\end{proof}
It follows from Proposition~\ref{P666} that the global maximum can never be attained in the interior of Region 3; it may, however, be attained at its right end $v-k-1$. As we will show, Regions 2 and 4 do not always have relevant local maxima.
Denote the right end of Region 3 as $p_3$. Then $p_3=v-k-1$. Denote by $p_2$ and $p_4$ the local maxima of Regions 2 and 4, respectively. Recall that if $p_2$ does not belong to Region 2, then it is irrelevant; similarly for $p_4$ and Region 4. In contrast, $p_3$, which is the point separating Regions 2 and 3, always exists and is relevant.
\begin{definition}\label{D200}
$$W=W[-(k+1)e^{-(k+1)}].$$
\end{definition}
Denote $\pi(p_i)=\pi_i$, \ $i=2,3,4.$
\begin{proposition}\label{P5000}
Let $v$ be the value of consuming the good at the ideal period $1$, $p$ the price of the good, and $k$ the reduced utility if a person with ideal period 1 buys the good in period $0$.
\begin{enumerate}
\item \begin{equation}\label{E901}
p_2=v-{\ln\left(v\right)\over1-{1\over v}}, \ \ \
\pi_2=v-1-\ln\left(v\right).
\end{equation}
\item \begin{equation}\label{E700}
p_4=v-k-{\ln\left(v-k\right)\over1-{1\over v-k}}, \ \ \
\pi_4=v-k-1-\ln\left(v-k\right),
\end{equation}
\item
\begin{equation}\label{E891}
p_3=v-k-1, \ \ \
\pi_3=(v-k-1)\left( 1-e^{-(W+k+1)}\right).
\end{equation}
\end{enumerate}
\end{proposition}
The proof of Proposition~\ref{P5000} appears in the Appendix (see Section~\ref{PP5000}).
\begin{theorem}\label{P6}
For all $v$ and $k$ for which the quantities below are defined (in particular, $1\le v-k$):
\begin{enumerate}
\item $\pi_4 < \pi_2$,
\item $\pi_3\leq \pi_2$,
\end{enumerate}
where $\pi_i=\pi(p_i)$, $p_i$ is the local maximum of the expression for $\pi$ in Region $i$, $i=2,4$, and $\pi_3=\pi(p_3)$, where $p_3=v-k-1$ is the border point between Regions 2 and 3.
\end{theorem}
The proof of Theorem~\ref{P6} appears in the Appendix (see Section~\ref{PP6}).
The following Corollary follows from Theorem~\ref{P6}.
\begin{corollary}
If $p_2$ is in Region 2, then the price that maximizes $\pi$ is $p_2.$
\end{corollary}
We now find the conditions that guarantee that $p_2$ is in Region 2.
\begin{lemma}\label{P5}
If \ $1\leq v\leq e^{W+k+1},$ then $p_2$ belongs to Region 2.
\end{lemma}
The proof of Lemma~\ref{P5} appears in the Appendix (see Section~\ref{PP5}).
\begin{corollary}\label{C334}
If $1\leq v\leq e^{W+k+1},$ then $p^*=p_2=v-{\ln\left(v\right)\over1-{1\over v}},$ \ and \ $\pi^*=\pi_2=v-1-\ln\left(v\right).$
\end{corollary}
For larger $v$, namely $v \geq e^{W+k+1}$, we need to find where $\pi_3 \geq \pi_4$ is satisfied; in that case, $p^*=p_3$. Recall that $p_3$ is the point that separates Regions 2 and 3, and therefore always exists and is relevant. But $p_4$, the local maximum of the expression for $\pi$ in Region 4, may not belong to Region 4, and in that case it is not relevant.
Recall that
$\pi_4=v-k-1-\ln\left(v-k\right)$, and $\pi_3=(v-k-1) \left( 1-e^{-(W+k+1)}\right)$.
Let $f(v)=\pi_4-\pi_3$. Substituting $e^{-(W+k+1)}=-W/(k+1)$ (see~(\ref{E14}) in the Appendix), the difference reduces to the expression in the following definition.
\begin{definition}\label{D99}
$f(v)=\left( 1-\frac{v}{k+1}\right)W-\ln{(v-k)}.$
\end{definition}
Denote by $v_m$ the point at which $f$ attains its minimum.
\begin{lemma}\label{P8}
Given $k$, $f(v)$ is a strictly convex function which has exactly two roots, the smaller of which is $k+1.$
\end{lemma}
The proof of Lemma~\ref{P8} appears in the Appendix (see Section~\ref{PP8}).
\begin{definition}\label{D100}
$v_f$ is the unique value that satisfies both: $f(v)=0,$ and $v> k+1.$
\end{definition}
See Figure~\ref{F100}, which presents $f$ as a function of $v$ for $k=1$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{fU.pdf}
\vspace{2cm}
\caption{$f$ as a function of $v$}\label{F100}
\end{center}
\end{figure}
Since $f(v)=\pi_4-\pi_3$, it follows that for all $v$ satisfying $k+1< v< v_f$, $\pi_4<\pi_3$.
Recall that by Theorem~\ref{P6}, \ $\pi_3 \leq \pi_2,$ and that by Lemma~\ref{P5}, $p_2$ is relevant for all $v$ satisfying $1\leq v\leq e^{W+k+1}.$
\begin{corollary}\label{C1}
Let $k$ be the cost of an early arrival, let $W$ stand for $W[-(k+1)e^{-(k+1)}]$, and let $v_f$ be the larger root of $f=\pi_4-\pi_3$. Then for every $k>0$,
$$e^{W+k+1}\leq v_f.$$
\end{corollary}
The proof of Corollary~\ref{C1} appears in the Appendix (see Section~\ref{PC1}).
\begin{corollary}
\
\begin{itemize}
\item For all $1\leq v\leq e^{W+k+1},$ \ \ $p^*=p_2,$ \ and \ $\pi^*=\pi_2$.
\item For all $e^{W+k+1}<v\leq v_f,$ \ \ $p^*=p_3,$ \ and \ $\pi^*=\pi_3$.
\end{itemize}
\end{corollary}
But what happens for $v > v_f$? By Lemma~\ref{P8}, $\pi_4>\pi_3$ for $v> v_f$. But is $p_4$ relevant (i.e., does it belong to Region 4) for $v> v_f$? The following lemma shows that it is.
\begin{lemma} \label{P9}
\
\begin{enumerate}
\item If $v\geq k+u,$ then $p_4$ belongs to Region 4.
\item For every pair $(v,k)$, \
$v_f\geq k+u.$
\end{enumerate}
\end{lemma}
The proof of Lemma~\ref{P9} appears in the Appendix (see Section~\ref{PP9}).
Recall that $W$ stands for $W[-(k+1)e^{-(k+1)}],$ and that $v_f$ is the root of $f(v)$ that is greater than the root $k+1$. The following theorem summarizes the results.
\begin{theorem} \label{T4}
Let $v$ be the value of consuming the good at the ideal period, and $k$ the reduced utility of consuming the good at an earlier period. Then for given values of $v$ and $k$, the profit-maximizing (normalized) price ($p^*$), the induced arrival rates ($\lambda_0$ and $\lambda_1$), and the associated expected (normalized) profit ($\pi^*$) are
\begin{enumerate}
\item If $v<1,$ then $\lambda_0=0, \ \lambda_1=0$, and \ $\pi^* =0.$
\item If $1 \leq v\leq e^{W+k+1},$ then $p^*=v-{\ln v\over 1-{1\over v}}$, $\pi^*=v-1-\ln v$, $\lambda_0=0$, and $\lambda_1=\ln{v}$.
\item If $e^{W+k+1}< v \leq v_f,$ then $p^*=v-k-1$, $\pi^*=(v-k-1)\left( 1-e^{-(W+k+1)}\right)$, $\lambda_0=0$, and $\lambda_1=W+k+1$.
\item If $v> v_f,$ then
$p^*=v-k-{\ln\left(v-k\right)\over1-{1\over v-k}}$,
$\pi^*=v-k-1-\ln\left(v-k\right)$, $\lambda_0=\ln{(v-k)}$, and $\lambda_1=0$,
\end{enumerate}
where $W=W[-(k+1)e^{-(k+1)}]$.
\end{theorem}
The profit-maximizing strategy may have the firm set the price such that consumers buy only in period 1 (cases 2 and 3) or only in period 0 (case 4). The firm's profit depends on the value of $k$ (the penalty for buying early). The firm's profits are not maximized when $k=0$, but rather when $k$ is large enough that $e^{W+k+1}$ exceeds $v$. Profits are then $\pi_2=v-1-\ln v$, which, according to Theorem~\ref{P6}, is the maximum profit. Note that this conclusion relates to the case in which the firm must offer the good in both periods. But as the following section shows, the firm is always better off not offering the good early.
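Theorem~\ref{T4} translates directly into a small pricing routine. The sketch below (ours) returns $p^*$ and $\pi^*$ in units of $c$; the root $v_f$ is obtained numerically.
\begin{verbatim}
# Direct transcription of the four cases of the theorem above.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

def optimum(v, k):
    W = lambertw(-(k + 1) * np.exp(-(k + 1)), 0).real
    if v < 1:
        return 0.0, 0.0                           # case 1: no sales
    if v <= np.exp(W + k + 1):                    # case 2
        return v - np.log(v) / (1 - 1 / v), v - 1 - np.log(v)
    f = lambda t: (1 - t / (k + 1)) * W - np.log(t - k)
    v_m = k - (k + 1) / W        # minimizer of f, where f(v_m) < 0
    v_f = brentq(f, v_m, v_m + 1e6)               # larger root of f
    if v <= v_f:                                  # case 3
        return v - k - 1, (v - k - 1) * (1 - np.exp(-(W + k + 1)))
    return (v - k - np.log(v - k) / (1 - 1 / (v - k)),
            v - k - 1 - np.log(v - k))            # case 4

print(optimum(10.0, 2.0))  # matches the Region-2 optimum from the sweep
\end{verbatim}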
\subsection{Outcomes with a single period}
Consider a single-period model, in which consumers can arrive only in period 1. This is equivalent to making $k$ infinite in the original two-period model. Then, unless $v<1$ (which results in $\pi=0$), $\lim_{k\to\infty}e^{W+k+1}=\infty$, so that $1\leq v<e^{W+k+1}$, which belongs to the second case in Theorem~\ref{T4}. Hence $\pi^*=\pi_2=v-1-\ln v.$
By Theorem~\ref{P6}, \ $\pi_2\geq \pi_3, \pi_4$. Hence the seller's profit is highest when it sells the good only in period 1: it gains nothing from offering the good earlier than the period in which consumers most value it.
\subsection{Social optimization}
Clearly, social welfare is optimized when consumers buy only in period 1. The socially optimal arrival rate $\lambda_1$ is determined by solving $\max_{\lambda_1}\{(v-p)(1-e^{-\lambda_1})-\lambda_1 \},$ which gives the optimal value $\lambda_1^*=\ln(v-p)$.
We now ask whether, when the arrival rates are determined by consumers' equilibrium behavior, it is socially optimal to forbid early sales.
Denote the normalized (i.e., divided by the search cost $c$) social welfare functions in the single- and two-period cases as $SW_1$ and $SW_2$ respectively.
If $v<1,$ then there are no arrivals, hence $SW_1=SW_2=0$. Assume $v\geq 1.$
For the single-period case, all arriving consumers pay the (normalized) search cost of $1$, but at most one consumer obtains the good (provided there is at least one arrival). Hence
\begin{equation}\label{E8000}
SW_1=(v-p)(1-e^{-\lambda_1})-\lambda_1.
\end{equation}
As $k \to \infty$, Regions 3 and 4 become empty, so we are left with Region 2. In Region 2, equation (\ref{E2}) gives $v-p=\frac{\lambda_1}{1-e^{-\lambda_1}}.$ Substituting this in~(\ref{E8000}) gives $SW_1=0.$
For the two-period case, one might expect larger $k$ to reduce social welfare. However, we now show that in equilibrium, social welfare is $0$ for all $k$. The reason is that in equilibrium types 1-4, whenever there is a positive arrival rate $\lambda_i>0$, the corresponding utility satisfies $U_i=0.$ Consider
\begin{equation}\label{E8001}
SW_2=(1-e^{-\lambda_0})(v-p-k)+e^{-\lambda_0}(1-e^{-\lambda_1})(v-p)-(\lambda_0+\lambda_1).
\end{equation}
\begin{proposition}\label{P77}
In all equilibria, social welfare in the two-period case equals zero, namely, $SW_2=0$.
\end{proposition}
\begin{proof}
\
\begin{itemize}
\item In Region 1, $\lambda_0=\lambda_1=0.$ Substituting this in~(\ref{E8001}) gives $SW_2=0.$
\item In Region 2,~(\ref{E2}) with $\lambda_0=0,$ gives $v-p=\frac{\lambda_1}{1-e^{-\lambda_1}}.$ \ Substituting this in~(\ref{E8001}) gives $SW_2=0.$
\item In Region 3, by~(\ref{E1}) and~(\ref{E2}) we have
$$
v-p-k=\frac{\lambda_0}{1-e^{-\lambda_0}}, \ \text{and} \ \
v-p=\frac{\lambda_1e^{\lambda_0}}{1-e^{-\lambda_1}}.
$$
Substituting this in~(\ref{E8001}) again gives \ $SW_2=0.$
\item In Region 4,~(\ref{E1}) with $\lambda_1=0,$ gives $v-p-k=\frac{\lambda_0}{1-e^{-\lambda_0}}.$ \ Substituting this in~(\ref{E8001}) gives $SW_2=0.$
\end{itemize}
\end{proof}
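A numerical spot-check of Proposition~\ref{P77} at a Region-3 point (ours, using the closed forms of Propositions~\ref{P2} and~\ref{P3}):
\begin{verbatim}
# SW_2 should vanish (up to rounding) at the Region-3 equilibrium rates.
import numpy as np
from scipy.special import lambertw

W = lambda a: lambertw(a, 0).real
R = lambda a: W(-a * np.exp(-a))

v, p, k = 10.0, 6.5, 2.0            # a point with k+1 < v-p < u(k)
v0 = v - p - k
lam0 = v0 + R(v0)
A = -(v0 + k) * np.exp(-v0 + k * R(v0) / v0)
lam1 = W(A) - (v - p) * R(v0) / v0

SW2 = ((1 - np.exp(-lam0)) * (v - p - k)
       + np.exp(-lam0) * (1 - np.exp(-lam1)) * (v - p) - (lam0 + lam1))
print(SW2)                           # approximately zero
\end{verbatim}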
\section{Back to the finite case}
When the arrival rate $\lambda$ is finite, equilibrium types 5-7 do exist. In these equilibria the sum of arrival rates $\lambda_0+\lambda_1=\lambda$ is fixed. This implies that the expected profit $\pi=p\Big( 1-e^{-(\lambda_0+\lambda_1)} \Big)=p\Big( 1-e^{-\lambda}\Big)$ is linear and increasing in $p$. Hence the candidates for the maximum points are the border points between the regions. For equilibrium types 1-4, we proved that in a type-3 equilibrium $\lambda_0+\lambda_1$ increases with $p$, so the points on the left borders of the region in Figure~\ref{f2} are candidates for maximal profit. For equilibrium types 2 and 4, we have proved that there are local maxima of $\pi$ that are also candidates for maximal profit. These results still hold in the finite case.
\section{Conclusion}
A consumer who wants to consume a good at a particular period may nevertheless attempt to buy it earlier if he is concerned that in delaying he would find the good already sold. This paper considers a model in which the good may be offered at two periods; the period in which all consumers most value the good (period 1), and an earlier period (period 0). We show that a firm profits by not selling early. The strategy of consumers, when deciding if and when to arrive, is more complicated than one may suppose, and can generate some unexpected behavior. For example, the arrival rate at period 1 may {\em decline} with the surplus a person gets from buying the good; indeed even the aggregate arrival rate (from both periods) can decline with that surplus. The behavior of the firm can also be surprising. If the firm is obligated to also offer the good early, but must charge the same price in both periods, then, depending on the value of the good to a consumer, the firm may maximize profits by setting a price which induces consumers to arrive only in period 1, or under different conditions, induces them to arrive only in period 0. The firm would not set a price which induces consumers to arrive in both periods. This result also means that, even with an infinitesimal cost of opening the store in some period, the firm will not want to remain open in all periods. In any case, under the assumptions of the model, the firm is always better off not offering the good early.
\pagebreak
\section{Appendix}\label{App}
Let $v_1=v-p$, and let $v_0=v-p-k$. Then $v_1$ is the consumer's benefit when buying in period 1 at price $p$, and $v_0$ is the consumer's benefit when buying in period 0 at price $p$. Note that given $v$ and $k$, the values $v_0$ and $v_1$ are functions of $p$. Most of the expressions appearing in our analysis are functions of $v-p-k$, so we center our attention on $v_0$.
Summarizing the results on the four regions in terms of $v_0,$
\begin{itemize}
\item In Region 1, \ $\lambda_0=\lambda_1=0,\ U_0, U_1\leq 0$, and it is reached iff \ $-k\leq v_0< 1-k$.
\item In Region 2, $\lambda_0=0,\ 0<\lambda_1<\lambda,\ U_1=0, \ U_0\leq 0$, and it is reached iff \ $1-k\leq v_0\leq 1$.
\item In Region 3,\ $\lambda_0,\lambda_1>0, \lambda_0+\lambda_1<\lambda, \ U_0=U_1=0$, and it is reached iff \ $1<v_0< u-k$.
\item In Region 4, $0<\lambda_0<\lambda, \lambda_1=0, \ U_0=0, \ U_1\leq 0$, and it is reached iff \ $u-k\leq v_0\leq v-k$.
\end{itemize}
\subsection{Proof of Proposition~\ref{P1}}\label{PP1}
Recall that $x \equiv \lambda_0,\ y \equiv \lambda_1$.
\begin{enumerate}
\item The proof of the first statement of the proposition follows immediately from the definition of Region 1.
\item In Region 2, $\lambda_0=0$. Thus (\ref{E2}) becomes
$$v_1=\frac{y}{1-e^{-y}}.$$
Hence $$\frac{dv_1}{dy}= \frac{1-e^{-y}-ye^{-y}}{(1-e^{-y})^2}.$$
Because $y+1\leq e^y$ for $y\geq 0$, the numerator on the right-hand side satisfies $1-e^{-y}(1+y)\geq 0$. Thus $\frac{dv_1}{dy} \geq 0$ and $v_1$ increases with $y$. Hence $y$ increases with $v_1$ (and thus with $v_0$), proving the second claim of the proposition.
\item In Region 3, $\lambda_0$ and $\lambda_1$ are both positive, and both equations~(\ref{E1}) and~(\ref{E2}) are satisfied.
Similarly to the proof above for $y$ in Region 2, it follows from~(\ref{E1}) that in Region 3, $x$ increases with $v_0$. From~(\ref{E1}) and~(\ref{E2}) together, we have $$k+\frac{x}{1-e^{-x}}=\frac{ye^x}{1-e^{-y}}.$$
So
\begin{equation}\label{E3}
ke^{-x}+\frac{x}{e^{x}-1}=\frac{y}{1-e^{-y}}.
\end{equation}
The left-hand side of (\ref{E3}) declines with $x \equiv \lambda_0$ since
$$
\Big( ke^{-x}+\frac{x}{e^{x}-1}\Big)'=-ke^{-x}+\frac{e^x-1-xe^x}{(e^x-1)^2}=-ke^{-x}-\frac{e^x(x-1)+1}{(e^x-1)^2}.
$$
The expression $e^x(x-1)+1$, appearing in the numerator on the right-hand side, increases with $x$ and equals $0$ when $x=0$. Hence it is positive and so $-ke^{-x}-\frac{e^x(x-1)+1}{(e^x-1)^2}<0$, implying that the left-hand side of~(\ref{E3}) indeed decreases in $x$. In contrast, the right-hand side of~(\ref{E3}) increases in $y \equiv \lambda_1$. Thus $y$ decreases in $x $. Because $x \equiv \lambda_0$ increases in $v_0,$ it follows that $y \equiv \lambda_1$ declines with $v_0$, proving the third claim of the proposition.
\item In Region 4, $y \equiv \lambda_1 =0$ and (\ref{E1}) is satisfied. And so, as in Region 3, $x \equiv \lambda_0$ increases with $v_0$, proving the last statement of the proposition.
\end{enumerate}
\
\subsection{Proof of Theorem~\ref{T1}}\label{PT1}
To prove Theorem~\ref{T1}, we first need to establish several results using the Lambert function $W[x]$ (see Definition~\ref{DW} in Section~\ref{SAR}). We also use the properties W1-W6 of the Lambert function listed in Section~\ref{SAR}.
Recall that we have denoted
$$R(a)=W[-ae^{-a}].$$
\subsubsection{Proof of Proposition~\ref{P2}}\label{PP2}
First, we show that $\lambda_0=v_0+R(v_0)$ indeed solves (\ref{E1}).
Substituting $\lambda_0=v_0+R(v_0)$ in the right-hand side of (\ref{E1}), gives
\begin{equation}\label{E4}
\frac{v_0+R(v_0)}{1-e^{-v_0}e^{-R(v_0)}}.
\end{equation}
By W5,
\begin{equation}\label{E790}
e^{-R(v_0)}=\frac{R(v_0)}{-v_0e^{-v_0}}.
\end{equation}
Substituting this in (\ref{E4}) gives
$$\frac{v_0+R(v_0)}{1+\frac{R(v_0)}{v_0}}=v_0,$$
which is the left-hand side of (\ref{E1}).
Now, because the right-hand side of (\ref{E1}) is monotonic in $\lambda_0$, then for any given $v_0$ the value $\lambda_0=v_0+R(v_0)$ \emph{uniquely} solves (\ref{E1}).
\subsubsection{Proof of Proposition~\ref{P3}}\label{PP3}
Recall that
$A(a)=-(a+k)e^{-a+\frac{kR(a)}{a}},$ \
and that \ $A=A(v_0)$.
From Proposition~\ref{P2}, $\lambda_0=v_0+R(v_0)$ uniquely solves (\ref{E1}). Substituting this in (\ref{E2}) gives
$$v_1=\frac{\lambda_1e^{v_0+R(v_0)}}{1-e^{-\lambda_1}},$$
which is the same as
\begin{equation}\label{E5}
v_1e^{-v_0-R(v_0)}=\frac{\lambda_1}{1-e^{-\lambda_1}}.
\end{equation}
Since the right-hand side of (\ref{E5}) is monotonic in $\lambda_1$, for any given pair $v_0, v_1$ at most one value of $\lambda_1$ satisfies (\ref{E5}). We now show that the proposed solution $\lambda_1=W[A]-\frac{v_1R(v_0)}{v_0}$ satisfies~(\ref{E5}). Substituting this $\lambda_1$ in the right-hand side of (\ref{E5}) gives
$$\frac{W[A]-\frac{v_1R(v_0)}{v_0}}{1-e^{\frac{v_1R(v_0)}{v_0}}e^{-W[A]}}.$$
Because of W5, this expression is equal to
$$\frac{W[A]-\frac{v_1R(v_0)}{v_0}}{1-\frac{e^{\frac{v_1R(v_0)}{v_0}}W[A]}{A}},$$ which is equal to
$$\frac{\Big(W[A]-\frac{v_1R(v_0)}{v_0}\Big)A}{A- e^{\frac{v_1R(v_0)}{v_0}}W[A]}.$$
Since $v_1=v_0+k,$ the above equals
$$\frac{\Big(\frac{v_1R(v_0)}{v_0}-W[A]\Big)v_1e^{-v_0+\frac{kR(v_0)}{v_0}}}{-v_1e^{-v_0+\frac{kR(v_0)}{v_0}}- e^{\frac{v_1R(v_0)}{v_0}}W[A]}$$
$$=\frac{\Big(\frac{v_1R(v_0)}{v_0}-W[A]\Big)e^{R(v_0)+\frac{kR(v_0)}{v_0}}}{-v_1e^{-v_0+\frac{kR(v_0)}{v_0}}- e^{\frac{v_1R(v_0)}{v_0}}W[A]}\cdot v_1e^{-v_0-R(v_0)}$$
Hence we need to prove that the quotient above equals 1, namely that
\begin{equation}\label{E70}
\Big(\frac{v_1R(v_0)}{v_0}-W[A]\Big)e^{R(v_0)+\frac{kR(v_0)}{v_0}}=-v_1e^{-v_0+\frac{kR(v_0)}{v_0}}- e^{\frac{v_1R(v_0)}{v_0}}W[A].
\end{equation}
Note that the expression $e^{R(v_0)+\frac{kR(v_0)}{v_0}}$ appearing on the left side satisfies
$$
e^{R(v_0)+\frac{kR(v_0)}{v_0}}=e^{\big(\frac{v_0+k}{v_0}\big)R(v_0)}=e^{\frac{v_1R(v_0)}{v_0}}.
$$
Substituting this in the left-hand side of~(\ref{E70}) gives
$$\frac{v_1R(v_0)}{v_0}e^{\frac{v_1R(v_0)}{v_0}}-e^{\frac{v_1R(v_0)}{v_0}}W[A].$$
Hence we only need to prove that
$$\frac{v_1R(v_0)}{v_0}e^{\frac{v_1R(v_0)}{v_0}}=-v_1e^{-v_0+\frac{kR(v_0)}{v_0}}.$$
This is equivalent to proving that
$$
\frac{R(v_0)}{v_0}e^{\frac{v_1R(v_0)}{v_0}+v_0-\frac{kR(v_0)}{v_0}}=-1.
$$
Note that the left-hand side of the above equation equals
$$\frac{R(v_0)}{v_0}e^{\frac{(v_1-k)R(v_0)}{v_0}+v_0}=\frac{R(v_0)}{v_0}e^{R(v_0)+v_0}.$$
By~(\ref{E790}) the above equals
$$\frac{R(v_0)}{v_0}\frac{(-v_0e^{-v_0})e^{v_0}}{R(v_0)}=-1.$$
\subsubsection{Several lemmas for the proof of Theorem~\ref{T1}}\label{SL}
Recall Definition~\ref{D1}: $u=u(k)$ satisfies $k=u-\frac{\ln{u}}{1-\frac{1}{u}}$. Note that $u-k=\frac{\ln{u}}{1-\frac{1}{u}}$ is strictly increasing in $u$, which is strictly increasing in $k$.
The following lemma proves essential properties of the function $R(v_0)$.
\begin{lemma}\label{T2}
\
\begin{itemize}
\item R1. \ $R(v_0)$ is negative and increasing for all $ v_0>1$.
\item R2. \ $R(1)=-1$.
\item R3. \ $R(u-k)=\frac{k}{u}-1$.
\item R4. \ $R'(v_0)=\frac{-R(v_0)}{R(v_0)+1}\cdot\frac{v_0-1}{v_0}.$
\end{itemize}
\end{lemma}
\begin{proof}
\
\begin{enumerate}
\item Since $-v_0<-1<0,$ then $-v_0e^{-v_0}<0,$ and by W1 and W4, \ $R(v_0)=W[-v_0e^{-v_0}]<0$.
Now, $(-v_0e^{-v_0})'=e^{-v_0}(v_0-1)>0$ (since $1<v_0$). The Lambert function is increasing and so $W[-v_0e^{-v_0}]$ increases with $v_0$, proving R1.
\item To prove R2, note that by W2 $R(1)=W[-e^{-1}]=-1$.
\item We wish to prove that $R(u-k)=\frac{k}{u}-1$. Recall that $u$ satisfies $k=u-\frac{\ln{u}}{1-\frac{1}{u}}$. Hence $(k-u)(1-\frac{1}{u})=-\ln{u}$, and so $k-u=-\ln{u}+(\frac{k}{u}-1)$.
Exponentiating both sides gives $e^{k-u}=\frac{1}{u}\,e^{\frac{k}{u}-1}$, and multiplying both sides by $k-u=u\big(\frac{k}{u}-1\big)$ gives
$$(k-u)e^{k-u}=\Big( \frac{k}{u}-1\Big)e^{\frac{k}{u}-1}.$$
Applying the Lambert function to both sides of the equation gives
\begin{equation}\label{E6}
W[(k-u)e^{k-u}]=W\Big[\Big( \frac{k}{u}-1\Big)e^{\frac{k}{u}-1}\Big].
\end{equation}
The left-hand side of (\ref{E6}) is $R(u-k)$. Now by W6, since $\frac{k}{u}-1\geq -1$ then
$$
W \Big[\Big( \frac{k}{u}-1\Big)e^{\frac{k}{u}-1}\Big]=\frac{k}{u}-1,
$$
proving R3.
\item
\begin{equation}\label{E7}
R'(v_0)=W'[-v_0e^{-v_0}]e^{-v_0}(v_0-1).
\end{equation}
Hence by W3,
$$R'(v_0)=\frac{R(v_0)e^{-v_0}(v_0-1)}{-v_0e^{-v_0}(R(v_0)+1)}=\frac{-R(v_0)}{R(v_0)+1}\cdot\frac{v_0-1}{v_0},
$$
proving R4.
\end{enumerate}
\end{proof}
For the next result we need the following lemma.
\begin{lemma}\label{L2}
The expression $v_0+(v_0+k)R(v_0),$ is negative in Region 3.
\end{lemma}
\begin{proof}
Substituting $v_0=u-k$ (the right end of Region 3) into $v_0+(v_0+k)R(v_0)$, and recalling that by R3 $R(u-k)=\frac{k}{u}-1$, gives $0$. Additionally, by R4 the derivative $1+R(v_0)+(v_0+k)R'(v_0)$ of this expression equals $$1+R(v_0)+\frac{-R(v_0)(v_0+k)(v_0-1)}{v_0(R(v_0)+1)}.$$
By R1 and R2, in Region 3 we have $1+R(v_0)>0$ and $-R(v_0)>0$.
Hence the derivative is positive, so the expression $v_0+(v_0+k)R(v_0)$ increases to $0$ at the right end of Region 3 and is therefore negative throughout Region 3.
\end{proof}
Recall Definition~\ref{D10}, that $A=A(v_0)=-(v_0+k)e^{-v_0+\frac{kR(v_0)}{v_0}},$ and recall that $u=u(k).$
\begin{lemma}\label{T3}
\
\begin{enumerate}
\item $W[A(u-k)]=-1.$
\item $A(v_0)$ is decreasing for $1< v_0< u-k.$
\end{enumerate}
\end{lemma}
\begin{proof}
\
\begin{enumerate}
\item Since $W$ is strictly monotonic and by W2, $W[-e^{-1}]=-1$, we must prove that $A(u-k)=-e^{-1}$. Now,
\begin{equation}\label{E8}
A(u-k)=-ue^{k-u+\frac{kR(u-k)}{u-k}}.
\end{equation}
From R3, \ $R(u-k)=\frac{k}{u}-1$.
Substituting this in the right-hand side of (\ref{E8}) gives
$$
-ue^{k-u+\frac{k(k/u-1)}{u-k}}=-ue^{k-u-\frac{k}{u}}=-e^{k-u-\frac{k}{u}+\ln{u}}.
$$
Substituting $\ln{u}=-(k-u)\Big( 1-\frac{1}{u}\Big)$ gives
$$
-e^{k-u-\frac{k}{u}-(k-u)\Big( 1-\frac{1}{u}\Big)}=-e^{-1}.
$$
\item To prove that $A(v_0)$ is decreasing, we differentiate and obtain
\begin{equation}\label{E789}
A'(v_0)=\frac{e^{-v_0+\frac{kR}{v_0}}\Big(v_0(v_0+k-1)+kR \Big)\big( v_0+(v_0+k)R\big)}{v_0^2(R+1)},
\end{equation}
where $R=R(v_0)$. By R1 and R2, the denominator is positive. Thus we need to prove that the numerator is negative. The expression in the first parentheses is positive: $v_0(v_0+k-1)+kR=k(v_0+R)+v_0(v_0-1)>0$, because in Region 3 we have $v_0>1$ and $v_0+R>1+R>0$ (the latter by R1 and R2). The expression in the second parentheses is negative by Lemma~\ref{L2}. Thus $A'(v_0)$ is negative,
implying that $A(v_0)$ is indeed decreasing there.
\end{enumerate}
\end{proof}
Recall that $R=R(v_0)$, and that $u=u(k)$ is defined by the equation $k=u-\frac{\ln{u}}{1-\frac{1}{u}}$.
We wish to look at $W[A]$ as a function of $k$.
Given $v_0$, denote by $k_0$ the value of $k$ that satisfies $v_0=u(k_0)-k_0$, and write $u_0=u(k_0)$.
Also denote by $A_0$ the value of $A$ that corresponds to $k_0$, namely
$ A_0=-(v_0+k_0)e^{-v_0+\frac{k_0R}{v_0}}.$
\begin{lemma}\label{P50}
Given $v_0:$
\begin{enumerate}
\item $W[A]$ is increasing in $k$ in Region 3.
\item $\frac{d}{dk}W[A]$ is decreasing in $k$ in Region 3.
\item $\frac{d}{dk}W[A]\Big|_{k=k_0}=\frac{1}{u_0}.$
\end{enumerate}
\end{lemma}
\begin{proof}
\
\begin{enumerate}
\item To prove that $W[A]$ is increasing in $k$ in Region 3, note that $\frac{dA}{dk}=-\frac{e^{-v_0+\frac{kR}{v_0}}}{v_0}(v_0+(v_0+k)R).$ By Lemma~\ref{L2}, \ $v_0+(v_0+k)R<0$ in Region 3, thus $\frac{dA}{dk}>0$ and $A$ is increasing in $k$ in Region 3. Since $W$ is an increasing function, then $W[A]$ is also increasing in $k$ in Region 3.
\item Using W3, we arrive at
\begin{equation}\label{E34}
\frac{d}{dk}W[A]=\frac{(v_0+(v_0+k)R)W[A]}{v_0(v_0+k)(W[A]+1)}.
\end{equation}
The above equals
$$\frac{v_0+(v_0+k)R}{v_0(v_0+k)}-\frac{(v_0+(v_0+k)R)}{v_0(v_0+k)(W[A]+1)}.$$
Differentiating the above expression with respect to $k$ gives
\begin{equation}\label{E35}
\frac{d^2}{dk^2}W[A]= \frac{-v_0^2}{(v_0(v_0+k))^2}+\frac{v_0(v_0+k)\frac{d}{dk}W[A](v_0+(v_0+k)R)}{(v_0(v_0+k)(W[A]+1))^2}.
\end{equation}
The first term in~(\ref{E35}) is negative. The second term is also negative since we proved in Part 1, that $\frac{d}{dk}W[A]$ is positive, and by Lemma~\ref{L2}, $v_0+(v_0+k)R$ is negative.
\item
By~(\ref{E34})
$$\frac{d}{dk}W[A]=\frac{(v_0+(v_0+k)R)W[A]}{v_0(v_0+k)(W[A]+1)}.$$
Note that for $k=k_0,$ the numerator vanishes according to Lemma~\ref{L2}, and the denominator vanishes according to Part 1 of Lemma~\ref{T3}.
By L'H\^{o}pital's rule
$$\frac{d}{dk}W[A]\Big|_{k=k_0}=\frac{\frac{d}{dk}\Big((v_0+(v_0+k)R)W[A]\Big)\Big|_{k=k_0}}{\frac{d}{dk}\Big(v_0(v_0+k)(W[A]+1)\Big)\Big|_{k=k_0}}.$$
Hence
\begin{equation}\label{E30}
\frac{d}{dk}W[A]\Big|_{k=k_0}=\frac{RW[A_0]+(v_0+(v_0+k_0)R)\frac{d}{dk}W[A]\Big|_{k=k_0}}{v_0(W[A_0]+1)+v_0(v_0+k_0)\frac{d}{dk}W[A]\Big|_{k=k_0}}.
\end{equation}
Denote $s=\frac{d}{dk}W[A]\Big|_{k=k_0},$
then~(\ref{E30}) becomes
$$s=\frac{RW[A_0]+(v_0+(v_0+k_0)R)s}{v_0(W[A_0]+1)+v_0(v_0+k_0)s}.$$
Recalling that $v_0+(v_0+k_0)R$ appearing in the numerator equals $0,$ and that $W[A_0]=-1,$ we obtain
\begin{equation}\label{E31}
s=\frac{-R}{v_0(v_0+k_0)s}
\end{equation}
Note that $s=0$ does not solve~(\ref{E31}), hence the above expression is well defined.
From~(\ref{E31}) we get
\begin{equation}\label{E32}
s^2=\frac{-R}{v_0(v_0+k_0)}.
\end{equation}
By R3, $R=R(v_0)=R(u_0-k_0)=\frac{k_0}{u_0}-1.$ Substituting this, and also $v_0=u_0-k_0,$ in~(\ref{E32}) gives
\begin{equation}\label{E111}
s^2=\frac{1-\frac{k_0}{u_0}}{(u_0-k_0)u_0}=\frac{u_0-k_0}{(u_0-k_0)u_0^2}=\frac{1}{u_0^2}.
\end{equation}
Now, from the proof of the first part of Lemma~\ref{P50} (that $W[A]$ is increasing in $k$ in Region 3), we get that $\frac{d}{dk}W[A]$ is non-negative in Region 3. In particular, $s\geq 0.$ Combining this with the fact that $s\ne 0$, which we established earlier, gives $s>0$. This, together with~(\ref{E111}), implies that $s=\frac{1}{u_0}.$
\end{enumerate}
\end{proof}
We can now prove Theorem~\ref{T1}.
\subsubsection{Proof of Theorem~\ref{T1}}\label{PPT1}
\
Recall that $u=u(k)$ is the solution of $k=u-\frac{\ln{u}}{1-\frac{1}{u}}$ (see Definition~\ref{D1}).
\newline
In Region 3, by Proposition~\ref{P3},
$$\lambda_0+\lambda_1=v_0+R(v_0)+W[A]-\frac{v_1R(v_0)}{v_0}.$$
We utilize R4 and~(\ref{E789}), to arrive at
\begin{equation}\label{E9}
(\lambda_0+\lambda_1)'=\frac{v_0^2W[A](R+1)+(v_0+k)\Big( v_0^2+v_0(v_0+k)R+kR^2\Big)}{v_0^2(v_0+k)\Big(W[A]+1\Big)(R+1)},
\end{equation}
where the derivative is with respect to $v_0.$
According to R1 and R2, \ $R+1>0$ in Region 3 and according to Lemma~\ref{T3}, $W[A]+1>0$ in Region 3. Hence, the denominator is positive in Region 3. Thus it is sufficient to prove that the numerator is negative.
By substituting the end points of Region 3, namely $v_0=1$ and $v_0=u-k$, it is easily verified that the numerator vanishes at the end points of Region 3. We wish to prove that for all $k\geq 0$, the numerator is negative for all $1<v_0<u-k$ (i.e., for all $v_0$ in Region 3). This is illustrated in Figure~\ref{F201}, which presents the numerator as a function of $v_0$ for $k=2$. Note that $u(2)=3.81449$, hence Region 3 is $1<v_0<1.81449$.
\begin{figure}
\includegraphics[scale=0.7]{mv.pdf}
\centering
\vspace{2cm}
\caption{The numerator of $(\lambda_0+\lambda_1)'$ as a function of $v_0,$ in Region 3, for $k=2.$}\label{F201}
\end{figure}
\
Recall that we denoted $v_0=u_0-k_0.$
For any given $v_0$, if we look at the numerator as a function of $k$ then, as explained, for $k_0=u_0-v_0$ (corresponding to the right end point $v_0=u_0-k_0$ of Region 3 when $k=k_0$), the numerator equals $0$. We wish to prove that, given $v_0$, the numerator is decreasing in $k$ for all $k>k_0$, and therefore (since it vanishes at $k=k_0$) is negative. This will be proved shortly and is demonstrated in Figure~\ref{F202}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Dk.pdf}
\vspace{2cm}
\caption{The numerator of $(\lambda_0+\lambda_1)'$ in Region 3, for $k=2,4,6,8,10.$}\label{F202}
\end{center}
\end{figure}
Figure~\ref{F202} presents the numerator for $k=2,4,6,8,10$. At $v_0=1.81449$ the numerator equals zero for $k=k_0=2$ (upper line), and as $k$ increases the value of the numerator decreases (and is thus negative, as claimed). Note also that the length of Region 3 increases with $k$: for all $k>0$ the left end of Region 3 is $1$, and the right end is $u-k$, which equals
$\frac{\ln{u}}{1-\frac{1}{u}}$ and increases with $u$ and thus with $k$. In particular, when $k\to 0$ the right end $u(k)-k\to 1$, and so when $k=0$ Region 3 is empty (since it consists of $1<v_0<1$).
Given $v_0$, denote by $N(k)$ the numerator of $(\lambda_0+\lambda_1)'$ (where the derivative is with respect to $v_0$), namely
$$N(k)=v_0^2W[A](R+1)+(v_0+k)\Big( v_0^2+v_0(v_0+k)R+kR^2\Big).$$
We now differentiate $N(k)$ with respect to $k.$ Note that $v_0$ and $k_0$ are fixed in $k,$ and consequently $R,$ which is a function of $v_0$ only, is fixed as well.
\begin{equation}\label{E33}
\frac{d}{dk}N(k)= v_0^2(R+1)\frac{d}{dk}W[A]+v_0^2+v_0(v_0+k)R+kR^2+R(v_0+k)(R+v_0).
\end{equation}
We first prove that $\frac{d}{dk}N(k)\big|_{k=k_0}$ is negative. We then prove that $\frac{d}{dk}N(k)$ is decreasing in $k$ for all $k>k_0$; hence $\frac{d}{dk}N(k)<0$ for all $k>k_0$, and since $N(k_0)=0$ it follows that $N(k)<0$ for all $k>k_0.$
Recall that for $k=k_0:$ \ $v_0=u_0-k_0,$ and \ $R=R(v_0)=R(u_0-k_0)=\frac{k_0}{u_0}-1.$ Also, by Part 3 of Lemma~\ref{P50}, $\frac{d}{dk}W[A]\Big|_{k=k_0}=\frac{1}{u_0}.$ Substituting all this in~(\ref{E33}) gives
$$(u_0-k_0)^2\frac{k_0}{u_0^2}+(u_0-k_0)^2+(u_0-k_0)u_0\left(\frac{k_0}{u_0}-1\right)+k_0\left(\frac{k_0}{u_0}-1\right)^2+\left(\frac{k_0}{u_0}-1\right)u_0\left(\frac{k_0}{u_0}-1+u_0-k_0\right).$$
This equals
$$\left(\frac{k_0}{u_0}-1\right)^2k_0+\left(\frac{k_0}{u_0}-1\right)(2k_0-u_0+u_0^2-u_0k_0),$$ which equals
\begin{equation}\label{E6666}
\left(\frac{k_0}{u_0}-1\right)\left( (u_0-1)(u_0-k_0)+\frac{k_0^2}{u_0}\right).
\end{equation}
Since for all $k> 0,$ \ we have $u-k> 1,$ then the first factor in~(\ref{E6666}) is negative, and the second factor is positive in Region 3, proving that indeed $\frac{d}{dk}N(k)\big|_{k=k_0}<0$ in Region 3.
We now prove that $\frac{d}{dk}N(k)$ is decreasing in $k,$ for all $k>k_0.$
By~(\ref{E33})
$$\frac{d}{dk}N(k)= v_0^2(R+1)\frac{d}{dk}W[A]+v_0^2+v_0(v_0+k)R+kR^2+R(v_0+k)(R+v_0).$$
This equals
\begin{equation}\label{E37}
v_0^2(R+1)\frac{d}{dk}W[A]+2R(R+v_0)k+v_0^2(R+1)+v_0R(R+v_0).
\end{equation}
The first term $v_0^2(R+1)\frac{d}{dk}W[A]$ is decreasing in $k$, since by R1 and R2, $R+1>0$ in Region 3, and by the second statement in Lemma~\ref{P50},
$\frac{d}{dk}W[A]$ is decreasing in $k$. The remaining expression $2R(R+v_0)k+v_0^2(R+1)+v_0R(R+v_0)$ is a linear function of $k$ with a negative coefficient $2R(R+v_0)$ on $k$ (since $R<0<R+v_0$), hence it is also decreasing in $k$.
So indeed, $\frac{d}{dk}N(k)$ is decreasing in $k,$ for all $k>k_0.$ Since we have proven that $\frac{d}{dk}N(k)\big|_{k=k_0}<0,$ then it follows that for all $k>k_0,$ \ $\frac{d}{dk}N(k)<0.$
Now, since $N(k_0)=0$, it follows that $N(k)<0$ for all $k>k_0$. This means that the numerator of $(\lambda_0+\lambda_1)'$ (where the derivative is with respect to $v_0$) is negative for all $k>k_0$. Since $u-k=\frac{\ln{u}}{1-\frac{1}{u}}$ is increasing in $k$, for all $k>k_0$ we have $u-k > u_0-k_0=v_0$; equivalently, every pair with $1<v_0<u-k$ (i.e., every point of Region 3) has $k>k_0$. Hence $(\lambda_0+\lambda_1)'<0$ throughout Region 3, implying that $\lambda_0+\lambda_1$ decreases in $v_0$ in Region 3.
\subsection{Results relating to Section~\ref{PM} - Profit maximization}
\subsubsection{Proof of Proposition~\ref{P5000}}\label{PP5000}
\begin{enumerate}
\item Recall that in Region 2, $\lambda_0=0$. Hence by~(\ref{E2}),
$(v-p){1-e^{-\lambda_1}\over \lambda_1}=1$, and so
$$p= v-{\lambda_1\over1-e^{-\lambda_1}},$$
$$\pi=v(1-e^{-\lambda_1})-\lambda_1,$$
and
\begin{equation}\label{E903}
\frac{d}{d\lambda_1}\pi=ve^{-\lambda_1}-1.
\end{equation}
Because $\frac{d}{d\lambda_1}\pi=0$ for $\lambda_1^*=\ln{v}$ (and the second derivative is negative), the local maximizer for Region 2 and the profit associated with it are as given in~(\ref{E901}).
\item Recall that in Region 4, $\lambda_1=0$. By (\ref{E1}) we have
$$p=v-k-{\lambda_0\over1-e^{-\lambda_0}}.$$
So by~(\ref{E5000})
$$\pi =(v-k)(1-e^{-\lambda_0})-\lambda_0.$$
Now,
\begin{equation}\label{E900}
\frac{d}{d\lambda_0}\pi=(v-k)e^{-\lambda_0}-1,
\end{equation}
giving the profit-maximizing value $\lambda_0^*=\ln\left(v-k\right)$.
Hence, the local maximum profit of Region 4, and the profit associated with it, are given by~(\ref{E700}).
\item Because $p_3=v-k-1$ separates Regions 2 and 3, at this point $\lambda_0=0$, and so $\pi_3=(v-k-1)\left( 1-e^{-\lambda_1}\right)$.
Now, $p=v-k-1$ implies that $v_0=1$.
Hence by Proposition~\ref{P3}, in Region 3,
$$\lambda_1=W[A]-\frac{(v_0+k)R(v_0)}{v_0}.$$ Thus by R2, we obtain for $v_0=1$
$$\lambda_1=W[A(1)]-(k+1)(-1)=W[-(k+1)e^{-(k+1)}]+k+1=W+k+1.$$
Substituting $\lambda_1=W+k+1$ in $\pi_3$ gives~(\ref{E891}).
\end{enumerate}
\subsubsection{Proof of Theorem~\ref{P6}}\label{PP6}
\
\begin{enumerate}
\item By (\ref{E700}), \ $\pi_4=v-k-1-\ln\left(v-k\right),$ and by (\ref{E901}), \ $\pi_2=v-1-\ln{\left(v\right)}.$
Note that for $y\geq 1$, the value of $y-1-\ln{y}$ is increasing. That result and the inequalities $1\leq v-k<v$ imply that $\pi_4 < \pi_2.$
\item
To prove that $\pi_3\leq \pi_2,$ we must prove that
$$(v-k-1)\left( 1-e^{-(W+k+1)}\right)\leq v-1-\ln{v},$$ which is equivalent to proving that
\begin{equation}\label{E13}
(v-k-1)\left( 1-e^{-(W+k+1)}\right)- v+\ln{v}\leq -1.
\end{equation}
We will find the maximum value of the left-hand side of (\ref{E13}) and show that it equals $-1$. To find the maximum we solve
$$
\left((v-k-1)\left( 1-e^{-(W+k+1)}\right)- v+\ln{v}\right)'=1-e^{-W-k-1}-1+\frac{1}{v}=0.
$$
The solution is $v=e^{W+k+1}$, for which the second derivative is $-\frac{1}{v^2}$. Hence $e^{W+k+1}$ is a local maximum.
\begin{equation}\label{E14}
e^{W+k+1}=e^We^{k+1}=\frac{-(k+1)e^{-k-1}e^{k+1}}{W}=\frac{k+1}{-W}.
\end{equation}
We now show that for $v=e^{W+k+1}$, the left-hand side of (\ref{E13}) is $-1$.
By W5 and~(\ref{E14}), the left-hand side of (\ref{E13}) at $v=e^{W+k+1}$ equals
$$\left(-\frac{k+1}{W}-k-1\right)\left(1+\frac{We^{-(k+1)}}{(k+1)e^{-(k+1)}}\right)
+\frac{k+1}{W}+\ln\left(\frac{k+1}{-W}\right)$$
$$=\left(-\frac{k+1}{W}-k-1\right)\left(1+\frac{W}{k+1}\right)+\frac{k+1}{W}+\ln\left(\frac{k+1}{-W}\right),$$
which gives
\begin{equation}\label{E15}
-k-2-W+\ln\left(-\frac{k+1}{W}\right).
\end{equation}
Now, since by (\ref{E14}) $\frac{k+1}{-W}=e^{W+k+1}$, then
$$
\ln\left(\frac{k+1}{-W}\right)=\ln{e^{(W+k+1)}}=W+k+1.
$$
Substituting this in (\ref{E15}) gives
$$-k-2-W+\ln\left(\frac{k+1}{-W}\right)=-k-2-W+W+k+1=-1.$$
\end{enumerate}
\subsubsection{Proof of Lemma~\ref{P5}}\label{PP5}
We will first prove that $p_2$ belongs to Region 2, iff $\frac{\ln{v}}{1-\frac{1}{v}}\leq k+1.$ Then, we will prove that $\frac{\ln{v}}{1-\frac{1}{v}}\leq k+1,$ iff $v\leq e^{W+k+1}.$
Recall that Region 2 refers to all $p$ satisfying $v-k-1\leq p\leq v-1$.
First, by L'H\^{o}pital's rule
$$\lim_{v\to 1}\frac{\ln{v}}{1-\frac{1}{v}}=\frac{\lim_{v\to 1}\frac{1}{v}}{\lim_{v\to 1}\frac{1}{v^2}}=\frac{1}{1}=1.$$
Because $\frac{\ln{v}}{1-\frac{1}{v}}$ is increasing for all $v\geq 1,$
$$\frac{\ln{v}}{1-\frac{1}{v}}\geq 1,$$
and so
$$v-\frac{\ln{v}}{1-\frac{1}{v}}\leq v-1,$$ proving $p_2\leq v-1,$ always.
We now prove that $p_2\geq v-k-1,$ iff
$\frac{\ln{v}}{1-\frac{1}{v}}\leq k+1$. The latter is equivalent to
$$v-\frac{\ln{v}}{1-\frac{1}{v}}\geq v-k-1.$$ Hence in this case, $p_2\geq v-k-1,$ and so $p_2$ is in Region 2.
Now, we will show that the unique solution for
\begin{equation}\label{E16}
\frac{\ln{v}}{1-\frac{1}{v}}=k+1,
\end{equation}
is \ $e^{(W+k+1)}.$
By (\ref{E14}) $$e^{W+k+1}=-\frac{k+1}{W}.$$ Hence
$$We^{W+k+1}=-(k+1),$$ implying that
$$(W+k+1)e^{W+k+1}-(k+1)e^{W+k+1}=-(k+1),$$ and so
$$(W+k+1)e^{W+k+1}=(k+1)\left(e^{W+k+1}-1\right).$$
This is equivalent to
$$\frac{(W+k+1)e^{W+k+1}}{e^{W+k+1}-1}=k+1,$$ which implies
$$\frac{W+k+1}{1-\frac{1}{e^{W+k+1}}}=k+1.$$
Hence $e^{W+k+1}$ solves (\ref{E16}). Because the left-hand side of (\ref{E16}) is strictly increasing in $v$, $e^{W+k+1}$ uniquely solves (\ref{E16}), and $\frac{\ln{v}}{1-\frac{1}{v}}\leq k+1$ holds precisely for $v\leq e^{W+k+1}$, completing the proof.
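A quick numerical confirmation of this step (ours) for a few values of $k$:
\begin{verbatim}
# Check that ln(v)/(1 - 1/v) = k + 1 at v = e^{W+k+1}.
import numpy as np
from scipy.special import lambertw

for k in (0.5, 1.0, 2.0, 5.0):
    W = lambertw(-(k + 1) * np.exp(-(k + 1)), 0).real
    v = np.exp(W + k + 1)
    print(k, np.isclose(np.log(v) / (1 - 1 / v), k + 1))
\end{verbatim}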
\subsubsection{Proof of Lemma~\ref{P8}}\label{PP8}
First, note that $f(k+1)=0.$ Now,
\begin{equation}\label{E66}
f'(v)=\left(\left( 1-\frac{v}{k+1}\right)W-\ln{(v-k)} \right)'=-\frac{W}{k+1}-\frac{1}{v-k},
\end{equation}
and
we have $$f''(v)=\frac{1}{(v-k)^2}>0,$$ hence $f$ is indeed strictly convex, and thus has at most two roots. To see that $k+1$ is not the only root, we need to find the minimizer $v_m$ of $f$ and show that $k+1\ne v_m.$
To solve $f'(v)=0,$ we utilize~(\ref{E66}), and get
$$-\frac{W}{k+1}-\frac{1}{v-k}=0.$$ This is equivalent to
$$v-k=-\frac{k+1}{W}.$$ Hence
$$v=k-\frac{k+1}{W},$$ and so
$v_m=k-\frac{k+1}{W}$ is the minimum of $f$ in Region 3.
We now show that $k+1<v_m.$ Since $-(k+1)e^{-(k+1)}\geq -e^{-1}$\ for all $k>0$, and $W(\cdot)$ is an increasing function, then $W=W[-(k+1)e^{-(k+1)}]\geq W[-e^{-1}]=-1$. Thus for all $k>0,$ $W>-(k+1),$ and so $$1<-\frac{k+1}{W},$$ and
$$k+1<k-\frac{k+1}{W}=v_m.$$
Hence $f$ has exactly two roots.
Since $f$ is strictly convex and $f(v_m)< 0$ (because $f(k+1)=0$ and $v_m$, the minimum of $f$, differs from $k+1$), $v_m$ lies between the two roots. Since we have established that $k+1<v_m,$ it follows that
\begin{equation}\label{E777}
k+1<v_m\leq v_f.
\end{equation}
\subsubsection{Proof of Corollary~\ref{C1}}\label{PC1}
Recall that by (\ref{E14}) we have $$e^{W+k+1}=-\frac{(k+1)}{W}.$$ Thus
$$e^{W+k+1}=-\frac{(k+1)}{W}\leq k-\frac{(k+1)}{W}=v_m\leq v_f,$$
where the last inequality follows from~(\ref{E777}).
\subsubsection{Proof of Lemma~\ref{P9}}\label{PP9}
\
\begin{enumerate}
\item Recall that Region 4 refers to all $p$ satisfying $0\leq p\leq v-u.$
To prove that $p_4\geq 0,$ we must prove that
$$v-k-\frac{\ln\left(v-k\right)}{1-\frac{1}{v-k}}\geq 0.$$
This is equivalent to
$$(v-k)\left( 1-\frac{1}{v-k}\right)-\ln(v-k)\geq 0.$$
Hence we need to prove that
\begin{equation}\label{E80}
(v-k) -\ln(v-k)\geq 1.
\end{equation}
Note that $v-k\geq u\geq 1$ (the last inequality follows from the fact that $u=u(k)\geq 1$ for all $k\geq 0$).
For $v-k\geq 1,$ (\ref{E80}) always holds since the left-hand side of~(\ref{E80}) equals $1$ for $v-k=1$ and is increasing for $v-k \geq 1$. Hence $p_4$ is indeed non-negative.
We now prove that $p_4\leq v-u$ if $v\geq k+u$.
Since $\frac{\ln y}{1-\frac{1}{y}}$ is increasing in $y,$ then $v-k\geq u$ implies
$$\frac{\ln (v-k)}{1-\frac{1}{v-k}}\geq \frac{\ln u}{1-\frac{1}{u}}.$$
Hence
$$v-k-\frac{\ln (v-k)}{1-\frac{1}{v-k}}\leq v-k-\frac{\ln u}{1-\frac{1}{u}}.$$
The left-hand side of the above equals $p_4$, so we have
$$
p_4\leq v-k-\frac{\ln u}{1-\frac{1}{u}}.
$$
Recall that $u=k+\frac{\ln{u}}{1-\frac{1}{u}}$, so that $p_4 \leq v-u$.
\item We will prove that $k+u\leq v_m$, where $v_m$ was defined as the minimum of $f.$ This result and the observation that $v_m\leq v_f$ (see~(\ref{E777}) in the proof of Lemma~\ref{P8}) complete the proof. Recall that $$v_m=k-\frac{(k+1)}{W}.$$ Hence we need to prove that $$u\leq -\frac{(k+1)}{W}.$$
By (\ref{E14}) $$-\frac{(k+1)}{W}=e^{W+k+1},$$ so we must prove that
$$
u\leq e^{W+k+1}.
$$
Both sides of the inequality are functions of $k$. For $k=0$, both sides equal $1$. As seen in Figure~\ref{F101}, from then on $u<e^{W+k+1}$; a numerical check is sketched after this list.
\end{enumerate}
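The sketch below (illustrative only, not part of the proof) solves the fixed-point equation $u=k+\frac{\ln u}{1-\frac{1}{u}}$ recalled above by bracketing, and compares $u$ with $e^{W+k+1}$; the bracket endpoints are a convenience, not taken from the text.
\begin{verbatim}
# Check that u(k) < e^{W+k+1} for k > 0, where u solves
# u = k + ln(u)/(1 - 1/u)  (the characterisation recalled above).
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

def u_of_k(k):
    g = lambda u: u - k - np.log(u) / (1 - 1 / u)
    return brentq(g, 1 + 1e-9, k + np.exp(k + 2))  # generous bracket

for k in [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]:
    W = lambertw(-(k + 1) * np.exp(-(k + 1))).real
    assert u_of_k(k) < np.exp(W + k + 1)
\end{verbatim}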
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{kU.pdf}
\vspace{2cm}
\caption{$u$ (solid line) and $e^{W+k+1}$ (dashed line) as functions of $k$}\label{F101}
\end{center}
\end{figure}
\pagebreak
\section{Notation}
\begin{description}
\item[$c$] Cost of going to store
\item[$V$] Value of good to consumer if bought at his ideal period
\item[$v$]=$\frac{V}{c}$
\item[$K$] Reduction in consumer's utility if he buys the good too early
\item[$k$]=$\frac{K}{c}$
\item[$P$] Price of good
\item[$p$]=$\frac{P}{c}$
\item[$q_{t}$] Probability that a consumer chooses to arrive in period $t$
\item[$U_{t}$] Expected utility of a consumer who arrives in period $t$
\item [$u$] The unique solution for~(\ref{E1000}) for a given $k.$
\item[$\lambda$] Arrival rate of potential consumers
\item[$\lambda_t$] Arrival rate of consumers at the store in period $t$
\item [$x$] $\lambda_0$
\item [$y$] $\lambda_1$
\item[$\Pi$] Profits
\item[$\pi$]=$\frac{\Pi}{c}$
\end{description}
\section*{Acknowledgments}
This document was prepared by the NOvA Collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP user facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359.
This work was supported by the U.S. Department of Energy; the U.S. National Science Foundation; the Department of Science and Technology, India; the European Research Council; the MSMT CR, GA UK, Czech Republic; the RAS, RFBR, RMES, RSF, and BASIS Foundation, Russia; CNPq and FAPEG, Brazil; STFC and the Royal Society, United Kingdom; and the state and University of Minnesota. We are grateful for the contributions of the staffs of the University of Minnesota at the Ash River Laboratory and of Fermilab.
\vspace*{-5mm}
\section*{APPENDIX: DIFFRACTIVE PION PRODUCTION}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{dfr_eff_t_abs_rw.eps}
\caption{Selection efficiency of DFR as a function of $|t|$. The DFR $|t|$ distribution estimated from Kopeliovich \textit{et al.} is shown in gray with arbitrary normalization.
\label{figure_dfr_eff}}
\end{center}
\end{figure*}
NC DFR pion production on free protons (hydrogen) is a background process to the coherent signal.
It produces a forward-going pion with small momentum transfer to the recoil proton and becomes indistinguishable from coherent if the recoil proton is undetected.
The recoil protons, when detected, could create additional prongs, increase the vertex energy, or decrease the ratio of $E_{\pi^0}/E_{Tot}$, causing the DFR events to fail the selection cuts.
The acceptance of DFR, therefore, depends upon the kinetic energy of the recoil proton ($T_p$), which is related to $|t|$ by $T_p = |t|/2m_p$.
The DFR selection efficiency in this measurement is shown in Fig. \ref{figure_dfr_eff} as a function of $|t|$.
It is notable that the selection efficiency decreases with $|t|$, since the proton energy increases with $|t|$, and the overall efficiency (1.7\%) is considerably lower than that of the coherent signal (4.1\%).
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{xsec_coh_dfr.eps}
\caption{
Cross section of DFR $\pi^0$ production on hydrogen as a function of incoming neutrino energy predicted by the Rein model (GENIE 2.12.2), compared with coherent $\pi^0$ production on carbon predicted by the Rein-Sehgal model (GENIE 2.10.4).
\label{figure_dfr_rein}}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{xsec_t_np_genie.eps}
\includegraphics[width=0.49\textwidth]{xsec_q2_np_genie.eps}
\caption{The Kopeliovich $\frac{d\sigma}{d|t|}$ (left) and $\frac{d\sigma}{dQ^2}$ (right) predictions of $\nu p \rightarrow \nu p \pi^0$ at $E_\nu = 2.7$\,GeV, compared with the GENIE 2.12.2 prediction of the same channel without DFR at the same energy (shape only). Enhancements in the low-$Q^2$ and low-$|t|$ regions can be observed from Kopeliovich. \label{figure_dfr_q2_t}}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{fit_dfr_kopeliovich.eps}
\includegraphics[width=0.49\textwidth]{tmtmin_nc_2p7gev.eps}
\caption{Left: The Kopeliovich $\frac{d\sigma}{d(|t|-|t|_{\text{min}})}$ prediction of $\nu p \rightarrow \nu p \pi^0$ at $E_\nu = 2.7$\,GeV fitted with the GENIE 2.12.2 prediction of the same channel without DFR and an exponential term. The fitted exponential term is considered to be the maximum estimate of the DFR $\frac{d\sigma}{d(|t|-|t|_{\text{min}})}$. Right: A shape comparison between the fitted exponential term from Kopeliovich and the term extracted from the Rein model prediction for DFR. A slightly softer shape is observed for the one from Kopeliovich.
\label{figure_dfr_tmtmin}}
\end{center}
\end{figure*}
DFR is simulated by the Rein model in GENIE 2.12.2 and is predicted to be about $20\%$ of the coherent cross section on carbon in the few-GeV energy region (Fig. \ref{figure_dfr_rein}).
However, the Rein model is intended for the hadronic invariant mass $W>2$\,GeV region.
In the $W<2$\,GeV region, the interference between DFR and RES or non-RES pion productions makes the performance of the Rein model questionable.
Alternatively, the DFR cross section can be estimated by a method similar to that of Ref. \cite{Mislivec:2017qfz}, from the calculation of inclusive $\nu p \rightarrow \nu p \pi^0$ by Kopeliovich \textit{et al.}, which is based upon the PCAC hypothesis and includes both DFR and non-DFR contributions \cite{Kopeliovich:2012,Kopeliovich:np}.
The left panel of Fig. \ref{figure_dfr_q2_t} shows Kopeliovich's prediction of $\nu p \rightarrow \nu p \pi^0$ in $\frac{d\sigma}{d|t|}$ at NOvA's average neutrino energy (2.7\,GeV).
This prediction is compared with the GENIE 2.12.2 prediction of $\nu p \rightarrow \nu p \pi^0$ without DFR, and an enhancement is observed in the low-$|t|$ region.
A similar enhancement can be observed at low $Q^2$ (Fig. \ref{figure_dfr_q2_t}, right).
The DFR contribution to $\nu p \rightarrow \nu p \pi^0$ can be quantified by fitting Kopeliovich's prediction of $\frac{d\sigma}{d(|t|-|t|_{\text{min}})}$
with GENIE without DFR
plus an exponential term $A\exp(-B(|t|-|t|_{\text{min}}))$, where $A$ and $B$ are fitting parameters (Fig. \ref{figure_dfr_tmtmin}, left).
$\frac{d\sigma}{d(|t|-|t|_{\text{min}})}$ is used instead of $\frac{d\sigma}{d|t|}$ since the DFR $\frac{d\sigma}{d(|t|-|t|_{\text{min}})}$ follows an exponential form, while $\frac{d\sigma}{d|t|}$ deviates from an exponential at low $|t|$ because of the $|t|_{\text{min}}$ suppression.
The exponential term extracted from the fit is considered as the maximum possible cross section of DFR since it may include other contributions to the low-$|t|$ enhancement in the Kopeliovich prediction in addition to DFR.
It shows a slightly softer shape in $|t|-|t|_{\text{min}}$ than the Rein model prediction (Fig. \ref{figure_dfr_tmtmin}, right).
An integral of the fitted exponential term gives an estimate of the total DFR cross section at $E_\nu=2.7$\,GeV: $3.04\times10^{-40}\,\text{cm}^{2}/\text{proton}$.
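To make the fit-and-integrate procedure concrete, the sketch below fits $A\exp(-Bx)$, with $x=|t|-|t|_{\text{min}}$, to the excess of a stand-in prediction over a stand-in non-DFR baseline, and integrates the fitted term in closed form ($\int_0^\infty A e^{-Bx}\,dx = A/B$). All arrays and parameter values are placeholders, not the actual Kopeliovich or GENIE predictions.
\begin{verbatim}
# Sketch of the DFR estimate: fit A*exp(-B*x) to the low-|t| excess, then
# integrate it; x = |t| - |t|_min. All numerical inputs are placeholders.
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(0.0, 1.0, 50)                  # |t| - |t|_min grid
baseline = 2.0 * np.exp(-3.0 * (x - 0.3)**2)   # stand-in for GENIE w/o DFR
dsigma = baseline + 1.5 * np.exp(-8.0 * x)     # stand-in "Kopeliovich" curve

def model(x, A, B):                            # fixed baseline + exponential
    return 2.0 * np.exp(-3.0 * (x - 0.3)**2) + A * np.exp(-B * x)

(A, B), _ = curve_fit(model, x, dsigma, p0=(1.0, 5.0))
print("fitted A, B:", A, B, " integrated DFR term:", A / B)
\end{verbatim}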
For the measurement reported in this paper, the DFR background events are first simulated with the Rein model in GENIE 2.12.2, and then reweighted to the Kopeliovich-based estimate in both normalization and shape as a function of $|t|-|t|_{\text{min}}$.
This reweighting makes a 1\% difference in the coherent signal measurement.
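A minimal sketch of such a per-event reweighting: each simulated event receives the ratio of the target exponential shape to the nominal one, scaled by the ratio of total cross sections. The two slopes and the nominal normalisation below are placeholders; only the $3.04\times10^{-40}\,\text{cm}^2/\text{proton}$ target is taken from the text.
\begin{verbatim}
# Per-event reweighting in x = |t| - |t|_min: w(x) = target(x)/nominal(x).
import numpy as np

B_rein, B_kop = 10.0, 8.0                    # placeholder slopes (softer target)
sigma_rein, sigma_kop = 4.0e-40, 3.04e-40    # nominal (placeholder) and target

def weight(x):
    shape = (B_kop * np.exp(-B_kop * x)) / (B_rein * np.exp(-B_rein * x))
    return (sigma_kop / sigma_rein) * shape  # corrects both shape and rate

x_events = np.random.default_rng(0).exponential(1.0 / B_rein, size=100_000)
print("rate ratio after reweighting:", weight(x_events).mean()
      * sigma_rein / sigma_kop)              # ~1 by construction
\end{verbatim}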
The above method simulates DFR independently from other pion production channels on hydrogen simulated by GENIE.
The interference between DFR and non-DFR channels, however, could potentially affect the rate and shape of both DFR and non-DFR pion productions, and the effect on the measurement reported in the paper needs to be discussed.
Since the Kopeliovich model includes both DFR and non-DFR contributions in a coherent way,
this effect can be taken into account by simulating all the $\nu p \rightarrow \nu p \pi^0$ events on hydrogen using the Kopeliovich model.
This is achieved by reweighting the GENIE simulated $\nu p \rightarrow \nu p \pi^0$ events on hydrogen to the prediction of Kopeliovich as a 2D function of $|t|$ and $Q^2$.
The background template fit is repeated with the hydrogen contribution fixed.
The effect on the measurement is a 2.6\% difference from the nominal result, which is considered as an additional systematic uncertainty contribution from DFR.
\cleardoublepage\oldsection{Summary and Concluding Remarks} \label{s:conc}
In this section we review the findings of this work. First we summarise the qualitative results that we have produced. We then highlight those results which are important to the field of flows in porous media and may affect future research in this area. Finally, we discuss further investigations that could be performed on the current formulation.
We produce solutions for incompressible Darcy imbibition with a wetting front that has a constant contact angle within the pores, as formulated in section \ref{s:ProbForm}. In section \ref{s:Numerics} we put forward a numerical scheme that is suitable for solving this formulation, and can easily be modified to solve for non-linear boundary conditions, such as those produced by a dynamic contact angle or the modes proposed by Shikhmurzaev and Sprittles in \cite{wetting_dynamics_shk}. This numerical scheme is used to produce the velocity and pressure distributions across the wetted region that are plotted in section \ref{ss:pv_dist}. These reveal that for small domains gravity has little effect, whilst for large domains the fluid can clearly be seen to fall under its action far from the drawing area. In the region around the contact line CL1 (see figure \ref{f:intro_CA_CL}) the velocities are found to be singular, whilst around CL2 the velocity distribution is highly dependent on the contact angle CA2 along with the strength of the gravitational effect. Asymptotic analysis is performed in section \ref{ss:asymp}, guided and confirmed by the numerical results in section \ref{ss:num_asymp}, which reveals the behaviour local to the contact lines. The analysis local to CL2 was then interpreted in relation to the dynamics of the wetting front in section \ref{ss:asym_phys}, giving the different possible behaviours. It was predicted that, for the case without gravity, the contact angle CA2 would have a constant solution $\pi/2$, and for initial conditions of an angle less than $\pi/2$ the angle would converge on $\pi/2$. Also, for the case with gravity, the contact angle would certainly increase to be larger than $\pi/2$. The predictions from our analysis were confirmed by the numerical simulations in section \ref{ss:simulate}, the contact angle converging to $\pi/2$ without gravity and to a larger angle with gravity. Gravity also makes the wetted region move faster downwards, which causes the volume of the wetted region to increase faster, and retards the advancement of the contact line CL2. Finally, we observed that the wetted region's evolution seems to be largely independent of the initial conditions, converging on the same dynamics as time passes.
In our asymptotic analysis, section \ref{ss:asymp}, we obtained singular velocities. In section \ref{ss:asym_phys} we discuss the physical meaning of this, concluding that Darcy's equation must be invalid in these regions, and that an improvement is required. Considering another phenomenon, Darcy's equation is used successfully to model capillary rise in porous columns, as discussed in our introduction. However, if this column were tipped on its side during the imbibition then the equation that describes the process would no longer be Darcy's equation, as shown by our analysis. An improvement is required not only for the phenomenon considered here, but for a wide range of phenomena arising in research, engineering and nature. This improvement should, first and foremost, not ignore inertial effects. It is also possible that long-range viscous effects will exist due to the enormous velocity gradients present. In any case, an investigation into producing a valid equation for this phenomenon is required to advance the field of fluid flows in porous materials.
With regard to the current formulation, which is believed to be qualitatively correct, it has revealed that the value of the contact angle at CL2, CA2 or $\theta_1$, converges on different values depending on the strength of the gravitational effect, specified by the value of $\gamma$. It would be of interest to discover how the limiting value of the contact angle depends on $\gamma$. It would also be informative to see if the contact line CL2 stops moving when it is far from the axis of symmetry, i.e. if it too converges depending on $\gamma$. In addition, we proposed that there may be a stable manifold in the state space of all possible wetting fronts that is converged onto for all physical initial conditions. All of these properties should be investigated.
\subsection{Discrete form of the Boundary Conditions}
\subsubsection{Essential Boundary Conditions}
The discrete equations \eqref{eq:discrete_continuity} and \eqref{seq:discrete_darcy} are applicable at every node in the bulk. However on the boundary we wish to apply the boundary conditions in \eqref{seq:dimless_system}, and must do so to arrive at the correct number of linearly independent equations. We notice that the continuity equation applies a scalar restriction and thus specifies pressure, whilst Darcy's equation applies a vector restriction and thus specifies velocity (see \cite{fluid_poz} for a fuller justification). In the discrete form the instance of the equations with weight function $\psi_i$ specifies the value of the functions at node $i$. Therefore we can apply the boundary conditions as `essential boundary conditions', replacing the appropriate equation by the specification of the boundary condition. This removes the linearly dependent equations leaving us with the same number of equations as unknowns.
For the conditions
\begin{alignat}{2}
p&=-1 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_0 \tag{\ref{eq:bc_gamma0}} \\
p&=0 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_2 \tag{\ref{eq:bc_gamma2}}
\end{alignat}
we see that, if node $i$ is on one of these boundaries, we replace \eqref{eq:discrete_continuity} with
\begin{subequations}\label{eq:discrete_bc_pressure}\begin{alignat}{2}
p_i&=-1 \hspace{1cm} &\forall \: i \: : \: \boldsymbol{r}_i &\in \Gamma_0 \\
p_i&=0 \hspace{1cm} &\forall \: i \: : \: \boldsymbol{r}_i &\in \Gamma_2
\end{alignat}\end{subequations}
For the condition
\begin{alignat}{2}
\boldsymbol{u}\cdot\hat{\boldsymbol{n}}&=0 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_1 \cup \Gamma_3, \tag{\ref{eq:bc_gamma13}}
\end{alignat}
neither of \eqref{seq:discrete_darcy} is an equation for $\boldsymbol{u}\cdot\hat{\boldsymbol{n}}$; we chose to have one for $\boldsymbol{u}\cdot\hat{\boldsymbol{r}}$ and the other for $\boldsymbol{u}\cdot\hat{\boldsymbol{z}}$. Thus we must use a new rotated form of the discrete equations. Let us define orthogonal constant unit vectors in the $r$-$z$ plane, $\hat{\boldsymbol{N}}$ and $\hat{\boldsymbol{T}}$, such that if node $i$ is on $\Gamma_1$ or $\Gamma_3$ then $\hat{\boldsymbol{N}}=\hat{\boldsymbol{n}}$ at $\boldsymbol{r}=\boldsymbol{r}_i$ and $\hat{\boldsymbol{T}}$ points in the anticlockwise direction around the boundary. Writing $\hat{\boldsymbol{N}}=\hat{N}_r\hat{\boldsymbol{r}}+\hat{N}_z\hat{\boldsymbol{z}}$ we see that $\hat{\boldsymbol{T}}=-\hat{N}_z\hat{\boldsymbol{r}}+\hat{N}_r\hat{\boldsymbol{z}}$. Thus the condition \eqref{eq:bc_gamma13} and tangential component of \eqref{eq:integral_discrete_darcy} are, respectively,
\begin{subequations} \label{eq:discrete_bc_velocity} \begin{align}
\hat{N}_r u_i + \hat{N}_z v_i &= 0, \\
\sum_j\left[-\hat{N}_z C_{ij} u_j + \hat{N}_r C_{ij} v_j + (\hat{N}_z A_{ij} - \hat{N}_r B_{ij} + \hat{N}_z D_{ij}) p_j\right] &= \hat{N}_z a_i - \hat{N}_r b_i - \hat{N}_r \gamma g_i.
\end{align}\end{subequations}
and are used in place of \eqref{seq:discrete_darcy} for $i \: : \: \boldsymbol{r}_i \in \Gamma_1 \cup \Gamma_3$.
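In matrix terms, imposing these essential conditions amounts to overwriting rows of the assembled linear system. A minimal sketch, assuming a hypothetical ordering in which node $i$ owns rows $3i$, $3i+1$ (the two Darcy rows) and $3i+2$ (the continuity row):
\begin{verbatim}
# Sketch: impose essential boundary conditions by row replacement.
import numpy as np

def apply_pressure_bc(K, rhs, node, value):
    row = 3 * node + 2            # continuity row of this node: p_i = value
    K[row, :] = 0.0
    K[row, row] = 1.0
    rhs[row] = value

def apply_no_flux_bc(K, rhs, node, Nr, Nz, tangential_row, tangential_rhs):
    ru, rv = 3 * node, 3 * node + 1
    K[ru, :] = 0.0                # normal row: Nr*u_i + Nz*v_i = 0
    K[ru, ru], K[ru, rv] = Nr, Nz
    rhs[ru] = 0.0
    K[rv, :] = tangential_row     # rotated tangential Darcy equation
    rhs[rv] = tangential_rhs
\end{verbatim}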
\subsubsection{Natural Boundary Conditions}
The objects $a_i$, $b_i$ and $c_i$ are boundary integrals of the unknowns $p$ and $\boldsymbol{u}\cdot\hat{\boldsymbol{n}}$. If the required variable is specified on the domain of integration as a boundary condition then this is a ``natural boundary condition'' and the integral is taken directly from the condition. If the value is not known then it can be obtained from the approximations in \eqref{seq:function_discrete}. Let
\begin{equation} \label{eq:discrete_EF}
E_{ij}=\int_{\partial\Omega_0} \psi_i \psi_j \hat{n}_r r \dif s,
\qquad
F_{ij}=\int_{\partial\Omega_0} \psi_i \psi_j \hat{n}_z r \dif s,
\end{equation}
therefore
\begin{equation} \label{eq:discrete_abc}
a_i=\sum_j E_{ij} p_j,
\qquad
b_i=\sum_j F_{ij} p_j,
\qquad
c_i=\sum_j \left[ E_{ij} u_j + F_{ij} v_j \right].
\end{equation}
Note that when using \eqref{eq:1d_master_integration} each term in the sum can be chosen to be of the natural or approximate form individually.
\subsubsection{Boundary Conditions at the Corners}
In the mesh there are nodes at each of the corners $C_0$, $C_1$, $C_2$ and $C_3$, and we must choose which of the boundary conditions to apply at each corner. However, in all tests the solutions produced with each boundary condition were indistinguishable. We have arbitrarily chosen to use pressure boundary conditions at all corners except for $C_0$ at which the normal velocity condition is applied.
\subsection{Discrete form of the Bulk Equations}
The remaining unknowns are the values of the functions at the nodes; the method of finding these values is explained here. Analytically these values are specified by the bulk equations, which will be converted into a numerical scheme called the Galerkin finite element method.
We first construct weighted residuals of the bulk equations by volume integrating each equation with weight $\psi_i$. Integration by parts is then used to minimise the level of differentiability required of any function, as well as providing a way to include boundary conditions, preferring to differentiate the interpolation functions over the approximate solutions. Requiring that this form of the equations is satisfied exactly by the approximate solution produces equations that specify the values of the functions at the $i$th node in terms of the values at the nodes in the elements it is part of. The approximations \eqref{seq:function_discrete} are used to produce this set of linear equations for the unknowns. Since there is one interpolation function and three unknowns for each node, and there are three equations (a vector equation counts as two), the full set of discrete equations will uniquely specify the values of the unknowns (once the boundary conditions are included to remove linearly dependent equations).
Next we consider the volume that will be integrated over. It must be a three dimensional region, the integrals over which being reducible to integrals over $\Omega_0$. The simplest choice is a wedge of the wetted region, i.e. the part of it that satisfies $\phi\in[\alpha-\frac{1}{2}\delta\alpha,\alpha+\frac{1}{2}\delta\alpha]$ for some $\alpha$, depicted in figure \ref{f:wedge_integrate}. This shall be called $\Omega^{wedge}$ and is considered as $\delta\alpha \rightarrow 0$ to obtain the region $\Omega_0$.
Note that the discrete form produced here is certainly not the only one possible for our system, \eqref{seq:dimless_system}, and not even the only scheme for our choice of interpolation. Stabilized schemes such as that in \cite{stable_darcy} exist but were not found to improve the accuracy of the solution.
\begin{figure}[t]
\centering
\input{Figures/TikZ/Wedge.tex}
\caption{Illustration of the domain $\Omega^{wedge}$ with boundary $\partial\Omega^{wedge}$, this is the part of the wetted region that satisfies $\phi \in [\alpha-\frac{1}{2}\delta\alpha,\alpha+\frac{1}{2}\delta\alpha]$.}
\label{f:wedge_integrate}
\end{figure}
\subsubsection{The Continuity Equation}
The dimensionless form of the continuity equation was found to be
\begin{subequations}\begin{alignat}{2}
\nabla \cdot \boldsymbol{u}&=0 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Omega_0. \tag{\ref{eq:continuity}}
\end{alignat}\end{subequations}
The weighted residual form of this is
\begin{align*}
\int_{\Omega^{wedge}} \psi_i \nabla \cdot \boldsymbol{u} \dif V &=0
\\
\Rightarrow \qquad
\int_{\Omega^{wedge}} (\nabla \psi_i) \cdot \boldsymbol{u} \dif V &=\int_{\partial \Omega^{wedge}} \psi_i \boldsymbol{u} \cdot \hat{\boldsymbol{n}} \dif S
\end{align*}
where $\partial \Omega^{wedge}$ is the surface of $\Omega^{wedge}$ and $\dif S$ is a surface element. As $\delta\alpha \rightarrow 0$, to leading order
\begin{gather*}
\delta\alpha\int_{\Omega_0} (\nabla \psi_i) \cdot \boldsymbol{u} r \dif r \dif z
=\delta\alpha\int_{\partial \Omega_0} \psi_i \boldsymbol{u} \cdot \hat{\boldsymbol{n}} r \dif s
+\int_{\Omega_0} \psi_i \boldsymbol{u} \cdot \hat{\boldsymbol{\phi}}\Big|_{\phi=\alpha+\frac{1}{2}\delta\alpha} \dif r \dif z
-\int_{\Omega_0} \psi_i \boldsymbol{u} \cdot \hat{\boldsymbol{\phi}}\Big|_{\phi=\alpha-\frac{1}{2}\delta\alpha} \dif r \dif z
\\
\Rightarrow \qquad
\int_{\Omega_0} (\nabla \psi_i) \cdot \boldsymbol{u} r \dif r \dif z
=\int_{\partial \Omega_0} \psi_i \boldsymbol{u} \cdot \hat{\boldsymbol{n}} r \dif s,
\end{gather*}
where $s$ is the arc length along $\partial \Omega_0$. Let us now define the following
\begin{align} \label{eq:discrete_ABc}
A_{ij}&=\int_{\Omega_0} \frac{\partial \psi_i}{\partial r} \psi_j r \dif r \dif z,
&
B_{ij}&=\int_{\Omega_0} \frac{\partial \psi_i}{\partial z} \psi_j r \dif r \dif z,
&
c_i&=\int_{\partial \Omega_0} \psi_i \boldsymbol{u} \cdot \hat{\boldsymbol{n}} r \dif s.
\end{align}
Thus, using the approximations in \eqref{seq:function_discrete}, we arrive at the discrete form of the continuity equation
\begin{equation} \label{eq:discrete_continuity}
\sum_j\left[A_{ij}u_j + B_{ij}v_j\right] = c_i.
\end{equation}
\subsubsection{Darcy's Equation}
The dimensionless form of Darcy's equation was found to be
\begin{subequations}\begin{alignat}{2}
\boldsymbol{u}&=-\nabla (p +\gamma z) \hspace{1cm} & \forall \: \boldsymbol{r} &\in \Omega_0. \tag{\ref{eq:darcy}}
\end{alignat}\end{subequations}
The weighted residual form of this is
\begin{align*}
\int_{\Omega^{wedge}}\psi_i(\boldsymbol{u}+\nabla p +\gamma \hat{\boldsymbol{z}}) \dif V &=0
\\
\Rightarrow \qquad
\int_{\Omega^{wedge}}\psi_i\boldsymbol{u}\dif V
-\int_{\Omega^{wedge}}(\nabla\psi_i) p \dif V
+\gamma \hat{\boldsymbol{z}}\int_{\Omega^{wedge}}\psi_i \dif V
&=-\int_{\partial\Omega^{wedge}}\psi_i p \hat{\boldsymbol{n}}\dif S.
\end{align*}
As $\delta\alpha \rightarrow 0$, to leading order
\begin{equation*}
\begin{array}{rr} \displaystyle \vspace{1mm}
\delta\alpha\int_{\Omega_0}\psi_i\boldsymbol{u} r \dif r \dif z
-\delta\alpha\int_{\Omega_0}(\nabla\psi_i) p r \dif r \dif z
+\delta\alpha\gamma \hat{\boldsymbol{z}}\int_{\Omega_0}\psi_i r \dif r \dif z
+ \ldots \\ \displaystyle \ldots
+\int_{\Omega_0}\psi_i p \hat{\boldsymbol{\phi}}\Big|_{\phi=\alpha+\frac{1}{2}\delta\alpha}\dif r \dif z
-\int_{\Omega_0}\psi_i p \hat{\boldsymbol{\phi}}\Big|_{\phi=\alpha-\frac{1}{2}\delta\alpha}\dif r \dif z
\end{array}
=-\delta\alpha\int_{\partial\Omega_0}\psi_i p \hat{\boldsymbol{n}}r \dif s.
\end{equation*}
Next notice that
\begin{equation*}
\hat{\boldsymbol{\phi}}\Big|_{\phi=\alpha+\frac{1}{2}\delta\alpha}
-\hat{\boldsymbol{\phi}}\Big|_{\phi=\alpha-\frac{1}{2}\delta\alpha}
=
-2\hat{\boldsymbol{r}}\Big|_{\phi=\alpha}\sin\left(\frac{\delta\alpha}{2}\right),
\end{equation*}
therefore
\begin{equation} \label{eq:integral_discrete_darcy}
\int_{\Omega_0}\psi_i\boldsymbol{u} r \dif r \dif z
-\int_{\Omega_0}(\nabla\psi_i) p r \dif r \dif z
-\hat{\boldsymbol{r}}\int_{\Omega_0}\psi_i p\dif r \dif z
+\gamma \hat{\boldsymbol{z}}\int_{\Omega_0}\psi_i r \dif r \dif z
=-\int_{\partial\Omega_0}\psi_i p \hat{\boldsymbol{n}}r \dif s.
\end{equation}
Using $\hat{\boldsymbol{n}}=\hat{n}_r \hat{\boldsymbol{r}} + \hat{n}_z \hat{\boldsymbol{z}}$, let
\begin{equation} \label{eq:discrete_CDabg}
\begin{array}{c} \displaystyle
C_{ij} = \int_{\Omega_0} \psi_i \psi_j r \dif r \dif z,
\qquad
D_{ij} = \int_{\Omega_0} \psi_i \psi_j \dif r \dif z,
\\ \displaystyle
a_{i} = \int_{\partial\Omega_0} \psi_i p \hat{n}_r r \dif s,
\qquad
b_{i} = \int_{\partial\Omega_0} \psi_i p \hat{n}_z r \dif s,
\qquad
g_i = \int_{\Omega_0} \psi_i r \dif r \dif z.
\end{array}
\end{equation}
To arrive at a discrete form that a computer can understand, it must be a set of scalar equations. In the bulk it does not matter what direction we choose for these scalar equations, but orthogonal directions are best. Thus we simply take the scalar product of \eqref{eq:integral_discrete_darcy} with $\hat{\boldsymbol{r}}$ and $\hat{\boldsymbol{z}}$, and then use the approximations in \eqref{seq:function_discrete}, to arrive at
\begin{subequations}\label{seq:discrete_darcy}\begin{align}
\sum_j\left[C_{ij}u_j-(A_{ij}+D_{ij})p_j\right] &=-a_i, \\
\sum_j\left[C_{ij}v_j-B_{ij}p_j\right] &=-b_i-\gamma g_i.
\end{align}\end{subequations}
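Collecting \eqref{eq:discrete_continuity} and \eqref{seq:discrete_darcy}, the nodal values satisfy one global linear system. A minimal sketch, assuming the matrices $A$, $B$, $C$, $D$ and vectors $a$, $b$, $c$, $g$ have already been assembled by quadrature, and that boundary rows have already been replaced by the essential conditions (without that replacement the system is singular):
\begin{verbatim}
# Solve the assembled discrete system for nodal (u, v, p):
#   C u           - (A + D) p = -a
#         C v     - B p       = -b - gamma*g
#   A u + B v                 =  c
import numpy as np

def solve_darcy(A, B, C, D, a, b, c, g, gamma):
    n = C.shape[0]
    Z = np.zeros((n, n))
    K = np.block([[C, Z, -(A + D)],
                  [Z, C, -B],
                  [A, B, Z]])
    rhs = np.concatenate([-a, -b - gamma * g, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:2 * n], sol[2 * n:]   # u, v, p nodal values
\end{verbatim}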
\cleardoublepage\oldsection{Discrete form of the Equations} \label{s:Numerics}
In the set of equations to solve, \eqref{seq:dimless_system}, it is important to observe that the only time dependence is in the advancing of the wetting front, \eqref{eq:bc_timestep_F}. Thus the equations can be solved at each instant of time for the velocity and pressure distribution independently of temporal evolution. First we shall present the scheme for numerical solution to the spatial problem, which shall then be tested, before giving the method for time stepping.
\input{FEM/Interpolation.tex}
\input{FEM/Integration.tex}
\input{FEM/Gradient.tex}
\input{FEM/Spines}
\input{FEM/Discrete_Bulk.tex}
\input{FEM/Discrete_Boundary.tex}
\input{FEM/Summary}
\input{FEM/Testing}
\input{FEM/Time}
\input{FEM/Measurements}
\subsection{The Gradient Operator and Normals}
We will require the gradient of differentiable axisymmetric scalar functions; let us denote a generic such function by $f(r,z)$. This is the gradient in cylindrical coordinates, but $f$ is not a function of $\phi$; thus
\begin{equation*}
\nabla f = \frac{\partial f}{\partial r} \hat{\boldsymbol{r}} + \frac{\partial f}{\partial z} \hat{\boldsymbol{z}}
=\left(\begin{array}{c} \displaystyle \vspace{1mm}
\frac{\partial f}{\partial r} \\ \displaystyle
\frac{\partial f}{\partial z}
\end{array}\right)
\end{equation*}
Defining
\begin{equation*}
\boldsymbol{\partial_\xi} f = \frac{\partial f}{\partial \xi} \hat{\boldsymbol{\xi}} + \frac{\partial f}{\partial \eta} \hat{\boldsymbol{\eta}}
=\left(\begin{array}{c} \displaystyle \vspace{1mm}
\frac{\partial f}{\partial \xi} \\ \displaystyle
\frac{\partial f}{\partial \eta}
\end{array}\right)
\end{equation*}
and using \eqref{eq:elem_iso_coord_trans} we see that
\begin{align}\notag
\left(\begin{array}{c} \displaystyle \vspace{1mm}
\frac{\partial f}{\partial \xi} \\ \displaystyle
\frac{\partial f}{\partial \eta}
\end{array}\right)
&=
\left(\begin{array}{cc}\vspace{1mm}
\dfrac{\partial r}{\partial \xi} & \dfrac{\partial z}{\partial \xi} \\
\dfrac{\partial r}{\partial \eta} & \dfrac{\partial z}{\partial \eta}
\end{array}\right)
\left(\begin{array}{c} \displaystyle \vspace{1mm}
\frac{\partial f}{\partial r} \\ \displaystyle
\frac{\partial f}{\partial z}
\end{array}\right)
\\ \Rightarrow \qquad
\nabla f &= (\boldsymbol{J}^e)^{-1} \boldsymbol{\partial_\xi} f.
\end{align}
In this manner spatial derivatives of scalar functions are calculated.
To obtain the outward unit normal on $\partial \Omega_0$ we impose some restrictions on $f$. We require $f=0$ and $|\nabla f| \neq 0$ on the boundary $\partial \Omega_0$, $f<0$ in the region $\Omega_0$ and $f>0$ otherwise. Under these conditions an outward normal is $\boldsymbol{n}=\nabla f$, and since the transformation \eqref{eq:elem_iso_coord_trans} takes the boundary to the master elements boundary, $\boldsymbol{n_\xi}=\boldsymbol{\partial_\xi} f$ is an outward normal to the master element at the transformed point. Therefore
\begin{equation} \label{eq:normal}
\boldsymbol{n}=(\boldsymbol{J}^e)^{-1} \boldsymbol{n_\xi},
\end{equation}
and the outward unit normal can be obtained by normalising the transformation of a sensible choice of outward normal in the master coordinates, for example
\begin{equation*}
\boldsymbol{n_\xi}=-\hat{\boldsymbol{\xi}} \text{ on } \Gamma^{M0},
\qquad
\boldsymbol{n_\xi}=-\hat{\boldsymbol{\eta}} \text{ on } \Gamma^{M1},
\qquad
\boldsymbol{n_\xi}=\hat{\boldsymbol{\xi}}+\hat{\boldsymbol{\eta}} \text{ on } \Gamma^{M2}.
\end{equation*}
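A minimal numerical sketch of the gradient mapping and of \eqref{eq:normal}, assuming the Jacobian $\boldsymbol{J}^e$ has already been evaluated at the point of interest (the value below is a placeholder):
\begin{verbatim}
# Map master-coordinate derivatives and normals to physical (r, z) space.
import numpy as np

def physical_gradient(J, df_dxi, df_deta):
    # grad f = (J^e)^{-1} partial_xi f
    return np.linalg.solve(J, np.array([df_dxi, df_deta]))

def unit_outward_normal(J, n_xi):
    n = np.linalg.solve(J, np.asarray(n_xi, dtype=float))
    return n / np.linalg.norm(n)              # normalise after transforming

J = np.array([[0.5, 0.0], [0.1, 0.5]])        # placeholder Jacobian
print(physical_gradient(J, 1.0, 0.0), unit_outward_normal(J, [-1.0, 0.0]))
\end{verbatim}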
\subsection{Numerical Integration}
When we construct the finite element method for our problem, we shall need to be able to evaluate integrals over both the domain and its boundary. First we shall consider integrals over the domain of the form
\begin{equation*}
I=\int_{\Omega_0} f(\boldsymbol{r}) \dif r \dif z.
\end{equation*}
We notice that the integral over the entire domain is the sum of the parts over the elements, thus
\begin{equation*}
I=\sum_e\int_{\Omega^e} f(\boldsymbol{r}) \dif r \dif z.
\end{equation*}
Next we transform the integrals into the master element. For this we require the Jacobian of the transformation defined in \eqref{eq:elem_iso_coord_trans}
\begin{equation} \label{eq:jacobian}
\boldsymbol{J}^e=
\left(\begin{array}{cc}\vspace{1mm}
\dfrac{\partial r}{\partial \xi} & \dfrac{\partial z}{\partial \xi} \\
\dfrac{\partial r}{\partial \eta} & \dfrac{\partial z}{\partial \eta}
\end{array}\right)
=
\left(\begin{array}{cccccc}\vspace{1mm}
\dfrac{\partial \psi_0^M}{\partial \xi} &
\dfrac{\partial \psi_1^M}{\partial \xi} &
\dfrac{\partial \psi_2^M}{\partial \xi} &
\dfrac{\partial \psi_3^M}{\partial \xi} &
\dfrac{\partial \psi_4^M}{\partial \xi} &
\dfrac{\partial \psi_5^M}{\partial \xi} \\
\dfrac{\partial \psi_0^M}{\partial \eta} &
\dfrac{\partial \psi_1^M}{\partial \eta} &
\dfrac{\partial \psi_2^M}{\partial \eta} &
\dfrac{\partial \psi_3^M}{\partial \eta} &
\dfrac{\partial \psi_4^M}{\partial \eta} &
\dfrac{\partial \psi_5^M}{\partial \eta} \\
\end{array}\right)
\left(\begin{array}{cc}
r_0^e & z_0^e \\
r_1^e & z_1^e \\
r_2^e & z_2^e \\
r_3^e & z_3^e \\
r_4^e & z_4^e \\
r_5^e & z_5^e \\
\end{array}\right).
\end{equation}
Thus we have, using $J^e \equiv |\boldsymbol{J}^e|$,
\begin{equation} \label{eq:2d_master_integration}
\int_{\Omega_0} f(\boldsymbol{r}) \dif r \dif z=\sum_e\int_{\Omega^m} f(\boldsymbol{r}(\xi,\eta)) J^e \dif \xi \dif \eta.
\end{equation}
To evaluate the integrals over the master element, we use the quadrature set out in \cite{fem_framework_shk}, which uses nine points and exactly integrates polynomials of order five. The integrands will be polynomials up to order eight, but as the size of the elements decreases the result of the numerical approximation tends towards the true value. The scheme is
\begin{equation*}
\int_{\Omega^M}g(\xi,\eta) \dif \xi \dif \eta \sim \sum_{j=1}^9 g(\xi_j, \eta_j) W_j
\end{equation*}
where
\begin{gather*}
\begin{array}{c@{=}c@{\hspace{1cm}}c@{=}c@{\hspace{1cm}}c@{=}c}
\xi_1 & +0.00000 \, 00000 \, 00000, &
\xi_4 & +0.77459 \, 66692 \, 41483, \\
\eta_1 & -0.88729 \, 83346 \, 20741, &
\eta_2 & -0.50000 \, 00000 \, 00000, &
\eta_3 & -0.11270 \, 16653 \, 79258, \\
\eta_4 & -0.97459 \, 66692 \, 41483, &
\eta_6 & -0.80000 \, 00000 \, 00000, &
\eta_9 & +0.57459 \, 66692 \, 41483, \\
W_1 & +0.24691 \, 35802 \, 46913, &
W_2 & +0.39506 \, 17283 \, 95061, &
W_4 & +0.03478 \, 44646 \, 23227, \\
W_5 & +0.05565 \, 51433 \, 97164, &
W_7 & +0.27385 \, 75106 \, 85414, &
W_8 & +0.43817 \, 20170 \, 96662,
\end{array}
\\
\xi_{2,3}=\xi_1,
\quad
\xi_{5,6}=-\xi_{7,8,9}=\xi_4,
\quad
\eta_5=\eta_1,
\quad
\eta_7=\eta_6,
\quad
\eta_8=\eta_3,
\quad
W_3=W_1,
\quad
W_6=W_4,
\quad
W_9=W_7.
\end{gather*}
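As a quick check of an implementation of this rule, the sketch below compares the quadrature with the exact integral of a representative monomial over the master triangle; the vertices $(-1,1)$, $(-1,-1)$ and $(1,-1)$ used for the exact integral are inferred from the master interpolation functions, not stated explicitly in the text.
\begin{verbatim}
# Nine-point quadrature on the master triangle, checked on xi^2 * eta^2.
import numpy as np
import sympy as sp

g = 0.774596669241483
xi  = np.array([0.0, 0.0, 0.0, g, g, g, -g, -g, -g])
eta = np.array([-0.887298334620741, -0.5, -0.112701665379258,
                -0.974596669241483, -0.887298334620741, -0.8,
                -0.8, -0.112701665379258, 0.574596669241483])
W   = np.array([0.246913580246913, 0.395061728395061, 0.246913580246913,
                0.034784464623227, 0.055655143397164, 0.034784464623227,
                0.273857510685414, 0.438172017096662, 0.273857510685414])

approx = float(np.sum(W * xi**2 * eta**2))
x, y = sp.symbols('x y')
exact = float(sp.integrate(x**2 * y**2, (y, -1, -x), (x, -1, 1)))  # = 2/9
assert abs(approx - exact) < 1e-12
\end{verbatim}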
Next we consider a boundary integral over $\partial \Omega_0$ of the form
\begin{equation*}
I=\int_{\partial \Omega_0} f(\boldsymbol{r}) \dif s
\end{equation*}
where $s$ is the arc-length along the boundary. We write this as a sum over the elemental boundaries that are part of $\partial \Omega_0$,
\begin{equation*} \label{eq:integral_length}
I=\sum_{e,b : \Gamma^{eb}\subseteq\partial\Omega_0}\int_{\Gamma^{eb}} f(\boldsymbol{r}) \dif s.
\end{equation*}
These integrals can now be transformed onto the master boundary, using \eqref{eq:bound_iso_coord_trans} we see that
\begin{equation} \label{eq:dsdomega}
\frac{\dif s}{\dif \omega}=\sqrt{\left(\frac{\dif r}{\dif \omega}\right)^2+\left(\frac{\dif z}{\dif \omega}\right)^2}
=\sqrt{\left(\sum_{\textfrak{i}} r_\textfrak{i}^{eb}\frac{\dif \psi_\textfrak{i}^{eb}}{\dif \omega}\right)^2+\left(\sum_{\textfrak{i}} z_\textfrak{i}^{eb}\frac{\dif \psi_\textfrak{i}^{eb}}{\dif \omega}\right)^2}
\end{equation}
thus
\begin{equation} \label{eq:1d_master_integration}
\int_{\partial \Omega_0} f(\boldsymbol{r}) \dif s
=
\sum_{e,b : \Gamma^{eb}\subseteq\partial\Omega_0}\int_{\Omega^{B}} f(\boldsymbol{r}(\omega)) \frac{\dif s}{\dif \omega}\dif \omega.
\end{equation}
To evaluate the integrals over the master boundary, the standard eight point Gaussian quadrature is used. This is exact for polynomials of order fifteen and will converge for any of the integrals we consider as the element size decreases. In fact, if $1/x$ and $\sqrt{x}$ can be accurately approximated by quadratic Taylor expansions for any given integral, the integrals will be exact. The scheme is
\begin{equation*}
\int_{\Omega^{B}} g(\omega) \dif \omega \sim \sum_{j=1}^8 g(\omega_j) W_j
\end{equation*}
where
\begin{gather*}
\begin{array}{c@{=}c@{\hspace{1cm}}c@{=}c}
\omega_1 & 0.18343 \, 46424 \, 95649 \, 8, &
\omega_3 & 0.52553 \, 24099 \, 16329 \, 0, \\
\omega_5 & 0.79666 \, 64774 \, 13626 \, 7, &
\omega_7 & 0.96028 \, 98564 \, 97536 \, 3, \\
W_1 & 0.36268 \, 37833 \, 78362 \, 0, &
W_3 & 0.31370 \, 66458 \, 77887 \, 3, \\
W_5 & 0.22238 \, 10344 \, 53374 \, 5, &
W_7 & 0.10122 \, 85362 \, 90376 \, 3 \\
\end{array}
\\
\omega_2=-\omega_1,
\quad
\omega_4=-\omega_3,
\quad
\omega_6=-\omega_5,
\quad
\omega_8=-\omega_7,
\\
W_2=W_1,
\quad
W_4=W_3,
\quad
W_6=W_5,
\quad
W_8=W_7.
\end{gather*}
\subsection{Interpolation Functions}\label{ss:Interp}
The numerical simulations are performed using the finite element method, described in \cite{intro_fem,fem_framework_shk}. A finite set of nodes is chosen at positions $\boldsymbol{r}_i(t)=r_i(t)\hat{\boldsymbol{r}}+z_i(t)\hat{\boldsymbol{z}} \in \overline{\Omega_0}$, arranged into triangles with curved sides, one node at each vertex and one on each side, as shown in figure \ref{f:2d_element}. These triangles are known as quadratic triangular elements, the domain of the $e$th element being denoted $\Omega^e$. We define continuous interpolation functions $\psi_i(\boldsymbol{r},t)$ such that $\psi_i(\boldsymbol{r}_j(t),t)=\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta, and $\sum_i \psi_i=1$. Note that this definition does not uniquely specify the interpolation functions.
\begin{figure}[t]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Element.tex}
\caption{An example element in the mesh. This is the $e$th element with domain $\Omega^e$ and nodes at the numbered locations.}
\label{f:2d_element}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Master.tex}
\caption{The master element in master coordinates $\xi$ and $\eta$, with domain $\Omega^M$ and nodes at the numbered locations.}
\label{f:2d_master}
\end{subfigure}
\\
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Element_Boundary.tex}
\caption{An example boundary element in the mesh. This is the $b$th boundary of the $e$th element with domain $\Omega^{eb}$ and nodes at the numbered locations.}
\label{f:1d_element}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Master_Boundary.tex}
\caption{The master boundary element in master coordinate $\omega$, with domain $\Omega^{B}$ and nodes at the numbered locations.}
\label{f:1d_master}
\end{subfigure}
\end{tabular}
\caption{}
\end{figure}
The bulk variables $u$, $v$ and $p$ are interpolated using their values at all the nodes, which is the scheme used in \cite{stable_darcy}. Using the same notation for the approximations as for the true solutions, we have
\begin{subequations}\label{seq:function_discrete}\begin{align}
u(\boldsymbol{r},t)&=\sum_i u_i(t) \psi_i(\boldsymbol{r},t), \\
v(\boldsymbol{r},t)&=\sum_i v_i(t) \psi_i(\boldsymbol{r},t), \\
p(\boldsymbol{r},t)&=\sum_i p_i(t) \psi_i(\boldsymbol{r},t).
\end{align}\end{subequations}
Note that $u(\boldsymbol{r}_i(t),t)=u_i(t)$, etc. thus the new variables are the values of the unknown functions at the nodes.
To obtain unique interpolation functions, we first define global node numbers to be the italicised indices used so far, and local node numbers over the $e$th element, which take the values 0 to 5 as shown in figure \ref{f:2d_element} and will be denoted by Roman indices together with a superscript $e$. Local node numbers only exist for the nodes that are part of the element, and there is an arbitrary choice of three configurations of the node numbers, corresponding to rotating the definition of the numbering heuristic in figure \ref{f:2d_element}. The global node number $i$ is a function of the element number $e$ and the local node number $\mathrm{i}$; such a function is represented as a connectivity matrix $\boldsymbol{M}$, such that $i(e,\mathrm{i})=M^e_\mathrm{i}$. Local interpolation functions are defined as $\psi_\mathrm{i}^e(\boldsymbol{r},t)=\psi_i(\boldsymbol{r},t)\:\forall \boldsymbol{r} \in \Omega^e(t)$.
Next, we define the master element to have domain $\Omega^M$ in a master coordinate system $(\xi,\eta)$. Its local node numbers are defined in figure \ref{f:2d_master} with coordinates $(\xi_\mathrm{i},\eta_\mathrm{i})$, and its sides are straight. The master interpolation functions $\psi^M_{\mathrm{i}}(\xi,\eta)$ are uniquely defined by the condition $\psi^M_\mathrm{i}(\xi_\mathrm{j},\eta_\mathrm{j})=\delta_{\mathrm{i}\mathrm{j}}$ and the requirement that they be quadratics in the master coordinates, explicitly
\begin{equation}
\begin{array}{r@{=}l@{\hspace{1cm}}r@{=}l@{\hspace{1cm}}r@{=}l} \vspace{1mm}
\psi_0^M & \frac{1}{2}\eta(\eta+1) ,&
\psi_1^M & \frac{1}{2}(\xi+\eta)(\xi+\eta+1) ,&
\psi_2^M & \frac{1}{2}\xi(\xi+1) ,\\
\psi_3^M & (\xi+1)(\eta+1) ,&
\psi_4^M & -(\xi+\eta)(\eta+1) ,&
\psi_5^M & -(\xi+\eta)(\xi+1) .
\end{array}
\end{equation}
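The defining properties $\psi^M_\mathrm{i}(\xi_\mathrm{j},\eta_\mathrm{j})=\delta_{\mathrm{i}\mathrm{j}}$ and $\sum_\mathrm{i}\psi^M_\mathrm{i}=1$ can be verified numerically; the node coordinates in the sketch below are inferred from the functions themselves:
\begin{verbatim}
# Check the Kronecker-delta and partition-of-unity properties of the
# master shape functions; node coordinates inferred from the functions.
import numpy as np

def psi(xi, eta):
    return np.array([0.5 * eta * (eta + 1),
                     0.5 * (xi + eta) * (xi + eta + 1),
                     0.5 * xi * (xi + 1),
                     (xi + 1) * (eta + 1),
                     -(xi + eta) * (eta + 1),
                     -(xi + eta) * (xi + 1)])

nodes = [(-1, 1), (-1, -1), (1, -1), (0, 0), (-1, 0), (0, -1)]
M = np.array([psi(x, e) for x, e in nodes])
assert np.allclose(M, np.eye(6))               # psi_i(node_j) = delta_ij
assert np.isclose(psi(-0.3, -0.4).sum(), 1.0)  # partition of unity
\end{verbatim}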
We define an isoparametric coordinate transformation between $\Omega^M$ and $\Omega^e$
\begin{equation} \label{eq:elem_iso_coord_trans}
\boldsymbol{r}(\xi,\eta;e)=\sum_{\mathrm{i}} \boldsymbol{r}_\mathrm{i}^e(t) \, \psi^M_\mathrm{i}(\xi,\eta),
\end{equation}
which uniquely specifies the curve of the elemental boundaries, and thereby $\Omega^e$. Finally the interpolation functions are uniquely defined by
\begin{equation} \label{eq:local_interp_master_def}
\psi^e_\mathrm{i}(\boldsymbol{r}(\xi,\eta;e))=\psi^M_\mathrm{i}(\xi,\eta),
\end{equation}
and $\psi_i=0$ in any element that does not contain node $i$.
Boundary elements and interpolation functions are also needed. The elemental boundaries are identified by a parameter $b$: the boundary from node $0$ anticlockwise to node $1$ corresponds to $b=0$; from $1$ to $2$ has $b=1$; from $2$ to $0$ has $b=2$. The domain of the boundary is denoted $\Gamma^{eb}$ in an element and $\Gamma^{Mb}$ in the master element, which are illustrated in figures \ref{f:2d_element} and \ref{f:2d_master} respectively. The master boundary element is defined in the master coordinate $\omega$ to have domain $\Omega^{B}$, and is shown in figure \ref{f:1d_master}. Its boundary node numbers as shown are denoted by a fraktur index $\textfrak{i}$ and a superscript $B$. A linear transformation can be defined between any of the master elements three boundaries onto the master boundary element which means that $\textfrak{i}=\textfrak{i}(b,\mathrm{i})$. Under any of these transformations the interpolation functions become what we shall call the master boundary interpolation functions
\begin{equation}
\begin{array}{r@{=}l@{\hspace{1cm}}r@{=}l@{\hspace{1cm}}r@{=}l} \vspace{1mm}
\psi_0^{B}(\omega) & \frac{1}{2} \omega (\omega-1) ,&
\psi_1^{B}(\omega) & (1+\omega)(1-\omega) ,&
\psi_2^{B}(\omega) & \frac{1}{2} \omega (\omega+1) .
\end{array}
\end{equation}
Under the coordinate transformation \eqref{eq:elem_iso_coord_trans}, the chosen boundary of the master element transforms into a boundary of the element $e$, so we define the local boundary node number to be denoted with an index $\textfrak{i}$ and superscript indices $e$ and $b$. Since the master boundary interpolation functions are only master interpolation functions for a restricted domain, the boundary interpolation functions are defined as
\begin{align}
\psi^{eb}_\textfrak{i}(\boldsymbol{r}(\omega;e,b))&=\psi^{B}_\textfrak{i}(\omega), \label{eq:boundary_interp_master_def}
\intertext{where}
\boldsymbol{r}(\omega;e,b)&=\sum_{\textfrak{i}} \boldsymbol{r}_\textfrak{i}^{eb}(t) \, \psi^{B}_\textfrak{i}(\omega). \label{eq:bound_iso_coord_trans}
\end{align}
The approximated solutions can therefore be expressed over the elemental boundaries as
\begin{subequations}\label{seq:function_discrete_boundary}\begin{align}
u(\boldsymbol{r},t)&=\sum_{\textfrak{i}} u_\textfrak{i}^{eb}(t) \psi_\textfrak{i}^{eb}(\boldsymbol{r},t), \\
v(\boldsymbol{r},t)&=\sum_{\textfrak{i}} v_\textfrak{i}^{eb}(t) \psi_\textfrak{i}^{eb}(\boldsymbol{r},t), \\
p(\boldsymbol{r},t)&=\sum_{\textfrak{i}} p_\textfrak{i}^{eb}(t) \psi_\textfrak{i}^{eb}(\boldsymbol{r},t),
\end{align}\end{subequations}
for appropriate $e$ and $b$.
Schemes that have the same degree of interpolation for pressure and velocity are used to approximate solutions to Darcy's equation elsewhere, for example \cite{stable_darcy}, which we use to justify the choice of interpolation outlined above. Schemes in which the interpolation of pressure is one degree higher than that for velocity can also be used, for example that in \cite{roberts_FEM}. The most convenient scheme of this nature for our purposes would be to interpolate velocity linearly using only the corner nodes in each element. However, when this was tried the discrete form of the bulk equations broke down at the corner nodes of each element, so it has not been used.
\subsection{Measurements}
In our results we will discuss the volume flux into the wetted region, the total volume of the wetted region and the contact angle variation. These are calculated as follows. The volume influx is
\begin{equation}
F=\int_{0}^{2\pi} \dif\phi \int_{\Gamma_2} r \dif l \, \boldsymbol{u} \cdot \left(-\hat{\boldsymbol{n}}\right)=-2\pi \int_{0}^{1} v r \dif r.
\end{equation}
The total volume of the wetted region is
\begin{equation}
V=\int_{0}^{2\pi} \dif \phi \int_{\Omega_0} r \dif r \dif z = 2\pi \int_{\Omega_0} r \dif r \dif z.
\end{equation}
The contact angle of interest is CA2 (from the introduction). This is the angle subtended at $C_1$ between $\Gamma_0$ and $\Gamma_1$, and will be denoted $\theta_1$. Using the notation from the previous section, where $i$ represents the node number along the wetting front, this is calculated by
\begin{equation}
\theta_1=\frac{1}{5}\sum_{i=1}^{5} \arctan\left(\frac{z_{i}-z_{i-1}}{r_{i}-r_{i-1}}\right).
\end{equation}
Due to the curvature of the wetting front, this will always produce a slight underestimate, but this can be taken into consideration when evaluating the results. Also, the variation of the mesh at each time step will cause the approximation to fluctuate. We can ignore this since it is an artefact of our method of extracting data from the numerical scheme and not of the scheme itself.
\subsection{Mesh Design and the Method of Spines} \label{ss:Spines}
From section \ref{ss:Interp} we are left with five unknowns for every node at any given instant of time: its position and the values of the functions at the node. The method of constructing a mesh of nodes shall be discussed here, first describing how to construct the elements from spines, then how to position the spines, and finally how to refine around a point.
\subsubsection{Constructing Elements from Spines} \label{sss:spines_to_elements}
\begin{figure}[t]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Spines.tex}
\caption{Example spines for mesh construction. In a simulation the spines would be much more densely packed to produce a high-resolution solution. The circles show the location of the corner points $C_0$, $C_1$, $C_2$, and $C_3$.}
\label{f:spines}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Spines_Block.tex}
\caption{A block of elements between two spines, the standard way of generating elements. The circles show the location of the nodes that are part of the elements depicted.}
\label{f:spines_block}
\end{subfigure}
\\
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Spines_Wedge_Up.tex}
\caption{An increasing wedge between two spines, for when the next spine has more intervals to fill than the previous. The circles show the location of the nodes that are part of the element depicted.}
\label{f:spines_wedge_up}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Spines_Wedge_Down.tex}
\caption{A decreasing wedge between two spines, for when the next spine has fewer intervals to fill than the previous. The circles show the location of the nodes that are part of the element depicted.}
\label{f:spines_wedge_down}
\end{subfigure}
\end{tabular}
\caption{}
\end{figure}
Spines are curves which are used to generate elements; the elements are positioned between the spines such that the base of each triangle lies along one spine and the opposite vertex lies on an adjacent spine. We first construct the spines and then position elements between them; the spines to be used are shown graphically in figure \ref{f:spines}. These spines are a good choice because they are centred around $C_1$, which will allow us to refine the mesh around this point; they tend towards straight lines at $\Gamma_3$, which makes aligning the elements with the axis trivial; and they are approximately perpendicular to the wetting front if it is a simple arc, so a significant degree of distortion will have to occur for the wetting front to become parallel to them and the mesh unusable. They are level curves of the bipolar coordinate system, specifically of the coordinate
\begin{equation} \label{eq:spines_chi}
\chi=\ln\left(\frac{\sqrt{(r+r_f)^2+z^2}} {\sqrt{(r-r_f)^2+z^2}}\right)
\end{equation}
where $r_f$ is the radial coordinate of the focus, in our case $C_1$. The spines satisfy $\chi=\chi_n$, where $n$ is the index of the spine, the numbering starting at $C_1$ and increasing for decreasing $r$. This causes the spines to be circles with centre $(R_n,0)$ and radius $\rho_n$, where
\begin{equation} \label{eq:spines_Rrho}
R_n=\frac{r_f}{\tanh\left(\chi_n\right)}, \qquad \rho_n=\frac{r_f}{\sinh\left(\chi_n\right)}.
\end{equation}
If $(r,z)$ is the point of intersection of the spine with the wetting front $\Gamma_0$, the angle subtended along the spine is
\begin{equation} \label{eq:spines_theta}
\theta_n=\arctan\left(\frac{-z}{r_f-r}\right)
\end{equation}
Let $r_n$ be the point of intersection of the spine with the $r$-axis. It is $r_n$ that we calculate first to position the spine; this process is described in the next section. For this section it will suffice to imagine that the $r_n$ are evenly distributed along $0 \leq r \leq r_f$. From $r_n$, we can use \eqref{eq:spines_chi} to calculate $\chi_n$, and then \eqref{eq:spines_Rrho} and \eqref{eq:spines_theta} to calculate $R_n$, $\rho_n$ and $\theta_n$.
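In code, positioning a single spine from its axis intersection is a direct application of these formulae; a minimal sketch (at $z=0$ and $r_n<r_f$, \eqref{eq:spines_chi} reduces to the logarithm below):
\begin{verbatim}
# Spine parameters (chi_n, centre R_n, radius rho_n) from the axis
# intersection r_n and the focus r_f.
import numpy as np

def spine_from_axis_point(r_n, r_f):
    chi = np.log((r_f + r_n) / (r_f - r_n))  # eq. (spines_chi) at z = 0
    R = r_f / np.tanh(chi)
    rho = r_f / np.sinh(chi)
    return chi, R, rho

chi, R, rho = spine_from_axis_point(0.6, 1.2)
print(chi, R, rho, R - rho)   # R - rho = r_f tanh(chi/2) recovers r_n
\end{verbatim}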
When the spines are generated, we shall ensure that $r_{n-1}-r_n \approx r_{n}-r_{n+1}$, since it is important that small elements and large elements are not too close to each other for solution accuracy. Define for the spines that have two neighbours
\begin{equation} \label{eq:spines_h}
h_n=(r_{n-1}-r_{n+1})/2,
\end{equation}
this is the mean distance from this spine to its two neighbours. For the spines at $C_1$ and $C_3$, $h_n$ is the distance to the single adjacent spine. The elements generated must not be overly distorted: an element that is long and thin will induce error, so we should divide the spine up into intervals of length approximately $h_n$, each interval being an elemental boundary. Define
\begin{equation} \label{eq:spines_J}
J_n=\lceil \rho_n \theta_n/h_n\rceil,
\end{equation}
this shall be the number of intervals the spine is divided into, each interval being of equal length as measured along the arc of the spine.
\paragraph{Element Generation}To generate elements, we run between two spines from the $r$-axis to the wetting front, generating elements that span between an interval on one side and the point between two intervals on the other, as shown in figures \ref{f:spines_block}, \ref{f:spines_wedge_up} and \ref{f:spines_wedge_down}. The usual method is to create a block that advances along one interval for each spine, as shown in figure \ref{f:spines_block}. The four corner nodes are placed at the ends of the intervals, and the remaining nodes are placed at the midpoints of the sides they are on. However, this method will only be able to generate all the elements if $J_n=J_{n+1}$. If $J_n<J_{n+1}$ then there will be left-over intervals on the next spine, which can be filled by single elements known as \textit{increasing wedges} as shown in figure \ref{f:spines_wedge_up}. If $J_n>J_{n+1}$ then there will be left-over intervals on the current spine, which can be filled by single elements known as \textit{decreasing wedges} as shown in figure \ref{f:spines_wedge_down}. These extra elements should be spread out evenly along the spine to minimise the amount of distortion in the elements; for example, in the current implementation, if there are two elements to be added these will be added at $1/4$ and $3/4$ of the way along the spine. This is achieved by setting a counter to $0.5$ at the start of a run between two spines. Each time elements are going to be added, the counter is increased by $|J_n-J_{n+1}|/\max\{J_n,J_{n+1}\}$; if the counter exceeds $1$ then a wedge is added next and the counter decreased by $1$, otherwise a block is added.
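The counter logic just described fits in a few lines; a sketch with hypothetical \texttt{add\_block} and \texttt{add\_wedge} callbacks (one possible realisation of the even spreading):
\begin{verbatim}
# Interleave |J_n - J_{n+1}| wedges evenly among the blocks between
# two spines, using the counter method described above.
def run_between_spines(J_n, J_np1, add_block, add_wedge):
    counter, step = 0.5, abs(J_n - J_np1) / max(J_n, J_np1)
    increasing = J_np1 > J_n
    for _ in range(max(J_n, J_np1)):
        counter += step
        if counter > 1.0:
            counter -= 1.0
            add_wedge(increasing)
        else:
            add_block()

run_between_spines(6, 8, lambda: print("block"),
                   lambda inc: print("wedge", "up" if inc else "down"))
\end{verbatim}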
It should be noted that each spine must carry information about its $\chi_n$ and its endpoint at the wetting front. That is, when programming this algorithm the spines should be stored in such a way that, knowing the value $n$, the values $\chi_n$ and the coordinates of the endpoint of the spine can be accessed. In the above discussion it has not been mentioned how the elements at the wetting front will be constructed. The centre points of the elemental boundaries that lie along $\Gamma_0$ must be on $\Gamma_0$, and not at the midpoint of the endpoints of the spines, otherwise the solution will be inaccurate since we will not have approximated the domain as well as we can. Thus we include `pseudo-spines' that will be placed in between each pair of spines such that $r_{n+(1/2)}=(r_n+r_{n+1})/2$ is the point of intersection of the pseudo-spine with the $r$-axis. These will be used purely to hold their point of intersection with the wetting front, such that the last element generated between every pair of spines can use this point and have its boundary along the wetting front.
To find the point of intersection of a spine with the wetting front, we use the notation that the wetting front is given parametrically by $r=\tilde{r}(s)$, $z=\tilde{z}(s)$, where $s=0$ is $C_1$ and $0<s<s_{\textrm{max}}$ is the wetting front (this is the form in which the initial conditions are given). This means that $\tilde{r}(0)=r_f$ and $\tilde{z}(0)=0$. The point of intersection will occur at the root of the function
\begin{equation}
f(s)=\ln\left(\frac{\sqrt{(\tilde{r}(s)+r_f)^2+\tilde{z}(s)^2}} {\sqrt{(\tilde{r}(s)-r_f)^2+\tilde{z}(s)^2}}\right)-\chi_n
\end{equation}
which satisfies $0\leq \tilde{r}(s) \leq r_f$, $\tilde{z}(s) \leq 0$, and we must have that there is only one solution to be able to generate the mesh. We solve this equation by using the Newton-Raphson method, where an initial guess $s_0$ is produced (the arbitrary nature of this guess is why we require the solution to be unique), and refinements on this guess are produced by
\begin{equation}
s_{m+1}=s_m - \frac{f(s_m)}{f'(s_m)}
\end{equation}
where the index $m$ numbers our attempts at finding the solutions. The exact solution is obtained as $m \rightarrow \infty$ (assuming that it does indeed converge), or numerically at the point when $s_m=s_{m+1}$. This method can be used as stated for the first instant of time, since the wetting front is the initial condition and is provided in this form. For later instants of time the wetting front has been time stepped from the previous one, and will be a sequence of elemental boundaries. On each elemental boundary the coordinates can be obtained as $\tilde{r}(\omega;e,b)\hat{\boldsymbol{r}}+\tilde{z}(\omega;e,b)\hat{\boldsymbol{z}}$ for $-1\leq\omega\leq1$, thus we simply use these coordinates to produce $f(\omega)$ and solve in exactly the same way, except that now the elemental boundary will also have to be stepped onto the adjacent one when $\omega$ exceeds its bounds. It is worth noting that the first and last spines should be included as special cases, since not only is it easy to overstep the end point of an elemental boundary and then have no adjacent element to step into, but the value of $\chi$ is divergent at $C_1$ which cannot be handled numerically.
\subsubsection{Positioning Spines}
To generate the spines we require the values of $r_n$, which control the size of the elements produced. The spines are subject to several constraints. First, the spine separation ($r_{n}-r_{n+1}$) should not change suddenly, since this will give distorted elements of different sizes next to each other. Thus we shall enforce that
\begin{equation} \label{eq:spine_max_rate}
\frac{1}{M_{mh}}\geq\frac{r_{n}-r_{n+1}}{r_{n-1}-r_{n}}\geq M_{mh}
\end{equation}
where $M_{mh}>1$ is the maximal rate of change of the spine separation. Secondly, there must be spines that intersect $C_1$, $C_2$ and $C_3$, to enable us to have nodes at these points and fill the domain with elements; this shall be reflected in our algorithm. We shall also require a minimum spine density of $I_{mh}$ per unit length, thus
\begin{equation} \label{eq:spine_max_sep}
r_{n}-r_{n+1} \leq 1/I_{mh},
\end{equation}
to ensure a decent level of mesh resolution and solution accuracy throughout. We denote the smallest permitted separation between spines by $S_{t}$,
\begin{equation} \label{eq:spine_min_sep}
r_{n}-r_{n+1} \geq S_{t}.
\end{equation}
The value of $S_t$ is calculated at each time step to account for the changing shape of the wetted region and get the required resolution. Let $-H$ be the $z$ coordinate of $C_0$, $S_{t1}=(r_f-1)/20$ and $S_{t2}=H/100$. We define $S_t=\min\{S_{t1},S_{t2},S_{mh},1/I_{mh}\}$ where $S_{mh}$ is a parameter dictating the maximal value of $S_t$ allowed.
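In code this calculation is a direct transcription of the definitions above (a Python sketch; the function name is illustrative):
\begin{verbatim}
def smallest_separation(r_f, H, S_mh, I_mh):
    """S_t, recomputed each time step from the current shape of the
    wetted region (r_f: front radius on the r-axis; -H: z of C_0)."""
    S_t1 = (r_f - 1.0) / 20.0
    S_t2 = H / 100.0
    return min(S_t1, S_t2, S_mh, 1.0 / I_mh)
\end{verbatim}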
The spines must not be allowed to separate so far that they are further apart than they are long; we therefore define $C_{mh}$ to be the number of times longer a spine must be than its separation from the next, so that
\begin{equation} \label{eq:spine_close}
r_{n}-r_{n+1} \leq \frac{r_n \theta_n}{C_{mh}}.
\end{equation}
A higher level of resolution is required at $C_1$ than at any other point, due to the multivalued and singular solutions there; thus we start at this point with the smallest elements in the mesh and increase the separation as we move away. These problems also occur at $C_2$, but $C_1$ is on the wetting front, which is where we require the highest level of accuracy for the time stepping.
The first spine to be generated is that at $C_1$; this is of zero length but is required to generate the elements that span from it into the domain (which shall be increasing wedges), thus $r_0=r_f$. The separation between spines $0$ and $1$ should be the smallest in the mesh, so we govern it with the parameter $S_{mh}$, setting $r_1=r_0-S_{mh}$, which allows us to control how dense the mesh becomes in this region. Note that if $S_{mh}\geq1/I_{mh}$ then $r_1=r_0-(1/I_{mh})$ instead. The separation between spines should now increase at a steady rate; we define $R_{mh}$ to be this rate, such that $R_{mh} \leq M_{mh}$ and
\begin{equation} \label{eq:spine_rapid_sep}
r_{n+1}=r_n-(r_{n-1}-r_n) R_{mh}.
\end{equation}
If this causes the new spine to break any of the above inequalities, then the value of $r_{n+1}$ should be altered to satisfy the respective equality. These are applied in the order \eqref{eq:spine_max_sep}, \eqref{eq:spine_close}, \eqref{eq:spine_max_rate}, then \eqref{eq:spine_min_sep}.
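The generation step and the order of application of the constraints can be sketched as follows (Python; \texttt{theta} stands for the angle parameter $\theta_n$ of the current spine, and all names are illustrative):
\begin{verbatim}
def next_spine(r_prev2, r_prev, theta, S_t, I_mh, C_mh, M_mh, R_mh):
    """Propose r_{n+1} from (eq:spine_rapid_sep), then clip it to the
    respective equality, in the order: max separation, closeness,
    max rate of change, min separation."""
    sep = r_prev2 - r_prev                     # current separation
    r_next = r_prev - sep * R_mh               # (eq:spine_rapid_sep)
    r_next = max(r_next, r_prev - 1.0 / I_mh)  # (eq:spine_max_sep)
    r_next = max(r_next, r_prev - r_prev * theta / C_mh)  # (eq:spine_close)
    r_next = max(r_next, r_prev - sep * M_mh)  # (eq:spine_max_rate), upper
    r_next = min(r_next, r_prev - sep / M_mh)  # (eq:spine_max_rate), lower
    r_next = min(r_next, r_prev - S_t)         # (eq:spine_min_sep)
    return r_next
\end{verbatim}
Note that larger $r_{n+1}$ corresponds to a smaller separation, so upper bounds on the separation become \texttt{max} clips and lower bounds become \texttt{min} clips.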
We next consider how to ensure that the spines align with the point $C_2$, such that one spine passes through this point. Let the distance between the most recently generated spine and $C_2$ be $D$, thus $D=r_n-1$, and the most recent spine separation be $L$, thus $L=r_{n-1}-r_n$. If we are to traverse the distance to $C_2$ in $q$ or $q+1$ equally spaced spines then we require that
\begin{equation} \label{eq:spine_next_mode}
qL \leq D \leq (q+1)L.
\end{equation}
The minimal and maximal distances, $d_-$ and $d_+$, that can be traversed in $q$ spines are, under the constraint \eqref{eq:spine_max_rate}, given by the geometric progression formulae
\begin{align}
d_-&=\frac{L}{M_{mh}} \frac{M_{mh}^{-q}-1}{M_{mh}^{-1}-1}, & d_+&=L M_{mh} \frac{M_{mh}^{q}-1}{M_{mh}-1}.
\end{align}
Therefore we require that
\begin{gather}
d_- \leq qL \leq D \leq (q+1)L \leq d_+ \notag\\
\Rightarrow \qquad
\frac{M_{mh}^{-q}-1}{1-M_{mh}} \leq q
\qquad\qquad
M_{mh} \frac{M_{mh}^{q}-1}{M_{mh}-1} \geq q+1
\qquad
\end{gather}
and the value of $q$ can be calculated prior to generating the spines by starting with $q=1$ and increasing it to the next integer while either inequality fails to hold.
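A direct transcription of this search (a Python sketch; note that $q$ depends only on $M_{mh}$):
\begin{verbatim}
def compute_q(M_mh):
    """Smallest q satisfying both inequalities above (M_mh > 1)."""
    q = 1
    while ((M_mh**(-q) - 1) / (1 - M_mh) > q
           or M_mh * (M_mh**q - 1) / (M_mh - 1) < q + 1):
        q += 1
    return q

# For M_mh = 1.4, as used in the example mesh, this returns q = 2.
\end{verbatim}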
The spines are generated using \eqref{eq:spine_rapid_sep} until \eqref{eq:spine_next_mode} is satisfied. At this point the distance $D$ is divided up into $N$ equal segments that minimise the jump in spine separation, resulting in
\begin{equation} \label{eq:spine_aim_step}
r_{n+1}=r_{n} - \frac{D}{N}.
\end{equation}
The value of $N$ is chosen algorithmically by starting with $N=1$ and increasing $N$ to the next integer value while it is true that
\begin{equation} \label{eq:spine_aim_number}
\left|\ln\left( \frac{D/(N+1)}{r_{n-1}-r_{n}} \right)\right| < \left|\ln\left( \frac{D/N}{r_{n-1}-r_{n}} \right)\right|
\end{equation}
so that the change in step size is minimised. If the value of $r_{n+1}$ causes \eqref{eq:spine_max_rate} to be broken, then it is altered to satisfy the respective equality. Applying this algorithm for each spine generated up to the point $C_2$ produces spines whose separation changes at the maximal rate allowed by \eqref{eq:spine_max_rate} up to a point, and is then constant. The conditions \eqref{eq:spine_max_sep}--\eqref{eq:spine_close} are not applied in this stage.
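This selection of $N$ also transcribes directly into code (Python sketch; \texttt{last\_sep} is $r_{n-1}-r_n$):
\begin{verbatim}
import math

def choose_N(D, last_sep):
    """Start with N = 1 and increase it while N + 1 gives a smaller
    jump in step size, measured by |ln((D/N) / last_sep)|."""
    N = 1
    while (abs(math.log((D / (N + 1)) / last_sep))
           < abs(math.log((D / N) / last_sep))):
        N += 1
    return N
\end{verbatim}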
To generate spines in the region between $C_2$ and $C_3$ the same method is used but with $D=r_n$. That is, \eqref{eq:spine_rapid_sep} is used \{applying \eqref{eq:spine_max_sep}, \eqref{eq:spine_close}, \eqref{eq:spine_max_rate}, then \eqref{eq:spine_min_sep}\} until \eqref{eq:spine_next_mode} is satisfied, and then \eqref{eq:spine_aim_step} is used with $N$ from \eqref{eq:spine_aim_number}, applying \eqref{eq:spine_max_rate}.
An example mesh produced with this method is depicted in figure \ref{f:example_mesh}. It illustrates how we achieve a uniform mesh that has a spine intersecting with $C_2$ and steadily refines around $C_1$.
\begin{figure}[tbp]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Mesh/NoZoom.eps}
\caption{}
\label{f:example_mesh_all}
\end{subfigure}
\\
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Mesh/Focus}
\caption{}
\label{f:example_mesh_focus}
\end{subfigure}
\end{tabular}
\caption{Example mesh generated without refinement at $C_2$. The parameters of the mesh are $R(t)=1$, $\tilde{r}(s)=2\cos(s)$, $\tilde{z}(s)=-2\sin(s)$, $I_{mh}=20$, $S_{mh}=10^{-5}$, $C_{mh}=5$, $M_{mh}=1.4$, and $R_{mh}=1.15$.}
\label{f:example_mesh}
\end{figure}
\subsubsection{Mesh Refinement}
The solution is singular not only around $C_1$ but also around $C_2$. The singularity around $C_2$ is less important, since it is not on the wetting front where the solution is required to the greatest accuracy, and thus we refine around this point as a secondary consideration. The refinement has the element sizes changing rapidly, which decreases the accuracy of the solution; however, it is more accurate than an unrefined mesh around the singularity, which has been found to produce a poor solution.
\begin{figure}[t]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Spines_Zoom.tex}
\caption{The method of local refinement of a block.}
\label{f:spines_zoom}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Spines_Zoom_Centre.tex}
\caption{The smallest elements in the refinement.}
\label{f:spines_zoom_centre}
\end{subfigure}
\end{tabular}
\caption{Local refinement of the mesh around $C_2$.}
\end{figure}
The refinement is performed by the generation of an alternative block. Instead of generating the block using the method depicted in figure \ref{f:spines_block}, we use the method depicted in figure \ref{f:spines_zoom}. The block depicted is for immediately to the left of $C_2$, adjacent to the $r$-axis; the block to the right of $C_2$ uses a mirrored version of the method discussed.
The three quadrants not containing $C_2$ are filled with four elements as depicted, the extra nodes being the midpoints of the sides on which they lie. This leaves a remaining block with one quarter the area of the original, which can then be divided up in exactly the same way as the first. This process is repeated until a predefined point has been reached; let us define this to be when the length of the side of the remaining block along the $r$-axis is less than $Z_{mh}$. When this condition is reached the remaining block is split into two elements (see figure \ref{f:spines_zoom_centre}), choosing to have one element containing $C_2$, since the elements containing the singularity induce error and we want the total area of such elements to be minimal. It is important that both blocks are refined the same number of times.
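Since each pass quarters the area of the remaining block, its side along the $r$-axis halves at every level; the stopping criterion can thus be sketched as follows (Python; both blocks would then be refined to the same level returned here):
\begin{verbatim}
def refinement_levels(block_width, Z_mh):
    """Number of quartering passes until the side of the remaining
    block along the r-axis drops below Z_mh."""
    levels = 0
    side = block_width
    while side >= Z_mh:
        side /= 2.0
        levels += 1
    return levels
\end{verbatim}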
The refinement of the mesh in figure \ref{f:example_mesh} is depicted in figure \ref{f:example_mesh_zoom}, illustrating the method.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{Figures/Mesh/Zoom}
\caption{Example mesh generated with refinement at $C_2$, the plot only showing the region around this point. The parameter governing the refinement is $Z_{mh}=10^{-6}$, all other parameters being the same as in figure \ref{f:example_mesh}.}
\label{f:example_mesh_zoom}
\end{figure}
\subsection{Summary of the Spatial Method}
First the spines are generated from the position of the wetting front, either from the initial condition $r=\tilde{r}(s)$, $z=\tilde{z}(s)$ or from the set of elemental boundaries obtained by time stepping the wetting front. The spines are constructed from the values of $r_n$, where $r_0=r_f$, $r_1=r_0-S_{mh}$, and then the algorithm in \eqref{eq:spine_rapid_sep} is used, applying the constraints \eqref{eq:spine_max_sep}, \eqref{eq:spine_close}, \eqref{eq:spine_max_rate}, then \eqref{eq:spine_min_sep}. This proceeds until the condition \eqref{eq:spine_next_mode} is reached with $D=r_n-1$, at which point \eqref{eq:spine_aim_step} is used with $N$ from \eqref{eq:spine_aim_number}, applying the constraint \eqref{eq:spine_max_rate}. The constant parameter for each spine ($\chi_n$) can then be found from \eqref{eq:spines_chi} with $(r,z)=(r_n,0)$, and from this all other parameters of the spine using \eqref{eq:spines_Rrho}, \eqref{eq:spines_theta}, \eqref{eq:spines_h} and \eqref{eq:spines_J}. The elements are then produced algorithmically between the spines using the method discussed in \S\ref{sss:spines_to_elements}.
From the above we have the mesh of nodes over which to calculate the solution; this is done by assigning each node three equations for its values. For the bulk nodes these equations are \eqref{eq:discrete_continuity} and \eqref{seq:discrete_darcy}, where the value of $i$ is the global node number of the considered node. For a node at a boundary that has the pressure condition \eqref{eq:bc_gamma0} and \eqref{eq:bc_gamma2} we use \eqref{eq:discrete_bc_pressure} and \eqref{seq:discrete_darcy}. For a node at a boundary that has the velocity condition \eqref{eq:bc_gamma13} we use \eqref{eq:discrete_continuity} and \eqref{eq:discrete_bc_velocity}. The variables involved in these equations are defined as integrals in \eqref{eq:discrete_ABc} and \eqref{eq:discrete_CDabg}. The terms in the integrands are defined in \eqref{eq:normal}, \eqref{eq:local_interp_master_def} and \eqref{eq:boundary_interp_master_def}, the integrals being performed over master coordinates using \eqref{eq:2d_master_integration} and \eqref{eq:1d_master_integration}, with the coordinate transformations having Jacobian \eqref{eq:jacobian} and derivative \eqref{eq:dsdomega}. The coordinate transformations these describe are defined in \eqref{eq:elem_iso_coord_trans} and \eqref{eq:bound_iso_coord_trans}. In cases where $a_i$, $b_i$ or $c_i$ are required on regions of the boundary where the integrated variable is not supplied as a boundary condition, \eqref{eq:discrete_abc} is used to find the value, where the variables are defined in \eqref{eq:discrete_EF}.
These equations are constructed as a matrix and then solved using standard methods.
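As a sketch of what `standard methods' might look like in practice, the global system could be assembled in sparse triplet form and passed to a direct solver; this is our illustration, not a description of the code actually used:
\begin{verbatim}
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_system(rows, cols, vals, rhs):
    """Assemble the global matrix from (row, col, value) triplets
    produced by the elemental integrals and solve it directly.
    Duplicate (row, col) entries are summed, as assembly requires."""
    n = len(rhs)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    return spla.spsolve(A, rhs)
\end{verbatim}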
\subsection{Numerical Testing}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Convergance/Linear}
\caption{Convergence test for the mesh on the linear polynomial $P=1-z$. Plotted are: top, the relative error on $v$ with line of best fit $1.54\cdot10^{-2}\cdot I_{mh}^{-1.95}$; middle, the relative error on $p$ with line of best fit $3.21\cdot10^{-3}\cdot I_{mh}^{-2.94}$; bottom, the integrated error on $p$ with line of best fit $7.40\cdot10^{-4}\cdot I_{mh}^{-3.66}$.}
\label{f:test_linear}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Convergance/Cubic}
\caption{Convergence test for the mesh on the cubic polynomial $P=10-z+r^2-2z^2+3r^2z-2z^3$. Plotted are: top, the relative error on $p$ with line of best fit $4.48\cdot10^{-2}\cdot I_{mh}^{-1.52}$; bottom, the integrated error on $p$ with line of best fit $9.38\cdot10^{-3}\cdot I_{mh}^{-1.99}$.}
\label{f:text_cubic}
\end{subfigure}
\end{tabular}
\caption{Plotted are the maximal or integrated errors for several runs of the numerical solver. The parameters of the mesh are $R(t)=1$, $r(s)=2\cos(s)$, $z(s)=-2\sin(s)$, $S_{mh}=10^{10}$, $Z_{mh}=10^{10}$, $C_{mh}=1$, $M_{mh}=1.4$, and $R_{mh}=1.15$. Also, $\gamma=0$ and boundary conditions on $\hat{\boldsymbol{n}}\cdot\boldsymbol{u}$ are applied on $\Gamma_1$ and $\Gamma_3$.}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/ZoomTesting/Pressure}
\caption{Absolute error on pressure. Lines are $10^{-14}/\rho$ and $10^{-17}/\rho$.}
\label{f:test_zoom_p}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/ZoomTesting/RadialVelocity.eps}
\caption{Absolute error on radial velocity. Lines are $10^{-13}/\rho^2$ and $10^{-17}/\rho^2$.}
\label{f:text_zoom_u}
\end{subfigure}
\end{tabular}
\caption{Errors on three runs of the solver, the error on every node plotted as a function of the distance from $C_2$, $\rho$. The data are for runs with $P=1$ and $Z_{mh}=10^{-5}$ ($\fullmoon$), $P=1$ and $Z_{mh}=10^{-10}$ ($\triangledown$) and $P=1-z$ and $Z_{mh}=10^{-10}$ ($\square$). $\gamma=0$ for all runs.}
\end{figure}
To perform error analysis on the code we consider exact analytic solutions to the bulk equations. From these analytic solutions boundary conditions can be deduced and the numerical solver run with these conditions. This should reproduce the analytic solution, and any difference between the analytic solution and the numerical solution is numerical error.
Combining \eqref{eq:continuity} and \eqref{eq:darcy} we obtain the equation for pressure, $\nabla^2 p=0$. Considering a cubic polynomial solution in axisymmetric cylindrical coordinates, and denoting the analytic solution by $P$, $U$ and $V$, the general form is
\begin{align}
P(r,z)&=P_1+P_2z+P_3r^2-2P_3z^2-3P_4r^2z+2P_4z^3, \\
U(r,z)&=-2P_3r+6P_4rz, \\
V(r,z)&=-P_2-\gamma+4P_3z+3P_4r^2-6P_4z^2.
\end{align}
We consider three types of error: absolute, relative and integrated, which for pressure are
\begin{equation*}
E_a=p-P, \qquad E_r=\frac{p-P}{P}, \qquad E_i=\frac{\int_{\Omega_0} (p-P)^2 \dif r \dif z}{\int_{\Omega_0} P^2 \dif r \dif z}
\end{equation*}
respectively.
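For illustration, these errors might be evaluated as follows (a Python sketch; approximating the integrated error by a quadrature rule with nodal weights is our assumption, the integrals in practice being evaluated over the elements of $\Omega_0$):
\begin{verbatim}
import numpy as np

def pressure_errors(p, P, w):
    """Absolute and relative nodal errors, plus the integrated error
    approximated by a nodal quadrature with weights w (assumption)."""
    E_a = p - P
    E_r = E_a / P
    E_i = np.sum(w * E_a**2) / np.sum(w * P**2)
    return E_a, E_r, E_i
\end{verbatim}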
First we examine the convergence properties as the mesh is refined. We do this by setting $S_{mh}$ to be very large such that all spines are constructed in a uniform distribution approximately $1/I_{mh}$ apart. By changing the value of $I_{mh}$ the convergence properties can be seen. Figure \ref{f:test_linear} shows how, for a linear polynomial, the convergence is very rapid. For higher order polynomials, as in figure \ref{f:text_cubic}, the convergence is slower, but for $I_{mh}>10$ the solution is acceptable. In regions where we are not having to refine the mesh the solution is well behaved and so this level of resolution should be sufficient.
The refinement around $C_1$ is steady and so will not produce errors until $S_{mh}\lesssim10^{-7}$, at which point the fact that the value of the Jacobian is less than $10^{-14}$ may start to produce errors from machine precision. The refinement around $C_2$ is much more rapid, and there the error from machine precision becomes a problem much sooner. This is clearly shown in figure \ref{f:test_zoom_p}, where the absolute errors for one linear and two constant solutions are plotted at every node against $\rho=\sqrt{(r-1)^2+z^2}$. As the nodes get closer to $C_2$ the error on pressure grows as $\rho^{-1}$, where $\rho$ characterises the size of the elements. For the constant solutions this is the only error and so it is visible across the whole range of values. For the linear solution there is a region in which the error due to the other inaccuracies dominates, but as the elements get smaller there comes a point at which the error caused by the rapidly changing element size dominates. From figure \ref{f:text_zoom_u} we see that, for the linear polynomial, the rate of convergence of the velocity under the mesh refinement is zero in the region where the pressure is converging. This is worrying, since this is for a linear polynomial, which has the highest rate of convergence. For other solutions the error in velocity will likely increase throughout the refinement. However, the refinement is required to stabilise the solution at the singularity at $C_2$, and the solution is not required at this point, only at the wetting front where it drives the time stepping. This aspect of our mesh is the least desirable and should be improved upon in any future work.
\subsection{Time Stepping the Wetting Front}
The time stepping of the front will be discussed in several parts. First we shall discuss the stepping of a front with a set of known velocities, then the process by which velocities are extracted from a solution, and finally the time-stepping scheme that is to be used. To number the nodes on the wetting front we shall use the subscript $i$; this should not cause confusion with the global node numbers since we will not be using them in this subsection. The numbering scheme assigns the node at $C_1$ the number $0$ and uses consecutive natural numbers as we move towards the node at $C_0$, up to a highest value of $N$.
Firstly, we consider time stepping once the velocities are known. Let the velocity of the surface at node $i$ be $v_{s,i}$, the coordinate of the node be $\boldsymbol{r}_i (t)$, the unit normal at this node be $\hat{\boldsymbol{n}}_i$ and the size of the time step be $\Delta t$. The positions of the nodes after time stepping are
\begin{subequations}\begin{equation}
\boldsymbol{r}_i(t+\Delta t)=\boldsymbol{r}_i(t) + v_{s,i} \hat{\boldsymbol{n}}_i \Delta t \hspace{1cm} \forall \: i \in \{1,2,\ldots,N-1\}.
\end{equation}
At either end the stepping is performed using the assumption that the velocities are locally constant, which means that they are stepped by
\begin{align}
r_0(t+\Delta t)&=r_0(t) + \frac{v_{s,0}}{\hat{n}_{r,0}} \Delta t \\
z_N(t+\Delta t)&=z_N(t) + \frac{v_{s,N}}{\hat{n}_{z,N}} \Delta t.
\end{align}\end{subequations}
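A sketch of this update (Python; rows of \texttt{nodes} and \texttt{normals} hold the $(r,z)$ components, and the endpoint formulas follow the equations above):
\begin{verbatim}
import numpy as np

def step_front(nodes, v_s, normals, dt):
    """Advance the front: interior nodes move along their normals;
    node 0 slides along the r-axis and node N moves in z only."""
    new = nodes.copy()
    new[1:-1] += v_s[1:-1, None] * normals[1:-1] * dt
    new[0, 0] += v_s[0] / normals[0, 0] * dt     # r_0 update
    new[-1, 1] += v_s[-1] / normals[-1, 1] * dt  # z_N update
    return new
\end{verbatim}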
The velocities can be found from the solution at time $t$ either by taking the values of the solution at the nodes that the problem is solved over or, if the node to step is not part of the solution mesh, by simple interpolation using \eqref{seq:function_discrete_boundary}. However, this causes problems since the error fluctuates from one node to the next, i.e.\ if the error on the normal velocity is $\delta$ at node $i$ then it will be $-\delta$ at nodes $i-1$ and $i+1$. This error would cause the wetting front at the next time step to have fluctuations in it, and such fluctuations have been found to build from step to step. To solve this problem a simple smoothing algorithm is employed. The use of a standard spline smoother may also be suitable, but that is not what has been used. We smooth not only the velocities but also the normals, to aid the stepping if fluctuations do start to build, producing the smoothed variables $\bar{v}_{s,i}$ and $\hat{\bar{\boldsymbol{n}}}_i$. The smoothing algorithm to remove the fluctuating errors is presented below for the velocity, and is the same for the normals.
\begin{align}
\bar{v}_{s,i}&=\frac{2 v_{s,i}+v_{s,i+1}+v_{s,i-1}}{4} \hspace{1cm} \forall \: i \in \{2,3,\ldots,N-1\} \\
\bar{v}_{s,1}&=\frac{1}{2}\left( v_{s,1} + \sum_{j=0}^{3} \bar{v}_{s,4-j} \psi_j^B(\omega(\chi(\boldsymbol{r}_1))) \right) \\
\bar{v}_{s,0}&=\frac{1}{2}\left( v_{s,0} + \sum_{j=0}^{3} \bar{v}_{s,4-j} \psi_j^B(\omega) \right) \\
\bar{v}_{s,N}&=\frac{1}{2}\left( v_{s,N} + \sum_{j=0}^{3} \bar{v}_{s,N-1-j} \psi_j^B(\omega(\chi(\boldsymbol{r}_N))) \right)
\end{align}
In the equation for $\bar{v}_{s,1}$, $\omega(\chi(\boldsymbol{r}_1))$ denotes the process by which the value of $\chi(\boldsymbol{r}_1)$ is found, and then the Newton-Raphson method is used on the boundary made up of nodes $4,3,2$ to find the value of $\omega$ that has the correct value of $\chi$. This process is described in subsection \ref{ss:Spines}. Similar notation is used in the equation for $\bar{v}_{s,N}$, except that the boundary is made up of the nodes $N-1,N-2,N-3$. In the equation for $\bar{v}_{s,0}$, the value of $\omega$ is found by solving for $z=0$ on the boundary made up of nodes $4,3,2$. This process removes the main contribution of the error along the bulk of the wetting front during time stepping in the simplest manner, whilst reducing the spatial accuracy of the solution only slightly. At the corners we interpolate along the front to perform the averaging. In the current implementation we apply this stabilisation twice to the velocities before performing the time stepping.
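For the interior nodes this averaging is one line of array arithmetic (a Python sketch; the interpolation-based corrections at the end nodes are omitted here):
\begin{verbatim}
import numpy as np

def smooth_bulk(v_s):
    """One pass of the 1:2:1 averaging over the interior front
    nodes (i = 2, ..., N-1)."""
    v_bar = v_s.copy()
    v_bar[2:-1] = (2 * v_s[2:-1] + v_s[3:] + v_s[1:-2]) / 4
    return v_bar

# Applied twice before stepping, as in the text:
# v_s = smooth_bulk(smooth_bulk(v_s))
\end{verbatim}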
We time-step using Heun's method (also known as the improved Euler method): first the velocities at one instant of time are found, then a trial time step is performed and the velocities are found at this time. The actual time step is performed using the average of the velocity at time $t$ and at the trial step. The velocity at the trial time step must be found at the nodes that were projected from the front at time $t$, and not at the nodes that now form the mesh, which is done by interpolation. Note that here by velocity we mean the two-component vector, i.e.\ the normal velocity \textit{and} the normal direction. This method is of second-order convergence, which is deemed sufficient for our problem.
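Using the \texttt{step\_front} sketch above, one Heun step might read as follows (Python; \texttt{solve\_and\_extract} stands for the whole solve-smooth-interpolate procedure, and renormalisation of the averaged normals is omitted for brevity):
\begin{verbatim}
def heun_step(nodes, dt, solve_and_extract):
    """One Heun (improved Euler) step of the wetting front."""
    v0, n0 = solve_and_extract(nodes)
    trial = step_front(nodes, v0, n0, dt)   # trial Euler step
    v1, n1 = solve_and_extract(trial)       # velocities at the trial front
    return step_front(nodes, (v0 + v1) / 2, (n0 + n1) / 2, dt)
\end{verbatim}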
The size of the time step is not fixed, but is adjusted to restrict the rate of change of the contact angle CA2 and the rate of change of the local curvature of the surface, at both the trial step and a secondary trial step taken a further $\Delta t$ beyond the trial step.
\section{Introduction}
The flow of fluids through porous media is present in a vast variety of natural phenomena and industrial applications. Some examples are oil recovery, carbon-dioxide sequestration, hydro-geology, fuel cells, ink-jet and 3D printing, and the creation of ceramics. Porous media are materials such as sandstone, paper or packed beads, which have small voids in their bulk, called \textit{pores}, connected together to form a network of thin passageways on a microscopic scale. The connectivity of the pores allows fluids to flow through them. When more than one fluid occupies the porous medium there will be pores in which the two fluids meet, causing an interfacial surface to form where surface tension will act. The fluids on either side of this surface may be part of a large \textit{bulk} which occupies the pore space on a length scale much larger than that of the pores, such as an aquifer or oil reservoir, or be in the form of \textit{ganglia} only occupying a few pores at most. When the fluid is flowing rapidly into a porous medium, or \textit{wetting} it, a sharp interface may form on the macroscopic scale between the bulk phases of the wetting and displaced fluids called a \textit{wetting front}. Whether or not a clear wetting front is formed depends on the characteristics of the two fluids and the porous solid. If the porous medium is initially saturated with and surrounded by one fluid, and is then brought into contact with another fluid which then wets it, this is called \textit{imbibition}. The body of fluid that has been introduced shall be called the \textit{external reservoir}, the area of contact between this and the porous medium the \textit{drawing area}, the resulting bulk phase of the wetting fluid the \textit{wetted region}, and the bulk phase of the displaced fluid the \textit{dry region}. The terminology that we employ is illustrated in figure \ref{f:intro_terms}.
The most important parameter characterising the porous medium itself is the \textit{porosity}, which is the volume fraction of the material that is pore space. That is, if we consider a volume $V$ within the porous medium, then $V_1$ of this total volume will be made up of the pore voids and $V_2$ of the solid matrix itself, such that $V=V_1+V_2$. The porosity is $V_1/V$. This is what we mean by a volume fraction; the terms length fraction and area fraction shall also be used in this work.
\begin{figure}[ptb]
\centering
\input{Figures/TikZ/Terminology.tex}
\caption{Illustration of the imbibition of a wetting fluid, labelling the various regions. The external reservoir depicted is a vertical column of fluid supported by a solid cylinder.}
\label{f:intro_terms}
\end{figure}
The field of flows in porous media has been under investigation for many years now and progress has been made in the mathematical description and conceptual understanding of all the topics above. However, a full theoretical model is still wanting.
We aim to investigate mathematical models of the wetting front. We shall do this by theoretically studying the imbibition of a liquid through a horizontal surface into a porous substrate. The external reservoir may be either a column of liquid or a droplet. This will produce theoretical predictions which can then be tested empirically. In the present work explicit modelling of the fluid exterior to the porous medium shall not be undertaken; instead we will model and begin to study the bulk region of the imbibed fluid, the subject of interest being the propagation of the wetting front into the porous medium as time progresses. In addition we shall investigate only a very simple model of the wetting process, but put forward a scheme that can easily be enhanced to investigate much more complicated models.
In the literature review that follows we will first give a broad overview of the approaches to modelling fluid flows in porous media, followed by a closer look at the continuum models. Another example of imbibition shall then be considered, where boundary conditions for the wetting front have already been proposed and tested; of particular interest is that of Shikhmurzaev and Sprittles. Finally, we shall examine the progress made on imbibition through a horizontal surface, especially that of droplets, since much progress has been made in this area.
\subsection{Approaches to Modelling}
The main problem in this area is to change the scale of the description of the flow from that of the pore to that of the macroscopic domain, which may be, for example, an oil field or a piece of paper. On the pore scale the standard equations for macroscopic fluids (such as the Navier-Stokes equations) are valid, and the domain of the flow is the pores. Performing an order of magnitude estimate, the length scale of a pore may be $\sim 10^{-5} \mathrm{m}$ \cite{natural_rock}, giving a volume $\sim 10^{-15} \mathrm{m}^3$. A rain drop has a length scale of $\sim 10^{-2} \mathrm{m}$; thus if a rain drop imbibes into a porous medium it will pass into $\sim 10^{9}$ pores, the precise dynamics of the flow being required in every one. It is not only impractical to attempt to calculate the solution in such a domain, but also unwise to require detailed knowledge of the pore structure in the sample, which would render impossible the modelling of flows without sophisticated apparatus to scan the sample first. Therefore other methods have to be devised.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.35\textwidth]{Figures/Capillary_Network.eps}
\caption{Capillary network from \cite{spread_sessile_numerics}; the pores and the throats connecting them are arranged in a regular rectangular grid, the diameters of both being randomly generated.}
\label{f:intro_capil_net}
\end{figure}
Adler and Brenner \cite{review_adler} review various methodologies still present in the field. The more recent review by Alava \textit{et al.} in 2004 \cite{disordered_alava} discusses many of the more modern (and advanced) forms of these methods, which broadly speaking can be classified into two types.
Firstly there are continuum descriptions. Here we consider the case where the pores are on a much smaller scale than the bulk region of fluid, and the time scales characterising the flow in the pores are much shorter than that of the macroscopic flow we are investigating. Thus we can model the flow using averaged quantities on intermediate scales. These approaches have the advantage that they provide a macroscopic description of a macroscopic phenomenon. This is what is ultimately desired from any model; even if we could use the Navier-Stokes equations to describe the flow in every pore, the desired results would be the concentrations of fluids in different regions, their flux and their averaged stress or pressure. If these can be calculated directly then this is a great advantage, both analytically and intuitively.
Secondly there are the lattice or particle models. These typically operate by considering the porous network to be regular in some sense; for example the capillary network in figure \ref{f:intro_capil_net}, where a rectangular grid of spherical pores with throats connecting them is used to represent the porous network. The flow is then modelled using some algorithm dictating which fluid occupies each pore, throat or other small region. The algorithm is deduced from assumptions about the behaviour at each modelling point, to approximate when one fluid will displace the other. Of course the porous network in a rock will not resemble the figure: it will be much more disordered, and one fluid does not suddenly displace another; it takes time, if only a very small amount. We see that these models operate in the same regime as the continuum models, requiring the separation of scales.
Contrasting the two approaches, continuum models have the advantage of giving direct access to the macroscopic parameters that will ultimately be of interest, and are analytically tractable, providing asymptotic information in limiting cases. Another consideration is topology: since the pores are modelled directly in the lattice model, and their orientation cannot be guaranteed to be (and often is not intended to be) that of the true pores, a huge number of pores must be used in the model to hide the inaccuracies produced, more than can feasibly be simulated. Continuum models do not face this obstacle. The lattice models must be shown to have some advantage over a continuum model, which can only be that they have unsurpassed accuracy and precision when describing a range of phenomena. This has not been achieved so far. In what follows an overview of continuum models is presented.
\subsection{Continuum Mechanical Models}
The assumptions involved in continuum mechanics shall now be stated more formally. In general, continuum mechanics assumes a separation of the length and time scales between the macroscopic behaviour of interest and the microscopic processes that drive it. Thus the macroscopic behaviour can be modelled using spatio-temporally averaged quantities on intermediate scales (which are almost always the quantities of interest). The equations used can be thought of as the dominant terms in the asymptotic expansion as the ratio of microscopic to macroscopic scales tends to zero. Within an individual pore a primary continuum limit\footnote{In this limit, the microscopic behaviour is that of atoms and molecules, the macroscopic behaviour is that of the fluid flow in a single pore.}
is used to model the fluid, yielding such equations as the Navier-Stokes equation. The flow in a pore and the flow of the bulk regions of fluid are assumed to be on scales separated by orders of magnitude, thus we can model the macroscopic flow using a secondary continuum limit, which shall be used unless otherwise stated. The scales characterising the macroscopic region shall hereafter be referred to as Darcy scales.
Under this secondary continuum limit, the porosity can be viewed as the continuum average of a function that takes the value 1 in the pores and 0 in the solid matrix. Under this definition the porosity is, in general, a function of position, and if the porous medium is homogeneous then the porosity is a constant.
When developing continuum models the behaviour under the primary continuum limit is sometimes required, and the behaviour under the secondary limit is calculated as a result. However, we do not wish to consider a specific porous network, and instead choose to represent it using cylindrical pores. The flow in these representative pores is assumed to approximate well the flow that occurs in the real pores once the secondary continuum limit is applied. The representative pores have an effective pore radius, which is not only a function of position but also of the direction of the pore, and which in isotropic and homogeneous porous media becomes a constant. Calculating the effective pore radius that best describes a particular material is subtle; a method for doing so is presented in \cite{cap_rise_powders} and tested in \cite{wetting_quartz}.
In our study we will require equations that describe the macroscopic flow of the fluid through the wetted region. Examples of these equations will now be discussed and an appropriate equation chosen.
The simplest continuum description was discovered empirically by Darcy in 1856, and is explained in \cite{darcy_intro,natural_rock}. It has been well tested and is used extensively in engineering applications. That is not to say that it is the best equation, but it certainly is adequate for most situations. If gravity is the only applied body force then Darcy's equation is, denoting the velocity $\boldsymbol{u}$ and pressure $p$,
\begin{equation*}
\boldsymbol{u}=-\frac{k}{\mu}(\nabla p -\rho \boldsymbol{g}),
\end{equation*}
where $\mu$ and $\rho$ are the viscosity and density of the fluid, $k$ the permeability and $\boldsymbol{g}$ the free-fall acceleration due to gravity. The permeability characterises the resistance of the porous medium to the motion of the fluid. Interpreting this equation: the fluid experiences forces due only to the pressure gradient and the body force, convection and viscous diffusion having negligible effect. Also, since the acceleration occurs on a time scale much shorter than that of the macroscopic flow, it is the velocity that responds to these forces (in the continuum limit). Darcy's equation applies to the flow in a region saturated with one fluid phase. For it to apply as-is to imbibition, the wetting fronts between the phases must be surfaces and there must be no ganglia. We will discuss shortly the ways in which Darcy's equation is modified to model more complicated flow scenarios.
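For concreteness, a direct evaluation of Darcy's equation (a Python sketch; the numerical values are illustrative assumptions of our own, not data from any of the works cited):
\begin{verbatim}
import numpy as np

def darcy_velocity(grad_p, k, mu, rho, g):
    """Darcy velocity for a given pressure gradient and body force."""
    return -(k / mu) * (grad_p - rho * g)

# Illustrative values only: water (mu ~ 1e-3 Pa s, rho ~ 1e3 kg/m^3)
# in a fine sand (k ~ 1e-12 m^2) with a vertical pressure gradient.
u = darcy_velocity(np.array([0.0, -2e4]), 1e-12, 1e-3,
                   1e3, np.array([0.0, -9.81]))
# Gives an upward Darcy velocity of about 1e-5 m/s.
\end{verbatim}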
Darcy's equation can be derived by explicitly volume averaging the equations of motion within the individual pores, as in \cite{darcy_whitaker}. The assumptions that must be made in this derivation give insight into the equation's conditions of validity. The most important conditions are that the pore size is much smaller than the domain of the flow and that the macroscopic acceleration of the fluid is small (as should be expected). The paper then goes on to derive alternative equations which include correction terms for small effects. The equations developed are the Navier-Stokes equation with perturbing terms, and not Darcy's equation with corrections, since the mathematical technique applies the correction of including the porous matrix to the free flow. An equation produced in this manner may well be valid for particle suspension phenomena, since there the flow is indeed perturbed by the presence of solid particles. However, it has not been shown that any equation derived in this manner is more accurate than Darcy's, nor that they give any advantage in describing flows in porous media, where the effects of the solid matrix dominate.
Other equations have been produced that are corrections to Darcy's equation. One of these is Brinkman's equation, which includes a correction for long-range viscous effects. This equation has often been justified (see \cite{bc_haber}) by the claim that it allows for the Beavers and Joseph boundary condition \cite{bc_beavers} and the experimental results that accompany it in that paper. This boundary condition states that, at the edge of the porous medium where the fluid transitions into free flow, the components of velocity tangential to the boundary change rapidly in the direction normal to the boundary. However, as demonstrated in \cite{about_bj}, the condition itself does not show the separation of scales required for a valid continuum mechanical model, nor is their experimental data that of true porous flow coupled to free flow; rather, the `free flow' is in a region of a similar scale to the pores. This does not invalidate Brinkman's equation, but it does show that we have no reason to believe in its validity. Many more examples of corrections exist (the other classic example is the Forchheimer equation \cite{forcheimer_derivation}), but it is not clear whether any of them are valid and in what regime, and they all reduce to Darcy's equation in the continuum limit.
A more complete description would include the modelling of ganglia, as well as intertwined percolating bulk phases. In a continuum model with mixed phases we must introduce saturations of the different fluids as functions of position and time, as described in \cite[Ch. 5]{natural_rock}. Of course this makes the modelling of the interactions between fluids much more difficult, since we do not know the size and extent of each region of fluid, nor the geometry of the surfaces that separate them. Typically the interaction is modelled via a constitutive equation specifying a pressure difference between the phases, which will likely be a function of the saturation. If Darcy's equation is used for each fluid phase then the permeability may be altered by a factor known as the relative permeability, which will also be a function of the saturation. In some formulations even terms involving the direct effect of the pressure in other phases are included into a modified Darcy's equation.
Hilfer has attempted to create a very general model of multiphase fluid flows. In his recent paper \cite{hilfer_theory} setting out the theoretical development, he starts with general statements of mass and momentum conservation. He also models the bulk phases and ganglia as different phases, such that each possesses its own saturation and can be modelled using its own constitutive equations. Constitutive equations are then proposed characterising the behaviour of one of the fluid phases, the interaction between two fluid phases, or the interaction between a fluid and the solid matrix. However, the constitutive equations proposed are of forms that are unjustified and have so many free parameters that the resulting model is simply unusable in its most general form. This is well demonstrated by what happens when sufficient restrictions are applied to the model to produce Darcy flow in the two bulk phases: the pressure difference between them is a function of one variable with \textit{ten} arbitrary parameters. It is no wonder that the model fits well to a small number of empirical curves; it would be a surprise if it did not. The model is also simulated numerically in \cite{hilfer_numerics} in a one-dimensional situation, but no empirical evidence is provided. For this model to be validated, it needs to be shown that it can predict experimental results in a manner that is not indicative of its vast number of free parameters, and that the parameters are constants for the materials in the system.
In this study we do not intend to include the effects of ganglia in our model. Of the models that do not include these effects, Darcy's equation is the only one that has been extensively verified. All others that have been developed have either not been sufficiently well tested or have been shown to be inaccurate. Since we do not intend to test bulk equations, Darcy's equation will be used.
\subsection{Capillary Rise in a Porous Column}
The mathematical modelling of the interfaces between different fluid phases is a difficult topic in its own right, and thus a simple situation is required in which it can be studied. This can be achieved by considering a vertical column of a porous material initially saturated with one fluid. The base of this column is then immersed in an external reservoir that imbibes into it, rising up against gravity. This process is known as \textit{capillary rise}, and is a simplification since the wetting front will be approximately horizontal and propagating in the vertical direction which makes it reasonable to model it as a one-dimensional phenomenon. The behaviour of interest is that of the menisci at the wetting front as the fluid propagates, and the boundary conditions required to describe it. Of these we are especially interested in that of Shikhmurzaev and Sprittles, which has recently been shown to accurately describe this phenomenon. First we shall briefly discuss the relevant bulk equations and then move onto the boundary conditions.
The equation that is used to describe the bulk flow may be Darcy's, but often Washburn's equation \cite{washburn_original} is used. The flow along a long thin tube or \textit{capillary} of constant circular cross-section, which in general may be curved, is assumed to follow Poiseuille's law for locally unidirectional flow. The only coordinate for this one-dimensional flow is the distance along the tube, and the only variables of interest are the velocity and pressure averaged over the cross-section. The velocity in Poiseuille flow is a function of the distance from the centre of the tube and of time; therefore the cross-sectionally averaged velocity will only be a function of time. The porous medium is modelled as a bundle of these capillaries, aligned in the vertical direction. The assumption of unidirectional flow is invalidated at the inlet, leading to the development of corrections to this equation such as \cite{washburn_applicability}. Another improvement that has been made is the inclusion of pore doublets \cite{extended_washburn}. These improvements are of little interest here, since Washburn's equation, or preferably Darcy's equation (since this is what is used for a general flow in a porous material), is sufficient to examine boundary conditions that may be applied at the wetting front.
The simplest assumption that may be made about the menisci in the pores (or capillaries) on the wetting front is that they form spherical caps that, at the edge of the capillary, subtend a prescribed constant angle to the solid boundary known as the \textit{contact angle}. Across each meniscus surface tension acts, causing a bulk pressure difference between the imbibing and displaced fluids. If the contact angle is less than $\pi/2$ then the pressure in the imbibing fluid is less than that of the displaced fluid. This decrease in pressure will cause a pressure gradient in the imbibing fluid, since the pressure at the wetting front will be less than that at the base of the porous column, and if the force of the pressure gradient is greater than the force of gravity then the fluid will be driven upwards.
Delker \textit{et al.} \cite{interface_pinning} model the vertical porous material using Darcy's equation and the assumption of a constant contact angle. They show analytically that $h_0-h(t) \propto e^{-t/\tau}$, where $h(t)$ is the current height, $h_0$ is the equilibrium height and $\tau$ is the characteristic time scale of the imbibition. They then go on to present experimental data, included here in figure \ref{f:intro_capil_rise_delker}, along with a plot of the analytic solution. It is observed that the analytic solution fits well for small times, but that for large times the flow is much slower.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.6\textwidth]{Figures/Delker_Capillary_Rise.eps}
\caption{Four sets of empirical data for capillary rise from \cite{interface_pinning}, the x-axis showing our $t$ and y-axis our $h$. For early times the solution to Darcy's equation for constant contact angle in the pores of the wetting front fits well. However, for later times the flow rate reduces dramatically below what is predicted. The experimental data is for packed beads of diameter 180$\mu$m ($\triangledown$), 253$\mu$m ($\fullmoon$), 359$\mu$m ($\vartriangle$), and 510$\mu$m ($\square$).}
\label{f:intro_capil_rise_delker}
\end{figure}
A possible solution to this problem is to allow the contact angle to vary as a \textit{dynamic contact angle}. In any propagation of a fluid, the contact angle is a functional\footnote{A functional is a mapping from a function to a number, this is usually an integral of the function. In this case it would likely be an integral involving the velocity field and some weight function.}
of the local velocity field \cite[\S 3.2.3.3]{shk_capillary_flows}. Since capillary rise is modelled in one dimension, all of the local velocities are characterised by a single scalar velocity, which is equal to the velocity of the meniscus itself. Therefore we assume that there is an equation relating the velocity of the meniscus and the contact angle, preferably such that one is a function of the other. Martic \textit{et al.} \cite{cap_rise_martic} used Washburn's equation to model capillary rise. At the wetting front the meniscus velocity was restricted to be a monotonically increasing function of the contact angle over the range of contact angles involved in the process, with a parameter to govern the magnitude of the contact angle variation. A larger contact angle gives a flatter meniscus and a lower pressure difference across it, and thus a lower flow velocity, which is what is shown by their simulations in figure \ref{f:intro_capil_rise_martic}. To describe the results in figure \ref{f:intro_capil_rise_delker}, we could employ a model of contact angle variation that is almost constant for the range of velocities encountered at early times, and that smoothly increases for the lower velocities encountered near the end.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.6\textwidth]{Figures/Martic_Capillary_Rise.eps}
\caption{Numerical simulations of capillary rise from \cite{cap_rise_martic}, the x-axis showing our $t$ and y-axis our $h$. The graph demonstrates that by increasing the variation of the dynamic contact angle the equilibrium state takes longer to reach. The white circles represent empirical data from \cite{cap_rise_quere}.}
\label{f:intro_capil_rise_martic}
\end{figure}
The model developed by Shikhmurzaev and Sprittles in \cite{wetting_dynamics_shk} slows the advancement by a different mechanism, involving two distinct modes as illustrated in figure \ref{f:intro_modes}. These modes are modelled in a representative cylindrical pore that (in an isotropic medium) is perpendicular to the wetting front, and itself modelled in the one-dimensional manner using velocities and pressures averaged over the cross-section. In mode 1 the meniscus is advancing along the pore freely, as illustrated by figure \ref{f:intro_modes}a, its free surface forming a dynamic contact angle $\theta_d$ with the pore wall. In mode 2 the contact line is pinned until the contact angle reaches $\theta_*$, as illustrated by figure \ref{f:intro_modes}b. The length fraction along the pore traversed in mode $i$ is $s_i$. If $\theta_d\geq \theta_*$ then pinning does not occur and $s_1=1$; otherwise it takes the value $s_1=s_{10}$, where $s_{10}$ is the representative length fraction over which pinning cannot occur. From these length fractions and the velocity of the meniscus in each of the modes, the area fraction of the wetting front in mode $i$ is calculated. The pressure and normal velocity of the wetting front are equal to the area-fraction-weighted means of the values at the representative menisci.
The pressure in mode 1 is calculated relative to the pressure in the displaced fluid using the surface tension across the spherical cap, as usual. The proposed function for the dynamic contact angle is that from the theory of capillary flows with forming interfaces \cite{shk_capillary_flows}. Thus, in mode 1, the condition is a non-linear relationship between pressure and normal velocity. In mode 2, the stagnation pressure is defined as the pressure that builds up on the meniscus when it is prevented from deforming. This is then used to derive the pressure and velocity at the meniscus as it deforms, averaged over time. The resulting boundary condition is a non-linear relationship between the normal velocity of the wetting front, the pressure and the stagnation pressure.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.7\textwidth]{Figures/Wetting_Modes.eps}
\caption{Illustration of the two modes proposed in \cite{wetting_dynamics_shk}, the wetting mode (a) and the threshold mode (b).}
\label{f:intro_modes}
\end{figure}
Numerical simulations were performed to compare the results of Shikhmurzaev and Sprittles' model with the empirical results of Delker \textit{et al.}, and are included in figure \ref{f:intro_modes_rise}. Qualitatively, the plots show the same behaviour. However, there does seem to be some discrepancy in the results, especially for the beads with a diameter of 510$\mu$m. Denoting the diameter of the beads as $d$ and the distance moved in the vertical direction as $h$, continuum mechanics is valid in the limit $d/h \rightarrow 0$, with averaged quantities defined on a scale $\sqrt{d/h}\,h$. For the largest beads the separation of scales is $\sim 1/6$, which is nowhere near zero as required. For the smallest beads the separation is $\sim1/30$, which is acceptable. The most likely explanation for the increase in accuracy as the bead diameter decreases is that the experiments were not sufficiently well within the continuum regime.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.6\textwidth]{Figures/Modes_Rise.eps}
\caption{Solid and dashed lines are numerical simulations of capillary rise from \cite{cap_rise_shk}, with the empirical data from \cite{interface_pinning} that is also plotted in figure \ref{f:intro_capil_rise_delker}, the x-axis showing our $t$ and y-axis our $h$. The experimental data is for packed beads of diameter 180$\mu$m ($\triangledown$), 253$\mu$m ($\fullmoon$), 359$\mu$m ($\vartriangle$), and 510$\mu$m ($\square$).}
\label{f:intro_modes_rise}
\end{figure}
Now that a theoretical model has been shown to describe otherwise unexplained phenomena in a simple situation, its effects should be investigated in a more complicated environment. Our aim is to start an investigation into modelling the phenomena discussed below.
\subsection{Imbibition into a Porous Substrate}
An important topic of research is the dynamics of imbibition when the phenomenon cannot be modelled as one-dimensional. These flows reveal more complicated behaviours across the wetted region and wetting front, as we discover in our study. We consider a fluid imbibing into the flat horizontal top of a porous substrate from a reservoir of fluid that has been placed on it. This is a three-dimensional process, or, in the axisymmetric case where the drawing area is circular, a two-dimensional one. In Shikhmurzaev and Sprittles' model \cite{wetting_dynamics_shk} the multi-dimensional wetting front allows different regions of the wetting front to have different area fractions in each mode. The pressure of the fluid in the external reservoir is of little importance, since it is insignificant in comparison to the Darcy pressure \cite{dynam_angle_shk}; thus the wetted region draws in any fluid it requires through the drawing area with no resistance from the reservoir. Therefore the only parameter of the reservoir that affects imbibition is the radius of the drawing area. If the reservoir is a cylindrical column of fluid then this radius will be constant (or possibly a known function of time); if it is a droplet then it may be a constant, a function of time or, in simple cases, a function of the volume of imbibed fluid.
The phenomenon that we will be considering is imbibition through a circular drawing area of constant radius, whilst the main topic of research in this area is the imbibition of liquid droplets into porous substrates, the most common subject for multi-dimensional imbibition processes. Although our research is not on this subject specifically, since we will not be modelling the droplet, the area is important due to its presence in the literature and its applications in ink-jet printing, 3D printing and the manufacture of ceramics. It is relevant since, in the simplest case, the drawing area of the droplet is constant. In addition, our model of the wetted region could easily be expanded to use a simple model of the droplet to vary the radius of the drawing area. The remainder of this subsection shall be devoted to analytical, experimental and numerical progress in this area.
\begin{figure}[ptb]
\centering
\input{Figures/TikZ/Contact_Lines.tex}
\caption{Illustration of droplet imbibition, labelling the contact lines and contact angles.}
\label{f:intro_CA_CL}
\end{figure}
It is helpful to define two contact lines, which are lines at which three different materials meet, and contact angles, which are the angles subtended through one of the materials at the contact line. The contact lines and angles discussed are labelled in figure \ref{f:intro_CA_CL}. Let CL1 be the contact line between the droplet, the wetted region and whatever `atmosphere' the droplet is surrounded by. Let CL2 be the contact line at which the wetting front and solid surface meet. Let CA1 be the contact angle subtended by the droplet at CL1, and CA2 be the angle subtended by the wetted region at CL2. This terminology shall also be used for a column of fluid. Of course CL1 and CL2 could meet at the same line, as is investigated by Shikhmurzaev in \cite{dynam_angle_shk} for droplet imbibition. He also shows that, as CA1 and CA2 tend to $\pi/2$, the contact lines split with CL2 advancing ahead.
Denesuk \textit{et al.} \cite{dynamics_denesuk} define three regimes of behaviour for the spread of a liquid droplet over a porous solid. Let the time scale of spreading be $\tau_s$ and the time scale of imbibition (or, as it is called in their paper, depletion) be $\tau_d$. If $\tau_d \gg \tau_s$ then the droplet will spread out in a similar manner to spreading over a non-porous substrate, before slowly imbibing in a semi-static manner. If $\tau_d \ll \tau_s$ then the fluid will imbibe into the solid before any significant spreading can occur. If $\tau_d \approx \tau_s$ then the droplet will imbibe whilst the fluid spreads, but the imbibition itself is only affected by the radius of the drawing area; therefore imbibition controls (in part) the dynamics of the spread. In our investigation, since we shall not be modelling the droplet, we will only be able to consider cases where the droplet moves in a semi-static manner. That is, $\tau_d \gg \tau_s$, and possibly late times for $\tau_d \approx \tau_s$, once the droplet has already spread out and its behaviour is driven by imbibition in such a manner that inertial effects of the droplet are negligible.

In the earlier paper by Denesuk \textit{et al.} \cite{penetration_denesuk} they consider the imbibition of a droplet that has already spread out, specifying three cases that occur as the droplet's volume depletes (see figure \ref{f:intro_denesuk_cases}). Case (a) is that of decreasing drawing area (DDA), where CL1 recedes, decreasing the radius of the drawing area to zero for a droplet of zero volume. In case (b) the drawing area remains constant, CL1 being pinned in place, providing a constant drawing area (CDA). This can occur in two ways that are experimentally distinct: (b1) where the drawing area maintains the appearance of having a constant radius; (b2) where the drawing area appears to decrease in radius, but a thin film remains that can supply the pores with fluid from the bulk of the droplet. Both cases of (b) produce the same behaviour within the porous material, thus we consider the distinction no further. In our work we will only model the case of CDA. It is likely that neither DDA nor CDA is commonplace, and that as droplets imbibe their drawing area decreases, but not to zero. Denesuk \textit{et al.} then perform theoretical analysis of the two cases, using a Washburn-type model for the porous solid. From this they deduce that the time for imbibition with DDA, and constant contact angle CA1, is nine times greater than that with CDA.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.6\textwidth]{Figures/Denesuk_Cases.eps}
\caption{Illustration of the different cases of droplet depletion from \cite{penetration_denesuk}, described in the text.}
\label{f:intro_denesuk_cases}
\end{figure}
Experiments have been performed in a variety of the cases and limits described by Denesuk \textit{et al.} \cite{dynamics_denesuk}. Holman \textit{et al.} \cite{spread_holman} perform experiments for droplets with $\tau_d \approx \tau_s$, using two materials: HPA 0.5, with porosity $0.549$ and representative pore radius $0.07\mu$m; and HPA 1, with porosity $0.575$ and representative pore radius $0.17\mu$m. Droplets of diameter $54\mu$m are placed onto the substrate. Performing a best fit to their data, the radius of the drawing area at short times is approximately $R(t) = 54.1(0.04+t)^{0.176}$. At later times it is assumed to follow the model presented by Denesuk \textit{et al.} \cite{penetration_denesuk} for DDA, but this is not plotted for comparison.
Hapgood \textit{et al.} \cite{drop_penetration} perform experiments on imbibition into various powders and packed beads. The photographs they provide are informative as to the dynamics of the process and the time scales involved, but no data on the radii of the drawing area are provided.
Popovich \textit{et al.} \cite{popovich_rise_spread} experimentally investigate the spread of various fluids over carbon black, reporting initial and maximal radii, the rate of spread and the time for imbibition. However, the porous substrate fractured during the experiments, so the quality of the results is unclear.
Chandra and Avedisian \cite{droplet_experiment_chandra} perform experiments on the spread of droplets over a ceramic substrate, including images of the droplets spreading in their paper.
To investigate the level of agreement between theory and experiment, numerical simulations have been performed. Reis \textit{et al.} \cite{droplet_reis} produced numerical simulations of both the flow in a droplet imbibing into the solid and the flow within the solid, and compared them to empirical results, finding a good level of agreement for some of the simulations. They chose to use a spatially averaged Navier-Stokes equation, which is appropriate for particle suspension phenomena but has not been shown to be valid for flow in a porous material, as has already been discussed. Equivalent simulations need to be performed using Darcy's law for a fair comparison to be made as to the merits of their choice of bulk equation. They also use a constant contact angle CA1 as a boundary condition, justified with results from \cite{wetting_fukai} for a droplet rapidly spreading on a non-porous substrate. The assumption may also be valid for spreading on a porous substrate, but it is expected that (unless we have DDA) the contact angle will initially take some finite value and be zero when all the fluid has been imbibed. This is what is shown in their plots in \cite{droplet_reis_2}, which do not maintain the contact angle they specify, although this may be because the method they use to approximate the boundary does not produce a smooth curve as it should. Finally, the contact angle that they use in the pores is constant, which may or may not be a good approximation for droplet imbibition; this is yet to be tested. Considering all of these questionable elements, the results produced are remarkably similar to the empirical results, which suggests that their mathematical model may be largely correct, but without many alternatives to compare it to we cannot yet draw this conclusion.
Another relevant study has been done by Markicevic \textit{et al.} \cite{spread_sessile_numerics}, in which a capillary network model is used, producing numerical results accurate to around 20\%. The final example is that by Alleborn and Raszillier \cite{spread_porous}, who consider a very wide flat droplet using lubrication theory, in which motion can only occur in the vertical direction, producing surprisingly conical wetted regions. The validity of the lubrication approximation used shall be discussed later.
\subsection{The Present Work}
Our purpose is to investigate the dynamics of the wetting front by modelling and simulating imbibition into a porous substrate. In the present work the boundary condition used on the wetting front is that of a constant contact angle within the pores, but the numerical scheme developed is easily extended to include dynamic contact angles, and even the modes proposed by Shikhmurzaev and Sprittles in \cite{wetting_dynamics_shk}. The numerical scheme is for axisymmetric imbibition obeying Darcy's equation and incompressibility.
In section \ref{s:ProbForm} we will formulate a model of imbibition through a circular region of constant radius. Then in section \ref{s:Numerics} we describe the numerical model that will be used to produce solutions to the equations, and simulate the imbibition process. In section \ref{s:simple} we investigate the velocity and pressure distributions across the wetted region for particular wetting fronts, both using our numerical solutions and asymptotic analysis in regions of interest. Following this we produce numerical simulations of the wetting front's evolution for various initial conditions. Finally we summarise the results and propose future work in section \ref{s:conc}.
During our study we discover unexpected problems with the solutions to Darcy's law, revealing it to be an invalid equation when modelling a range of flows. This motivates the improvements discussed earlier, although none of them were proposed to solve problems like those that we discover.
\cleardoublepage\oldsection{Problem Formulation}\label{s:ProbForm}
Consider a non-deformable isotropic homogeneous porous solid initially filled with a gas, which in the process to be studied will be regarded as dynamically passive. We assume the solid is large enough to ignore all of its faces other than its flat horizontal top, through which an incompressible fluid is imbibed over a circular region of radius $R$. Outside the solid, we call the region of fluid the \textit{external reservoir} and the rest the \textit{atmosphere}. Within the solid the region of fluid is called the \textit{wetted region}, and the rest is the \textit{dry region}. Here we set out the model of the dynamics of the wetted region in the secondary continuum limit, i.e. the limit in which the ratio of the pore scale to the Darcy scale tends to zero; this limit shall be used unless otherwise stated.
We assume that the velocity, pressure and wetted region are axisymmetric, thus we choose to use cylindrical polar coordinates. The cylindrical axis is placed on the axis of symmetry with its coordinate $z$ such that $z<0$ in the solid and $z=0$ on its top, as shown in figure \ref{f:imbib}. The radial coordinate shall be $r$, the azimuth $\phi$, the time $t$ and the position $\boldsymbol{r}$. Our model will be developed in the $r$-$z$ plane, which contains all the information of the problem. Figure \ref{f:imbib} illustrates an example configuration. In it $\Omega_0$ is the wetted region, $\Gamma_1$ and $\Gamma_2$ are the boundaries to the atmosphere and external reservoir respectively, $\Gamma_3$ is on the axis of symmetry, and $C_0$, $C_1$, $C_2$ and $C_3$ are defined by the figure. $\Gamma_0$ is the boundary to the dry region, known as the \textit{wetting front}, that moves as the fluid imbibes. All other regions may also evolve with time.
For later convenience, we define the total boundary as $\partial \Omega_0 = \Gamma_0 \cup \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup C_0 \cup C_1 \cup C_2 \cup C_3$, and $\hat{\boldsymbol{n}}$ to be the outward pointing unit normal to $\partial \Omega_0$.
\begin{figure}[t]
\centering
\input{Figures/TikZ/Domain.tex}
\caption{Illustration of the axisymmetric imbibition process, with moving free surface $\Gamma_0$ and a droplet (or any appropriate external reservoir) resting on the solid being imbibed through $\Gamma_2$.}
\label{f:imbib}
\end{figure}
Let $\boldsymbol{u}(\boldsymbol{r},t)=u(r,z,t)\,\hat{\boldsymbol{r}}(\phi)+v(r,z,t)\,\hat{\boldsymbol{z}}$ denote the velocity and $p(\boldsymbol{r},t)=p(r,z,t)$ the pressure of the averaged flow on the Darcy scale. Using the assumptions of incompressibility, isotropy and homogeneity, the continuity equation can be written as
\begin{equation}\label{eq:cont_dimfull}
\nabla \cdot \boldsymbol{u}=0 \hspace{1cm} \forall \: \boldsymbol{r} \in \Omega_0.
\end{equation}
The momentum balance in the wetted region is given by Darcy's equation
\begin{equation} \label{eq:darcy_dimfull}
\boldsymbol{u}=-\frac{k}{\mu}\nabla (p+\rho g z) \hspace{1cm} \forall \: \boldsymbol{r} \in \Omega_0,
\end{equation}
where $k$ is the permeability of the porous solid, $\mu$ and $\rho$ are the dynamic viscosity and density of the imbibing fluid respectively, and $g$ the magnitude of free-fall acceleration due to gravity, all being constant. Combining \eqref{eq:cont_dimfull} and \eqref{eq:darcy_dimfull} we see that $\nabla^2 p=0$, so that, if the boundary is not moving, we require one boundary condition at every boundary point, while for a moving boundary we require two conditions.
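Spelling this combination out as a one-line check:
\begin{equation*}
0=\nabla \cdot \boldsymbol{u}=-\frac{k}{\mu}\nabla\cdot\nabla (p+\rho g z)=-\frac{k}{\mu}\left(\nabla^2 p+\rho g \nabla^2 z\right)=-\frac{k}{\mu}\nabla^2 p,
\end{equation*}
since $z$ is harmonic and $k$, $\mu$, $\rho$ and $g$ are constant.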
In general, fluid could pass through $\Gamma_1$ to form a new region of fluid above the surface or be drawn down creating a new de-wetting front. This would require the modelling of the process of creating new boundaries, as well as the formulation of boundary conditions that allow for the de-wetting process. For simplicity we assume that these processes do not occur and thus
\begin{equation} \label{eq:bc_gamma1_dimful}
\boldsymbol{u}\cdot\hat{\boldsymbol{n}}=0 \hspace{1cm} \forall \: \boldsymbol{r} \in \Gamma_1.
\end{equation}
The boundary $\Gamma_2$ must have a condition that matches the solution in the wetted region to the external reservoir. We consider the scales of pressure in the regions, using the same technique as in \cite{dynam_angle_shk}, measuring the pressure relative to that of the dynamically passive gas. Note that variables with a tilde represent those of the external reservoir. Define the surface tension to be $\sigma$, the representative pore radius to be $a$, and the velocity and length scales to be $U$ and $L$ respectively. Note that $\tilde{L}=L$. The scale of pressure in the wetted region is $P=2\sigma / a$ from the assumption that the pores are cylinders and the menisci are spherical caps, as shall be discussed later. The scale of pressure in the external reservoir is $\tilde{P}=\mu \tilde{U}/\tilde{L}$, from the Navier-Stokes equation in the bulk at Reynolds numbers that are small or approximately one. The pressure is continuous across the boundary, $p=\tilde{p}$ on $\Gamma_2$; marking dimensionless variables with a prime, this is
\begin{equation*}
p'=\frac{\mu \tilde{U}}{2 \sigma}\frac{a}{L}\tilde{p}' \hspace{1cm} \forall \: \boldsymbol{r} \in \Gamma_2.
\end{equation*}
The secondary continuum limit is the limit that $a/L\rightarrow0$, and hence the pressure in the reservoir is negligible compared to that of the wetted region. Therefore the continuum mechanical boundary condition is
\begin{equation} \label{eq:bc_gamma2_dimfull}
p=0 \hspace{1cm} \forall \: \boldsymbol{r} \in \Gamma_2.
\end{equation}
In a physical situation the external pressure can of course be chosen to be of the same order of magnitude as the Darcy pressure, but in most circumstances this requires significant engineering to achieve and would almost certainly not be the case in droplet imbibition.
On $\Gamma_3$, we have the condition of axisymmetry
\begin{equation} \label{eq:bc_gamma3_dimful}
\boldsymbol{u}\cdot\hat{\boldsymbol{n}}=0 \hspace{1cm} \forall \: \boldsymbol{r} \in \Gamma_3.
\end{equation}
Considering the boundary $\Gamma_0$, it is first assumed that the wetting front moves with the velocity of the fluid. Denoting the normal velocity of the wetting front by $v_s$, this assumption is stated mathematically as $v_s=\boldsymbol{u}\cdot\hat{\boldsymbol{n}}$. We define a function $F(\boldsymbol{r},t)$ such that $F=0$ on $\Gamma_0$; since the normal to the front is $\hat{\boldsymbol{n}}=\nabla F/|\nabla F|$ and the normal speed of the level set $F=0$ is $v_s=-(\partial F/\partial t)/|\nabla F|$, this assumption can be written in differential form as the kinematic boundary condition
\begin{equation} \label{eq:bc_timestep_F_dimfull}
\frac{\partial F}{\partial t} + \boldsymbol{u}\cdot\nabla F =0.
\end{equation}
For the dynamic boundary condition we use the standard model of wetting, which is mode 1 of Shikhmurzaev and Sprittles' model \cite{wetting_dynamics_shk}. Under the primary continuum limit the wetting front consists of the menisci within the pores. In this model representative pores are used, aligned normal to the surface, containing a representative meniscus that is a spherical cap forming the contact angle $\theta$ with the wall. The meniscus is advancing along the pore with velocity $u_1$ and pressure $p_1$ (both averaged across the pore cross section). The variables in the representative pore and of the secondary continuum limit are related by the equations
\begin{alignat}{2}
p&=p_1 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_0, \label{eq:bc_p1_dimfull}\\
\boldsymbol{u}\cdot\hat{\boldsymbol{n}}&=u_1 \hspace{1cm} & \forall \: \boldsymbol{r} &\in \Gamma_0. \label{eq:bc_u1_dimfull}
\end{alignat}
As discussed in the introduction, there is a function that relates the dynamic contact angle $\theta_d$ and the velocity of the meniscus, $G(\theta_d,u_1)=0$. Due to the spherical cap approximation for the meniscus shape, in a pore with representative radius $a$ and surface tension $\sigma$ the fluid has a pressure relative to the constant pressure of the dynamically passive gas given by
\begin{equation}
p_1 = -\frac{2\sigma}{a} \cos(\theta_d). \label{eq:bc_gamma0_dimfull}
\end{equation}
Finally we require an initial condition for \eqref{eq:bc_timestep_F_dimfull}. This initial condition must specify the shape of the wetting front, i.e. $F(\boldsymbol{r},0)=0$, although it is much easier to provide the curve along which it is zero. Thus we shall require functions $r(s)$ and $z(s)$ such that $F(\hat{\boldsymbol{r}}r(s)+\hat{\boldsymbol{z}}z(s),0)=0 \: \forall s\in[0,s_\mathrm{max}]$ where $s_\mathrm{max}$ is the end point of the wetting front. We also require that $\hat{\boldsymbol{r}}r(0)+\hat{\boldsymbol{z}}z(0)$ is the point $C_1$ and $\hat{\boldsymbol{r}}r(s_\mathrm{max})+\hat{\boldsymbol{z}}z(s_\mathrm{max})$ is the point $C_0$ at time $t=0$.
The equations we have discussed are
\begin{alignat}{2}
\nabla \cdot \boldsymbol{u}&=0 \hspace{1cm} \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Omega_0, \tag{\ref{eq:cont_dimfull}} \\
\boldsymbol{u}&=-\frac{k}{\mu}\nabla (p+\rho g z) \hspace{1cm} \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Omega_0, \tag{\ref{eq:darcy_dimfull}} \\
\frac{\partial F}{\partial t} + \boldsymbol{u}\cdot\nabla F &=0, \tag{\ref{eq:bc_timestep_F_dimfull}} \\
\boldsymbol{u}\cdot\hat{\boldsymbol{n}}&=0 \hspace{1cm} \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_1 \tag{\ref{eq:bc_gamma1_dimful} and \ref{eq:bc_gamma3_dimful}} \\
p&=0 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_2 \tag{\ref{eq:bc_gamma2_dimfull}} \\
p&=p_1 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_0, \tag{\ref{eq:bc_p1_dimfull}} \\
\boldsymbol{u}\cdot\hat{\boldsymbol{n}}&=u_1 \hspace{1cm} & \forall \: \boldsymbol{r} &\in \Gamma_0 \tag{\ref{eq:bc_u1_dimfull}}, \\
p_1 &= -\frac{2\sigma}{a} \cos(\theta_d) \tag{\ref{eq:bc_gamma0_dimfull}} \\
G(\theta_d,u_1)&=0.
\end{alignat}
In this work we will only consider the simplest of wetting processes, that of constant contact angle. We enforce $\theta_d=\theta_s$ where $\theta_s \in (0,\pi)$, therefore $G(\theta_d,u_1)=\theta_d-\theta_s$. The equations are now written in dimensionless form, where the scales of pressure, length, velocity and time are $P=2\sigma\cos(\theta_s)/a$, $L=R$, $U=(k/\mu L) P$ and $T=L/U$ respectively, using the same symbols for the dimensionless functions as we did for the dimensional ones. The only dimensionless parameter of the system is $\gamma=k \rho g/\mu U$.
\begin{subequations}\label{seq:dimless_system}\begin{alignat}{2}
\nabla \cdot \boldsymbol{u}&=0 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Omega_0 \label{eq:continuity}\\
\boldsymbol{u}&=-\nabla (p +\gamma z) \hspace{1cm} & \forall \: \boldsymbol{r} &\in \Omega_0 \label{eq:darcy}\\
\frac{\partial F}{\partial t} + \boldsymbol{u}\cdot\nabla F&=0 \label{eq:bc_timestep_F}\\
\boldsymbol{u}\cdot\hat{\boldsymbol{n}}&=0 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_1 \cup \Gamma_3 \label{eq:bc_gamma13}\\
p&=-1 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_0 \label{eq:bc_gamma0} \\
p&=0 \hspace{1cm} &\forall \: \boldsymbol{r} &\in \Gamma_2 \label{eq:bc_gamma2}
\end{alignat}\end{subequations}
The equations in \eqref{seq:dimless_system}, together with the initial conditions $r(s)$ and $z(s)$, form the closed set of equations to solve.
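To make the boundary-value problem concrete, the following is a minimal finite-difference sketch, entirely our own illustration and not the scheme developed in section \ref{s:Numerics}: it solves the axisymmetric Laplace problem of \eqref{seq:dimless_system} with $\gamma=0$ on a rectangular stand-in for the wetted region, with the bottom and right edges crudely representing the wetting front $\Gamma_0$:
\begin{verbatim}
import numpy as np

# Jacobi iteration for d2p/dr2 + (1/r) dp/dr + d2p/dz2 = 0 (gamma = 0)
# on 0 <= r <= 2, -1 <= z <= 0. BCs: p = 0 on the drawing area
# (z = 0, r <= 1); dp/dn = 0 on Gamma_1 (z = 0, r > 1) and on the
# axis r = 0; p = -1 on the bottom and right edges (a rectangular
# stand-in for the wetting front Gamma_0).
nr, nz = 81, 41
r = np.linspace(0.0, 2.0, nr)
h = r[1] - r[0]               # same spacing used in r and z
p = np.zeros((nr, nz))        # p[i, j]: i indexes r, j indexes z
for _ in range(20000):
    pn = p.copy()
    ri = r[1:-1, None]
    p[1:-1, 1:-1] = 0.25 * (pn[2:, 1:-1] + pn[:-2, 1:-1]
                            + pn[1:-1, 2:] + pn[1:-1, :-2]
                            + (h / (2.0 * ri)) * (pn[2:, 1:-1]
                                                  - pn[:-2, 1:-1]))
    p[0, :] = p[1, :]                      # axis: dp/dr = 0
    draw = r <= 1.0
    p[draw, -1] = 0.0                      # Gamma_2: p = 0
    p[~draw, -1] = p[~draw, -2]            # Gamma_1: dp/dz = 0
    p[:, 0] = -1.0                         # wetting front (bottom)
    p[-1, :] = -1.0                        # wetting front (right)
u_r, u_z = np.gradient(-p, h)              # Darcy: u = -grad p
\end{verbatim}
Even this crude setup reproduces the qualitative features discussed in section \ref{ss:pv_dist}: the pressure contours crowd towards the edge of the drawing area at $(1,0)$, where the velocity becomes very large.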
\subsection{Asymptotic Analysis} \label{ss:asymp}
\begin{figure}[tbp]
\centering
\input{Figures/TikZ/Asymptotics.tex}
\caption{The wedge that the regions around $C_1$ and $C_2$ tend to asymptotically, with subtended angle $\theta_1$.}
\label{f:asymptotics}
\end{figure}
Let us consider the domain asymptotically as we tend towards the contact lines $C_1$ and $C_2$. As we do this the curvature on the length scale we are observing tends to zero, thus the wetting front tends to a plane, the curvature of the contact line (due to it being a circle) tends to zero, and the domain of the flow tends towards a two-dimensional wedge. In both cases the boundary with $\hat{\boldsymbol{n}}\cdot\boldsymbol{u}=0$ (which is $\Gamma_1$) is horizontal, so we choose to consider the wedge depicted in figure \ref{f:asymptotics}, with contact angle $\theta_1$ and local polar coordinates $\rho$ and $\theta$ such that $z=\rho \sin(\theta-\theta_1)$. The local components of velocity are $u_\rho=\boldsymbol{u}\cdot\hat{\boldsymbol{\rho}}$ and $u_\theta=\boldsymbol{u}\cdot\hat{\boldsymbol{\theta}}$, where $\hat{\boldsymbol{\rho}}$ and $\hat{\boldsymbol{\theta}}$ are the basis vectors of the local polar coordinate system. These are related to the components $u$ and $v$ by
\begin{subequations}\label{seq:local_u_c1}\begin{align}
u&=-u_\rho \cos(\theta-\theta_1) + u_\theta \sin(\theta-\theta_1), \\
v&=u_\rho \sin(\theta-\theta_1)+u_\theta \cos(\theta-\theta_1),
\end{align}\end{subequations}
for $C_1$, and for $C_2$
\begin{subequations}\label{seq:local_u_c2}\begin{align}
u&=-u_\rho \cos(\theta) + u_\theta \sin(\theta), \\
v&=-u_\rho \sin(\theta)-u_\theta \cos(\theta).
\end{align}\end{subequations}
The equations in the wedge region are, using \eqref{eq:darcy} to eliminate velocity,
\begin{subequations}\begin{alignat}{2}
\nabla^2 p&=0 \hspace{1cm} &\forall \: \theta &\in [0,\theta_1], \\
p&=p_0 \hspace{1cm} & \mathrm{on} \: \theta&=0, \\
\frac{\partial (p+\gamma \rho \sin(\theta-\theta_1))}{\partial \theta}&=0 \hspace{1cm} & \mathrm{on} \: \theta&=\theta_1.
\end{alignat}\end{subequations}
We make the change of variables $\tilde{p}=p-p_0+\gamma \rho \sin(\theta-\theta_1)$ to obtain
\begin{subequations}\begin{alignat}{2}
\nabla^2\tilde{p}&=0 \hspace{1cm} &\forall \: \theta &\in [0,\theta_1], \\
\tilde{p}&=-\gamma \rho \sin(\theta_1) \hspace{1cm} & \mathrm{on} \: \theta&=0, \\
\frac{\partial\tilde{p}}{\partial \theta}&=0 \hspace{1cm} & \mathrm{on} \: \theta&=\theta_1.
\end{alignat}\end{subequations}
It is observed that, for $\theta_1\neq\pi/2$, this set of equations has a solution
\begin{equation}
\tilde{p}_1=-\gamma \rho \sin(\theta_1) \left[\cos(\theta)+\tan(\theta_1)\sin(\theta)\right]
\end{equation}
and for $\theta_1=\pi/2$ it has a solution
\begin{equation}
\tilde{p}_2=\frac{2\gamma}{\pi} \sin(\theta) \rho \ln (\rho)-\gamma \rho \cos(\theta) \left[ 1 - \frac{2}{\pi}\theta \right].
\end{equation}
Defining $\hat{p}=\tilde{p}-\tilde{p}_1$ for $\theta_1\neq\pi/2$ and $\hat{p}=\tilde{p}-\tilde{p}_2$ for $\theta_1=\pi/2$, the equations become
\begin{subequations}\begin{alignat}{2}
\nabla^2\hat{p}&=0 \hspace{1cm} &\forall \: \theta &\in [0,\theta_1], \\
\hat{p}&=0 \hspace{1cm} & \mathrm{on} \: \theta&=0, \\
\frac{\partial\hat{p}}{\partial \theta}&=0 \hspace{1cm} & \mathrm{on} \: \theta&=\theta_1.
\end{alignat}\end{subequations}
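For completeness, here is a brief sketch of where the admissible exponents come from: seeking separated solutions $\hat{p}=\rho^{\lambda}f(\theta)$ gives
\begin{equation*}
\rho^{2}\nabla^{2}\hat{p}=\rho^{\lambda}\left[f''(\theta)+\lambda^{2}f(\theta)\right]=0, \qquad \hat{p}\big|_{\theta=0}=0 \;\Rightarrow\; f\propto\sin(\lambda\theta), \qquad \left.\frac{\partial\hat{p}}{\partial\theta}\right|_{\theta=\theta_1}=0 \;\Rightarrow\; \cos(\lambda\theta_1)=0,
\end{equation*}
so that $\lambda=\left(n+\frac{1}{2}\right)\pi/\theta_1$ for integer $n$.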
Summing the separated solutions, the general solution is
\begin{equation}
\hat{p}=\sum_{n\in\mathbb{Z}} \left[ c_n \rho^{(n+\frac{1}{2})\frac{\pi}{\theta_1}} \sin\left(\left[n+\frac{1}{2}\right]\frac{\pi}{\theta_1}\theta\right)\right],
\end{equation}
where the values $c_n$ are arbitrary constants. Observing that in our numerical solution the pressure is bounded, the sum is truncated to $n\geq0$; this is the solution obtained in \cite[(3.11)]{dynam_angle_shk}, except that there the velocity was also restricted to be bounded, and only the case $\theta_1=\pi$ was considered. For $\theta_1\neq\pi/2$ we obtain the solution
\begin{subequations}\begin{align}
p&=\sum_{n=0}^{\infty} \left[ c_n \rho^{(n+\frac{1}{2})\frac{\pi}{\theta_1}} \sin\left(\left[n+\frac{1}{2}\right]\frac{\pi}{\theta_1}\theta\right)\right]
-\gamma \rho \frac{\sin(\theta)}{\cos(\theta_1)}+p_0
,\\
u_\rho&=-\sum_{n=0}^{\infty} \left[ c_n \left(n+\frac{1}{2}\right)\frac{\pi}{\theta_1} \rho^{(n+\frac{1}{2})\frac{\pi}{\theta_1}-1} \sin\left(\left[n+\frac{1}{2}\right]\frac{\pi}{\theta_1}\theta\right)\right]
+ \gamma \sin(\theta_1) \left[\cos(\theta)+\tan(\theta_1)\sin(\theta)\right]
,\\
u_\theta&=-\sum_{n=0}^{\infty} \left[ c_n \left(n+\frac{1}{2}\right)\frac{\pi}{\theta_1} \rho^{(n+\frac{1}{2})\frac{\pi}{\theta_1}-1} \cos\left(\left[n+\frac{1}{2}\right]\frac{\pi}{\theta_1}\theta\right)\right]
+ \gamma \sin(\theta_1) \left[\tan(\theta_1)\cos(\theta)-\sin(\theta)\right] \label{eq:asymp_uth_1}
,
\end{align}\end{subequations}
and for $\theta_1=\pi/2$
\begin{subequations}\begin{align}
p&=\sum_{n=0}^{\infty} \left[ c_n \rho^{2n+1} \sin\left(\left[2n+1\right]\theta\right)\right]
+ \frac{2\gamma}{\pi} \sin(\theta) \rho \ln (\rho)+\gamma \rho \cos(\theta) \frac{2}{\pi}\theta + p_0
,\\
u_\rho&=-\sum_{n=0}^{\infty} \left[ c_n \left(2n+1\right) \rho^{2n} \sin\left(\left[2n+1\right]\theta\right)\right]
-\frac{2\gamma}{\pi} \sin(\theta) \left[\ln (\rho)+1\right]-\gamma \cos(\theta) \left[\frac{2}{\pi}\theta-1\right]
,\\
u_\theta&=-\sum_{n=0}^{\infty} \left[ c_n \left(2n+1\right) \rho^{2n} \cos\left(\left[2n+1\right]\theta\right)\right]
-\frac{2\gamma}{\pi} \cos(\theta) \left[\ln (\rho)+1\right]+\gamma \sin(\theta) \left[\frac{2}{\pi}\theta-1\right] \label{eq:asymp_uth_2}
.
\end{align}\end{subequations}
Let us now consider the leading order solutions as $\rho \rightarrow 0$ in the cases relevant to our model. We shall deduce the components of velocity $u$ and $v$ using equations \eqref{seq:local_u_c1} and \eqref{seq:local_u_c2}; for these components the leading order terms sometimes cancel, in which case the next order term is stated for that component only. In all cases only sufficient terms to understand the numerical results of section \ref{ss:num_asymp} are presented.
For the region around $C_2$ the wedge subtends an angle $\theta_1=\pi$ and $p_0=0$; to leading order
\begin{subequations}\begin{align}
p& \sim c_0 \rho^{\frac{1}{2}} \sin\left(\frac{1}{2}\theta\right) ,\\
u_\rho& \sim -c_0 \frac{1}{2}\rho^{-\frac{1}{2}} \sin\left(\frac{1}{2}\theta\right) ,\\
u_\theta& \sim -c_0 \frac{1}{2} \rho^{-\frac{1}{2}} \cos\left(\frac{1}{2}\theta\right) ,\\
u &\sim -c_0 \frac{1}{2} \rho^{-\frac{1}{2}} \left[-\sin\left(\frac{1}{2}\theta\right)\cos(\theta)+\cos\left(\frac{1}{2}\theta\right)\sin(\theta)\right], \\
v &\sim -c_0 \frac{1}{2} \rho^{-\frac{1}{2}} \left[-\sin\left(\frac{1}{2}\theta\right)\sin(\theta)-\cos\left(\frac{1}{2}\theta\right)\cos(\theta)\right].
\end{align}\end{subequations}
For $C_1$ we have $p_0=-1$. We consider four cases. Firstly, for $\theta_1>\pi/2$, or for $\gamma=0$ and $\theta_1\neq\pi/2$, to leading order
\begin{subequations}\begin{align}
p+1& \sim c_0 \rho^{\frac{\pi}{2\theta_1}} \sin\left(\frac{\pi}{2\theta_1}\theta\right) ,\label{eq:asymp_c1_1_p}\\
u_\rho& \sim -c_0 \frac{\pi}{2\theta_1} \rho^{\frac{\pi}{2\theta_1}-1} \sin\left(\frac{\pi}{2\theta_1}\theta\right) ,\\
u_\theta& \sim -c_0 \frac{\pi}{2\theta_1} \rho^{\frac{\pi}{2\theta_1}-1} \cos\left(\frac{\pi}{2\theta_1}\theta\right), \label{eq:asymp_c1_1_theta}\\
u &\sim -c_0 \frac{\pi}{2\theta_1} \rho^{\frac{\pi}{2\theta_1}-1}\left[ -\sin\left(\frac{\pi}{2\theta_1}\theta\right) \cos(\theta-\theta_1) + \cos\left(\frac{\pi}{2\theta_1}\theta\right) \sin(\theta-\theta_1)\right], \\
v &\sim -c_0 \frac{\pi}{2\theta_1} \rho^{\frac{\pi}{2\theta_1}-1}\left[ \sin\left(\frac{\pi}{2\theta_1}\theta\right) \sin(\theta-\theta_1) + \cos\left(\frac{\pi}{2\theta_1}\theta\right) \cos(\theta-\theta_1)\right].
\end{align}\end{subequations}
For the components of velocity the power of $\rho$ is $\frac{\pi}{2\theta_1}-1$, which is negative for $\theta_1>\pi/2$, so in that case all components are singular; for $\gamma=0$ and $\theta_1<\pi/2$ the power is positive and the velocities vanish at the contact line. Secondly, for $\theta_1<\pi/2$ and $\gamma\neq0$,
\begin{subequations}\begin{align}
p+1& \sim -\gamma \rho \frac{\sin(\theta)}{\cos(\theta_1)} ,\\
u_\rho& \sim \gamma \sin(\theta_1) \left[\cos(\theta)+\tan(\theta_1)\sin(\theta)\right] ,\\
u_\theta& \sim \gamma \sin(\theta_1) \left[\tan(\theta_1)\cos(\theta)-\sin(\theta)\right] ,\\
u &\sim -\gamma \sin^2(\theta_1) [\tan(\theta_1)+\cot(\theta_1)], \\
v &\sim -c_0 \frac{\pi}{2\theta_1} \rho^{\frac{\pi}{2\theta_1}-1}\left[ \sin\left(\frac{\pi}{2\theta_1}\theta\right) \sin(\theta-\theta_1) + \cos\left(\frac{\pi}{2\theta_1}\theta\right) \cos(\theta-\theta_1)\right].
\end{align}\end{subequations}
The radial component of velocity is constant, and the axial component has a power of $\rho$ greater than zero, so is finite. The final two cases are for $\theta_1=\pi/2$: for $\gamma=0$
\begin{subequations}\begin{align}
p+1 & \sim \rho\left[ c_0 \sin(\theta) \right] ,\\
u_\rho & \sim -c_0 \sin(\theta) ,\\
u_\theta & \sim -c_0 \cos(\theta) ,\\
u & \sim c_0 ,\\
v & \sim 3 c_1 \rho^2 \sin(2\theta) ,
\end{align}\end{subequations}
so both components are finite. For $\gamma\neq0$
\begin{subequations}\begin{align}
p+1 & \sim \rho \ln (\rho) \left[ \frac{2\gamma}{\pi} \sin(\theta) \right] + \rho \left[c_0 \sin(\theta) + \gamma \cos(\theta) \frac{2}{\pi} \theta \right] ,\\
u_\rho & \sim \ln (\rho) \left[-\frac{2\gamma}{\pi} \sin(\theta) \right] +\left[-c_0 \sin(\theta) - \frac{2 \gamma}{\pi} \sin(\theta) - \gamma \cos(\theta) \left(\frac{2}{\pi}\theta-1\right)\right] ,\\
u_\theta & \sim \ln (\rho) \left[-\frac{2\gamma}{\pi} \cos(\theta) \right] +\left[-c_0 \cos(\theta) - \frac{2 \gamma}{\pi} \cos(\theta) + \gamma \sin(\theta) \left(\frac{2}{\pi}\theta-1\right)\right] ,\\
u & \sim \ln (\rho)\left[\frac{2\gamma}{\pi}\right] + \left[ c_0 + \frac{2\gamma}{\pi} \right] ,\\
v & \sim \gamma\left[\frac{2}{\pi}\theta-1\right] ,
\end{align}\end{subequations}
so the radial component is singular and the axial component is multivalued at $C_1$.
Curves of the forms obtained above are plotted in \cref{sf:simple_c1_acute,sf:simple_c1_obtuse,sf:simple_c1_right,sf:simple_c1_right_ng}, and fit the data plotted very well. We shall next discuss the physical meaning of these equations.
\subsection{Pressure and Velocity Distributions} \label{ss:pv_dist}
\begin{figure}[ptb]
\centering
\begin{tabular}{c}
\begin{subfigure}[t]{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Small_and_medium/uvp_small_region_nograv.eps}
\caption{$\gamma=0$}
\label{f:uvp_small_ng}
\end{subfigure}
\\
\begin{subfigure}[t]{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Small_and_medium/uvp_small_region_grav.eps}
\caption{$\gamma=0.2$}
\label{f:uvp_small_g}
\end{subfigure}
\end{tabular}
\caption{Two plots of velocity and pressure for the region $\theta_1=\pi/2$, $r_f=1.2$ and $H=0.6$, (a) without gravity and (b) with gravity. In this domain the pressure gradient is sufficiently high that the gravitational effect is negligible and the pressure and velocity distributions are almost identical.}
\label{sf:uvp_small}
\end{figure}
\begin{figure}[ptb]
\centering
\includegraphics[width=0.8\textwidth]{Figures/Small_and_medium/flpr_small_region_nograv.eps}
\caption{The distribution of axial velocity along the surface of the porous substrate, $z=0$, in the case plotted in figure \ref{f:uvp_small_ng}.}
\label{f:v_small}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c}
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Small_and_medium/uvp_medium_region_nograv.eps}
\caption{$\gamma=0$}
\label{f:uvp_medium_ng}
\end{subfigure}
\\
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Small_and_medium/uvp_medium_region_grav.eps}
\caption{$\gamma=0.2$}
\label{f:uvp_medium_g}
\end{subfigure}
\end{tabular}
\caption{Two plots of velocity and pressure for the region $\theta_1=\pi/2$, $r_f=2$ and $H=2$, (a) without gravity and (b) with gravity. In this domain gravity causes the fluid to flow downward, as can be seen from the streamlines, especially the streamline at largest $r$.}
\label{sf:uvp_medium}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c}
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Small_and_medium/uvp_zoom_medium_region_nograv.eps}
\caption{$\gamma=0$}
\label{f:uvp_z_medium_ng}
\end{subfigure}
\\
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Small_and_medium/uvp_zoom_medium_region_grav.eps}
\caption{$\gamma=0.2$}
\label{f:uvp_z_medium_g}
\end{subfigure}
\end{tabular}
\caption{An enlargement around $C_1$ for the plots in figure \ref{sf:uvp_medium}. For the case without gravity the streamlines all intersect with the free surface approximately at the perpendicular, meaning that the free surface will propagate approximately uniformly. With gravity there is a region of the free surface that is not fed by the drawing area, the region near the contact line receding and that below advancing. All of the fluid in this region is noticeably affected by gravity.}
\label{sf:uvp_z_medium}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c}
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Small_and_medium/uvp_zoom_c2_medium_region_nograv.eps}
\caption{$\gamma=0$}
\label{f:uvp_zc2_medium_ng}
\end{subfigure}
\\
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Small_and_medium/uvp_zoom_c2_medium_region_grav.eps}
\caption{$\gamma=0.2$}
\label{f:uvp_zc2_medium_g}
\end{subfigure}
\end{tabular}
\caption{An enlargement around $C_2$ for the plots in figure \ref{sf:uvp_medium}. The plots appear similar; the high pressure gradient means that the effect of gravity is negligible. It is clear that there is a high volume flux through $\Gamma_2$ local to $C_2$.}
\label{sf:uvp_zc2_medium}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c}
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Angles/uvp_angle_obtuse_nograv.eps}
\caption{$\gamma=0$}
\label{f:uvp_obtuse_ng}
\end{subfigure}
\\
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Angles/uvp_angle_obtuse_grav.eps}
\caption{$\gamma=0.2$}
\label{f:uvp_obtuse_g}
\end{subfigure}
\end{tabular}
\caption{Two plots of velocity and pressure for the region $\theta_1=0.8\pi$, $r_f=1.5$ and $H=2$, (a) without gravity and (b) with gravity. This plot has the same qualitative features as figure \ref{sf:uvp_medium}.}
\label{sf:uvp_obtuse}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c}
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Angles/uvp_zoom_angle_obtuse_nograv.eps}
\caption{$\gamma=0$}
\label{f:uvp_z_obtuse_ng}
\end{subfigure}
\\
\begin{subfigure}[t]{0.95\textwidth}
\centering
\includegraphics[height=0.43\textheight]{Figures/Angles/uvp_zoom_angle_obtuse_grav.eps}
\caption{$\gamma=0.2$}
\label{f:uvp_z_obtuse_g}
\end{subfigure}
\end{tabular}
\caption{An enlargement around $C_1$ for the plots in figure \ref{sf:uvp_obtuse}. We see that the plots are qualitatively the same local to $C_1$, the dominant effect being that the velocity of the wetting front is highest near $C_1$ and decreases along the front.}
\label{sf:uvp_z_obtuse}
\end{figure}
In this section we plot the velocity and pressure distributions within the wetted region for various wetting fronts, to give the reader a qualitative understanding of the solution before we perform the asymptotic analysis. We do this for solutions that do not include gravity ($\gamma=0$) and for a small, but certainly not negligible, gravitational effect ($\gamma=0.2$). See figure \ref{f:uvp_small_ng} as an example of such a plot. The plot is in the $r$-$z$ plane, with the wetting front plotted in black. Pressure contours are plotted in colours that represent the value of pressure, red for high pressures and blue for low pressures. Example streamlines are plotted in grey, and a small number of velocity vectors are plotted in black. In this plot we also label some intervals of the boundary which will be used for other cases but not labelled on their plots. The intervals $F$ and $F'$ extend from the axis of symmetry to the first streamline plotted on $\Gamma_2$ and $\Gamma_0$ respectively. $L$ and $L'$ are the parts of $\Gamma_2$ and $\Gamma_0$ between the last streamline plotted and the contact line $C_2$ and $C_1$ respectively.
This first pair of plots, figure \ref{sf:uvp_small}, reveals that, for a small domain, the pressure gradient dominates the effect of gravity such that the plots appear almost identical. Looking more closely, the separation of the pressure contours close to the wetting front is approximately the same along the length of the wetting front. Due to the velocity being proportional to the pressure gradient, the wetting front should propagate approximately uniformly along its length, at least at first. The pressure contours close to the point $C_2$ at $(1,0)$ are very closely packed, revealing enormous velocities close to this point. Finally, the streamlines that enter the wetted region at large $r$ spread out much more than those that enter at small $r$. As the wetting front advances, the volume increase due to the advancement of a segment of the wetting front between two streamlines must come from the influx of volume through the drawing area between these same streamlines. Therefore, the volume flux through the section of the drawing area $L$ must be sufficient to supply the segment of the wetting front $L'$. The area it has to supply is enormous in comparison to the area that is supplied by the section $F$, which is $F'$, especially when axisymmetry is taken into account. The volume flux through the drawing area is vastly greater near $r=1$ than it is near $r=0$. This is seen clearly in figure \ref{f:v_small}: the axial velocity is singular at $C_2$, which is why $L$ can supply enough fluid to feed $L'$.
Figure \ref{sf:uvp_medium} shows how, for a larger domain, gravity has an effect. The pressure contours are spread out close to the wetting front, revealing the smaller pressure gradient, which is now of the same order as the gravitational effect. We also see that, for the plot with gravity, the pressure gradient close to $\Gamma_1$ is angled upward to counter gravity, which is the result of enforcing that the normal velocity on this surface is zero. The streamlines are angled downwards in the case with gravity in comparison to the case without, showing how the fluid is falling under its action. In the plot \ref{f:uvp_medium_g} we see even more starkly how much greater the segment of the front fed by the section of the drawing area $L$ is than the segment fed by $F$. In fact, in this case, it is too large. Figure \ref{f:uvp_z_medium_g} shows an enlargement around $C_1$. We see that there is a region of the wetting front around $C_1$ that is not fed by the drawing area, and is cut off by a streamline that starts at around $(1.96,0)$. In this cut-off region the fluid at the top recedes and that at the bottom advances, as the fluid `slumps' under the action of gravity. The plot without gravity, figure \ref{f:uvp_z_medium_ng}, does not show this behaviour; instead the pressure gradient is very uniform and the velocities at the wetting front are approximately perpendicular to it. In this case the front will advance uniformly.
Examining the behaviour local to $C_2$, figure \ref{sf:uvp_zc2_medium} again reveals that the velocities close to the contact line are very large. In addition, the streamlines that start at a larger value of $r$ spread out from their neighbours more than those at smaller $r$. It is also of note that the pressure and velocity distribution around this point is not affected by gravity, due to the huge pressure gradients.
The next case that we consider is that of an obtuse contact angle, $\theta_1>\pi/2$. The large scale pressure and velocity distribution is qualitatively the same as for the previous case, with gravity causing the velocities far from the drawing area to fall rather than rise. However, the pressure distribution near $\Gamma_1$ appears similar in the two cases. Figure \ref{sf:uvp_z_obtuse} is an enlargement around $C_1$, and it is seen that in this region the pressure and velocity fields are indeed very similar, appearing identical very close to $C_1$. The pressure contours are very closely spaced around $C_1$, and spread out as we move along the wetting front; from this we deduce that the velocity is very large at the contact line and smaller further from it, causing the contact angle to reduce as the front propagates.
In this section we have found that there is some interesting behaviour close to the points $C_1$ and $C_2$. We shall next look at the results in these regions and investigate the leading order terms that dominate the behaviour.
\subsection{Local Behaviour in Numerical Results} \label{ss:num_asymp}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c1_acute_rh.eps}
\caption{$\square$ is $-u/\sqrt{3}$ with line $0.5$ and $\triangle$ is $-v/(\sin(3\theta/2)\sin(\theta-\pi/3)+\cos(3\theta/2)\cos(\theta-\pi/3))$ with curve $2\sqrt{\rho}$.}
\label{f:simple_c1_acute_rh}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c1_acute_th.eps}
\caption{$\Diamond$ is $u/\sqrt{3}$ with line at $-0.5$ and $\triangle$ is $v/\sqrt{\rho}$ with curve $-2[\sin(3\theta/2)\sin(\theta-\pi/3)+\cos(3\theta/2)\cos(\theta-\pi/3)]$.}
\label{f:simple_c1_acute_th}
\end{subfigure}
\end{tabular}
\caption{Plots for $\theta_1=\pi/3$, $\gamma=0.5$ around $C_1$.}
\label{sf:simple_c1_acute}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c1_obtuse_rh.eps}
\caption{$\square$ is $u/(-\sin(3\theta/4)\cos(\theta-2\pi/3)+\cos(3\theta/4)\sin(\theta-2\pi/3))$ and $\triangle$ is $v/(\sin(3\theta/4)\sin(\theta-2\pi/3)+\cos(3\theta/4)\cos(\theta-2\pi/3))$ with curve $0.64/\rho^{1/4}$.}
\label{f:simple_c1_obtuse_rh}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c1_obtuse_th.eps}
\caption{$\Diamond$ is $u \rho^{1/4}$ with curve $0.64[-\sin(3\theta/4)\cos(\theta-2\pi/3)+\cos(3\theta/4)\sin(\theta-2\pi/3)]$ and $\triangle$ is $v \rho^{1/4}$ with curve $0.64[\sin(3\theta/4)\sin(\theta-2\pi/3)+\cos(3\theta/4)\cos(\theta-2\pi/3)]$.}
\label{f:simple_c1_obtuse_th}
\end{subfigure}
\end{tabular}
\caption{Plots for $\theta_1=2\pi/3$, $\gamma=0.5$ around $C_1$.}
\label{sf:simple_c1_obtuse}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c1_right_rh.eps}
\caption{$\square$ is $u-0.5$ with line $\ln(\rho)/\pi$ and $\triangle$ is $v/(2\theta/\pi-1)$ with line $0.5$.}
\label{f:simple_c1_right_rh}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c1_right_th.eps}
\caption{$\Diamond$ is $(u-0.5)/\ln(\rho)$ with line $1/\pi$ and $\triangle$ is $v$ with line $(\theta/\pi)-0.5$.}
\label{f:simple_c1_right_th}
\end{subfigure}
\end{tabular}
\caption{Plots for $\theta_1=\pi/2$, $\gamma=0.5$ around $C_1$.}
\label{sf:simple_c1_right}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c1_right_nograv_rh.eps}
\caption{$\square$ is $u$ with line at $0.3$ and $\triangle$ is $v$.}
\label{f:simple_c1_right_ng_rh}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c1_right_nograv_th.eps}
\caption{$\Diamond$ is $u$ with line at $0.3$ and $\triangle$ is $v$.}
\label{f:simple_c1_right_ng_th}
\end{subfigure}
\end{tabular}
\caption{Plots for $\theta_1=\pi/2$, $\gamma=0$ around $C_1$.}
\label{sf:simple_c1_right_ng}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c2_rh.eps}
\caption{$\square$ is $u/(-\sin(\theta/2)\cos(\theta)+\cos(\theta/2)\sin(\theta))$ and $\triangle$ is $v/(-\sin(\theta/2)\sin(\theta)-\cos(\theta/2)\cos(\theta))$, curve is $0.7/\sqrt{\rho}$.}
\label{f:simple_c2_rh}
\end{subfigure}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Asymptotics/c2_th.eps}
\caption{$\Diamond$ is $u\sqrt{\rho}$ with curve $0.7[-\sin(\theta/2)\cos(\theta)+\cos(\theta/2)\sin(\theta)]$ and $\triangle$ is $v\sqrt{\rho}$ with curve $0.7[-\sin(\theta/2)\sin(\theta)-\cos(\theta/2)\cos(\theta)]$.}
\label{f:simple_c2_th}
\end{subfigure}
\end{tabular}
\caption{Plots for $\gamma=0$ around $C_2$.}
\label{sf:simple_c2}
\end{figure}
We produce numerical solutions for different $\theta_1$ in the regions around $C_1$ and $C_2$ to examine the locally dominant behaviour. Our purpose is to investigate observed multivalued points and singularities in the solutions for velocity, which shall reveal some fundamental issues in the current formulation of this phenomenon. Around each of $C_1$ and $C_2$ we use a local polar coordinate system with distance from the point of interest $\rho$ and angle $\theta$, as defined by figure \ref{f:asymptotics}. The curves and lines plotted on the graphs are the leading order terms from the asymptotic analysis performed in section \ref{ss:asymp}, and are included for later comparison. Also note that the scattering of points at small $\rho$ is due to numerical error when evaluating singularities with the current scheme.
We now consider the solutions around $C_1$ for different values of $\theta_1$ and $\gamma$ for very small values of $\rho$.
In figure \ref{sf:simple_c1_acute} we plot the values of the velocities $u$ and $v$ for $\theta_1=\pi/3$ and $\gamma=0.5$. From it we see that $u$ tends to a constant as $\rho$ becomes small, whilst the $\rho$ dependence of $v$ is $v=O(\sqrt{\rho})$. Therefore the solution is single valued and bounded, and can easily be used for simulating the propagation of the wetting front. In figure \ref{sf:simple_c1_obtuse} we consider the case $\theta_1=2\pi/3$ and $\gamma=0.5$. Here the velocities diverge as $u,v=O(\rho^{-1/4})$. For the case $\theta_1=\pi/2$ and $\gamma=0.5$, figure \ref{sf:simple_c1_right}, we see that $u$ diverges as $u=O(\ln(\rho))$, whilst $v$ is multivalued at $C_1$. For $\theta_1=\pi/2$ and $\gamma=0$, figure \ref{sf:simple_c1_right_ng}, these issues do not occur, $u$ being constant and $v=0$.
Considering the solution around $C_2$, we have singularities in both components of velocity as $u,v=O(1/\sqrt{\rho})$, as seen in figure \ref{sf:simple_c2}. The angular dependence is also plotted, although there are issues with our mesh resolution around this point, so the quality of the angular dependence is not high. The implications of the divergent velocities around $C_2$ are important and shall be discussed later.
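As an aside, the exponents quoted above can be extracted from the numerical data by a least-squares fit in log--log variables; the following minimal sketch is our own illustration (the function and variable names are ours):
\begin{verbatim}
import numpy as np

# Estimate the local exponent n in |u| ~ C rho^n near a contact line
# from values sampled along a ray theta = const approaching the point.
def local_exponent(rho, speed):
    n, logC = np.polyfit(np.log(rho), np.log(np.abs(speed)), 1)
    return n, np.exp(logC)

# e.g. for data behaving like u ~ 0.7 rho^(-1/2), as near C_2:
rho = np.logspace(-4, -2, 20)
u = 0.7 / np.sqrt(rho)
print(local_exponent(rho, u))   # approximately (-0.5, 0.7)
\end{verbatim}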
Next we will verify the results that we have obtained numerically using local asymptotic solutions. This will give a full picture of the range of behaviours that exist and allow us to physically interpret them.
\subsection{Interpretation of the Asymptotic Analysis} \label{ss:asym_phys}
\begin{figure}[ptb]
\centering
\input{Figures/TikZ/Conclusion.tex}
\caption{Example configuration of two-dimensional flow for which we can examine the validity of the equations in the problem formulation.}
\label{f:darcy_problem}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Asymptotic_Advance_Sing.tex}
\caption{$v_s = c\rho^{n}$ where $-1<n<0$, or $v_s = -c\ln(\rho)$}
\label{f:asymptotic_advance_sing}
\end{subfigure}
\vspace{0.5cm}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Asymptotic_Advance_Const.tex}
\caption{$v_s = c$}
\label{f:asymptotic_advance_const}
\end{subfigure}
\vspace{0.5cm}
\\
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Asymptotic_Advance_Root.tex}
\caption{$v_s = c\rho^n$ where $0<n<1$}
\label{f:asymptotic_advance_root}
\end{subfigure}
\vspace{0.5cm}
&
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Asymptotic_Advance_Lin.tex}
\caption{$v_s = c\rho$}
\label{f:asymptotic_advance_lin}
\end{subfigure}
\vspace{0.5cm}
\\
\vspace{1cm}
\begin{subfigure}[t]{0.47\textwidth}
\centering
\input{Figures/TikZ/Asymptotic_Advance_Poly.tex}
\caption{$v_s = c\rho^n$ where $n>1$}
\label{f:asymptotic_advance_poly}
\end{subfigure}
\end{tabular}
\caption{Illustrations of the different dynamics caused by the various powers of $\rho$ in the expansions of $v_s$. The left curve in each figure is the wetting front at some time, and the right curve is the front that it evolves into. The value of $c$ is a constant that is irrelevant to the dynamics (so long as it is non-zero); only the power matters. For values of $c$ that are negative, the front moves in the opposite direction.}
\end{figure}
In our analysis we found that the velocities are singular at $C_2$, and at $C_1$ for the case $\theta_1>\pi/2$ and for $\theta_1=\pi/2$ when $\gamma\neq0$. These singularities are all integrable, i.e. they diverge as $\rho^n$ where $n>-1$ or as $\ln(\rho)$. They are called integrable because the integral of the velocity over any finite surface will be finite, which means that the volume flux through any finite surface will be finite. The physical interpretation of the singularities is that a finite volume of fluid is moving through a point or line per unit time.
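To make the integrability explicit, consider as an illustration the flux through a small arc of radius $\rho$ centred on the singular point: for $u_\rho\sim C\rho^{n}$ with $n>-1$,
\begin{equation*}
\left|\int_{0}^{\theta_1} u_\rho\, \rho \,\mathrm{d}\theta\right| \lesssim \theta_1 |C| \rho^{\,n+1} \rightarrow 0 \quad \text{as} \quad \rho\rightarrow0,
\end{equation*}
and similarly $\rho\ln(\rho)\rightarrow0$ in the logarithmic case, so the divergence of the velocity is weak enough that the flux through any finite portion of boundary containing the point converges.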
The singularities are the symptom of a fundamental problem in our problem formulation. Darcy's equation is believed to describe slow creeping flows in porous materials where the effect of inertia is negligible. The singular velocities that we observe are inconsistent with this. For a particle that passes through one of these singular points its velocity will start out finite, become divergent and then become finite again. The velocity and acceleration of such a particle are certainly not small. Therefore, one of our equations must be un-physical. To examine which equation this is, let us temporarily consider the two dimensional flow depicted in figure \ref{f:darcy_problem}. We can be sure that the boundary condition $\boldsymbol{u}\cdot\hat{\boldsymbol{n}}=0$ is correct between the two contact lines because the surface of the porous medium is covered by an impermeable solid. The wetted region cannot penetrate the impermeable solid, nor can it retreat away from it because that would create a vacuum. The most that can happen is that CL2 recedes, causing the wetted region to `peel off,' but this still leaves a finite amount of time with the boundary condition valid. The boundary condition $p=0$ on the drawing area was established using analysis of the scales of the pressures. For this to be wrong there would have to be a boundary layer in the external reservoir just above the drawing area, but this cannot be the case due to the very low volume flux into the wetted region. Of course, if the singular velocity also existed in the external reservoir then this would cause very high pressures and velocity gradients which may change the solution, but this would not solve the fundamental problem. The slow imbibition of a highly viscous fluid into a low porosity solid should not cause a boundary layer due to high stresses in the external reservoir. Therefore, the singularities \emph{must} arise due to inadequacies in Darcy's equation, and not in the boundary conditions. Even \emph{if} the boundary conditions in the asymptotic analysis are not physically correct for \emph{this} phenomenon, they are physically correct for \textit{a} phenomenon, and so cannot be what is fundamentally wrong with the problem formulation. From this we identify the point $C_2$, the contact line CL1 at the edge of the drawing area, to be a place at which improvements to Darcy's equation could be tested. Such an improvement would almost certainly need to include inertial effects, and perhaps long range viscous diffusion effects also. One of the improvements discussed in the introduction may be what is required, although none of these were developed to rectify an issue like the one we face, and so this is unlikely.
However, the volume of fluid that passes through $\Gamma_2$ into the porous solid is likely to be almost the same for any improvement (since the fluid is drawn in to feed the advancement of the wetting front, which dictates the volume of fluid required) and will simply be distributed more evenly along the portion of $\Gamma_2$ that is close to $C_2$. It is also possible that the imbibition will be slower because the volume flux through the drawing area is suppressed. This requires further investigation.
We shall now discuss the behaviour local to $C_1$ in the various cases of the previous section, that is, the local distribution of the normal velocity of the front, which is $v_s=\boldsymbol{u}\cdot\hat{\boldsymbol{n}}=-u_\theta$ on $\theta=0$. We must assume that the behaviour occurring with Darcy's equation will be qualitatively the same as for an equation that suppresses the velocities that we see, and also for a formulation where a dynamic contact angle is used. Whether this is a reasonable assumption should be verified.
First, the case when $\theta_1<\pi/2$: from \eqref{eq:asymp_uth_1} the leading order terms in the expansion of the surface velocity are
\begin{equation}
v_s\sim - \gamma \sin(\theta_1)\tan(\theta_1) + c_0 \frac{\pi}{2\theta_1} \rho^{\frac{\pi}{2\theta_1}-1} + c_1 \frac{3\pi}{2\theta_1} \rho^{\frac{3\pi}{2\theta_1}-1}.
\end{equation}
These first three terms have been included because they reveal three of the five behaviours that the wetting front can undertake, the three that exist for this case. The first term is constant across the wetting front, so moves all of the wetting front equally, as illustrated in figure \ref{f:asymptotic_advance_const}. The value of the term is negative and so it is causing the wetting front to recede, although other terms will balance this in a wetting process, causing the front to advance. Physically this can be understood as gravity attempting to reshape the wetted region such that it extends further downwards and has less of its mass at its top. The second term is a power of $\rho$ that is between zero and one (for $\theta_1\in(\pi/4,\pi/2)$; for $\theta_1=\pi/4$ the power is one, and for smaller angles it exceeds one), as illustrated in figure \ref{f:asymptotic_advance_root}. This causes the contact angle $\theta_1$ to change rapidly and does not cause the contact line to advance. The third and all subsequent terms are of a higher power than one, illustrated in figure \ref{f:asymptotic_advance_poly}, so they do not affect the contact angle or move the contact line, and only have an influence further along the wetting front.
Next, the case when $\theta_1=\pi/2$: this time we extract the leading order terms from \eqref{eq:asymp_uth_2} to arrive at
\begin{equation} \label{eq:asym_vs_right}
v_s\sim \frac{2\gamma}{\pi} \ln(\rho) + \left[ \frac{2\gamma}{\pi} + c_0 \right] + 3 c_1 \rho^2 .
\end{equation}
The first term is singular, as illustrated by figure \ref{f:asymptotic_advance_sing}. By the sign of the coefficient we see that the contact line is receding at a singular velocity, gravity is rapidly increasing the contact angle as it causes the fluid to fall. From the second term we see that gravity is also causing the fluid to advance, so that the fluid is indeed receding near the surface of the solid substrate, and advancing below as in figure \ref{f:uvp_z_medium_g}. The constant term also includes an unspecified constant, which could cause the front to either advance or recede. The third term and all subsequent terms are, as before, of the type depicted in \ref{f:asymptotic_advance_poly}, affecting neither the contact angle nor the contact lines position.
Finally the case $\theta_1>\pi/2$ is very similar to the first case, except that the terms are of different orders and so have different effects. Ordering the terms by their dominance we see that
\begin{equation} \label{eq:asym_vs_obtuse}
v_s\sim c_0 \frac{\pi}{2\theta_1} \rho^{\frac{\pi}{2\theta_1}-1} - \gamma \sin(\theta_1)\tan(\theta_1) + c_1 \frac{3\pi}{2\theta_1} \rho^{\frac{3\pi}{2\theta_1}-1}.
\end{equation}
The term that is now first is singular, as illustrated by figure \ref{f:asymptotic_advance_sing}. If $\gamma\neq0$ then we would anticipate for $\theta_1\approx\pi/2$ that the contact line would be receding and the contact angle increasing, because this is the behaviour seen at $\pi/2$. For the contact angle to be physical it must be that eventually $c_0=0$ at some $\theta_1\in(\pi/2,\pi)$, otherwise the contact angle will increase to infinity. However, the behaviour may not be so trivial as there being a particular value of $\theta_1$ for each $\gamma$ at which $c_0=0$; it may be that the contact angle varies in a manner that depends on the geometry of the entire wetting front, increasing and decreasing until the entire wetting front has reached a suitable geometry. The second term has the same meaning as it did in the first case (where it was the first term). The third term causes different behaviour depending on $\theta_1$. For $\theta_1\in(\pi/2,3\pi/4)$ the power of $\rho$ is greater than unity, so it does not affect the contact angle or move the contact line. For $\theta_1=3\pi/4$ the power is one and the term affects the contact angle as illustrated in figure \ref{f:asymptotic_advance_lin}. For $\theta_1\in(3\pi/4,\pi)$ the power is between zero and one, so the term affects the contact angle as illustrated in figure \ref{f:asymptotic_advance_root}. All subsequent terms have power greater than one, and so do not affect the contact angle or move the contact line.
It is important to realise that the terms that we discuss add together, and so one term affecting the contact angle and another moving the wetting front in the far field will cause both the angle to change and the wetting front to move. In almost all cases the wetting front has the ability to advance, since the expansion contains either a constant term, or a singular term and higher power terms. The exceptions are $\theta_1<\pi/2$ with $\gamma=0$, where the contact angle must increase to $\pi/2$ before the contact line can advance, and $\theta_1=\pi/2$ with $\gamma=0$, where the contact angle cannot change.
For the cases where the wetting front does actually recede, we have the additional issue that our problem formulation is only valid for wetting processes. We must assume that the de-wetting and re-wetting processes have the same physics as the wetting process. This should be verified.
Numerically speaking, any simulations that are run will not be able to simulate the singular behaviour with the accuracy that is desired for prediction. The numerical scheme would need to be specially designed to cope with this behaviour, and ours was not because we did not anticipate such an un-physical solution. However, we can produce some qualitative predictions which may be useful in guiding future developments in this area.
\cleardoublepage\oldsection{Numerical and Asymptotic Analysis}\label{s:simple}
\subsection{Initial Conditions}
We shall first examine numerical solutions at a single instant of time, for which the wetting front geometry is given directly by the initial condition. Then we will look at asymptotic analysis that justifies the behaviour that we see. Finally we will look at some time evolutions of the wetting front. We make the simplification that the initial $\Gamma_0$ is a segment of an ellipse and subtends a contact angle $\theta_1$ to the boundary $\Gamma_1$; this angle is CA2 from the introduction. Let the radial coordinate of $C_1$ be $r_f$ and the intersection of the wetting front with the axis of symmetry be at $z=-H$. The equation for the wetting front is
\begin{subequations}\begin{align}
r(s)&=b\cos(s+s_0), \\
z(s)&=a-(a+H)\sin(s+s_0), \\
\intertext{where}
a&=\frac{H^2}{r_f\tan(\theta_1)-2H}, \\
b&=(a+H)\sqrt{\frac{r_f}{a\tan(\theta_1)}}, \\
\sin(s_0)&=\frac{a}{a+H}.
\end{align}\end{subequations}
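As a quick sanity check of this parametrization, the following minimal Python sketch (not part of our solver) evaluates the front for the values used later in figure \ref{sf:acute_set}; note that $r_f\tan(\theta_1)>2H$ is needed for $a>0$, and that $s=0$ and $s=\pi/2-s_0$ recover the contact line $C_1$ at $(r_f,0)$ and the point $(0,-H)$ on the axis of symmetry.
\begin{verbatim}
# Minimal sketch evaluating the initial front for theta_1 = 0.4*pi,
# r_f = 1.5, H = 2 (the values of figure sf:acute_set).
import numpy as np

theta1, r_f, H = 0.4*np.pi, 1.5, 2.0   # requires r_f*tan(theta1) > 2*H

a = H**2/(r_f*np.tan(theta1) - 2*H)
b = (a + H)*np.sqrt(r_f/(a*np.tan(theta1)))
s0 = np.arcsin(a/(a + H))

s = np.linspace(0.0, np.pi/2 - s0, 200)  # s=0 is C_1; s=pi/2-s0 is the axis
r = b*np.cos(s + s0)
z = a - (a + H)*np.sin(s + s0)

print(r[0], z[0])     # contact line C_1 at (r_f, 0)
print(r[-1], z[-1])   # axis of symmetry at (0, -H)
\end{verbatim}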
\input{SimpleBC/Distributions.tex}
\input{SimpleBC/Numerics.tex}
\input{SimpleBC/Asymptotics.tex}
\input{SimpleBC/Physical.tex}
\input{SimpleBC/Simulations.tex}
\subsection{Numerical Simulations} \label{ss:simulate}
\subsubsection{Large Initial Wetted Regions}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/Fr_angle_acute_nograv.eps}
\caption{$\gamma=0$}
\label{f:front_acute_ng}
\end{subfigure}
&
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/Fr_angle_acute_grav.eps}
\caption{$\gamma=0.2$}
\label{f:front_acute_g}
\end{subfigure}
\end{tabular}
\\
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Angles/FrCm_angle_acute.eps}
\caption{Comparison of the wetting fronts at time $t=1$, with the original front in black.}
\label{f:front_acute_comp}
\end{subfigure}
\\
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/AnCm_angle_acute.eps}
\caption{The variation of $\theta_1$ with time.}
\label{f:front_acute_ang}
\end{subfigure}
\caption{Plots depicting the dynamics of the wetting front for initial conditions $\theta_1=0.4\pi$, $r_f=1.5$ and $H=2$. Red curves are without gravity and blue are with gravity. (a) and (b) include the wetting front at times $t \in \{0,0.1,\ldots,1\}$.}
\label{sf:acute_set}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/Fr_angle_right_nograv.eps}
\caption{$\gamma=0$}
\label{f:front_right_ng}
\end{subfigure}
&
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/Fr_angle_right_grav.eps}
\caption{$\gamma=0.2$}
\label{f:front_right_g}
\end{subfigure}
\end{tabular}
\\
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Angles/FrCm_angle_right.eps}
\caption{Comparison of the wetting fronts at time $t=1$, with the original front in black.}
\label{f:front_right_comp}
\end{subfigure}
\\
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/AnCm_angle_right.eps}
\caption{The variation of $\theta_1$ with time.}
\label{f:front_right_ang}
\end{subfigure}
\caption{Plots depicting the dynamics of the wetting front for initial conditions $\theta_1=0.5\pi$, $r_f=1.5$ and $H=2$. Red curves are without gravity and blue are with gravity. (a) and (b) include the wetting front at times $t \in \{0,0.1,\ldots,1\}$.}
\label{sf:right_set}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/Fr_angle_obtuse_nograv.eps}
\caption{$\gamma=0$}
\label{f:front_obtuse_ng}
\end{subfigure}
&
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/Fr_angle_obtuse_grav.eps}
\caption{$\gamma=0.2$}
\label{f:front_obtuse_g}
\end{subfigure}
\end{tabular}
\\
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Angles/FrCm_angle_obtuse.eps}
\caption{Comparison of the wetting fronts at time $t=1$, with the original front in black.}
\label{f:front_obtuse_comp}
\end{subfigure}
\\
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Angles/AnCm_angle_obtuse.eps}
\caption{The variation of $\theta_1$ with time.}
\label{f:front_obtuse_ang}
\end{subfigure}
\caption{Plots depicting the dynamics of the wetting front for initial conditions $\theta_1=0.8\pi$, $r_f=1.5$ and $H=2$. Red curves are without gravity and blue are with gravity. (a) and (b) include the wetting front at times $t \in \{0,0.1,\ldots,1\}$.}
\label{sf:obtuse_set}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.65\textwidth]{Figures/Long_time/Fr_long_time.eps}
\caption{Plot of the wetting front at times $t\in\{0,1,\ldots,10\}$}
\label{f:front_long}
\end{subfigure}
\\
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{Figures/Long_time/An_long_time.eps}
\caption{The variation of $\theta_1$ with time.}
\label{f:front_long_ang}
\end{subfigure}
\\
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{Figures/Long_time/CL2_long_time.eps}
\caption{The variation of $r_f$ with time.}
\label{f:front_long_cl2}
\end{subfigure}
\caption{Plots depicting the dynamics of the wetting front for initial conditions $\theta_1=0.5\pi$, $r_f=1.5$ and $H=2$, with $\gamma=0.2$. This shows later times of the same situation as in figure \ref{sf:right_set}.}
\label{sf:long_set}
\end{figure}
\begin{figure}[ptb]
\centering
\includegraphics[width=\textwidth]{Figures/Long_time/uvp_zoom_long_time.eps}
\caption{Plot of velocity and pressure local to the contact line for the dynamics in figure \ref{sf:long_set} at time $t=15$. The front has reached a state where the fluid largely flows along it, causing the contact angle to become fixed and the contact line to slow.}
\label{f:uvp_z_long}
\end{figure}
The aim of this section is to simulate an already established wetted region in order to observe the contact angle variation and the advancement of the wetting front. We compare the advancement of the wetting front both without gravity ($\gamma=0$) and with gravity ($\gamma=0.2$). For a typical set of figures see figure \ref{sf:acute_set}. The plots without gravity are in red and those with gravity are in blue. Panels (a) and (b) plot the wetting front at uniformly distributed points in time, (c) compares the wetting fronts at the latest time simulated, and (d) shows the contact angle variation.
Figure \ref{sf:acute_set} depicts the dynamics for an initially acute contact angle. It shows that the contact line $C_1$ advances much more slowly with gravity than without; this is to be expected from the discussion of the asymptotic analysis in the previous section, where we showed that gravity `pulls' the wetting front back locally to $C_1$. Around the bottom of the front, close to $C_0$, gravity can be seen to aid the advancement of the wetting front, which is no surprise. The contact angle variation is consistent with our asymptotic analysis. Without gravity, the leading order terms in \eqref{eq:asym_vs_right} are linear and quadratic, neither of which causes contact angle variation. Our analysis showed that $\theta_1(t)=\pi/2$ is a solution; our numerical results now show that it is stable. With gravity, the contact angle initially increases very rapidly, as we argued it should for $\theta_1 \approx \pi/2$. It then slows to what appears to be a linear function of time; this cannot continue, since it would result in $\theta_1>\pi$, which is not physical. The behaviour at greater times will be discussed later.
Figure \ref{sf:right_set} is for the wetting front initially perpendicular to the substrate surface, and shows very similar results. The reader should briefly compare figures \ref{f:front_acute_comp} and \ref{f:front_right_comp}. We might naively expect the initially larger wetted region to remain larger, but this is not the case: the smaller region advances faster to catch up, producing indistinguishable results.
This is not the case for an initially obtuse contact angle, as depicted in figure \ref{sf:obtuse_set}, although this is likely because the initial wetted region occupies space that the previous two cases do not reach in the times that we consider. It is likely that, were we to run the simulation for perhaps as little as five more units of time, the wetted regions reached would be indistinguishable. The other interesting behaviour of this front is that of the contact angle. Without gravity the contact angle converges to $\pi/2$ as always, but with gravity it initially decreases and then begins to increase. Looking at figure \ref{f:front_obtuse_g}, at time $0.1$ the contact line $C_1$ has advanced greatly but the front local to it has not advanced as much. It would seem that this contact line is initially too close to the drawing area, and that during rapid advancements the contact angle $\theta_1$ moves closer to $\pi/2$. We will see a further example of this in the next section. With regard to our discussion of \eqref{eq:asym_vs_obtuse}, it would seem that $c_0$ does indeed change sign during advancements (see figure \ref{f:front_obtuse_ang}), and that the contact angle does not monotonically tend towards a prescribed value for all time, although it may do so as $t\rightarrow\infty$.
Finally, we consider large times for the wetting front under the effect of gravity. We impose the initial condition $\theta_1=\pi/2$ and simulate. From figure \ref{f:front_long} we see very clearly that the contact line $C_1$ slows down as it advances, and that the point $C_0$ moves at approximately uniform speed, the effect of gravity dominating the motion. From figure \ref{f:front_long_ang} we see that the contact angle does in fact tend to a constant value. We cannot reach any conclusions about the long-time limit of $r_f$ (the radial coordinate of $C_1$) from the data that we have; it may tend towards a constant value, or may continue to increase slowly without bound. We also plot the velocity and pressure local to $C_1$, in the style of section \ref{ss:pv_dist}, in figure \ref{f:uvp_z_long}. It shows that the velocities on the wetting front are almost tangential to it, the fluid falling under gravity, which is why the front is dramatically slower than without gravity, where the velocity distribution would be similar to that plotted in figure \ref{f:uvp_z_obtuse_ng}.
In this section we have presented the first set of results for the dynamics of the wetting front, but there is still much to investigate. The most important unresolved issues are how the limits of $\theta_1$ and $r_f$ as $t\rightarrow\infty$ depend on $\gamma$, and whether $r_f$ is even convergent. In addition, we discussed how the wetting fronts we produced appear to converge onto the same dynamics as time passes. It is conceivable that in the state space of all possible wetting fronts there is a stable manifold onto which all (or a large subset of) physical initial conditions converge and then move along as time passes. This stable manifold would have to be the set of wetting fronts produced from the initial condition $\Gamma_0=\{(r,z) \: : \: r\in[0,1] , z=0\}$, the wetted region of zero volume. It is stressed that, at present, this is only a possibility, although one worth investigating.
\subsubsection{A Small Initial Wetted Region}
\begin{figure}[ptb]
\centering
\begin{tabular}{c c}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Imbib/FrCm_imbib_0.eps}
\caption{$t\in\{0,1\cdot10^{-5},\ldots,5\cdot10^{-5}\}$}
\label{f:front_imbib_0}
\end{subfigure}
&
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Imbib/FrCm_imbib_1.eps}
\caption{$t\in\{0,2\cdot10^{-3},\ldots,10\cdot10^{-3}\}$}
\label{f:front_imbib_1}
\end{subfigure}
\end{tabular}
\\
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Figures/Imbib/FrCm_imbib_2.eps}
\caption{$t\in\{0,2\cdot10^{-2},\ldots,10\cdot10^{-2}\}$}
\label{f:front_imbib_2}
\end{subfigure}
\\
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Figures/Imbib/FrCm_imbib_3.eps}
\caption{$t\in\{0,2\cdot10^{-1},\ldots,10\cdot10^{-1}\}$}
\label{f:front_imbib_3}
\end{subfigure}
\caption{Plots depicting the dynamics of the wetting front for initial conditions $\theta_1=0.5\pi$, $r_f=1.005$ and $H=0.02$. Red curves are without gravity and blue are with gravity ($\gamma=0.2$); the initial front is plotted in black. Notice that (a) only includes part of the domain and (b) has a distorted aspect ratio.}
\label{sf:imbib_set}
\end{figure}
\begin{figure}[ptb]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Figures/Imbib/AnCm_imbib.eps}
\caption{Comparison of the contact angle variation}
\label{f:front_imbib_ang}
\end{subfigure}
\\
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Figures/Imbib/FluxCm_imbib.eps}
\caption{Comparison of the volume flux variation}
\label{f:front_imbib_flux}
\end{subfigure}
\\
\begin{subfigure}[t]{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Imbib/VolCm_imbib.eps}
\caption{Comparison of the total volume variation}
\label{f:front_imbib_vol}
\end{subfigure}
\caption{Plots of the measured quantities for the initial conditions used in figure \ref{sf:imbib_set}. Red curves are without gravity and blue are with gravity ($\gamma=0.2$).}
\label{sf:imbib_measure}
\end{figure}
In this section we simulate imbibition from a very small initial wetted region. This is to gain insight into the dynamics of imbibition into a porous solid without an initial wetted region; we choose this approach because our numerical scheme cannot solve over a region of zero volume. The results are plotted in figure \ref{sf:imbib_set}. Figure \ref{f:front_imbib_0} plots very early times; it is seen that initially the fluid flows mainly in the vertical direction (see $t=10^{-5}$) before the front begins to advance radially. We propose that this is because the initial condition is not part of the stable manifold in the state space of wetting fronts, and the front first converges upon it and then propagates along it. Examining the front for times up to as high as $t=2\cdot10^{-2}$, the front has a definite structure, with a flat horizontal profile from the axis of symmetry up to a particular radius, before curving up to meet the surface of the porous substrate approximately at the perpendicular. We assume that this behaviour is exhibited at all times for imbibition into a porous solid without an initial wetted region.
Such behaviour is not what is assumed in \cite{spread_porous}, where lubrication theory is used to examine the imbibition of a thin liquid drop. There it is assumed that, because the drop and wetted region are thin, the radial derivative of pressure, and thus the radial velocity, is small. This is evidently not the case here. At early times $\Gamma_1$ and $\Gamma_3$ are of approximately the same length, so the pressure changes by the same amount over a similar distance, and the radial and axial velocities are seen to be comparable.
At later times, $t>10^{-1}$, the wetting front evolves into an arc comparable to those seen in figures \ref{f:front_acute_comp} and \ref{f:front_right_comp}. We therefore propose that the front evolves as seen in figure \ref{f:front_long} for later times (of course we must ignore the plot of the initial condition from figure \ref{f:front_long}).
Figure \ref{sf:imbib_measure} contains plots of the measured quantities. It should be noted that the heuristics used to find the volume flux and contact angle are sub-optimal, which is why there are some jumps in the plots. These are not problems with the numerical solution (at least, no more so than has already been discussed), but rather with extracting information from it. In figure \ref{f:front_imbib_ang} we see the usual behaviour of $\theta_1=\pi/2$ being stable without gravity; with gravity the contact angle increases up to the stable value plotted in figure \ref{f:front_long_ang}. The volume flux into the wetted region, plotted in figure \ref{f:front_imbib_flux}, is found to be higher with gravity than without; this is because gravity aids the advancement of the wetting front, causing the volume of the wetted region to increase faster than it does under the pressure gradient alone. As time passes the pressure gradient decreases, because the wetted region is larger, and so the fluid imbibes more slowly. These features are seen again in the plot of the total volume of the wetted region, figure \ref{f:front_imbib_vol}.
\section{}
In our recent paper~\cite{casado:2018}, we investigate the generation of directed motion in a system consisting of a sphere immersed in a viscous fluid and subjected to time-periodic forces of zero average. Specifically: (i) we obtain necessary conditions for the existence of such directed motion, (ii) we derive an analytical expression for the average terminal velocity of the sphere in the adiabatic limit, (iii) we carry out a numerical analysis of the dependence of the average terminal velocity on the system parameters and compare the results with those obtained using the adiabatic approximation, and (iv) we explain some aspects of the observed phenomenology by means of symmetry arguments. It is important to emphasize that none of these results is questioned by Mart\'{\i}nez and Chac\'on (abbreviated hereinafter as M\&C) in their Comment~\cite{martinez:2021}. On the contrary, comparison of the top and bottom panels of Figs.~1 and 2 of~\cite{martinez:2021}, as well as of the top and middle panels of Fig.~3 of~\cite{martinez:2021}, only confirms the validity of the analytical expression for the average terminal velocity in the adiabatic limit derived in the commented paper [Eq.~(12) of~\cite{casado:2018}]. Note that this analytical expression is wrongly written in the Comment [compare Eq.~(3) of~\cite{martinez:2021} with Eq.~(12) of~\cite{casado:2018}].
M\&C focus their criticism mainly on the following text that appears below Fig.~3 of~\cite{casado:2018}: ``The curves in Fig.~3 also reveal that, for fixed values of the other parameters, there exists an optimal value of $\zeta$ which maximizes the second component of the average terminal velocity. Furthermore, as $\omega \tau$ increases, the maximum velocity decreases and its location shifts towards lower values of $\zeta$. It should be noted here that, in the lowest order, the general formalism developed in Refs.~[27, 28]'' (for Refs.~\cite{cuesta:2013,casado:2015}) ``leads to the approximate expression $\overline{V}_2(\zeta)\approx C \zeta^2(1-\zeta)$, where $C$ is independent of $\zeta$. This expression vanishes at $\zeta=0$ and $\zeta=1$, and displays a maximum at $\zeta=2/3$, thus qualitatively resembling the behavior seen in Fig.~3. However, it is unable to account for the dependence of the location of the maximum velocity on $\omega \tau$. This deficiency is not surprising, given that the above approximation is expected to be accurate only for small values of $f_0$ and, in Fig.~3, we have taken $f_0=100$.'' In reference to this text, M\&C claim ``\textit{This Comment will question some of the above statements.}'', but they do not explicitly indicate which of the above statements they question.
It should be pointed out that the aforementioned text of~\cite{casado:2018} only describes objective facts or easily verifiable mathematical facts.
Indeed, the first two sentences provide an objective description of what is observed in Fig.~3 of~\cite{casado:2018}. The third sentence indicates a mathematical fact that can easily be verified, namely, ``in the lowest order, the general formalism developed in Refs.~[27,~28]'' (for Refs.~\cite{cuesta:2013,casado:2015}) ``leads to the approximate expression $\overline{V}_2(\zeta)\approx C \zeta^2(1-\zeta)$, where $C$ is independent of $\zeta$''. The fourth sentence indicates some properties of the function $C \zeta^2(1-\zeta)$ that can easily be verified, namely, the function $C \zeta^2(1-\zeta)$ vanishes at $\zeta=0$ and at $\zeta=1$, displays a maximum at $\zeta=2/3$, and its shape qualitatively resembles the behavior seen in Fig.~3 of~\cite{casado:2018}. The fifth sentence indicates an objective fact, namely, the approximate expression $\overline{V}_2(\zeta)\approx C \zeta^2(1-\zeta)$ ``is unable to account for the dependence of the location of the maximum velocity on $\omega \tau$''. Finally, the sixth sentence indicates a well-known mathematical fact, namely, in a series expansion (in the present case, the series expansion obtained by applying the general formalism developed in~\cite{cuesta:2013,casado:2015}), the lowest-order approximation is expected to be accurate only for small values of the expansion parameter (in the present case, for small values of $f_0$) and, consequently, it is not surprising that such an approximation is inadequate if the expansion parameter is large (in the present case, $f_0=100$). None of these statements is questioned in the Comment~\cite{martinez:2021}.
Referring again to the aforesaid text of~\cite{casado:2018}, M\&C claim in the Abstract ``\textit{The author explains the dependence on the relative amplitude of the two harmonic components of the average terminal velocity from the perspective of a general formalism. In this Comment, this explanation is shown to be in general incorrect, \dots}'' and at the end of the Comment ``\textit{In conclusion, the theoretical explanation discussed in Ref.~[1] is in general incorrect, \dots}''. It is important to emphasize that at no point in~\cite{casado:2018} do we claim to explain ``\textit{the dependence on the relative amplitude of the two harmonic components of the average terminal velocity}'' from the perspective of the general formalism developed in~\cite{cuesta:2013,casado:2015}. In fact, what we do claim is that such a formalism in its lowest order approximation cannot explain (``is unable to account for'') an important aspect of such a dependence, specifically, ``the dependence of the location of the maximum velocity on $\omega \tau$''. Therefore, M\&C base their criticism on a misunderstanding or misrepresentation of what is actually claimed in~\cite{casado:2018}. Furthermore, the conclusion of the Comment~\cite{martinez:2021} is ambiguous and may mislead the reader into thinking that the theoretical explanations that are actually discussed in~\cite{casado:2018} (by using the adiabatic approximation and/or symmetry arguments) are incorrect.
The aforesaid text of~\cite{casado:2018} is used by M\&C to criticize the general formalism developed in~\cite{cuesta:2013,casado:2015}. They claim that such a formalism is in general incorrect because it cannot explain the dependence of the optimal value of $\zeta$ that maximizes $\overline{V}_2(\zeta)$ on a new parameter $\alpha$ not present in the commented paper [compare Eq.~(2) of~\cite{martinez:2021} with Eq.~(13) of~\cite{casado:2018}]. Their argument is based on the wrong premise that ``\textit{the prediction coming from the aforementioned general formalism~[2,3]}'' (for Refs.~\cite{cuesta:2013,casado:2015}) is ``\textit{that the dependence of the average terminal velocity should scale as $\overline{V}_2(\zeta)\approx C \zeta^2\alpha(1-\zeta)$}'' [see paragraph before Eq.~(5) of~\cite{martinez:2021}]. Actually, the general formalism developed in~\cite{cuesta:2013,casado:2015} predicts that the dependence of the average terminal velocity scales as $\overline{V}_2(\zeta)\approx C \zeta^2\alpha(1-\zeta)$ only in its lowest order approximation, which is expected to be accurate only for small values of $f_0$. For the values of $f_0$ considered in~\cite{martinez:2021} ($f_0= 100$ and $f_0=1$), it is to be expected that the lowest order approximation is not sufficient and higher order terms are required. In fact, it is not necessary to introduce the new parameter $\alpha$, as M\&C do, to realize that the lowest order approximation fails when the expansion parameter $f_0$ is not small enough. This issue is already mentioned in~\cite{casado:2018} when we say that the lowest order approximation ``is unable to account for the dependence of the location of the maximum velocity on $\omega \tau$'' observed in Fig.~3 of~\cite{casado:2018}. Therefore, the argument used by M\&C against the formalism developed in~\cite{cuesta:2013,casado:2015} is invalid because it is based on their confusion between the general formalism per se and its lowest order approximation.
As a matter of fact, the arguments presented in~\cite{martinez:2021} demonstrate unequivocally that the supposedly universal results predicted by the theory of ratchet universality developed in~\cite{chacon:2007b,chacon:2010} are in general incorrect. Indeed, on the first page of~\cite{martinez:2021}, M\&C claim that ``\textit{For any $\alpha>0$, we shall argue that the maximum velocity is reached for $\zeta=2\alpha/(1+2\alpha)$ as predicted by the theory of ratchet universality (RU) [4-6]}.'' [see also Eq.~(4) of~\cite{martinez:2021}]. Therefore, for $\alpha=1$, which is the only value considered in~\cite{casado:2018}, the theory of ratchet universality predicts that the maximum velocity is reached for $\zeta=2/3$, irrespective of the particular value of the dimensionless relaxation time $\omega \tau$. Remarkably, in this case the theory of ratchet universality predicts the same optimal value of $\zeta$ as the lowest order approximation of the general formalism developed in~\cite{cuesta:2013,casado:2015}. This prediction is clearly incorrect because, according to the results shown in Fig.~3 of~\cite{casado:2018}, the optimal value of $\zeta$ that maximizes the velocity depends on the value of $\omega \tau$.
In the Comment~\cite{martinez:2021}, M\&C also propose an explanation of the supposed effectiveness of the ratchet universality prediction given by Eq.~(4) of~\cite{martinez:2021}. This explanation is untenable since it is based on an equation [specifically, Eq.~(7) of~\cite{martinez:2021}] that is clearly incorrect. As can be easily shown, Eq.~(7) of~\cite{martinez:2021} cannot be correct because it predicts an inconsistent result for the terminal velocity. Indeed, integrating Eq.~(7) of~\cite{martinez:2021}, one obtains that
\begin{equation}
v_2(\theta)=-\frac{A a_0 \theta}{\omega \tau}+G(\theta),\label{error1}
\end{equation}
where, according to M\&C, $-A a_0$ is given by Eq.~(8) of~\cite{martinez:2021} and $G(\theta)$ is a $\pi$-periodic function of the dimensionless time $\theta= \omega t$ that can be expressed as a Fourier series. The terminal velocity can be obtained from Eq.~(\ref{error1}) by taking the long time limit $\theta \to \infty$. Thus, unless $A a_0\equiv 0$, which is not the case as can be seen in Fig.~4 of~\cite{martinez:2021}, Eq.~(7) of~\cite{martinez:2021} predicts an infinite terminal velocity. This prediction clearly contradicts the numerical results presented in~\cite{casado:2018} and~\cite{martinez:2021} and, consequently, Eq.~(7) of~\cite{martinez:2021} is incorrect. The statement made by M\&C that Eq.~(1) of~\cite{casado:2018} can be written as Eq.~(6) and (7) of~\cite{martinez:2021} is therefore also incorrect.
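The inconsistency can also be visualized directly from Eq.~(\ref{error1}): for any nonzero $A a_0$, $v_2(\theta)$ drifts linearly without bound, as the following minimal Python sketch illustrates (the values of $A a_0$, $\omega\tau$, and the $\pi$-periodic function $G$ are placeholder choices for illustration only).
\begin{verbatim}
# Any nonzero A*a0 in Eq. (1) of this Reply makes v2 grow linearly in
# magnitude, so no finite terminal velocity exists.
import numpy as np

Aa0, wt = 0.5, 1.0                     # assumed constants
theta = np.linspace(0.0, 50*np.pi, 20001)
G = 0.1*np.sin(2*theta)                # any pi-periodic function
v2 = -Aa0*theta/wt + G
print(v2[-1])                          # unbounded linear drift
\end{verbatim}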
In addition, M\&C propose a possible explanation of the dependence of the location of the maximum velocity on $\omega \tau$ using the vibrational mechanics approach developed in Ref.~\cite{blekhman:2000}. In order for this approach to be valid, the frequency of the ``\textit{fast}'' force should be much higher than the frequency of the ``\textit{slow}'' force. However, in the case considered by M\&C, the frequency of the ``\textit{fast}'' force is only twice the frequency of the ``\textit{slow}'' force [see paragraph above Eq.~(12) of~\cite{martinez:2021}] and, therefore, the vibrational mechanics approach is not justified. Furthermore, although M\&C do not give an explicit derivation of Eq.~(12) of~\cite{martinez:2021}, it is easy to see that it cannot be correct. Indeed, because of the term $\smash{V_2^{3/2}}$, Eq.~(12) of~\cite{martinez:2021} leads to unphysical complex values of the velocity $V_2$ if the initial velocity is negative, which does not make sense. Therefore, the conclusions that M\&C draw from Eq.~(12) of~\cite{martinez:2021} are, to say the least, unreliable.
It is worth mentioning that the asymptotic behavior $V_2\sim e^{-4 t/\tau}$ as $t\to \infty$ reported in~\cite{martinez:2021} is also incorrect.
It is easy to verify that Eq.~(12) of~\cite{martinez:2021} can be written as
\begin{equation}
\label{linear}
-2 \frac{d }{dt_{\tau}}\left(V_2^{-1/2}\right)+V_2^{-1/2}=\frac{4 \pi}{\delta_0},
\end{equation}
which is a linear differential equation for $\smash{V_2^{-1/2}}$. The general solution of Eq.~(\ref{linear}) is $\smash{V_2^{-1/2}=4\pi/\delta_0+ C e^{t_{\tau}/2}}$, where $C$ is a constant that depends on the initial conditions. Taking into account that $t_{\tau}=t/\tau$ (see~\cite{martinez:2021}), one obtains that $V_2=(4\pi/\delta_0+C e^{t/(2 \tau)})^{-2}$. Therefore, the correct asymptotic behavior as $t\to \infty$ is $V_2\sim e^{-t/\tau}$ and not $V_2\sim e^{-4 t/\tau}$, as M\&C claim.
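This is readily confirmed numerically: rewriting \eqref{linear} as $dV_2/dt_{\tau}=(4\pi/\delta_0)V_2^{3/2}-V_2$, the following minimal Python sketch (assuming $\delta_0=1$ and an initial velocity below $(\delta_0/4\pi)^2$, so that $C>0$) reproduces the closed-form solution and a late-time decay rate of $-1$ in $t_{\tau}$, i.e., $V_2\sim e^{-t/\tau}$.
\begin{verbatim}
# Numerical check, assuming delta_0 = 1 and V2(0) < (delta_0/(4*pi))**2
# so that C > 0 and the decaying branch is selected.
import numpy as np
from scipy.integrate import solve_ivp

delta0 = 1.0
# the linear ODE above rearranged for dV2/dt_tau
rhs = lambda t, V2: (4*np.pi/delta0)*V2**1.5 - V2

V0 = 4.0e-3
sol = solve_ivp(rhs, (0.0, 20.0), [V0], dense_output=True,
                rtol=1e-10, atol=1e-14)

C = V0**-0.5 - 4*np.pi/delta0
t = np.linspace(10.0, 20.0, 5)
V2_num = sol.sol(t)[0]
V2_exact = (4*np.pi/delta0 + C*np.exp(t/2))**-2
print(np.max(np.abs(V2_num/V2_exact - 1.0)))  # matches the closed form

slope = np.polyfit(t, np.log(V2_num), 1)[0]
print(slope)   # ~ -1, i.e. V2 ~ exp(-t/tau), not exp(-4t/tau)
\end{verbatim}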
A second criticism raised by M\&C refers to the following sentence that appears in the second paragraph of the Introduction section of~\cite{casado:2018}: ``In this class of models, the mechanism behind the generation of directed motion is basically harmonic mixing~[3,8,12].'' (for Refs.~\cite{hanggi:2009,salerno:2002,marchesoni:1986}). In reference to this sentence, M\&C claim ``\textit{Finally, the author claims that: ``In this class of models [for rocking ratchets in the presence of thermal noise], the mechanism behind the generation of directed motion
is basically harmonic mixing\dots .'' This is incorrect. \dots}''. Surprisingly, M\&C decide to leave out an important part of the sentence: the references~[3,8,12]. By hiding this information, they turn what is only a reference to previous results by other authors into a claim made by the author of~\cite{casado:2018}. Again, their criticism is not aimed at a new result reported in~\cite{casado:2018} but at a matter, the rectification via harmonic mixing, that is not even used in~\cite{casado:2018} and that has been widely discussed in the literature (see, e.g., ~\cite{hanggi:2009,salerno:2002,marchesoni:1986}). Therefore, a Comment on~\cite{casado:2018} is not the right place to question this matter.
In conclusion, the Comment~\cite{martinez:2021} does not question any of the new results reported in~\cite{casado:2018}. On the contrary, the numerical simulations contained in~\cite{martinez:2021} corroborate the validity of the analytical expression for the average terminal velocity in the adiabatic limit derived in the commented paper [Eq.~(12) of~\cite{casado:2018}]. The authors of the Comment focus their criticism on two issues of little relevance to the commented paper: the general formalism developed in~\cite{cuesta:2013,casado:2015} and the rectification via harmonic mixing discussed, for example, in~\cite{hanggi:2009,salerno:2002,marchesoni:1986}. These two issues barely take up five sentences in~\cite{casado:2018}. Their criticism is based on a misunderstanding or misrepresentation of what is actually claimed in~\cite{casado:2018}. In particular, the argument presented in~\cite{martinez:2021} against the general formalism developed in~\cite{cuesta:2013,casado:2015} is invalid because it is based on a confusion by the authors of the Comment between the general formalism per se and its lowest order approximation. In fact, the arguments contained in~\cite{martinez:2021} do not show that the formalism developed in~\cite{cuesta:2013,casado:2015} is in general incorrect, as the authors of the Comment suggest, but rather that the supposedly universal predictions of the theory of ratchet universality developed in~\cite{chacon:2007b,chacon:2010} are in general incorrect. In addition, it has been shown that other arguments presented in~\cite{martinez:2021} are invalid because they are based on incorrect equations.
\begin{acknowledgments}
J.C.-P. acknowledges financial support from the Ministerio de Econom\'{\i}a y Competitividad of Spain through Project No. FIS2017-86478-P and from the Junta de Andaluc\'{\i}a.
\end{acknowledgments}
\section{Introduction}
With the advent of the era of the Internet of Things (IoT), the unprecedented growth of latency-critical applications can hardly be served by {\em mobile cloud computing (MCC)} alone. To cater for low-latency requirements while alleviating the burden over backhaul networks, {\em mobile-edge computing (MEC)}, also interchangeably known as {\em fog computing}, has brought about a paradigm shift by extending cloud capabilities to the very edge of the radio access network (RAN) (see \cite{mao2017survey} and the references therein).
Both industry and academia have devoted constant effort to providing the next generation of mobile networks with ultra-reliable low-latency communications (uRLLC). Among pioneering industrial efforts on fog computing, Cisco has proposed fog computing as a promising candidate for the IoT architecture \cite{Bonomi2012ACM}. In academia, \cite{6574874,7264984,7442079,6675770} focused on one-to-one offloading schemes with one mobile user and one corresponding cloudlet, \cite{6678114,7511367} studied multiple-user cases with multiple edge servers, and \cite{6787113} considered multiple-to-one scenarios where multiple mobile users offload computation to one edge server.
Recently, the intrinsic collaborative properties of the input data for computation offloading were investigated for augmented reality (AR) in \cite{7906521}. In fact, in many mobile applications such as AR and virtual reality (VR), multiple mobile devices share parts of their computing input/output in common, making it possible to further reduce computing latency at the edge. In \cite{8332500}, some important insights on the interplay among the social interactions in VR mobile social networks were revealed, and a significant reduction in the end-to-end latency was achieved through stochastic optimization techniques. \cite{8335683} investigated potential spatial data correlation for VR applications to minimize the delay of accomplishing computation.
On another front, joint optimization of computation offloading with communications resources (such as power, bandwidth, and rate) proves to improve the performance of fog computing by explicitly taking channel conditions and communications constraints into account. In an early work \cite{4536215}, offloading decision making was examined through the estimation of bandwidth data, without considering the allocation of communication resources or channel conditions. For communications-aware computation offloading, \cite{7769867687e} minimized the local user's computation latency in a multi-user cooperative scenario, while \cite{8234686} minimized the energy consumption of remote fog computing nodes. However, this line of work has not taken the aforementioned shared data feature into account, thus failing to fully reap the advantages of fog computing.
In this paper, we consider a multi-user fog computing system, in which multiple single-antenna mobile users running applications featuring shared data can choose between (partially) offloading their computing tasks to a nearby single-antenna cloudlet and executing them locally, and then download the results from the cloudlet. The mobile users' overall energy consumption is minimized via joint optimization of computation offloading and communications resource allocation. Compared with existing literature, although \cite{7906521} investigated the energy minimization problem of shared-data featured offloading, it did not find the optimal solution; moreover, it did not draw explicit conclusions regarding the influence of channel conditions on the computation offloading. From this point of view, our work provides an in-depth understanding of shared-data featured offloading in MEC systems.
\section{System Model}
We consider a mobile-edge system that consists of $U$ mobile users running AR applications, denoted as ${\cal U}=\{1, ..., U\}$, and one base station (BS) equipped with computing facilities serving as a cloudlet. All of the mobile users and the BS are assumed to be equipped with a single antenna.
The input data size for user $u$ is denoted by $D^I_u$, $\forall u \in {\cal U}$, of which a fraction of $D_S^I$ bits is the shared data that is common across all $U$ mobile users, and another fraction of $D_u^L$ bits is the data executed locally by user $u$. The shared data can be transmitted in parts by the users, with user $u$'s part denoted by \(D_{u,S}^I\), \(\forall u \in {\cal U}\), such that \(\sum_{u=1}^UD_{u,S}^I=D_S^I\). The amount of input data that is exclusively transmitted by user $u$ is thus given by $\bar{D}^I_u=D^I_u-D^I_{S}-D^L_u, \forall u\in {\cal U}$.
\begin{figure}[h]
\centering
\includegraphics[width=8.5cm]{timing_diagram.eps}
\caption{\bf{Timing illustration for the considered multi-user MEC system.}}
\label{fig-sample}
\end{figure}
It can be seen from Fig. 1 that there are two consecutive sub-phases for both input data offloading and results downloading phases: the shared and the individual data transmission. The transmission duration for offloading the shared input data is denoted by $t_{u,S}^{ul}$, $\forall u \in {\cal U}$; the offloading duration for the individual data is denoted as $t_u^{ul}$, $\forall u \in {\cal U}$; and the durations for downloading the shared and the individual output data are $t^{dl}_{u,S}$ and $t^{dl}_u$, $\forall u \in {\cal U}$ respectively. The remote computation time are also illustrated in Fig. 1, where $t_S^C$ and $t_u^C$, $\forall u \in {\cal U}$, denote that for the shared and the individual data transmitted to the cloudlet, respectively. Similarly, $F$ and $f_u$, $\forall u \in {\cal U}$, denote the computational frequency (in cycles/s) allocated to the shared and the individual tasks, respectively, by the cloudlet. In addition, the local computation time is denoted by $t_{u,L}^C$, $\forall u \in {\cal U}$.
\subsection{Uplink Transmission}
As observed from Fig. 1, there are two consecutive uplink transmission sub-phases: the shared data and the individual data offloading \cite{7906521}. Each mobile user offloads its computation task to the cloudlet server via frequency division multiple access (FDMA). The channel coefficient from user $u$ is given by $h_u, \forall u \in {\cal U}$, which is assumed to remain unchanged during the uplink transmission duration. With the transmission power given by $p_{u,S}^{ul}$, the achievable individual data rate for offloading the shared data is expressed as:
\begin{equation}
R_{u,S}^{ul}=W_u^{ul}\log_2(1+\dfrac{p_{u,S}^{ul}|h_u|^2}{N_0}), \forall u \in {\cal U},
\end{equation}
where $W_u^{ul}=\tfrac{W^{ul}}{U}$ with $W^{ul}$ denoting the overall bandwidth available for the uplink transmission, and $N_0$ is the additive white Gaussian noise (AWGN) power. Accordingly, $t_{u,S}^{ul}=D^I_{u,S}/R_{u,S}^{ul}$, and the energy consumed by the $u$-th user in the shared data offloading sub-phase is given as
\begin{equation}
E_{u,S}^{ul}=t^{ul}_{u,S}p_{u,S}^{ul}=\dfrac{t^{ul}_{u,S}}{|h_u|^2}f(\dfrac{D^I_{u,S}}{t^{ul}_{u,S}}), \forall u \in {\cal U}, \label{eq:shared data energy}
\end{equation}
where the function $f(x)$ is defined as $f(x)=N_0(2^{\tfrac{x}{W^{ul}_u}}-1)$.
Similarly, the energy consumption for the $u$-th user in the individual data offloading sub-phase is expressed as:
\begin{equation}
E_u^{ul}=t^{ul}_{u}p_{u}^{ul}=\dfrac{t^{ul}_{u}}{|h_u|^2}f(\dfrac{D^I_u-D^I_S-D^L_u}{t^{ul}_{u}}), \forall u \in {\cal U}.\label{eq:individual data energy}
\end{equation}
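To make the energy model concrete, the following minimal Python sketch evaluates \eqref{eq:shared data energy} and \eqref{eq:individual data energy}; the bandwidth, noise power, and channel gain below are illustrative assumptions rather than the settings of the simulation section.
\begin{verbatim}
# A minimal sketch of the uplink offloading energy model.
import numpy as np

U = 4
W_u = 10e6/U          # per-user uplink bandwidth W^{ul}/U (assumed)
N0 = 1e-13            # AWGN power (assumed)
h2 = 1e-10            # uplink channel power gain |h_u|^2 (assumed)

f = lambda x: N0*(2.0**(x/W_u) - 1.0)   # power needed to sustain rate x

def offload_energy(D_bits, t_s):
    """Energy of (2)/(3): send D_bits in t_s seconds through gain h2."""
    return (t_s/h2)*f(D_bits/t_s)

# Energy rises steeply as the allotted time shrinks: the latency/energy
# trade-off at the heart of the problem formulated below.
for t in (1e-3, 2e-3, 5e-3):
    print(t, offload_energy(1e4, t))
\end{verbatim}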
\subsection{Computation Model}
Based on the energy model in \cite{6787113}, given the local computing bits $D_u^L$, the energy consumption for executing local computation is given by:
\begin{equation}
E^C_u=\kappa_0\dfrac{(\lambda_0 D_u^{L})^3}{{t^{C}_{u,L}}^2}, \forall u \in {\cal U}, \label{eq:local computing energy}
\end{equation}
where $\lambda_0$ (in cycles/bit) denotes the number of CPU cycles needed for processing one bit of input data, and $\kappa_0$ is the energy consumption capacitance coefficient.
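Note that, for a fixed number of local bits, \eqref{eq:local computing energy} decreases monotonically in the allotted computing time, a property exploited later when setting $t_{u,L}^C=T_{max}$; a minimal sketch (with $\kappa_0$ and $\lambda_0$ as in the simulation section) is given below.
\begin{verbatim}
# Local-computing energy (4): for fixed D_u^L it decreases in the
# allotted time, so the full latency budget should be used.
kappa0, lam0 = 1e-26, 1e3

def local_energy(D_L_bits, t_s):
    return kappa0*(lam0*D_L_bits)**3/t_s**2

for T in (0.01, 0.05, 0.1):
    print(T, local_energy(5e3, T))
\end{verbatim}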
\subsection{Downlink Transmission}
Similar to the uplink transmission, the downlink transmission phase also consists of two separate sub-phases: the shared and the individual results downloading. The shared output data is multicast to the mobile users by the cloudlet at its maximum transmit power $P_{\max}$. The achievable individual rate for the shared data downloading is thus given by
\begin{equation}
R^{dl}_{u,S}=W_u^{dl}\log_2(1+\dfrac{P_{max}|g_u|^2}{N_0}), \forall u \in {\cal U},
\end{equation}
where $W_u^{dl}=\tfrac{W^{dl}}{U}$ with $W^{dl}$ denotes the overall bandwidth available for downlink transmission. The downlink channel coefficient is given by $g_u$, $\forall u \in {\cal U}$. The relation between the shared output data and the input data is given by \(D_S^O=a_0D_S^I\), where \(a_0\) is the factor representing the number of output bits for executing one bit of input data. Accordingly, $t^{dl}_{u,S}=D^O_S/R^{dl}_{u,S}, \forall u \in {\cal U}$, and thus the latency for transmitting the shared output data to all mobile users is given by
\begin{equation}
t^{dl}_S=\max_{u\in{\cal U}}\{t^{dl}_{u,S}\}.
\end{equation}
This is because the individual results downloading cannot be initiated until the shared data has finished transmission.
After the multicasting transmission, the individual output data is sent to each mobile user via FDMA. Denoting the downlink transmitting power for the $u$-th individual data by $p^{dl}_u$, the achievable rate for individual data downloading is thus expressed as:
\begin{equation}
R^{dl}_u=W^{dl}_u\log_2(1+\dfrac{p^{dl}_u|g_u|^2}{N_0}), \forall u \in {\cal U}.
\end{equation}
Similarly, denoting the individual output data size by \(D_u^O\), \(\forall u \in {\cal U}\), \(D_u^O=a_0\bar D_u^I=a_0(D_u^I-D_S^I-D_u^L)\), and \(t_u^{dl}=D^O_u/R^{dl}_u\).
For energy consumption, the overall energy consumed for decoding the result sent back by the cloudlet at the $u$-th mobile user is given by \cite{7906521}
\begin{equation}
E^{dl}_u=(t^{dl}_{u,S}+t^{dl}_u)\rho^{dl}_u, \forall u \in {\cal U}, \label{eq:downloading energy}
\end{equation}
where $\rho^{dl}_u$ (in Joules/second) captures the energy expenditure per second.
In addition, the total energy consumed by the BS for results transmission is given by
\begin{align}
\sum_{u \in {\cal U}}\dfrac{t_u^{dl}}{|g_u|^2}f(\dfrac{a_0(D^I_u-D^I_S-D^L_u)}{t^{dl}_u}),
\end{align} which is required by the BS operator not to exceed $E_{\max}$.
\subsection{Total Latency}
Next, we consider the overall computing latency. As illustrated in Fig. 1, the individual data downloading in Phase II cannot start until the cloudlet completes the individual data computing and the BS finishes the shared data transmission over the downlink.
Moreover, the individual data computing cannot start until both the corresponding individual data finishes offloading and the cloudlet completes the shared data computing, i.e., its start time is \(\max\{t^{ul}_{u,S}+t^{ul}_u,\displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}\}+t^C_S\}\). Furthermore, as also seen from Fig. 1, the shared results can only start being transmitted in the downlink after the cloudlet completes the shared data computing and all the individual data finishes offloading in the uplink, i.e., at \(\max\bigg\{\displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}\}+t^C_S, \displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}+t^{ul}_u\}\bigg\}\). Combining the above facts, the total computing latency is expressed as follows:
\begin{equation}
\begin{split}
\label{equ:latency expression}
&\tau_u=\max\Bigg\{\max\{t^{ul}_{u,S}+t^{ul}_u,\displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}\}+t^C_S\}+t^C_u,\\
&\max\bigg\{\displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}\}+t^C_S, \displaystyle\max_{u \in {\cal U}}\{t^{ul}_{u,S}+t^{ul}_u\}\bigg\}+t^{dl}_S\Bigg\}+t^{dl}_u ,\\
&\forall u \in {\cal U}.\\
\end{split}
\end{equation}
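For reference, the nested maxima in \eqref{equ:latency expression} can be evaluated directly; a minimal Python sketch (with illustrative timing values only) is given below.
\begin{verbatim}
# Evaluate tau_u of the latency expression; per-user quantities are
# numpy arrays, tC_S is the scalar shared-compute time.
import numpy as np

def total_latency(t_ulS, t_ul, tC_S, tC, t_dlS_users, t_dl):
    t_dlS = np.max(t_dlS_users)                       # t^{dl}_S, cf. (6)
    indiv_start = np.maximum(t_ulS + t_ul, np.max(t_ulS) + tC_S)
    branch1 = indiv_start + tC                        # individual results ready
    branch2 = max(np.max(t_ulS) + tC_S,
                  np.max(t_ulS + t_ul)) + t_dlS       # shared results delivered
    return np.maximum(branch1, branch2) + t_dl        # tau_u for each user

t = np.array([2e-3, 3e-3, 4e-3])                      # illustrative timings
print(total_latency(t, t, 1e-4, np.full(3, 1e-4), t, t))
\end{verbatim}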
\section{Problem Formulation}
The overall energy consumption at the mobile users consists of three parts: data offloading over the uplink (c.f. (2) and (3)), local computing (c.f. (4)), and results retrieving (c.f. (8)), which is thus given by
\begin{equation}
\begin{split}
&E_{total}=\sum_{u\in{\cal U}}\kappa_0\dfrac{(\lambda_0D^L_u)^3}{{t^C_{u,L}}^2}+\sum_{u\in{\cal U}}\dfrac{t_{u,S}^{ul}}{|h_u|^2}f(\dfrac{D^I_{u,S}}{t^{ul}_{u,S}})\\
&+\sum_{u\in{\cal U}}\dfrac{t^{ul}_u}{|h_u|^2}f(\dfrac{D^I_u-D^I_S-D^L_u}{t^{ul}_u})+\sum_{u\in{\cal U}}(t^{dl}_{u,S}+t^{dl}_u)\rho^{dl}_u .
\end{split}
\end{equation}
The objective is to minimize the overall energy consumption given by $E_{total}$, subject to the computing latency constraints, the maximum local computing frequencies, and the total energy consumption on the individual data at the BS. Specifically, the optimization problem is formulated as below:
\begin{subequations}
\begin{align}
\mathrm{(P1)}: &{\kern-12pt}\mathop{\mathtt{min}}_{\{t_{u,S}^{ul},t^{ul}_u,t^C_{u,L},t^{dl}_u,D^L_u, D^I_{u,S}\}}{\kern-12pt}
~E_{total}\\
&{\kern20pt}\mathtt {s.t.} \notag\\
&~\tau_u \leq T_{max}, \forall u \in {\cal U},\label{eq:latency constraint}\\
&\sum_{u \in {\cal U}}\dfrac{t_u^{dl}}{|g_u|^2}f(\dfrac{a_0(D^I_u-D^I_S-D^L_u)}{t^{dl}_u}) \leq E_{max}, \label{eq:downlink energy constraint}\\
&0 \leq t^C_{u,L}\leq T_{max}, \forall u \in {\cal U},\label{eq:local latency constraint}\\
&\lambda_0D^L_u\leq t^C_{u,L}f_{u,max},\label{eq:local bits constraint}\\
&0\leq D^L_u \leq D_u^I-D_S^I, \forall u \in {\cal U},\\
&\sum_{u\in{\cal U}}D^I_{u,S}=D^I_S, D^I_{u,S}\geq 0,\label{eq:shared data constraint}\\
&t_{u,S}^{ul}\geq0, t^{ul}_u\geq0, t^C_{u,L}\geq0, t^{dl}_u\geq 0, \forall u \in {\cal U}.
\end{align}
\end{subequations}
Constraints \eqref{eq:latency constraint} and \eqref{eq:local latency constraint} are the latency constraints: the time taken to accomplish the computation tasks, whether by offloading or by local computing, cannot exceed the maximum allowed length. \eqref{eq:downlink energy constraint} states that the energy available for the downlink transmission of the remote computing node cannot exceed a maximum level. \eqref{eq:local bits constraint} restricts the number of allowable local computing bits imposed by the local computing capabilities. Besides, \eqref{eq:shared data constraint} requires that the shared data bits offloaded by all mobile users add up to the exact amount of shared bits common to the user group.
\section{Optimal scheme for joint offloading and communication resource allocation}
\subsection{Problem Reformulation}
Although the latency expression \eqref{equ:latency expression} looks complex in its form, \eqref{eq:latency constraint} is still a convex constraint. For the ease of exposition, we assume herein that the cloudlet executes the shared and the individual computing within the duration of the individual data offloading and the shared results downloading, respectively, i.e., \(t_S^C\ll t_u^{ul}\), and \(t_u^C\ll t_{u,S}^{dl}\), \(\forall u \in {\cal U}\)\footnote{We assume herein that the computation capacity at the cloudlet is much higher than those at the mobile users, and thus the computing time taken is much shorter than the data transmission time.}. As a result, \eqref{eq:latency constraint} can be simplified as below:
\begin{equation}
\max\{t^{ul}_{u,S}+t^{ul}_u\}+t^{dl}_S+t^{dl}_u \leq T_{max}, \forall u \in {\cal U}.\label{eq:transformed latency constraint}
\end{equation}
By introducing the auxiliary variable $t^{dl}$, which satisfies $t^{dl}_u \leq t^{dl}, \forall u \in {\cal U}$, \eqref{eq:transformed latency constraint} reduces to
\begin{equation}
t^{ul}_{u,S}+t^{ul}_u\leq T_{max}-t^{dl}_S-t^{dl}, \forall u \in {\cal U} .\label{eq:final latency constraint}
\end{equation}
Notice that \(E_u^C\) (c.f. (4)) monotonically decreases with respect to the local computing time $t_{u,L}^C$ for each mobile user. To obtain the minimal energy consumption, it is thus obvious that $t_{u,L}^C=T_{max}, \forall u \in {\cal U}$. The optimization problem to be solved is then reformulated as:
\begin{subequations}
\begin{align}
\mathrm{(P1^\prime)}: &{\kern-12pt}\mathop{\mathtt{min}}_{\{t_{u,S}^{ul},t^{ul}_u,t^{dl}_u,t^{dl},D^L_u, D^I_{u,S}\}}{\kern-12pt}
~E_{total}\\
&{\kern20pt}\mathtt {s.t.}\notag\\
& ~(12c-12h), (14).\\
&t^{dl}_u \leq t^{dl}, \forall u \in {\cal U}.
\end{align}
\end{subequations}
\subsection{Joint offloading and communication resource allocation}
Introducing dual variables ${\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}}, {\nu}$, the Lagrangian of problem $(P1^\prime)$ is presented as:
\begin{equation}
\begin{split}
&L({\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}}, {\nu},t_{u,S}^{ul},t^{ul}_u,t^{dl}_u,t^{dl},D^L_u, D^I_{u,S})=\\
&\sum_{u\in{\cal U}}\dfrac{t_{u,S}^{ul}}{|h_u|^2}f(\dfrac{D^I_{u,S}}{t^{ul}_{u,S}})+\sum_{u\in{\cal U}}\dfrac{t^{ul}_u}{|h_u|^2}f(\dfrac{D^I_u-D^I_S-D^L_u}{t^{ul}_u})\\
&+\sum_{u\in{\cal U}}\kappa_0\dfrac{(\lambda_0D^L_u)^3}{{t^C_{u,L}}^2}+\sum_{u\in{\cal U}}(t^{dl}_{u,S}+t^{dl}_u)\rho^{dl}_u+\sum_{u\in{\cal U}}\beta_u(t^{ul}_{u,S}\\
&+t^{ul}_u-T_{max}+t^{dl}_S+t^{dl})+\sum_{u\in{\cal U}}\omega_u(\lambda_0D^L_u\\
&-t^C_{u,L}f_{u,max})+\sum_{u\in{\cal U}}\sigma_u(t^{dl}_u-t^{dl})\\
&+\nu[\sum_{u\in{\cal U}}\dfrac{t^{dl}_u}{|g_u|^2}f(\dfrac{a_0(D^I_u-D^I_S-D^L_u)}{t^{dl}_u})-E_{max}],
\end{split}
\end{equation}
where ${\bm {\beta}}=\{\beta_1, ..., \beta_U\}$ are the dual variables associated with the latency constraint \eqref{eq:final latency constraint}, ${\bm {\omega}}=\{\omega_1, ..., \omega_U\}$ are associated with the local computing bits constraint \eqref{eq:local bits constraint}, ${\bm {\sigma}}=\{\sigma_1, ..., \sigma_U\}$ are associated with the constraints on the auxiliary variable $t^{dl}$, and $\nu$ is associated with the downlink transmission energy constraint \eqref{eq:downlink energy constraint}. Hence, the Lagrangian dual function is expressed as:
\begin{equation}
\begin{split}
&g({\bm {\beta}}, {\bm {\omega}},{\bm {\sigma}}, {\nu})\\
&=\smash{\displaystyle\min_{\{t_{u,S}^{ul},t^{ul}_u,t^{dl}_u,t^{dl},D^L_u, D^I_{u,S}\}}} L({\bm {\beta}}, {\bm {\omega}},{\bm {\sigma}}, {\nu},t_{u,S}^{ul},t^{ul}_u,t^{dl}_u,t^{dl},\\
&D^L_u, D^I_{u,S}),\label{eq:Lagrangian dual function}
\end{split}
\end{equation}
\quad\quad s.t.
\quad\quad\quad\quad\quad\quad (12f-12h).
Consequently, the corresponding dual problem is formulated as:
\begin{equation}
\smash{\displaystyle\max_{\{{\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}}, {\nu}\}}} g({\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}},{\nu})\label{eq:Dual Problem}
\end{equation}
\quad\quad s.t.
\begin{equation}
\quad\quad\quad\quad\quad\quad {\bm{\beta}}\succeq 0, {\bm{\omega}}\succeq 0, {\bm{\sigma}}\succeq 0, \nu\geq0. \notag
\end{equation}
\begin{proposition}
Given a set of dual variables \({\bm {\beta}}, {\bm {\omega}}, {\bm {\sigma}}, {\nu}\), the optimal solution to the minimization problem in \eqref{eq:Lagrangian dual function} can be determined as follows. \label{proposition1}
The optimal primal variables \(t_{u,S}^{ul}\), \(t_{u}^{ul}\), and \(t_u^{dl}\), are given by
\begin{equation}
\hat t^{ul}_{u,S}=\dfrac{\hat D^I_{u,S}}{\dfrac{W^{ul}_u}{\ln 2}[W_0(\dfrac{1}{e}(\dfrac{\beta_u|h_u|^2}{N_0}-1))+1]}, \forall u \in {\cal U}.\label{equ:share time}
\end{equation}
\begin{equation}
\label{equ:uplink time}
\hat t^{ul}_{u}=\dfrac{D^I_u-D^I_S-\hat D^L_u}{\dfrac{W^{ul}_u}{\ln 2}[W_0(\dfrac{1}{e}(\dfrac{\beta_u|h_u|^2}{N_0}-1))+1]}, \forall u \in {\cal U}.
\end{equation}
\begin{equation}
\label{equ:downlink time}
\hat t^{dl}_u=\dfrac{a_0(D^I_u-D^I_S-\hat D^L_u)}{\dfrac{W^{dl}_u}{a_0\ln 2}[W_0(\dfrac{1}{e}(\dfrac{(\rho^{dl}_u+\sigma_u)|g_u|^2}{\nu N_0}-1))+1]}, \forall u \in {\cal U}.
\end{equation}
where $W_0(x)$ is the principal branch of the Lambert $W$ function, defined as the solution of $W_0(x)e^{W_0(x)}=x$ \cite{8234686}, and $e$ is the base of the natural logarithm;
the optimal auxiliary variable \(t^{dl}\) is given by:
\begin{equation}
\label{equ:auxiliary tdl}
\hat t^{dl}=\left\{
\begin{aligned}
0, \quad&\sum_{u\in{\cal U}}\beta_u-\sum_{u\in{\cal U}}\sigma_u >0,\\
T_{max}-t^{dl}_S, \quad&otherwise;
\end{aligned}
\right.
\end{equation}
and the optimal local computing data size is given by
\begin{equation}
\label{equ:local bits}
\begin{split}
&\hat D^L_u=\\
&\min\Bigg\{T_{max}\sqrt{\bigg[\dfrac{N_0\ln 2}{3\kappa_0{\lambda_0}^3}(\dfrac{2^{\tfrac{\hat r^{ul}_{u}}{W^{ul}_u}}}{W^{ul}_u|h_u|^2}+\dfrac{\nu a_0\cdot 2^{\tfrac{a_0 \hat r^{dl}_{u}}{W^{dl}_u}}}{W^{dl}_u|g_u|^2})-\dfrac{\omega_u}{3\kappa_0 \lambda_0^2}\bigg]^+}\\
&, D^I_u-D^I_S\Bigg\}, \forall u \in {\cal U}, \notag
\end{split}
\end{equation}
where $\hat r^{ul}_{u}=\frac{W^{ul}_u}{\ln 2}[W_0(\frac{1}{e}(\frac{\beta_u|h_u|^2}{N_0}-1))+1]$ and $\hat r^{dl}_{u}=\frac{W^{dl}_u}{a_0\ln 2}[W_0(\frac{1}{e}(\frac{(\rho^{dl}_u+\sigma_u)|g_u|^2}{\nu N_0}-1))+1]$, $\forall u \in {\cal U}$.
\end{proposition}
\begin{IEEEproof}
Please refer to Appendix A.
\end{IEEEproof}
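Numerically, the expressions in Proposition \ref{proposition1} are straightforward to evaluate through the principal branch $W_0$; the following minimal Python sketch computes $\hat r_u^{ul}$ and $\hat t_{u,S}^{ul}$, where the dual variable and channel gain are placeholder assumptions rather than outputs of the algorithm developed below.
\begin{verbatim}
# Sketch of the optimal uplink rate/time via the Lambert W function.
import numpy as np
from scipy.special import lambertw

U = 4
W_ul_u = 10e6/U       # per-user uplink bandwidth (assumed)
N0 = 1e-13            # AWGN power (assumed)
ln2 = np.log(2.0)

def opt_uplink_rate(beta_u, h2):
    """hat r_u^{ul}; beta_u*h2/N0 >= 0 keeps the W_0 argument >= -1/e."""
    arg = (beta_u*h2/N0 - 1.0)/np.e
    return (W_ul_u/ln2)*(lambertw(arg).real + 1.0)

beta_u, h2 = 1e-4, 1e-10      # assumed dual variable and channel gain
r = opt_uplink_rate(beta_u, h2)
print(r, 1e4/r)               # rate and hat t^{ul}_{u,S} for 10 kbits
\end{verbatim}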
In fact, on one hand, \(\hat r_u^{ul}\)'s and \(\hat r_u^{dl}\)'s can be interpreted as the optimum transmission rates for the shared/individual data offloading and the individual data downloading, respectively, given the dual variables. On the other hand, for each user $u$, the optimal transmission rate for the shared data is identical to that for the individual data over the uplink, given that the uplink channel gains remain unchanged during the whole offloading phase.
Next, to obtain the optimal offloading bits of the shared data for each user, i.e., \(\hat D_{u,S}^I\), we need the following lemma.
\begin{lemma}
The optimal offloaded shared data for user $u$ is expressed as,
\begin{equation}
\label{equ:shared bits}
\hat D^I_{u,S}=\left\{
\begin{aligned}
D^I_S, \quad&\hat{u}=arg \smash{\displaystyle\min_{1 \leq u \leq U}} \Delta_u,\\
0, \quad&otherwise,
\end{aligned}
\right.
\end{equation}
where $\Delta_u=\frac{f(\hat r^{ul}_{u,S})}{\hat r^{ul}_{u,S}|h_u|^2}+\frac{\beta_u}{\hat r^{ul}_{u,S}}, \forall u \in {\cal U}$.
\end{lemma}
\begin{IEEEproof}
Please refer to Appendix B.
\end{IEEEproof}
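A minimal sketch of this assignment rule, with placeholder dual variables and channel gains, is given below.
\begin{verbatim}
# Compute Delta_u per user and give all shared bits to the minimizer.
import numpy as np
from scipy.special import lambertw

U = 3
W_ul_u = 10e6/U       # per-user uplink bandwidth (assumed)
N0 = 1e-13
ln2 = np.log(2.0)
f = lambda x: N0*(2.0**(x/W_ul_u) - 1.0)

def delta_u(beta_u, h2):
    r = (W_ul_u/ln2)*(lambertw((beta_u*h2/N0 - 1.0)/np.e).real + 1.0)
    return f(r)/(r*h2) + beta_u/r

betas = np.array([1e-4, 1e-4, 1e-4])      # assumed identical duals
h2s = np.array([1e-10, 3e-10, 2e-10])     # assumed channel gains
u_hat = int(np.argmin([delta_u(b, h) for b, h in zip(betas, h2s)]))
print(u_hat)  # here the user with the best channel carries all shared bits
\end{verbatim}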
Notably, it is easily observed from Lemma 1 that the shared data is optimally offloaded by one specific user instead of being split among multiple ones.
Based on Proposition 1, the dual problem can thus be solved iteratively via the ellipsoid method (with constraints); the details can be found in \cite{EE364b}. The algorithm for solving $\mathrm{(P1^\prime)}$ is summarized in Table \ref{table: Algorithm I}.
\small\begin{table}[htp]
\begin{center}
\caption{Algorithm I for solving \(\mathrm{(P1^\prime)}\)} \label{table: Algorithm I}
\vspace{-0.75em}
\hrule
\vspace{0.50em}
\begin{algorithmic}[1]
\REQUIRE \((\bm{\beta^{(0)}}, \bm {\omega^{(0)}}, \bm {\sigma^{(0)}}, \nu^{(0)})\)
\REPEAT
\STATE Solve \eqref{eq:Lagrangian dual function} given \((\bm {\beta^{(i)}},\bm {\omega^{(i)}},\bm {\sigma^{(i)}}, \nu^{(i)})\) according to Proposition~\ref{proposition1} and obtain \(\{\hat t_{u,S}^{ul}, \hat t_{u}^{ul}, \hat t_u^{dl}, \hat t_{dl}, \hat D_u^L, \hat D_{u,S}^I\}\);
\STATE update the subgradient of \(\bm{\beta},\bm{ \omega},\bm {\sigma}, \nu\) respectively, i.e., \(t^{ul}_{u,S}+t^{ul}_u-T_{max}+\displaystyle\max_{u \in {\cal U}}\{t^{dl}_{u,S}\}+t^{dl}\), \(\lambda_0D^L_u-t^C_{u,L}f_{u,max}\), \(t^{dl}_u-t^{dl}\), \(\sum_{u\in{\cal U}}\frac{t^{dl}_u}{|g_u|^2}f(\dfrac{a_0(D^I_u-D^I_S-D^L_u)}{t^{dl}_u})-E_{max}\) in accordance with the ellipsoid method \cite{EE364b};
\UNTIL the predefined accuracy threshold is satisfied.
\ENSURE The optimal dual variables to the dual problem \eqref{eq:Dual Problem} \((\bm{\beta^\ast},\bm{\omega^\ast},\bm{\sigma^\ast}, \nu^\ast)\)
\STATE Solve \eqref{eq:Lagrangian dual function} again with \((\bm{\beta^\ast},\bm{\omega^\ast},\bm{\sigma^\ast},\nu^\ast)\)
\ENSURE \(\{t_{u,S}^{ul\ast}, t_{u}^{ul\ast}, t_u^{dl\ast}, t^{dl\ast}, D_u^{L\ast}, D_{u,S}^{\ast}\}\)
\end{algorithmic}
\vspace{0.50em}
\hrule
\end{center}
\vspace{-1.0em}
\end{table}
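For completeness, a generic sketch of a single ellipsoid update used in step 2 of Algorithm I is given below (see \cite{EE364b} for the method and its constrained variants); here $g$ is a subgradient of the negated dual function at the current multipliers, or the gradient of a violated constraint when a feasibility cut is applied.
\begin{verbatim}
# One generic ellipsoid-method update: x collects the multipliers
# (beta, omega, sigma, nu) and P is the ellipsoid shape matrix.
import numpy as np

def ellipsoid_step(x, P, g):
    n = len(x)                           # needs n >= 2
    gt = g/np.sqrt(g @ P @ g)            # normalize the cut direction
    x_new = x - (P @ gt)/(n + 1)
    P_new = (n**2/(n**2 - 1.0))*(P - (2.0/(n + 1))*np.outer(P @ gt, P @ gt))
    return x_new, P_new
\end{verbatim}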
\section{Numerical Results}
\begin{normalsize}
In this section, the numerical results of the proposed algorithm together with other baseline algorithms are presented. Apart from the local-computing-only scheme, where users execute all the data bits locally, three other offloading schemes are presented as baselines: 1) {\it Offloading without considering the shared data}: the collaborative properties are ignored, and every user makes its offloading decision without coordination with other users; 2) {\it Full offloading only}: the shared data is taken into consideration, but the whole chunk of input data of every user is forced to be offloaded to the edge computing node, excluding the local computing capability from the computation tasks; 3) {\it Offloading with equal time length}: the correlated data is taken into consideration, but the data offloading and downloading are performed with equal time length for each user, with the optimal solutions obtained through CVX.
In the simulation, the available bandwidth is assumed to be $W^{ul}=W^{dl}=10\,$MHz, the maximum downlink transmit power is $P_{max}=1\,$W, and the input data size is $D_u^I=10\,$kbits for all users. The power spectral density of the additive white Gaussian noise (AWGN) is $-169\,$dBm/Hz. The mobile energy expenditure per second in the downlink is $\rho^{dl}_u=0.625\,$J/s \cite{7906521}, and the maximum local computing capability is $f_{u,max}=1\,$GHz. Besides, $\lambda_0=1\times 10^3$ cycle/bit, $a_0=1$, and $\kappa_0=10^{-26}$. The pathloss model is $PL=128.1+37.6\log_{10}(d_u)$, where $d_u$ represents the distance between user $u$ and the edge computing node in kilometers.
\end{normalsize}
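As a minimal sketch (ours) of how these parameters translate into the channel gains, assuming as is conventional that the pathloss is expressed in dB:
\begin{verbatim}
import math

def channel_gain(d_km):
    # PL = 128.1 + 37.6*log10(d) in dB, with d in km; return linear gain |h|^2
    pl_db = 128.1 + 37.6 * math.log10(d_km)
    return 10 ** (-pl_db / 10)

h2 = channel_gain(0.1)  # e.g. a user 100 m from the edge computing node
\end{verbatim}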
\begin{figure}
\centering
\includegraphics[scale=0.37]{differ_Tmax_new.eps}
\caption{\textbf{Energy consumption versus different latency constraints}}
\label{fig:energy-latency}
\end{figure}
\begin{normalsize}
Fig.~2 depicts how the energy consumption changes with the latency constraint. For all the listed offloading algorithms, the energy consumption decreases as the latency requirement is relaxed, and the proposed offloading scheme yields the lowest energy consumption: the best energy saving can only be achieved through the joint participation of local computing and shared-data coordination. Besides, even though the equal-time-length offloading has lower complexity than the proposed algorithm, it cannot compete with the proposed one in terms of energy saving. The reason is that forcing the offloading durations to be equal makes the shared data be transmitted by all users simultaneously, whereas, as concluded above, the most energy-efficient strategy is to let these correlated bits be transmitted by one specific user.
The energy consumed for computing one data bit increases dramatically as the latency constraint tightens. Hence, for the local-computing-only scheme, when the latency constraint is 0.01 second, the energy taken to finish the computation tasks reaches 1000 mJ, nearly 100 times that of the offloading algorithms, and it drops to 10 mJ when the latency constraint is relaxed to 0.1 second. For this reason the local-computing-only curve is not included in Fig.~2; otherwise the comparison among the offloading schemes would not be clear.
\end{normalsize}
\begin{normalsize}
Fig.~3 demonstrates how the energy consumption changes with the percentage of shared data. As long as the shared data is taken into consideration when making offloading decisions, a lower overall energy consumption is achieved as the proportion of shared data grows. Compared with the scheme that ignores the existence of shared data, the proposed offloading scheme saves more energy as the percentage of shared data increases. This trend applies to the full-offloading-only algorithm as well, because it also accounts for the shared data when making offloading decisions. The energy consumption of full offloading only does not always stay below that of offloading without considering shared data: given a specific latency constraint, the importance of local computing in saving the mobile users' energy diminishes as the share of common data increases. Since most of the data will be offloaded to the edge node, few input bits remain for local computing. Consequently, the energy consumption of the full-offloading-only scheme approaches that of the proposed algorithm as the percentage of shared data increases. A similar trend applies to the equal-time-length offloading as well.
\end{normalsize}
\begin{figure}
\centering
\includegraphics[scale=0.388]{differ_perc_new.eps}
\caption{\textbf{Energy consumption versus different percentages of shared data}}
\label{fig:energy-shared}
\end{figure}
\section{Conclusions}
\begin{normalsize}
In this paper, a multi-user fog computing system was considered, in which multiple single-antenna mobile users running applications featuring shared data can partially offload their individual computation tasks to a nearby single-antenna cloudlet and then download the results from it. The mobile users' energy consumption minimization problem, subject to the total latency, total downlink transmission energy and local computing constraints, was formulated as a convex problem whose optimal solution was obtained by the classical Lagrangian duality method. Based upon the semi-closed-form solution, it was proved that the shared data is optimally transmitted by only one of the mobile users rather than by multiple ones collaboratively. The proposed joint computation offloading and communication resource allocation was verified by simulations against baseline algorithms that ignore the shared data property or the mobile users' own computing capabilities.
\end{normalsize}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\begin{appendices}
\section{}\label{appendix:proof of rank-one for W_p^prime}
In order to find the optimal solutions of the primal problem, we need to examine the related partial derivatives $\frac{\partial L}{\partial D^L_u}, \frac{\partial L}{\partial D^I_{u,S}}, \frac{\partial L}{\partial t^{ul}_{u,S}}, \frac{\partial L}{\partial t^{ul}_{u}}, \frac{\partial L}{\partial t^{dl}_{u}}, \frac{\partial L}{\partial t^{dl}}, \forall u \in {\cal U}$. After obtaining these partial derivatives, the KKT conditions can be applied to find the optimal solutions. For example, set $\frac{\partial L}{\partial D^L_u}$ and $\frac{\partial L}{\partial D^I_{u,S}}$ equal to 0. The inverse function of $y=f(x)-xf'(x)$ for $x>0$ is given by $x=\tfrac{W^{ul}_u}{\ln 2}[W_0(-\tfrac{y}{eN_0}-\tfrac{1}{e})+1]$, where $W_0(\cdot)$ denotes the principal branch of the Lambert function. Then it follows that $f(\hat r^{ul}_{u,S})-\hat r^{ul}_{u,S}f'(\hat r^{ul}_{u,S})=f(\hat r^{ul}_{u})-\hat r^{ul}_{u}f'(\hat r^{ul}_{u})=-\beta_u|h_u|^2$, and the optimal uplink transmission rate of the shared data ${\hat r^{ul}_{u,S}}$ and that of the exclusively offloaded data ${\hat r^{ul}_u}$ are thus derived. The expressions of the optimal primal variables are then readily obtained as shown in \eqref{equ:share time}, \eqref{equ:uplink time}, \eqref{equ:downlink time}, \eqref{equ:auxiliary tdl}, \eqref{equ:local bits}, and \eqref{equ:shared bits}.
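For the reader's convenience we sketch how this inverse arises, assuming the standard AWGN power-rate function $f(x)=N_0\big(2^{x/W^{ul}_u}-1\big)$ (this explicit form is our assumption; the computation adapts to other choices of $f$). Setting $u=\frac{x\ln 2}{W^{ul}_u}$ gives
\begin{equation*}
\frac{y}{N_0}=e^{u}-1-ue^{u},\quad\text{i.e.}\quad (u-1)e^{u-1}=-\frac{y}{eN_0}-\frac{1}{e},
\end{equation*}
and applying $W_0$ yields $u=W_0\left(-\frac{y}{eN_0}-\frac{1}{e}\right)+1$, which is the stated inverse.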
\section{}\label{appendix:proof of shared data offloading}
To determine how the shared input data offloading $\hat D^I_{u,S}$ is distributed among the users, we need to examine the partial Lagrangian with respect to $D^I_{u,S}$ and $t_{u,S}^{ul}$. Replacing the shared data offloading time \(t_{u,S}^{ul}\) with \(\frac{D_{u,S}^I}{\hat r_{u,S}^{ul}}\), the partial Lagrangian is expressed as
\begin{equation}
\label{eq:partial Lagrangian}
\begin{split}
&\smash{\displaystyle\min_{\{D^I_{u,S}\}}} \overline{L}=\sum_{u \in {\cal U}}[\dfrac{t^{ul}_{u,S}}{|h_u|^2} f(\dfrac{D^I_{u,S}}{t^{ul}_{u,S}})+\beta_u t^{ul}_{u,S}]\\
&=\sum_{u \in {\cal U}}[\dfrac{D^I_{u,S}}{\hat r^{ul}_{u,S}|h_u|^2}f(\hat r^{ul}_{u,S})+\beta_u\dfrac{D^I_{u,S}}{\hat r^{ul}_{u,S}}]\\
&=\sum_{u \in {\cal U}}\Delta_u\cdot D^I_{u,S}
\end{split} \tag{24a}
\end{equation}
\quad\quad s.t.
\begin{equation}
\sum_{u \in {\cal U}}D^I_{u,S}=D^I_S, D^I_{u,S}\geq 0, \forall u \in {\cal U}, \tag{24b}
\end{equation}
where we define $\Delta_u=\dfrac{f(\hat r^{ul}_{u,S})}{\hat r^{ul}_{u,S}|h_u|^2}+\dfrac{\beta_u}{\hat r^{ul}_{u,S}}$, a constant once the dual variables \(\beta_u\) are given. Since the objective of the linear program (LP) (24) is linear in the $D^I_{u,S}$ and is minimized over a simplex, the minimum is attained at a vertex, which yields the optimal solution shown in \eqref{equ:shared bits}.
\end{appendices}
\bibliographystyle{ieeetr}
\begin{document}
\markboth{Evgeniy Zorin}
{Zero order estimates for analytic functions}
\title{MULTIPLICITY ESTIMATES FOR ALGEBRAICALLY DEPENDENT ANALYTIC FUNCTIONS
}
\author{EVGENIY ZORIN}
\date{}
\maketitle
\begin{abstract}
We prove a new general multiplicity estimate applicable to sets of functions without any assumption on algebraic independence. The multiplicity estimates are commonly used in determining measures of algebraic independence of values of functions, for instance within the context of Mahler's method. For this reason, our result provides an important tool for the proofs of algebraic independence of complex numbers. At the same time, these estimates can be considered as a measure of algebraic independence of functions themselves. Hence our result provides, under some conditions, the measure of algebraic independence of elements in ${\bf F}_q[[T]]$, where ${\bf F}_q$ denotes a finite field.
\end{abstract}
\section{Introduction}
Let $\kk$ be any field and let
\begin{equation} \label{intro_functions_f}
f_1(z),\dots,f_n(z)\in\kk[[z]]
\end{equation}
be a collection of formal power series with coefficients in $\kk$.
In this article we are interested in so-called \emph{multiplicity estimates}, that is, uniform upper bounds for
$$
\ordz P(z,f_1(z),\dots,f_n(z)),
$$
the order of vanishing of $ P(z,f_1(z),\dots,f_n(z))$ at $z=0$, where $P\in\kk[Z,X_1,\dots,X_n]$ is a polynomial in $n+1$ variables. Naturally, the upper bounds should depend on the degree of $P$, since, for example, $\ordz P_N(z,f_1(z))>N$ when $f_1(z)=\sum_{n=0}^{\infty}a_nz^n$ and $P_N(Z,X_1)=X_1-\sum_{n=0}^{N}a_nZ^n$. In applications it is often desirable to have upper bounds of the form
\begin{equation} \label{def_LM_a}
\ordz P(z,f_1(z),\dots,f_n(z))<F(\deg_{z}(P),\deg_{\ul{X}}(P)),
\end{equation}
where $F$ is independent of $P$ and the degrees in $z$ and in $X_1,\dots,X_n$ are separated. We say that $f_1(z),\dots,f_n(z)$ verify \emph{a multiplicity estimate/lemma} (with respect to $F$) if~\eqref{def_LM_a} holds for all $P\in\kk[Z,X_1,\dots,X_n]$ such that $ P(z,f_1(z),\dots,f_n(z))\ne 0$.
Multiplicity estimates form an important tool in transcendental number theory for establishing the measure of algebraic independence of, for example, complex numbers. The multiplicity estimates can also be considered as measures of algebraic independence of functions (or formal power series). For more detailed explanation we refer the reader to~\cite{Bertrand1985,Bertrand2007a,Bertrand2007b,Bertrand2007}.
The central theme in transcendental number theory is to determine whether or not the values $f_1(\alpha),\dots,f_n(\alpha)$, of a given set of analytic functions given by~\eqref{intro_functions_f},
are algebraically independent at algebraic points $\alpha$.
In the next few pages we explain why it is natural to apply the estimates~\eqref{def_LM_a} within this theme.
To begin with, recall the famous Lindemann-Weierstrass Theorem:
\begin{theorem}[(Lindemann-Weierstrass)]
Let $n\geq 2$ be an integer and $\alpha_1,\dots,\alpha_n\in\mcc$ be algebraic over $\mqq$. Suppose that $\exp(\alpha_1z),\dots,\exp(\alpha_nz)$ are algebraically independent over $\mqq$ as functions of a complex variable $z$. Then for any $\beta\in\ol{\mqq}^*$ the numbers $\exp(\alpha_1\beta),\dots,\exp(\alpha_n\beta)$ are algebraically independent over $\mqq$.
\end{theorem}
\begin{remark}
Usually this theorem is stated under the hypothesis that $\alpha_1,\dots,\alpha_n$ are linearly independent over $\mqq$, instead of the algebraic independence of the functions $\exp(\alpha_1z),\dots,\exp(\alpha_nz)$. It is an easy exercise to verify that these conditions are equivalent.
\end{remark}
Later this result was generalized to the much broader class of so-called $E$-functions (`$E$' here is for ``exponential'', as the definition of this class captures some important properties of the exponential function). We refer the reader to~\cite{Sh1959} for the definition of this class.
This line of research was initiated by Siegel with an impressive development via a number of \emph{tours de force}~\cite{Sh1959,NS1996,Andre2000,Beukers2006}, crowned by the following qualitatively best possible result.
\begin{theorem}[(Nesterenko-Shidlovsky, \cite{NS1996})] \label{intro_theo_NS}
Let $f_1(z),\dots,f_n(z)$ be a set of $E$-functions that form a solution of the system of first-order differential equations
\begin{equation} \label{NS_theo_h_system}
\frac{d}{dz}\begin{pmatrix}f_1(z)\\\vdots\\ f_n(z)\end{pmatrix}=A(z)\begin{pmatrix}f_1(z)\\\vdots\\ f_n(z)\end{pmatrix},
\end{equation}
where $A$ is an $n\times n$-matrix with entries in $\ol{\mqq}(z)$. Denote by $T(z)$ the common denominator
of the entries of $A$. Then, for any $\alpha\in\ol{\mqq}$ such that $\alpha T(\alpha)\ne 0$,
\begin{equation} \label{intro_trdegf_eq_trdegalpha}
\trdeg_{\mqq}\mqq\left(f_1(\alpha),\dots,f_n(\alpha)\right)=\trdeg_{\mcc(z)}\mcc(z)\left(f_1(z),\dots,f_n(z)\right).
\end{equation}
Moreover, there exists a finite set $S$ such that for all $\alpha\in\ol{\mqq}$, $\alpha\not\in S$ the following holds. For any homogeneous polynomial $P\in\ol{\mqq}[X_1,\dots,X_n]$ with $P(f_1(\alpha),\dots,f_n(\alpha))=0$ there exists $Q\in\ol{\mqq}[z,X_1,\dots,X_n]$, homogeneous in $X_1,\dots,X_n$, such that $Q(z,f_1(z),\dots,f_n(z))\equiv 0$ and
\begin{equation} \label{theo_NS_PQrelation}
P(X_1,\dots,X_n)=Q(\alpha,X_1,\dots,X_n).
\end{equation}
\end{theorem}
Slightly later this theorem was proved with a completely different method by Andr\'e~\cite{Andre2000}. Using Andr\'e's method, Beukers~\cite{Beukers2006} has proved that in the statement of the Nesterenko-Shidlovsky Theorem above one can always take $S$ to be the set of zeros of $zT(z)$.
The class of $E$-functions is not the only one for which results similar to~\eqref{intro_trdegf_eq_trdegalpha} are sought. For example, of particular interest are sets of functions satisfying certain functional relations, such as relations of Mahler's type~\cite{Ni1986,Pellarin2009} or equations in $q$-differences~\cite{ATV2007,AV2003}. Another example is the so-called $G$-functions~\cite{Andre,Andre2000}, allied with $E$-functions. However, despite substantial progress, there are still many open problems outside the theory of $E$-functions.
The Nesterenko-Shidlovsky-Andr\'e-Beukers Theorem is best possible. For instance, if the functions~\eqref{intro_functions_f} have algebraic coefficients and admit an algebraic relation over $\mcc(z)$, that is if there is a polynomial $R\in\mcc(z)[X_1,\dots,X_n]$ vanishing at $\left(f_1(z),\dots,f_n(z)\right)$, one can verify
that this is a relation over $\ol{\mqq}(z)$, i.e. the coefficients of $R$ are from $\ol{\mqq}(z)$.
Multiplying $R$ by a common denominator of its coefficients we obtain
a polynomial $R_1$ from $\ol{\mqq}[z][X_1,\dots,X_n]$. The polynomial $R_1(\alpha,X_1,\dots,X_n)$ vanishes at $\left(f_1(\alpha),\dots,f_n(\alpha)\right)$ and has algebraic coefficients, by construction. Thus polynomial relations~\eqref{theo_NS_PQrelation} from the theorem above are transmitted directly from the functions to their values, and the best possible result regarding algebraic independence of the values $(f_1(\alpha),\dots,f_n(\alpha))$ one may hope to obtain is~\eqref{intro_trdegf_eq_trdegalpha}.
There exist powerful methods~\cite{PP1986,PP1997,PP_KF} which yield results of type~\eqref{intro_trdegf_eq_trdegalpha} for various sets of functions. In practice it is greatly preferable to measure the strength of algebraic independence between the functions~\eqref{intro_functions_f}. Loosely speaking, the reason is that the passage from algebraic independence of functions to algebraic independence of their values usually involves a loss of information, and it is very desirable to know how much one can lose.
The following question naturally arises: how can one measure the strength of algebraic independence? Let us start by considering complex numbers
\begin{equation} \label{intro_numbers_x}
x_1,\dots,x_n\in\mcc.
\end{equation}
By definition, these complex numbers are algebraically independent if (and only if) for any non-zero polynomial $P\in\mqq[X_1,\dots,X_n]$ one has
\begin{equation*}
P(x_1,\dots,x_n)\ne 0.
\end{equation*}
At first glance, one may try to use as a measure of algebraic independence of numbers~\eqref{intro_numbers_x} the infimum of absolute value
\begin{equation} \label{intro_av_P}
|P(x_1,\dots,x_n)|
\end{equation}
over all allowed polynomials (that is, $P\in\mqq[X_1,\dots,X_n]\setminus\{0\}$). However, it is easy to see that this infimum is always 0, at least if the degree and height\footnote{The notion of \emph{height} of a polynomial in fact has several meanings in number theory. We understand it in the sense of Weil's logarithmic height. The reader can find the definition in subsection~\ref{sss_Heights}, for instance see~\eqref{defHsimple}.} of the polynomials are unbounded. In view of this, it is natural to compare the rate of decrease of the absolute value~\eqref{intro_av_P} with the corresponding values of the \emph{degree} and \emph{height} of $P$.
\begin{definition} \label{def_mia}
One says that a function $\phi:\mnn\times\mrr\rightarrow\mrr^+$ is a measure of algebraic independence of the set of complex numbers~\eqref{intro_numbers_x} if and only if the following inequality is verified for any non-zero polynomial $P\in\mzz[X_1,\dots,X_n]$
\begin{equation} \label{def_mai_lb}
|P(x_1,\dots,x_n)|>\phi(\deg(P),h(P)).
\end{equation}
\end{definition}
In the case of formal power series, or analytic (at $z=0$) functions~\eqref{intro_functions_f} we can apply essentially the same definition.
The natural choice of absolute value is that coming from the order of vanishing at the origin:
\begin{equation} \label{intro_expordz}
|g(z)|=\exp(-\ordz g(z)).
\end{equation}
One readily verifies that this is indeed an (ultrametric) absolute value. The \emph{height} of a polynomial $P\in\mcc[z][X_1,\dots,X_n]$ in this case is equal to $\deg_z P$.
So in the case of functional fields, the inequality~\eqref{def_mai_lb} takes the following form
\begin{equation} \label{def_mai_lb_functions}
\exp\left(-\ordz P(f_1(z),\dots,f_n(z))\right)>\phi(\deg_z(P),\deg_{\ul{X}}(P)).
\end{equation}
Let
$$
F(\deg_z(P),\deg_{\ul{X}}(P)):=-\log\left(\phi(\deg_z(P),\deg_{\ul{X}}(P))\right).
$$
Then taking logarithms of both sides of~\eqref{def_mai_lb_functions} and changing signs in~\eqref{def_mai_lb_functions} we may rewrite this inequality as
\begin{equation*}
\ordz P(f_1(z),\dots,f_n(z))<F(\deg_z(P),\deg_{\ul{X}}(P)),
\end{equation*}
which coincides with~\eqref{def_LM_a}. Hence the \emph{multiplicity estimate}~\eqref{def_LM_a} for a set of algebraically independent functions $f_1(z),\dots,f_n(z)$ is nothing else but the measure of algebraic independence of these functions. Note for instance that if we provide an upper bound $F(X,Y)$ in~\eqref{def_LM_a} with \emph{a slow rate of growth}, then we assure that the function $\phi$ in~\eqref{def_mai_lb_functions} has \emph{a slow rate of decrease}, so functions $(f_1(z),\dots,f_n(z))$ have a large measure of algebraic independence, in the sense we have explained just above.
There is a natural limit to how much the function $F$ on the r.h.s. of~\eqref{def_LM_a} can be improved. Indeed, if the functions $f_1,\dots,f_n$ are all algebraically independent we necessarily have
\begin{equation} \label{intro_F_lb}
F(X,Y)>\lceil(X+1)(Y+1)^n/n!\rceil.
\end{equation}
That is, for any $X,Y\in\mnn$\label{pl_F_is_large} we can construct a polynomial $P_{X,Y}$ of degree in $z$ at most $X$ and of degree in $X_1,\dots,X_n$ at most $Y$ verifying
\begin{align*}
\ordz P_{X,Y}(z,f_1(z),\dots,f_n(z))&\geq\lceil\frac{1}{n!}(X+1)(Y+1)^n\rceil\\&\geq\lceil\frac{1}{n!}\left(\deg_z(P)+1\right)\left(\deg_{\ul{X}}P+1\right)^n\rceil.
\end{align*}
To see this, consider the monomials $m_{k_0,k_1,\dots,k_n}=Z^{k_0}X_1^{k_1}\dots X_n^{k_n}$ with $0\leq k_0\leq X$ and $0\leq \sum_{i=1}^nk_i\leq Y$. There are $(X+1)\binom{Y+n}{n}\geq\lceil(X+1)(Y+1)^n/n!\rceil$ such monomials. The polynomial with indeterminate coefficients $c_{k_0,k_1,\dots,k_n}$
$$
Q(Z,X_1,\dots,X_n)=\sum_{\substack{0\leq k_0\leq X\\0\leq \sum_{i=1}^nk_i\leq Y}}c_{k_0,k_1,\dots,k_n}m_{k_0,k_1,\dots,k_n}
$$
has degree $\leq X$ in $Z$ and $\leq Y$ in $X_1,\dots,X_n$, in particular for any specialization of the coefficients $c_{k_0,k_1,\dots,k_n}$. If we substitute $z,f_1(z),\dots,f_n(z)$ in $Q$ we obtain an analytic function
$$
g(z)=Q(z,f_1(z),\dots,f_n(z)).
$$
Every coefficient in the Taylor series of this function is a linear form in the $c_{k_0,k_1,\dots,k_n}$. Hence by basic linear algebra we can find a non-trivial set of coefficients $c_{k_0,k_1,\dots,k_n}$ such that the first $\lceil(X+1)(Y+1)^n/n!\rceil-1$ coefficients of $g(z)$ vanish. For this set of coefficients we have
$$
\ordz Q(z,f_1(z),\dots,f_n(z))=\ordz g(z)\geq\lceil(X+1)(Y+1)^n/n!\rceil,
$$
hence the claim.
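The construction above is completely effective. As an illustration of ours (not part of the argument), the following Python/\texttt{sympy} sketch carries it out for the toy choice $n=1$, $f_1(z)=e^z$, $X=Y=2$, producing a non-zero polynomial $Q$ with $\ordz Q(z,f_1(z))\geq(X+1)(Y+1)-1$:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
f1 = sp.exp(z)            # illustrative choice of f_1
X, Y = 2, 2               # bounds on deg_z and deg_{X_1}
monoms = [(k0, k1) for k0 in range(X + 1) for k1 in range(Y + 1)]
T = len(monoms) - 1       # number of Taylor coefficients to annihilate

# Row t of M holds the z^t Taylor coefficient of each monomial z^k0 * f1^k1.
M = sp.zeros(T, len(monoms))
for j, (k0, k1) in enumerate(monoms):
    series = sp.series(z**k0 * f1**k1, z, 0, T).removeO()
    poly = sp.Poly(series, z)
    for t in range(T):
        M[t, j] = poly.coeff_monomial(z**t)

c = M.nullspace()[0]      # non-trivial kernel vector: coefficients of Q
g = sum(c[j] * z**k0 * f1**k1 for j, (k0, k1) in enumerate(monoms))
print(sp.series(g, z, 0, T + 3))   # vanishes at z=0 at least to order T
\end{verbatim}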
If the functions $f_1,\dots,f_n$ are algebraically dependent, we cannot provide an upper bound~\eqref{def_LM_a} valid for all non-zero polynomials: we have to exclude the ideal of polynomials vanishing at $\left(z,f_1(z),\dots,f_n(z)\right)$, which we denote by $\idp_{\ul{f}}$. In this case, the linear-algebra considerations that justify~\eqref{intro_F_lb} cannot be applied to the linear space of all polynomials; instead we have to consider the linear space of polynomials of bi-degree bounded by $(X,Y)$ modulo $\idp_{\ul{f}}$. The dimension of this space is bounded from below~\cite{PP2000,PP} by a constant times $(X+1)(Y+1)^t$, where $t$ denotes the transcendence degree
\begin{equation} \label{intro_tr_degree}
t:=\trdeg_{\kk(z)}\kk(z)\left(f_1(z),\dots,f_n(z)\right).
\end{equation}
We thus naturally have to replace the parameter $n$ in~\eqref{intro_F_lb} by the transcendence degree $t$.
To illustrate this at a more elementary level, let
\begin{equation} \label{intro_f_t}
f_1,\dots,f_t
\end{equation}
be algebraically independent (over $\kk(z)$) functions, and let $n>t$. Consider the $n$-tuple of functions
\begin{equation} \label{intro_f_nt}
(f_1,\dots,f_t,f_t,\dots,f_t),
\end{equation}
that is, we complete the $t$-tuple~\eqref{intro_f_t} by $n-t$ copies of $f_t$. Clearly, the set of analytic functions that we can realize by substituting the functions~\eqref{intro_f_nt} in the polynomials from $\kk[z][X_0,\dots,X_n]$ coincides with the set of functions that we can realize by substituting the functions~\eqref{intro_f_t} in the polynomials from $\kk[z][X_0,\dots,X_t]$. In other terms, the additional copies of $f_t$ bring us no extra flexibility to increase the order of vanishing at $z=0$.
Thus the best possible function that we can have at the r.h.s. of~\eqref{def_LM_a} is
$$
C\left(\deg_{z}(P)+1\right)\left(\deg_{\ul{X}}(P)+1\right)^t,
$$
where $t$ is the transcendence degree~\eqref{intro_tr_degree}.
Another fact that we should keep in mind when proving estimates of the type~\eqref{def_LM_a} is that there are sets of functions that refute any given r.h.s. $F(X,Y)$ in this inequality. This happens exactly when the field $\kk(z,f_1,\dots,f_n)$ contains (very) lacunary series. For instance, let $g:\mrr^+\to\mrr^+$ be a function monotonically tending to infinity and satisfying $g(g(x))\geq g(x)+1$ for every $x\in\mrr^+$. Define $a_0=1$, $a_{n+1}=g(a_n)$ and $f_1(z)=\sum_{k=0}^{\infty}z^{a_k}$. Then the polynomial $P_N(z,X_1):=X_1-\sum_{k=0}^{N}z^{a_k}$ satisfies $\ordz P_N(z,f_1(z))=a_{N+1}=g(a_N)=g\left(\deg_z P_N\right)$, so the order of vanishing grows at least as fast as $g$ applied to the degree.
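For a concrete instance, take $g(x)=2^x$: then $g(g(x))=2^{2^x}\geq 2^x+1$ for all $x\geq 0$, the exponents $a_{k+1}=2^{a_k}$ grow as a tower of exponentials ($a_0=1$, $a_1=2$, $a_2=4$, $a_3=16$, $a_4=65536$, \dots), and
\begin{equation*}
\ordz P_N(z,f_1(z))=a_{N+1}=2^{a_N}=2^{\deg_z P_N},
\end{equation*}
so the order of vanishing grows exponentially in the degree and no polynomial bound $F$ in~\eqref{def_LM_a} can hold for this $f_1$.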
At the same time, quite a lot of the interest in multiplicity lemmas comes from their potential applications to the problem of algebraic independence of values of analytic functions. Here it is worth mentioning that a result of the type~\eqref{intro_trdegf_eq_trdegalpha} does not hold, of course, for arbitrary sets of functions. Already in 1886 Weierstrass constructed an example of a transcendental entire function $\mcc\rightarrow\mcc$ taking rational values at every rational point. In 1895 St\"ackel generalized this result, showing that for every countable subset $\Sigma\subset\mcc$ and every dense subset $D\subset\mcc$ there exists a transcendental entire function $f$ satisfying $f(\Sigma)\subset D$.
For all these reasons, when one aims to prove a multiplicity lemma or a result of the type~\eqref{intro_trdegf_eq_trdegalpha}, one is forced to introduce some extra assumptions on the functions in question. Almost always these extra assumptions include the hypothesis of some functional relations satisfied by the set $f_1(z),\dots,f_n(z)$. In the modern theory of algebraic independence these functional relations most often take one of the following two forms.
\begin{enumerate}
\item Differential system. Typically one considers a system
\begin{equation} \label{intro_diff_system}
\frac{d}{dz}\begin{pmatrix}f_1(z)\\\vdots\\ f_n(z)\end{pmatrix}=\begin{pmatrix}R_1(z,f_1(z),\dots,f_n(z))\\\vdots\\ R_n(z,f_1(z),\dots,f_n(z))\end{pmatrix},
\end{equation}
where $R_i(z,X_1,\dots,X_n)$ are rational functions (compare, for instance, with the hypothesis~\eqref{NS_theo_h_system} in the Nesterenko-Shidlovsky theorem).
\item Functional system. Typically it has a form
\begin{equation*}
\begin{pmatrix}f_1(p(z))\\\vdots\\ f_n(p(z))\end{pmatrix}=\begin{pmatrix}R_1(z,f_1(z),\dots,f_n(z))\\\vdots\\ R_n(z,f_1(z),\dots,f_n(z))\end{pmatrix},
\end{equation*}
where $p(z)$ is a rational function of the variable $z$ satisfying $p(0)=0$ and $R_i(z,X_1,\dots,X_n)$, $i=1,\dots,n$, are rational functions of the variables $z,X_1,\dots,X_n$.
For example, when $q$ is a complex number (satisfying $|q|>1$, say) and we set $p(z)=qz$, we find the general case of so called \emph{equations in $q$-differences}, currently widely studied~\cite{ATV2007,AV2003,Bertrand2007a,Bertrand2007,VZ2008}.
In the case $p(z)=z^d$, where $d\geq 2$ is an integer, we find a classical setup of Mahler's method. If we impose the weaker condition $\ordz p(z)\geq 2$ (with no extra assumption on the form of rational function $p(z)$), we find again Mahler's relations, this time understood in a broader sense~\cite{Ni1996,Pellarin2010,ThTopfer1995,EZ2010,EZ2011_2}.
\end{enumerate}
In all these cases there is a large variety of multiplicity lemmas established in various situations~\cite{Bertrand2007a,Bertrand2007,N1977,N1996,ThTopfer,EZ2010,EZ2011}.
The most general results link multiplicity lemmas with properties of ideals stable under an appropriate map.
For example, having a differential system~\eqref{intro_diff_system} we can define the differential operator $D:\kk[Z,X_1,\dots,X_n]\rightarrow\kk[Z,X_1,\dots,X_n]$ by
\begin{multline} \label{intro_def_D}
D(P)(Z,X_1,\dots,X_n)=A_0(Z,X_1,\dots,X_n)\frac{d}{dz}P(Z,X_1,\dots,X_n)\\+\sum_{i=1}^nA_i(Z,X_1,\dots,X_n)\frac{d}{dX_i}P(Z,X_1,\dots,X_n),
\end{multline}
where $A_i\in\mcc[Z,X_1,\dots,X_n]$ are polynomials such that the rational fractions $R_i$ in the system~\eqref{intro_diff_system} can be presented as $R_i=A_i/A_0$.
Note that the definition~\eqref{intro_def_D} assures
$$
D(P)(z,f_1(z),\dots,f_n(z))=A_0(z,f_1(z),\dots,f_n(z))\frac{d}{dz}P(z,f_1(z),\dots,f_n(z)).
$$
We say that an ideal $I$ of the ring $\mcc[Z,X_1,\dots,X_n]$ is $D$-stable iff $D(I)\subset I$.
The following theorem holds.
\begin{theorem}[Nesterenko, see Theorem~1.1 of Chapter~10, \cite{NP}] \label{theoNesterenko_classique}
Suppose that functions
\begin{equation*}
\ull{f} = (f_1(\b{z}),\dots,f_n(\b{z})) \in \mcc[[\b{z}]]^n
\end{equation*}
are analytic at the point $\b{z}=0$ and form a solution of the system~\eqref{intro_diff_system}.
If there exists a constant $K_0$ such that every $D$-stable prime ideal $\idp \subset \mcc[Z,X_1,\dots,X_n]$,
$\idp\ne(0)$, satisfies
\begin{equation} \label{intro_ordIleqKdegI}
\min_{P \in \idp}\ordz P(\b{z},\ull{f}) \leq K_0,
\end{equation}
then there exists a constant $K_1>0$ such that for any polynomial $P \in
\mcc[Z,X_1,\dots,X_n]$, $P\ne 0$, the following inequality holds
\begin{equation} \label{intro_LdMpolynome}
\ordz(P(\b{z},\ull{f})) \leq K_1(\deg_{Z} P + 1)(\deg_{\ul{X}} P + 1)^n.
\end{equation}
\end{theorem}
\begin{remark}
Note that the upper bound~\eqref{intro_LdMpolynome} is the best possible, up to a multiplicative constant $K_1$
(see the discussion on page~\pageref{pl_F_is_large}).
\end{remark}
Note that the condition~\eqref{intro_ordIleqKdegI} can be interpreted as the statement that all the differential ideals in the differential ring $(A,D)$ lie, in a certain sense, not too close to the functional point $(z,f_1(z),\dots,f_n(z))$. This statement was formalized by Nesterenko in~\cite{N1996}, who gave the name ``$D$-property'' to this phenomenon. In fact, this $D$-property is quite mysterious in nature: it seems hard to provide non-trivial examples of differential rings in characteristic 0 not satisfying it. At the same time, Nesterenko's Theorem~\ref{theoNesterenko_classique} shows that this property ensures essentially optimal multiplicity lemmas, hence paving the way for the best possible results on algebraic independence. A famous example in this direction is Nesterenko's proof of the fact that among the four numbers
$$
\e^{2\pi iz}, E_2(z), E_4(z), E_6(z),
$$
where $z\in\mcc\setminus\{0\}$ verifies $0<|\e^{2\pi iz}|<1$ and $E_2$, $E_4$ and $E_6$ are Eisenstein series, at least three are algebraically independent over $\mqq$.
In the context of Mahler's method the corresponding general conditional result was conjectured but remained an open question~\cite{PP_KF} until recently. At the same time, in the context of equations in $q$-differences, D.~Bertrand established its analogue, with a sharp control of the multiplicative constant (corresponding to $K_1$ in~\eqref{intro_LdMpolynome}).
In our works~\cite{EZ2010,EZ2011} we established a common root for all such conditional results. We succeeded in introducing a natural formalism embedding all the situations mentioned above and in proving a conditional result analogous to Nesterenko's conditional multiplicity estimate cited above. In fact, when specialized to the case of differential systems, our result gives the same conclusion as Nesterenko's theorem, and even more: we replace the hypothesis~\eqref{intro_ordIleqKdegI} by a weaker one. Also, in the case of Mahler's method it gives the forecasted analogue of Nesterenko's theorem (again, in a reinforced form). Further analysis of stable ideals in the polynomial ring allowed us to deduce a new multiplicity estimate within the context of Mahler's method and, as a consequence, to provide new results on algebraic independence~\cite{EZ2010,EZ2011_2}.
At the same time this general result has a drawback: it was established only for the case of \emph{algebraically independent} functions $(f_1,\dots,f_n)$. However, in many situations of interest one may need a multiplicity estimate for \emph{algebraically dependent} functions. For example, in the context of Mahler's method, when applied to generating series of finite automata, it is quite usual to complete a set $f_1,\dots,f_r$ with some new functions $f_{r+1},\dots,f_n$ in order to form a complete solution of a system of functional equations. These functions sometimes turn out to be algebraically dependent on $f_1,\dots,f_r$ (over $\mcc(z)$). So even when we aim to prove that the values $f_1(\alpha),\dots,f_r(\alpha)$ are all algebraically independent, it may be very useful to be able to treat the case of algebraically dependent functions.
In this article we develop further and extend the techniques elaborated in~\cite{EZ2010} and~\cite{EZ2011}. We obtain a general multiplicity estimate, see Theorem~\ref{LMGP}, optimal up to a multiplicative constant and applicable in the case of algebraically dependent functions.
There is a subtle point concerning stable ideals in the case when the functions $f_1,\dots,f_n$ are algebraically dependent, that is, using the notation $\idp_{\ul{f}}$ introduced above, when $\idp_{\ul{f}}\ne\{0\}$. The point is that in the case of a differential system, as well as in our more general framework with the map $\phi$, the ideal $\idp_{\ul{f}}$ is $\phi$-stable. At the same time, the distance from the corresponding variety to $\ul{f}$ is 0, as $\idp_{\ul{f}}$ vanishes at $\ul{f}$. This fact immediately implies that the $D$-property~\cite{N1996,NP}, as well as the weak $\phi$-property~\cite{EZ2010,EZ2011}, fail. Hence, in all the previous variants of the theorems, this important hypothesis automatically fails as soon as the functions in question are not algebraically independent.
However, we have refined our formalism so as to exclude from consideration all the ideals that vanish at $\ul{f}$, hence removing this difficulty.
Theorem~\ref{intro_LMGP} below presents a simplified version of the central result of this article (for the full statement, we refer the reader to Theorem~\ref{LMGP}). In this theorem we assume the following situation.
Let $\kk$ an algebraically closed field and let $\A=\kk[X_0',X_1',X_0,\dots,X_n]$ be a polynomial ring bi-graduated with respect to $\left(\deg_{\ul{X}'},\deg_{\ul{X}}\right)$. Consider a point
\begin{equation*}
\ul{f}=\left(1:z,1:f_1(z):\dots:f_n(z)\right)\in\mpp^1_{\kk[[z]]}\times\mpp^n_{\kk[[z]]}
\end{equation*}
and a map $\phi:\A\rightarrow\A$. We assume that the map $\phi$ is $\ul{f}$-admissible; this notion is introduced in Definition~\ref{def_admissible}. However, on first acquaintance the reader may find it more comfortable to postpone the reading of this definition and simply keep in mind that both derivations and algebraic morphisms non-degenerate at the point $\ul{f}$ are $\ul{f}$-admissible; this notion is a common generalization of these two kinds of maps.
We denote by $t_{\ul{f}}$ the transcendence degree
\begin{equation} \label{def_r}
t_{\ul{f}}:=\trdeg_{\kk(z)}\kk(z)\left(f_1(z),\dots,f_n(z)\right),
\end{equation}
by $\idp_{\ul{f}}$ the bi-homogeneous ideal of polynomials from $\A$ vanishing at $\ul{f}$ and by $r_{\ul{f}}$ the rank of the ideal $\idp_{\ul{f}}$. Note that in view of these definitions we have $t_{\ul{f}}+r_{\ul{f}}=n$.
In the statement of Theorem~\ref{intro_LMGP} we also use the notation $m(I)$. It is introduced formally in Definition~\ref{definDePP}; informally it can be interpreted as the number of irreducible components (counted with multiplicities) of the variety $\V(I)$ associated to the ideal $I$. We also use the quantity $\ord_{\ull{f}}\idq$, introduced in Definition~\ref{defin_ord_xy}. Informally speaking, it measures how close the point $\ul{f}$ is to the zero locus of the ideal $\idq$: the bigger the quantity $\ord_{\ull{f}}\idq$, the closer the point $\ul{f}$ is to the zero locus of $\idq$. In the extreme case, when all polynomials from $\idq$ vanish at $\ul{f}$, we have $\ord_{\ull{f}}\idq=+\infty$. On a first reading, the reader may find it convenient to think of $\min_{P\in\idq}\ordz P(\ul{f})$ in place of $\ord_{\ull{f}}\idq$. In many situations these two quantities coincide, and in any case we have $\min_{P\in\idq}\ordz P(\ul{f})\geq\ord_{\ull{f}}\idq$.
Finally, we say a few words about the $\left(\phi,\cK\right)$-property, which plays an important role in the statement of Theorem~\ref{intro_LMGP}. This property is described in Definition~\ref{def_weakDproperty}. The $\cK$ in the notation refers to a family of bi-homogeneous ideals of the ring $\A$. We say that the $\left(\phi,\cK\right)$-property holds if for every ideal $I\in\cK$ that satisfies $\phi(I)\subset I$ we can find a prime factor $\idq\in\Ass\left(\A/I\right)$ that admits a nice upper bound for $\ord_{\ul{f}}\idq$ (informally speaking, $\idq$ is not too close to the point $\ul{f}$).
\begin{theorem}[Formal multiplicity lemma, simplified version]\label{intro_LMGP} Let $\kk$, $\A$, $\ul{f}$ and $\phi$ be as above, and let $C_0, C_1\in\mrr^+$. Assume that the map $\phi$ is $\ul{f}$-admissible. We denote by $\cK$ the set of all equidimensional bi-homogeneous ideals $I\subset\AnneauDePolynomes$ of rank $\geq 1+r_{\ul{f}}$, such that $\idp_{\ul{f}}\subsetneq I$, $\ul{f}\not\in\V(I)$ and $m(I)\leq C_m$ ($C_m$ is an absolute constant introduced in Definition~\ref{def_Cm}),
and moreover such that all its associated prime ideals satisfy
\begin{equation} \label{theoLMGP_condition_ordp_geq_C0}
\ord_{\ull{f}}\idq \geq C_0.
\end{equation}
Assume also that $\ul{f}$ has the $\left(\phi,\cK\right)$-property (see Definition~\ref{def_weakDproperty}).
Then there exists a constant $K>0$ such that every $P \in
\AnneauDePolynomes$ satisfying $P(1,z,1,f_1(z),\dots,f_n(z))\ne 0$
also satisfies
\begin{equation} \label{LdMpolynome2}
\ordz(P(\ullt{f})) \leq K\left((\mu+\nu_0)(\deg_{\ul{X}'}P+1)+\nu_1\deg_{\ul{X}}P\right)\times\mu^{n-1}(\deg_{\ul{X}} P + 1)^{t_{\ul{f}}}.
\end{equation}
\end{theorem}
It is now a good moment to say a few words about the ideas used in our proof; we present a short overview in the next few paragraphs. At some points, for the sake of simplicity, we simplify some formulae compared to those given in the main text. The reader will find later that the principles presented here work equally well with the heavier variants from the main text.
We start with a polynomial
\begin{equation} \label{intro_explanation_start_point}
P(X_0',X_1',X_0,\dots,X_n)\in\A:=\kk[X_0',X_1',X_0,\dots,X_n]
\end{equation}
bi-homogeneous in the groups of variables $\ul{X}'$ and $\ul{X}$. To establish a multiplicity lemma, we have to provide an upper bound for the order of vanishing of this polynomial at the functional point $\ul{f}:=(1,z,1,f_1(z),\dots,f_n(z))$. From a point of view that we clarify in this article, a large order of vanishing $\ordz P(1,z,1,f_1(z),\dots,f_n(z))$ can be interpreted as a small projective distance from the point $\ul{f}\in\mpp^1\times\mpp^n$ to the (bi-projective) hypersurface defined by the polynomial $P$.
We use a transference lemma, recently established by P.~Philippon~\cite{PP} (see section~\ref{section_transference_lemma} of this article), to find in this situation an algebraic point $\ul{\alpha}$ with a small projective distance to $\ul{f}$, lying in the zero locus of $P$ and such that every polynomial from $\A$ (see~\eqref{intro_explanation_start_point}) vanishing at $\ul{f}$ also vanishes at $\ul{\alpha}$. To this $\ul{\alpha}$ we associate a certain couple of integers $(\delta_0,\delta_1)$, defined by the following two properties:
\begin{enumerate}
\item \label{intro_point_one} there exists a bi-homogeneous polynomial $Q\in\A$ of bi-degree $(\delta_0,\delta_1)$ vanishing at $\ul{\alpha}$ and not vanishing at $\ul{f}$, and
\item the couple of integers $(\delta_0,\delta_1)$ minimizes (over the polynomials satisfying point~\ref{intro_point_one}) a certain linear form related to $\idp$, the ideal of definition of $\ul{\alpha}$ (more precisely, it minimizes the linear form~\eqref{def_delta_condition_minimum_crossproduct} given below, where one substitutes $I:=\idp$; the absolute positive constants $\mu$, $\nu_0$ and $\nu_1$ are defined within the general framework).
\end{enumerate}
To clarify the situation a little, the analogue of the couple $(\delta_0,\delta_1)$ in a projective (rather than bi-projective) space would be the minimal possible degree of a homogeneous polynomial vanishing at $\ul{\alpha}$ and not vanishing at $\ul{f}$.
In the subsequent paragraphs we use the notation $\idp_{\ul{f}}$ for the bi-homogeneous ideal of polynomials from $\A$ vanishing at $\ul{f}$.
It appears that the polynomials vanishing at $\ul{\alpha}$, not vanishing at $\ul{f}$ and of bi-degree comparable to $(\delta_0,\delta_1)$ have nice properties allowing us to complete our proof. In our general framework, we consider a map $\phi:\A\rightarrow\A$ such that the bi-degree of $\phi(P)$ can be controlled in terms of the bi-degree of $P$, and the order of vanishing at $\ul{f}$ of $\phi(P)$ can be controlled in terms of the order of vanishing at $\ul{f}$ of $P$. We introduce constants $\rho_i$, $i=1,\dots,n+1$ (see Definition~\ref{V_irho_i}), which depend on the transformation $\phi$ only, and consider ideals, denoted $I(V_i,\idp)$, $i=1,\dots,n+1$, constructed as follows. We take the ideal generated by $\idp_{\ul{f}}$ and all the polynomials of bi-degree at most $\rho_i$ times bigger than $(\delta_0,\delta_1)$ (see Definition~\ref{V_irho_i} for a more precise formula) vanishing at $\ul{\alpha}$ and not vanishing at $\ul{f}$, consider all its minimal primary factors that belong to the ideal $\idp$ (the ideal of definition of $\ul{\alpha}$), and take their intersection. From the geometrical point of view, we intersect the variety corresponding to $\idp_{\ul{f}}$ with all the (bi-projective) hypersurfaces defined over $\kk$, passing through $\ul{\alpha}$, not passing through $\ul{f}$ and of bi-degree bounded by $(\rho_i\delta_0,\rho_i\delta_1)$, and in this complete intersection we choose the irreducible varieties passing through $\ul{\alpha}$.
The crucial property is that the number of irreducible components, counted together with their multiplicities, in the variety corresponding to $I(V_i,\idp)$ is bounded by a constant depending on $\rho_i$ only (see Lemma~\ref{LemmeCor14NumberW}). Using this property, we deduce that either the dimension of $I(V_i,\idp)$ is at most $n+1-i$, or the radical of at least one of the primary components of this ideal is a $\phi$-stable ideal. In the latter case, we use the fact that all the primary components of $I(V_i,\idp)$ are contained in $\idp$, the ideal of definition of $\ul{\alpha}$. This property readily implies that all the components of $I(V_i,\idp)$ are sufficiently close to the point $\ul{f}$ (as the point $\ul{\alpha}$ was constructed to be close to $\ul{f}$). On the other hand, using a variant of B\'ezout's theorem, we provide a nice control of the bi-degree of $I(V_i,\idp)$, hence of all its primary components. These two bounds put together contradict the $(\phi,\cK)$-property introduced in Definition~\ref{def_weakDproperty}. So, assuming in our main result, Theorem~\ref{LMGP}, that the $(\phi,\cK)$-property holds, we exclude this possibility.
To complete the proof, we remark that if the dimension of $I(V_i,\idp)$ is at most $n+1-i$ for $i=1,\dots,n+1$, then $I(V_{n+1},\idp)$ is necessarily 0-dimensional, and this is impossible, as by construction all the minimal prime ideals of $I(V_{n+1},\idp)$ are contained in the one-dimensional ideal $\idp$.
To finish this introductory part, we remark that in fact we can consider, instead of only one map $\phi$, a (possibly infinite) family of maps $\phi_i$, $i\in I$. All we need to verify is that
\begin{enumerate}
\item all these maps satisfy the properties~\eqref{degphiQleqdegQ} and~\eqref{condition_T2_facile}, presented below, with uniform constants $\lambda$, $\mu$, $\nu_0$ and $\nu_1$ and
\item all these maps are locally correct at $\ul{f}$ (see Definition~\ref{def_locally_correct}); for example, this second point holds automatically if these maps are derivations or dominant algebraic morphisms (see~\cite{EZ2011}, section~2.2).
\end{enumerate}
If these two conditions are satisfied, the proofs presented in this article transfer verbatim to the more general situation, with the transformation $\phi$ replaced by the family $\phi_i$, $i\in I$. In this case, the condition on $\phi$-stable ideals is replaced by the same condition on the ideals stable under all the $\phi_i$, $i\in I$. In certain situations this can significantly restrict the set of ideals that need to be studied.
Nevertheless, in this paper we restrict our considerations to the case of a single transformation. The reason is, on the one hand, the wish not to overload the paper with technical details, as it is already quite complicated in this respect. On the other hand, the reader who takes the effort to work through the proofs in this article will find it easy to pass to the case of many transformations.
\section{Framework, definitions and first properties}
\subsection{General framework} \label{subsection_general_framework}
As in~\cite{EZ2011}, we start with a section recalling the general framework of our studies (see~\cite{EZ2010}).
We denote by $\kk$ a (commutative) algebraically closed field of any characteristic, and by $\A$ a ring of polynomials with coefficients in $\kk$: $\A=\kk[X_0',X_1'][X_0,...,X_n]$. We consider the ring $\A$ as bi-graduated with respect to $\deg_{\ul{X}'}$ and $\deg_{\ul{X}}$.
\begin{remark}
The assumption that the field $\kk$ is algebraically closed is in fact not a restriction. We readily extend our results to an arbitrary field using the embedding of a field into its algebraic closure, $\kk\subset\ol{\kk}$. We refer the reader to~\cite{EZ2011}, Remark~2.1, for more details.
\end{remark}
We fix a set of functions $f_1(z),\dots,f_n(z)\in\kk[[z]]$ and we denote by $\ul{f}$ the tuple $(1,z,1,f_1(z),\dots,f_n(z))$. Note that one can consider $\ul{f}$ as a system of projective coordinates of a point $(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}$. By a slight abuse of notation we sometimes also write $\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}$. Our final aim in this article is to provide a~\emph{multiplicity estimate} for the functions $f_1(z),\dots,f_n(z)$.
The main difference from our previous article~\cite{EZ2011} is that we drop the assumption that all the functions $f_1,\dots,f_n$ are algebraically independent over $\kk(z)$. The following definition introduces the notions that allow us to control this dependence.
\begin{definition} \label{def_tf}
We denote by $\idpf$ the bi-homogeneous ideal of polynomials $P\in\A$ satisfying $P(\ul{f})=0$. Also, we denote by $C_{\ul{f}}\geq 1$ a constant such that the ideal $\idpf$ is defined by polynomials of bi-degree bounded by $\left(C_{\ul{f}},C_{\ul{f}}\right)$, i.e. $(\deg_{\ul{X}'}P,\deg_{\ul{X}}P)\leq\left(C_{\ul{f}},C_{\ul{f}}\right)$. We denote by $t=t_{\ul{f}}$ the transcendence degree
\begin{equation*}
t=t_{\ul{f}}:=\trdeg_{\kk(z)}\kk(z)\left(f_1(z),\dots,f_n(z)\right)
\end{equation*}
and by $r_{\ul{f}}$ the rank of the bi-homogeneous ideal $\idp_{\ul{f}}$.
In view of these definitions we have the equality
$$
t_{\ul{f}}+r_{\ul{f}}=n.
$$
\end{definition}
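\begin{remark}
As a simple orienting example (ours): let $n=2$ and $f_2=f_1$, with $f_1$ transcendental over $\kk(z)$. Then $X_2-X_1\in\idpf$, and in fact $\idpf=(X_2-X_1)$, so one may take $C_{\ul{f}}=1$; moreover $t_{\ul{f}}=1$ and $r_{\ul{f}}=1$, in accordance with $t_{\ul{f}}+r_{\ul{f}}=n$.
\end{remark}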
To provide a multiplicity estimate for $\ul{f}$ we need an additional structure, encoded in the properties of a map $\phi$ introduced below. We do not suppose \emph{a priori} that $\phi$ respects any classical structure defined on $\A$, for example that of the polynomial ring. Instead we impose some conditions on this map that are suitable for our purposes and the applications we have in mind; for instance, we assume~(\ref{degphiQleqdegQ}) and~(\ref{condition_T2_facile}) below.
We fix a bi-homogeneous map $\phi:\A\rightarrow\A$ such that for every bi-homogeneous polynomial
$Q\in\AnneauDePolynomes$ one has
\begin{equation} \label{degphiQleqdegQ}
\begin{aligned}
&\deg_{\ul{X}} \phi(Q) \leq \mu \deg_{\ul{X}} Q,\\
&\deg_{\ul{X}'} \phi(Q) \leq \nu_0 \deg_{\ul{X}'} Q + \nu_1 \deg_{\ul{X}} Q
\end{aligned}
\end{equation}
with some positive constants $\mu, \nu_0>0$ and a non-negative constant $\nu_1$.
\begin{notation}
We denote by $\phi^N$ the $N$-th iteration of the map $\phi$.
\end{notation}
By induction on $N$, using the hypothesis~\eqref{degphiQleqdegQ}, one readily establishes the following lemma.
\begin{lemma} \label{majorationphinQ} Let $N$ be a positive integer and $Q\in\AnneauDePolynomes$ be a bi-homogeneous polynomial. Then,
\begin{eqnarray}
\deg_{\ul{X}}\phi^N(Q) &\leq& \mu^N\deg_{\ul{X}}Q, \label{majorationdegXphinQ} \\
\deg_{\ul{X'}}\phi^N(Q) &\leq& \nu_0^N\deg_{\ul{X'}}Q+\nu_1\left(\sum_{i=0}^{N-1}\nu_0^{N-i-1}\mu^i\right)\deg_{\ul{X}}Q. \label{majorationdegXprimephinQ}
\end{eqnarray}
\end{lemma}
\begin{proof}
See~\cite{EZ2011}, Lemma~2.5.
\end{proof}
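As a quick sanity check, iterating~\eqref{degphiQleqdegQ} twice gives
\begin{equation*}
\deg_{\ul{X'}}\phi^2(Q)\leq\nu_0\left(\nu_0\deg_{\ul{X'}}Q+\nu_1\deg_{\ul{X}}Q\right)+\nu_1\mu\deg_{\ul{X}}Q=\nu_0^2\deg_{\ul{X'}}Q+\nu_1(\nu_0+\mu)\deg_{\ul{X}}Q,
\end{equation*}
which is precisely~\eqref{majorationdegXprimephinQ} with $N=2$.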
\hidden{
\begin{notabene}
Constants $\mu$, $\nu_0$ and $\nu_1$ appear quite often in this text, for example they are used in Definition~\ref{def_delta} further within this section. All the time these letters denote \emph{the same} constants. That is, we apply all the machinery presented in this paper every time to \emph{just one} map $\phi$ (though this map is arbitrary in the limits described in this section), in the sense that definitions such as Definition~\ref{def_delta} depend on the choice of $\phi$, more precisely they depend on the constants $\mu$, $\nu_0$ and $\nu_1$ in~\eqref{degphiQleqdegQ}. This fact does not restrain the generality of our considerations, but it may be useful to note at this point that constants $\mu$, $\nu_0$ and $\nu_1$ from property~\eqref{degphiQleqdegQ} are considered as absolute in what follows.
\end{notabene}
}
We assume that there exist two constants $\lambda>0$ and $K_{\lambda}\geq 0$ such that
\begin{equation} \label{condition_T2_facile}
\ordz \phi(Q)(\ul{f}) \geq \lambda \, \ordz(Q(\ul{f}))
\end{equation}
for all bi-homogeneous polynomials $Q\in\AnneauDePolynomes$ satisfying $\ordz(Q(\ul{f}))\geq K_{\lambda}$.
Two typical examples of maps $\phi$ satisfying~\eqref{degphiQleqdegQ} and~\eqref{condition_T2_facile} are derivations and algebraic morphisms; a verification can be found in~\cite{EZ2011} (see Lemma~2.5 \emph{loc.cit.}).
Our principal result, Theorem~\ref{LMGP}, is proved for maps satisfying these assumptions, as well as one additional assumption described in Definition~\ref{defin_phiestcorrecte}.
\begin{remark}
We will need to consider $\phi$ as acting on $\kk[\b{z}][X_0,\dots,X_n]$ by setting
\begin{equation}
\b{\phi}(\b{Q})=\phi\left({X_0'}^{\deg_{\b{z}}\b{Q}}\,\b{Q}\left(X_1'/X_0',X_0,\dots,X_n\right)\right)\Bigg|_{(X_0',X_1')=(1,\b{z})}
\end{equation}
for all $\b{Q}\in\kk[\b{z}][X_0,\dots,X_n]$ homogeneous in~$X_0,\dots,X_n$.
This map $\b{\phi}$ satisfies
\begin{equation} \label{hphiQleqhQ}
\begin{aligned}
\deg_{\ul{X}} \b{\phi}(\b{Q}) &\leq \mu \deg_{\ul{X}} \b{Q},\\
h(\b{\phi}(\b{Q}))=\deg_{\b{z}} \b{\phi}(\b{Q}) &\leq \nu_0 \deg_{\b{z}} \b{Q} + \nu_1\deg_{\ul{X}} \b{Q} \\&\leq \nu_0 h(\b{Q}) + \nu_1\deg\b{Q}.
\end{aligned}
\end{equation}
\end{remark}
\subsection{Definitions and properties related to commutative algebra} \label{definitions_comm_algebra}
\begin{definition}
Let $I\subset\AnneauDePolynomes$ be a bi-homogeneous ideal. We denote by $\V(I)$ the sub-scheme of $\mpp^1\times\mpp^n$ defined by $I$.
Conversely, for any sub-scheme $V$ of $\mpp^1\times\mpp^n$ we denote by $\I(V)$ the bi-homogeneous saturated ideal in $\AnneauDePolynomes$ that defines $V$.
\end{definition}
\begin{definition} \label{def_I}
Let $V$ be a $\kk$-linear subspace of $\A$ and $\idp\subset\A$ a prime ideal. We define $I(V,\idp)$ to be the smallest bi-homogeneous ideal of $\AnneauDePolynomes$ containing $(V\AnneauDePolynomes_{\idp})\cap\AnneauDePolynomes$,
where $\AnneauDePolynomes_{\idp}$ denotes the localization of $\AnneauDePolynomes$ by $\idp$ and $V\AnneauDePolynomes_{\idp}$ denotes
the ideal generated in $\AnneauDePolynomes_{\idp}$ by elements of $V$.
\end{definition}
\begin{remark}
The ideal $I(V,\idp)$ is the intersection of the primary components of $V\A$ contained in $\idp$.
\end{remark}
\begin{definition}\label{definIdealTstable}
We say that an ideal $I \subset \AnneauDePolynomes$ is \emph{$\phi$-stable} if and only if $\phi(I) \subset I$.
\end{definition}
\begin{definition} \label{def_eqI} Let $I$ be a bi-homogeneous ideal of the ring $\A$ and
\begin{equation}
I = \mathcal{Q}_1 \cap \dots \cap \mathcal{Q}_r \cap
\mathcal{Q}_{r+1} \cap ... \cap \mathcal{Q}_{s}
\end{equation}
be its primary decomposition, where
$\mathcal{Q}_1,\dots,\mathcal{Q}_r$ are the bi-homogeneous primary ideals associated to the primes of minimal rank (i.e. of rank
$\rg(I)$) and $\mathcal{Q}_{r+1},\dots,\mathcal{Q}_{s}$
correspond to the components of rank
strictly bigger than $\rg(I)$.
We denote by
\begin{equation}
\eq(I) \eqdef \mathcal{Q}_1 \cap \dots \cap \mathcal{Q}_r
\end{equation}
the equidimensional part of the minimal rank of $I$.
\end{definition}
We now give a preliminary definition; it will be needed in Definition~\ref{def_admissible}, which introduces a property important for our main result.
\begin{definition} \label{defin_phiestcorrecte} We say that a map $\phi:\A\rightarrow\A$ is {\it correct with respect to the ideal $\idp\subset\AnneauDePolynomes$} if for every ideal
$I$, such that all its associated primes are contained in $\idp$, the inclusion
\begin{equation}\label{defin_phi_phiI_subset_eqI}
\phi(I)\subset\eq(I)
\end{equation}
implies
\begin{equation}\label{defin_phi_phieqI_subset_eqI}
\phi(\eq(I))\subset\eq(I)
\end{equation}
(recall that $\eq(I)$ is introduced in Definition~\ref{def_eqI}).
\end{definition}
Two important examples of correct morphisms are derivations and (dominant) algebraic morphisms (see~\cite{EZ2011}, section~2.2 for proofs and some more discussions on the class of correct maps).
\begin{definition}\label{definDePP}\begin{enumerate}
\item Let $\idp$ be a prime ideal of the ring $\AnneauDePolynomes$, $V$ a $\kk$-linear subspace
of $\AnneauDePolynomes$ and $\phi$ a (set-theoretical) map of $\AnneauDePolynomes$ to itself. Then
\begin{equation} \label{definDePP_defin_e}
e_{\phi}(V,\idp) \eqdef \max(e\,\vline\,\rg\left((V+\phi(V)+...+{\phi}^e(V))\AnneauDePolynomes_{\idp}\right)=\rg\left(V\AnneauDePolynomes_{\idp}\right)).
\end{equation}
\item Let $\mathcal{R}$ be a ring and $M$ be an $\mathcal{R}$-module. We denote by $l_{\mathcal{R}}(M)$ the length of $M$ (see p.~72 of~\cite{Eis} for the definition). In fact we shall use this definition only in the case $\mathcal{R}=\AnneauDePolynomes_{\idp}$ and $M=(\AnneauDePolynomes/I)_{\idp}$, where $I$ denotes an ideal of $\A$.
\item Let $I$ be a proper ideal of the ring $\AnneauDePolynomes$,
\begin{equation} \label{definDePP_defin_m}
m(I)=m(\eq(I)) \eqdef \sum_{\idp\in\Spec(\AnneauDePolynomes)\,\vline\,\rg(\idp)=\rg(I)}l_{\AnneauDePolynomes_{\idp}}((\AnneauDePolynomes/I)_{\idp}) \in \mnn^*.
\end{equation}
\end{enumerate}
\end{definition}
Note that the quantity $m(I)$ is the number of primary components of $I$, counted with their lengths as multiplicities.
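For instance (a toy illustration): for the principal ideal $I=(X_0^2)\subset\A$ we have $\eq(I)=I$, the only associated prime is $\idp=(X_0)$, the localization $(\AnneauDePolynomes/I)_{\idp}$ has length $2$, and therefore $m(I)=2$: one irreducible component counted with multiplicity two.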
\subsection{Definitions and properties related to multi-projective diophantine geometry} \label{definitions_multiprojective_dg}
In this section we recall the notions of the \emph{(bi-)degree} and the \emph{height} of a variety, and we collect several properties of these quantities that we shall use later. For a more detailed introduction the reader is invited to consult Chapters~5 and~7 of~\cite{NP} or Chapter~1 of~\cite{EZ2010}.
\subsubsection{Heights} \label{sss_Heights}
Let $K$ be an infinite field. Assume that there exists a family $\MM_{K}$ of absolute values, $\left(|\cdot|_v\right)_{v\in \MM_{K}}$, satisfying the \emph{product formula} with exponents $n_v$:
$$
\prod_{v\in \MM_K}|\alpha|_v^{n_v}=1\text{ for every }\alpha\in K\setminus\{0\}.
$$
It is a classical result that if $L$ is a finite extension of $K$, then the absolute values from $\MM_K$ can be extended to form a family $\MM_L$ of absolute values on $L$, satisfying a product formula with exponents $(n_w)_{w\in \MM_L}$:
$$
\prod_{w\in \MM_L}|\alpha|_w^{n_w}=1\text{ for every }\alpha\in L\setminus\{0\}.
$$
In this situation we can define a notion of the \emph{height} for different objects defined over $\ol{K}$.
\begin{example}
\begin{enumerate}
\item $K=\mqq$, $\MM_K=\{\text{prime numbers}\}\cup\{\infty\}$. If $p$ is a prime, $|\cdot|_{p}$ is equal to the $p$-adic absolute value (normalized to have $|p|_p=p^{-1}$) and $|\cdot|_{\infty}$ is equal to the usual archimedean absolute value over $\mqq$. The product formula in this case is a consequence of the fundamental theorem of arithmetic.
\item $K=\kk(\b{z})$, where $\kk$ denotes an algebraically closed (commutative) field, and $\MM_K=\mpp^1_{\kk}$. We associate to every element $v\in\MM_K$ the absolute value $\exp(-\ord_{v})$. The product formula in this case results from the uniqueness of polynomial factorization.
\end{enumerate}
\end{example}
To every \emph{ultrametric} place $v$ we naturally associate the valuation $\ord_v$. For convenience of notation in what follows, we extend this to \emph{every} place, archimedean as well as ultrametric: for every absolute value $|\cdot|_v$ (ultrametric or archimedean) we define $\ord_v\alpha:=-\log |\alpha|_v$ for every $\alpha \in K^*$.
\paragraph{Height of elements.} We start by recalling the notion of the \emph{height} of an element of $\ol{K}$. Let $L\supset K$ be a finite extension and let $\MM_L$ denote a family of places of $L$ extending $\MM_K$. For every $v\in\MM_L$ we denote by $L_v$ the completion of $L$ with respect to $v$ and set
$$
n_v:=[L_v:K_v].
$$
For every $\b{\alpha} \in L$ we define
\begin{equation} \label{defHsimple}
h_L(\b{\alpha}) := -\frac{1}{[L:K]}\sum_{v\in\MM_L}n_v \min(0,\ord_v(\b{\alpha})).
\end{equation}
In fact this definition does not depend on the extension $L$ chosen at the beginning: if $L \subset L' \subset \overline{K}$, $[L':L] < +\infty$ and $\b{\alpha} \in L$, then $h_{L'}(\b{\alpha})=h_{L}(\b{\alpha})$.
More generally, for ${\alpha} \in \mpp^n_{\ol{K}}$ we fix a representative $\ul{\alpha}\in\ol{K}^{n+1}$ of ${\alpha}$ and a finite extension $L$ of $K$ containing all the coordinates of $\ul{\alpha}$. We set
\begin{equation} \label{defHpoint}
h({\alpha}) \eqdef -\frac{1}{[L:K]}\sum_{v\in\MM_L}n_v \min(\ord_v(\b{\alpha}_0),...,\ord_v(\b{\alpha}_n)).
\end{equation}
One readily verifies that this definition depends neither on the choice of the representative $\ul{\alpha}$ nor on the choice of the extension $L$.
In view of these definitions, for any $\alpha\in L$ we have $h_L(\alpha)=h(1:\alpha)$.
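For instance, for $K=\mqq$ and $\alpha=p/q$ written in lowest terms, the only places that can contribute to the sum in~(\ref{defHsimple}) are the primes dividing $q$ and the archimedean place, and one finds
\begin{equation*}
h(1:\alpha)=\log\max(|p|,|q|).
\end{equation*}
E.g. $h(1:3/2)=\log 2+\log(3/2)=\log 3$, the two contributions coming from $v=2$ and $v=\infty$.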
\paragraph{Height of forms.} Let $L$ be a finite extension of $K$ and let ${F}\in L[\ul{u}^{(1)},\dots,\ul{u}^{(n)}]$ be a non-zero multihomogeneous form.
For every place $v$ of $L$ (archimedean or non-archimedean) we denote by $M_v({F})$ the maximum of the $v$-adic norms
of the coefficients of ${F}$.
We then define the \emph{height of the form ${F}$} to be
\begin{equation} \label{def_hF}
h({F})\eqdef\frac{1}{[L:K]}\sum_{v\in\MM_L}n_v\log M_v({F}).
\end{equation}
We complete this definition by setting $h(0)=0$. Note that replacing $L$ by a finite extension of it does not affect the value $h({F})$.
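To illustrate this definition, for $K=\mqq$ and $F=2X_0+3X_1$ one has $M_2(F)=\max(|2|_2,|3|_2)=1$, $M_3(F)=1$, $M_p(F)=1$ for the other primes $p$ and $M_{\infty}(F)=3$, whence $h(F)=\log 3$. Note also that the product formula gives $h(\lambda F)=h(F)$ for every $\lambda\in L^*$, so $h$ is well defined on forms up to proportionality.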
\subsubsection{Bi-degrees}
\begin{enumerate}
\item \label{def_bd_p_hypersurface} In the case of a hypersurface, i.e. if the variety $V\subset\mpp^1_{\kk}\times\mpp^n_{\kk}$ is the locus of the zeros of a bi-homogeneous polynomial $P\in\A$, the bi-degree is the pair of integers $\left(\deg_{\ul{X}'}P,\deg_{\ul{X}}P\right)$. In this case it is also common to write $\deg_{1,n-1}V:=\deg_{\ul{X}}P$ and $\deg_{0,n}V:=\deg_{\ul{X}'}P$. In general the bi-degree of a variety $V\subset\mpp^1\times\mpp^n$ is a pair of integers often denoted $\left(\deg_{0,\dim(V)}V,\deg_{1,\dim(V)-1}V\right)$. This notation is explained in Chapter~5 of~\cite{NP}.
\item If $V=V_1\cup\dots\cup V_r$ is the decomposition of an equidimensional variety $V$ into its irreducible components, we have
\begin{equation*}
\deg_{(i,\dim(V)-i)}V=\sum_{j=1}^r\deg_{(i,\dim(V)-i)}V_j,\quad i=0,1.
\end{equation*}
\item For any irreducible variety $V\subset\mpp^1\times\mpp^n$ and any hypersurface $Z\subset\mpp^1\times\mpp^n$ of bi-degree $(a,b)$, such that $V$ and $Z$ intersect properly, there exists a cycle $W$, supported on the intersection of $V$ and $Z$ (hence $\dim W=\dim V-1$), satisfying the following relations (a worked example is given after this list):
\begin{eqnarray}
\dd_{(1,\dim(V)-2)}(W) &=& b\cdot\dd_{(1,\dim(V)-1)}(V), \label{BT_dll} \\
\dd_{(0,\dim(V)-1)}(W) &=& a\cdot\dd_{(1,\dim(V)-1)}(V) + b\cdot\dd_{(0,\dim(V))}(V). \label{BT_doo}
\end{eqnarray}
We shall denote such a cycle $W$ by $V\cap Z$.
\item Let $W\subset\mpp^n_{\kk(z)}$ be a subvariety. We can replace $(1:z)$ by $(X_0':X_1')$ transforming $W$ into a subvariety $\tilde{W}\subset\mpp^1_{\kk}\times\mpp^n_{\kk}$. Our point here is that $\tilde{W}$ is a bi-projective variety over $\kk$, whilst $W$ is a projective variety over $\kk(z)$. In this case we have a direct link between the \emph{height} and the \emph{degree} of $W$ on one side and the bi-degree of $\tilde{W}$ on another side. Notably, the \emph{height} of $W$ equals $h(W)=\dd_{(0,\dim(\tilde{W}))}(\tilde{W})$ and the \emph{degree} of $W$ is $\deg(W)=\dd_{(1,\dim(\tilde{W})-1)}(\tilde{W})$.
\item We can associate to any bi-homogeneous ideal $I\subset\A$ (resp. any homogeneous ideal $J\subset\kk[z][X_0,\dots,X_n]$) a bi-projective (resp. projective) variety $\V(I)$ (resp. $\V(J)$), thus defining $\deg_{i,n+1-\rk(I)-i}I$, $i=0,1$ (resp. $\deg(J)$ and $h(J)$).
\end{enumerate}
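Here is the worked example announced in the list above, a minimal instance of formulae~(\ref{BT_dll}) and~(\ref{BT_doo}). Take $V=\mpp^1\times\mpp^n$, so that $\dim V=n+1$, $\dd_{(1,n)}(V)=1$ and $\dd_{(0,n+1)}(V)=0$, and let $Z$ be a hypersurface of bi-degree $(a,b)$ (the intersection with $V$ is automatically proper). Then $W=V\cap Z=Z$ and
\begin{eqnarray*}
\dd_{(1,n-1)}(W) &=& b\cdot 1=b,\\
\dd_{(0,n)}(W) &=& a\cdot 1+b\cdot 0=a,
\end{eqnarray*}
in agreement with the conventions $\deg_{0,n}Z=\deg_{\ul{X}'}P$ and $\deg_{1,n-1}Z=\deg_{\ul{X}}P$ of point~(\ref{def_bd_p_hypersurface}).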
We shall use later the following lemma, a variant of \emph{B\'ezout's theorem}.
\begin{lemma} \label{lemma_BT}
Suppose that a bi-homogeneous ideal $I\subset\A$ has rank $r$, contains a bi-homogeneous ideal $\idp\subset I$ of rank $r_{\idp}$, and is generated by $\idp$ together with $r-r_{\idp}$ bi-homogeneous polynomials of bi-degree bounded above by $(a,b)$. Then
\begin{eqnarray}
\dd_{(1,n-r)}(I)&\leq& \deg_{(1,n-r_{\idp})}\idp\cdot b^{r-r_{\idp}}, \label{BT_dll_v2}\\
\dd_{(0,n-r+1)}(I) &\leq&(r-r_{\idp})\deg_{(1,n-r_{\idp})}\idp\cdot a \cdot b^{r-r_{\idp}-1} \label{BT_doo_v2}\\
&&+\deg_{(0,n-r_{\idp}+1)}\idp\cdot b^{r-r_{\idp}}.\nonumber
\end{eqnarray}
\end{lemma}
\begin{proof}
This is a consequence of Propositions~3.4 and~3.6 of Chapter~5, \cite{NP}.
\end{proof}
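For instance, taking $\idp=(0)$ in Lemma~\ref{lemma_BT} (so that $r_{\idp}=0$, $\deg_{(1,n)}\idp=1$ and $\deg_{(0,n+1)}\idp=0$), we find, for any ideal $I$ of rank $r$ generated by bi-homogeneous polynomials of bi-degree bounded by $(a,b)$,
\begin{eqnarray*}
\dd_{(1,n-r)}(I)&\leq& b^r,\\
\dd_{(0,n-r+1)}(I) &\leq& rab^{r-1}.
\end{eqnarray*}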
We shall regularly use the valuation $\ordz$ on the ring $\kk[[z]]$ of formal power series. This valuation induces the notions $\Ord({x},V)$ and $\ord({x},V)$, both measuring how far a point $x$ in a (multi-)projective space lies from a variety $V$ in the same space. In some related articles these quantities appear as $\Dist({x},V):=\exp(-\Ord({x},V))$ and $\dist({x},V):=\exp(-\ord({x},V))$. Precise definitions can be found in~\cite{NP}, Chapter~7, \S~4 and~\cite{EZ2010}, Chapter~1, \S~3. We shall interchangeably use the notation $\Ord_{x}V:=\Ord({x},V)$ and $\ord_{x}V:=\ord({x},V)$.
In order to make this article self-contained we briefly introduce these notions. We define the quantity $\Ord$ only in the cases when $V$ is either 0-dimensional or a hypersurface; these are the only cases in which we make use of $\Ord$. We refer the reader to~\cite{NP}, Chapter~7, \S~4 and~\cite{EZ2010}, Chapter~1, \S~3 for the general treatment.
\begin{definition} \label{defin_ord_xy} \begin{enumerate}
\item If $\ul{x}=(x_0,\dots,x_n)\in\kk((z))^{n+1}$, we define $\ordz\ul{x}=\min_{i=0,\dots,n}\ordz x_i$.
\item Let $x,y\in\mpp_{\kk((z))}^n$ be two points and let $\ul{x}$ and $\ul{y}$ be systems of projective coordinates for $x$ and $y$ respectively. We define $\ul{x}\wedge\ul{y}$ to be a vector with $(n+1)n/2$ coordinates $\left(x_iy_j-x_jy_i\right)_{0\leq i<j\leq n}$ (the ordering of the coordinates $x_iy_j-x_jy_i$ of this vector is not important for our purposes). Finally, we define
\begin{equation} \label{def_ord_xy}
\ordz(x,y):=\ordz(\ul{x}\wedge\ul{y})-\ordz \ul{x} - \ordz\ul{y}.
\end{equation}
One readily verifies that the r.h.s. in~(\ref{def_ord_xy}) does not depend on the choice of systems of projective coordinates for $x$ and $y$.
\item Let $x,y\in\mpp^1_{\kk((z))}\times\mpp^n_{\kk((z))}$ and let $\pi_1$ (resp. $\pi_n$) be the canonical projection of $\mpp^1_{\kk((z))}\times\mpp^n_{\kk((z))}$ onto $\mpp^1_{\kk((z))}$ (resp. $\mpp^n_{\kk((z))}$). We define
\begin{equation} \label{def_ord_xy_biprojectif}
\ordz(x,y):=\min_{i=1,n}\ordz\left(\pi_i(x),\pi_i(y)\right).
\end{equation}
\item Let $V\subset\mpp^1\times\mpp^n$ (or $V\subset\mpp^n$) be a variety. We define
\begin{equation} \label{def_ord_Vx}
\ordz(x,V):=\max_{y\in V}\ordz\left(x,y\right).
\end{equation}
\item Sometimes we shall write simply $\ord(x,y)$, $\ord(x,V)$, $\Ord(x,V)$ etc. instead of $\ordz(x,y)$, $\ordz(x,V)$, $\Ord_{z=0}(x,V)$ etc. This will not create any ambiguity, because we shall be interested in only one valuation $\ordz$, so all the derived constructions, such as $\ord(x,y)$ and $\ord(x,V)$, will always refer to this valuation.
\item We shall use the notation $\ord_x(V)$ to refer to $\ord(x,V)=\ordz(x,V)$ introduced in this definition.
\end{enumerate}
\end{definition}
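To give the simplest example, let $n=1$, $x=(1:\b{z}+\b{z}^5)$ and $y=(1:\b{z})$ in $\mpp^1_{\kk((z))}$. Then $\ul{x}\wedge\ul{y}=(x_0y_1-x_1y_0)=(-\b{z}^5)$, $\ordz\ul{x}=\ordz\ul{y}=0$, and hence $\ordz(x,y)=5$: the quantity $\ordz(x,y)$ measures the $\b{z}$-adic order of contact between the two points.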
We proceed to introduce $\Ord(x,V)$ for the cases when $\dim(V)=0$ or $V$ is a hypersurface.
\begin{enumerate}
\item \label{defin_ordOrd_Ord1} If $V$ is 0-dimensional over $\ol{\kk((z))}$, it can be represented as a union of $r$ points $y_1,\dots,y_r$ (in fact, $r=\deg(V)$) and we define $\Ord(x,V):=\sum_{i=1}^r\Ord(x,y_i)$. In particular, if $V$ consists of a single point $y$ over $\ol{\kk((z))}$, we set $\Ord(x,V)=\ord(x,y)$.
\item \label{defin_ordOrd_Ord4} If $V=\Z(F)$, where $F\in\kk[X_0',X_1'][X_0,\dots,X_n]$ is bi-homogeneous, then for any system of bi-projective coordinates $\ul{x}=(\ul{x}',\ul{x}'')$ of $x$, where $\ul{x}'=(x_0',x_1')$ and $\ul{x}''=(x_0,\dots,x_n)$, we have
\begin{equation*}
\Ord(x,V)=\ordz F(\ul{x})-\deg_{\ul{X}'}F\cdot\ordz\ul{x}'-\deg_{\ul{X}}F\cdot\ordz\ul{x}''
\end{equation*}
(see p.~89 of~\cite{NP}).
\end{enumerate}
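Continuing the example given after Definition~\ref{defin_ord_xy}, let $n=1$, let $F=X_0'X_1-X_1'X_0$ (a form of bi-degree $(1,1)$) and let $x$ have bi-projective coordinates $\ul{x}'=(1,\b{z})$ and $\ul{x}''=(1,\b{z}+\b{z}^5)$. Then $F(\ul{x})=(\b{z}+\b{z}^5)-\b{z}=\b{z}^5$ and $\ordz\ul{x}'=\ordz\ul{x}''=0$, so $\Ord(x,\Z(F))=5$.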
Now we are ready to introduce the key notions of $\ul{f}$-admissible map and of \emph{$(\phi,\cK)$-property} (see Definitions~\ref{def_admissible} and~\ref{def_weakDproperty} below). These notions are used in the statement of our main result. We start with a preliminary definition.
\begin{definition} \label{def_locally_correct}
Let $\kk$ be a field and $\ul{f}=(1,z,1,f_1,\dots,f_n)\in\kk[[z]]^{n+3}$. Let $\phi:\A\rightarrow\A$ be a bi-homogeneous self-map of a polynomial ring $\A=\kk[X_0',X_1'][X_0,\dots,X_n]$
and $C_0 \in \mrr^+$ be a constant
such that for every bi-homogeneous prime ideal $\idq\subset\AnneauDePolynomes$ of rank $n$ one has
\begin{equation} \label{intro_theoLMGP_condition_de_correctitude}
\ord_{\ull{f}}\idq\geq C_0 \Rightarrow \mbox{ the map $\phi$ is correct with respect to }\idq.
\end{equation}
In this situation we say that $\phi$ is \emph{locally correct at} $\ull{f}$.
\end{definition}
\begin{definition} \label{def_admissible}
We say that a bi-homogeneous map $\phi:\A\rightarrow\A$ is \emph{$\ul{f}$-admissible} (or simply \emph{admissible}) if it is locally correct at $\ul{f}$ and satisfies~(\ref{degphiQleqdegQ}) and~(\ref{condition_T2_facile}).
\end{definition}
\begin{remark} \label{remark_def_admissible}
Corollary~2.24 of~\cite{EZ2011} implies that derivations are $\ul{f}$-admissible maps for arbitrary $\ul{f}$. Also, Corollary~2.25 of~\cite{EZ2011} implies that, under some mild restrictions (essentially, non-degeneracy in a neighbourhood of the point $\ul{f}$), an algebraic morphism $\T^*$ is an $\ul{f}$-admissible map.
\end{remark}
\begin{definition} \label{def_weakDproperty}
Let $\A$ be a polynomial ring and $\phi:\A\rightarrow\A$ a map.
Let $\cK$ be a subset of the set of ideals of $\A$.
Suppose that there exists a
constant $K_0 \in \mrr^{+}$ (depending on $\cK$, $\phi$ and $\ull{f}$ only) with the following property: for every
ideal $I\in\cK$ that is $\phi$-stable (i.e. $\phi(I)\subset I$)
there exists a prime factor $\idq\in\Ass(\AnneauDePolynomes/I)$ satisfying
\begin{equation} \label{def_RelMinN2}
\ord_{\ull{f}}(\idq) < K_0\left(\dd_{(0, n-\rg\idq+1)}(\idq)+\dd_{(1, n-\rg\idq)}(\idq)\right).
\end{equation}
In this situation we say that \emph{$\ul{f}$ has the $\left(\phi,\cK\right)$-property} or, if the choice of $\ul{f}$ is obvious, simply that \emph{one has the $\left(\phi,\cK\right)$-property} (one also says that the couple $\left(\phi,\cK\right)$ satisfies the \emph{weak $\phi$-property}).
\end{definition}
\begin{remark}
The name \emph{$\left(\phi,\cK\right)$-property} is chosen to make reference to the $D$-property introduced by Nesterenko in~\cite{N1996}. In the case when $\cK$ is a set of prime ideals and $\phi=D$ is a derivation, our $\left(\phi,\cK\right)$-property is a weakening of the $D$-property. Indeed, the $D$-property reads as follows: we require the existence of a constant $C_1$ such that for every $D$-stable prime ideal $\idp$ one has
\begin{equation} \label{def_D_peoperty_classical}
\min_{P\in\idp}\ord_{\ull{f}}P(\ull{f})\leq C_1.
\end{equation}
It is easy to verify that $\min_{P\in\idp}\ord_{\ull{f}}P(\ull{f})\geq\ord_{\ull{f}}(\idp)$, hence the following property is weaker than~\eqref{def_D_peoperty_classical} (that is~\eqref{def_D_peoperty_classical} implies~\eqref{def_D_peoperty_modified}):
\begin{equation} \label{def_D_peoperty_modified}
\ord_{\ull{f}}(\idp)\leq C_1.
\end{equation}
In the inequality~\eqref{def_RelMinN2} above we have an even weaker condition: the r.h.s. of~\eqref{def_RelMinN2} grows with the complexity of the ideal $\idq$.
\end{remark}
We mention here two technical lemmas that we shall use later, notably in the proof of Proposition~\ref{PropositionLdMprincipal}. Proofs are easy and can be found in~\cite{EZ2010}, Chapter~1.
Lemma~\ref{Representants} below allows us to replace the quantity $\Ord(X,Y)$, measuring the distance between two points $X$ and $Y$ in a projective space, by a quantity that is easier to control in the situation considered in the proof of Proposition~\ref{PropositionLdMprincipal}.
\begin{lemma} \label{Representants} Let $X, Y \in \mpp^n_{\overline{\kk((\b{z}))}}$ be two points in the projective space.
a) Let $\ull{x}\in\overline{\kk((\b{z}))}^{n+1}$ be a system of projective coordinates of $X$ and $\ull{y}\in\overline{\kk((\b{z}))}^{n+1}$ be a system of projective coordinates of $Y$ satisfying
\begin{equation*}
\ordz \ull{x} = \ordz \ull{y}.
\end{equation*}
Then
\begin{equation} \label{LemmeRepresentantsA}
\Ordz(X,Y) \geq \ordz(\ull{x} - \ull{y}) - \ordz\ull{y}.
\end{equation}
b) Suppose $\Ordz(X,Y)>0$. If we fix for $Y$ a system of projective coordinates $\ull{y}$ in $\overline{\kk((\b{z}))}^{n+1}$, then there is a system of projective coordinates $\ull{x} \in \overline{\kk((\b{z}))}^{n+1}$ of $X$ satisfying
\begin{equation}
\begin{split}
&\alpha) \quad \ordz\ull{x}=\ordz\ull{y},\\
&\beta) \quad \Ordz(X,Y) = \ordz(\ull{x} - \ull{y}) - \ordz(\ull{y})
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
See Lemma~1.22 of~\cite{EZ2010}.
\end{proof}
\begin{lemma}[Liouville's inequality] \label{lemma_Liouville_ie}
Let $\b{Q}\in\kk(\b{z})[X_0,\dots,X_n]$ be a homogeneous polynomial and let $\b{Z}$ be a cycle in $\mpp_{\ol{\kk(z)}}^n$ of dimension 0 defined over $\kk(\b{z})$. Then
\begin{equation}\label{iet_main}
\deg(\b{Q})h(\b{Z})+h(\b{Q})\deg(\b{Z})\geq\left|\sum_{\b{\beta}\in\b{Z}}\ordz\left(\b{Q}(\ull{\beta})\right)\right|.
\end{equation}
\end{lemma}
\begin{proof}
See inequality~(1.20) at the end of section~1.2.2 of~\cite{EZ2010}.
\end{proof}
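To get a feeling for the normalizations in Lemma~\ref{lemma_Liouville_ie}, here is a minimal plausibility check (reading $\deg\b{Q}$ as the degree in $\ul{X}$ and $h(\b{Q})$ as the height of the coefficients of $\b{Q}$). Take $n=1$, let $\b{Z}$ be the single point $(1:\b{z})$, so that $\deg\b{Z}=1$ and $h(\b{Z})=1$, and let $\b{Q}=X_1-cX_0$ with $c\in\kk$. Then the l.h.s. of~(\ref{iet_main}) equals $1$, while $\ordz\b{Q}(1,\b{z})=\ordz(\b{z}-c)$ equals $1$ for $c=0$ and $0$ otherwise, so the inequality holds and is attained for $c=0$.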
In Definition~\ref{def_delta} below we associate to each bi-projective ideal $I$ a pair of integers, $\left(\delta_0(I),\delta_1(I)\right)$. This quantity plays an important role in our article. It may seem quite complicated at first glance, so we give a short intuitive explanation in Remark~\ref{rem_def_delta} just after the definition.
Note that in Definition~\ref{def_delta} below we use constants $\mu,\nu_0$ and $\nu_1$. These constants are assumed to be the same as in property~\eqref{degphiQleqdegQ}. In fact we shall use Definition~\ref{def_delta} only in situations where we have a map $\phi$ with a fixed choice of constants $\mu,\nu_0$ and $\nu_1$ satisfying property~\eqref{degphiQleqdegQ}, so this implicit dependence will not lead to any ambiguity.
\begin{definition} \label{def_delta}
\begin{enumerate}
\item Let $I,\idp \subset \AnneauDePolynomes$ be bi-homogeneous ideals, such that $I\not\subset\idp$. We choose a bi-homogeneous polynomial $P \in I \setminus \idp$
that minimizes the quantity
\begin{multline} \label{def_delta_condition_minimum_crossproduct}
\mu\dd_{(0,n-\rg I+1)}I\deg_{\ul{X}}P + \nu_0\dd_{(1,n-\rg I)}I \deg_{\ul{X}'}P \\ + \nu_1\dd_{(1,n-\rg I)}I \deg_{\ul{X}}P.
\end{multline}
If more than one bi-homogeneous polynomial from $I\setminus\idp$ minimizes~(\ref{def_delta_condition_minimum_crossproduct}),
we choose among them a polynomial with minimal degree $\deg_{\ul{X}}P$.
We introduce the notation $\delta_0(I,\idp) \eqdef \deg_{\ul{X}'}P$ and $\delta_1(I,\idp) \eqdef \deg_{\ul{X}}P$.
\item For every cycle $Z$ (defined over $\kk$) in $\mpp^1\times\mpp^n$ such that $\I(Z)\not\subset\idp$
we define $\delta_i(Z,\idp) \eqdef \delta_i(\I(Z),\idp)$, $i=0,1$.
\item Let $\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}$. We recall Definition~\ref{def_tf}, where $\idp_{\ul{f}}$ is defined as the bi-homogeneous ideal generated by the polynomials $P\in\A$ vanishing at $\ul{f}$. We introduce the notations
$$
\delta_i(I,\ul{f}):=\delta_i(I,\idp_{\ul{f}})\text{ and } \delta_i(Z,\ul{f}):=\delta_i(Z,\idp_{\ul{f}})\text{ for } i=0,1,
$$
for all bi-homogeneous ideals $I$ such that $I\not\subset\idp_{\ul{f}}$ and all cycles $Z\subset\mpp^1\times\mpp^n$ such that $\I(Z)\not\subset\idp_{\ul{f}}$.
\end{enumerate}
\end{definition}
\begin{remark} \label{rem_def_delta}
The quantities that we shall eventually use in the subsequent considerations are $\delta_i(I,\ul{f})$ and $\delta_i(Z,\ul{f})$, $i=0,1$.
Note that $\delta_i(I,\{0\})=\delta_i(I)$, $i=0,1$, in the sense of Definition~2.37 of~\cite{EZ2011}; hence if the functions $f_1(z),\dots,f_n(z)$ are algebraically independent over $\kk(z)$ we have $\delta_i(I,\idp_{\ul{f}})=\delta_i(I)$, $i=0,1$, and we are in the situation considered in~\cite{EZ2011}.
The quantities $\delta_i(I,\ul{f})$, $i=0,1$, play in our proofs the role of an important characteristic of the ideal $I$. We refer the reader to~\cite{EZ2011}, Remark~2.38, for a more detailed discussion of this matter. The modification that we have introduced in this work, compared to $\delta_i(I)$ in~\cite{EZ2011}, has the following reason. In our proofs we consider ideals generated by bi-homogeneous polynomials of degree comparable to $\delta_i(I,\ul{f})$, $i=0,1$ (for instance, see Definitions~\ref{V_irho_i} and~\ref{def_i0} below). The essential part of the proof is the comparison, for such ideals $I$, of their degrees $\deg I$ with the quantities $\ord_{\ul{f}}I$. Loosely speaking, we deduce the multiplicity estimate from the statement that in our construction ideals of bounded degree cannot have an arbitrarily large order of vanishing at $\ul{f}$. However, we naturally have to exclude all the polynomials vanishing at $\ul{f}$, that is the ideal $\idp_{\ul{f}}$ (see Definition~\ref{def_tf}).
\end{remark}
Here is the first property of the quantity $\left(\delta_0(I,\ul{f}),\delta_1(I,\ul{f})\right)$.
\begin{lemma}\label{deltainfty} Fix a point
$$
\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}
$$
and consider a sequence of cycles $Z_i \subset \mpp^1\times\mpp^n$, $i\in\mnn$, defined over $\kk$ and such that $\ul{f} \not\in Z_i$ for $i\in\mnn$. If $\ord_{\ul{f}}(Z_i)$ tends to $+\infty$ (as $i\rightarrow\infty$), then $\max\left(\delta_0(Z_i,\ul{f}),\delta_1(Z_i,\ul{f})\right)$ also tends to infinity (as $i\rightarrow\infty$).
\end{lemma}
The proof of Lemma~\ref{deltainfty} is easy and can be found in~\cite{EZ2010} (Lemma~1.23). We shall only need the following weak corollary of it:
\begin{corollary}\label{cor1_deltainfty} Let $z,f_1(z),\dots,f_n(z)\in\kk[[\b{z}]]$. There exists a constant $C_{sg}$, depending only on $\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}$, such that if a cycle $Z \subset \mpp^1\times\mpp^n$ (defined over $\kk$) does not contain $\ul{f}$ and satisfies $\ord_{\ul{f}}Z \geq C_{sg}$, then either $\delta_0(Z,\ul{f})\geq\max(4,2n!+1,\frac{2\nu_1}{\max(\mu,\nu_0)})$ or $\delta_1(Z,\ul{f})\geq \max(2^n,4n)$.
\end{corollary}
The following definition is widely used in the subsequent considerations.
\begin{definition} \label{V_irho_i}
We define a sequence of numbers $\rho_i$ recursively. We put $\rho_0=0$, $\rho_1=1$ and
$$
\rho_{i+1} = 6^{n+2}(n+2)^{(n+1)^2}\rho_i^{n+2}\max\left(\mu,\nu_0\right)^{6^{n+2}(n+2)^{(n+1)^2}\rho_i^{n+1}}
$$
for $i=1,...,n+1$. The constants $\mu$, $\nu_0$ and $\nu_1$ in this definition are the same as in~(\ref{degphiQleqdegQ}).
Let $Z$ be an algebraic bi-projective cycle defined over $\kk$ in the space $\mpp^1\times\mpp^n$. Let
$$
\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}.
$$
We denote by $V_i$, or more precisely by $V_i(Z,\ul{f})$, the vector space (over $\kk$) generated by $\idp_{\ul{f}}$ (see Definition~\ref{def_tf}) and the bi-homogeneous
polynomials from $\kk[X_0',X_1'][X_0,\dots,X_n]$ vanishing on the cycle $Z$, of degree in $\ul{X}'$ at most $\rho_i\left(\delta_0(Z,\ul{f})+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1(Z,\ul{f})\right)$ and of degree in $\ul{X}$ at most $\rho_i\delta_1(Z,\ul{f})$
(recall that $\delta_0(Z,\ul{f})$ and $\delta_1(Z,\ul{f})$ are introduced in Definition~\ref{def_delta}).
If $I$ is a proper bi-homogeneous ideal of $\AnneauDePolynomes$ we also use the notation
\begin{equation*}
V_i(I):=V_i(\V(I)),
\end{equation*}
where $\V(I)$ is the cycle of $\mpp^1\times\mpp^n$ defined by $I$.
\end{definition}
\begin{definition} \label{def_i0} We associate to every bi-projective variety $W$ and a point
$$
\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}
$$
a number $i_0(W,\ul{f})$. We define $i_0(W,\ul{f})$ to be the largest positive integer such that
$\rg\left(I\left(V_i(W,\ul{f}),\I(W)\right)\right)\geq i+r_{\ul{f}}$ for all $1\leq i\leq i_0(W,\ul{f})$.
\end{definition}
\begin{remark} \label{rem_i0}
In view of Definitions~\ref{def_delta} and~\ref{V_irho_i}, one readily verifies the inequality
$$
\rg\left(I\left(V_1(W,\ul{f}),\I(W)\right)\right)\geq 1+r_{\ul{f}}.
$$
So the index $i_0(W,\ul{f})$ is well defined (and $i_0(W,\ul{f})\geq 1$)
for all varieties $W$. On the other hand, the rank of any bi-homogeneous ideal in $\AnneauDePolynomes$ cannot exceed $n+1$, thus $i_0(W,\ul{f})\leq n+1$ for every variety $W\subset\mpp^1\times\mpp^n$.
\end{remark}
The lemmas below represent two important ingredients of the proof of our main result.
The proof of Lemma~\ref{LemmeProp13} can be found in~\cite{EZ2011},~\S3 or in~\cite{EZ2010}, subsection~2.2.2.
\begin{lemma}\label{LemmeProp13}
Let $\idp$ be a prime ideal of $\A$
such that the map $\phi$ is correct with respect to this ideal and
let $V \subset \A$, $V\ne\{0\}$, be a $\kk$-linear subspace of
$\A$. If $e_{\phi}(V,\idp)>m(I(V,\idp))$, then there
exists an equidimensional $\phi$-stable ideal $J$ such that
\begin{list}{\alph{tmpabcdmine})}{\usecounter{tmpabcdmine}}
\item \label{LemmeProp13_a} $I(V,\idp) \subset J \subset \idp$,
\item \label{LemmeProp13_b} $\rg(J)=\rg(I(V,\idp))$,
\item \label{LemmeProp13_c} all the primes associated to $J$ are contained in $\idp$.
\end{list}
In particular,
\begin{equation} \label{LemmeProp13_en_part}
\begin{aligned}
&m(J) \leq m(I(V,\idp)),\\
&\dd_{(1, n-\rg J)} J \leq \dd_{(1, n-\rg I(V,\idp))}(I(V,\idp)),\\
&\dd_{(0, n-\rg J+1)} J \leq \dd_{(0, n-\rg I(V,\idp)+1)}(I(V,\idp)).
\end{aligned}
\end{equation}
\end{lemma}
Lemma~\ref{LemmeCor14NumberW} below is an analogue of Lemmas~2.18 and~2.21 of~\cite{EZ2010}, and of Lemma~2.44 of~\cite{EZ2011}. We prove this lemma in the next section.
\begin{lemma} \label{LemmeCor14NumberW}
Let
$$
\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}.
$$
Let $\idp\subset\AnneauDePolynomes$ be a prime bi-homogeneous ideal such that
$\idp_{\ul{f}}\subsetneq\idp$ and $\V(\idp)$ projects onto $\mpp^1$.
We recall the notations
$V_i=V_i(\idp,\ul{f})$ and $\rho_i$ introduced in
Definition~\ref{V_irho_i}, and $I(V_i,\idp)=(V_i\A_{\idp})\cap\A$ introduced in Definition~\ref{def_I}. Assume that either $\delta_0(\idp,\ul{f})\geq \max\left(2,\frac{2\nu_1}{\max(\mu,\nu_0)}\right)$ or $\delta_1(\idp,\ul{f})\geq 2^n$. One has the following upper bound for $m(I(V_i,\idp))$:
\begin{equation} \label{majorationm}
m(I(V_i,\idp)) \leq 6^{n+2}(n+2)^{(n+1)(n-t_{\ul{f}}+1)}\rho_i^{t_{\ul{f}}+1}.
\end{equation}
\end{lemma}
\begin{definition} \label{def_Cm}
We introduce the following notation:
\begin{equation}
C_m:=6^{n+2}(n+2)^{(n+1)^2}\rho_{n+1}^{n+1}.
\end{equation}
So $C_m$ is an upper bound for the r.h.s. of~\eqref{majorationm}. Note that $C_m$ depends on $n$ only.
\end{definition}
\subsection{Proof of Lemma~\ref{LemmeCor14NumberW}} \label{section_proof_LemmeCor14NumberW_case_nu1_eq_zero}
We recall that the ideal $\idp_{\ul{f}}$, its rank $r_{\ul{f}}$ and the transcendence degree $t_{\ul{f}}$ are introduced in Definition~\ref{def_tf}. In this section we use the following notation (see~\cite{PP}, p.~12):
\begin{equation} \label{def_deg_X_ab}
\deg(X,a,b):=\deg_{(0,\dim X)}(X)\cdot b^{\dim X}+\dim X\cdot\deg_{(1,\dim X-1)}(X)\cdot a\cdot b^{\dim X-1}
\end{equation}
where $X\subset\mpp^1\times\mpp^n$ is a variety and $a,b\in\mrr^+$.
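For instance, $\deg\left(\mpp^1\times\mpp^n,a,b\right)=(n+1)ab^{n}$ (since $\deg_{(0,n+1)}=0$ and $\deg_{(1,n)}=1$ for the full space), while for a hypersurface $Z\subset\mpp^1\times\mpp^n$ of bi-degree $(\alpha,\beta)$ one gets $\deg(Z,a,b)=\alpha b^{n}+n\beta ab^{n-1}$.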
\begin{lemma} \label{LemmeProp14_1_general} Let $I$ be a bi-homogeneous ideal of $\AnneauDePolynomes$, $I\ne\AnneauDePolynomes$ and
$$
\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}.
$$
We denote $\delta_0:=\delta_0(I,\ul{f})$, $\delta_1=\delta_1(I,\ul{f})$ (recall that the quantities $\delta_0(I,\ul{f})$ and $\delta_1(I,\ul{f})$ are introduced in Definition~\ref{def_delta}).
Let $W \subsetneq \V(\idp_{\ul{f}})$ be an irreducible (bi-projective) variety
projecting onto the factor $\mpp^1$ and let positive integers $a,b\in\mnn$ satisfy
\begin{multline} \label{LemmeProp14_ie1_1}
\mu\cdot b\cdot\dd_{(0,\dim I)} I + \nu_0\cdot a\cdot\dd_{(1,\dim I-1)}I + \nu_1\cdot b\cdot\dd_{(1,\dim I-1)}I\\<\mu\delta_1\dd_{(0,\dim I)} I + \nu_0\delta_0\dd_{(1,\dim I-1)}I
+ \nu_1\delta_1\dd_{(1,\dim I-1)}I
\end{multline}
or
\begin{multline} \label{LemmeProp14_ie1_1_2}
\mu\cdot b\cdot\dd_{(0,\dim I)} I + \nu_0\cdot a\cdot\dd_{(1,\dim I-1)}I + \nu_1\cdot b\cdot\dd_{(1,\dim I-1)}I\\=\mu\delta_1\dd_{(0,\dim I)} I + \nu_0\delta_0\dd_{(1,\dim I-1)}I
+ \nu_1\delta_1\dd_{(1,\dim I-1)}I
\end{multline}
and
\begin{equation} \label{LemmeProp14_ie1_1_2part2}
b<\delta_1.
\end{equation}
Assume
\begin{multline} \label{LemmeProp14_hypothese1_1}
\deg(W,a,b) + \dim(W) \\< (t_{\ul{f}}+2)2^{-n-1}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\deg\left(\V(\idp_{\ul{f}}),a,b\right).
\end{multline}
Then, there is a polynomial $Q \in \I(W) \setminus (I\cup\idp_{\ul{f}})$ (that is, $Q$ vanishes on $W$, does not belong to $I$ and does not vanish at $\ul{f}$) satisfying two inequalities:
\begin{equation} \label{LemmeProp14_petitPolynome_1_case_b}
\begin{aligned}
\deg_{\ul{X}'}Q&\leq a,\\
\deg_{\ul{X}}Q&\leq b.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
Note that the assumption that $W$ projects onto $\mpp^1$ implies $1\leq\dim W$, and the assumption $W \subsetneq \V(\idp_{\ul{f}})$ implies $\dim W<t_{\ul{f}}+1$.
Suppose first that no polynomial of bi-degree at most $(a,b)$ simultaneously vanishes on $W$ and does not vanish at $\ul{f}$. In other terms, suppose that the $\kk$-linear space of polynomials vanishing on $W$ and of bi-degree bounded above by $(a,b)$ is included\footnote{Moreover, the assumption $W\subsetneq\V(\idp_{\ul{f}})$ implies the inclusion in the opposite direction, $\idp_{\ul{f}}\subset\I(W)$, so these two $\kk$-linear spaces in fact coincide in the case under consideration. However this is not important for our proof; we mention this fact just to show that the lower bound in~\eqref{LemmeProp14_Hgminoration_1_preliminary} is in fact an equality.} in the $\kk$-linear space of polynomials of bi-degree at most $(a,b)$ belonging to the ideal $\idp_{\ul{f}}$. Hence
\begin{equation} \label{LemmeProp14_Hgminoration_1_preliminary}
H_g(W,a,b)\geq H_g\left(\V(\idp_{\ul{f}}),a,b\right).
\end{equation}
Using Corollary~9 of~\cite{PP} we continue~\eqref{LemmeProp14_Hgminoration_1_preliminary}:
\begin{equation} \label{LemmeProp14_Hgminoration_1}
H_g(W,a,b)\geq H_g\left(\V(\idp_{\ul{f}}),a,b\right) \geq
(t_{\ul{f}}+2)2^{-n-1}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\deg\left(\V(\idp_{\ul{f}}),a,b\right).
\end{equation}
At the same time, by another part of Corollary~9 of~\cite{PP} we have
\begin{equation} \label{LemmeProp14_Hgmajoration_1}
H_g(W,a,b) \leq \deg(W,a,b)+ \dim W.
\end{equation}
The system of inequalities~\eqref{LemmeProp14_Hgminoration_1} and~\eqref{LemmeProp14_Hgmajoration_1}
contradicts assumption~\eqref{LemmeProp14_hypothese1_1}. We conclude that there exists a bi-homogeneous polynomial $Q_0$ of bi-degree at most $(a,b)$ that vanishes on $W$ and does not vanish at $\ul{f}$. Moreover, we claim that $Q_0$ does not belong to $I$. Indeed, this follows from assumptions~\eqref{LemmeProp14_ie1_1}, \eqref{LemmeProp14_ie1_1_2} and \eqref{LemmeProp14_ie1_1_2part2}, because the r.h.s. of~\eqref{LemmeProp14_ie1_1} is the minimum of the expression
\begin{equation*}
\mu\deg_{\ul{X}}(Q)\dd_{(0,\dim I)} I + \nu_0\deg_{\ul{X}'}(Q)\dd_{(1,\dim I-1)}I + \nu_1\deg_{\ul{X}}(Q)\dd_{(1,\dim I-1)}I
\end{equation*}
over all the bi-homogeneous polynomials $Q$ from $I\setminus\idp_{\ul{f}}$ (see Definition~\ref{def_delta}). Lemma~\ref{LemmeProp14_1_general} is proved.
\end{proof}
\begin{corollary} \label{LemmeCor14_1}
Consider the situation of Lemma~\ref{LemmeProp14_1_general}. So, let $I$ be a bi-homogeneous ideal of $\AnneauDePolynomes$, $I\ne\AnneauDePolynomes$ and
$$
\ul{f}=(1:\b{z},1:f_1(\b{z}):...:f_n(\b{z})) \in \mpp^1_{\kk[[\b{z}]]}\times\mpp^n_{\kk[[\b{z}]]}.
$$
We denote $\delta_0:=\delta_0(I,\ul{f})$, $\delta_1=\delta_1(I,\ul{f})$ (recall that the quantities $\delta_0(I,\ul{f})$ and $\delta_1(I,\ul{f})$ are introduced in Definition~\ref{def_delta}). Let $W \subsetneq \V(\idp_{\ul{f}})$ be an irreducible (bi-projective) variety projecting onto the factor $\mpp^1$.
Assume in addition that $I$ is a radical ideal, that the variety $W$ contains $\V(I)$, and that $\V(I)$ itself projects onto the factor $\mpp^1$. Assume, moreover, that $\dim(W)\geq 2$ and that either $\delta_0\geq 2$ or $\delta_1\geq 2^n$.
Then, if $\nu_1=0$,
\begin{multline} \label{minorationdegW_1}
\deg\left(W,\delta_0,\delta_1\right) \\ \geq (t_{\ul{f}}+2)2^{-n-2}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1\right).
\end{multline}
\end{corollary}
\begin{proof}
To begin with, we consider the problem under the auxiliary assumption
\begin{equation} \label{LemmeCor14_1_assumption_one}
\delta_0\geq 1\text{ and }\delta_1\geq 1.
\end{equation}
By hypothesis, we have either $\delta_0\geq 2$ or $\delta_1\geq 2^n$. If $\delta_0\geq 2$, we apply Lemma~\ref{LemmeProp14_1_general} with $a=\delta_0-1$ and $b=\delta_1$. Otherwise, we necessarily have $\delta_1\geq 2^n$ and we apply Lemma~\ref{LemmeProp14_1_general} with $a=\delta_0$ and $b=\delta_1-1$. Clearly, condition~\eqref{LemmeProp14_ie1_1} is satisfied in both cases (note that $\deg_{(1,\dim I-1)}I\geq 1$ in view of our assumption that $\V(I)$ projects onto the factor $\mpp^1$).
In what follows we assume $\delta_0\geq 2$; the case $\delta_1\geq 2^n$ can be treated in exactly the same way and is left to the interested reader as an exercise.
As we assume $\V(I)\subset W$, the conclusion of Lemma~\ref{LemmeProp14_1_general} cannot hold, and we infer that assumption~\eqref{LemmeProp14_hypothese1_1} has to fail; that is, we have
\begin{multline} \label{minorationdegW_1_1}
\deg\left(W,\delta_0-1,\delta_1\right)+\dim(W) \\ \geq (t_{\ul{f}}+2)2^{-n-1}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\deg\left(\V(\idp_{\ul{f}}),\delta_0-1,\delta_1\right).
\end{multline}
Because of our assumption that $W$ projects onto $\mpp^1$, we have
\begin{eqnarray*}
\dim(W)&\geq& 1,\\
\deg_{(1,\dim(W)-1)}(W)&\geq& 1,
\end{eqnarray*}
hence
\begin{equation*}
\begin{aligned}
\deg&\left(W,\delta_0-1,\delta_1\right)+\dim(W)=\deg_{(0,\dim W)}(W)\cdot \delta_1^{\dim W}
\\
&\qquad+\dim W\cdot\deg_{(1,\dim W-1)}(W)\cdot (\delta_0-1)\cdot \delta_1^{\dim W-1}+\dim(W)\\
&\leq \deg_{(0,\dim W)}(W)\cdot \delta_1^{\dim W}
\\
&\qquad+\dim W\cdot\deg_{(1,\dim W-1)}(W)\cdot \delta_0\cdot \delta_1^{\dim W-1}
\\
&=\deg\left(W,\delta_0,\delta_1\right).
\end{aligned}
\end{equation*}
At the same time, in view of our assumption $\delta_0\geq 2$ we have
$$
\deg\left(\V(\idp_{\ul{f}}),\delta_0-1,\delta_1\right)\geq\frac12\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1\right)
$$
and we readily deduce~\eqref{minorationdegW_1}.
Now assume that~\eqref{LemmeCor14_1_assumption_one} does not hold. If $\delta_1=0$, then the r.h.s. of~\eqref{minorationdegW_1} is zero and the claim readily follows. It remains to consider the case $\delta_0=0$ and $\delta_1\geq 2^n$. In this case, we apply Lemma~\ref{LemmeProp14_1_general} with $a=\delta_0=0$ and $b=\delta_1-1$. We infer
\begin{multline} \label{minorationdegW_1_1_final}
\deg\left(W,\delta_0,\delta_1-1\right)+\dim(W) \\ \geq (t_{\ul{f}}+2)2^{-n-1}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1-1\right).
\end{multline}
Further, $\delta_1\geq 2^n$ implies
$$
\deg\left(W,\delta_0,\delta_1-1\right)+\dim(W) \leq \deg\left(W,\delta_0,\delta_1\right),
$$
and
$$
\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1-1\right)\geq\frac12\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1\right).
$$
The conclusion~\eqref{minorationdegW_1} readily follows.
\end{proof}
\begin{corollary} \label{LemmeCor14_1_nu_one_positive}
In the situation of Corollary~\ref{LemmeCor14_1}, replace the hypothesis $\nu_1=0$ by the hypothesis $\nu_1>0$. Further, introduce the following hypothesis on $\delta_0$ and $\delta_1$: either $\delta_0\geq \max\left(2,\frac{2\nu_1}{\max(\mu,\nu_0)}\right)$ or $\delta_1\geq 2^n$. Then we have the following inequality:
\begin{multline} \label{minorationdegW_1_nu_one_positive}
\deg\left(W,\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1,\delta_1\right) \\ \geq (t_{\ul{f}}+2)6^{-n-2}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\\
\times\deg\left(\V(\idp_{\ul{f}}),\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1,\delta_1\right).
\end{multline}
\end{corollary}
\begin{proof}
The proof of this corollary is similar to that of Corollary~\ref{LemmeCor14_1}. In order not to waste space on trivial calculations, we consider here only the case $\delta_0\geq 2$ and $\delta_1\geq 2$, leaving the remaining details to the reader.
Note that the numbers $a=\left[\delta_0/2+\frac{\nu_1}{2\max(\mu,\nu_0)}\delta_1\right]$ and $b=\left[\delta_1/2\right]$ satisfy hypothesis~\eqref{LemmeProp14_ie1_1}. Hence Lemma~\ref{LemmeProp14_1_general} yields
\begin{multline} \label{minorationdegW_1_nu_one_positive_ie_one}
\deg\left(W,\left[\delta_0/2+\frac{\nu_1}{2\max(\mu,\nu_0)}\delta_1\right],\left[\delta_1/2\right]\right) \\ \geq (t_{\ul{f}}+2)2^{-n-2}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\\\times\deg\left(\V(\idp_{\ul{f}}),\left[\delta_0/2+\frac{\nu_1}{2\max(\mu,\nu_0)}\delta_1\right],\left[\delta_1/2\right]\right),
\end{multline}
where $[x]$ denotes the integer part of a real number $x$, that is the largest integer $n\in\mzz$ satisfying $n\leq x$.
Using the inequality $[r/2]\geq r/3$ for every real $r\geq 2$, we readily deduce
\begin{multline} \label{minorationdegW_1_nu_one_positive_ie_two}
\deg\left(W,\delta_0/2+\frac{\nu_1}{2\max(\mu,\nu_0)}\delta_1,\delta_1/2\right) \\ \geq (t_{\ul{f}}+2)2^{-n-2}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\\\times\deg\left(\V(\idp_{\ul{f}}),\delta_0/3+\frac{\nu_1}{3\max(\mu,\nu_0)}\delta_1,\delta_1/3\right).
\end{multline}
Finally, definition~\eqref{def_deg_X_ab} readily implies
\begin{equation} \label{minorationdegW_1_nu_one_positive_deg_X_a_b}
\deg(X,\lambda a,\lambda b)=\lambda^{\dim X}\deg(X,a,b)
\end{equation}
for any variety $X\subset\mpp^1\times\mpp^n$ and $a,b,\lambda\in\mrr^+$. Applying~\eqref{minorationdegW_1_nu_one_positive_deg_X_a_b} to both sides of~\eqref{minorationdegW_1_nu_one_positive_ie_two} we deduce~\eqref{minorationdegW_1_nu_one_positive}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{LemmeCor14NumberW}]
We denote by $r$ the rank $\rg I(V_i,\idp)$ and by $\delta_i$ the quantities $\delta_i(\idp,\ul{f})$, $i=0,1$.
To start with, consider the case $\nu_1=0$. Then the ideal $I(V_i,\idp)=V_i\A_{\idp}\cap\A$ is the contraction of the extension, under localization at $\idp$, of an ideal generated by $\idp_{\ul{f}}$ and by $r-r_{\ul{f}}$
polynomials of bi-degree bounded above by $\left(\rho_i\delta_0,\rho_i\delta_1\right)$ (see Definition~\ref{def_i0}). Using B\'ezout's theorem (see Lemma~\ref{lemma_BT}) and definition~\eqref{def_deg_X_ab} we readily verify
\begin{equation} \label{majorationTordue0_1}
\begin{aligned}
\deg\left(\V(I(V_i,\idp)),\delta_0,\delta_1\right)\leq\rho_i^{r-r_{\ul{f}}}\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1\right).
\end{aligned}
\end{equation}
Let $W=\V(\idq)$, where $\idq$ is a minimal prime ideal associated to $I(V_i,\idp)$. By construction of $I(V_i,\idp)$,
all its associated primes are contained in $\idp$, thus $\V(\idp) \subset W$. Moreover, since
$\V(\idp)$ projects onto $\mpp^1$, we deduce that $W$ projects onto $\mpp^1$ as well. Further, by construction of $I(V_i,\idp)$ we have $\dim\V\left(I(V_i,\idp)\right)\geq 1$, and if $\dim\V\left(I(V_i,\idp)\right)=1$ then $I(V_i,\idp)=\idp$, hence $m\left(I(V_i,\idp)\right)=1$ and~\eqref{majorationm} follows. Thus we only have to consider the case $\dim(W)=\dim\V\left(I(V_i,\idp)\right)\geq 2$, and so we are in a position to apply Corollary~\ref{LemmeCor14_1}. We find that $W$
satisfies~(\ref{minorationdegW_1}); in other words, every minimal prime $\idq$ associated to $I(V_i,\idp)$ satisfies
\begin{equation} \label{minorationDegQ_1}
\deg\left(\V(\idq),\delta_0,\delta_1\right) \geq (t_{\ul{f}}+2)2^{-n-2}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1\right).
\end{equation}
But
\begin{equation} \label{degDecomposition_1}
\begin{aligned}
\dd_{(1,n-r)}(I(V_i,\idp)) = \sum_{\substack{\idq \in \Spec\A,\\ \rg(\idq)=r}}\dd_{(1,n-r)}(\idq)l_{\AnneauDePolynomes_{\idq}}(\left(\AnneauDePolynomes/I(V_i,\idp)\right)_{\idq}),
\end{aligned}
\end{equation}
and
\begin{equation} \label{htDecomposition_1}
\begin{aligned}
\dd_{(0,n-r+1)}(I(V_i,\idp)) = \sum_{\substack{\idq \in \Spec\A,\\ \rg(\idq)=r}}\dd_{(0,n-r+1)}(\idq)l_{\AnneauDePolynomes_{\idq}}(\left(\AnneauDePolynomes/I(V_i,\idp)\right)_{\idq}).
\end{aligned}
\end{equation}
Summing up~(\ref{degDecomposition_1}) with coefficient $(n+1-r)\delta_0\delta_1^{n-r}$
and~(\ref{htDecomposition_1}) with coefficient $\delta_1^{n+1-r}$, we find
\begin{equation}\label{minorationDegHtRightParm_1}
\deg\left(I(V_i,\idp),\delta_0,\delta_1\right)=\sum_{\substack{\idq \in \Spec\A,\\ \rg(\idq)=r}}\deg(\idq,\delta_0,\delta_1)l_{\AnneauDePolynomes_{\idq}}(\left(\AnneauDePolynomes/I(V_i,\idp)\right)_{\idq}).
\end{equation}
Applying~\eqref{majorationTordue0_1} to the l.h.s. of~\eqref{minorationDegHtRightParm_1} and~\eqref{minorationDegQ_1} to the r.h.s. of~\eqref{minorationDegHtRightParm_1} we obtain
\begin{multline} \label{majorationm_pre_1}
\rho_i^{r-r_{\ul{f}}}\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1\right)
\\
\geq
(t_{\ul{f}}+2)2^{-n-2}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\deg\left(\V(\idp_{\ul{f}}),\delta_0,\delta_1\right)m(I(V_i,\idp)).
\end{multline}
Finally, we deduce~\eqref{majorationm} from~\eqref{majorationm_pre_1} after simplification, using the remark that $0\leq t_{\ul{f}}\leq n$ and $0\leq r \leq n+1$ (in fact, in this case $\nu_1=0$ we obtain an even better constant on the r.h.s. of~\eqref{majorationm}).
In the case $\nu_1>0$ we proceed in a similar way. In this case the ideal $I(V_i,\idp)=V_i\A_{\idp}\cap\A$ is generated by $\idp_{\ul{f}}$ and by $r-r_{\ul{f}}$
polynomials of bi-degree bounded above by $\left(\rho_i\left(\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1\right),\rho_i\delta_1\right)$ (see Definition~\ref{def_i0}). Again, using B\'ezout's theorem (see Lemma~\ref{lemma_BT}) and definition~\eqref{def_deg_X_ab} we find
\begin{multline} \label{majorationTordue0_2}
\deg\left(\V(I(V_i,\idp)),\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1,\delta_1\right)
\\
\leq\rho_i^{r-r_{\ul{f}}}\deg\left(\V(\idp_{\ul{f}}),\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1,\delta_1\right).
\end{multline}
We consider a minimal prime ideal $\idq$ associated to $I(V_i,\idp)$. We readily verify that we can apply Corollary~\ref{LemmeCor14_1_nu_one_positive} (see the first part of this proof for more details), which yields the lower bound
\begin{multline} \label{Main_Lemma_minorationdegW_1_nu_one_positive}
\deg\left(\V(\idq),\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1,\delta_1\right) \\ \geq (t_{\ul{f}}+2)6^{-n-2}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}\\\times\deg\left(\V(\idp_{\ul{f}}),\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1,\delta_1\right).
\end{multline}
Using formulae~\eqref{degDecomposition_1} and~\eqref{htDecomposition_1} we find
\begin{multline} \label{majorationm_pre_1_part_two}
\rho_i^{r-r_{\ul{f}}}\deg\left(\V(\idp_{\ul{f}}),\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1,\delta_1\right)
\\
\geq
(t_{\ul{f}}+2)6^{-n-2}(n+2)^{-(n+1)(n-t_{\ul{f}}+1)}
\\
\times\deg\left(\V(\idp_{\ul{f}}),\delta_0+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1,\delta_1\right)m(I(V_i,\idp)).
\end{multline}
Finally, we deduce~\eqref{majorationm} from~\eqref{majorationm_pre_1_part_two} after simplification, using the remark that $0\leq t_{\ul{f}}\leq n$ and $0\leq r \leq n+1$.
\end{proof}
\section{Transference lemma of P. Philippon} \label{section_transference_lemma}
From here on we assume that $t_{\ul{f}}\geq 1$. Indeed, in the case $t_{\ul{f}}=0$ all the functions $f_1(z),\dots,f_n(z)$ are algebraic, hence the multiplicity estimate follows immediately from Liouville's inequality (see Lemma~\ref{lemma_Liouville_ie}).
In this section we present (a particular case of) the transference lemma elaborated in~\cite{PP}. In the sequel we shall use the constant $c_n$ defined by
\begin{equation} \label{def_cn}
c_n=2^{n+1}(n+2)^{(n+1)(n+3)}.
\end{equation}
Note that in terms of~\cite{PP} one has $c_n=c_{\mpp^1\times\mpp^n}$.
\begin{theorem}[Transference $(1,n)$-Projective Lemma]\label{LdT}
Let $\ull{f} \in \mpp^n_{\kk[[\b{z}]]}$. We denote $t=t_{\ul{f}}$. Let $C$ be a real number satisfying
\begin{eqnarray} \label{LdT_Cestgrande}
C&\geq&\left(c_n\right)^{t}\left(C_{\ul{f}}\right)^{t+1}\max\left(1,\nu_0,\mu\right)^{-t-1},\\
C&\geq&\left(\left(h(\idpf)+\deg(\idpf)\right)\max\left(1,\frac1{\nu_0},\frac1{\mu}\right)\right)^{1/(t+1)}. \label{LdT_Cestgrande_2}
\end{eqnarray}
If a homogeneous polynomial $\b{P} \in \kk[\b{z}][X_0,X_1,...,X_n]\setminus\idpf$ satisfies
\begin{multline} \label{ordPplusqueb}
\ordz (\b{P}(\ull{f})) - \deg\b{P}\cdot\ordz (\ull{f}) - h(\b{P})\\ > C\cdot t\cdot\left((\nu_0+\mu) \left(h(\b{P})+1\right)+(\nu_1+\mu)\deg P\right)\mu^{t-1}\left(\deg\b{P}+1\right)^{t},
\end{multline}
then there is an irreducible cycle $\b{Z} \subset \mpp^n\left(\overline{\kk(\b{z})}\right)$ defined over $\kk(\b{z})$, of
dimension~0, contained in the zero locus of $\b{P}$ and in the zero locus of the ideal $\idp_{\ul{f}}$, satisfying
\begin{multline} \label{LdTdegZ}
\nu_0\deg\b{Z}\cdot h(\b{P})+\nu_1\deg\b{Z}\cdot\deg\b{P}+\mu\cdot h(\b{Z})\cdot\deg\b{P}\\ \leq (c_n C)^{\frac{t-1}{t+1}}\mu^t\left(\deg P+1\right)^t\left( h(\idpf)\deg(P) + \deg(\idpf)h(P)\right),
\end{multline}
and
\begin{multline} \label{LdTordZ}
\sum_{\alpha \in \b{Z}} \Ord_{\ull{f}}(\alpha) > C^{\frac{1}{t+1}}c_n^{-\frac{t}{t+1}}\Big( \nu_0\deg(\b{Z})h(\b{P})+\nu_1\deg(\b{Z})\deg\b{P}\\+ \mu\cdot h(\b{Z})\deg\b{P} \Big).
\end{multline}
In particular, (\ref{LdTordZ}) implies
\begin{multline} \label{LdTordZ0}
\sum_{\ul{\alpha} \in \b{Z}} \Ord_{\ull{f}}(\ul{\alpha}) > C_{\ul{f}}\Big( \nu_0\deg(\b{Z})h(\b{P}) + \nu_1\deg(\b{Z})\deg\b{P} \\ + \mu\cdot h(\b{Z})\deg\b{P} \Big).
\end{multline}
\end{theorem}
\begin{proof}
We denote by $I_0$ the ideal corresponding to the intersection of $\V(\idpf)$ and $\V(P)$, i.e. the ideal given by Proposition~4.11 of Chapter~3, \cite{NP}. By this proposition, the ideal $I_0$ satisfies
\begin{eqnarray}
\deg I_0 &\leq& \deg\idpf\cdot\deg P, \label{LdT_bound_degI0}\\
h(I_0) &\leq& h(\idpf)\cdot\deg P+\deg\idpf \cdot h(P), \label{LdT_bound_hI0}
\end{eqnarray}
and
\begin{multline} \label{LdT_lb_ordI0}
\Ord_{\ul{f}} I_0 \geq \Ordz (\b{P}(\ull{f})) - \deg P\cdot h(\idpf) - h(P)\cdot\deg\idpf
\\
\geq C\cdot t\cdot\left((\mu+\nu_0) \left(h(\b{P})+1\right)+(\nu_1+\mu)\deg\b{P}\right)\mu^{t-1}\left(\deg\b{P}+1\right)^{t}
\\
-\deg P\cdot h(\idpf) - h(P)\cdot\deg\idpf,
\end{multline}
where the second inequality is implied by hypothesis~\eqref{ordPplusqueb}.
We can consider $(1:\b{z})$ as coordinates of a point in $\mpp^1_{\overline{\kk(\b{z})}}$ (see Remark~\ref{genZP}). Under this convention, the cycle $\tilde{X}_0:=\V(I_0)$ can be considered as a cycle in $\mpp^1\times\mpp^n$ of dimension $t$ and it has the following degrees:
\begin{eqnarray*}
\deg_{(0,t)}\tilde{X}_0&=&h(I_0),\\
\deg_{(1,t-1)}\tilde{X}_0&=&\deg I_0.
\end{eqnarray*}
We apply Corollary~11 of~\cite{PP} to $\tilde{\Phi}=\ullt{f}$ and
$\tilde{X}_0=\V(I_0)\subset\mpp^1\times\mpp^n$. We choose the multi-degree $(\eta,\delta)$ to be
\begin{eqnarray*}
\eta &=& \left[(c_nC)^{\frac{1}{t+1}}\left(\nu_0(h(\b{P})+1)+\nu_1\deg\b{P}\right)\right], \\
\delta &=& \left[(c_nC)^{\frac{1}{t+1}}\mu(\deg\b{P}+1)\right].
\end{eqnarray*}
Inequalities~\eqref{LdT_Cestgrande} and~\eqref{LdT_Cestgrande_2} imply
\begin{equation} \label{LdT_preuve_majoration_hdeg}
\begin{aligned}
h(\b{P}) &\leq\eta, \\
\deg\b{P} &\leq\delta,\\
\max(c_n,C_{\ul{f}}) &\leq \min\left(\eta,\delta\right)
\end{aligned}
\end{equation}
(recall that the constant $C_{\ul{f}}$ is introduced in Definition~\ref{def_tf}), hence $\tilde{X}_0$ is defined by forms of multidegree $\leq(\eta,\delta)$ with
$$
\min(\eta,\delta)\geq c_n=c_{\mpp^1\times\mpp^n},
$$
where the constant $c_{\mpp^1\times\mpp^n}$ is the one defined in~\cite{PP}.
In our case we have
\begin{multline} \label{LdT_preuve_degX0etadelta}
\deg\left(\tilde{X}_0,\eta,\delta\right)=h(I_0)\cdot\delta^{t}+t\cdot\deg I_0\cdot\eta\delta^{t-1}\\
\leq\left( h(\idpf)\cdot\deg P+\deg\idpf \cdot h(P)\right)\cdot\delta^{t}+t\cdot\deg\idpf\cdot\deg P\cdot\eta\delta^{t-1}\\
\leq (c_nC)^{\frac{t}{t+1}}\Bigg(\left( h(\idpf)\cdot\deg P+\deg\idpf \cdot h(P)\right)\cdot\mu^t(\deg P+1)^t\\+t\cdot\deg\idpf\cdot\deg P\cdot\left(\nu_0(h(\b{P})+1)+\nu_1\deg\b{P}\right)\mu^{t-1}(\deg P+1)^{t-1}\Bigg)\\
=(c_nC)^{\frac{t}{t+1}}\mu^{t-1}(\deg P+1)^{t-1}\Bigg(\left( h(\idpf)\cdot\deg P+\deg\idpf \cdot h(P)\right)\cdot\mu(\deg P+1)\\+t\cdot\deg\idpf\cdot\deg P\cdot\left(\nu_0(h(\b{P})+1)+\nu_1\deg\b{P}\right)\Bigg),
\end{multline}
here the first inequality in~\eqref{LdT_preuve_degX0etadelta} is a consequence of~\eqref{LdT_bound_degI0} and~\eqref{LdT_bound_hI0}, and the second follows by a direct application of the definitions of $\eta$ and $\delta$.
The condition
$$
\Ord_{\ull{f}}\tilde{X}_0\geq c_n^{-1}\deg\left(\tilde{X}_0,\eta,\delta\right)
$$
is ensured by a direct comparison of the r.h.s. of~\eqref{LdT_lb_ordI0} (recall that by definition $\Ord_{\ull{f}}\tilde{X}_0=\Ord_{\ull{f}}I_0$) with~\eqref{LdT_preuve_degX0etadelta}, taking into account hypothesis~\eqref{LdT_Cestgrande_2}.
The conclusion of Corollary~11 of~\cite{PP} gives us exactly the conclusion of the theorem. Indeed, this corollary provides us with a cycle $\b{Z}\subset\tilde{X}_0(\ol{\kk(\b{z})})$ defined over $\kk(\b{z})$ and of dimension 0 such that
\begin{eqnarray}
\delta\cdot h(\b{Z})+\eta\deg\b{Z} &\leq& \deg\left(\tilde{X}_0,\eta,\delta\right) \label{LdT_preuve_cor_conclusion_deg},\\
\sum_{\ul{\alpha}\in\b{Z}} \Ord_{\ull{f}}(\ul{\alpha}) &>& c_n^{-1}\left(\eta\deg\b{Z} + \delta\cdot h(\b{Z})\right) \label{LdT_preuve_cor_conclusion_Ord}.
\end{eqnarray}
Inequality~(\ref{LdT_preuve_cor_conclusion_deg}) (together with~\eqref{LdT_preuve_degX0etadelta}) gives us inequality~(\ref{LdTdegZ}), and~(\ref{LdT_preuve_cor_conclusion_Ord}) provides us with~(\ref{LdTordZ}).
\end{proof}
\begin{definition} \label{defZP} Let $C$ be a real number satisfying~(\ref{LdT_Cestgrande}). We associate to each non-zero homogeneous polynomial $\b{P} \in \kk[\b{z}][X_0,X_1,...,X_n]$ satisfying~(\ref{ordPplusqueb})
an irreducible 0-dimensional cycle $\b{Z}_{C}(\b{P})$ defined over $\kk(\b{z})$, contained in the zero locus of $\b{P}$ and satisfying
inequalities~(\ref{LdTdegZ}) and~(\ref{LdTordZ0}). In view of the transference lemma there exists at least one cycle satisfying all these conditions (provided the polynomial $\b{P}$ and the constant $C$ satisfy~(\ref{ordPplusqueb})). If there exists more than one such cycle, we choose one of them and fix this choice.
\end{definition}
\begin{remark} \label{genZP} Considering $(1:\b{z})$ as coordinates of a point
in $\mpp^1_{\overline{\kk(\b{z})}}$, we can consider the cycle $\b{Z}$ as a
$1$-dimensional cycle in $\mpp^1_{\kk}\times\mpp^n_{\kk}$ (defined over $\kk$).
In this case we denote this cycle by $\Z_C(P)$.
\smallskip
We associate to a bi-homogeneous polynomial $P(X_0',X_1',X_0,X_1,...,X_n)\in\AnneauDePolynomes$ satisfying
\begin{equation} \label{ordPplusqueb2}
\frac{\ordz \left(P(1,\b{z},\ull{f})\right) - (\deg_{\ul{X}}P) \ordz (\ull{f}) - \deg_{\ul{X}'}P}{t\cdot\left((\nu_0+\mu) \left(h(\b{P})+1\right)+(\nu_1+\mu)\deg P\right)\mu^{t-1}\left(\deg\b{P}+1\right)^{t}} > C,
\end{equation}
the homogeneous polynomial
$$
\tilde{P}(X_0,X_1,...,X_n)=P(1,\b{z},X_0,X_1,...,X_n)
$$
(which then satisfies~(\ref{ordPplusqueb})). We have already defined the cycles $\b{Z}_C(\tilde{P})$ and $\Z_{C}(\tilde{P})$ for the latter polynomial, as $\tilde{P}\in\kk[z][X_0,\dots,X_n]$ (see Definition~\ref{defZP}). By this procedure we thus also associate the cycles $\b{Z}_C(P)$ and $\Z_{C}(P)$ to every bi-homogeneous polynomial $P \in \AnneauDePolynomes$ satisfying~(\ref{ordPplusqueb2}).
\end{remark}
\begin{remark} \label{rem_delta_ZCP}
Note that if $P\not\in\idpf$, that is if $P(\ul{f})\ne 0$, we necessarily have $\ul{f}\not\in Z_C(P)$ (because $Z_C(P)\subset\Zeros(P)$ by definition), or in other terms $\I(Z_C(P))\setminus\idpf\ne\emptyset$. Hence the quantities $\delta_0(Z_C(P),\ul{f})$ and $\delta_1(Z_C(P),\ul{f})$ (introduced in Definition~\ref{def_delta}) are defined whenever $P(\ul{f})\ne 0$.
\end{remark}
\begin{remark} \label{rem_LdT_Z_nonisotrivial} Note that combining~(\ref{ordPplusqueb2}) (for $C$ large enough) with the transference lemma (Theorem~\ref{LdT}, (\ref{LdTordZ0})) we can ensure that the cycle $\Z_{C}(P)$ is not isotrivial (hence $\b{Z}_C(P)$ is not defined over~$\kk$). Indeed,
each point defined over $\kk$ contributes at most $\Ordz\left(\ull{f}(\b{z})\wedge\ull{f}(0)\right)$ to $\Ord_{\ull{f}}\Z_{C}(P)$, so for an isotrivial cycle $Z$ one has
$$
\Ord_{\ull{f}}Z\leq\Ordz\left(\ull{f}(\b{z})\wedge\ull{f}(0)\right)\deg Z.
$$
Thus
\begin{equation} \label{def_Ciso}
C>\Ciso:=\left(\frac{c_n\Ordz\left(\ull{f}\wedge\ull{f}(0)\right)+1}{\min(\nu_0,\mu)}\right)^n
\end{equation}
implies that $\Z_{C}(P)$ is not isotrivial.
\end{remark}
We recall the notation introduced in Definition~\ref{def_tf}: given functions $f_1(z),\dots,f_n(z)$, we define
\begin{equation*}
t=t(\ul{f}):=\trdeg_{\kk(z)}\kk(z,f_1(z),\dots,f_n(z)).
\end{equation*}
The following theorem plays an important role in the proof of our principal result, Theorem~\ref{LMGP}.
\begin{theorem} \label{dist_alpha} Let $\ull{f}=(1:\b{f}_1:\dots:\b{f}_n) \in
\mpp^n_{\kk[[\b{z}]]}$ and let $\b{P}\in\kk[\b{z}][X_0,\dots,X_n]$ be a
homogeneous polynomial such that
\begin{equation*}
P(z,\b{f}_1(z),\dots,\b{f}_n(z))\ne 0.
\end{equation*}
Assume that $P$ satisfies~(\ref{ordPplusqueb}) with
\begin{equation} \label{dist_alpha_Cestgrande}
C \geq \max\left(\left(3t!c_n/\min(\nu_0,\mu)\right)^t,\left(\frac{c_nC_{sg}+1}{\min(\nu_0,\mu)}\right)^{t+1},\Ciso\right)
\end{equation}
(where the constant $C_{sg}$ is described in Corollary~\ref{cor1_deltainfty} and $\Ciso$ in Remark~\ref{rem_LdT_Z_nonisotrivial}, \eqref{def_Ciso}).
Let $\b{Z}=\b{Z}_C(\b{P})$ and let $\b{P}_0 \in \kk[\b{z}][X_0,\dots,X_n]$ be a
homogeneous non-zero polynomial in $\ul{X}$, vanishing on $\b{Z}$ and realizing
the minimum of the quantity
\begin{equation} \label{dist_alpha_q}
\nu_0\cdot\deg\b{Z}\cdot h(\b{Q})+\nu_1\cdot\deg\b{Z}\cdot\deg_{\ul{X}}\b{Q}+\mu\cdot h(\b{Z})\deg_{\ul{X}}\b{Q}
\end{equation}
over all homogeneous polynomials $\b{Q}\in\kk[\b{z}][X_0,\dots,X_n]\setminus\idpf$
vanishing on $\b{Z}$. We denote $\delta_0:=h(\b{P}_0)$ and
$\delta_1:=\deg_{\ul{X}}\b{P}_0$ (cf. Definition~\ref{def_delta}).
There exists a point $\ull{\alpha}\in\b{Z}$ satisfying
\begin{equation} \label{Cdirect}
\Ord(\ull{f},\ull{\alpha})
> \tilde{C} (\delta_0+1)(\delta_1+1)^t,
\end{equation}
where $\tilde{C}=C^{\frac{1}{t}}\min(\nu_0,\mu)\left(3\cdot t!\cdot c_n^{\frac{t}{t+1}}\right)^{-1}$.
\end{theorem}
\begin{proof}
We claim that there exists a point $\ull{\alpha}_1 \in \b{Z}_C(\b{P})$ satisfying
\begin{equation} \label{dist_alpha_Ord_assezgrand}
\Ord(\ull{\alpha}_1,\ull{f}) \geq C_{sg}.
\end{equation}
Indeed, by definition of $\b{Z}_C(\b{P})$ (see Definition~\ref{defZP}) we have the lower bound~(\ref{LdTordZ0}) for this cycle. This inequality implies that there exists a point $\ull{\alpha}_1 \in \b{Z}_C(\b{P})$ satisfying $\Ord(\ull{\alpha}_1,\ull{f}) \geq c_n^{-1}\left[(c_nC)^{\frac{1}{t+1}}\min(\nu_0,\mu)\right]$, and we deduce~(\ref{dist_alpha_Ord_assezgrand}) from~(\ref{dist_alpha_Cestgrande}).
In view of Corollary~\ref{cor1_deltainfty}, condition~\eqref{dist_alpha_Ord_assezgrand} yields
\begin{equation} \label{dist_alpha_soit0_soit1}
\delta_0 \geq 2\cdot n!+1 \mbox{ or } \delta_1 \geq 4n.
\end{equation}
Let $(a,b)\in\mnn^2$. By linear algebra one can construct a bi-homogeneous
polynomial $Q_{(a,b)}=Q_{(a,b)}(X_0',X_1',X_0,X_1,...,X_n)\in\AnneauDePolynomes
\setminus \idpf$ of bi-degree $(a,b)$ and of vanishing order at
$\ullt{f}=\left(1,\b{z},1,f_1(z),\dots,f_n(z)\right)$ satisfying
\begin{equation}\label{ie_Qab}
\ordz Q_{(a,b)}(\ullt{f}) \geq \left\lfloor\frac{1}{t!}(a+1)(b+1)^t\right\rfloor.
\end{equation}
Indeed, by definition of $t$ we can choose indices $i_1,\dots,i_t$ in such a way that $z,f_{i_1},\dots,f_{i_t}$ are all algebraically independent over $\kk$. The space of all bi-homogeneous polynomials of bi-degree up to $(a,b)$ depending only on the variables $X_0',X_1',X_0,X_{i_1},\dots,X_{i_t}$ has dimension
$$
(a+1)\binom{b+t}{t}>\frac{1}{t!}(a+1)(b+1)^t
$$
over $\kk$, so we can choose among them a non-zero polynomial satisfying~\eqref{ie_Qab}. By construction this polynomial cannot belong to $\idpf$: otherwise it would provide a non-trivial algebraic relation between $z,f_{i_1},\dots,f_{i_t}$, which is impossible by the choice of the indices $i_1,\dots,i_t$.
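For example (a minimal numerical illustration of this counting argument, with the hypothetical values $t=2$, $a=b=2$): the available dimension is
\begin{equation*}
(a+1)\binom{b+t}{t}=3\binom{4}{2}=18,
\end{equation*}
while requiring vanishing order at least $\left\lfloor\frac{1}{t!}(a+1)(b+1)^t\right\rfloor=\lfloor 13.5\rfloor=13$ imposes only $13$ linear conditions on the coefficients, so a non-zero solution of~\eqref{ie_Qab} indeed exists.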
Let
\begin{equation} \label{dist_alpha_choix_ab}
(a,b)=\left\{\begin{aligned}&(\delta_0-1,\delta_1), \; \mbox{ if }\delta_0 \geq 2\cdot n!+1,\\
&(\delta_0,\delta_1-1), \; \mbox{ otherwise, i.e. } \delta_1 \geq 4n \mbox{ in view of~(\ref{dist_alpha_soit0_soit1}) }.\end{aligned}\right.
\end{equation}
We claim that for this choice of $(a,b)$ the following inequality holds
\begin{equation} \label{dist_alpha_est_ab}
\ordz Q_{(a,b)}(\ullt{f}) > \frac{1}{2\cdot t!}(\delta_0+1)(\delta_1+1)^t.
\end{equation}
In view of~(\ref{dist_alpha_choix_ab}), exactly two cases are possible:
\begin{list}{}{\usecounter{tmpabcdmine}}
\item a) \label{dist_alpha_cas_a} $\delta_0 \geq 2\cdot n!+1$,
\item b) \label{dist_alpha_cas_b} $\delta_1 \geq 4n$.
\end{list}
By~(\ref{ie_Qab}) we have
\begin{equation} \label{ordQ_gt_abm1}
\ordz Q_{(a,b)}(\ullt{f}) \geq
\left\lfloor\frac{1}{t!}(a+1)(b+1)^t\right\rfloor > \frac{1}{t!}(a+1)(b+1)^t -1.
\end{equation}
In the case~\ref{dist_alpha_cas_a} we proceed as follows. First, in this case~\eqref{ordQ_gt_abm1} can be rewritten as
\begin{equation}
\ordz Q_{(a,b)}(\ullt{f}) > \frac{1}{t!}\left(\delta_0(\delta_1+1)^t-t!\right)
\end{equation}
and in order to show~(\ref{dist_alpha_est_ab}) it is sufficient to verify
\begin{equation} \label{dist_alpha_suffit_cas_a}
2\delta_0(\delta_1+1)^t-2\cdot t!\geq (\delta_0+1)(\delta_1+1)^t.
\end{equation}
The latter inequality is obvious for $\delta_0 \geq 2\cdot n!+1\geq 2\cdot t!+1$ (and $\delta_1 \geq 0$).
In the case~\ref{dist_alpha_cas_b} the same procedure brings us to the point where it is sufficient to verify
(instead of~(\ref{dist_alpha_suffit_cas_a}))
\begin{equation} \label{dist_alpha_suffit_cas_b}
2(\delta_0+1)\delta_1^t-2\cdot t! \geq (\delta_0+1)(\delta_1+1)^t.
\end{equation}
We can rewrite this inequality as
\begin{equation} \label{ie_dist_alpha_suffit_cas_b}
\left(2\left(\frac{\delta_1}{\delta_1+1}\right)^t-1\right)(\delta_0+1) \geq \frac{2\cdot t!}{(\delta_1+1)^t}.
\end{equation}
The l.h.s. of~(\ref{ie_dist_alpha_suffit_cas_b}) is an increasing function of $\delta_0$ and $\delta_1$, and the r.h.s. of~(\ref{ie_dist_alpha_suffit_cas_b}) is a decreasing function of $\delta_1$. So it is sufficient to verify this inequality for $\delta_0=0$ and $\delta_1=4t\leq 4n$. We can directly calculate
\begin{equation}
\left(2\left(\frac{4t}{4t+1}\right)^t-1\right)(0+1) > 1/2 > \frac{2\cdot t!}{(4t+1)^t},
\end{equation}
hence~(\ref{dist_alpha_suffit_cas_b}) is true for all the values $\delta_0 \geq 0$, $\delta_1 \geq 4n$. This completes the proof of~(\ref{dist_alpha_est_ab}).
We define
\begin{equation*}
\b{Q}(X_0,...,X_n)=Q_{(a,b)}(1,\b{z},X_0,...,X_n)^q,
\end{equation*}
where $q = \lceil 2\cdot t! \tilde{C} \rceil$; therefore we have
$\ordz Q_{(a,b)}(1,\b{z},1,\b{f}_1,...,\b{f}_n)^q \geq \tilde{C} (\delta_0+1)(\delta_1+1)^t$. As the polynomial $Q_{(a,b)}$ was constructed so that $Q_{(a,b)}(\ul{f})\ne 0$, we have $\b{Q}(f_1,\dots,f_n) \ne
0$, hence $Q\not\in\idpf$.
It is easy to verify
\begin{equation*}
\begin{aligned}
&h(\b{Q})\leq\deg_{\ul{X}'}Q_{(a,b)} = a,\\
&\deg_{\ul{X}}\b{Q}=\deg_{\ul{X}}Q_{(a,b)} = b.
\end{aligned}
\end{equation*}
We recall that in the statement we have introduced the notation $\b{Z}=\b{Z}_C(\b{P})$. One obviously has
\begin{equation*}
\deg\b{Z} \geq 1
\end{equation*}
and, as $\b{Z}$ is not defined over $\kk$ (see Remark~\ref{rem_LdT_Z_nonisotrivial}), one has
\begin{equation*}
h(\b{Z}) \geq 1.
\end{equation*}
In view of~(\ref{dist_alpha_choix_ab}) we obtain that the polynomial $\b{Q}_{(a,b)}$ makes the quantity~(\ref{dist_alpha_q}) strictly smaller than the minimum realized by $\b{P}_0$. So $Q_{(a,b)}$ (and hence $Q$) cannot vanish
on $\b{Z}$ (by the definition of $\b{P}_0$); in other words, $\b{Q}$ does not belong to $\I(\b{Z})$.
We apply Theorem~4.11 of chapter~3 of~\cite{NP} to
polynomial $\b{Q}(X_0,...,X_n)$
and to the ideal $\I(\b{Z})$, which is 0-dimensional over $\kk(\b{z})$.
Let $\ull{\alpha}\in\b{Z}$ realize the maximum of $\Ord(\cdot,\ull{f})$
for points of $\b{Z}$; in other words: let
$\Ord(\ull{f},\ull{\alpha})=\max_{\beta \in
\b{Z}}\Ord(\ull{f},\ull{\beta})$.
We define
\begin{equation} \label{def_theta}
\theta=\left\{ \begin{aligned} \Ordz \b{Q}(\ull{f}) &\mbox{, if }
\Ord(\ull{f},\ull{\alpha})>\Ordz \b{Q}(\ull{f})\\
\Ord_{\ull{f}} \I(\b{Z}) &\mbox{, if }
\Ord(\ull{f},\ull{\alpha}) \leq \Ordz \b{Q}(\ull{f})
\end{aligned}
\right.
\end{equation}
By Theorem~4.11 of chapter~3 of~\cite{NP} one has
\begin{equation} \label{thetaMaj}
\theta \leq h(\b{Q})\deg(\I(\b{Z}))+h(\I(\b{Z}))\deg(\b{Q})
\end{equation}
(in our case the base field is $\kk(\b{z})$ and all its valuations
are non-archimedean ones, so $\nu=0$ and the term $\nu m^2
\deg(\I(\b{Z})) \deg(\b{Q})$ is equal to zero in the statement of this theorem).
We claim that the inequality
\begin{equation} \label{dist_alpha_ie1}
\Ord(\ull{f},\ull{\alpha}) \leq \Ordz \b{Q}(\ull{f})
\end{equation}
is in fact impossible.
Indeed, in this case $\theta=\Ord_{\ull{f}} \I(\b{Z})$, so~(\ref{thetaMaj}) implies
\begin{equation*}
\Ord_{\ull{f}} \I(\b{Z}) \leq q \delta_0 \deg(\b{Z}) + q \delta_1 h(\b{Z}),
\end{equation*}
and we can weaken this inequality
\begin{equation*}
\Ord_{\ull{f}} \I(\b{Z}) \leq \frac{q}{\min(\nu_0,\mu)}\left(\nu_0\delta_0 \deg(\b{Z}) + \nu_1\delta_1\deg(\b{Z}) + \mu\delta_1 h(\b{Z})\right).
\end{equation*}
Using the definition of $\delta_0$ and $\delta_1$ we deduce
\begin{equation} \label{majorationZf}
\begin{aligned}
\Ord_{\ull{f}} \I(\b{Z}) &\leq \frac{q}{\min(\nu_0,\mu)}\Big(\nu_0 h(\b{P}_0) \deg(\b{Z})\\&\qquad\qquad\qquad\qquad + \nu_1\deg\b{P}_0\deg(\b{Z}) + \mu\deg\b{P}_0h(\b{Z})\Big)\\
&\leq \frac{q}{\min(\nu_0,\mu)}\Big(\nu_0 h(\b{P}) \deg(\b{Z})\\&\qquad\qquad\qquad\qquad + \nu_1\deg\b{P}\deg(\b{Z}) + \mu\deg\b{P}h(\b{Z})\Big).
\end{aligned}
\end{equation}
Further, as $\b{P}$ vanishes on $\b{Z}$ one has
\begin{multline*}
\nu_0 h(\b{P}_0) \deg(\b{Z}) + \nu_1\deg\b{P}_0\deg(\b{Z}) + \mu\deg\b{P}_0h(\b{Z}) \\ \leq \nu_0 h(\b{P}) \deg(\b{Z}) + \nu_1\deg\b{P}\deg(\b{Z}) + \mu\deg\b{P}h(\b{Z})
\end{multline*}
by the minimality from the definition of $\b{P}_0$.
Then, applying~(\ref{LdTordZ}) (recall our notation $\b{Z}=\b{Z}_C(\b{P})$), one has
\begin{equation*}
\Ord_{\ull{f}}\I(\b{Z})> C^{\frac{1}{t}}c_n^{-\frac{t}{t+1}} \left(\nu_0 h(\b{P}) \deg(\b{Z}) + \nu_1\deg\b{P}\deg(\b{Z}) + \mu\deg\b{P}h(\b{Z})\right)
\end{equation*}
and combining this inequality with~(\ref{majorationZf}) we obtain
\begin{multline*}
C^{\frac{1}{t}}c_n^{-\frac{t}{t+1}} \left(\nu_0 h(\b{P}) \deg(\b{Z}) + \nu_1\deg\b{P}\deg(\b{Z}) + \mu\deg\b{P}h(\b{Z})\right)\\ < \frac{q}{\min(\nu_0,\mu)}\left(\nu_0 h(\b{P}) \deg(\b{Z}) + \nu_1\deg\b{P}\deg(\b{Z}) + \mu\deg\b{P}h(\b{Z})\right).
\end{multline*}
Cancelling the common factor $\nu_0 h(\b{P}) \deg(\b{Z}) + \nu_1\deg\b{P}\deg(\b{Z}) + \mu\deg\b{P}h(\b{Z})$ we deduce the inequality
\begin{equation*}
3\cdot t!\tilde{C}=C^{\frac{1}{t}}\min(\nu_0,\mu)c_n^{-\frac{t}{t+1}} < q = \lceil 2\cdot t!\tilde{C} \rceil
\end{equation*}
which contradicts the definition of~$q$ and $\tilde{C} \geq 1$ (recall that $\tilde{C}$ is defined at the end of the statement of this theorem and $\tilde{C} \geq 1$ in view of~(\ref{dist_alpha_Cestgrande})). So the inequality~(\ref{dist_alpha_ie1}) is impossible.
Thus unavoidably one has
\begin{equation*}
\Ord(\ull{f},\ull{\alpha})>\ordz \b{Q}(\ull{f}).
\end{equation*}
By construction of $\b{Q}$ one has $\ordz \b{Q}(\ull{f}) > \tilde{C} (\delta_0+1)(\delta_1+1)^t$, so we
deduce
\begin{equation*}
\Ord(\ull{f},\ull{\alpha}) > \tilde{C} (\delta_0+1)(\delta_1+1)^t.
\end{equation*}
\end{proof}
\section{Principal result} \label{section_principal result}
In this section we introduce the main result of this paper
and prove it.
Recall that the general framework imposed in this article is given in Subsection~\ref{subsection_general_framework}. So, we have an algebraically closed field $\kk$, a polynomial ring $\A=\kk[X_0',X_1',X_0,\dots,X_n]$ bi-graded with respect to $\left(\deg_{\ul{X}'},\deg_{\ul{X}}\right)$, a point
\begin{equation*}
\ul{f}=\left(1:z,1:f_1(z):\dots:f_n(z)\right)
\end{equation*}
and a map $\phi:\A\rightarrow\A$ satisfying properties~(\ref{degphiQleqdegQ}) and~(\ref{condition_T2_facile}). We also recall the notation
\begin{equation} \label{def_r}
t:=t_{\ul{f}}=\trdeg_{\kk(z)}\kk\left(z,f_1(z),\dots,f_n(z)\right).
\end{equation}
In the statement below as well as in the subsequent considerations we use various notions defined in subsections~\ref{definitions_comm_algebra} and~\ref{definitions_multiprojective_dg}. In particular, $m(I)$ (as well as $V_i$ and $e_{\phi}$) is defined in Definition~\ref{definDePP}, $i_0$ is defined in Definition~\ref{def_i0} and $\ord_{\ul{f}}$ is defined in Definition~\ref{defin_ord_xy}.
\begin{theorem}[Formal multiplicity lemma]\label{LMGP} Let $\kk$, $\A$, $\ul{f}$ and $\phi$ be as above. Assume that the map $\phi$ is $\ul{f}$-admissible.
Let $n_1\in\{1,\dots,n\}$ and $C_0, C_1\in\mrr^+$. We denote by $\cK_{n_1}$ the set of all equidimensional bi-homogeneous ideals $I\subset\AnneauDePolynomes$ of rank $\geq n_1$, such that $\idp_{\ul{f}}\subsetneq I$, $\ul{f}\not\in\V(I)$ and $m(I)\leq C_m$ (recall that the constant $C_m$ is introduced in Definition~\ref{def_Cm}),
and moreover such that all its associated prime ideals satisfy
\begin{equation} \label{theoLMGP_condition_ordp_geq_C0}
\ord_{\ull{f}}\idq \geq C_0.
\end{equation}
Assume also that $\ul{f}$ has the $\left(\phi,\cK_{n_1}\right)$-property (see Definition~\ref{def_weakDproperty}).
Then there exists a constant $K>0$ such that every $P \in
\AnneauDePolynomes$ satisfying $P(1,z,1,f_1(z),\dots,f_n(z))\ne 0$ and, for all $C\geq C_1$,
\begin{equation}\label{condition_n1}
i_0(\Z_C(P))\geq n_1
\end{equation}
(recall that the cycle $\Z_C(P)$ is introduced in Remark~\ref{genZP} and $i_0$ in Definition~\ref{def_i0})
also satisfies
\begin{equation} \label{LdMpolynome2}
\ordz(P(\ullt{f})) \leq K\left((\mu+\nu_0)(\deg_{\ul{X}'}P+1)+\nu_1\deg_{\ul{X}}P\right)\mu^{n-1}(\deg_{\ul{X}} P + 1)^t.
\end{equation}
\end{theorem}
\begin{remark} \label{rem_n1}
Condition~(\ref{condition_n1}) is tautologically true with parameters $n_1=1+r_{\ul{f}}$ and $C_1=0$ (in view of the definition of $i_0(Z(P))$, see Definition~\ref{def_i0} and Remark~\ref{rem_i0}).
Using this choice of $n_1$ and $C_1$ and
enlarging class $\cK_{n_1}$ to $\cK_{1+r_{\ul{f}}}$ we obtain the statement of Theorem~\ref{intro_LMGP}.
\end{remark}
\begin{remark}
The parameters $n_1$ and $C_1$ are introduced because in certain situations it is possible to give a direct lower bound on $i_0(Z(P))$ better than 1 (see Remark~\ref{rem_i0}), thus removing the need to analyse $\phi$-stable ideals of small codimension. This can turn out to be a decisive step; see, e.g., the proof of Proposition~4.11 in~\cite{EZ2010}.
\end{remark}
We deduce Theorem~\ref{LMGP} at the end of this section as a consequence of Lemma~\ref{LemmeProp13} and the following Proposition~\ref{PropositionLdMprincipal}.
\begin{proposition} \label{PropositionLdMprincipal}
Let $P \in \AnneauDePolynomes$
and $C\in\mrr$ satisfy
$$
P(1,z,1,f_1(z),\dots,f_n(z))\ne 0
$$
and:
\begin{align}\label{C_bornee_enP}
C &< \frac{\ordz (P \circ \ullt{f}) - (\deg_{\ul{X}}P) \ordz (\ull{f})-\deg_{\ul{X}'}P}{t\left((\nu_0+\mu)(\deg_{\ul{X}'}P+1)+\nu_1\deg_{\ul{X}}P\right)\mu^{t-1}(\deg_{\ul{X}}P+1)^{t}}\\
C &\geq \left(\min(\nu_0,\mu)\right)^{-t}. \label{C_minore_numu}
\end{align}
Let $\idp$ be the ideal defined as $\idp=\I(\Z_C(P))$, where $\Z_C(P)$ is the cycle introduced in Remark~\ref{genZP}.
Assume that for $i=i_0(\Z_C(P))$ one has
\begin{equation} \label{majoration_e_par_m}
e_{\phi}(V_i(\idp),\idp) \leq m(I(V_i(\idp),\idp)).
\end{equation}
Then,
\begin{equation}\label{Cestsmall}
C \leq \frac{(2\rho_{n+1})^tc_n^{t}}{\min(1;\lambda)^t\min(1;\mu)^t}.
\end{equation}
Moreover, for all polynomials $P\in\AnneauDePolynomes$, one has
\begin{equation} \label{estimationP}
\begin{aligned}
\ordz &P(\ullt{f}(\b{z}))\leq\max\left(\frac{t}{\left(\min(\nu_0,\mu)\right)^{t}},\frac{(2\rho_{n+1})^tc_n^{t}}{\min(1;\lambda)^t\min(1;\mu)^t}\right)\\
&\times\left((\mu+\nu_0)(\deg_{\ul{X}'}P+1)+\nu_1\deg_{\ul{X}}P\right)\mu^{t-1}(\deg_{\ul{X}} P+1)^t\\
&+(\ordz \ull{f})(\deg_{\ul{X}} P)+\deg_{\ul{X}'}P.
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof}
Note that if $\deg_{\ul{X}}P=0$, then the conclusion of the proposition
is automatically satisfied. Thus we need only treat the case $\deg_{\ul{X}}P\geq 1$.
{\it Ad absurdum} assume
\begin{equation} \label{cnstCgrande}
C > \frac{(2\rho_{n+1})^tc_n^{t}}{\min(1;\lambda)^t\min(1;\mu)^t}.
\end{equation}
Recall that $i_0=i_0(\Z_C(P))\geq 1$ is
the largest index $i \in \{1,...,n\}$ such that $\rg(V_i\A_{\idp}) \geq i+r_{\ul{f}}$ (see Definition~\ref{def_i0}).
We let $e_0$ be the largest integer $\leq e_{\phi}(V_{i_0},\idp)$
such that $V_{i_0}+...+{\phi}^{e_0}(V_{i_0})\subset\idp$ (we use the notation $V_{i_0}$ as a shorthand for $V_{i_0}(\idp)$). Note that the assumption~(\ref{majoration_e_par_m}) implies that
$e_{\phi}(V_{i_0},\idp)$ is finite, so $e_0$ is a well-defined integer.
\smallskip
Let $Q$ be a generator of ${\phi}^{e_0}(V_{i_0})$; by Lemma~\ref{majorationphinQ} one has
\begin{equation} \label{PropositionLdMprincipal_majoration_deg_Q}
\begin{aligned}
&\deg_{\ul{X}}Q \leq \mu^{e_0}\rho_{i_0}\delta_1(\idp),\\
&\deg_{\ul{X}'}Q \leq (\nu_0\delta_0(\idp)+e_0\nu_1\delta_1(\idp))\max(\nu_0,\mu)^{e_0-1}\rho_{i_0}.
\end{aligned}
\end{equation}
With the substitution $(X_0':X_1')=(1:\b{z})$ we can consider $Q$ as a polynomial from $\kk[\b{z}][X_0,\dots,X_n]$.
We denote $\b{Z}=\b{Z}_C(P)$. Let $\b{\alpha}\in\b{Z}$. By Lemma~\ref{Representants}, b), there is a system of projective coordinates $\ull{\alpha}$ satisfying
\begin{eqnarray*}
\ordz\ull{\alpha} &=& \ordz\ull{f}, \\
\ordz(\ull{\alpha} - \ull{f}) - \ordz(\ull{f}) &=& \Ordz(\b{\alpha},\b{f}).
\end{eqnarray*}
In view of $\ordz\ull{f}=0$, we deduce immediately
\begin{eqnarray}
\ordz\ull{\alpha} &=& 0,\label{PropositionLdMprincipal_alpha_c1} \\
\ordz(\ull{\alpha} - \ull{f}) &=& \ordz(\ull{\alpha}\wedge\ull{f}). \label{PropositionLdMprincipal_alpha_c2}
\end{eqnarray}
We fix a choice of projective coordinate systems satisfying~(\ref{PropositionLdMprincipal_alpha_c1}) and~(\ref{PropositionLdMprincipal_alpha_c2})
for all $\b{\alpha}\in\b{Z}$.
We claim that for any $\alpha\in Z$
\begin{equation} \label{point_Crucial}
\begin{aligned}
\ordz(\b{\phi}(\b{Q})(\ull{\alpha}))
&\geq \min(\ordz(\b{\phi}(\b{Q})(\ull{f})),\ordz(\ull{\alpha}\wedge\ull{f})).
\end{aligned}
\end{equation}
Indeed,
\begin{equation*}
\begin{aligned}
\ordz&\left(\b{\phi}(\b{Q})(\ull{\alpha})\right)=\ordz\left(\left(\b{\phi}(\b{Q})(\ull{\alpha})-\b{\phi}(\b{Q})(\ull{f})\right)+\b{\phi}(\b{Q})(\ull{f})\right)\\
&\geq\min\left(\ordz\left(\b{\phi}(\b{Q})(\ull{\alpha})-\b{\phi}(\b{Q})(\ull{f})\right),\ordz\left(\b{\phi}(\b{Q})(\ull{f})\right)\right)\\
&\geq\min\left(\ordz\left(\ull{\alpha}-\ull{f}\right),\ordz\left(\b{\phi}(\b{Q})(\ull{f})\right)\right)\\
&\geq\min\left(\ordz\left(\ull{\alpha}\wedge\ull{f}\right),\ordz\left(\b{\phi}(\b{Q})(\ull{f})\right)\right).
\end{aligned}
\end{equation*}
Then, by~(\ref{condition_T2_facile}) and in view of $\b{Q}(\ull{\alpha})=0$ (according to the choice of $e_0$),
\begin{equation} \label{point_Crucial_1}
\begin{split}
\ordz\left(\b{\phi}(\b{Q})(\ull{f})\right) &\geq \lambda \ordz\b{Q}(\ull{f})\\
&\geq \lambda \ordz\big(\b{Q}(\ull{f})-\b{Q}(\ull{\alpha})\big)\\
&\geq \lambda\ordz\left(\ull{\alpha}\wedge\ull{f}\right).
\end{split}
\end{equation}
We deduce from~(\ref{point_Crucial}) and~(\ref{point_Crucial_1})
\begin{equation} \label{PropositionLdMprincipal_ie0}
\ordz(\b{\phi}(\b{Q})(\ull{\alpha})) \geq \min(1,\lambda) \ordz(\ull{\alpha}\wedge\ull{f})
\end{equation}
for all $\b{\alpha}\in\b{Z}$.
By~(\ref{PropositionLdMprincipal_ie0}) one has
\begin{equation} \label{PropositionLdMprincipal_ie1_1}
\begin{split}
&\frac{1}{\min(1,\lambda)}\sum_{\ull{\alpha} \in \b{Z}_{C}(P)}\left(\ordz\left(\phi(\b{Q})(\ull{\alpha})\right)\right)
\\&\geq\sum_{\ull{\alpha} \in \b{Z}_{C}(P)}\left(\ordz(\ull{\alpha}\wedge\ull{f})\right)=:M
\end{split}
\end{equation}
(note that $M$ is equal to the l.h.s. of~(\ref{LdTordZ})). By definition of $\b{Z}_C(P)$ (see Definition~\ref{defZP} and Remark~\ref{genZP}) and with~(\ref{cnstCgrande}) we estimate
\begin{multline} \label{PropositionLdMprincipal_ie1}
M>C^{\frac{1}{t}}c_n^{-\frac{t}{t+1}}\left(\nu_0\deg(\b{Z})\deg_{\b{z}}P+\nu_1\deg(\b{Z})\deg_{\ul{X}}P + \mu h(\b{Z})\deg_{\ul{X}}P\right)\\
\geq \frac{2\rho_{n+1}}{\min(1,\lambda)\min(1,\mu)}\Big(\nu_0\deg(\b{Z})\deg_{\b{z}}P\\ + \nu_1\deg(\b{Z})\deg_{\ul{X}}P + \mu h(\b{Z})\deg_{\ul{X}}P\Big).
\end{multline}
We deduce from~(\ref{PropositionLdMprincipal_ie1_1}) and (\ref{PropositionLdMprincipal_ie1})
\begin{multline} \label{PropositionLdMprincipal_ie15}
\sum_{\ull{\beta} \in \b{Z}_{C}(P)}\ordz\left(\phi(\b{Q})(\ul{\beta})\right)
>\frac{2\rho_{n+1}}{\min(1,\mu)}\\ \times\left(\nu_0\deg(\b{Z})\deg_{\b{z}}P + \nu_1\deg(\b{Z})\deg_{\ul{X}}P + \mu h(\b{Z})\deg_{\ul{X}}P\right).
\end{multline}
Also Liouville's inequality~(\ref{iet_main})
implies (under the assumption that $\b{\phi}(\b{Q})$ does not vanish identically on $\b{Z}_{C}(P)$)
\begin{equation} \label{PropositionLdMprincipal_ie2}
\begin{split}
&\sum_{\ull{\beta} \in \b{Z}_{C}(P)}\ordz\left(\phi(\b{Q})(\ul{\beta})\right)\leq\deg(\b{Z})h(\phi(\b{Q})) + h(\b{Z})\deg\phi(\b{Q})\\
&\leq \max(\mu,\nu_0)^{e_0}\rho_{i_0}\left(\nu_0\deg(\b{Z})\delta_0 + \nu_1(e_0+1)\deg(\b{Z})\delta_1 + \mu h(\b{Z})\delta_1\right)\\
&\leq \max(\mu,\nu_0)^{e_0}(e_0+1)\rho_{i_0}\left(\nu_0\deg(\b{Z})\delta_0 + \nu_1\deg(\b{Z})\delta_1 + \mu h(\b{Z})\delta_1\right)
\end{split}
\end{equation}
(the second inequality in~\eqref{PropositionLdMprincipal_ie2} is implied by~(\ref{PropositionLdMprincipal_majoration_deg_Q})).
According to the definition of $e_0$, the hypothesis~(\ref{majoration_e_par_m}) and Lemma~\ref{LemmeCor14NumberW} we have
\begin{equation} \label{PropLdMprincipal_ie_e0_leq_m}
e_0 \leq e_{\phi}(V_{i_0},\idp) \leq m(I(V_{i_0},\idp)) \leq \nu(n+1)!\rho_{i_0}^{n+1},
\end{equation}
hence $\max(\mu,\nu_0)^{e_0}(e_0+1)\rho_{i_0}\leq \rho_{i_0+1} \leq \rho_{n+1}$ by definition of $\rho_{n+1}$.
Thus~(\ref{PropositionLdMprincipal_ie15}) and~(\ref{PropositionLdMprincipal_ie2}) lead to:
\begin{multline*}
\frac{2\rho_{n+1}}{\min(1,\mu)}\left(\nu_0\deg(\b{Z})\deg_{\b{z}}P + \nu_1\deg(\b{Z})\deg_{\ul{X}}P + \mu h(\b{Z})\deg_{\ul{X}}P\right)\\
<\rho_{n+1}\left(\nu_0\deg(\b{Z})\delta_0 + \nu_1\deg(\b{Z})\delta_1 + \mu h(\b{Z})\delta_1\right).
\end{multline*}
This inequality contradicts the minimality property of Definition~\ref{def_delta}; thus $\b{\phi}(\b{Q})$ vanishes on $\b{Z}_C(P)$, that is, $\b{\phi}(\b{Q})(\ull{\alpha})=0$ for every $\ull{\alpha}\in\b{Z}_C(P)$.
So we have
\begin{equation} \label{PropositionLdMprincipal_incl_e0p1}
{\phi}^{e_0+1}(V_{i_0}) \subset \idp,
\end{equation}
and this inclusion contradicts the definition of $e_0$ if $e_0 < e_{\phi}(V_{i_0},\idp)$. We conclude
$e_0=e_{\phi}(V_{i_0},\idp)$.
Moreover, (\ref{PropositionLdMprincipal_incl_e0p1}) implies
\begin{equation} \label{PropositionLdMprincipal_est_rg_rgP}
\rg\left((V_{i_0}+...+\phi^{e_0+1}(V_{i_0}))\AnneauDePolynomes_{\idp}\right)\leq\rg\left(\idp\AnneauDePolynomes_{\idp}\right)=n.
\end{equation}
Since $e_0+1>e_{\phi}(V_{i_0},\idp)$, by definition of $e_{\phi}(V_{i_0},\idp)$ we have
\begin{equation} \label{e0plus1Total}
\begin{split}
\rg\left((V_{i_0}+...+\phi^{e_0+1}(V_{i_0}))\AnneauDePolynomes_{\idp}\right) > \rg(V_{i_0}\AnneauDePolynomes_\idp) \geq i_0,
\end{split}
\end{equation}
and hence
\begin{equation*}
\rg(V_{i_0+1}\AnneauDePolynomes_\idp)\geq\rg\left((V_{i_0}+...+\phi^{e_0+1}(V_{i_0}))\AnneauDePolynomes_{\idp}\right)\geq i_0+1.
\end{equation*}
If $i_0<n$ this inequality contradicts the definition of $i_0$ (Definition~\ref{def_i0}), and if $i_0=n$ inequality~(\ref{e0plus1Total})
implies
\begin{equation*}
\rg\left((V_{i_0}+...+\phi^{e_0+1}(V_{i_0}))\AnneauDePolynomes_{\idp}\right) > n,
\end{equation*}
in contradiction with~(\ref{PropositionLdMprincipal_est_rg_rgP}).
So, we have verified that the hypothesis~(\ref{cnstCgrande}) cannot be satisfied, therefore establishing~(\ref{Cestsmall}).
It remains to verify~(\ref{estimationP}). We fix an arbitrary polynomial $P\in\A$ and consider the set $\mathcal{M}(P)$ of reals $C$ satisfying (with our choice of polynomial $P$) inequalities~(\ref{C_bornee_enP}) and~(\ref{C_minore_numu}).
If $\mathcal{M}(P)=\emptyset$, we have
\begin{equation*}
\left(\min(\nu_0,\mu)\right)^{-t}\geq\frac{\ordz (P \circ \ullt{f}) - (\deg_{\ul{X}}P) \ordz (\ull{f})-\deg_{\ul{X}'}P}{t\left((\nu_0+\mu)(\deg_{\ul{X}'}P+1)+\nu_1\deg_{\ul{X}}P\right)\mu^{t-1}(\deg_{\ul{X}}P+1)^{t}}
\end{equation*}
obtaining immediately~(\ref{estimationP}).
In the opposite case, if $\mathcal{M}(P)\ne\emptyset$, we let $C_s$ denote the supremum of $\mathcal{M}(P)$, which is a finite real number: in fact the inequality~(\ref{C_bornee_enP}) implies
\begin{equation*}
C_s=\frac{\ordz (P \circ \ullt{f}) - (\deg_{\ul{X}}P) \ordz (\ull{f})-\deg_{\ul{X}'}P}{t\left((\nu_0+\mu)(\deg_{\ul{X}'}P+1)+\nu_1\deg_{\ul{X}}P\right)\mu^{t-1}(\deg_{\ul{X}}P+1)^{t}}.
\end{equation*}
In the first part of the proof we have established the inequality~(\ref{Cestsmall}) for all the elements of $\mathcal{M}(P)$, therefore $C_s$ also satisfies this
inequality, hence~(\ref{estimationP}).
\end{proof}
Now we are ready to prove the main result of this paper, Theorem~\ref{LMGP}.
\begin{proof}[Proof of Theorem~\ref{LMGP}] We define
\begin{equation} \label{def_Ktwo}
K_2:=(n-r_{\ul{f}})\left(\deg_{(0,\dim\idp_{\ul{f}})}\idp_{\ul{f}}+\deg_{(1,\dim\idp_{\ul{f}}-1)}\idp_{\ul{f}}\right)
\left(1+\frac{\nu_1}{\max(\mu,\nu_0)}\right)\rho_n^{n}
\end{equation}
and
\begin{equation} \label{Cestgrande}
C=1+\max\Bigg(c_n^{t}C_0^{t}, \left(\min(\nu_0,\mu)\right)^{-t}, \Ciso,\left(\frac{3n!c_n}{\min(\nu_0,\mu)}K_0K_2\right)^{t},C_1\Bigg)
\end{equation}
(recall that $c_n$ is defined in~(\ref{def_cn}) and $C_0$, $C_1$ are introduced in the statement of Theorem~\ref{LMGP}); the constant $K_0$ is the one implied by the $\left(\phi,\cK_{n_1}\right)$-property (this property is imposed in the statement of Theorem~\ref{LMGP} as well).
Let $P \in \AnneauDePolynomes\setminus\idpf$ be a polynomial that does not satisfy~(\ref{LdMpolynome2}) for
\begin{equation}\label{preuve_theo_defK}
K=\max\left(2nC,\left(\frac{2\rho_{n+1}c_n}{\min(1,\lambda)\min(1,\mu)}\right)^t\right).
\end{equation}
Then it satisfies~(\ref{ordPplusqueb2}). In particular, $C$ and $P$ satisfy
hypotheses~(\ref{C_bornee_enP}) and~(\ref{C_minore_numu}) of Proposition~\ref{PropositionLdMprincipal}.
Define $\idp:=\I(\Z_C(P))$, where $\Z_C(P)$ is the cycle introduced in Remark~\ref{genZP}.
In view of~(\ref{LdTordZ}) and~(\ref{Cestgrande}), we have $\ord_{\ullt{f}}\idp > C_0$ and thus
$\phi$ is correct with respect to $\idp$. Moreover, $\Z_C(P)$ projects onto $\mpp^1$ (see Remark~\ref{rem_LdT_Z_nonisotrivial}).
Recall that $V_i=V_i(\Z_C(P))$ is introduced in Definition~\ref{V_irho_i} and
$e_{\phi}$, $m$ are introduced
in Definition~\ref{definDePP}: (\ref{definDePP_defin_e}) and (\ref{definDePP_defin_m}) respectively.
If for $i=i_0(\Z_C(P))$ we have
\begin{equation}\label{e_bornee}
e_{\phi}(V_i,\idp) \leq m(I(V_{i},\idp)),
\end{equation}
we verify~(\ref{majoration_e_par_m}) and we can apply Proposition~\ref{PropositionLdMprincipal}.
This proposition gives us~(\ref{LdMpolynome2}) in view of our choice of~$K$~\eqref{preuve_theo_defK}. This estimate contradicts our hypothesis that $P$ does not satisfy~(\ref{LdMpolynome2}).
On the other hand, if~(\ref{e_bornee}) is not satisfied, we apply Lemma~\ref{LemmeProp13} to the ideal $\idp$
and the $\kk$-linear space $V=V_i(\idp)$ (we recall the notation $i=i_0(\Z_C(P))$).
We denote by $J$ the equidimensional $\phi$-stable ideal provided by
Lemma~\ref{LemmeProp13}. In view of property~b) of this lemma we have
\begin{equation}\label{prove_LMGP_rkJ_eq_rkV_i}
\rg(J)=\rg\left(I\left(V_i,\idp\right)\right)\geq i\geq n_1.
\end{equation}
Property~a) ensures $\ul{f}\not\in\V(J)$, because the ideal $I(V_{i},\idp)$ contains at least one polynomial that does not vanish at $\ul{f}$.
We verify (in view of~(\ref{LemmeProp13_en_part}))
\begin{equation}
\begin{aligned}
& m(J) \leq m(I(V,\idp)),\\
& \dd_{(0,n-\rg J+1)}(J) \leq \dd_{(0,n-\rg J+1)}(I(V_i,\idp)).
\end{aligned}
\end{equation}
As $\V(\idp)=Z_C(P)$ is projected onto $\mpp^1$ we have by Lemma~\ref{LemmeCor14NumberW}
$$
m(I(V,\idp)) \leq C_m
$$
and also $\delta_1(\idp) \geq 1$.
Recall that $I(V_i,\idp)\subset\idp$ and thus
\begin{equation*}
r:=\rg I(V_i,\idp) \leq \rg\idp = n.
\end{equation*}
As the ideal~$I(V_i,\idp) \subset \idp$ is the extension--contraction of an ideal generated by polynomials of bi-degree
$\leq \left(\rho_i\left(\delta_0(\idp)+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1(\idp)\right), \rho_i\delta_1(\idp)\right)$ (see Definitions~\ref{def_I} and~\ref{V_irho_i}), we have by Lemma~\ref{lemma_BT}
\begin{multline} \label{est_basique1}
\dd_{(0, n-\rg I(V_i,\idp)+1)}(I(V_i,\idp))
\\
\leq(r-r_{\ul{f}})\left(\deg_{(1,\dim\idp_{\ul{f}}-1)}\idp_{\ul{f}}\right)\left(\delta_0(\idp)+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1(\idp)\right)
\\
\times
\delta_1(\idp)^{r-r_{\ul{f}}-1}\rho_i^{r-r_{\ul{f}}}
+\left(\deg_{(0,\dim\idp_{\ul{f}})}\idp_{\ul{f}}\right)\delta_1(\idp)^{r-r_{\ul{f}}}\rho_i^{r-r_{\ul{f}}}
\end{multline}
where, as above, $r=\rg I(V_i,\idp)$.
Let us temporarily denote by $R(\delta_0,\delta_1)$ the r.h.s. of~\eqref{est_basique1}.
Using~\eqref{prove_LMGP_rkJ_eq_rkV_i} we infer
\begin{equation} \label{estdegW}
\dd_{(0, n-\rg J+1)}J \leq R(\delta_0,\delta_1).
\end{equation}
As $J$ is an equidimensional ideal, we obtain for all $\idq\in\Ass(\AnneauDePolynomes/J)$
\begin{equation} \label{estdegQ}
\begin{aligned}
\dd_{(0, n-\rg\idq+1)}\idq&\leq\dd_{(0, n-\rg J+1)}J\\
&\leq R(\delta_0,\delta_1).
\end{aligned}
\end{equation}
The same calculation for $\dd_{(1, n-\rg\idq)}\idq$ gives us
\begin{equation} \label{estdeg1Q}
\begin{aligned}
\dd_{(1, n-\rg\idq)}\idq
\leq\rho_n^{n-r_{\ul{f}}}\left(\dd_{(1, n-\rg\idp_{\ul{f}})}\idp_{\ul{f}}\right)\left(\delta_1(\idp)+1\right)^{r-r_{\ul{f}}}.
\end{aligned}
\end{equation}
Summing up~\eqref{estdegQ} and~\eqref{estdeg1Q} we find, for every $\idq\in\Ass(\AnneauDePolynomes/J)$,
\begin{multline}
\dd_{(0, n-\rg\idq+1)}\idq+\dd_{(1, n-\rg\idq)}\idq
\\
\leq R(\delta_0,\delta_1)+\rho_n^{n-r_{\ul{f}}}\left(\dd_{(1, n-\rg\idp_{\ul{f}})}\idp_{\ul{f}}\right)\left(\delta_1(\idp)+1\right)^{r-r_{\ul{f}}}
\\
\leq (r-r_{\ul{f}})\left(\deg_{(1,\dim\idp_{\ul{f}}-1)}\idp_{\ul{f}}\right)
\\
\times
\left(\delta_0(\idp)+\frac{\nu_1}{\max(\mu,\nu_0)}\delta_1(\idp)\right)
\delta_1(\idp)^{r-r_{\ul{f}}-1}\rho_i^{r-r_{\ul{f}}}
\\
+\left(\deg_{(0,\dim\idp_{\ul{f}})}\idp_{\ul{f}}+\deg_{(1,\dim\idp_{\ul{f}}-1)}\idp_{\ul{f}}\right)\delta_1(\idp)^{r-r_{\ul{f}}}\rho_i^{r-r_{\ul{f}}}
\\
\leq K_2\left(\delta_0(\idp)+1\right)\left(\delta_1(\idp)+1\right)^{n-r_{\ul{f}}},
\end{multline}
where $K_2$ is defined in~\eqref{def_Ktwo}.
As $P$ and $C$ satisfy~(\ref{ordPplusqueb2}), by Theorem~\ref{dist_alpha} there exists a point $\ull{\alpha}\in\b{Z}_C(P)$
satisfying~(\ref{Cdirect}) with $\tilde{C} = \frac{C^{\frac{1}{t}}\min(\nu_0,\mu)}{3t!c_n}\geq K_0K_2$
(the last inequality is implied by the definition~(\ref{Cestgrande})),
and thus one has for all $\idq\in\Ass(\AnneauDePolynomes/J)$ (in view of Lemma~\ref{LemmeProp13}, point~\ref{LemmeProp13_c})
\begin{equation} \label{estordQ}
\ord_{\ull{f}}\idq \geq \ord(\ull{f},\ull{\alpha}) > K_0K_2(\delta_0(\idp)+1)(\delta_1(\idp)+1)^{t_{\ul{f}}}.
\end{equation}
We recall that $r_{\ul{f}}$ and $t_{\ul{f}}$ are introduced in Definition~\ref{def_tf} and satisfy
$$
t_{\ul{f}}+r_{\ul{f}}=n.
$$
The estimates~(\ref{estdegQ}), (\ref{estdeg1Q})
and~(\ref{estordQ}) put together (and verified for \emph{all} $\idq\in\Ass(\AnneauDePolynomes/J)$)
contradict the $\left(\phi,\cK_{n_1}\right)$-property assumed in the statement. So the assumption~(\ref{ordPplusqueb2})
with $C$ given by~(\ref{Cestgrande}) is untenable, and we deduce~(\ref{LdMpolynome2}) with our choice of~$K$.
This again contradicts our assumption that $P$ does not satisfy~(\ref{LdMpolynome2}).
Finally, we conclude that a polynomial~$P$ which does not satisfy~(\ref{LdMpolynome2}) cannot exist, and this completes the proof.
\end{proof}
\section{Applications}
Our Theorem~\ref{LMGP} extends the corresponding result from~\cite{EZ2010,EZ2011} to sets of functions $f_1(z),\dots,f_n(z)\in\kk[[z]]$ that possibly admit algebraic relations over $\kk(z)$. Having proved this generalization, we can immediately apply it to the case of Mahler's functions and to the case of solutions of a differential system (with polynomial relations). In all these cases the proofs can be transferred word for word from the case of algebraically independent functions, simply replacing the references to the formal multiplicity lemma with references to our Theorem~\ref{LMGP}.
In this section we only give statements of theorems and recall the corresponding frameworks. For proofs, we give references to the corresponding proofs in~\cite{EZ2010} and~\cite{EZ2011}.
\subsubsection*{Zero order estimates for functions satisfying functional equations of generalized Mahler's type}
Let $A_0,...,A_n$ be polynomials with coefficients in $\kk$ satisfying $\deg_z A_i\leq s$ and $\deg_{\ul{X}}A_i\leq q$. Let $p(z)$ be a rational function with $\delta=\ord_{z=0} p({z})$ and $d:=\deg p(z)$.
We consider the following system of functional equations
\begin{equation} \label{relsTopfer}
{f_i}({p(z)}) =
\frac{A_i(z, f_1(z),...,f_n(z))}{A_0(z, f_1(z),...,f_n(z))},\quad i=1,\dots,n.
\end{equation}
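A classical example, recalled here only as an illustration, is the series $f_1(\b{z})=\sum_{k\geq 0}\b{z}^{2^k}$, which satisfies
\begin{equation*}
f_1(\b{z}^2)=f_1(\b{z})-\b{z};
\end{equation*}
this is the system~(\ref{relsTopfer}) with $n=1$, $p(z)=z^2$, $A_0=1$ and $A_1(z,X_1)=X_1-z$.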
Let $\T$ be a rational map from $\mpp^1 \times \mpp^{n}$ to itself
defined by
\begin{multline} \label{defT2}
(X_0':X_1',X_0:...:X_n)\rightarrow\Big(A_0'(X_0',X_1'):A_1'(X_0',X_1'),\\
A_0(X_0',X_1',X_0,...,X_n):...:A_n(X_0',X_1',X_0,...,X_n)\Big),
\end{multline}
where $A_i'\in\kk[X_0',X_1']$, $i=0,1$, are homogeneous polynomials of degree $r$ in $\ul{X}'$ and $A_j\in\AnneauDePolynomes$, $j=0,...,n$,
are bi-homogeneous polynomials of bi-degree $(s,q)$ in $\ul{X}'$ and $\ul{X}$.
\begin{remark}\label{rem_Mutual_Association} We define $p(\b{z})=\frac{A_1'(1,\b{z})}{A_0'(1,\b{z})}$ and associate to every rational map $\T$ defined by~(\ref{defT2}) and such that $A_0$, $A_0'$
are non-zero polynomials,
a system of functional equations~(\ref{relsTopfer}):
\begin{equation} \label{relsTopfer2}
A_0(\ullt{f}(\b{z}))f_i(p(\b{z}))=A_i(\ullt{f}(\b{z})), \quad i=1,...,n
\end{equation}
(where $\ullt{f}$ denotes $(1,\b{z},1,f_1(\b{z}),...,f_n(\b{z}))$).
Conversely, starting from the system~(\ref{relsTopfer}), the
formulae~(\ref{defT2}) define a morphism $\T:\mpp^1 \times \mpp^{n}\rightarrow\mpp^1 \times \mpp^{n}$.
\end{remark}
\begin{definition} \label{def_Mutual_Association}
We say that the morphism defined by~(\ref{defT2}) and the system~(\ref{relsTopfer2}) are mutually associated.
\end{definition}
\begin{definition}\label{def_irrT}
For any morphism $\T$ defined by~(\ref{defT2}), we denote by $\irrT$ the union of the zero loci of the bi-homogeneous polynomial systems $A_i'(X_0',X_1',X_0,...,X_n)=0$, $i=0,1$, and $A_j(X_0',X_1',X_0,...,X_n)=0$, $j=0,...,n$. One has $\irrT \subset \mpp_{\kk}^1\times\mpp_{\kk}^n$, and this is the set of points where the bi-projective map $\T$ is not well-defined (if $\irrT=\emptyset$ the map $\T$ is a regular bi-projective map).
\end{definition}
\begin{remark} \label{rem_TV_simplified}
To simplify the notation we write $\T(W)$ instead of $\T(W\setminus\irrT)$.
\end{remark}
\begin{definition}\label{definVarieteTstable}
We say that a sub-variety $W\subset\mpp^1\times\mpp^n$
is $\T$-stable, if
\begin{equation*}
\ol{\T(W)}=W.
\end{equation*}
\end{definition}
\begin{remark} \label{varietestable_et_idealstable}
If a variety $W$ is $\T$-stable then the ideal $\I(W)$ is $\Talg$-stable, but the converse is not true. The condition $\Talg(\I(W))\subset\I(W)$ geometrically means only that $W$ is a sub-scheme of $\T^{-1}(W)$. However, if we impose that the variety $W$ is irreducible and $\dim \T(W)=\dim W$, then
\begin{equation}
\mbox{a variety $W$ is $\T$-stable} \Leftrightarrow \mbox{the ideal $\I(W)$ is $\Talg$-stable}.
\end{equation}
\end{remark}
\begin{theorem}\label{LMGPF} Let $\kk$ be a field of an arbitrary characteristic and
$\T: \mpp^1_{\kk}\times\mpp^n_{\kk} \rightarrow \mpp^1_{\kk}\times\mpp^n_{\kk}$ a dominant rational map defined as in~(\ref{defT2}),
by the homogeneous polynomials $A_i'$, $i=0,1$ in $\ul{X}'$ of degree $r$, and polynomials $A_i$, $i=0,\dots,n$ bi-homogeneous in $\ul{X}'$ and $\ul{X}$,
of bi-degree $(s,q)$.
Let $f_1(\b{z})$,...,$f_n(\b{z}) \in \kk[[\b{z}]]$
and $n_1\in\{1,\dots,n\}$, $C_1\in\mrr^+$.
We denote, as before, $\ul{f}=(1,\b{z},1,f_1(\b{z}),...,f_n(\b{z}))$.
Suppose moreover that there exists $\lambda \in \mrr_{>0}$, such that for all $Q\in\AnneauDePolynomes$
\begin{equation} \label{section_AB_IElambda}
\Ordz Q(\T(\ullt{f})) \geq \lambda \Ordz Q(\ullt{f}),
\end{equation}
and that there exists a
constant $K_0 \in \mrr^{+}$ (dependent on $\T$ and
$\ullt{f}$ only) such that for every positive integer
\begin{equation}\label{Nmajoration}
N\leq C_m
\end{equation}
(where the constant $C_m$ is introduced in Definition~\ref{def_Cm})
every irreducible ${\T}^N$-stable variety $W\varsubsetneq\mpp^1\times\mpp^n$ (defined over the field $\kk$) of dimension $\dim W\leq n-n_1+1$
satisfies necessarily
\begin{equation} \label{RelMinN}
\ord_{\bt{f}}(W) < K_0\left(\dd_{(0,\dim W)}W+\dd_{(1,\dim W-1)}W\right).
\end{equation}
Then there exists a constant $K_1>0$ such that every $P \in \AnneauDePolynomes\setminus\idp_{\ul{f}}$ satisfying, for all $C\geq C_1$,
\begin{equation}\label{condition_n1F}
i_0(\Z_C(P))\geq n_1,
\end{equation}
also satisfies
\begin{equation} \label{LdMpolynome}
\ordz(P(\ullt{f})) \leq K_1(\deg_{\ul{X'}}P + \deg_{\ul{X}}P + 1)(\deg_{\ul{X}} P + 1)^n.
\end{equation}
\end{theorem}
In the case of a linear system we can provide an unconditional result. The proof is the same as that of Theorem~3.1 in~\cite{EZ2011} (or Theorem~3.11 in~\cite{EZ2010}).
\begin{theorem} \label{theoNishioka}
Let $\kk$ be a field of an arbitrary characteristic and $\T:\mpp^1_{\kk}\times\mpp^n_{\kk}\rightarrow\mpp^1_{\kk}\times\mpp^n_{\kk}$ a map defined by~(\ref{defT2}) with the polynomials $A_i$ linear in $\ul{X}$. Assume that
\begin{equation}\label{theoNishioka_lambda_pgq2}
\lambda:=\ordz p(\b{z})\geq 2.
\end{equation}
Suppose that there is a solution $\ul{f}=(1,f_1(\b{z}),\dots,f_n(\b{z}))$ of the system of functional equations~(\ref{relsTopfer}) associated to $\T$. Denote by $t_{\ul{f}}$ the transcendence degree of $\ul{f}$ over $\kk(z)$ (see Definition~\ref{def_tf}).
Then there exists a constant $K_1$ such that for any polynomial $P\in\AnneauDePolynomes\setminus\idp_{\ul{f}}$ one has
\begin{equation*}
\ordz(P(\ullt{f})) \leq K_1(\deg_{\ul{X'}}P + \deg_{\ul{X}}P + 1)(\deg_{\ul{X}} P + 1)^{t_{\ul{f}}}.
\end{equation*}
\end{theorem}
\subsubsection*{Multiplicity estimates for solutions
of algebraic differential equations} \label{subsection_ApplicationsDifferential}
In this subsection we consider an $n$-tuple $\ull{f}=(f_1(\b{z}),\dots,f_n(\b{z}))$ of analytic functions (or, more generally, power series) satisfying a system of differential equations
\begin{equation} \label{syst_diff}
f_i'(\b{z})=\frac{A_i(\b{z},\ull{f})}{A_0(\b{z},\ull{f})}, \quad i=1,\dots,n,
\end{equation}
where $A_i(\b{z},X_1,\dots,X_n)\in\kk[\b{z},X_1,\dots,X_n]$ for $i=0,...,n$ (we suppose that $A_0$ is a non-zero polynomial).
We associate to the system~(\ref{syst_diff}) the following derivation
\begin{equation} \label{defD}
D = A_0(\b{z}, X_1,\dots, X_n)\diff{\b{z}} + \sum_{i=1}^nA_i(\b{z}, X_1,\dots, X_n)\diff{X_i}.
\end{equation}
This operator is a map $D:\kk[\b{z},X_1,\dots,X_n]\rightarrow\kk[\b{z},X_1,\dots,X_n]$. We also consider $D$ as acting on $\AnneauDePolynomes=\kk[X_0',X_1'][X_1,\dots,X_n]$, defining
\begin{equation} \label{defhD}
D = {^h\!A}_0(X_0',X_1', X_1,\dots, X_n)\diff{X_1'}+\sum_{i=1}^n{^h
\!A_i}(X_0',X_1', X_1,\dots, X_n)\diff{X_i},
\end{equation}
where $^h\!P$ denotes the bi-homogenization of the polynomial $P\in
\kk[\b{z},X_1,\dots,X_n]$:
\begin{equation*}
^h\!P(X_0',X_1', X_1,\dots, X_n):=X_0'^{\deg_{\b{z}}P}\cdot
X_0^{\deg_{\ul{X}}P}\cdot P\left(\frac{X_1'}{X_0'},\frac{X_1}{X_0},
\dots,\frac{X_n}{X_0}\right).
\end{equation*}
One readily verifies~$D({^h\!P})={^h\!\left(D(P)
\right)}$, so the map $D:\AnneauDePolynomes\rightarrow
\AnneauDePolynomes$ is exactly the ``bi-homogenization'' of $D:
\kk[\b{z},X_1,\dots,X_n]\rightarrow\kk[\b{z},X_1,\dots,X_n]$.
The map $D$ is a correct map with respect to any
ideal $\idp \subset \AnneauDePolynomes$, according to Corollary~2.11 of~\cite{EZ2011} (or also Corollary~2.10 of~\cite{EZ2010}).
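As a classical illustration (recalled here only as an example, in the original setting of Nesterenko's theorem below), Ramanujan's system for the Eisenstein series $E_2,E_4,E_6$ reads, with $\theta=\b{z}\frac{d}{d\b{z}}$,
\begin{equation*}
\theta E_2=\frac{E_2^2-E_4}{12},\qquad \theta E_4=\frac{E_2E_4-E_6}{3},\qquad \theta E_6=\frac{E_2E_6-E_4^2}{2}.
\end{equation*}
Clearing denominators, this is a system of the form~(\ref{syst_diff}) with $A_0=12\b{z}$, and (writing $X_1,X_2,X_3$ for $E_2,E_4,E_6$) the associated derivation~(\ref{defD}) is
\begin{equation*}
D=12\b{z}\diff{\b{z}}+(X_1^2-X_2)\diff{X_1}+4(X_1X_2-X_3)\diff{X_2}+6(X_1X_3-X_2^2)\diff{X_3}.
\end{equation*}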
We now deduce from Theorem~\ref{LMGP} an improvement of the following famous theorem of Nesterenko (proved in~\cite{N1996}):
\begin{theorem}[Nesterenko, see Theorem~1.1 of Chapter~10, \cite{NP}] \label{theoNesterenko_classique}
Suppose that functions
\begin{equation*}
\ull{f} = (f_1(\b{z}),\dots,f_n(\b{z})) \in \mcc[[\b{z}]]^n
\end{equation*}
are analytic at the point $\b{z}=0$ and form a solution of the system~(\ref{syst_diff}) with $\kk=\mcc$.
If there exists a constant $K_0$ such that every $D$-stable prime ideal $\idp \subset \mcc[X_1',X_1,\dots,X_n]$,
$\idp\ne(0)$, satisfies
\begin{equation} \label{ordIleqKdegI}
\min_{P \in \idp}\ordz P(\b{z},\ull{f}) \leq K_0,
\end{equation}
then there exists a constant $K_1>0$ such that for any polynomial $P \in
\mcc[X_1',X_1,\dots,X_n]$, $P\ne 0$, the following inequality holds
\begin{equation}
\ordz(P(\b{z},\ull{f})) \leq K_1(\deg_{\ul{X}'} P + 1)(\deg_{\ul{X}} P + 1)^n.
\end{equation}
\end{theorem}
\begin{remark}
Condition~\eqref{ordIleqKdegI} is the \emph{$D$-property}~\cite{N1996}.
Assuming $A_0(0,\ull{f}(0))\ne 0$ in the system~(\ref{syst_diff}), it is easy to verify the condition~(\ref{ordIleqKdegI}), cf.~\cite{NP}, Chapter~10, Example~1 (p.~150).
Also, the condition~(\ref{ordIleqKdegI}) is established in the case when the polynomials $A_i$, $i=0,\dots,n$, are of degree 1 in $X_1,\dots,X_n$, cf.~\cite{N1974}.
In the latter case the proof is based on differential Galois theory.
\end{remark}
In what follows we denote by $\cK_{prime}$ the class of prime ideals of $\A$ and by $\cK_{primary}$ the class of primary ideals of $\A$. Using Theorem~\ref{LMGP} we can replace~(\ref{ordIleqKdegI}) in Theorem~\ref{theoNesterenko_classique} by
a weaker assumption, namely a $\left(D,\cK_{prime}\right)$-property (see Definition~\ref{def_weakDproperty}). At the same time we provide a result valid in arbitrary characteristic.
\begin{theorem} \label{LMGPD}
Let
$(f_1(\b{z}),\dots,f_n(\b{z})) \in \kk[[\b{z}]]^n$
be a set of formal power series forming a solution of the system~(\ref{syst_diff}).
We assume that $\ul{f}=(1:z,1:f_1(\b{z}):\dots:f_n(\b{z}))$ satisfies the $\left(D,\cK_{primary}\right)$-property, and if $\car\kk=0$ we assume only that $\ul{f}$ satisfies the $\left(D,\cK_{prime}\right)$-property.
Under these conditions there is a constant $K>0$ such that every $P \in
\AnneauDePolynomes \setminus \idp_{\ul{f}}$
satisfies
\begin{equation}
\ordz(P(\b{z},\ull{f})) \leq K(\deg_{\ul{X'}} P + 1)(\deg_{\ul{X}} P + 1)^{t_{\ul{f}}}.
\end{equation}
\end{theorem}
\begin{center}
{\bfseries Acknowledgement\vspace{-.5em}}
\end{center}
\thanks{
The author would like to express his profound gratitude to Patrice \textsc{Philippon}. His interventions at many stages of this research were of decisive importance.}
\section{Introduction}
\label{sec:introduction}
Let $K : \reals^d \times \reals^d \to \reals$ be a kernel function, such as a Gaussian kernel; $K(p,q)$ describes how similar two points $p,q \in \reals^d$ are. For point sets $\c{P}, \c{Q}$ we can define a similarity function $\kappa(\c{P},\c{Q}) = \sum_{p \in \c{P}} \sum_{q \in \c{Q}} K(p,q)$. Then the \emph{kernel distance} is defined as
\begin{equation}
D_K(\c{P},\c{Q}) = \sqrt{\kappa(\c{P},\c{P}) + \kappa(\c{Q},\c{Q}) - 2 \kappa(\c{P},\c{Q})}.
\label{eq:CD-def}
\end{equation}
By altering the kernel $K$, and the weighting of elements in $\kappa$, the kernel distance can capture distance between distributions, curves, surfaces, and even more general objects.
\paragraph{Motivation.}
The earthmover distance (EMD) takes a metric space and two probability distributions over the space, and computes the amount of work needed to "transport" mass from one distribution to another. It has become a metric of choice in computer vision, where images are represented as intensity distributions over a Euclidean grid of pixels. It has also been applied to shape comparison~\cite{knauer}, where shapes are represented by point clouds (discrete distributions) in space.
While the EMD is a popular way of comparing distributions over a metric space, it is also an expensive one. Computing the EMD requires solving an optimal transportation problem via the Hungarian algorithm, and while approximations exist for restricted cases like the Euclidean plane, they are still expensive, requiring either quadratic time in the size of the input or achieving at best a constant factor approximation when taking linear time~\cite{SJ08, ADIW09}. Further, it is hard to index structures using the EMD for performing near-neighbor, clustering and other data analysis operations. Indeed, there are lower bounds on our ability to embed the EMD into well-behaved normed spaces~\cite{DBLP:conf/soda/AndoniIK08}.
The kernel distance has thus become an effective alternative to comparing distributions on a metric space. In machine learning, the kernel distance has been used to build metrics on distributions~\cite{Suq95,HB05,hilbert,smola,muller97} and to learn hidden Markov models~\cite{SBSGS10}. In the realm of shape analysis~\cite{Vaillant2005,glaunesthesis,GlaunesJoshi:MFCA:06,DBLP:conf/miccai/DurrlemanPTA08}, the kernel distance (referred to there as the \emph{current distance}) has been used to compare shapes, whether they be point sets, curves, or surfaces.
All of these methods utilize key structural features of the kernel distance. When constructed using a \emph{positive definite}\footnote{A positive definite function generalizes the idea of a positive definite matrix; see Section~\ref{sec:definitions}.} similarity function $K$, the kernel distance can be interpreted through a lifting map $\phi : \reals^d \to \c{H}$ to a reproducing kernel Hilbert space (RKHS), $\c{H}$. This lifting map $\phi$ is isometric; the kernel distance is precisely the distance induced by the Hilbert space ($D_K(\{p\},\{q\}) = \|\phi(p) - \phi(q)\|_{\c{H}}$).
Furthermore, a point set $\c{P}$ has an isometric lifted representation $\Phi(\c{P}) = \sum_{p \in \c{P}} \phi(p)$ as a single vector in $\c{H}$ so $D_K(\c{P}, \c{Q}) = \|\Phi(\c{P}) - \Phi(\c{Q})\|_\c{H}$.
Moreover, by choosing an appropriately scaled basis, this becomes a simple $\ell_2$ distance, so all algorithmic tools developed for comparing points and point sets under $\ell_2$ can now be applied to distributions and shapes.
Dealing with uncertain data provides another reason to study the kernel distance. Rather than thinking of $K(\cdot, \cdot)$ as a similarity function, we can think of it as a way of capturing spatial uncertainty; $K(p,q)$ is the likelihood that the object claimed to be at $p$ is actually at $q$. For example, setting $K(p,q) = \exp( - \|p-q\|^2/\sigma)/(\sqrt{2\pi} \sigma)$ gives us a Gaussian blur function.
In such settings, the kernel distance $D^2_K(\c{P},\c{Q})$ computes the symmetric difference $|\c{P} \triangle \c{Q}|$ between shapes with uncertainty described by $K$.
\paragraph{Our work.}
We present the first algorithmic analysis of the kernel distance. Our
main contributions are as follows:
\begin{inparaenum}[(i)]
\item We present fast approximation algorithms for computing the kernel distance between two point sets $\c{P}$ and $\c{Q}$ that runs in near-linear time in the size of $\c{P} \cup \c{Q}$ (note that an explicit calculation would take quadratic time).
\item We present polynomial-time algorithms for approximately minimizing the kernel distance under rigid transformation; they run in time $O(n + \text{poly}(1/\varepsilon, \log n))$.
\item We provide several general techniques for reducing complex objects to convenient sparse representations (specifically to point sets or sets of points sets) which approximately preserve the kernel distance. In particular, this allows us to reduce problems of computing the kernel distance between various types of objects such as curves, surfaces, and distributions to computing the kernel distance between point sets.
\end{inparaenum}
We build these results from two core technical tools.
The first is a lifting map that maps objects into a finite-dimensional Euclidean space while approximately preserving the kernel distance. We believe that the analysis of lifting maps is of independent interest; indeed, these methods are popular in machine learning~\cite{hilbert,smola,SBSGS10} but (in the realm of kernels) have received less attention in algorithms.
Our second technical tool is a theorem relating $\varepsilon$-samples of range spaces defined with kernels to standard $\varepsilon$-samples of range spaces on $\{0,1\}$-valued functions. This gives a simpler algorithm than prior methods in learning theory that make use of the $\gamma$-fat shattering dimension, and yields smaller $\varepsilon$-samples.
\section{Preliminaries}
\paragraph{Definitions.}
\label{sec:definitions}
For the most general case of the kernel distance (that we will consider in this paper) we associate a unit vector $U(p)$ and a weighting $\mu(p)$ with every $p \in \c{P}$. Similarly we associate a unit vector $V(p)$ and weighting $\nu(q)$ with every $q \in \c{Q}$. Then we write
\begin{equation}
\kappa(\c{P},\c{Q}) = \int_{p \in \c{P}} \int_{q \in \c{Q}} K(p,q) \IP{U(p)}{V(q)} \, d\mu(p) d\nu(q),
\label{eq:kappa}
\end{equation}
where $\langle \cdot, \cdot \rangle$ is the Euclidean inner product. This becomes a distance $D_K$, defined through (\ref{eq:CD-def}).
When $\c{P}$ is a curve in $\b{R}^d$ we let $U(p)$ be the tangent vector at $p$ and $\mu(p) = 1$. When $\c{P}$ is a surface in $\b{R}^3$ we let $U(p)$ be the normal vector at $p$ and $\mu(p)=1$. This can be generalized to higher order surfaces through the machinery of $k$-forms and $k$-vectors~\cite{Vaillant2005,GI}.
When $\c{P}$ is an arbitrary probability measure\footnote{We avoid the use of the term 'probability distribution' as this conflicts with the notion of a (Schwarz) distribution that itself plays an important role in the underlying theory.} in $\b{R}^d$, then all $U(p)$ are identical unit vectors and $\mu(p)$ is the probability of $p$.
For discrete probability measures, described by a point set, we replace the integral with a sum and $\mu(p)$ can be used as a weight $\kappa(\c{P},\c{Q}) = \sum_{p \in \c{P}} \sum_{q \in \c{Q}} K(p,q) \mu(p) \nu(q)$.
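For concreteness, the discrete quantities above can be computed directly from the definitions by an $O(|\c{P}|\cdot|\c{Q}|)$ double sum; the following Python sketch (with hypothetical array inputs; the algorithms of the next sections replace this quadratic computation) illustrates the calculation for the Gaussian kernel.
\begin{verbatim}
import numpy as np

def gaussian_kernel(p, q, h):
    # K(p,q) = exp(-||p-q||^2 / h)
    return np.exp(-np.sum((p - q) ** 2) / h)

def kappa(P, mu, Q, nu, h):
    # kappa(P,Q) = sum_p sum_q mu(p) nu(q) K(p,q); brute force
    return sum(mu[i] * nu[j] * gaussian_kernel(P[i], Q[j], h)
               for i in range(len(P)) for j in range(len(Q)))

def kernel_distance(P, mu, Q, nu, h):
    # D_K^2 = kappa(P,P) + kappa(Q,Q) - 2 kappa(P,Q)
    d2 = (kappa(P, mu, P, mu, h) + kappa(Q, nu, Q, nu, h)
          - 2.0 * kappa(P, mu, Q, nu, h))
    return np.sqrt(max(d2, 0.0))  # clamp roundoff negatives
\end{verbatim}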
\paragraph{From distances to metrics.}
When $K$ is a symmetric similarity function (i.e. $K(p,p) = \max_{q \in \b{R}^d} K(p,q)$, \, $K(p,q) = K(q,p)$, and $K(p,q)$ decreases as $p$ and $q$ become ``less similar'') then $D_K$ (defined through (\ref{eq:kappa}) and (\ref{eq:CD-def})) is a distance function, but may not be a metric.
However, when $K$ is positive definite, then this is sufficient for $D_K$ to not only be a metric\footnote{Technically this is not completely correct; there are a few special cases, as we will see, where it is a pseudometric~\cite{hilbert}.}, but also for $D^2_K$ to be of negative type~\cite{deza}.
We say that a symmetric function $K : \reals^d \times \reals^d \rightarrow \reals$ is a \emph{symmetric positive definite kernel} if for any nonzero $L_2$ function $f$ it satisfies
$
\int_{p \in \reals^d} \int_{q \in \reals^d} f(q) K(p, q) f(p) \; dp \, dq > 0.
$
The proof of $D_K$ being a metric follows by considering the reproducing kernel Hilbert space $\c{H}$ associated with such a $K$~\cite{Aronszajn1950}. Moreover, $D_K$ can be expressed very compactly in this space. The lifting map $\phi : \reals^d \to \c{H}$ associated with $K$ has the ``reproducing property'' $K(p,q) = \langle \phi(p), \phi(q) \rangle_\c{H}$.
So by linearity of the inner product, $\Phi(\c{P}) = \int_{p \in \c{P}} \phi(p) \, d\mu(p)$ can be used to retrieve $D_K(\c{P}, \c{Q}) = \| \Phi(\c{P}) - \Phi(\c{Q})\|_\c{H}$ using the induced norm $\| \cdot \|_{\c{H}}$ of $\c{H}$. Observe that this defines a norm $\|\Phi(\c{P})\|_{\c{H}} = \sqrt{\kappa(\c{P},\c{P})}$ for a shape.
\paragraph{Examples.}
If $K$ is the ``trivial'' kernel, where $K(p,p) = 1$ and $K(p,q) = 0$ for $p \ne q$, then the distance between any two sets (without multiplicity) $\c{P}, \c{Q}$ is $D^2_K(\c{P}, \c{Q}) = |\c{P} \Delta \c{Q}|$, where $\c{P} \Delta \c{Q} = \c{P} \cup \c{Q} \setminus (\c{P} \cap \c{Q})$ is the symmetric difference. In general for arbitrary probability measures, $D_K(\c{P}, \c{Q}) = \|\mu - \nu\|_2$.
If $K(p,q) = \langle p,q \rangle$, the Euclidean dot product, then the resulting lifting map $\phi$ is the identity function, and the distance between two measures is the Euclidean distance between their means, which is a pseudometric but not a metric.
\paragraph{Gaussian properties.}
To simplify much of the presentation of this paper we focus on the case where the kernel $K$ is the Gaussian kernel; that is $K(p,q) = e^{-||p-q||^2/h}$. Our techniques carry over to more general kernels, although the specific bounds will depend on the kernel being used.
We now encapsulate some useful properties of Gaussian kernels in the following lemmata.
When approximating $K(p,q)$, the first allows us to ignore pairs of points further that $\sqrt{h \ln(1/\gamma)}$ apart, the second allows us to approximate the kernel on a grid.
\begin{lemma}[Bounded Tails]
If $||p-q|| > \sqrt{h \ln (1/\gamma)}$ then $K(p,q) < \gamma$.
\label{lem:G-dist}
\end{lemma}
\begin{lemma}[Lipschitz]
\label{lem:grid-eps}
For $\delta \in \b{R}^d$ where $\|\delta\| < \varepsilon$, for points $p,q \in \b{R}^d$ we have $|K(p,q) - K(p,q+\delta)| \leq \varepsilon/\sqrt{h}$.
\end{lemma}
\begin{proof}
The slope of $\psi(x) = e^{-x^2/h}$ is $\psi'(x) = -(2/h)xe^{-x^2/h}$. The magnitude $|\psi'(x)|$ is maximized when $x = \sqrt{h/2}$, which yields $|\psi'(\sqrt{h/2})| = \sqrt{2/(he)} < 1/\sqrt{h}$. Thus $|\psi(x) - \psi(x+\varepsilon)| < \varepsilon/\sqrt{h}$. And since translating by $\delta$ changes $\|p-q\|$ by at most $\varepsilon$, the lemma holds.
\end{proof}
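These two bounds are easy to check numerically; the following Python sketch (with arbitrary test values, assuming nothing beyond the two lemmata) verifies both on random inputs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
h = 2.0
K = lambda p, q: np.exp(-np.sum((p - q) ** 2) / h)

# Bounded Tails: ||p-q|| > sqrt(h ln(1/gamma))  =>  K(p,q) < gamma
gamma = 0.01
p = np.zeros(3)
q = np.array([np.sqrt(h * np.log(1 / gamma)) + 1e-9, 0.0, 0.0])
assert K(p, q) < gamma

# Lipschitz: ||delta|| < eps  =>  |K(p,q) - K(p,q+delta)| <= eps/sqrt(h)
eps = 0.05
for _ in range(1000):
    p, q = rng.normal(size=3), rng.normal(size=3)
    delta = rng.normal(size=3)
    delta *= (eps * rng.random()) / np.linalg.norm(delta)
    assert abs(K(p, q) - K(p, q + delta)) <= eps / np.sqrt(h)
\end{verbatim}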
\subsection{Problem Transformations}
In prior research employing the kernel distance, \emph{ad hoc} discretizations are used to convert the input objects (whether they be distributions, point clouds, curves or surfaces) to weighted discrete point sets. This process introduces error in the distance computations that usually go unaccounted for. In this subsection, we provide algorithms and analysis for \emph{rigorously} discretizing input objects with guarantees on the resulting error. These algorithms, as a side benefit, provide a formal error-preserving reduction from kernel distance computation on curves and surfaces to the corresponding computations on discrete point sets.
After this section, we will assume that all data sets considered $\c{P}, \c{Q}$ are discrete point sets of size $n$ with weight functions $\mu : \c{P} \to \reals$ and $\nu : \c{Q} \to \reals$.
The weights need not sum to $1$ (i.e. need not be \emph{probability} measures), nor be the same for $\c{P}$ and $\c{Q}$; we will set $W = \max(\sum_{p \in \c{P}} \mu(p), \sum_{q \in \c{Q}} \nu(q))$ to denote the total measure. This implies (since $K(p,p)=1$) that $\kappa(\c{P},\c{P}) \leq W^2$.
All our algorithms will provide approximations of $\kappa(\c{P},\c{Q})$ within additive error $\varepsilon W^2$. Since without loss of generality we can always normalize so that $W = 1$, our algorithms all provide an additive error of $\varepsilon$.
We also set $\Delta = (1/\sqrt{h})\max_{u,v \in \c{P} \cup \c{Q}} \|u-v\|$ to capture the normalized diameter of the data.
\paragraph{Reducing orientation to weights.}
\label{sec:from-2-surfaces}
The kernel distance between two oriented curves or surfaces can be reduced to a set of distance computations on appropriately weighted point sets.
We illustrate this in the case of surfaces in $\reals^3$. The same construction will also work for curves in $\reals^d$.
For each point $p \in \c{P}$ we can decompose $U(p) \triangleq (U_1(p), U_2(p), U_3(p))$ into three fixed orthogonal components such as the coordinate axes $\{e_1, e_2, e_3\}$. Now
\vspace{-.1in}
\begin{eqnarray*}
\kappa(\c{P},\c{Q})
= &
\displaystyle{\int_{p \in \c{P}} \int_{q \in \c{Q}} K(p,q) \IP{U(p)}{V(q)} \, d\mu(p) d\nu(q)}
&=
\int_{p \in \c{P}} \int_{q \in \c{Q}} K(p,q) \sum_{i=1}^3 (U_i(p) V_i(q)) \, d\mu(p) d\nu(q)
\\ =&
\displaystyle{\sum_{i=1}^3 \int_{p \in \c{P}} \int_{q \in \c{Q}} K(p,q) (U_i(p) V_i(q)) \, d\mu(p) d\nu(q)}
&=
\sum_{i=1}^3 \kappa(\c{P}_i, \c{Q}_i),
\end{eqnarray*}
where each $p \in \c{P}_i$ has measure $\mu_i(p) = \mu(p) \|U_i(p)\|$.
When the problem specifies $U$ as a unit vector in $\b{R}^d$, this approach reduces to $d$ independent problems without unit vectors.
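A minimal sketch of this reduction for finite point sets (in Python; the function name and array layout are ours for illustration, and the signs of the components are kept in the weights):
\begin{verbatim}
import numpy as np

def split_by_orientation(P, U, mu):
    # P: (n,d) points, U: (n,d) unit vectors, mu: (n,) weights.
    # Returns d weighted point sets; the i-th reuses the points P
    # with signed weights mu(p) * U_i(p).
    d = P.shape[1]
    return [(P, mu * U[:, i]) for i in range(d)]
\end{verbatim}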
\paragraph{Reducing continuous to discrete.}
\label{sec:surface kernel distance}
We now present two simple techniques (gridding and sampling) to reduce a continuous $\c{P}$ to a discrete point set, incurring at most $\varepsilon W^2$ error.
We construct a grid $G_\varepsilon$ (of size $O((\Delta/\varepsilon)^d)$) on a smooth shape $\c{P}$, so no point\footnote{For distributions with decaying but infinite tails, we can truncate to ignore tails such that the integral of the ignored area is at most $(\varepsilon/2)W^2$ and proceed with this approach using $\varepsilon/2$ instead of $\varepsilon$.} $p \in \c{P}$ is further than $\varepsilon \sqrt{h}$ from a point $g \in G_\varepsilon$.
Let $P_g$ be all points in $\c{P}$ closer to $g$ than any other point in $G_\varepsilon$.
Each point $g$ is assigned a weight $\mu(g) = \int_{p \in P_g} 1 \, d\mu(p)$.
The correctness of this technique follows by Lemma \ref{lem:grid-eps}.
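A minimal sketch of this gridding step for a finite weighted point set (in Python; the cell side is chosen so that snapping moves no point farther than $\varepsilon\sqrt{h}$, and smooth shapes would instead be integrated cell by cell):
\begin{verbatim}
import numpy as np

def grid_discretize(P, mu, h, eps):
    # Snap each point to the nearest vertex of a cubic grid with
    # cell side eps*sqrt(h)/sqrt(d); the half-diagonal is then at
    # most eps*sqrt(h)/2, so no point moves farther than eps*sqrt(h).
    # Weights of points snapping to the same vertex accumulate.
    d = P.shape[1]
    side = eps * np.sqrt(h) / np.sqrt(d)
    acc = {}
    for key, w in zip(map(tuple, np.round(P / side).astype(np.int64)), mu):
        acc[key] = acc.get(key, 0.0) + w
    G = np.array([np.array(k) * side for k in acc])
    weights = np.array(list(acc.values()))
    return G, weights
\end{verbatim}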
Alternatively, we can sample $n = O((1/\varepsilon^2) (d + \log(1/\delta)))$ points at random from $\c{P}$.
If we have not yet reduced the orientation information to weights, we can generate $d$ points each with weight $U_i(p)$.
This works with probability at least $1-\delta$ by invoking a coreset technique summarized in Theorem \ref{thm:random-coreset}.
For the remainder of the paper, we assume our input dataset $\c{P}$ is a weighted point set in $\b{R}^d$ of size $n$.
\section{Computing the Kernel Distance I: WSPDs}
\label{sec:WSPD}
The well-separated pair decomposition (WSPD)~\cite{CK95,HP} is a standard data structure to approximately compute pairwise sums of distances in near-linear time. A consequence of Lemma~\ref{lem:grid-eps} is that we can upper bound the error of estimating $K(p,q)$ by a nearby pair $K(\tilde p,\tilde q)$. Putting these observations together yields (with some work) an approximation for the kernel distance.
Since $D_K^2(\c{P},\c{Q}) = \kappa(\c{P},\c{P}) + \kappa(\c{Q},\c{Q}) - 2\kappa(\c{P},\c{Q})$, the problem reduces to computing $\kappa(\c{P},\c{Q})$ efficiently and with an error of at most $(\varepsilon/4)W^2$.
Two sets $A$ and $B$ are said to be \emph{$\alpha$-separated}~\cite{CK95} if
$
\max\{\diam{A},\diam{B}\} \leq \alpha \min_{a \in A, b \in B} ||a-b||
$. Let $A \otimes B = \{ \{x,y\} \mid x\in A, y \in B\}$ denote the set of all unordered pairs of elements formed by $A$ and $B$.
An \emph{$\alpha$-WSPD} of a point set $P$ is a set of pairs $\c{W} = \left\{ \{A_1, B_1\}, \ldots, \{A_s, B_s\}\right\}$ such that
\begin{itemize} \vspace{-.1in} \itemsep -2pt\parsep=-1pt\partopsep -2pt
\item[(i)] $A_i, B_i \subset P$ for all $i$,
\item[(ii)] $A_i \cap B_i = \emptyset$ for all $i$,
\item[(iii)] disjointly $\bigcup_{i=1}^s A_i \otimes B_i = P \otimes P$, and
\item[(iv)] $A_i$ and $B_i$ are $\alpha$-separated for all $i$.
\end{itemize}
For a point set $P \subset \b{R}^d$ of size $n$, we can construct an $\alpha$-WSPD of size $O(n/\alpha^d)$ in time $O(n \log n + n/\alpha^d)$~\cite{HP,Cla83}.
We can use the WSPD construction to compute $D_K^2(\c{P},\c{Q})$ as follows. We first create an $\alpha$-WSPD of $\c{P} \cup \c{Q}$ in $O(n \log n + n/\alpha^d)$ time. Then for each pair $\{A_i, B_i\}$ we also store four sets $A_{i,\c{P}} = \c{P} \cap A_i$, $A_{i,\c{Q}} = \c{Q} \cap A_i$, $B_{i,\c{P}} = \c{P} \cap B_i$, and $B_{i,\c{Q}} = \c{Q} \cap B_i$. Let $a_i \in A_i$ and $b_i \in B_i$ be arbitrary elements, and let $D_i = \|a_i - b_i\|$. By construction, $D_i$ approximates the distance between any pair of elements in $A_i \times B_i$ with error at most $2\alpha D_i$.
In each pair $\{A_i, B_i\}$, we can compute the weight of the edges from $\c{P}$ to $\c{Q}$:
\[
W_i = \bigg(\sum_{p \in A_{i,\c{P}}}\mu(p)\bigg) \bigg(\sum_{q \in B_{i,\c{Q}}}\nu(q)\bigg) +
\bigg(\sum_{q \in A_{i,\c{Q}}} \nu(q) \bigg) \bigg(\sum_{p \in B_{i,\c{P}}} \mu(p)\bigg).
\]
We estimate the contribution of the edges in pair $(A_i, B_i)$ to $\kappa(\c{P}, \c{Q})$ as
\[
\sum_{(a,b) \in A_{i,P} \times B_{i,Q}} \mu(a) \nu(b) e^{-D_i^2/h} + \sum_{(a,b) \in A_{i,Q} \times B_{i,P}} \mu(b) \nu(a) e^{-D_i^2/h}
=
W_i e^{-D_i^2 /h}.
\]
Since $D_i$ has error at most $2\alpha D_i$ for each pair of points, Lemma \ref{lem:grid-eps} bounds the error as at most $W_i (2\alpha D_i/\sqrt{h})$.
In order to bound the total error to $(\varepsilon/4) W^2$, we bound the error for each pair by $(\varepsilon/4) W_i$ since $\sum_i W_i = \sum_{p \in P} \sum_{q \in Q} \mu(p) \nu(q) = W^2$. By Lemma \ref{lem:G-dist}, if $D_i > \sqrt{h \ln(1/\gamma)}$, then $e^{- D_i^2/h} < \gamma$. So (for $\alpha < 1/2$) we can ignore any pair with $D_i > 2\sqrt{h \ln(4/\varepsilon)}$: such a pair contributes at most $(\varepsilon/4) W_i$ to $\kappa(\c{P}, \c{Q})$, and thus ignoring it incurs error at most $(\varepsilon/4) W_i$.
Since we can ignore pairs with $D_i > 2 \sqrt{h \ln(4/\varepsilon)}$, each remaining pair has error at most $W_i \cdot 2\alpha \left(2 \sqrt{h \ln(4/\varepsilon)}/\sqrt{h}\right) = W_i \left(4\alpha \sqrt{\ln(4/\varepsilon)}\right)$. Setting this equal to $(\varepsilon/4)W_i$ and solving for $\alpha$ yields $\alpha < (\varepsilon/4)/\sqrt{\ln(4/\varepsilon)}$. This ensures that each pair with $D_i \leq 2 \sqrt{h \ln(4/\varepsilon)}$ has error less than $(\varepsilon/4)W_i$, as desired.
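For concreteness, a Python sketch of the resulting estimator; it assumes the WSPD has already been built and that each pair is summarized by the hypothetical tuple $(D_i, W_i)$:
\begin{verbatim}
import numpy as np

def wspd_kappa_estimate(pairs, h, eps):
    """Estimate kappa(P, Q) from an alpha-WSPD given as a list of
    (D_i, W_i) tuples: the representative distance and the P-to-Q
    cross weight of each well-separated pair."""
    cutoff = 2.0 * np.sqrt(h * np.log(4.0 / eps))
    total = 0.0
    for D, W in pairs:
        if D <= cutoff:      # far pairs contribute < (eps/4) W_i
            total += W * np.exp(-D * D / h)
    return total
\end{verbatim}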
\begin{theorem}
By building and using an $((\varepsilon/4)/\sqrt{\ln(4/\varepsilon)})$-WSPD, we can compute a value $U$ in time $O(n \log n + (n/\varepsilon^d) \log^{d/2}(1/\varepsilon))$, such that
\[
\left| U - D_K^2(\c{P},\c{Q}) \right| \leq \varepsilon W^2.
\]
\label{thm:WSPD}
\end{theorem}
\section{Computing the Kernel Distance II: Approximate Feature Maps}
In this section, we describe (approximate) feature representations $\Phi(\c{P}) = \sum_{p \in \c{P}} \phi(p) \mu(p)$ for shapes and distributions that reduce the kernel distance computation to an $\ell_2$ distance calculation $\|\Phi(\c{P}) - \Phi(\c{Q}) \|_\c{H}$ in an RKHS, $\c{H}$. This mapping immediately yields algorithms for a host of analysis problems on shapes and distributions, by simply applying Euclidean space algorithms to the resulting feature vectors.
The feature map $\phi$ allows us to translate the kernel distance (and norm) computations into operations in a RKHS that take time $O(n \rho)$ if $\c{H}$ has dimension $\rho$, rather than the brute force time $O(n^2)$. Unfortunately, $\c{H}$ is in general infinite dimensional, including the case of the Gaussian kernel. Thus, we use dimensionality reduction to find an approximate mapping $\tilde{\phi} : \reals^d \to \reals^\rho$ (where $\tilde \Phi(\c{P}) = \sum_{p \in \c{P}} \tilde \phi(p)$) that approximates $\kappa(\c{P}, \c{Q})$:
\[
\bigg| \sum_{p \in P}\sum_{q \in Q} K(p,q)\mu(p)\nu(q) - \sum_{p \in P}\sum_{q \in Q}\IP{\tilde \phi(p)}{\tilde \phi(q)} \bigg| \leq \varepsilon W^2.
\]
The analysis in the existing literature on approximating feature space does not directly bound the dimension $\rho$ required for a specific error bound\footnote{Explicit matrix versions of the Johnson-Lindenstrauss lemma~\cite{JL84} cannot be directly applied because the source space is itself infinite dimensional, rather than $\reals^d$.}.
We derive bounds from two known techniques:
random projections~\cite{Rahimi2007} (for shift-invariant kernels, including Gaussians)
and
the Fast Gauss Transform~\cite{Yang2003,Greengard1991} (for Gaussian kernel).
We produce three different features maps, with different bounds on the number of dimensions $\rho$ depending on $\log n$ ($n$ is the number of points), $\varepsilon$ (the error), $\delta$ (the probability of failure), $\Delta$ (the normalized diameter of the points), and/or $d$ (the ambient dimension of the data \emph{before} the map).
\subsection{Random Projections Feature Space}
\label{sec:rahimi-recht-appr}
Rahimi and Recht~\cite{Rahimi2007} proposed a feature mapping that essentially applies an implicit Johnson-Lindenstrauss projection from $\c{H} \to \b{R}^\rho$.
The approach works for any shift-invariant kernel (i.e., one that can be written as $K(p,q) = k(p-q)$).
For the Gaussian kernel, $k(z) = e^{-\|z\|^2/2}$, where $z \in \b{R}^d$.
The Fourier transform of $k : \reals^d \to \reals^+$ is $g(\omega) = (2\pi)^{-d/2} e^{-\|\omega\|^2/2}$.
A basic result in harmonic analysis \cite{bochner} is that $k$ is a kernel if and only if $g$ is a measure (and after scaling, is a probability distribution).
Let $\omega$ be drawn randomly from the distribution defined by $g$:
\[
k(x-y)
=
\int_{\omega \in \reals^d} g(\omega) e^{\iota \IP{\omega}{x-y}} \; d\omega
=
E_\omega[\IP{\psi_{\omega}(x)}{\psi_{\omega}(y)}],
\]
where $\psi_{\omega}(z) = (\cos (\IP{\omega}{z}), \sin(\IP{\omega}{z}))$ are the real and imaginary components of $e^{\iota \IP{\omega}{z}}$. This implies that $\IP{\psi_\omega(x)}{\psi_\omega(y)}$ is an unbiased estimator of $k(x-y)$.
We now consider a $\rho$-dimensional feature vector $\phi_{\Upsilon} : P \to \b{R}^{\rho}$ where the $(2i-1)$th and $(2i)$th coordinates are described by $\mu(p) \psi_{\omega_i}(p)/(\rho/2) = (2\mu(p) \cos(\IP{\omega_i}{p})/\rho, 2\mu(p) \sin(\IP{\omega_i}{p})/\rho)$ for some $\omega_i \in \Upsilon = \{\omega_1, \ldots, \omega_{\rho/2}\}$ drawn randomly from $g$. Next we prove a bound on $\rho$ using this construction.
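For concreteness, a sketch of this feature map for the unit-bandwidth Gaussian $k(z) = e^{-\|z\|^2/2}$; the cosine and sine coordinates are stacked rather than interleaved, which is an equivalent layout:
\begin{verbatim}
import numpy as np

def random_features(points, weights, rho, rng):
    """rho-dimensional random Fourier features for
    k(p - q) = exp(-||p - q||^2 / 2); inner products of feature
    vectors average cos(<omega_i, p - q>) over the draws omega_i."""
    m = rho // 2
    omega = rng.standard_normal((m, points.shape[1]))  # omega_i ~ g
    proj = points @ omega.T                            # <omega_i, p>
    feats = np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(m)
    return feats * weights[:, None]                    # fold in mu(p)

def kappa_estimate(phi_P, phi_Q):
    # kappa(P, Q) ~= <sum_p phi(p), sum_q phi(q)>
    return float(phi_P.sum(axis=0) @ phi_Q.sum(axis=0))
\end{verbatim}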
\begin{lemma}\label{lem:rand-feat}
For $\phi_\Upsilon : \c{P} \cup \c{Q} \to \reals^\rho$ with $\rho = O((1/\varepsilon^2) \log(n/\delta))$, with probability $\geq 1-\delta$
\[
\bigg|\sum_{p \in \c{P}} \sum_{q \in \c{Q}} K(p,q) \mu(p) \nu(q) - \sum_{p \in \c{P}} \sum_{q \in \c{Q}} \IP{\phi_\Upsilon(p)}{\phi_\Upsilon(q)} \bigg| \leq \varepsilon W^2.
\]
\end{lemma}
\begin{proof}
We make use of the following Chernoff-Hoeffding bound. Given a set $\{X_1, \ldots, X_n\}$ of independent random variables, such that $| X_i - E[X_i] | \leq \Lambda$, then for $M = \sum_{i=1}^n X_i$ we can bound $\Pr[| M - E[M] | \geq \alpha] \leq 2 e^{-2\alpha^2 / (n\Lambda^2)}$.
We can now bound the error of using $\phi_{\Upsilon}$ for any pair $(p,q) \in \c{P} \times \c{Q}$ as follows:
\begin{eqnarray*}
\lefteqn{\Pr\left[\left| \IP{\phi_{\Upsilon}(p)}{\phi_{\Upsilon}(q)} - \mu(p)\nu(q)k(p-q)\right| \geq \varepsilon \mu(p) \nu(q)\right] }
\\ &= &
\Pr\left[\left| \IP{\phi_{\Upsilon}(p)}{\phi_{\Upsilon}(q)} - E_\Upsilon \left[\IP{\phi_{\Upsilon}(p)}{\phi_{\Upsilon}(q)}\right] \right| \geq \varepsilon \mu(p)\nu(q)\right]
\\ &\leq&
\Pr\left[\left| \sum_i \frac{2}{\rho} \mu(p)\nu(q)\IP{\psi_{\omega_i}(p)}{\psi_{\omega_i}(q)} - E_\Upsilon\left[\sum_i \frac{2}{\rho} \mu(p) \nu(q) \IP{\psi_{\omega_i}(p)}{\psi_{\omega_i}(q)}\right] \right| \geq \varepsilon \mu(p)\nu(q)\right]
\\ &\leq&
2 e^{-2 (\varepsilon \mu(p)\nu(q))^2 / (\rho \Lambda^2/2)}
\leq
2 e^{- \rho \varepsilon^2/64},
\end{eqnarray*}
where the last inequality follows by $\Lambda \leq 2 \max_{p,q} (2/\rho) \mu(p)\nu(q) \IP{\psi_\omega(p)}{\psi_\omega(q)} \leq 8(2/\rho) \mu(p)\nu(q)$ since for each pair of coordinates $\|\psi_\omega(p)\| \leq 2$ for all $p \in \c{P}$ (or $q \in \c{Q}$).
By the union bound, the probability that this holds for all pairs of points $(p,q) \in \c{P} \times \c{Q}$ is given by
\[
\Pr\left[\forall_{(p,q) \in \c{P} \times \c{Q}}\left| \IP{\phi_{\Upsilon}(p)}{\phi_{\Upsilon}(q)} - \mu(p)\nu(q)k(p-q)\right| \geq \varepsilon \mu(p) \nu(q)\right]
\leq (n^2) 2 e^{-\rho \varepsilon^2/64}.
\]
Setting $\delta \geq n^2 2 e^{-\rho\varepsilon^2/64}$ and solving for $\rho$ yields that for $\rho = O((1/\varepsilon^2) \log (n/\delta))$, with probability at least $1-\delta$, for all $(p,q) \in \c{P} \times \c{Q}$ we have $|\mu(p)\nu(q)k(p-q) - \IP{\phi_\Upsilon(p)}{\phi_\Upsilon(q)}| \leq \varepsilon \mu(p)\nu(q)$.
It follows that with probability at least $1-\delta$
\[
\bigg| \sum_{p \in \c{P}}\sum_{q \in \c{Q}} \mu(p)\nu(q) K(p,q) - \sum_{p \in \c{P}}\sum_{q \in \c{Q}} \IP{\phi_\Upsilon(p)}{\phi_\Upsilon(q)} \bigg| \leq \varepsilon \sum_{p\in \c{P}} \sum_{q \in \c{Q}}\mu(p) \nu(q) \leq \varepsilon W^2. \qedhere
\]
\end{proof}
Note that the analysis of Rahimi and Recht~\cite{Rahimi2007} is done for unweighted point sets (i.e. $\mu(p) = 1$) and actually goes further, in that it yields a guarantee for any pair of points taken from a manifold $\c{M}$ having diameter $\Delta$. They do this by building an $\varepsilon$-net over the domain and applying the above tail bounds to the $\varepsilon$-net. We can adapt this trick to replace the $(\log n)$ term in $\rho$ by a $(d \log (\Delta/\varepsilon))$ term, recalling $\Delta = (1/h) \max_{p,p' \in \c{P} \cup \c{Q}} \|p - p'\|$. This leads to the same guarantees as above with a dimension of $\rho = O((d/\varepsilon^2) \log (\Delta/ \varepsilon \delta))$.
\subsection{Fast Gauss Transform Feature Space}
\label{ssec:ifgt}
The above approach works by constructing features in the frequency domain. In what follows, we present an alternative approach that operates in the spatial domain directly. We base our analysis on the Improved Fast Gauss Transform (IFGT)~\cite{Yang2003}, an improvement on the Fast Gauss Transform. We start with a brief review of the IFGT (see the original work~\cite{Yang2003} for full details).
\paragraph{IFGT feature space construction.}
The goal of the IFGT is to approximate $\kappa(\c{P},\c{Q})$.
First we rewrite $\kappa(\c{P},\c{Q})$ as the summation $\sum_{q \in \c{Q}}G(q)$ where $G(q) = \nu(q) \sum_{p \in \c{P}} e^{-\|p-q\|^2/h^2} \mu(p)$.
Next, we approximate $G(q)$ in two steps. First we rewrite
\[
G(q) = \nu(q) \sum_{p \in P} \mu(p) e^{-\frac{\|q - x_*\|^2}{h^2}} e^{-\frac{\|p - x_*\|^2}{h^2}} e^{\frac{2\IP{q - x_*}{p - x_*}}{h^2}},
\]
where the quantity $x_*$ is a fixed vector that is usually the centroid of $\c{P}$.
The first two exponential terms can be computed for each $p$ and $q$ once.
Second, we approximate the remaining exponential term by its Taylor expansion
$e^v = \sum_{i\ge 0} \frac{v^i}{i!}$.
After a series of algebraic manipulations, the following expression emerges:
\[
G(q) = \nu(q) e^{- \frac{\|q - x_*\|^2}{h^2}} \sum_{\alpha \ge 0} C_\alpha \Bigl(\frac{q - x_*}{h}\Bigr)^\alpha
\]
where $C_\alpha$ is given by
\[
C_\alpha = \frac{2^{|\alpha|}}{\alpha !}\sum_{p \in P} \mu(p) e^{- \frac{\|p - x_*\|^2}{h^2}}\Bigl(\frac{p - x_*}{h}\Bigr)^\alpha .
\]
The parameter $\alpha$ is a \emph{multiindex}, and is actually a vector $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_d)$ of dimension $d$.
The expression $z^\alpha$, for $z \in \reals^d$, denotes the monomial $z_1^{\alpha_1} z_2^{\alpha_2} \ldots z_d^{\alpha_d}$, the quantity $|\alpha|$ is the \emph{total degree} $\sum \alpha_i$, and the quantity $\alpha! = \Pi_i (\alpha_i !)$. The multiindices are sorted in \emph{graded lexicographic order}, which means that $\alpha$ comes before $\alpha'$ if $|\alpha| < |\alpha'|$, and two multiindices of the same degree are ordered lexicographically.
The above expression for $G(q)$ is an exact infinite sum, and is approximated by truncating the summation at multiindices of total degree $\tau-1$. Note that there are at most $\rho = \binom{\tau+d-1}{d} = O(\tau^d)$ such multiindices. We now construct a mapping $\tilde{\phi} : \reals^d \rightarrow \reals^\rho$.
Let
$
\tilde{\phi}(p)_\alpha = \sqrt{\frac{2^{|\alpha|}}{\alpha!}} \mu(p) e^{-\frac{\|p - x_*\|^2}{h^2}}\Bigl(\frac{p - x_*}{h}\Bigr)^\alpha.
$
Then
\[
G(q)= \sum_\alpha \tilde{\phi}(q)_\alpha \sum_{p \in P}\tilde{\phi}(p)_\alpha
\]
and $S = \sum_{q \in Q} G(q)$ is then given by
\[
S =
\sum_{p \in P}\sum_{q \in Q} \sum_\alpha \tilde{\phi}(q)_\alpha \tilde{\phi}(p)_\alpha
=
\sum_{p \in P} \sum_{q \in Q} \langle \tilde{\phi}(q), \tilde{\phi}(p) \rangle.
\]
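For concreteness, a sketch of the truncated feature map; multiindices are enumerated by brute force, \texttt{x\_star} is the chosen expansion center, and \texttt{weights} holds $\mu$ (or $\nu$ for $\c{Q}$):
\begin{verbatim}
import numpy as np
from itertools import product
from math import factorial

def multiindices(d, tau):
    """All d-dimensional multiindices of total degree at most tau-1."""
    return [a for a in product(range(tau), repeat=d) if sum(a) < tau]

def ifgt_features(points, weights, h, tau, x_star):
    """Truncated IFGT feature map: one coordinate per multiindex."""
    alphas = multiindices(points.shape[1], tau)
    z = (points - x_star) / h
    base = weights * np.exp(-np.sum((points - x_star)**2, axis=1) / h**2)
    feats = np.empty((len(points), len(alphas)))
    for j, a in enumerate(alphas):
        coef = np.sqrt(2.0**sum(a) / np.prod([factorial(i) for i in a]))
        feats[:, j] = coef * base * np.prod(z ** np.array(a), axis=1)
    return feats
\end{verbatim}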
\paragraph{IFGT error analysis.}
The error incurred by truncating the sum at degree $\tau-1$ is given by
\[
\textsf{Err}(\tau)
=
\big| \sum_{p \in P} \sum_{q \in Q} K(p,q) \mu(p) \nu(q) - \sum_{p \in P} \sum_{q \in Q} \IP{\tilde \phi(p)}{\tilde \phi(q)} \big|
\leq
\sum_{p \in P} \sum_{q \in Q} \mu(p) \nu(q) \frac{2^\tau}{\tau!} \Delta^{2\tau}
=
W^2 \frac{2^\tau}{\tau!} \Delta^{2 \tau}.
\]
Set $\varepsilon W^2= \textsf{Err}(\tau)$. Applying Stirling's approximation, we solve for $\tau$ in $\log (1/\varepsilon) \geq \tau \log (\tau/4 \Delta^2)$. This yields the bounds $\tau = O(\Delta^2)$ and $\tau = O(\log (1/\varepsilon))$. Thus our error bound holds for $\tau = O(\Delta^2 + \log (1/\varepsilon))$. Using $\rho = O(\tau^d)$, we obtain the following result.
\begin{lemma} \label{lem:IFGT-feat}
There exists a mapping $\tilde \phi : \c{P} \cup \c{Q} \to \reals^\rho$ with $\rho = O(\Delta^{2d} + \log^d (1/\varepsilon))$ so
\[
\bigg|\sum_{p \in \c{P}} \sum_{q \in \c{Q}} K(p,q) \mu(p) \nu(q) - \sum_{p \in \c{P}} \sum_{q \in \c{Q}} \IP{\tilde \phi(p)}{\tilde \phi(q)} \bigg| \leq \varepsilon W^2.
\]
\end{lemma}
\subsection{Summary of Feature Maps}
We have developed three different bounds on the dimension required for feature maps that approximate $\kappa(\c{P}, \c{Q})$ to within $\varepsilon W^2$.
\begin{itemize} \vspace{-.1in} \itemsep -2pt\parsep=-1pt\partopsep -2pt
\item[\textsf{IFGT}:] $\rho = O(\Delta^{2d} + \log^d(1/\varepsilon))$. Lemma \ref{lem:IFGT-feat}.
Advantages: deterministic, independent of $n$, logarithmic dependence on $1/\varepsilon$.
Disadvantages: polynomial dependence on $\Delta$, exponential dependence on $d$.
\item[\textsf{Random-points}:] $\rho = O((1/\varepsilon^2) \log(n/\delta))$. Lemma \ref{lem:rand-feat}.
Advantages: independent of $\Delta$ and $d$.
Disadvantages: randomized, dependent on $n$, polynomial dependence on $1/\varepsilon$.
\item[\textsf{Random-domain}:] $\rho = O((d/\varepsilon^2) \log(\Delta/\varepsilon \delta))$. (above)
Advantages: independent of $n$, logarithmic dependence on $\Delta$, polynomial dependence on $d$.
Disadvantages: randomized, dependence on $\Delta$ and $d$, polynomial dependence on $1/\varepsilon$.
\end{itemize}
For simplicity, we (mainly) use the \textsf{Random-points} based result from Lemma \ref{lem:rand-feat} in what follows. If appropriate in a particular application, the other bounds may be employed.
\paragraph{Feature-based computation of $D_K$.}
\label{sec:fast-current-norm}
As before, we can decompose $
D_K^2(\c{P},\c{Q}) = \kappa(\c{P},\c{P}) + \kappa(\c{Q},\c{Q}) - 2\kappa(\c{P},\c{Q})$
and use Lemma \ref{lem:rand-feat} to approximate each of $\kappa(\c{P},\c{P}), \kappa(\c{Q},\c{Q})$, and $\kappa(\c{P},\c{Q})$ with error $\varepsilon W^2/4$.
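In code this is a single vector subtraction; a sketch, where \texttt{phi\_P} and \texttt{phi\_Q} are the $n \times \rho$ arrays produced by either feature map above:
\begin{verbatim}
import numpy as np

def kernel_distance_sq(phi_P, phi_Q):
    """D_K^2(P, Q) ~= ||Phi(P) - Phi(Q)||^2."""
    diff = phi_P.sum(axis=0) - phi_Q.sum(axis=0)
    return float(diff @ diff)
\end{verbatim}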
\begin{theorem} \label{thm:fast-CN}
We can compute a value $U$ in time
$O((n/\varepsilon^2) \log(n/\delta))$ such that $|U - D_K^2(\c{P},\c{Q})| \leq \varepsilon W^2$, with probability at least $1-\delta$.
\end{theorem}
\paragraph{A nearest-neighbor algorithm.}
The feature map does more than yield efficient algorithms for the kernel distance. As a representation for shapes and distributions, it allows us to solve other data analysis problems on shape spaces using off-the-shelf methods that apply to points in Euclidean space. As a simple example of this, we can combine the \textsf{Random-points} feature map with known results on approximate nearest-neighbor search in Euclidean space~\cite{DBLP:conf/focs/AndoniI06} to obtain the following result.
\begin{lemma}
Given a collection of $m$ point sets $\c{C} = \{\c{P}_1, \c{P}_2, \ldots, \c{P}_m\}$ and a query surface $\c{Q}$, we can compute the $c$-approximate nearest neighbor to $\c{Q}$ in $\c{C}$ under the kernel distance with query time $O(\rho m^{1/c^{2}+o(1)})$, using $O(\rho m^{1+1/c^{2}+o(1)})$ space and preprocessing.
\end{lemma}
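A sketch of this pipeline, using a brute-force scan as a stand-in for the sublinear locality-sensitive hashing index of Andoni and Indyk:
\begin{verbatim}
import numpy as np

def nearest_shape(feature_vecs, phi_query):
    """Index of the point set whose summed feature vector Phi(P_i)
    is closest (in Euclidean distance) to Phi(Q)."""
    Phi = np.array(feature_vecs)        # m x rho, one row per shape
    dists = np.linalg.norm(Phi - phi_query, axis=1)
    return int(np.argmin(dists))
\end{verbatim}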
\section{Coresets for the Kernel Distance}
\label{sec:core-set-current}
The kernel norm (and distance) can be approximated in near-linear time; however, this may be excessive for large data sets. Rather, we extract a small subset (a coreset) $\c{S}$ from the input $\c{P}$ such that the kernel distance between $\c{S}$ and $\c{P}$ is small. By triangle inequality, $\c{S}$ can be used as a proxy for $\c{P}$. Specifically, we extend the notion of $\varepsilon$-samples for range spaces to handle non-binary range spaces defined by kernels.
\paragraph{Background on range spaces.}
Let $\xi(P)$ denote the total weight of a set of points $P$, or cardinality if no weights are assigned.
Let $P \subset \b{R}^d$ be a set of points and let $\c{A}$ be a family of subsets of $P$. For examples of $\c{A}$, let $\c{B}$ denote the set of all subsets defined by containment in a ball and let $\c{E}$ denote the set of all subsets defined by containment in ellipses.
We say $(P, \c{A})$ is a \emph{range space}. Let $\bar \xi_P(A) = \xi(A)/\xi(P)$.
An \emph{$\varepsilon$-sample} (or \emph{$\varepsilon$-approximation}) of $(P, \c{A})$ is a subset $Q \subset P$ such that
\[
\max_{A \in \c{A}} \left| \bar \xi_Q(Q \cap A) - \bar \xi_P(P \cap A)\right| \leq \varepsilon.
\]
To create a coreset for the kernel norm, we want to generalize these notions of $\varepsilon$-samples to non-binary ($(0,1)$-valued instead of $\{0,1\}$-valued) functions, specifically to kernels.
For two point sets $P,Q$, define $\bar \kappa(P,Q) = (1/\xi(P))(1/\xi(Q)) \sum_{p \in P} \sum_{q \in Q} K(p,q)$, and when we have a singleton set $Q = \{q\}$ and a subset $P' \subseteq P$ then we write $\bar \kappa_P(P',q) = (1/\xi(P)) \sum_{p \in P'} K(p,q)$. Let $K^+ = \max_{p,q \in P} K(p,q)$ be the maximum value a kernel can take on a dataset $P$, which can be normalized to $K^+=1$.
We say a subset of $S \subset P$ is an \emph{$\varepsilon$-sample of $(P,K)$} if
\[
\max_q \left| \bar \kappa_P(P,q) - \bar \kappa_S(S,q) \right| \leq \varepsilon K^+.
\]
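This property can be checked empirically; a sketch for the Gaussian kernel (so $K^+ = 1$) on unweighted point sets, over a hypothetical finite set of query points:
\begin{verbatim}
import numpy as np

def kernel_discrepancy(P, S, queries, h):
    """max_q |kappa_bar_P(P,q) - kappa_bar_S(S,q)| over the queries."""
    def kbar(X, q):
        return np.mean(np.exp(-np.sum((X - q)**2, axis=1) / h))
    return max(abs(kbar(P, q) - kbar(S, q)) for q in queries)
\end{verbatim}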
The standard notion of VC-dimension~\cite{VC71} (and related notion of shattering dimension) is fundamentally tied to the binary ($\{0,1\}$-valued) nature of ranges, and as such, it does not directly apply to $\varepsilon$-samples of $(P,K)$.
Other researchers have defined different combinatorial dimensions that can be applied to kernels~\cite{DGL96,KS94,ABCH97,Vap89}. The best result is based on $\gamma$-fat shattering dimension $\textsc{f}_\gamma$~\cite{KS94}, defined for a family of $(0,1)$-valued functions $\c{F}$ and a ground set $P$. A set $Y \subset P$ is \emph{$\gamma$-fat shattered} by $\c{F}$ if there exists a function $\alpha : Y \to [0,1]$ such that for all subsets $Z \subseteq Y$ there exists some $F_Z \in \c{F}$ such that for every $x \in Z$ $F_Z(x) \geq \alpha(x) + \gamma$ and for every $x \in Y \setminus Z$ $F_Z(x) \leq \alpha(x) - \gamma$. Then $\textsc{f}_\gamma = \xi(Y)$ for the largest cardinality set $Y \subset P$ that can be $\gamma$-fat shattered.
Bartlett \etal~\cite{BLW96} show that a random sample of $O((1/\varepsilon^2) (\textsc{f}_\gamma \log^2(\textsc{f}_\gamma/\varepsilon) + \log(1/\delta)))$ elements creates an $\varepsilon$-sample (with probability at least $1-\delta$) with respect to $(P,\c{F})$ for $\gamma = \Omega(\varepsilon)$.
Note that the $\gamma$-fat shattering dimension of Gaussian and other symmetric kernels in $\b{R}^d$ is $d+1$ (by setting $\alpha(x) = .5$ for all $x$), the same as balls $\c{B}$ in $\b{R}^d$, so this gives a random-sample construction for $\varepsilon$-samples of $(P,K)$ of size $O((d/\varepsilon^2) (\log^2(1/\varepsilon) + \log(1/\delta)))$.
In this paper, we improve this result in two ways by directly relating a kernel range space $(P,K)$ to a similar (binary) range space $(P,\c{A})$.
First, this improves the random-sample bound because it uses sample-complexity results for binary range spaces that have been heavily optimized.
Second, this allows for all deterministic $\varepsilon$-sample constructions
(which have no probability of failure) and can have much smaller size.
\paragraph{Constructions for $\varepsilon$-samples.}
Vapnik and Chervonenkis~\cite{VC71} showed that the complexity of $\varepsilon$-samples is tied to the VC-dimension of the range space. That is, given a range space $(X, \c{A})$, a subset $Y \subset X$ is said to be \emph{shattered} by $\c{A}$ if every subset $Z \subseteq Y$ can be realized as $Z = Y \cap R$ for some $R \in \c{A}$. Then the \emph{VC-dimension} of a range space $(X,\c{A})$ is the cardinality of the largest subset $Y \subset X$ that can be shattered by $\c{A}$.
Vapnik and Chervonenkis~\cite{VC71} showed that if the VC-dimension of a range space $(X,\c{A})$ is $\nu$, then a random sample $Y$ of $O((1/\varepsilon^2)(\nu \log (1/\varepsilon) + \log(1/\delta)))$ points from $X$ is an $\varepsilon$-sample with probability at least $1-\delta$.
This bound was improved to $O((1/\varepsilon^2) (\nu + \log 1/\delta))$ by Talagrand~\cite{Tal94,LLS01}.
Alternatively, Matousek~\cite{Mat91} showed that $\varepsilon$-samples of size $O((\nu/\varepsilon^2) \log (\nu/\varepsilon))$ can be constructed deterministically, that is, with no probability of failure. A simpler algorithm with a more thorough runtime analysis is presented by Chazelle and Matousek~\cite{CM96}, and runs in $O(d)^{3d} |X|(1/\varepsilon)^{2\nu}\log^{\nu}(1/\varepsilon)$ time.
Smaller $\varepsilon$-samples exist; in particular, Matousek, Welzl, and Wernisch~\cite{MWW93}, with an improvement by Matousek~\cite{Mat95}, showed that $\varepsilon$-samples exist of size $O((1/\varepsilon)^{2-2/(\nu+1)})$, based on a discrepancy result that says there exists a labeling $\chi : X \to \{-1,+1\}$ such that $\max_{R \in \c{A}} \sum_{x \in R \cap X} \chi(x) \leq O(|X|^{1/2 - 1/2\nu} \log^{1/2} |X|)$.
Chazelle~\cite{Cha00} alludes that if an efficient construction for such a labeling existed, then an algorithm for creating $\varepsilon$-samples of size $O((1/\varepsilon)^{2-2/(\nu+1)}\log^{2-1/(d+1)}(\nu/\varepsilon))$ would follow; see also Phillips~\cite{Phi08}.
Recently, Bansal~\cite{Ban10} provided a randomized polynomial-time algorithm for the entropy method, which is central in proving these existential coloring bounds. This leads to an algorithm that runs in time $O(|X| \cdot \poly{1/\varepsilon})$, as claimed by Charikar \etal~\cite{CNN11,N11}.
An alternate approach is through the VC-dimension of the \emph{dual range space} $(\c{A},\bar{\c{A}})$, of (primal) range space $(X,\c{A})$, where $\bar{\c{A}}$ is the set of subsets of ranges $\c{A}$ defined by containing the same element of $X$.
In our context, for range spaces defined by balls $(\b{R}^d,\c{B})$ or by ellipses of fixed orientation $(\b{R}^d, \c{E})$, the dual range space has VC-dimension $\bar \nu = d$.
Matousek~\cite{Mat99} shows that a technique of matching with low-crossing number~\cite{CW89} along with Haussler's packing lemma~\cite{Hau95} can be used to construct a low discrepancy coloring for range spaces where $\bar \nu$, the VC-dimension of the dual range space is bounded. This technique can be made deterministic and runs in $\poly{|X|}$ time. Invoking this technique in the Chazelle and Matousek~\cite{CM96} framework yields an $\varepsilon$-sample of size $O((1/\varepsilon)^{2-2/(\bar \nu +1)} (\log(1/\varepsilon))^{2-1/(\bar \nu+1)})$ in $O(|X| \cdot \poly{1/\varepsilon})$ time.
Specifically, we attain the following result:
\begin{lemma}
\label{lem:e-samp-ball}
For discrete range spaces $(X,\c{B})$ and $(X,\c{E})$ for $X \subset \b{R}^d$ of size $n$, we can construct an $\varepsilon$-sample of size $O((1/\varepsilon)^{2-2/(d+1)}(\log (1/\varepsilon))^{2-1/(d+1)})$ in $O(n \cdot \emph{\poly{1/\varepsilon}})$ time.
\end{lemma}
For specific range spaces, the size of $\varepsilon$-samples can be improved beyond the VC-dimension bound. Phillips~\cite{Phi08} showed for ranges $\c{R}_d$ consisting of axis-aligned boxes in $\b{R}^d$, that $\varepsilon$-samples can be created of size $O((1/\varepsilon) \cdot \log^{2d} (1/\varepsilon))$. This can be generalized to ranges defined by $k$ predefined normal directions of size $O((1/\varepsilon) \cdot \log^{2k}(1/\varepsilon))$. These algorithms run in time $O(|X| (1/\varepsilon^3) \textrm{poly}\log (1/\varepsilon))$.
And for intervals $\c{I}$ over $\b{R}$, $\varepsilon$-samples of $(X, \c{I})$ can be created of size $O(1/\varepsilon)$ by sorting points and retaining every $\varepsilon|X|$th point in the sorted order~\cite{LP09}.
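The interval construction is simple enough to state in full; a sketch:
\begin{verbatim}
def interval_eps_sample(xs, eps):
    """eps-sample of (X, I) for intervals over R: sort and keep
    every (eps * |X|)-th point, giving size O(1/eps)."""
    xs = sorted(xs)
    step = max(1, int(eps * len(xs)))
    return xs[step - 1 :: step]
\end{verbatim}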
\paragraph{$\varepsilon$-Samples for kernels.}
The \emph{super-level set} of a kernel given one input $q \in \b{R}^d$ and a value $v \in \b{R}^+$, is the set of all points $p \in \b{R}^d$ such that $K(p,q) \geq v$. We say that a kernel is \emph{linked} to a range space $(\b{R}^d, \c{A})$ if for every possible input point $q \in \b{R}^d$ and any value $v \in \b{R}^+$ that the super-level set of $K(\cdot,q)$ defined by $v$ is equal to some $H \in \c{A}$. For instance multi-variate Gaussian kernels with no skew are linked to $(\b{R}^d, \c{B})$ since all super-level sets are balls, and multi-variate Gaussian kernels with non-trivial covariance are linked to $(\b{R}^d, \c{E})$ since all super-level sets are ellipses.
\begin{theorem}
For any kernel $K : \c{M} \times \c{M} \to \b{R}^+$ linked to a range space $(\c{M}, \c{A})$, an $\varepsilon$-sample $S$ of $(P, \c{A})$, for $P \subseteq \c{M}$, is an $\varepsilon$-sample of $(P,K)$.
\label{thm:kernel-sample}
\end{theorem}
\emph{A (flawed) attempt at a proof may proceed by considering a series of approximate level-sets, within which each point has about the same function value. Since $S$ is an $\varepsilon$-sample of $(P,\c{A})$, we can guarantee the density of $S$ and $P$ in each level set is off by at most $2\varepsilon$. However, the sum of absolute error over all approximate level-sets is approximately $\varepsilon K^+$ times the number of approximate level sets. This analysis fails because it allows error to accumulate; however, a more careful application of the $\varepsilon$-sample property shows it cannot. A correct analysis follows using a charging scheme which prevents the error from accumulating. }
\begin{proof}
We can sort all $p_i \in P$ in similarity to $q$ so that $p_i < p_j$ (and by notation $i<j$) if $K(p_i,q) > K(p_j,q)$. Thus any super-level set containing $p_j$ also contains $p_i$ for $i<j$. We can now consider the one-dimensional problem on this sorted order from $q$.
We now count the deviation $E(P,S,q) = \bar \kappa_P(P,q) - \bar \kappa_S(S,q)$ from $p_1$ to $p_n$ using a charging scheme. That is, each element $s_j \in S$ is charged to $\xi(P)/\xi(S)$ points in $P$. For simplicity we assume that $k = \xi(P)/\xi(S)$ is an integer; otherwise we can allow fractional charges.
We now construct a partition of $P$ slightly differently, for positive and negative $E(P,S,q)$ values, corresponding to undercounts and overcounts, respectively.
\textbf{Undercount of $\bar \kappa_S(S,q)$.}
For undercounts, we partition $P$ into $2\xi(S)$ (possibly empty) sets $\{P'_1, P_1, P'_2,$ $P_2, \ldots, P'_{\xi(S)}, P_{\xi(S)}\}$ of consecutive points by the sorted order from $q$.
Starting with $p_1$ (the closest point to $q$) we place points in sets $P'_j$ or $P_j$ following their sorted order. Recursively on $j$ and $i$, starting at $j=1$ and $i=1$, we place each $p_i$ in $P'_j$ as long as $K(p_i,q) > K(s_j,q)$ (this may be empty). Then we place the next $k$ points $p_i$ into $P_j$. After $k$ points are placed in $P_j$, we begin with $P'_{j+1}$, until all of $P$ has been placed in some set. Let $t \leq \xi(S)$ be the index of the last set $P_j$ such that $\xi(P_j) = k$.
Note that for all $p_i \in P_j$ (for $j \leq t$) we have $K(s_j,q) \geq K(p_i,q)$, thus $\bar \kappa_S(\{s_j\},q) \geq \bar \kappa_P(P_j,q)$.
We can now bound the undercount as
\[
E(P,S,q) =
\sum_{j=1}^{\xi(S)} \left(\bar \kappa_P(P_j,q) - \bar \kappa_S(\{s_j\},q) \right)
+
\sum_{j=1}^{\xi(S)} \bar \kappa_P(P'_j,q)
\leq
\sum_{j=1}^{t+1} \bar \kappa_P(P'_j,q)
\]
since the first term is at most $0$ and since $\xi(P'_j) = 0$ for $j > t+1$.
Now consider a super-level set $H \in \c{A}$ containing all points before $s_{t+1}$; $H$ is the smallest range that contains every non-empty $P'_j$.
Because (for $j \leq t$) each set $P_j$ can be charged to $s_j$, then $\sum_{j=1}^t \xi(P_j \cap H) = k \cdot \xi(S \cap H)$.
And because $S$ is an $\varepsilon$-sample of $(P,\c{A})$, then
$\sum_{j=1}^{t+1} \xi(P'_j) = \left( \sum_{j=1}^{t+1} \xi(P'_j) + \sum_{j=1}^t \xi(P_j \cap H)\right) - k \cdot \xi(S \cap H) \leq \varepsilon \xi(P)$.
We can now bound
\[
E(P,S,q)
\leq
\sum_{j=1}^{t+1} \bar \kappa_P(P_j',q)
=
\sum_{j=1}^{t+1} \sum_{p \in P_j'} \frac{K(p,q)}{\xi(P)}
\leq
\frac{1}{\xi(P)} \sum_{j=1}^{t+1} \xi(P_j') K^+
\leq
\frac{1}{\xi(P)} (\varepsilon \xi(P)) K^+
=
\varepsilon K^+.
\]
\textbf{Overcount of $\bar \kappa_S(S,q)$:}
The analysis for overcounts is similar to undercounts, but we construct the partition in reverse and the leftover after the charging is not quite as clean to analyze.
For overcounts, we partition $P$ into $2\xi(S)$ (possibly empty) sets $\{P_1, P'_1, P_2, P'_2, \ldots, P_{\xi(S)},$ $P'_{\xi(S)}\}$ of consecutive points by the sorted order from $q$.
Starting with $p_n$ (the furthest point from $q$) we place points in sets $P'_j$ or $P_j$ following their reverse-sorted order. Recursively on $j$ and $i$, starting at $j=\xi(S)$ and $i=n$, we place each $p_i$ in $P'_j$ as long as $K(p_i,q) < K(s_j,q)$ (this may be empty). Then we place the next $k$ points $p_i$ into $P_j$. After $k$ points are placed in $P_j$, we begin with $P'_{j-1}$, until all of $P$ has been placed in some set.
Let $t \leq \xi(S)$ be the index of the last set $P_j$ such that $\xi(P_j) = k$ (the smallest such $j$).
Note that for all $p_i \in P_j$ (for $j \geq t$) we have $K(s_j,q) \leq K(p_i,q)$, thus $\bar \kappa_S(\{s_j\},q) \leq \bar \kappa_P(P_j,q)$.
We can now bound the (negative) undercount as
\begin{align*}
E(P,S,q) = &
\sum_{j=\xi(S)}^t \left(\bar \kappa_P(P_j,q) - \bar \kappa_S(\{s_j\},q) \right)
+
\sum_{j=t-1}^1 \left(\bar \kappa_P(P_j,q) - \bar \kappa_S(\{s_j\},q) \right)
+
\sum_{j=1}^{\xi(S)} \bar \kappa_P(P'_j,q)
\\ \geq &
\left(\bar \kappa_P(P_{t-1},q) - \bar \kappa_S(\{s_{t-1}\}, q) \right) - \sum_{j=t-2}^1 \bar \kappa_S(\{s_j\},q),
\end{align*}
since the first full term is at least $0$, as is each $\bar \kappa_P(P_j,q)$ and $\bar \kappa_P(P'_j,q)$ term in the second and third terms. We will need the one term $\bar \kappa_P(P_{t-1},q)$ related to $P$ in the case when $1 \leq \xi(P_{t-1}) < k$.
Now, using that $S$ is an $\varepsilon$-sample of $(P,\c{A})$, we will derive a bound on $t$, and more importantly $(t-2)$. We consider the maximal super-level set $H \in \c{A}$ such that no points $H \cap P$ are in $P'_j$ for any $j$. This is the largest set where each point $p \in P$ can be charged to a point $s \in S$ such that $K(p,q) > K(s,q)$, and thus presents the smallest (negative) undercount.
In this case, $H \cap P = \cup_{j=1}^s P_j$ for some $s$ and $H \cap S = \cup_{j=1}^s \{s_j\}$. Since $t \leq s$, then $\xi(H \cap P) = (s-t+1) k +\xi(P_{t-1})= (s-t+1) \xi(P)/\xi(S) + \xi(P_{t-1})$ and $\xi(H \cap S) = s$. Thus
\[
\varepsilon
\geq
\bar \xi_S(H \cap S) - \bar \xi_P(H \cap P)
=
\frac{s}{\xi(S)} - \frac{(s-t+1) \xi(P)/\xi(S)}{\xi(P)} - \frac{\xi(P_{t-1})}{\xi(P)}
\geq
\frac{t-1}{\xi(S)} - \frac{\xi(P_{t-1})}{\xi(P)}.
\]
Thus $(t -2) \leq \varepsilon \xi(S)+\xi(P_{t-1}) (\xi(S)/\xi(P)) - 1$. Let $p_i \in P_{t-1}$ be the point minimizing $K(p_i,q)$ (note $K(p_i,q) \geq K(s_{t-1},q)$); then
\begin{align*}
E(P,S,q)
&\geq
\frac{\kappa(P_{t-1},q)}{\xi(P)} - \frac{K(s_{t-1},q)}{\xi(S)} - \left(\varepsilon \xi(S)+ \xi(P_{t-1}) \frac{\xi(S)}{\xi(P)} - 1\right) \frac{K^+}{\xi(S)}
\\ &=
- \varepsilon K^+ + K^+ \left(\frac{k - \xi(P_{t-1})}{\xi(P)}\right) - \frac{k \cdot K(s_{t-1},q) - \kappa(P_{t-1},q) }{\xi(P)}
\\ &\geq
- \varepsilon K^+ + K^+ \left(\frac{k - \xi(P_{t-1})}{\xi(P)}\right) - K(p_i,q) \left(\frac{k - \xi(P_{t-1})}{\xi(P)}\right)
\geq
-\varepsilon K^+. \qedhere
\end{align*}
\end{proof}
\begin{corollary}\label{cor:Gaussian-coreset}
For a Gaussian kernel, any $\varepsilon$-sample $S$ for $(P, \c{B})$ (or for $(P, \c{E})$ if we consider covariance) guarantees that for any query $q \in \b{R}^d$ that
$
\left| \bar\kappa_P(P,q) - \bar\kappa_S(S,q) \right| \leq \varepsilon K^+.
$
\end{corollary}
\paragraph{Coreset-based computation of kernel distance.}
For convenience here we assume that our kernel has been normalized so $K^+ = 1$.
Let $P$ be an $\varepsilon$-sample of $(\c{P}, K)$, and all points $p \in P$ have uniform weights so $\xi(P) = \xi(\c{P}) = W$.
Then for any $q \in \b{R}^d$
\[
\varepsilon
\geq
| \bar \kappa_P(P,q) - \bar \kappa_{\c{P}}(\c{P},q) |
=
\left| \frac{\kappa(P,q)}{\xi(P)} - \frac{\kappa(\c{P},q)}{\xi(\c{P})} \right|.
\]
and hence
\[
\left| \kappa(P,q) - \kappa(\c{P},q)\right|
\leq
\varepsilon \xi(\c{P}) = \varepsilon W.
\]
It follows that if $Q$ is also an $\varepsilon$-sample for $(\c{Q},K)$, then
$| \kappa(P,Q) - \kappa(\c{P},\c{Q}) | \leq 2\varepsilon W^2.$
Hence, via known constructions of $\varepsilon$-samples randomized~\cite{VC71,Tal94} or deterministic~\cite{Mat91,CM96,MWW93,Mat95,STZ04,Phi08} (which can be applied to weighted point sets to create unweighted ones~\cite{Mat91}) and Theorem \ref{thm:kernel-sample} we have the following theorems. The first shows how to construct a coreset with respect to $D_K$.
\begin{theorem}\label{thm:random-coreset}
Consider any kernel $K$ linked with $(\b{R}^d,\c{B})$ and objects $\c{P},\c{Q} \subset \c{M} \subset \b{R}^d$, each with total weight $W$, and for constant $d$. We can construct sets $P \subset \c{P}$ and $Q \subset \c{Q}$ such that $|D_K(\c{P},\c{Q}) - D_K(P,Q)| \leq \varepsilon W^2$ of size:
\begin{itemize} \vspace{-.1in} \itemsep -2pt\parsep=-1pt\partopsep -2pt
\item $O((1/\varepsilon^{2 - 1/(d+1)}) \log^{2-1/(d+1)}(1/\varepsilon))$, via Lemma \ref{lem:e-samp-ball}; or
\item $O((1/\varepsilon^2)(d+\log(1/\delta)))$ via random sampling (correct with probability at least $(1-\delta)$).
\end{itemize}
\end{theorem}
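A sketch of the random-sampling branch, for the Gaussian kernel with uniform weights normalized so $W = 1$ (the sample-size constant is illustrative):
\begin{verbatim}
import numpy as np

def coreset_kernel_distance(P, Q, h, eps, delta, rng):
    """Sample O((1/eps^2)(d + log(1/delta))) points from each of P
    and Q, then evaluate D_K^2 by brute force on the samples."""
    d = P.shape[1]
    m = int(np.ceil((d + np.log(1.0 / delta)) / eps**2))
    Ps = P[rng.integers(0, len(P), size=min(m, len(P)))]
    Qs = Q[rng.integers(0, len(Q), size=min(m, len(Q)))]
    def kappa(A, B):
        sq = np.sum((A[:, None, :] - B[None, :, :])**2, axis=2)
        return np.exp(-sq / h).mean()    # uniform weights, W = 1
    return kappa(Ps, Ps) + kappa(Qs, Qs) - 2.0 * kappa(Ps, Qs)
\end{verbatim}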
We present an alternative sampling condition to Theorem \ref{thm:random-coreset} in Appendix \ref{sec:coreset}. It has larger dependence on $\varepsilon$, and also has either dependence on $\Delta$ or on $\log n$ (and is independent of $K^+$).
Also in Appendix \ref{sec:coreset} we show that it is NP-hard to optimize $\varepsilon$ with a fixed subset size $k$.
The associated runtimes are captured in the following algorithmic theorem.
\begin{theorem}\label{thm:aprox-KD}
When $K$ is linked to $(\b{R}^d,\c{B})$,
we can compute a number $\tilde D$ such that $|D_K(\c{P},\c{Q}) - \tilde D| \leq \varepsilon$ in time:
\begin{itemize} \vspace{-.1in} \itemsep -2pt\parsep=-1pt\partopsep -2pt
\item $O(n \cdot (1/\varepsilon^{2d+2}) \log^{d+1}(1/\varepsilon))$; or
\item $O(n + (\log n) \cdot ((1/\varepsilon^2) \log(1/\delta)) + (1/\varepsilon^4)\log^2(1/\delta))$ that is correct with probability at least $1-\delta$.
\end{itemize}
\end{theorem}
Notice that these results automatically work for any kernel linked with $(\b{R}^d, \c{B})$ (or more generally with $(\b{R}^d, \c{E})$) with no extra work; this includes not only Gaussians (with non-trivial covariance), but any other standard kernel such as triangle, ball or Epanechnikov.
\section{Minimizing the Kernel Distance under Translation and Rotation}
\label{sec:an-fptas-minimizing}
We attack the problem of minimizing the kernel distance between $\c{P}$ and $\c{Q}$ under a set of transformations: translations or translations and rotations.
A \emph{translation} $T \in \b{R}^d$ is a vector so $\c{Q} \oplus T = \{q+T \mid q \in Q\}$. The translation
$
T^* = \arg \min_{T \in \b{R}^d} D_K(\c{P}, \c{Q} \oplus T),
$
applied to $\c{Q}$ minimizes the kernel norm.
A \emph{rotation} $R \in \SO{d}$ can be represented as a special orthogonal matrix.
We can write $R \circ \c{Q} = \{R(q) \mid q \in Q\}$, where $R(q)$ rotates $q$ about the origin, preserving its norm. The set of a translation and rotation
$
(T^\star, R^\star) = \arg \min_{(T,R) \in \b{R}^d \times \SO{d}} D_K(\c{P}, R \circ (\c{Q} \oplus T))
$
applied to $\c{Q}$ minimizes the kernel norm.
\paragraph{Decomposition.}
In minimizing $D_K(\c{P}, R \circ (\c{Q} \oplus T))$ under all translations and rotations, we can reduce this to a simpler problem. The first term $\kappa(\c{P}, \c{P}) = \sum_{p_1 \in \c{P}} \sum_{p_2 \in \c{P}} \mu(p_1) \mu(p_2) K(p_1, p_2)$ has no dependence on $T$ or $R$, so it can be ignored. And the second term $\kappa(\c{Q}, \c{Q}) = \sum_{q_1 \in \c{Q}} \sum_{q_2 \in \c{Q}} \nu(q_1) \nu(q_2) K(R(q_1 + T), R(q_2+T))$ can also be ignored because it is invariant under the choice of $T$ and $R$. Each subterm $K(R(q_1 +T), R(q_2 + T))$ only depends on $||R(q_1 + T) - R(q_2 + T)|| = ||q_1 - q_2||$, which is also independent of $T$ and $R$. Thus we can rephrase the objective as finding
\\
$\displaystyle{\hspace{0.5in}
(T^\star,R^\star) = \arg \max_{(T,R) \in \b{R}^d \times \SO{d}} \sum_{p \in \c{P}} \sum_{q \in \c{Q}} \mu(p) \nu(q) K(p, R(q + T)) = \arg \max_{(T,R) \in \b{R}^d \times \SO{d}} \kappa(\c{P},R\circ (\c{Q} \oplus T)).
}$
We start by providing an approximately optimal translation. Then we adapt this algorithm to handle both translations and rotations.
\subsection{Approximately Optimal Translations}
We describe, for any parameter $\varepsilon > 0$, an algorithm for a translation $\hat{T}$ such that $D_K^2(\c{P}, \c{Q} \oplus \hat{T}) - D_K^2(\c{P}, \c{Q} \oplus T^*) \leq \varepsilon W^2$.
We begin with a key lemma providing analytic structure to our problem.
\begin{lemma}
$\kappa(\c{P}, \c{Q} \oplus T^*) \geq W^2/n^2$.
\label{lem:lb1}
\end{lemma}
\begin{proof}
When $T \in \b{R}^d$ aligns $q \in Q$ so that $q + T = p$ for some $p \in P$, it ensures that $K(p,q+T) = 1$. We can choose the points $p$ and $q$ such that $\mu(p)$ and $\nu(q)$ are as large as possible; each must be at least $W/n$, so $K(p,q+T) \mu(p) \nu(q) \geq W^2/n^2$. All other subterms in $\kappa(\c{P},\c{Q}\oplus T)$ are at least $0$, so $\kappa(\c{P},\c{Q} \oplus T) \geq W^2/n^2$; since $T^*$ maximizes $\kappa(\c{P},\c{Q}\oplus T)$, the claim follows.
\end{proof}
Thus, if $\kappa(\c{P},\c{Q} \oplus T^*) \geq W^2/n^2$, then for some pair of points $p \in \c{P}$ and $q \in \c{Q}$ we must have $\mu(p) \nu(q) K(p,q+T^*) \geq \mu(p)\nu(q)/n^2$, i.e. $K(p,q+T^*) \geq 1/n^2$. Otherwise, if all $n^2$ pairs $(p,q)$ satisfy $\mu(p)\nu(q)K(p,q+T^*) < \mu(p)\nu(q)/n^2$, then
\[
\kappa(\c{P},\c{Q} \oplus T^*)
=
\sum_{p \in \c{P}}\sum_{q \in \c{Q}} \mu(p)\nu(q) K(p,q+T^*)
<
\sum_{p \in \c{P}}\sum_{q \in \c{Q}} \mu(p)\nu(q) /n^2
=
W^2/n^2.
\]
Thus some pair $p \in \c{P}$ and $q \in \c{Q}$ must satisfy $||p-(q+T^*)|| \leq \sqrt{\ln (n^2)}$, via Lemma \ref{lem:G-dist} with $\gamma = 1/(n^2)$.
Let $G_\varepsilon$ be a grid on $\b{R}^d$ so that when any point $p \in \b{R}^d$ is snapped to the nearest grid point $g \in G_\varepsilon$, we guarantee that $||g-p|| \leq \varepsilon$.
We can define an orthogonal grid $G_\varepsilon = \{(\varepsilon/\sqrt{d}) z \mid z \in \b{Z}^d \}$, where $\b{Z}^d$ is the $d$-dimensional lattice of integers.
Let $\c{G}[\varepsilon,p,\Lambda]$ represent the subset of the grid $G_\varepsilon$ that is within a distance $\Lambda$ of the point $p$.
In other words, $\c{G}[\varepsilon, p, \Lambda] = \{g \in G_\varepsilon \mid ||g - p|| \leq \Lambda\}$.
\paragraph{Algorithm.}
These results imply the following algorithm.
For each point $p \in \c{P}$, for each $q \in \c{Q}$, and for each $g \in \c{G}[\varepsilon/2, p, \sqrt{\ln(n^2)}]$ we consider the translation $T_{p,q,g}$ such that $q + T_{p,q,g} = g$. We return the translation $T_{p,q,g}$ which maximizes $\kappa(\c{P}, \c{Q} \oplus T_{p,q,g})$, by evaluating $\kappa$ at each such translation of $\c{Q}$.
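For concreteness, a sketch of this search with the bandwidth normalized to $h = 1$, as in the analysis of this section; the weighted point arrays are hypothetical inputs:
\begin{verbatim}
import numpy as np
from itertools import product

def grid_ball(center, eps, radius):
    """Lattice points of G_eps (per-axis spacing eps/sqrt(d)) within
    distance `radius` of `center`: a sketch of G[eps, p, radius]."""
    d = len(center)
    s = eps / np.sqrt(d)
    base = np.round(np.asarray(center) / s).astype(int)
    r = int(np.ceil(radius / s)) + 1
    out = []
    for z in product(range(-r, r + 1), repeat=d):
        g = s * (base + np.array(z))
        if np.linalg.norm(g - center) <= radius:
            out.append(g)
    return out

def best_translation(P, Q, mu, nu, eps):
    """Try T = g - q for every pair (p, q) and every g in the grid
    ball around p; return the T maximizing kappa(P, Q + T)."""
    radius = np.sqrt(np.log(len(P) ** 2))
    best_T, best_val = None, -np.inf
    for p in P:
        for q in Q:
            for g in grid_ball(p, eps / 2, radius):
                T = g - q
                sq = np.sum((P[:, None, :] - (Q + T)[None, :, :])**2,
                            axis=2)
                val = float((mu[:, None] * nu[None, :]
                             * np.exp(-sq)).sum())
                if val > best_val:
                    best_T, best_val = T, val
    return best_T
\end{verbatim}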
\begin{theorem}
The above algorithm runs in time $O((1/\varepsilon)^d n^{4} \log^{d/2} n)$,
for fixed $d$, and is guaranteed to find a translation $\hat{T}$
such that $D_K^2(\c{P}, \c{Q} \oplus \hat{T}) - D_K^2(\c{P}, \c{Q} \oplus T^*) \leq \varepsilon W^2$.
\label{lem:trans}
\end{theorem}
\begin{proof}
We know that the optimal translation $T^*$ must result in some pair of points $p \in \c{P}$ and $q \in \c{Q}$ such that $||p - (q+T^*)|| \leq \sqrt{\ln (n^2)}$ by Lemma \ref{lem:lb1}. So, among all pairs $p \in \c{P}$ and $q \in \c{Q}$ checked by the algorithm, at least one pair must satisfy $||p - (q+T^*)|| \leq \sqrt{\ln(n^2)}$.
Assuming we have found this closest pair, $p \in \c{P}$ and $q \in \c{Q}$, we only need to search in the neighborhood of translations $T = p - q$.
Furthermore, for some translation $T_{p,q,g} = g-q$ we can claim that $\kappa(\c{P},\c{Q} \oplus T^*) - \kappa(\c{P},\c{Q} \oplus T_{p,q,g}) \leq \varepsilon$. Since $||T^* - T_{p,q,g}|| \leq \varepsilon/2$, we have the bound on subterm $| K(p,q+T^*) - K(p,q+T_{p,q,g})| \leq \varepsilon/2$, by Lemma \ref{lem:grid-eps}. In fact, for every other pair $p' \in \c{P}$ and $q' \in \c{Q}$, we also know $| K(p',q'+T^*) - K(p',q'+T_{p,q,g})| \leq \varepsilon/2$. Thus the sum of these subterms has error at most $(\varepsilon/2) \sum_{p \in \c{P}} \sum_{q \in \c{Q}} \mu(p) \nu(q) = (\varepsilon/2)W^2$.
Since, the first two terms of $D_K^2(\c{P},\c{Q} \oplus T)$ are unaffected by the choice of $T$, this provides an $\varepsilon$-approximation for $D_K^2(\c{P},\c{Q} \oplus T)$ because all error is in the $(-2)\kappa(\c{P},\c{Q} \oplus T)$ term.
For the runtime we need to bound the number of pairs from $\c{P}$ and $\c{Q}$ (i.e. $O(n^2)$), the time to calculate $\kappa(\c{P},\c{Q} \oplus T)$ (i.e. $O(n^2)$), and finally the number of grid points in $\c{G}[\varepsilon/2, p, \sqrt{\ln (n^2)}]$. The last term requires $O((1/\varepsilon)^d)$ points per unit volume, and a ball of radius $\sqrt{\ln (n^2)}$ has volume $O(\log^{d/2} n)$, resulting in $O((1/\varepsilon)^d \log^{d/2} n)$ points. This product produces a total runtime of $O((1/\varepsilon)^d n^4 \log^{d/2} n)$.
\end{proof}
For a constant dimension $d$, using Theorem \ref{thm:random-coreset} to construct a coreset, we can first set $n = O((1/\varepsilon^2)\log (1/\delta))$ and now
the time to calculate $\kappa(\c{P},\c{Q} \oplus T)$ is $O((1/\varepsilon^4) \log^2 (1/\delta))$ after spending $O(n + (1/\varepsilon^2) \log (1/\delta) \log n)$ time to construct the coresets.
Hence the total runtime is
\[
O(n + \log n (1/\varepsilon^2)( \log(1/\delta)) + (1/\varepsilon^{d+8}) \cdot \log^{d/2} ((1/\varepsilon) \log(1/\delta))\log^4(1/\delta)),
\]
and is correct with probability at least $1-\delta$.
\begin{theorem}
For fixed $d$, in
\[
O(n + \log n (1/\varepsilon^2)( \log(1/\delta)) + (1/\varepsilon^{d+8}) \cdot \log^{d/2} ((1/\varepsilon) \log(1/\delta))\log^4(1/\delta))
\]
time we can find a translation $\hat T$ such that
$D_K^2(\c{P}, \c{Q} \oplus \hat{T}) - D_K^2(\c{P}, \c{Q} \oplus T^*) \leq \varepsilon W^2$,
with probability at least $1-\delta$.
\label{thm:trans}
\end{theorem}
\subsection{Approximately Optimal Translations and Rotations}
For any parameter $\varepsilon > 0$, we describe an algorithm to find a translation $\hat T$ and a rotation $\hat R$ such that
\[
D_K^2(\c{P}, \hat R \circ (\c{Q} \oplus \hat T)) - D_K^2(\c{P}, R^\star \circ (\c{Q} \oplus T^\star)) \leq \varepsilon W^2.
\]
We first find a translation to align a pair of points $p \in \c{P}$ and $q \in \c{Q}$ within some tolerance (using a method similar to above, and using Lemma~\ref{lem:G-dist} to ignore far-away pairs) and then rotate $\c{Q}$ around $q$. This deviates from our restriction above that $\hat R \in \SO d$ rotates about the origin, but can be easily overcome by performing the same rotation about the origin, and then translating $\c{Q}$ again so $q$ is at the desired location. Thus, after choosing a $q \in \c{Q}$ (we will in turn choose each $q' \in \c{Q}$) we let all rotations be about $q$ and ignore the extra modifications needed to $\hat T$ and $\hat R$ to ensure $\hat R$ is about the origin.
Given a subset $S \subset \c{Q}$ of fewer than $d$ points and a pair $(p,q) \in \c{P} \times \c{Q}$ where $q \notin S$, we can define a rotational grid around $p$, with respect to $q$, so that $S$ is fixed. Let $\c{R}_{d,S}$ be the subset of rotations in $d$-space under which the set $S$ is invariant. That is for any $R \in \c{R}_{d,S}$ and any $s \in S$ we have $R(s) = s$. Let $\tau = d - |S|$. Then (topologically) $\c{R}_{d,S} = \SO{\tau}$.
Let $R_{S,p,q} = \min_{R \in \c{R}_{d,S}} ||R(q) - p||$ and let $\hat q = R_{S,p,q}(q)$.
Let $\c{H}[p,q,S,\varepsilon,\Lambda] \subset \c{R}_{d,S}$ be a set of rotations under which $S$ is invariant with the following property: for any point $q'$ such that there exists a rotation $R' \in \c{R}_{d,S}$ where $R'(q) = q'$ and where $||q' - \hat q|| \leq \Lambda$, then there exists a rotation $R \in \c{H}[p,q,S,\varepsilon,\Lambda]$ such that $||R(q) - q'|| \leq \varepsilon$.
For the sanity of the reader, we will not give a technical construction, but just note that it is possible to construct $\c{H}[p,q,S,\varepsilon,\Lambda]$ of size $O((\Lambda/\varepsilon)^{\tau})$.
\paragraph{Algorithm.}
For each pair of ordered sets of $d$ points $(p_1, p_2, \ldots, p_d) \subset \c{P}$ and $(q_1, q_2, \ldots, q_d) \subset \c{Q}$ consider the following set of translations and rotations. Points in $\c{P}$ may be repeated.
For each $g \in \c{G}[\varepsilon/d,p_1,\sqrt{\ln(\max\{1/\varepsilon,n^2\})}]$ consider translation $T_{p_1, q_1, g}$ such that $q_1 + T_{p_1, q_1, g} = g$. We now consider rotations of the set $(\c{Q} \oplus T_{p_1, q_1, g})$.
Let $S = \{q_1\}$ and consider the rotational grid $\c{H}[p_2, q_2+T_{p_1,q_1,g}, S, \varepsilon/d, \sqrt{\ln(1/\varepsilon)}]$. For each rotation $R_2 \in \c{H}[p_2, q_2+T_{p_1,q_1,g}, S, \varepsilon/d, \sqrt{\ln(1/\varepsilon)}]$ we recurse as follows. Apply $R_2(\c{Q} \oplus T_{p_1, q_1, g})$ and place $R_2(q_2 + T_{p_1, q_1, g})$ in $S$. Then in the $i$th stage consider the rotational grid $\c{H}[p_i, R_{i-1}(\ldots R_2(q_2 + T_{p_1, q_1, g}) \ldots ), S, \varepsilon/d, \sqrt{\ln(1/\varepsilon)}]$. Where $R_i$ is some rotation we consider from the $i$th level rotational grid, let $\bar{R} = R_d \circ R_{d-1} \circ \ldots \circ R_2$. Let $(\hat T, \hat R)$ be the pair $(T_{p, q, g}, \bar{R})$ that maximize $\kappa(\c{P},\bar R \circ (\c{Q} \oplus T_{p,q,g}))$.
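For intuition, in the planar case ($d = 2$, so $\tau = 1$ once one point is pinned) the rotational grid degenerates to a grid over angles; a sketch under this assumption:
\begin{verbatim}
import numpy as np

def planar_rotation_grid(eps, radius):
    """Angles spaced eps/radius apart: a rotation about the pinned
    point moves any point at distance <= radius by at most eps."""
    step = eps / max(radius, eps)
    return np.arange(0.0, 2.0 * np.pi, step)

def rotate_about(points, center, theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (points - center) @ R.T + center
\end{verbatim}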
\begin{theorem}
The above algorithm runs in time
\[
O(n^{2d+2} (1/\varepsilon)^{(d^2-d+2)/2} \log^{(d^2-3d+2)/4} (1/\varepsilon) \log^{d/2} (\max\{n^2, 1/\varepsilon\})),
\]
for a fixed $d$, and is guaranteed to find a translation and rotation pair $(\hat T, \hat R)$, such that
\[
D_K^2(\c{P}, \hat R \circ (\c{Q} \oplus \hat T)) - D_K^2(\c{P}, R^\star \circ \c{Q} \oplus T^\star) \leq \varepsilon W^2.
\]
\label{lem:rot}
\end{theorem}
\begin{proof}
We compare our solution $(\hat T, \hat R)$ to the optimal solution $(T^\star, R^\star)$. Note that only pairs of points $(p,q) \in \c{P} \times \c{Q}$ such that $||p - R^\star(q + T^\star)|| < \sqrt{\ln(1/\varepsilon)}$ need to be considered.
We first assume that for the ordered sets of $d$ points we consider $(p_1, p_2, \ldots, p_d) \subset \c{P}$ and $(q_1, q_2, \ldots, q_d) \subset \c{Q}$ we have
(A1) $||p_i - R^\star(q_i + T^\star)|| \leq \sqrt{\ln(1/\varepsilon)}$, and
(A2) for $S = \{q_1, \ldots, q_{i-1}\}$, let $q_i \in \c{Q}$ be the furthest point from $S$ such that $||p_i - (q_i + T^\star)|| \leq \sqrt{\ln(1/\varepsilon)}$.
Note that (A2) implies that for any rotation $R \in \c{R}_{d,S}$ that $||q_i - R(q_i)|| > ||q' - R(q')||$ for all $q' \in \c{Q}$ that can be within the distance threshold under $(T^\star, R^\star)$.
In the case that fewer than $d$ pairs of points are within our threshold distance, then as long as these are the first pairs in the ordered sequence, the algorithm works the same up to that level of recursion, and the rest does not matter. Finally, by Lemma \ref{lem:lb1} we can argue that at least one pair must be within the distance threshold for our translation grid.
For each point $q \in \c{Q}$ we can show there exists some pair $(T,R)$ considered by the algorithm such that $||R^\star(q + T^\star) - R(q + T)|| \leq \varepsilon.$
First, there must be some translation $T = T_{p_1, q_1, g}$ in our grid that is within a distance of $\varepsilon/d$ of $T^\star$. This follows from Lemma \ref{lem:grid-eps} and similar arguments to the proof for translations.
For each $q_i$ we can now show that for some $R_i \in \c{H}$ (the rotational grid) we have $||R_i(R_{i-1}(\ldots R_2(q_i + T_{p_1, q_1, g}) \ldots )) - R^\star(q_i + T^\star)|| \leq \varepsilon$.
By our assumptions, the transformed $q_i$ must lie within the extents of $\c{H}$. Furthermore, there is a rotation $R_j'$ that can replace each $R_j$ for $j \in [2,i]$ that moves $q_i$ by at most $\varepsilon/d$ such that $R'_i(R'_{i-1}( \ldots R'_2(q_i) \ldots )) = R^\star(q_i)$. Hence, the composition of these rotations affects $q_i$ by at most $(i-1)\varepsilon/d$, and the combined effect of rotation and translation errors is at most $\varepsilon$.
Since each $q_i$ is invariant to each subsequent rotation in the recursion, we have shown that there is a pair $(T, R)$ considered so $||R(q_i + T) - R^\star(q_i + T^\star)|| \leq \varepsilon$ for $q_i$ in the ordered set $(q_1, q_2, \ldots, q_d)$.
We can now use our second assumption (A2) that shows that at each stage of the recursion $q_i$ is the point affected most by the rotation. This indicates that we can use the above bound for all points $q \in \c{Q}$, not just those in our ordered set.
Finally, we can use Lemma \ref{lem:grid-eps} to complete the proof of correctness.
Since if each $K(p,q)$ has error at most $\varepsilon$, then
\[
\left|\sum_{p \in \c{P}} \sum_{q \in \c{Q}} \mu(p) \nu(q) K(p, \hat R(q + \hat T)) - \sum_{p \in \c{P}} \sum_{q \in \c{Q}} \mu(p) \nu(q) K(p, R^*(q + T^*))\right|
\leq
\sum_{p \in \c{P}} \sum_{q \in \c{Q}} \mu(p) \nu(q) \varepsilon
=
\varepsilon W^2.
\]
We can bound the runtime as follows. We consider all $d! {n \choose d} = O(n^d)$ ordered sets of points in $\c{Q}$ and all $n^d$ ordered sets of points from $\c{P}$. This gives the leading $O(n^{2d})$ term.
We then investigate all combinations of grid points from each grid in the recursion. The translation grid has size $O((\Delta/\varepsilon)^d) = O((1/\varepsilon)^d \log^{d/2} (\max\{1/\varepsilon, n^2\}))$. The size of the $i$th rotational grid is $O((\sqrt{\log(1/\varepsilon)}/\varepsilon)^{d-i})$, starting at $i=2$. The product of all the rotational grids has this base raised to the sum of their powers $\sum_{i=2}^{d} (d-i) = \sum_{i=0}^{d-2} i = (d-1)(d-2)/2 = (d^2 - 3d +2)/2$, that is $O((1/\varepsilon)^{(d^2 - 3d +2)/2} \log^{(d^2 - 3d +2)/4} (1/\varepsilon))$. Multiplying by the size of the translational grid we get $O((1/\varepsilon)^{(d^2 - d + 2)/2} \log^{(d^2 - 3d +2)/4} (1/\varepsilon) \log^{d/2} (\max\{n^2, 1/\varepsilon\}))$.
Then for each rotation and translation we must evaluate $\kappa(\c{P}, R \circ (\c{Q} \oplus T))$ in $O(n^2)$ time.
Multiplying these three components gives the final bound of
\[
O(n^{2d+2} (1/\varepsilon)^{(d^2 -d +2)/2} \log^{(d^2 -3d +2)/4} (1/\varepsilon) \log^{d/2} (\max\{n^2 , 1/\varepsilon\})). \qedhere
\]
\end{proof}
The runtime can again be reduced by first computing a coreset of size $O((1/\varepsilon^2) \log (1/\delta))$ and using this value as $n$.
After simplifying some logarithmic terms we reach the following result.
\begin{theorem}
For fixed $d$, in
\[
O(n + \log n (1/\varepsilon^2)( \log(1/\delta)) + (1/\varepsilon)^{(d^2 + 7d + 6)/2} (\log(1/\varepsilon\delta))^{(d^2 + 7d +10)/4}),
\]
time we can find a translation and rotation pair $(\hat T, \hat R)$, such that
\[
D_K^2(\c{P}, \hat R \circ (\c{Q} \oplus \hat T)) - D_K^2(\c{P}, R^\star \circ \c{Q} \oplus T^\star) \leq \varepsilon W^2,
\]
with probability at least $1-\delta$.
\label{thm:rot}
\end{theorem}
\newpage
\hypertarget{introduction}{%
\section{Introduction}\label{introduction}}
Software production, use, and reuse is an increasingly crucial part of
scholarly work
\citep{open_source_code_repo_predict_impact, Trisovic2021ALS}. While
historically underutilized, the practice of citing and referencing
software used in the course of research is becoming common with new
standards for
software citation \citep{Katz2021RecognizingTV, Du2022UnderstandingPI}
and work in extracting software references in existing literature
\citep{cz_software_mentions}. However, records of software production
are not readily identifiable or available at scale in the way that
peer-reviewed publications or other scholarly outputs are
\citep{Schindler2022TheRO}. To make progress on this problem, we
introduce two related datasets for studying and inferring software
produced as a part of research, which we refer to as the Soft-Search
dataset.
The Soft-Search dataset is aimed at identifying research projects which
are likely to have produced software while funded by a federal grant. We
start by identifying GitHub repositories that acknowledge funding from
at least one National Science Foundation (NSF) award. We then annotate
each GitHub repository found with a binary decision for its contents:
software or not-software (e.g.~not all github repositories contain
software, they might include research notes, course materials, etc.). We
then link each annotated GitHub repository to the specific NSF award
ID(s) referenced in its README.md file. Finally, we compile the
Soft-Search Training dataset using the annotations for each GitHub
repository, and the text from the linked NSF award abstract and the
project outcomes report.
Using the Soft-Search Training dataset, we train a variety of models to
predict software production using either the NSF award abstract or
project outcomes report text as input. We use the best performing models
to then infer software production against all awards funded by the
National Science Foundation from 2010 to 2023 (additional details are
offered in Section~\ref{sec-data-collection}). The predictions and
metadata for each NSF award between the 2010 and 2023 period are
compiled to form the Soft-Search Inferred dataset.
In total, our new Soft-Search dataset includes the following
contributions:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Soft-Search Training: A ground truth dataset compiled using linked NSF
awards and GitHub repositories which have been annotated for software
production.
\item
Multiple classifiers which infer software production from either the
text of an NSF award's abstract or project outcomes report.
\item
Soft-Search Inferred: A dataset of more than 150,000 NSF funded awards
from between 2010 and 2023. Each award has two predictions for
software production: one predicted from the abstract text and the
other from the project outcomes report text.
\end{enumerate}
The rest of the paper proceeds as follows. In
Section~\ref{sec-data-collection} we detail the data collection and
annotation process used for creating the Soft-Search Training dataset.
In Section~\ref{sec-models} we briefly describe the model training
process and report results. In Section~\ref{sec-soft-search-dataset} we
provide summary statistics for the Soft-Search Inferred dataset and
observe trends in software production over time. We conclude with
discussion regarding the limitations of our approach and opportunities
for future work.
\hypertarget{sec-data-collection}{%
\section{Data Collection and Annotation}\label{sec-data-collection}}
\hypertarget{sec-finding-soft}{%
\subsection{Finding Software Produced by NSF
Awards}\label{sec-finding-soft}}
The first step in our data collection process was to find software
outputs from National Science Foundation (NSF) funded research. This
step has two potential approaches. The first approach is a manual search
for references and promises of software production within NSF award
abstracts, project outcome reports, and papers supported by each award.
This first approach is labor-intensive and may be prone to labeling
errors because, while there may be a promise of software production in
these documents, it may not be possible to verify that such software was
ultimately produced. The other approach is to predict software
production using a trained model. We pursue this approach with the
caveat that there are also potential label errors.
To gather examples of verifiable software production, we created a
Python script which used the GitHub API to search for repositories which
included reference to financial support from an NSF award in the
repository's README.md file. Specifically, our script queried for
README.md files which contained any of the following text snippets:
`National Science Foundation', `NSF Award', `NSF Grant', `Supported by
NSF', or `Supported by the NSF'. GitHub was selected as the basis for
our search because of its widespread adoption and mention in scholarly
publication \citep{riseofgithubinscholarlypublication}. This search
found 1520 unique repositories which contained a reference to the NSF in
the repository's README.md file.
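For illustration, the following is a minimal sketch of such a query
using the GitHub code search API and the \texttt{requests} library. Our
actual script additionally handled authentication scopes, pagination,
and rate limiting; the exact request parameters shown here are
assumptions for demonstration purposes.

\begin{verbatim}
import requests

SNIPPETS = ['"National Science Foundation"', '"NSF Award"', '"NSF Grant"',
            '"Supported by NSF"', '"Supported by the NSF"']

def search_nsf_readmes(token):
    """Find repositories whose README.md mentions NSF support."""
    repos = set()
    for snippet in SNIPPETS:
        resp = requests.get(
            "https://api.github.com/search/code",
            params={"q": f"{snippet} in:file filename:README.md"},
            headers={"Authorization": f"token {token}"},
        )
        resp.raise_for_status()
        for item in resp.json()["items"]:
            repos.add(item["repository"]["full_name"])
    return repos
\end{verbatim}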
\hypertarget{software-production-annotation}{%
\subsection{Software Production
Annotation}\label{software-production-annotation}}
The next step in our data collection process was to annotate each of the
GitHub repositories found as either ``software'' or ``not software.'' In
our initial review of the repositories we had collected, we found that
the content of repositories ranged from documentation, experimental
notes, course materials, and collections of one-off scripts written
during a research project, to more typical software libraries with
installation instructions, testing, and community support and use.
Using existing definitions of what constitutes research software to form
the basis of our annotation criteria
\citep{martinez_ortiz_carlos_2022_7185371, sochat2022research}, we
conducted multiple rounds of trial coding on samples of the data.
Fleiss' kappa was used to measure agreement among our research team on
whether GitHub repositories contained `software' or not. In each round
of trial coding, ten GitHub repositories were randomly selected from our
dataset for each member of our research team to annotate independently.
When assessing a repository, members of the research team were allowed
to use any information in the repository to determine their annotation
(i.e.~the content of the README.md file, the repository activity,
documentation availability, etc.).
Our final round of trial coding showed near-perfect agreement among the
research team ($\kappa = 0.892$)
\citep{viera2005understanding}.
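For reference, Fleiss' kappa for a round of trial coding can be
computed with the \texttt{statsmodels} package; the annotation matrix
below is illustrative only and does not reproduce our actual
annotations.

\begin{verbatim}
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are the ten sampled repositories, columns are annotators;
# entries are the assigned category (1 = software, 0 = not software).
annotations = np.array([
    [1, 1, 1], [0, 0, 0], [1, 1, 0], [1, 1, 1], [0, 0, 0],
    [0, 1, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0],
])
table, _ = aggregate_raters(annotations)  # counts per category per subject
print(fleiss_kappa(table))
\end{verbatim}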
Our final annotation criteria were generally inclusive of labeling
repositories as software; specific exclusion criteria, however,
resulted in a repository being labeled as ``not software''.
Specifically, repositories were labeled as ``not software'' when a
repository primarily consisted of:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
project documentation or research notes
\item
teaching materials for a workshop or course
\item
the source code for a project or research lab website
\item
collections of scripts specific to the analysis of a single experiment
without regard to further generalizability
\item
utility functions for accessing data without providing any additional
processing capacity
\end{enumerate}
We then annotated all GitHub repositories in our dataset as either
``software'' or ``not software'' according to our agreed upon annotation
criteria.
\hypertarget{linking-github-repositories-to-nsf-awards}{%
\subsection{Linking GitHub Repositories to NSF
Awards}\label{linking-github-repositories-to-nsf-awards}}
Our final step in the data collection process was to link the annotated
GitHub repositories back to specific NSF awards. To do so, we created a
script which would load the webpage for each GitHub repository, scrape
the content of the repository's README and find the specific NSF award
ID number(s) referenced. While annotating the dataset, and while running
this script, our dataset size was reduced: some repositories had been
returned by the initial search because the ``NSF'' acronym is also used
by other, non-United-States governmental agencies which fund
research.
When processing each repository, our Python script would load the README
content, search for NSF Award ID patterns with regular expressions, and
then verify that each NSF award ID found was valid by requesting
metadata for the award from the NSF award API.
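A condensed sketch of this extraction and validation step is shown
below. The award ID pattern and the response handling are
simplifications of our actual script and should be read as assumptions.

\begin{verbatim}
import re
import requests

AWARD_ID_PATTERN = re.compile(r"\b(\d{7})\b")  # NSF award IDs are 7 digits

def find_valid_award_ids(readme_text):
    """Extract candidate NSF award IDs; keep those the NSF API recognizes."""
    valid = []
    for award_id in sorted(set(AWARD_ID_PATTERN.findall(readme_text))):
        resp = requests.get(
            "https://api.nsf.gov/services/v1/awards.json",
            params={"id": award_id},
        )
        # A valid ID returns a non-empty `award` list (response shape
        # simplified here).
        if resp.ok and resp.json().get("response", {}).get("award"):
            valid.append(award_id)
    return valid
\end{verbatim}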
We then retrieved the text for each award's abstract and project
outcomes report. This was the final step of our data collection process
and allowed us to create a dataset of 446 unique NSF awards labeled as
`produced software' and 471 unique NSF awards labeled as `did not
produce software'.
\hypertarget{sec-models}{%
\section{Predictive Models}\label{sec-models}}
Using the compiled Soft-Search Training dataset, we trained three
different models using the text from either the award abstract or
project outcomes report. The models trained include a logistic
regression model trained with TF-IDF word embeddings
(\texttt{tfidf-logit}), a logistic regression model trained with
semantic embeddings (\texttt{semantic-logit}), and a fine-tuned
transformer (\texttt{transformer}). Both the semantic embeddings and
the base model from which we fine-tuned our own transformer were
derived from the `distilbert-base-uncased-finetuned-sst-2-english'
model \citep{hf_canonical_model_maintainers_2022}. Each model was
trained with 80\% of the Soft-Search Training dataset. We then tested
each of the models and used F1 to rank each model's performance.
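As a concrete illustration, a minimal version of the
\texttt{tfidf-logit} configuration can be assembled with scikit-learn.
The toy data and hyperparameters below are our assumptions and do not
reproduce the released models.

\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for award abstracts and software/not-software labels.
texts = [
    "We will develop and release an open-source simulation toolkit.",
    "This award supports a summer workshop for science teachers.",
    "The project implements new algorithms distributed as a library.",
    "Funds will support travel to an annual research conference.",
]
labels = [1, 0, 1, 0]  # 1 = software, 0 = not software

# TF-IDF word embeddings feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["This grant will produce a Python package."]))
\end{verbatim}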
\hypertarget{tbl-model-results-from-abstract}{}
\begin{table}
\caption{\label{tbl-model-results-from-abstract}Predictive Model Results (Trained with Abstract Text) }\tabularnewline
\centering
\begin{tabular}{llrrrr}
\toprule
{} & model & accuracy & precision & recall & f1 \\
\midrule
0 & tfidf-logit & 0.674 & 0.674 & 0.674 & 0.673 \\
1 & transformer & 0.636 & 0.608 & 0.697 & 0.649 \\
2 & semantic-logit & 0.630 & 0.630 & 0.630 & 0.630 \\
3 & regex & 0.516 & 0.515 & 0.516 & 0.514 \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tbl-model-results-from-abstract} reports the results from
training using the abstract text as input. The best performing model was
the \texttt{tfidf-logit} which achieved an F1 of 0.673.
\hypertarget{tbl-model-results-from-project-outcomes}{}
\begin{table}
\caption{\label{tbl-model-results-from-project-outcomes}Predictive Model Results (Trained with Project Outcomes Report Text) }\tabularnewline
\centering
\begin{tabular}{llrrrr}
\toprule
{} & model & accuracy & precision & recall & f1 \\
\midrule
0 & tfidf-logit & 0.745 & 0.745 & 0.745 & 0.745 \\
1 & transformer & 0.673 & 0.638 & 0.771 & 0.698 \\
2 & semantic-logit & 0.633 & 0.633 & 0.633 & 0.632 \\
3 & regex & 0.510 & 0.507 & 0.510 & 0.482 \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tbl-model-results-from-project-outcomes} reports the results
from training using the project outcomes reports as input. The best
performing model was the \texttt{tfidf-logit} which achieved an F1 of
0.745.
While the models trained with the project outcomes reports were trained
with less data, the best model of the group achieved a higher F1 than
any of the models trained with the abstracts. While we have not
investigated further, we believe this is because the project outcomes
reports contain more direct citation of the software actually produced,
rather than an abstract's promise of software production.
The data used for training, and functions to reproduce these models, are
made available via our Python package:
\href{https://github.com/si2-urssi/eager}{\texttt{soft-search}}.
\hypertarget{sec-soft-search-dataset}{%
\section{The Soft-Search Dataset}\label{sec-soft-search-dataset}}
Using the predictive models, we compile the Soft-Search Inferred dataset
which contains the metadata, abstract text, and project outcomes report
text, for all NSF awarded projects during the 2010-2023 period. The
Soft-Search Inferred dataset additionally contains our predictions for
software production using each of the two texts respectively.
Table~\ref{tbl-soft-search-stats} summarizes the composition of the
dataset by NSF funding program.
\hypertarget{tbl-soft-search-stats}{}
\begin{table}
\caption{\label{tbl-soft-search-stats}Composition of the NSF Soft Search Dataset }\tabularnewline
\centering
\begin{tabular}{llrrr}
\toprule
{} & Program & \# Awards & \# Software & \% Software \\
\midrule
0 & MPS & 32885 & 19178 & 58.3 \\
1 & CISE & 24633 & 13274 & 53.9 \\
2 & ENG & 22900 & 11242 & 49.1 \\
3 & GEO & 17822 & 5142 & 28.9 \\
4 & BIO & 16990 & 6013 & 35.4 \\
5 & EHR & 13703 & 575 & 4.2 \\
6 & SBE & 13318 & 1966 & 14.8 \\
7 & TIP & 8597 & 4501 & 52.4 \\
8 & OISE & 2329 & 636 & 27.3 \\
9 & OIA & 498 & 123 & 24.7 \\
\bottomrule
\end{tabular}
\end{table}
\hypertarget{trends-and-observations}{%
\subsection{Trends and Observations}\label{trends-and-observations}}
\begin{figure}
{\centering \includegraphics[width=\linewidth]{paper_files/figure-pdf/fig-soft-over-time-output-1.pdf}
}
\caption{\label{fig-soft-over-time}Software Production Over Time (Using
Predictions from Abstracts)}
\end{figure}
Using the Soft-Search Inferred dataset we can observe trends in software
production over time. Figure~\ref{fig-soft-over-time} plots the percent
of awards which we predict to have produced software (using the award's
abstract) over time. While there are minor year-to-year deviations in
predicted software production, we observe that the ``Mathematical and
Physical Sciences'' (MPS) funding program funds the largest share of
awards which we predict to produce software, with ``Computer and
Information Science and Engineering'' (CISE) and ``Engineering'' (ENG)
close behind.
\begin{figure}
{\centering \includegraphics[width=\linewidth]{paper_files/figure-pdf/fig-soft-over-duration-output-1.pdf}
}
\caption{\label{fig-soft-over-duration}Software Production Grouped By
Award Duration (Using Predictions from Abstracts)}
\end{figure}
We can additionally observe trends in software production as award
duration increases. Figure~\ref{fig-soft-over-duration} plots the
percent of awards which we predict to have produced software (using the
award's abstract) grouped by the award duration in years. We note that
as award duration increases, the percentage of awards which are
predicted to have produced software also tends to increase.
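A sketch of the aggregation behind Figure~\ref{fig-soft-over-duration}
is given below, assuming a dataframe with hypothetical
\texttt{start\_date}, \texttt{end\_date}, and
\texttt{predicted\_software} columns; the actual column names in the
released dataset may differ.

\begin{verbatim}
import pandas as pd

def percent_software_by_duration(df):
    """Group awards by duration in years; compute % predicted software."""
    years = (df["end_date"] - df["start_date"]).dt.days // 365
    return (df.assign(duration_years=years)
              .groupby("duration_years")["predicted_software"]
              .mean() * 100)

df = pd.DataFrame({
    "start_date": pd.to_datetime(["2015-01-01", "2014-06-01"]),
    "end_date": pd.to_datetime(["2018-01-01", "2015-06-01"]),
    "predicted_software": [1, 0],
})
print(percent_software_by_duration(df))
\end{verbatim}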
\hypertarget{conclusion}{%
\section{Conclusion}\label{conclusion}}
We introduce Soft-Search, a pair of novel datasets for studying software
production from NSF funded projects. The Soft-Search Training dataset is
a human-labeled dataset with almost 1000 examples used to train models
which predict software production from either the NSF award abstract
text or the project outcomes report text. We used these models to
generate the Soft-Search Inferred dataset. The Soft-Search Inferred
dataset includes project metadata, the award's abstract and project
outcomes report, and predictions of software production for each NSF
funded project between 2010 and 2023. We hope that Soft-Search enables
new studies and findings that further our understanding of the role
software development plays in scholarly publication.
All datasets and predictive models produced by this work are available
from our GitHub repository:
\href{https://github.com/si2-urssi/eager}{\texttt{si2-urssi/eager}}.
\hypertarget{limitations}{%
\subsection{Limitations}\label{limitations}}
As discussed in Section~\ref{sec-data-collection}, the Soft-Search
Training dataset was entirely composed of NSF awards which ultimately
released or hosted software (and other research products) on GitHub. Due
to our data collection strategy, it is possible that each of the
predictive models learned not to predict if an NSF award would produce
software, but rather, if an NSF award would produce software hosted on
GitHub.
\hypertarget{future-work}{%
\subsection{Future Work}\label{future-work}}
As discussed in Section~\ref{sec-finding-soft}, our initial method for
attempting to find research software produced from NSF supported awards
was to search for references and promises of software production in the
abstract, project outcomes report, and attached papers of each award.
While attempting this approach to create the dataset, we found that many
awards and papers that reference computational methods do not provide a
reference web link to their code repositories or websites. In some
cases, we found repositories related to an award or paper via Google and
GitHub search ourselves. While we support including references to code
repositories in award abstracts, outcomes reports, and papers, future
research should be conducted on how to enable automatic reconnection of
papers and their software outputs.
\hypertarget{acknowledgements}{%
\section{Acknowledgements}\label{acknowledgements}}
We thank the URSSI team, especially Karthik Ram, for their input. This
material is based upon work supported by the National Science Foundation
under Grant 2211275.
\bibliographystyle{ACM-Reference-Format}
Recently John H. Schwarz conjectured \cite{schwarz} that the world-volume action of a probe $p$-brane
in a maximally (or 3/4 maximal) supersymmetric spacetime containing $AdS_{p+2}$ can be reinterpreted as
the highly effective action (HEA) of a superconformal field theory in $(p+1)$-dimensions
on the Coulomb branch. The HEA is defined by taking a conformal gauge theory on the Coulomb branch
and integrating out the massive fields, thereby obtaining an effective action in terms of massless
Abelian multiplets only. Then the HEA is conjecturally identified with the world-volume action
for a probe $p$-brane in an $AdS_{p+2} \times K$ background geometry with $N$ units of flux
threading a compact space $K$. Examples considered in \cite{schwarz} are a D3-brane in $AdS_5
\times \mathbb{S}^5$, an M2-brane in $AdS_4 \times \mathbb{S}^7/\mathbb{Z}_k$, a D2-brane
in $AdS_4 \times \mathbb{CP}^3$ and an M5-brane in $AdS_7 \times \mathbb{S}^4$.
This conjecture was driven by a guiding principle \cite{schwarz}: ``Take coincidences seriously,''
with the observation that the probe brane theory has all of the expected symmetries and dualities.
The brane actions fully incorporate the symmetry of the background as an exact global symmetry
of the world-volume theory. For example, in the case of a D3-brane in $AdS_5
\times \mathbb{S}^5$, this symmetry is the superconformal group $PSU(2,2|4)$.
In this example, it also includes the $SL(2, \mathbb{Z})$ duality group, which is known to be an
exact symmetry of type IIB superstring theory. This conjecture may be further strengthened by showing
that the world-volume actions describing probe branes in AdS space exhibit
not only (super)conformal symmetry but also dual (super)conformal symmetry and, taken together,
have an infinite-dimensional Yangian-like symmetry.\footnote{Indeed this problem was addressed by
A. Lipstein and J. H. Schwarz in arXiv:1311.6067. But, unfortunately, this paper was withdrawn
due to an error in some equation.} There have also been earlier
works \cite{ads-cft1,cfs-p1,cfs-p2,cfs-p3,cfs-p4} to note the conformal symmetry of
the worldvolume theory of a $p$-brane in an AdS background as well as
works \cite{hea-col1,hea-col2,hea-col3,ferrari}
to emphasize the relationship between probe-brane actions and low-energy effective actions
on the Coulomb branch.
In this paper we will argue that the HEA can be derived from the noncommutative (NC) field theory
representation of the AdS/CFT correspondence as recently formulated in \cite{mypaper}
(see, in particular, section 6). Our argument is based only on the well-known facts that
the master fields of large $N$ matrices are higher-dimensional NC $U(1)$ gauge fields \cite{japan-matrix,nc-seiberg,hsy-epjc09,hsy-jhep09} and the Seiberg-Witten (SW) map
\cite{ncft-sw} defining a spacetime field redefinition between ordinary and NC gauge fields
is a local coordinate transformation eliminating $U(1)$ gauge fields via the Darboux theorem
in symplectic geometry \cite{cornalba,jur-sch,liu,hsy-ijmp09,hsy-jhep09}.
The underlying math for the argument is rather fundamental. For simplicity, let us consider
two-dimensional NC space, denoted by $\mathbb{R}^2_\theta$, whose coordinates obey the commutation relation
\begin{equation}\label{nc2-space}
[y^1, y^2] = i \theta
\end{equation}
where $\theta > 0$ is a constant parameter measuring the noncommutativity of the space $\mathbb{R}^2_\theta$.
If we define annihilation and creation operators as
\begin{equation}\label{ancr}
a = \frac{y^{1} + i y^{2}}{\sqrt{2\theta}}, \qquad
a^\dagger = \frac{y^{1} - i y^{2}}{\sqrt{2\theta}},
\end{equation}
the NC algebra \eq{nc2-space} of $\mathbb{R}^2_\theta$ reduces to the Heisenberg algebra
of harmonic oscillator, i.e.,
\begin{equation}\label{haho}
[a, a^\dagger]= 1.
\end{equation}
The representation space of the Heisenberg algebra (\ref{haho}) is given by the Fock space defined by
\begin{equation}\label{fock-space}
\mathcal{H} = \{| n \rangle | \; n \in \mathbb{Z}_{\geq 0} \},
\end{equation}
which is orthonormal, i.e., $\langle n| m \rangle = \delta_{n,m}$ and
complete, i.e., $\sum_{n = 0}^{\infty} | n \rangle \langle n | = \mathbf{1}_{\mathcal{H}}$,
as is well-known from quantum mechanics.
A crucial, though elementary, fact for our argument is that the NC space $\mathbb{R}^2_\theta$
admits an infinite-dimensional separable Hilbert space \eq{fock-space} \cite{ncft-rev}.
Let us apply this elementary fact to dynamical fields defined on $\mathbb{R}^{d-1,1} \times \mathbb{R}^2_\theta$
with local coordinates $(x^\mu, y^1, y^2)$ where $\mathbb{R}^{d-1,1} \ni x^\mu$ is a $d$-dimensional
Minkowski spacetime. Consider two arbitrary fields $\widehat{\Phi}_1(x,y)$ and $\widehat{\Phi}_2(x,y)$
on $\mathbb{R}^{d-1,1} \times \mathbb{R}^2_\theta$.
In quantum mechanics physical observables are considered as operators acting on a Hilbert space.
Similarly the dynamical variables $\widehat{\Phi}_1(x,y)$ and $\widehat{\Phi}_2(x,y)$ can be regarded as
operators acting on the Hilbert space $\mathcal{H}$ which are elements of the deformed algebra $C^\infty(\mathbb{R}^{d-1,1}) \otimes \mathcal{A}_\theta$. Thus one can represent the operators acting
on the Fock space (\ref{fock-space}) as $N \times N$ matrices in $\mathrm{End}(\mathcal{H})
\equiv \mathcal{A}_N$ where $N = \mathrm{dim}(\mathcal{H}) \to \infty$:
\begin{eqnarray}\label{matrix-rep}
&& \widehat{\Phi}_1(x,y) = \sum_{n,m=0}^\infty | n \rangle \langle n| \widehat{\Phi}_1 (x, y)
| m \rangle \langle m| := \sum_{n,m=0}^\infty (\Phi_1)_{nm} (x) | n \rangle \langle m|, \nonumber \\
&& \widehat{\Phi}_2(x,y) = \sum_{n,m=0}^\infty | n \rangle \langle n| \widehat{\Phi}_2 (x,y)
| m \rangle \langle m| := \sum_{n,m=0}^\infty (\Phi_2)_{nm}(x) | n \rangle \langle m|,
\end{eqnarray}
where $\Phi_1 (x)$ and $\Phi_2 (x)$ are $N \times N$ matrices in $C^\infty(\mathbb{R}^{d-1,1})
\otimes \mathcal{A}_N$. Then one gets a natural composition rule for the products
\begin{eqnarray}\label{matrix-comp}
(\widehat{\Phi}_1 \star \widehat{\Phi}_2) (x,y) &=& \sum_{n,l,m=0}^\infty | n \rangle \langle n|
\widehat{\Phi}_1 (x,y) | l \rangle \langle l| \widehat{\Phi}_2(x,y) | m \rangle \langle m| \nonumber \\
&=& \sum_{n,l,m=0}^\infty (\Phi_1)_{nl} (x) (\Phi_2)_{lm} (x) | n \rangle \langle m|.
\end{eqnarray}
The above composition rule implies that the ordering in the NC algebra $\mathcal{A}_\theta$
is compatible with the ordering in the matrix algebra $\mathcal{A}_N$ and so it is straightforward to
translate multiplications of NC fields in $\mathcal{A}_\theta$ into those of matrices in $\mathcal{A}_N$
using the matrix representation (\ref{matrix-rep}) without any ordering ambiguity.
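This correspondence is easy to check numerically with truncated
matrices. The following sketch builds the annihilation operator of
Eq. (\ref{ancr}) in the Fock basis as an $N \times N$ matrix and
verifies the Heisenberg algebra (\ref{haho}) up to the expected
finite-$N$ artifact in the last diagonal entry:

\begin{verbatim}
import numpy as np

N = 50  # truncation of the infinite-dimensional Fock space
# Annihilation operator in the Fock basis: a|n> = sqrt(n)|n-1>.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
comm = a @ a.conj().T - a.conj().T @ a
# Equals the identity except for the (N-1, N-1) entry.
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True
print(comm[-1, -1])  # -(N-1): the truncation artifact
\end{verbatim}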
It is easy to generalize the matrix representation to $2n$-dimensional NC space $\mathbb{R}^{2n}_{\theta}$
whose coordinate generators obey the commutation relation
\begin{equation}\label{extra-nc2n}
[y^a, y^b] = i \theta^{ab}, \qquad a, b = 1, \cdots, 2n,
\end{equation}
where the Poisson bivector $\theta = \frac{1}{2} \theta^{ab} \frac{\partial}{\partial y^a} \bigwedge \frac{\partial}{\partial y^b}$ is assumed to be invertible and so $B \equiv \theta^{-1}$ defines
a symplectic structure on $\mathbb{R}^{2n}$. Consider a $D=(d+2n)$-dimensional
NC space $\mathbb{R}^{d-1,1} \times \mathbb{R}^{2n}_{\theta}$ with coordinates $Y^M = (x^\mu, y^a), \;
M = 0, 1, \cdots, D-1, \; \mu=0, 1, \cdots, d-1$. The star product for smooth functions
$\widehat{f}(Y), \widehat{g}(Y) \in C^\infty (\mathbb{R}^{D-1,1})$ is defined by
\begin{equation}\label{star-prod}
(\widehat{f} \star \widehat{g}) (Y) = e^{\frac{i}{2} \theta^{ab} \frac{\partial}{\partial y^a}
\otimes \frac{\partial}{\partial z^b}} \widehat{f}(x,y) \widehat{g}(x, z)|_{y=z}.
\end{equation}
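As a quick consistency check, applying Eq. \eq{star-prod} to the coordinate functions themselves gives
\begin{equation}\label{star-check}
y^a \star y^b = y^a y^b + \frac{i}{2} \theta^{ab} \qquad \Longrightarrow \qquad
[y^a, y^b]_\star \equiv y^a \star y^b - y^b \star y^a = i \theta^{ab},
\end{equation}
so the star product realizes the commutation relation \eq{extra-nc2n} at the level of functions.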
Therefore, in order to formulate a gauge theory on $\mathbb{R}^{d-1,1} \times \mathbb{R}^{2n}_{\theta}$,
it is necessary to ensure gauge covariance under the NC star product \eq{star-prod}.
The covariant field strength of NC $U(1)$ gauge fields $\widehat{A}_M (Y)
= (\widehat{A}_\mu, \widehat{A}_a)(x,y)$ is then given by
\begin{equation}\label{d-ncfs}
\widehat{F}_{MN}(Y) = \partial_M \widehat{A}_N (Y) - \partial_N \widehat{A}_M (Y)
- i[\widehat{A}_M, \widehat{A}_N]_\star (Y).
\end{equation}
Using the matrix representation \eq{matrix-rep},
one can show \cite{japan-matrix,nc-seiberg,hsy-epjc09,hsy-jhep09} that the $D=(d+2n)$-dimensional
NC $U(1)$ gauge theory is exactly mapped to the $d$-dimensional $U(N \to \infty)$ Yang-Mills theory:
\begin{eqnarray} \label{equiv-ncu1}
S &=& - \frac{1}{4 G_{YM}^2} \int d^D Y (\widehat{F}_{MN} - B_{MN})^2 \\
\label{equiv-u1un}
&=& - \frac{1}{g_{YM}^2} \int d^d x \mathrm{Tr} \Bigl( \frac{1}{4} F_{\mu\nu}F^{\mu\nu}
+ \frac{1}{2} D_\mu \Phi_a D^\mu \Phi^a - \frac{1}{4}[\Phi_a, \Phi_b]^2 \Bigr)
\end{eqnarray}
where $G_{YM}^2 = (2 \pi )^n |\mathrm{Pf}\theta| g_{YM}^2$ and $B_{MN} = \left(
\begin{array}{cc}
0 & 0 \\
0 & B_{ab} \\
\end{array}
\right)$.
We refer more details to the section 6.1 of Ref. \cite{mypaper}.
We emphasize that the equivalence between the $D$-dimensional NC $U(1)$ gauge theory (\ref{equiv-ncu1})
and $d$-dimensional $U(N \to \infty)$ Yang-Mills theory (\ref{equiv-u1un}) is an exact mathematical
identity, not a dimensional reduction, and has been known long ago, for example,
in \cite{japan-matrix,nc-seiberg}.
A remarkable point is that the resulting matrix models or large $N$ gauge theories described
by the action (\ref{equiv-u1un}) arise as a nonperturbative formulation of string/M theories.
For instance, we get the IKKT matrix model for $d=0$ \cite{ikkt}, the BFSS matrix quantum mechanics
for $d=1$ \cite{bfss} and the matrix string theory for $d=2$ \cite{mst}.
The most interesting case arises for $d=4$ and $n=3$ which suggests an engrossing duality that
the 10-dimensional NC $U(1)$ gauge theory on $\mathbb{R}^{3,1} \times \mathbb{R}^{6}_{\theta}$ is
equivalent to the bosonic action of 4-dimensional $\mathcal{N} = 4$ supersymmetric $U(N)$ Yang-Mills theory,
which is the large $N$ gauge theory of the AdS/CFT duality \cite{ads-cft1,ads-cft2,ads-cft3}.
According to the large $N$ duality or gauge/gravity duality, the large $N$ matrix model (\ref{equiv-u1un})
is dual to a higher dimensional gravity or string theory.
Hence it should not be surprising that the $D$-dimensional NC $U(1)$ gauge theory
can describe a theory of gravity (or a string theory) in $D$ dimensions.
Nevertheless the possibility that gravity can emerge from NC $U(1)$ gauge fields
has been largely ignored until recently. But the emergent gravity picture based
on NC $U(1)$ gauge theory \cite{mypaper,hsy-jhep09,hsy-jpcs12} shows that this coincidence did not arise
by some fortuity. Here we want to take advantage of it, following the advice
of John H. Schwarz \cite{schwarz}: ``Take coincidences seriously.''
In this paper, we will seriously take the equivalence between the $D$-dimensional NC $U(1)$
gauge theory (\ref{equiv-ncu1}) and $d$-dimensional $U(N \to \infty)$ Yang-Mills theory (\ref{equiv-u1un})
to derive the HEA conjectured in \cite{schwarz}. It is to be hoped that we also clarify why the emergent
gravity from NC gauge fields is actually the manifestation of the gauge/gravity duality or
large $N$ duality in string/M theories. We think that the emergent gravity from NC gauge fields opens
a lucid avenue to understand the gauge/gravity duality such as the AdS/CFT correspondence.
While the large $N$ duality is still a conjectural duality and its understanding is far from being complete
to identify an underlying first principle for the duality, it is possible \cite{mypaper,hsy-jhep09,hsy-jpcs12}
to reasonably identify the first principle for the emergent gravity from NC $U(1)$ gauge fields and
to derive in a systematic way gravitational variables from gauge theory quantities.
Moreover it can be shown \cite{mypaper} that the 4-dimensional $\mathcal{N} = 4$ supersymmetric $U(N)$
Yang-Mills theory is equivalent to the 10-dimensional $\mathcal{N} = 1$ supersymmetric
NC $U(1)$ gauge theory on $\mathbb{R}^{3,1} \times \mathbb{R}^{6}_{\theta}$ if we consider
the Moyal-Heisenberg vacuum (\ref{extra-nc2n}) which is a consistent solution of
the former -- the $\mathcal{N} = 4$ super Yang-Mills theory. Here is a foothold for our departure.
The paper is organized as follows. In section 2 we review the result in Ref. \cite{mypaper} showing
that the four-dimensional $\mathcal{N}=4$ superconformal field theory on the Coulomb branch defined
by the NC space \eq{extra-nc2n} is equivalent to the ten-dimensional $\mathcal{N}=1$
supersymmetric NC $U(1)$ gauge theory. In section 3 we consider the ten-dimensional $\mathcal{N}=1$
NC $U(1)$ super Yang-Mills theory \eq{10dsym-action} as a nontrivial leading approximation
of the supersymmetric completion of the NC DBI action. The supersymmetric completion is
postponed to section 5. In section 4, we identify a commutative DBI action which is mapped to the NC one
by the exact SW map defining a spacetime field redefinition between ordinary and NC gauge fields \cite{ncft-sw}.
It is observed that the spacetime geometry dual to four-dimensional
large $N$ matrices or ten-dimensional NC $U(1)$ gauge fields is simply derived from the Darboux transformation
eliminating $U(1)$ gauge fields whose statement is known as the Darboux theorem in symplectic geometry.
We also identify a possible candidate giving rise to $AdS_5 \times \mathbb{S}^5$ geometry.
It is shown and will also be checked in appendix A that the duality between NC $U(1)$ gauge fields and
gravitational fields is the SW map between commutative and NC $U(1)$ gauge fields. See Eq. \eq{rexp-dbi}.
We thus argue that the emergent gravity from NC gauge fields is the manifestation of the gauge/gravity
duality or large $N$ duality in string/M theories \cite{mypaper}.
In section 5, we derive the worldvolume action of
a probe D3-brane in $AdS_5 \times \mathbb{S}^5$ geometry from the DBI action of ten-dimensional NC $U(1)$
gauge fields which was obtained from the four-dimensional $\mathcal{N}=4$ superconformal field theory
on the Coulomb branch. We consider a supersymmetric D9-brane with the local $\kappa$-symmetry \cite{sdbi1,sdbi12,sdbi3,sdbi4,sdbi5,sdbi6} to yield the supersymmetric version of DBI actions.
We finally identify the supersymmetric worldvolume action of a probe D3-brane in $AdS_5 \times \mathbb{S}^5$ geometry with the HEA conjectured by John H. Schwarz \cite{schwarz}.
Our approach sheds light on why $N=1$ (i.e., an Abelian gauge group) is the proper choice for the HEA,
a point which was elusive in the original conjecture (see the discussion in section 5 of Ref. \cite{schwarz}).
In section 6, we discuss why the emergent gravity from NC gauge fields provides a lucid avenue
to understand the gauge/gravity duality such as the AdS/CFT correspondence \cite{ads-cft1,ads-cft2,ads-cft3}.
We conclude the paper with a few speculative remarks. In appendix A,
we demonstrate how to determine $2n$-dimensional K\"ahler metrics from $U(1)$ gauge fields by solving
the identities (\ref{dbi-idc}) and (\ref{dbi-idn}) between DBI actions
which are underlying equations for our argument. In particular, we show that Calabi-Yau $n$-folds
for $n=2$ and $3$ arise from symplectic $U(1)$ instantons in four and six dimensions, respectively.
\section{NC $U(1)$ gauge fields from large $N$ matrices}
The AdS/CFT correspondence \cite{ads-cft1,ads-cft2,ads-cft3} implies that a wide variety of quantum field
theories provide a nonperturbative realization of quantum gravity. In the AdS/CFT duality,
the dynamical variables are large $N$ matrices and so gravitational physics at a fundamental level
is described by NC operators. We argued in \cite{mypaper} that the AdS/CFT correspondence is a particular
case of emergent gravity from NC U(1) gauge fields. An underlying argumentation is to realize
the equivalence between the actions \eq{equiv-ncu1} and \eq{equiv-u1un} in a reverse way by observing that
the Moyal-Heisenberg vacuum (\ref{extra-nc2n}) is a consistent vacuum solution of
the $\mathcal{N} = 4$ super Yang-Mills theory.
The underlying logic is easy to understand, so we recapitulate only the essential points, deferring
to \cite{mypaper} for a detailed description.
The action of four-dimensional $\mathcal{N} = 4$ super Yang-Mills theory is given by \cite{n=4sym}
\begin{eqnarray}\label{n=4-action}
S &=& \int d^4 x \mathrm{Tr} \left\{- \frac{1}{4} F_{\mu\nu} F^{\mu\nu} - \frac{1}{2} D_\mu \Phi_a
D^\mu \Phi_a + \frac{g^2}{4}[\Phi_a, \Phi_b]^2 + i \overline{\lambda}_i
\overline{\sigma}^\mu D_\mu \lambda^i \right. \nonumber \\
&& \qquad \qquad \left. + \frac{ig}{2} \overline{\Sigma}^a_{ij} \lambda^i [ \Phi_a, \lambda^j]
- \frac{ig}{2} \Sigma^{a, ij} \overline{\lambda}_i [ \Phi_a, \overline{\lambda}_j] \right\}.
\end{eqnarray}
Consider a vacuum configuration defined by
\begin{equation}\label{n=4vacuum}
\langle \Phi_a \rangle_{\mathrm{vac}} = p_a, \quad
\langle A_\mu \rangle_{\mathrm{vac}} = 0, \quad \langle \lambda^i \rangle_{\mathrm{vac}} = 0.
\end{equation}
Assume that the vacuum expectation value (vev) $p_a \in \mathcal{A}_N \;
(N \to \infty)$ satisfies the Moyal-Heisenberg algebra
\begin{equation}\label{n=4moyal}
[p_a, p_b] = - i B_{ab} I_{N \times N}.
\end{equation}
Of course the commutation relation (\ref{n=4moyal}) is meaningful only when we take
the limit $N \to \infty$. It is obvious that the vacuum configuration (\ref{n=4vacuum}) in this limit
is definitely a solution of the theory. We emphasize that the vev (\ref{n=4vacuum})
of adjoint scalar fields does not break four-dimensional Lorentz symmetry.
Actually the vacuum algebra (\ref{n=4moyal}) refers to NC space $\mathbb{R}_\theta^6$ if we define
$p_a \equiv B_{ab} y^b$ and $B \equiv \theta^{-1}$.
Now fluctuations of large $N$ matrices around the vacuum (\ref{n=4vacuum}) are parameterized by
\begin{eqnarray} \label{n=4bfluct}
&& \widehat{D}_\mu (x,y) = \partial_\mu - i\widehat{A}_\mu(x,y), \quad
\widehat{D}_a (x,y) \equiv -i\widehat{\Phi}_a (x,y) = -i\bigl( p_a + \widehat{A}_a(x,y) \bigr), \\
\label{n=4ffluct}
&& \widehat{\Psi} (x, y) = \left(
\begin{array}{c}
P_+ \widehat{\lambda}^i \\
P_- \widetilde{\widehat{\lambda}}_i \\
\end{array}
\right) (x, y),
\end{eqnarray}
where we assumed that fluctuations also depend on vacuum moduli $y^a$.
Note that, if we apply the matrix representation (\ref{matrix-rep}) to the fluctuations
in Eqs. (\ref{n=4bfluct}) and (\ref{n=4ffluct}) again, we recover the original
large $N$ gauge fields in the action (\ref{n=4-action}).
Therefore let us introduce 10-dimensional coordinates $Y^M = (x^\mu, y^a), \; M = 0, 1, \cdots, 9$
and 10-dimensional connections defined by
\begin{equation}\label{10d-conn}
\widehat{D}_M(Y) = \partial_M - i\widehat{A}_M (x,y) = (\widehat{D}_\mu, \widehat{D}_a) (x,y)
\end{equation}
whose field strength is given by
\begin{equation}\label{10d-fs}
\widehat{F}_{MN}(Y) = i [\widehat{D}_M, \widehat{D}_N]_\star
= \partial_M \widehat{A}_N - \partial_N \widehat{A}_M - i[\widehat{A}_M, \widehat{A}_N]_\star.
\end{equation}
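It is instructive to spell out the components of Eq. \eq{10d-fs} with the parametrization \eq{n=4bfluct}.
A short computation gives
\begin{equation}\label{comp-map}
\widehat{F}_{\mu a} = i [\widehat{D}_\mu, \widehat{D}_a]_\star
= \partial_\mu \widehat{\Phi}_a - i [\widehat{A}_\mu, \widehat{\Phi}_a]_\star, \qquad
\widehat{F}_{ab} = i [\widehat{D}_a, \widehat{D}_b]_\star
= - i [\widehat{\Phi}_a, \widehat{\Phi}_b]_\star,
\end{equation}
so the mixed components reproduce the covariant derivatives $D_\mu \Phi_a$ and the internal components
reproduce the commutators $[\Phi_a, \Phi_b]$ appearing in the action (\ref{n=4-action}), while the
constant $B_{MN}$ in the action below accounts for the vacuum value of $\widehat{F}_{ab}$ fixed by
the Moyal-Heisenberg algebra (\ref{n=4moyal}).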
Thus the correspondence between the NC $\star$-algebra $\mathcal{A}_\theta$ and
the matrix algebra $\mathcal{A}_N = \mathrm{End}(\mathcal{H})$ under the Moyal-Heisenberg
vacuum (\ref{n=4moyal}) implies that the master fields of large $N$ matrices
are higher-dimensional NC $U(1)$ gauge fields. In the end large $N$ matrices in $\mathcal{N}=4$
vector multiplet on $\mathbb{R}^{3,1}$ are mapped to NC gauge fields and their superpartners
in $\mathcal{N}=1$ vector multiplet on $\mathbb{R}^{3,1} \times \mathbb{R}_{\theta}^6$
where $\mathbb{R}_\theta^{6}$ is an extra NC space whose coordinate generators $y^a \in \mathcal{A}_\theta$
obey the commutation relation (\ref{extra-nc2n}).
Using the ordering (\ref{matrix-comp}) for $U(N)$ and NC $U(1)$ gauge fields, it is straightforward
to organize the 4-dimensional $\mathcal{N}=4 \; U(N)$ super Yang-Mills theory (\ref{n=4-action})
into the 10-dimensional $\mathcal{N}=1$ NC $U(1)$ super Yang-Mills theory with the action \cite{mypaper}
\begin{equation}\label{10dsym-action}
S = \int d^{10} Y \left\{ - \frac{1}{4G_{YM}^2} (\widehat{F}_{MN} - B_{MN})^2
+ \frac{i}{2} \overline{\widehat{\Psi}} \Gamma^M \widehat{D}_M \widehat{\Psi} \right\}
\end{equation}
where $B$-fields take the same form as Eq. (\ref{equiv-ncu1}). Now the fermion $\widehat{\Psi}(Y)$
is a 10-dimensional gaugino, the superpartner of the 10-dimensional NC $U(1)$ gauge field $\widehat{A}_M(x)$,
that is the Majorana-Weyl spinor of $SO(9,1)$. The action (\ref{10dsym-action}) is invariant
under $\mathcal{N}=1$ supersymmetry transformations given by
\begin{equation}\label{10dsusytr}
\delta \widehat{A}_M = i \overline{\alpha} \Gamma_M \widehat{\Psi}, \qquad
\delta \widehat{\Psi} = \frac{1}{2} (\widehat{F}_{MN} - B_{MN}) \Gamma^{MN} \alpha.
\end{equation}
It should be remarked that the relationship between the 4-dimensional $U(N)$ super Yang-Mills
theory (\ref{n=4-action}) and 10-dimensional NC $U(1)$ super Yang-Mills theory (\ref{10dsym-action})
is not a dimensional reduction but they are exactly equivalent to each other.
Therefore any quantity in lower-dimensional $U(N)$ gauge theory can be transformed into an object
in higher-dimensional NC $U(1)$ gauge theory using the compatible ordering (\ref{matrix-comp}) \cite{mypaper}.
The coherent condensate (\ref{n=4vacuum}) is described by vev's of adjoint scalar fields.
Thus we will call the vacuum (\ref{n=4vacuum}) a ``Coulomb branch'' although $[\Phi_a, \Phi_b]|_{\mathrm{vac}}
\neq 0$.\footnote{\label{n=1}The usual Coulomb branch is defined by $[\Phi_a, \Phi_b]|_{\mathrm{vac}} = 0$
and so $\langle \Phi_a \rangle_{\mathrm{vac}} = \mathrm{diag}(\alpha_{a_1}, \cdots, \alpha_{a_N})$.
In this case the gauge group $U(N)$ or $SU(N+1)$ is broken to $U(1)^N$.
But we remark that the HEA is conjectured to correspond to the choice, $N = 1$ \cite{schwarz}
while the probe brane approximation requires $N \to \infty$.
Therefore the conventional choice of vacuum finds difficulty in explaining
why $N=1$ (i.e., Abelian gauge group) is the proper choice for the HEA.
We emphasize that the Coulomb branch as the NC space (\ref{n=4vacuum}) is a key origin
of emergent gravity and is completely consistent with the HEA because it requires
the $N \to \infty$ limit and preserves only the $U(1)$ gauge group.
Hence our approach sheds light on why HEA preserves only the $U(1)$ gauge symmetry
in spite of $N \to \infty$ which was elusive in the original conjecture
as discussed in section 5 of Ref. \cite{schwarz}.}
However note that $[\Phi_a, \Phi_b]|_{\mathrm{vac}} = - i B_{ab} I_{N \times N}$ takes values
in the center of the gauge group $U(N)$, which may be identified with the unbroken $U(1)$ gauge group.
Hence the Coulombic vacuum (\ref{n=4vacuum}) is compatible with the usual definition of the Coulomb branch.
We also remark that the conformal symmetry of 4-dimensional $\mathcal{N} = 4$ super Yang-Mills theory
is spontaneously broken by the vev (\ref{n=4vacuum}) of scalar fields because
it introduces an NC scale $|\theta| \equiv l^2_{NC}$. But this scale need not be specified because the theories
with different $\theta$'s are SW-equivalent \cite{ncft-sw}. These are also a typical feature of
the Coulomb branch.
Under a Coulomb branch described by the coherent condensate (\ref{n=4vacuum}),
large $N$ matrices in $\mathcal{N}=4$ supersymmetric
gauge theory can be regarded as a linear representation of operators acting on a separable Hilbert
space $\mathcal{H}$ that is the Fock space of the Moyal-Heisenberg vacuum (\ref{n=4moyal}).
Therefore an important point is that a large $N$ matrix $\Phi(x)$ on four-dimensional spacetime $\mathbb{R}^{3,1}$ in the limit $N \to \infty$ on the Coulomb branch (\ref{n=4vacuum}) can be represented by its master field $\widehat{\Phi}(x,y)$ which is a higher-dimensional NC $U(1)$ gauge field or its superpartner.
Since the large $N$ gauge theory (\ref{n=4-action}) on the Coulomb branch (\ref{n=4vacuum}) is mathematically equivalent to the NC $U(1)$ gauge theory described by the action (\ref{10dsym-action}), it should be possible to isomorphically map the 10-dimensional NC $U(1)$ super Yang-Mills theory to a 10-dimensional type IIB supergravity according to the AdS/CFT correspondence \cite{ads-cft1,ads-cft2,ads-cft3}.
Indeed the emergent gravity from NC $U(1)$ gauge fields provides the first principle to found
the large $N$ duality or gauge/gravity duality in a systematic way \cite{mypaper,hsy-jhep09,hsy-jpcs12}.
\section{Commutative and NC D-branes}
The worldvolume action for a D$p$-brane can be viewed as a $(p+1)$-dimensional nonlinear sigma model
with a target space $M$ where
the embedding functions $X^M(\sigma)$ define a map $X: W \to M$ from the $(p+1)$-dimensional
worldvolume $W$ with coordinates $\sigma^\alpha \; (\alpha = 0, 1, \cdots, p)$ to the target space $M$
with coordinates $X^M \; (M = 0, 1, \cdots, 9)$. This embedding induces a worldvolume metric
\begin{equation}\label{ind-metric}
h_{\alpha\beta} = g_{MN} (X) \partial_\alpha X^M \partial_\beta X^N.
\end{equation}
The D-brane action in general contains a dilaton coupling $e^{-\phi}$ where $\phi$ is the 10-dimensional
dilaton field. Then the string coupling constant is defined by $g_s =e^{\langle \phi \rangle}$ where
the vev $\langle \phi \rangle$ at hand is assumed to be constant.
The worldvolume also carries $U(1)$ gauge fields $A_\alpha (\sigma)$ with field strength
\begin{equation}\label{wvgauge}
F_{\alpha\beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha.
\end{equation}
Recall that the Dirac-Born-Infeld (DBI) action is a nonlinear generalization of electrodynamics
with self-interactions of $U(1)$ gauge fields and reproduces the usual Maxwell theory at quadratic order.
In string theory a generalization of this action appears in the context of D$p$-branes.
Open strings ending on the D$p$-brane couple directly to closed string background fields
$(g_{MN}, B_{MN}, \phi)$ in the bulk.
A low energy effective field theory deduced from the open string dynamics on a single D-brane
is obtained by integrating out all the massive modes, keeping only massless fields which are slowly varying
at the string scale $\kappa \equiv 2 \pi \alpha'$. The DBI action describes the dynamics of
$U(1)$ gauge fields on a D-brane worldvolume in the approximation of slowly varying fields,
$\sqrt{\kappa} |\frac{\partial F}{F}| \ll 1$, in the sense of keeping field strengths
(without restriction on their size) but not their derivatives.
The resulting DBI action on a D$p$-brane is given by
\begin{equation}\label{cdbi}
S_1 = - T_{\mathrm{D}p} \int_W d^{p+1} \sigma \sqrt{-\det \bigl(h
+ \kappa \mathcal{F} \bigr)} + \mathcal{O} (\sqrt{\kappa} \partial F, \cdots),
\end{equation}
where
\begin{equation}\label{total-f}
\mathcal{F} \equiv B + F
\end{equation}
is the total $U(1)$ field strength and the D$p$-brane tension is given by
\begin{equation}\label{cdp-tension}
T_{\mathrm{D}p} = \frac{2 \pi}{g_s (2 \pi \kappa)^{\frac{p+1}{2}}}.
\end{equation}
In general the DBI action (\ref{cdbi}) contains derivative corrections
$\mathcal{O} (\sqrt{\kappa} \partial F, \cdots)$. However we will ignore possible terms involving
higher derivatives of fields since we are mostly interested in the approximation that worldvolume
fields are slowly varying. We will also consider the probe brane approximation ignoring
the backreaction of the brane on the geometry and the other background fields.
The worldvolume theory of a D-brane is given as the sum of two terms $S = S_1 + S_2$.
The first term $S_1$ is given by the DBI action (\ref{cdbi}) and the second term $S_2$ is
the form of the Wess-Zumino-type given by
\begin{equation}\label{wz-action}
S_2 = \int_W C_{RR} \wedge e^{\kappa \mathcal{F}}
\end{equation}
where the coupling to background RR $n$-form gauge fields is collected in the formal sum
\begin{equation}\label{rr-field}
C_{RR} = \bigoplus_{n=0}^{10} C_n.
\end{equation}
The coupling $S_2$ reflects a characteristic feature of D-branes: they carry an RR charge \cite{polchinski}
and support the worldvolume gauge fields \eq{wvgauge}.
Some important remarks are in order. The DBI action (\ref{cdbi}) respects several local gauge symmetries.
It has $(p+1)$-dimensional general coordinate invariance since the integrand transforms
as a scalar density under Diff$(W)$. It also admits the so-called $\Lambda$-symmetry:
\begin{equation}\label{l-symmetry}
(B, A) \mapsto (B-d\Lambda, A + \Lambda)
\end{equation}
where the two-form $B \equiv X^* \bigl( B_{\mathrm{bulk}} \bigr)$ is the pull-back of target space
$B$-field $B_{\mathrm{bulk}}$ to the worldvolume $W$ and the gauge parameter $\Lambda$ is a one-form
in $\Gamma(T^* W)$. Let $(W, B)$ be a symplectic manifold. The symplectic structure $B$ is a nondegenerate,
closed two-form, i.e. $dB=0$, and so it can be locally written as $B = d \xi$ by the Poincar\'e lemma.
The $B$-field transformation (\ref{l-symmetry}) can then be understood as a shift of the canonical
one-form, $\xi \to \xi - \Lambda$. An important point for us is that the symplectic structure defines
a bundle isomorphism $B: TW \to T^* W$ by $X \mapsto \Lambda = - \iota_X B$.
Thus the $B$-field transformation (\ref{l-symmetry}) is equivalent to
$(B, A) \mapsto (B + \mathcal{L}_X B, A - \iota_X B)$ where $\mathcal{L}_X = d\iota_X
+ \iota_X d$ is the Lie derivative with respect to the vector field $X$. Since vector fields
are infinitesimal generators of local coordinate transformations, in other words,
Lie algebra generators of Diff$(W)$, the $B$-field transformation (\ref{l-symmetry}) can be identified
with a coordinate transformation generated by a vector field $X \in \Gamma(TW)$.
Consequently the $\Lambda$-symmetry (\ref{l-symmetry}) can be considered on par
with diffeomorphisms \cite{mypaper,hsy-jhep09}. Moreover it is well-known \cite{sdbi1,sdbi12,sdbi3,sdbi4,sdbi5,sdbi6}
that the D-brane worldvolume theory has a local fermionic symmetry called ``$\kappa$-symmetry''
if fermion coordinates $\psi^\alpha \; (\alpha=1, \cdots, 32)$
are included in the target spacetime with supercoordinates $Z^{\mathbf{M}} = (X^M, \psi^\alpha)$.
See a recent review \cite{jsimon} for brane effective actions with the $\kappa$-symmetry.
In sum, the worldvolume theory of a supersymmetric D-brane admits the following local gauge symmetries:
(I) Diff$(W)$, (II) $\Lambda$-symmetry, and (III) $\kappa$-symmetry.
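To make the identification of the $\Lambda$-symmetry with a coordinate transformation explicit,
note that $dB = 0$ implies
\begin{equation}\label{lie-b}
\mathcal{L}_X B = (d \iota_X + \iota_X d) B = d \iota_X B = - d \Lambda
\end{equation}
for $\Lambda = - \iota_X B$, so the transformed two-form in Eq. \eq{l-symmetry} can be written as
$B - d\Lambda = (1 + \mathcal{L}_X) B$, i.e., as the action of a diffeomorphism on $B$ to linear order
in the vector field $X$.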
We can use the general coordinate invariance of the action $S = S_1 + S_2$ to eliminate unphysical
degrees of freedom. We choose a static gauge so that $X^M = \bigl( x^\mu (\sigma), \phi^a (\sigma) \bigr)
= \bigl(\delta^\mu_\alpha \sigma^\alpha, \phi^a(x) \bigr)$ where $\mu = 0, \cdots, p$ and
$a = p+1, \cdots, 9$. The $(9-p)$ coordinates $\phi^a (x)$ will be identified as the worldvolume scalar
fields of the D$p$-brane. In this gauge the metric \eq{ind-metric} becomes
\begin{equation}\label{wv-metric}
h_{\mu\nu} = \eta_{\mu\nu} + \partial_\mu \phi^a \partial_\nu \phi^a
\end{equation}
where we assumed $g_{MN} (X) = \eta_{MN}$ for the target spacetime.
Now we focus on a D9-brane for which there are no worldvolume scalar fields, i.e., $\phi^a = 0$
and so $h_{MN} = g_{MN}$. Suppose that the D9-brane supports the two-form $B$-field with $\mathrm{rank}(B) = 6$.
In this case it is convenient to split the worldvolume coordinates $X^M = \sigma^M$ in the static gauge
into two parts, $X^M = (x^\mu, z^a), \; \mu=0,1,2, 3, \; a=1, \cdots, 6$,
so that $B = \frac{1}{2} B_{ab} dz^a \wedge dz^b$.
Then the total field strength \eq{total-f} takes the form
\begin{equation}\label{matrix-10f}
\mathcal{F}_{MN} = \left(
\begin{array}{cc}
F_{\mu\nu} & F_{\mu a} \\
F_{a \mu} & B_{ab} + F_{ab} \\
\end{array}
\right).
\end{equation}
It is well-known \cite{ncft-sw} that the open string gives rise to the NC geometry
when the two-form $B$-field is present on a D-brane worldvolume.
The D-brane dynamics in the static gauge is then described by $U(1)$ gauge fields on
an NC spacetime with coordinates $Y^M = (x^\mu, y^a)$ obeying the commutation relation \eq{extra-nc2n}.
The resulting DBI action on the NC D9-brane is given by
\begin{equation}\label{ncdbi}
\widehat{S}_1 = - T_{9} \int d^{10} Y \sqrt{-\det \bigl(G
+ \kappa (\widehat{F} + \Phi ) \bigr)} + \mathcal{O} (\sqrt{\kappa} \widehat{D}\widehat{F}, \cdots),
\end{equation}
where the NC $U(1)$ field strength $\widehat{F}_{MN}(Y)$ is given by Eq. \eq{d-ncfs} and
the NC D9-brane tension is
\begin{equation}\label{dp-tension}
T_{9} = \frac{2 \pi}{G_s (2 \pi \kappa)^{5}}.
\end{equation}
The open string moduli $(G, \Phi, G_s)$ in the NC description \eq{ncdbi} are related
to the closed string moduli $(g, B, g_s)$ in the commutative description \eq{cdbi} by \cite{ncft-sw}
\begin{eqnarray}\label{open-closed1}
&& \frac{1}{g + \kappa B} = \frac{1}{G + \kappa \Phi} + \frac{\theta}{\kappa}, \\
\label{open-closed2}
&& G_s = g_s \sqrt{\frac{\det(G + \kappa \Phi)}{\det (g + \kappa B)}}
= g_s \left( \frac{\det G}{\det g} \right)^{\frac{1}{4}},
\end{eqnarray}
where the two-form $\Phi$ parameterizes some freedom in the description of commutative and
NC gauge theories. It is worthwhile to remark that the NC DBI action \eq{ncdbi} can be obtained
by applying the (exact) SW map to the commutative one \eq{cdbi} \cite{liu,jsw2,ban-yan},
as will be shown later. Similarly the Wess-Zumino-type term $\widehat{S}_2$ for the NC D9-brane
can be obtained from the RR couplings in Eq. \eq{wz-action} for a commutative D9-brane
by considering the (exact) SW map \cite{liu,nc-coupling}.
Let us expand the NC DBI action \eq{ncdbi} in powers of $\kappa$.
First note that
\begin{eqnarray}\label{det-exp}
\sqrt{-\det \bigl(G + \kappa (\widehat{F} + \Phi ) \bigr)} &=& \sqrt{-\det G}
\sqrt{\det (1 + \kappa M)} \\
&=& \sqrt{-\det G} \Bigl( 1 - \frac{\kappa^2}{4} \mathrm{Tr} M^2
- \frac{\kappa^4}{8} \mathrm{Tr} M^4 + \frac{\kappa^4}{32} \bigl (\mathrm{Tr} M^2 \bigr)^2
+ \cdots \Bigr), \nonumber
\end{eqnarray}
where
\begin{equation}\label{def-m}
{M_N}^Q \equiv (\widehat{F} + \Phi)_{NP}G^{PQ}
\end{equation}
and so $\mathrm{Tr} M = 0$. At nontrivial leading orders, we find
\begin{equation}\label{exp-ncym}
\widehat{S}_1 = - T_{9} \int d^{10} Y \sqrt{-\det G} - \frac{1}{4G_{YM}^2} \int d^{10} Y \sqrt{-\det G}
G^{MP} G^{NQ} (\widehat{F} + \Phi)_{MN}(\widehat{F} + \Phi)_{PQ} + \mathcal{O} (\kappa^4),
\end{equation}
where the 10-dimensional Yang-Mills coupling constant is given by
\begin{equation}\label{10ymcs}
G_{YM}^2 = (\kappa^2 T_9)^{-1} = (2\pi)^4 \kappa^3 G_s.
\end{equation}
In the case at hand, the open string metric can be set to be flat, i.e., $G_{MN} = \eta_{MN}$.
The first term of $\widehat{S}_1$ is a vacuum energy due to the D-brane tension which will be
canceled against a contribution from $\widehat{S}_2$ \cite{schwarz,ads-cft1}.
The second term is precisely equal to the bosonic part of the action \eq{10dsym-action}
when the background independent prescription is employed, i.e., $\Phi = - B$ \cite{ncft-sw}.
Therefore we will consider the 10-dimensional $\mathcal{N}=1$ NC $U(1)$ super Yang-Mills theory
\eq{10dsym-action} as a nontrivial leading approximation of the supersymmetric completion of the NC DBI
action \eq{ncdbi}. The supersymmetric completion with the $\kappa$-symmetry will be discussed in section 5.
\section{AdS/CFT correspondence from NC $U(1)$ gauge fields}
In their famous paper \cite{ncft-sw}, Seiberg and Witten showed that there exists an equivalent commutative
description of the low energy effective theory for the open string ending on a NC D-brane.
From the point of view of open string sigma model, an explicit form of the effective action depends
on the regularization scheme of two-dimensional field theory.
The difference due to different regularizations is always in a choice of contact terms,
leading to the redefinition of coupling constants which are spacetime fields.
So low energy field theories defined with different regularizations should be related to each other
by the field redefinitions in spacetime. Now we will explain how the NC DBI action \eq{ncdbi} arises
from a low energy effective action in a curved background that will be identified with the HEA speculated
by John H. Schwarz \cite{schwarz}. First we identify a commutative description that is SW-equivalent
to the NC DBI action \eq{ncdbi}. From a conventional approach, the answer is obvious. It is given by
the D9-brane action \eq{cdbi} (with $p=9$) with the field strength \eq{matrix-10f}.
But, for our purpose, it is more proper to consider the NC DBI action \eq{ncdbi} as a particular
commutative limit of the full NC D9-brane described by the star product
\begin{equation}\label{10d-star-prod}
(\widehat{f} \star \widehat{g}) (Y) = e^{\frac{i}{2} \Theta^{MN} \frac{\partial}{\partial Y^M}
\otimes \frac{\partial}{\partial Z^N}} \widehat{f}(Y) \widehat{g}(Z)|_{Y=Z}
\end{equation}
for $\widehat{f}(Y), \widehat{g}(Y) \in C^\infty (\mathbb{R}^{10})$. We implicitly assumed
the Wick rotation, $\mathbb{R}^{9,1} \to \mathbb{R}^{10}$, although it is simply formal because
we eventually come back to the space $\mathbb{R}^{3,1} \times \mathbb{R}^{6}_\theta$.
For this purpose, it is convenient to take the split $\Theta^{MN} = (\zeta^{\mu\nu}, \theta^{ab})$
where an $SO(10)$ rotation was used to put $\zeta^{\mu a} = 0$.
We intend to understand the star product \eq{star-prod} as a particular case of
Eq. \eq{10d-star-prod} with $\zeta^{\mu\nu} = 0$.
Later we will explain why the star product \eq{10d-star-prod} is more relevant for our context,
especially, from the viewpoint of emergent spacetime. Hence we need to identify a commutative DBI action
that is SW-equivalent to the NC DBI action \eq{ncdbi}, instead, using the star product \eq{10d-star-prod}.
It is given by the D9-brane action \eq{cdbi} with the $U(1)$ field strength
\begin{equation}\label{10t-fs}
\mathcal{F} = \frac{1}{2} \mathcal{F}_{MN} (X) dX^M \wedge dX^N = \frac{1}{2}
\bigl(B_{MN} + F_{MN}(X) \bigr) dX^M \wedge dX^N = B + F
\end{equation}
where $B = \Theta^{-1}$ and $\mathrm{rank}(B) = 10$.
We will assume that $\mathcal{F}$ is also nondegenerate, i.e., $\det(1 + F \Theta) \neq 0$.
In order to derive the HEA, it is enough to employ the logic expounded
in appendix A of Ref. \cite{mypaper}. Note that $\mathcal{F}$ in Eq. \eq{10t-fs} is the gauge
invariant quantity under the $\Lambda$-symmetry \eq{l-symmetry}.
In other words, the dynamical $U(1)$ gauge fields should appear only
as the combination \eq{10t-fs}. In particular, we can use the $\Lambda$-symmetry \eq{l-symmetry} so that
the $B$-field in Eq. \eq{10t-fs} is constant.
Then $dB=0$ trivially and $B$ is nondegenerate because of $\mathrm{rank}(B) = 10$.
Therefore $(\mathbb{R}^{10}, B)$ is a symplectic manifold.
Moreover, $(\mathbb{R}^{10}, \mathcal{F})$ is also a symplectic manifold since $d\mathcal{F}=0$ and
$\mathcal{F}$ is nondegenerate by our assumption. Then we can realize an important identity
\begin{equation}\label{darboux}
\mathcal{F} = (1 + \mathcal{L}_X) B
\end{equation}
as we explained below Eq. \eq{l-symmetry}. It implies that there exists a local coordinate transformation
$\phi \in \mathrm{Diff}(M)$ such that $\phi^* (\mathcal{F}) = B$, i.e.,
$\phi^* = (1 + \mathcal{L}_X)^{-1} \approx e^{-\mathcal{L}_X}$.
This statement is the famous theorem in symplectic geometry known as
the Darboux theorem \cite{sg-book1,sg-book2}. Its global statement is known as the Moser lemma \cite{moser}.
The Darboux theorem states that it is always possible to find a local coordinate
transformation $\phi \in \mathrm{Diff}(M)$ which eliminates dynamical $U(1)$ gauge fields
in $\mathcal{F}$. That is, in terms of local coordinates, there exists $\phi: Y \mapsto X = X(Y)$ so that
\begin{equation}\label{darboux-local}
\bigl(B_{MN} + F_{MN}(X) \bigr) \frac{\partial X^M}{\partial Y^P}
\frac{\partial X^N}{\partial Y^Q} = B_{PQ}.
\end{equation}
If we represent the local coordinate transformation by
\begin{equation}\label{cov-cod}
X^M (Y) = Y^M + \Theta^{MN} \widehat{A}_N (Y),
\end{equation}
Eq. \eq{darboux-local} can be written as
\begin{equation}\label{sym-gauge}
\mathfrak{P}^{MN} (X) \equiv \bigl(\mathcal{F}^{-1} \bigr)^{MN} (X) = \{ X^M (Y), X^N(Y) \}_\Theta
\end{equation}
where we introduced the Poisson bracket defined by
\begin{equation}\label{poisson-bra}
\{f(Y), g(Y)\}_\Theta = \Theta^{MN} \frac{\partial f(Y)}{\partial Y^M}
\frac{\partial g(Y)}{\partial Y^N}
\end{equation}
for $f, g \in C^\infty (\mathbb{R}^{10})$. We will call $\widehat{A}_M (Y)$ in Eq. \eq{cov-cod}
symplectic gauge fields and $X^M(Y)$ covariant (dynamical) coordinates.
The field strength of symplectic gauge fields is defined by
\begin{equation}\label{symp-f}
\widehat{F}_{MN} = \partial_M \widehat{A}_N - \partial_N \widehat{A}_M
+ \{ \widehat{A}_M, \widehat{A}_N \}_\Theta.
\end{equation}
Then Eq. \eq{sym-gauge} gives us the relation
\begin{equation}\label{new-poisson}
\mathfrak{P}^{MN} = [\Theta (B - \widehat{F})\Theta]^{MN}.
\end{equation}
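Eq. \eq{new-poisson} follows from a short computation: substituting Eq. \eq{cov-cod} into
the Poisson bracket \eq{poisson-bra} gives
\begin{eqnarray}\label{poisson-check}
\{ X^M, X^N \}_\Theta &=& \Theta^{MN} + \Theta^{MA} \Theta^{NB}
\bigl( \partial_A \widehat{A}_B - \partial_B \widehat{A}_A
+ \{ \widehat{A}_A, \widehat{A}_B \}_\Theta \bigr) \nonumber \\
&=& \Theta^{MN} - (\Theta \widehat{F} \Theta)^{MN}
= [\Theta (B - \widehat{F}) \Theta]^{MN},
\end{eqnarray}
where the antisymmetry of $\Theta$ and $B = \Theta^{-1}$ were used in the last step.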
By solving this equation, we obtain the semi-classical version of the SW map \cite{cornalba,jur-sch,liu}:
\begin{eqnarray}\label{sw-mapf}
&& \widehat{F}_{MN} (Y) = \left( \frac{1}{1 + F\Theta} F \right)_{MN} (X), \\
\label{sw-mapv}
&& d^{10} Y = d^{10} X \sqrt{\det(1 + F\Theta)},
\end{eqnarray}
where the second equation is derived from Eq. \eq{darboux-local} by taking the determinant
on both sides.
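For completeness, the determinant computation runs as follows. Writing $J^M{}_P = \partial X^M/\partial Y^P$,
Eq. \eq{darboux-local} reads $J^T (B + F) J = B$ in matrix notation, so
\begin{equation}\label{det-j}
(\det J)^2 = \frac{\det B}{\det (B + F)} = \frac{1}{\det (1 + \Theta F)}
= \frac{1}{\det (1 + F \Theta)},
\end{equation}
and hence $d^{10} Y = |\det J|^{-1} d^{10} X = d^{10} X \sqrt{\det(1 + F\Theta)}$,
which is Eq. \eq{sw-mapv}.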
The coordinate transformation \eq{darboux-local} leads to the identity
\begin{equation}\label{tr-dbi}
g_{MN} + \kappa \mathcal{F}_{MN} = \bigl(\mathcal{G}_{PQ} + \kappa B_{PQ} \bigr)
\frac{\partial Y^P}{\partial X^M} \frac{\partial Y^Q}{\partial X^N}
\end{equation}
where the dynamical (emergent) metric is defined by
\begin{equation}\label{de-metric}
\mathcal{G}_{MN} = g_{PQ} \frac{\partial X^P}{\partial Y^M}
\frac{\partial X^Q}{\partial Y^N}.
\end{equation}
The identity \eq{tr-dbi} in turn leads to a remarkable identity between DBI actions:
\begin{eqnarray} \label{dbi-idc}
\frac{1}{g_s} \int d^{10} X \sqrt{\det \bigl( g + \kappa \mathcal{F} \bigr)}
&=& \frac{1}{g_s} \int d^{10} Y \sqrt{\det \bigl(\mathcal{G} + \kappa B \bigr)} \\
\label{dbi-idn}
&=& \frac{1}{G_s} \int d^{10} Y \sqrt{\det \bigl(G + \kappa (\widehat{F} + \Phi ) \bigr)}.
\end{eqnarray}
It is straightforward to derive the second identity \eq{dbi-idn} by using Eqs. \eq{open-closed1}
and \eq{open-closed2} and the SW maps \eq{sw-mapf} and \eq{sw-mapv}.
For the derivation of Eq. \eq{dbi-idn},
see Eq. (5.10) in Ref. \cite{liu} and section 3.4 of Ref. \cite{jsw2}.
It may be instructive to check Eq. \eq{dbi-idn} by expanding the right-hand side (RHS) of Eq. \eq{dbi-idc}
around the background $B$-field, i.e.,
\begin{eqnarray}\label{exp-dbi}
\sqrt{\det \bigl(\mathcal{G} + \kappa B \bigr)} &=& \sqrt{\det \bigl( \kappa B \bigr)}
\sqrt{\det \Bigl(1 + \frac{M}{\kappa} \Bigr)} \nonumber \\
&=& \sqrt{\det \bigl( \kappa B \bigr)} \Bigl( 1 - \frac{1}{4\kappa^2} \mathrm{Tr} M^2
- \frac{1}{8\kappa^4} \mathrm{Tr} M^4 + \frac{1}{32\kappa^4} \bigl (\mathrm{Tr} M^2 \bigr)^2
+ \cdots \Bigr),
\end{eqnarray}
where
\begin{equation}\label{matrix-gm}
{M_N}^Q = \mathcal{G}_{NP} \Theta^{PQ}
\end{equation}
and
\begin{equation}\label{tr-pm}
\mathrm{Tr} M^2 = \mathrm{Tr} (g \mathfrak{P})^2, \qquad \mathrm{Tr} M^4 = \mathrm{Tr} (g \mathfrak{P})^4.
\end{equation}
But it is not difficult to show that $\mathrm{Tr} M^{2n} = \mathrm{Tr} (g \mathfrak{P})^{2n}, \;
\mathrm{Tr} M^{2n+1} = \mathrm{Tr} (g \mathfrak{P})^{2n+1} = 0$ for $n \in \mathbb{N}$ and thus
\begin{equation}\label{det-id}
\det \Bigl(1 + \frac{M}{\kappa} \Bigr) = \det \Bigl(1 + \frac{1}{\kappa} g \mathfrak{P} \Bigr)
\end{equation}
using the expansion of the determinant (see Eq. (4.30) in Ref. \cite{sdbi4}).
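In fact $M$ and $g \mathfrak{P}$ are similar matrices: writing ${J^P}_M = \partial X^P / \partial Y^M$, Eqs. \eq{sym-gauge} and \eq{de-metric} give $\mathfrak{P} = J \Theta J^T$ and $\mathcal{G} = J^T g J$, so that
\[
M = \mathcal{G} \Theta = J^T ( g \mathfrak{P} ) \bigl( J^T \bigr)^{-1},
\qquad
\mathrm{Tr}\, (g \mathfrak{P})^{2n+1} = (-1)^{2n+1} \, \mathrm{Tr}\, (g \mathfrak{P})^{2n+1} = 0,
\]
where the second relation uses the symmetry of $g$, the antisymmetry of $\mathfrak{P}$, and the cyclicity of the trace.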
Then, using the result \eq{new-poisson}, the expansion in Eq. \eq{exp-dbi} can be arranged into the form
\begin{eqnarray}\label{rexp-dbi}
\sqrt{\det \bigl(\mathcal{G} + \kappa B \bigr)} &=& \sqrt{\frac{\det \bigl( \kappa B \bigr)}{\det G}}
\sqrt{\det \bigl(G + \kappa (\widehat{F} - B ) \bigr)} \nonumber \\
&=& \frac{g_s}{G_s} \sqrt{\det \bigl(G + \kappa (\widehat{F} - B ) \bigr)},
\end{eqnarray}
where
\begin{equation}\label{metric-coupling}
G_{MN} = - \kappa^2 (B g^{-1} B)_{MN}, \qquad G_s = g_s \sqrt{\det \bigl(\kappa B g^{-1} \bigr)}
\end{equation}
are the open string metric and coupling constant, respectively,
in the background independent prescription, i.e., $\Phi = - B$ \cite{ncft-sw}.
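As a quick consistency check of the prefactor in Eq. \eq{rexp-dbi}, note that Eq. \eq{metric-coupling} gives, in ten (Euclidean) dimensions,
\[
\det G = \kappa^{20}\, \frac{(\det B)^2}{\det g}
\qquad \Longrightarrow \qquad
\sqrt{\frac{\det (\kappa B)}{\det G}} = \frac{1}{\sqrt{\det (\kappa B g^{-1})}} = \frac{g_s}{G_s} .
\]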
In order to demonstrate how $2n$-dimensional K\"ahler metrics arise from $U(1)$ gauge fields,
in appendix A, we will solve the identities (\ref{dbi-idc}) and (\ref{dbi-idn}).
In particular, it is shown that Calabi-Yau $n$-folds for $n=2$ and $3$ are emergent
from symplectic $U(1)$ instantons in four and six dimensions, respectively.
NC $U(1)$ gauge fields are obtained by quantizing symplectic gauge fields.
The quantization in our case is simply defined by the canonical quantization of
the Poisson algebra $\mathfrak{P} = (C^\infty(\mathbb{R}^{10}), \{-,-\}_\Theta)$.
The quantization map $\mathcal{Q}: C^\infty(\mathbb{R}^{10}) \to \mathcal{A}_\theta$ by
$f \mapsto \mathcal{Q}(f) \equiv \widehat{f}$ is a $\mathbb{C}$-linear algebra homomorphism
defined by
\begin{equation}\label{q-rule}
f \cdot g \mapsto \widehat{f \star g} = \widehat{f} \cdot \widehat{g}
\end{equation}
and
\begin{equation}\label{quantum-prod}
f \star g \equiv \mathcal{Q}^{-1} \Bigl( \mathcal{Q}(f) \cdot \mathcal{Q}(g) \Bigr)
\end{equation}
for $f, g \in C^\infty(\mathbb{R}^{10})$ and $\widehat{f}, \widehat{g} \in \mathcal{A}_\theta$.
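For the canonical (Moyal-type) quantization considered here, the leading terms of the star product are the pointwise product deformed by the Poisson bracket,
\[
f \star g = fg + \frac{i}{2}\, \Theta^{MN} \partial_M f\, \partial_N g + \mathcal{O}(\Theta^2),
\qquad
-i [f, g]_\star = \{f, g\}_\Theta + \mathcal{O}(\Theta^3),
\]
so the quantization map $\mathcal{Q}$ deforms the Poisson algebra $\mathfrak{P} = (C^\infty(\mathbb{R}^{10}), \{-,-\}_\Theta)$ in a controlled way.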
The above star product is given by Eq. \eq{10d-star-prod} \cite{ncft-rev}. The DBI action \eq{ncdbi}
for the NC D9-brane relevant to the NC $U(1)$ gauge theory \eq{10dsym-action} is then obtained by
simply considering a particular NC parameter $\Theta^{MN} = (\zeta^{\mu\nu}, \theta^{ab})$
with $\zeta^{\mu\nu} = 0$. We understand the limit $\zeta^{\mu\nu} \to 0$ as $|\zeta|^2
\equiv G_{\mu\rho} G_{\nu\sigma} \zeta^{\mu\nu} \zeta^{\rho\sigma} = \kappa^2 |\kappa B_{\mu \lambda}
g^{\lambda\rho}|^2 \ll \kappa^2$ where the open string metric in Eq. \eq{metric-coupling} was used.
This means that $g_{\mu\nu} + \kappa B_{\mu\nu} = (\delta^\rho_\mu + \kappa B_{\mu\lambda}
g^{\lambda\rho}) g_{\rho\nu} \approx g_{\mu\nu}$, in other words,
the metric part in the DBI background $g_{\mu\nu} + \kappa B_{\mu\nu}$ is dominant so that
the $B$-field part can be ignored.
Why do we need to take the limit $\zeta^{\mu\nu} \to 0$ instead of simply putting $\zeta^{\mu\nu} = 0$?
Actually the answer touches on the most beautiful aspect of emergent gravity.
In the emergent gravity picture, no spacetime structure is assumed {\it a priori}; rather, any
spacetime structure is defined by the theory itself. To put it succinctly, the theory of emergent gravity must be
background independent. Hence it is necessary to define a configuration in the algebra $\mathcal{A}_\theta$,
for instance, like Eq. (\ref{extra-nc2n}), to generate any kind of spacetime structure,
even for flat spacetime. Emergent gravity then says that the flat spacetime is emergent from
the Moyal-Heisenberg algebra (\ref{extra-nc2n}).
In other words, even the flat spacetime must have a dynamical origin \cite{mypaper,hsy-jhep09,hsy-jpcs12},
which is absent in general relativity.
This picture is also supported by the identity \eq{dbi-idc}.
Note that the dynamical variables on the RHS of Eq. \eq{dbi-idc} are
(emergent) metric fields, $\mathcal{G}_{MN} (Y)$, whereas those on the left-hand side (LHS) are
$U(1)$ gauge fields, $F_{MN} (X)$, in a specific background $(g,B)$.
Therefore the gravitational fields $\mathcal{G}_{MN} (Y)$ are completely determined by dynamical $U(1)$
gauge fields and so the former is emergent from the latter. When $U(1)$ gauge fields are turned off,
the emergent metric reduces to the flat metric, i.e., $\mathcal{G}_{MN} = g_{MN}$.
But the background $B$-field still persists and it can be regarded
as a vacuum gauge field $A^{(0)}_M = - \frac{1}{2} B_{MN} X^N$.
Then it is natural to think that the flat metric $g_{MN}$ is emergent from the vacuum gauge fields $A^{(0)}_M$.
This remarkable picture can be rigorously confirmed from a background independent formulation,
e.g., matrix models \cite{mypaper,hsy-jhep09,hsy-jpcs12}. In consequence, no spacetime
structure exists {\it a priori}; rather, the existence of spacetime requires
a coherent condensate of vacuum gauge fields. Nature allows ``no free lunch.''
As a result, the usual commutative spacetime
has to be understood as a {\it commutative} limit of NC spacetime as we advocated above.
Indeed we do not know how to reproduce the NC DBI action \eq{ncdbi} via the identity \eq{dbi-idc}
starting with the $U(1)$ field strength \eq{matrix-10f}.\footnote{Note that
the Darboux theorem \eq{darboux-local} can be applied only to a symplectic form, i.e.,
a nondegenerate and closed 2-form. But the dynamical 2-form $F$ does not belong to this category
because it usually vanishes at asymptotic infinity.}
Note that the coordinate transformation \eq{darboux-local} to a Darboux frame is defined only locally
and symplectic or NC gauge fields have been introduced to compensate local deformations of
an underlying symplectic structure by $U(1)$ gauge fields, i.e., the Darboux coordinates in $\phi:Y
\mapsto X=X(Y) \in \mathrm{Diff}(\mathbb{R}^{10})$ obey the relation $\phi^* (B+F) = B$.
The identity (\ref{rexp-dbi}) also reflects the local nature of NC gauge fields, in that
they manifest themselves only in a locally inertial frame (in free fall) with the
local metric (\ref{de-metric}) \cite{mypaper}. If the gravitational metric in Eq. (\ref{rexp-dbi})
were represented by a global form, e.g.,
\begin{equation}\label{global-metric}
\mathcal{G}_{MN} = g_{AB} E^A_M E^B_N, \qquad A, B = 0, 1, \cdots, 9
\end{equation}
where $E^A = E^A_M dx^M$ are elements of a global coframe on an emergent 10-dimensional
manifold $\mathcal{M}$, it would be difficult to find an imprint of symplectic or NC gauge
fields in the expression \eq{global-metric}.
Recall that the basic program of differential geometry is that all the world can be reconstructed
from the infinitely small. For example, manifolds are obtained by gluing open subsets of Euclidean space.
So the differential forms and vector fields on a manifold are defined locally and then glued together
to yield a global object. The gluing is possible because these objects are independent of the choice
of local coordinates. In reality this kind of globalization of a (spacetime) geometry by
gluing local data may be forced upon us because global comparison devices are not available,
owing to the restriction of the finite propagation speed.
Indeed the global metric \eq{global-metric} can be constructed in a similar way.
First note that the D9-brane described by the LHS of Eq. \eq{dbi-idc} supports
a line bundle $L \to \mathbb{R}^{10}$ over a symplectic manifold $(\mathbb{R}^{10}, B)$.
Introduce an open covering $\{U_i: i \in I \}$ of $\mathbb{R}^{10}$, i.e.,
$\mathbb{R}^{10} = \bigcup_{i \in I} U_i$ and let $A^{(i)}$ be a connection of the line
bundle $L \to U_i$ on an open neighborhood $U_i$.
Consider all compatible coordinate systems $\{ (U_i, \varphi_i): i \in I \}$
as a family of local Darboux charts where $\varphi_i: U_i \to \mathbb{R}^{10}$ are
Darboux coordinates on $U_i$. Then we have the collection of
local data $\bigoplus_{i \in I}(A^{(i)}, Y_{(i)})$ on the D9-brane
where $Y_{(i)} = \varphi_i (U_i)$ are Darboux coordinates on $U_i$ obeying Eq. \eq{darboux-local},
i.e., $\varphi_i^* (B + F^{(i)}) =B$ where $F^{(i)} = d A^{(i)}$.
On an intersection $U_i \cap U_j$, local data $(A^{(i)}, Y_{(i)})$ and $(A^{(j)}, Y_{(j)})$
on Darboux charts $U_i$ and $U_j$, respectively, are glued together by \cite{jsw-ncl,buba-me}
\begin{eqnarray} \label{glue-g}
&& A^{(j)} = A^{(i)} + d\lambda^{(ji)}, \\
\label{glue-d}
&& Y_{(j)} = \varphi_{(ji)} (Y_{(i)}),
\end{eqnarray}
where $\varphi_{(ji)}$ is a symplectomorphism on $U_i \cap U_j$ generated by a Hamiltonian
vector field $X_{\lambda^{(ji)}}$ obeying $\iota_{X_{\lambda^{(ji)}}} B + d \lambda^{(ji)} = 0$.
Note that the symplectomorphism is a canonical transformation preserving
the Poisson structure $\Theta = B^{-1}$ and can be identified with a NC $U(1)$
gauge transformation upon quantization \cite{hsy-ijmp09,ncft-rev}.
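Though not spelled out here, consistency of the gluing \eq{glue-g} on triple overlaps works exactly as for an ordinary line bundle: on $U_i \cap U_j \cap U_k$ the three gauge parameters must satisfy
\[
d \bigl( \lambda^{(ji)} + \lambda^{(kj)} + \lambda^{(ik)} \bigr) = 0,
\]
so their sum is locally constant, which is the usual cocycle condition defining the line bundle $L$.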
Since the local metric (\ref{de-metric}) is the incarnation of
symplectic gauge fields in a Darboux frame, the gluing of local Darboux charts can be translated into
that of emergent metrics in locally inertial frames from the viewpoint of the RHS
of Eq. \eq{dbi-idc}. This kind of gluing should be well-defined because every manifold can be constructed
by gluing open subsets of Euclidean space together and both sides of Eq. \eq{dbi-idc} are coordinate
independent and so local Darboux charts can be consistently glued together. See Ref. \cite{lry3}
for an illustration of how a nontrivial topology of an emergent manifold can be implemented by gluing local
data $\bigoplus_{i \in I}(A^{(i)}, Y_{(i)})$.
It is now in order to ponder the results obtained. We showed in section 2 that the 4-dimensional
$\mathcal{N}=4$ super Yang-Mills theory on the Coulomb branch \eq{n=4vacuum} is equivalent to
the 10-dimensional $\mathcal{N}=1$ supersymmetric NC $U(1)$ gauge theory. And we considered
the resulting 10-dimensional NC $U(1)$ gauge theory as a low-energy effective theory
of supersymmetric NC D9-brane. Finally we got the important identity (\ref{rexp-dbi})
that the dynamics of NC $U(1)$ gauge fields after ignoring fermion fields is completely encoded
into a 10-dimensional emergent geometry described by the metric \eq{global-metric}.
According to the AdS/CFT correspondence, it is natural to expect that the metric \eq{global-metric}
must describe a 10-dimensional emergent geometry dual to the 4-dimensional $\mathcal{N}=4$ super
Yang-Mills theory. An immediate question is how to realize the $AdS_5 \times \mathbb{S}^5$
vacuum geometry in our context.
Since there is no reason to further reside in Euclidean space, let us go back to the Lorentzian
spacetime with the NC parameter $\Theta^{MN} = (\zeta^{\mu\nu} = 0, \theta^{ab} \neq 0)$ by Wick rotation.
In order to pose the above question, let us consider a more general vacuum geometry which
is conformally flat. That is, we are interested in a background geometry with the metric given by
\begin{equation}\label{10vacgeo}
ds^2 = \lambda^2 (\eta_{\mu\nu} dx^\mu dx^\nu + dy^a dy^a).
\end{equation}
There are two interesting cases which are conformally flat \cite{mypaper}:
\begin{eqnarray} \label{vacgeo1}
&& \lambda^2=1 \qquad \Rightarrow \quad \mathcal{M} = \mathbb{R}^{9,1}, \\
\label{vacgeo2}
&& \lambda^2=\frac{R^2}{\rho^2} \quad \; \Rightarrow \quad \mathcal{M} = AdS_5 \times \mathbb{S}^5,
\end{eqnarray}
where $\rho^2 = \sum_{a=1}^6 y^a y^a$ and $R = \bigl (4 \pi g_s (\alpha')^2 N \bigr)^{1/4}$ is
the radius of $AdS_5$ and $\mathbb{S}^5$ spaces.
We already speculated before that the flat Minkowski
spacetime (\ref{vacgeo1}) arises from a uniform condensate of vacuum gauge fields
$A^{(0)}_M = - \frac{1}{2} B_{MN} X^N$. This can be confirmed by looking at the vacuum
configuration \eq{n=4vacuum}. Note that, from the 4-dimensional gauge theory point of view,
the vacuum configuration \eq{n=4vacuum} simply represents a particular configuration of large $N$ matrices
and it is interpreted as an extra 6-dimensional ``emergent'' space only in the 10-dimensional description.
Its tangible existence must be addressed from the RHS of Eq. \eq{dbi-idc}.
(See section 1 in Ref. \cite{mypaper} for the rationale underlying this reasoning.)
Then it is easy to prove that the emergent metric \eq{de-metric} for the vacuum
configuration \eq{n=4vacuum} is precisely the flat Minkowski spacetime (\ref{vacgeo1}).
Note that a Darboux chart $(U, \varphi)$ in this case can be extended to entire spacetime and
so it is not necessary to consider the globalization prescribed before.
Now a perplexing problem is to understand which gauge field configuration realizes
the vacuum geometry \eq{vacgeo2}. To address this problem,
it is necessary to find a stable configuration of NC or large $N$ gauge fields and
so certainly a supersymmetric or BPS state. And this configuration must be consistent with the isometry
of the vacuum geometry (\ref{10vacgeo}), in particular, preserving $SO(6)_R$ Lorentz symmetry
just as a hydrogen atom preserves $SO(3)$ symmetry. It was conjectured in \cite{mypaper} that
the $AdS_5 \times \mathbb{S}^5$ geometry arises from a stack of NC Hermitian $U(1)$ instantons
at the origin of the internal space $\mathbb{R}^6$, like a nucleus containing many nucleons.
The NC Hermitian $U(1)$ instanton obeys the Hermitian Yang-Mills equations \cite{non-inst} given by
\begin{equation}\label{hym-eq}
\widehat{F}_{ab} = - \frac{1}{4} \varepsilon_{abcdef} \widehat{F}_{cd} I_{ef},
\end{equation}
where $I = \mathbf{I}_3 \otimes i \sigma^2$ is a $6 \times 6$ matrix of the complex structure
of $\mathbb{R}^6$ and the field strength is defined by Eq. \eq{10d-fs}.
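Up to orientation conventions, Eq. \eq{hym-eq} is the standard form of the Hermitian Yang-Mills conditions: in complex coordinates adapted to the complex structure $I$, it is equivalent to
\[
\widehat{F}^{(2,0)} = \widehat{F}^{(0,2)} = 0, \qquad I_{ab} \widehat{F}_{ab} = 0,
\]
i.e., $\widehat{F}$ is a $(1,1)$-form traceless with respect to the K\"ahler form.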
Note that the 6-dimensional NC $U(1)$ gauge fields $\widehat{A}_a$ in Eq. \eq{hym-eq} are originally
adjoint scalar fields $\Phi_a = p_a + \widehat{A}_a$ in 4-dimensional $\mathcal{N} = 4$
super Yang-Mills theory. See Eq. \eq{n=4bfluct}. If true, the vacuum geometry \eq{vacgeo2}
will be emergent from the stack of infinitely many NC $U(1)$ instantons obeying Eq. \eq{hym-eq}
according to the identity \eq{rexp-dbi}.\footnote{Given the metric \eq{10vacgeo} of
$AdS_5 \times \mathbb{S}^5$ geometry on the LHS of Eq. \eq{rexp-dbi},
we may simply assume that we have solved Eq. \eq{rexp-dbi} to find some configuration of $U(1)$ gauge fields
which gives rise to the $AdS_5 \times \mathbb{S}^5$ geometry. In appendix A, we will solve Eq. \eq{rexp-dbi}
to illustrate how $2n$-dimensional Calabi-Yau manifolds arise from $2n$-dimensional symplectic $U(1)$
gauge fields. But it should be remarked that the underlying argument proceeds with impunity
whether our conjecture is true or not.}
Since we are interested in the approximation of
slowly varying fields, $\sqrt{\theta} |\frac{\widehat{D} \widehat{F}}{\widehat{F}}| \ll 1$,
ignoring the derivatives of field strengths, the $U(1)$ field strength in Eq. \eq{hym-eq}
can be replaced by Eq. \eq{symp-f} in this limit and so we can use
the SW maps \eq{sw-mapf} and \eq{sw-mapv}. Thus, if we include NC corrections containing
higher-order derivatives of field strengths, the LHS of Eq. \eq{rexp-dbi} will
receive derivative corrections introducing a higher-order gravity
in the emergent geometry \cite{hsy-ijmp09}.
In conclusion, the AdS/CFT correspondence is a particular example of emergent gravity
from NC $U(1)$ gauge fields. And the duality between large $N$ gauge fields and
a higher-dimensional gravity is simply a consequence of the novel equivalence principle
stating that the electromagnetic force can always be eliminated by a local coordinate transformation
as far as spacetime admits a symplectic structure, in other words, a microscopic spacetime
becomes NC \cite{mypaper,hsy-jhep09}.
\section{HEA from NC $U(1)$ gauge fields}
Now we are ready to derive the HEA of four-dimensional $\mathcal{N}=4$ superconformal field theory
on the Coulomb branch. According to the conjecture \cite{schwarz}, the HEA should be
a $U(1)$ gauge theory in the $AdS_5 \times \mathbb{S}^5$ geometry with $N$ units of flux
threading $\mathbb{S}^5$. However, the original conjecture did not offer any clue as to
why the HEA on the Coulomb branch must be described by the $U(1)$ gauge theory, although
the probe-brane approximation requires a large $N$ limit. For the discussion of this problem,
see, in particular, section 5 in Ref. \cite{schwarz}. As we emphasized in footnote \ref{n=1},
our approach based on the NC field theory representation of AdS/CFT correspondence
will clarify why $N=1$ is the relevant choice for the HEA.
We argued before that the $AdS_5 \times \mathbb{S}^5$ geometry is emergent from the stack of
infinitely many NC Hermitian $U(1)$ instantons near the origin in $\mathbb{R}^6$.
Thus suppose that the vacuum configuration for the background geometry \eq{vacgeo2} is given by
\begin{equation}\label{inst-vacuum}
\langle \Phi_a \rangle_{\mathrm{vac}} = p_a + \widehat{A}_a, \quad
\langle A_\mu \rangle_{\mathrm{vac}} = 0, \quad \langle \lambda^i \rangle_{\mathrm{vac}} = 0,
\end{equation}
where $\widehat{A}_a$ is a solution of Eq. \eq{hym-eq} describing $N$ NC Hermitian $U(1)$ instantons
in 6 dimensions. We introduce fluctuations around the vacuum \eq{inst-vacuum} and represent them as
\begin{eqnarray}\label{conn-fluc4}
&& \widehat{D}_\mu = \partial_\mu - i \widehat{a}_\mu (x,y), \\
\label{conn-fluc6}
&& \widehat{D}_a = - i \bigl( p_a + \widehat{A}_a (y) + \widehat{a}_a (x,y) \bigr)
\equiv \widehat{\nabla}_a (y) - i \widehat{a}_a (x,y),
\end{eqnarray}
whose field strengths are given by
\begin{eqnarray}\label{f-fluc44}
\widehat{\mathcal{F}}_{\mu\nu} &=& \partial_\mu \widehat{a}_\nu - \partial_\nu \widehat{a}_\mu - i
[\widehat{a}_\mu, \widehat{a}_\nu]_\star \equiv \widehat{f}_{\mu\nu}, \\
\label{conn-fluc46}
\widehat{\mathcal{F}}_{\mu a} &=& \widehat{D}_\mu \widehat{a}_a - \widehat{\nabla}_a \widehat{a}_\mu \equiv \widehat{f}_{\mu a}, \\
\label{conn-fluc66}
\widehat{\mathcal{F}}_{ab} &=& - B_{ab} + \widehat{F}_{ab}
+ \widehat{\nabla}_a \widehat{a}_b - \widehat{\nabla}_b \widehat{a}_a
- i [\widehat{a}_a, \widehat{a}_b]_\star, \nonumber\\
&\equiv& - B_{ab} + \widehat{F}_{ab} + \widehat{f}_{ab}
\end{eqnarray}
where $\widehat{F}_{ab} (y) - B_{ab} = i [\widehat{\nabla}_a, \widehat{\nabla}_b]_\star (y)$.
We will include fermions later.
Note that we assumed that the instanton connection $\widehat{\nabla}_a (y)$ depends only
on NC coordinates in extra dimensions. Hence the solution has a translational invariance
along $\mathbb{R}^{3,1}$ which means that the solution describes extended objects
along $\mathbb{R}^{3,1}$. They were conjecturally identified with $N$ D3-branes in \cite{mypaper}.
Since the SW relation between commutative and NC gauge theories is true for general gauge fields,
we can apply to the gauge fields in Eqs. \eq{conn-fluc4} and \eq{conn-fluc6} the SW maps
\begin{eqnarray}\label{sw-instf}
&& \widehat{\mathcal{F}}_{MN} (Y) = \left( \frac{1}{1 + \mathfrak{F}\Theta} \mathfrak{F}
\right)_{MN} (X), \\
\label{sw-instv}
&& d^{10} Y = d^{10} X \sqrt{\det(1 + \mathfrak{F}\Theta)},
\end{eqnarray}
where $\mathfrak{F} \equiv B + F + f$ is the total $U(1)$ field strength including the background
instanton part $F_{ab}$ and the fluctuation part $f_{MN} = \partial_M a_N - \partial_N a_M$.
The result will be given by the following equivalence
\begin{equation} \label{dbi-fluc}
\frac{1}{g_s} \int d^{10} X \sqrt{ - \det \bigl( g + \kappa \mathfrak{F} \bigr)}
= \frac{1}{G_s} \int d^{10} Y \sqrt{- \det \bigl(G + \kappa (\widehat{\mathcal{F}} + \Phi ) \bigr)}.
\end{equation}
But we can also apply the Darboux transformation \eq{darboux-local} to the field strength $\mathfrak{F}$
such that the Darboux coordinates $Z^M$ eliminate only the instanton gauge fields $F_{ab}$.
Then we will get the following identity
\begin{equation}\label{tri-dbi}
g_{MN} + \kappa \mathfrak{F}_{MN} = \bigl(\mathcal{G}_{PQ} + \kappa (B + \widetilde{f})_{PQ} \bigr)
\frac{\partial Z^P}{\partial X^M} \frac{\partial Z^Q}{\partial X^N}
\end{equation}
where
\begin{equation}\label{tr-abelif}
\mathcal{G}_{MN} = g_{PQ} \frac{\partial X^P}{\partial Z^M}
\frac{\partial X^Q}{\partial Z^N}, \qquad
\widetilde{f}_{MN} = f_{PQ} \frac{\partial X^P}{\partial Z^M} \frac{\partial X^Q}{\partial Z^N}
= \frac{\partial \widetilde{a}_N}{\partial Z^M} - \frac{\partial \widetilde{a}_M}{\partial Z^N}
\end{equation}
with $\widetilde{a}_M = \frac{\partial X^P}{\partial Z^M} a_P$. This leads to an enticing result
\begin{eqnarray} \label{dbi-idif}
\frac{1}{g_s} \int d^{10} X \sqrt{-\det \bigl( g + \kappa \mathfrak{F} \bigr)}
&=& \frac{1}{g_s} \int d^{10} Z \sqrt{-\det \bigl(\mathcal{G} + \kappa (B + \widetilde{f}) \bigr)} \\
\label{dbi-idin}
&=& \frac{1}{G_s} \int d^{10} Y \sqrt{-\det \bigl(G + \kappa (\widehat{\mathcal{F}} + \Phi ) \bigr)}.
\end{eqnarray}
We can check the consistency of the above identities by showing that Eq. \eq{dbi-idin} can be derived
from the RHS of Eq. \eq{dbi-idif}.
Consider a Darboux transformation $\phi_1: Y^M \mapsto Z^M = Y^M + \Theta^{MN} \widehat{a}_N (Y)$
satisfying $\phi_1^* (B + \widetilde{f}) = B$. Then it leads to the identity
\begin{equation}\label{inter-dar}
\mathcal{G}_{MN} + \kappa (B + \widetilde{f})_{MN} = \bigl( \mathfrak{G}_{PQ} + \kappa B_{PQ} \bigr)
\frac{\partial Y^P}{\partial Z^M} \frac{\partial Y^Q}{\partial Z^N}
\end{equation}
where
\begin{equation}\label{inter-metric}
\mathfrak{G}_{MN} = \mathcal{G}_{PQ} \frac{\partial Z^P}{\partial Y^M}
\frac{\partial Z^Q}{\partial Y^N} = g_{PQ} \frac{\partial X^P}{\partial Y^M}
\frac{\partial X^Q}{\partial Y^N}.
\end{equation}
The previous Darboux transformation \eq{tri-dbi} satisfies $\phi_2^* (B + F) = B$ where
$\phi_2: Z^M \mapsto X^M = Z^M + \Theta^{MN} \widehat{A}_N (Z)$ which, in Eq. \eq{inter-metric},
has been combined with $\phi_1$, i.e.,
\begin{equation}\label{comb-darb}
\phi_2 \circ \phi_1: Y^M \mapsto X^M = Y^M + \Theta^{MN} (\widehat{A}_N + \widehat{a}_N ) (Y).
\end{equation}
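Strictly speaking, the composition of the two maps reads
\[
X^M = Y^M + \Theta^{MN} \widehat{a}_N (Y) + \Theta^{MN} \widehat{A}_N \bigl( Y + \Theta \widehat{a}(Y) \bigr),
\]
which agrees with Eq. \eq{comb-darb} to leading order in $\Theta$, consistently with the semi-classical (slowly varying field) approximation adopted throughout.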
Note that we can put $\widehat{A}_\mu = 0$ by our assumption.
Using the identity \eq{inter-dar}, we can derive the following equivalence between DBI actions:
\begin{equation} \label{dbi-ccc}
\frac{1}{g_s} \int d^{10} Z \sqrt{- \det \bigl(\mathcal{G} + \kappa (B + \widetilde{f}) \bigr)}
= \frac{1}{g_s} \int d^{10} Y \sqrt{- \det \bigl( \mathfrak{G} + \kappa B \bigr)}.
\end{equation}
By applying the same method as Eq. \eq{rexp-dbi} and using the coordinates \eq{comb-darb},
it is straightforward to derive Eq. \eq{dbi-idin} from the RHS of Eq. \eq{dbi-ccc}.
The conformally flat metric \eq{10vacgeo} takes the form
\begin{equation}\label{asd2s5}
ds^2 = R^2 \Bigl(\frac{dx \cdot dx + d\rho^2}{\rho^2} + d \Omega_5^2 \Bigr)
\end{equation}
where $dx \cdot dx = \eta_{\mu\nu} dx^\mu dx^\nu$. This form of the metric can be transformed into
the metric form used in \cite{schwarz} by a simple inversion $\rho = 1/ v$:
\begin{equation}\label{metric-sch}
ds^2 = R^2 \Bigl(v^2 dx \cdot dx + v^{-2} dv^2 + d \Omega_5^2 \Bigr) =
R^2 \Bigl(v^2 dx \cdot dx + v^{-2} dv \cdot dv \Bigr)
\end{equation}
where $dv \cdot dv = dv^a dv^a$.
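The inversion may be checked directly: with $\rho = 1/v$ one has $d\rho^2/\rho^2 = dv^2/v^2$ and $1/\rho^2 = v^2$, while in spherical coordinates on the internal $\mathbb{R}^6$,
\[
dv \cdot dv = dv^2 + v^2 d\Omega_5^2 \qquad \Longrightarrow \qquad
\frac{dx \cdot dx + d\rho^2}{\rho^2} + d\Omega_5^2 = v^2\, dx \cdot dx + \frac{dv \cdot dv}{v^2},
\]
which reproduces both forms in Eq. \eq{metric-sch}.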
Note that the four-dimensional supersymmetric gauge theory is defined on the boundary of $AdS_5$ space
where $v \to \infty$ in the metric (\ref{metric-sch}) and so the five-sphere $\mathbb{S}^5$
shrinks to a point near the conformal boundary of the AdS space.
Then the $SO(6)$ isometry of $\mathbb{S}^5$ is realized as a global symmetry in the gauge theory
and the (angular) momenta dual to five-sphere coordinates are given by generators of
the $SO(6)$ R-symmetry. Since we are interested in the HEA of the boundary theory where
the $\mathbb{S}^5$ shrinks to a point, we can consider a low energy limit by ignoring any $y$-dependence
for fluctuations, but leaving the background intact. Then the fluctuating $U(1)$ field strengths
on the LHS of Eq. \eq{dbi-ccc} reduce to
\begin{equation}\label{lowel-u1}
\begin{array}{l}
\widetilde{f}_{\mu\nu} (x,y) \to \partial_\mu \widetilde{a}_\nu (x) - \partial_\nu \widetilde{a}_\mu (x)
\equiv f_{\mu\nu}(x), \\
\widetilde{f}_{\mu a} (x,y) \to \partial_\mu \widetilde{a}_a (x) \equiv \partial_\mu \varphi_a (x), \\
\widetilde{f}_{ab} (x,y) \to 0.
\end{array}
\end{equation}
Since we assumed that the low energy theory does not depend on
the coordinates $y^a$ of extra dimensions, we will try to reduce the 10-dimensional theory to a
4-dimensional effective field theory. For this purpose, first let us consider the block matrix
\begin{equation}\label{bdi-10metric}
\mathcal{G}_{MN} + \kappa \bigl(B + \widetilde{f} \bigr)_{MN} = \left(
\begin{array}{cc}
\lambda^2 \eta_{\mu\nu} + \kappa f_{\mu\nu} & \kappa \partial_\mu \varphi_a \\
- \kappa \partial_\mu \varphi_a & \lambda^2 \delta_{ab} + \kappa B_{ab} \\
\end{array}
\right),
\end{equation}
where we put $B_{\mu\nu} = 0$ according to the reasoning explained in section 4.
We may even take the approximation $\lambda^2 \delta_{ab} + \kappa B_{ab} \approx \lambda^2 \delta_{ab}$
because $\lambda^2 = R^2 v^2 \to \infty$ and the low energy limit applied to Eq. \eq{lowel-u1}
is basically equivalent to $\theta^{ab} \to 0$ and so the metric part is dominant similarly to the reasoning
below Eq. \eq{quantum-prod}. Considering the fact that NC corrections in NC gauge theory correspond to
$1/N$ expansions in large $N$ gauge theory \cite{hsy-ijmp09},
the approximation considered can be interpreted as the planar limit in AdS/CFT correspondence.
Using the determinant formula for a block matrix
\begin{equation}\label{det-block}
\det \left(
\begin{array}{cc}
A & B \\
C & D \\
\end{array}
\right) = \det D \; \det (A-B D^{-1} C),
\end{equation}
we get the following relation
\begin{eqnarray} \label{prod-det}
\sqrt{- \det \bigl(\mathcal{G} + \kappa (B + \widetilde{f}) \bigr)} &=&
\sqrt{\det (\lambda^2 + \kappa B )} \sqrt{- \det \Bigl( \lambda^2 \eta_{\mu\nu}
+ \kappa f_{\mu\nu}
+ \kappa^2 \partial_\mu \varphi_a \Bigl(\frac{1}{\lambda^2 + \kappa B} \Bigr)^{ab}
\partial_\nu \varphi_b \Bigr) } \nonumber \\
&\approx& \lambda^6 \sqrt{- \det \Bigl( \lambda^2 \eta_{\mu\nu} + \kappa^2 \lambda^{-2} \partial_\mu \varphi
\cdot \partial_\nu \varphi + \kappa f_{\mu\nu} \Bigr) }.
\end{eqnarray}
Suppose that a D3-brane is embedded in 10-dimensional target spacetime $\mathcal{M}$ with local coordinates
$X^M = (x^\mu, \phi^a)$ whose metric is given by $\mathcal{G}_{MN} (X)$. To be specific, we consider
$\mathcal{M} = AdS_5 \times \mathbb{S}^5$ and choose a static gauge for the embedding functions, i.e.,
$X^M (\sigma) = \bigl( x^\mu(\sigma), \phi^a(\sigma) \bigr) = \bigl( \delta^\mu_\alpha \sigma^\alpha,
v^a + \frac{\kappa}{R^2} \varphi^a(x) \big)$ where $v^a \equiv \langle \phi^a \rangle_{\mathrm{vac}}$
are vevs of worldvolume scalar fields. The fact that the worldvolume scalar fields $\phi^a$
originate from NC $U(1)$ gauge fields in Eq. \eq{conn-fluc6} implies that the vevs
$v^a = \langle \phi^a \rangle_{\mathrm{vac}}$ can be identified with the Coulomb branch
parameters $p_a$ in Eq. \eq{n=4vacuum}.
Then we see that the symmetric part in Eq. \eq{prod-det} is precisely the induced worldvolume
metric \eq{ind-metric}, i.e.,
\begin{equation}\label{d3-indmetric}
h_{\mu\nu} = \mathcal{G}_{MN} \partial_\mu X^M \partial_\nu X^N
= R^2 \bigl( v^2 \eta_{\mu\nu} + v^{-2} \partial_\mu \phi \cdot \partial_\nu \phi \bigr)
\end{equation}
where $\lambda^2 = R^2 v \cdot v = R^2 /\rho^2$.
Therefore, in the approximation considered above, we get the identity
\begin{equation}\label{hea-id}
\sqrt{- \det_{10} \bigl(\mathcal{G} + \kappa (B + \widetilde{f}) \bigr)} =
\lambda^6 \sqrt{- \det_4 (h + \kappa f)}
\end{equation}
where the subscript in the determinant indicates the size of the matrix.
Using the identity \eq{hea-id}, we can reduce the 10-dimensional DBI action in $AdS_5 \times \mathbb{S}^5$
geometry to a 4-dimensional DBI action given by
\begin{equation}\label{hea-red}
- T_{D9} \int d^{10} Z \sqrt{- \det_{10} \bigl(\mathcal{G} + \kappa (B + \widetilde{f}) \bigr)} =
\Bigl( \frac{g_s N}{4\pi}\Bigr)^{\frac{3}{2}} L(\epsilon, R)
\left[ - T_{D3} \int_W d^4 x \sqrt{- \det_4 (h + \kappa f)} \right]
\end{equation}
where $\Bigl( \frac{g_s N}{4\pi}\Bigr)^{\frac{3}{2}} = \frac{T_{D9} R^6}{T_{D3}} \int_{\mathbb{S}^5} \mathrm{vol}(\mathbb{S}^5)$ and
\begin{equation}\label{reg-radius}
L(\epsilon, R) \equiv \int^R_\epsilon \frac{dv}{v} = \ln \frac{R}{\epsilon}
\end{equation}
is a regularized integral along the $AdS$ radius. We identify the DBI action in the bracket
in Eq. \eq{hea-red} with the worldvolume action of a probe D3-brane in $AdS_5 \times \mathbb{S}^5$ geometry.
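It may be worth recording how the numerical prefactor arises. Assuming the standard D-brane tensions $T_{Dp} = \bigl( (2\pi)^p g_s (\alpha')^{\frac{p+1}{2}} \bigr)^{-1}$ (the factors of $2\pi$ are convention dependent) and $\int_{\mathbb{S}^5} \mathrm{vol}(\mathbb{S}^5) = \pi^3$, one finds
\[
\frac{T_{D9} R^6}{T_{D3}} \int_{\mathbb{S}^5} \mathrm{vol}(\mathbb{S}^5)
= \frac{\bigl( 4\pi g_s (\alpha')^2 N \bigr)^{\frac{3}{2}} \pi^3}{(2\pi)^6 (\alpha')^3}
= \Bigl( \frac{g_s N}{4\pi} \Bigr)^{\frac{3}{2}},
\]
in agreement with the factor quoted in Eq. \eq{hea-red}.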
John H. Schwarz speculated in \cite{schwarz} that the probe D3-brane action can be interpreted as the HEA
of 4-dimensional $\mathcal{N}=4$ superconformal field theory on the Coulomb branch.
We want to emphasize that we directly derived the HEA from the 4-dimensional $\mathcal{N}=4$ superconformal
field theory on the Coulomb branch although we have not incorporated fermions yet.
One caveat is that our HEA is slightly different from Eq. (12) in Ref. \cite{schwarz} where our $v^2$
was replaced by $\phi^2$. But one needs to recall that $v^2$ is coming from the background geometry and
the probe brane approximation involves neglecting the backreaction of the brane on the geometry and
other background fields (which requires that $N$ is large).
In this description, the $AdS_5 \times \mathbb{S}^5$ geometry is regarded as a background and
so it remains fixed against the fluctuations of worldvolume fields.
Thus the $\phi^2$ in the denominator in Eq. (12) of Ref. \cite{schwarz} can be replaced by $v^2$
in the probe brane approximation.
A demanding task is to understand how to derive the coupling \eq{wz-action} of background
RR gauge fields from the 4-dimensional $\mathcal{N}=4$ superconformal field theory.
Actually this issue is closely related to our previous conjecture for a possible realization of
D3-branes in terms of NC Hermitian $U(1)$ instantons. Hence we will only draw a plausible picture
based on this conjecture. If the conjecture is true, $N$ D3-branes correspond
to a stack of $N$ NC Hermitian $U(1)$ instantons at the origin of $\mathbb{R}^6$. Then,
this instanton configuration generates a topological invariant given by (up to normalization)
\begin{equation}\label{top-number}
I \sim \int_{\mathbb{R}^6} \widehat{F} \wedge \widehat{F} \wedge \Omega =
\int_{\mathbb{S}^5} \Bigl(\widehat{A} \wedge \widehat{F}
- \frac{1}{3} \widehat{A} \wedge \widehat{A} \wedge \widehat{A} \Bigr) \wedge \Omega
\end{equation}
where $\Omega$ is a K\"ahler form on $\mathbb{R}^6$. The topological invariant $I$ measures
the instanton number $N$, so we identify $I = 2 \pi N$. Since the ``instanton flux'' is
threading $\mathbb{S}^5 = \partial \mathbb{R}^6$ and the instanton flux emanating from the origin
is regarded as a background field, we make a simple identification for the five-form
in Eq. \eq{top-number}:
\begin{eqnarray}\label{5-form}
\mu_3 F_5 &:=& \frac{1}{g_{YM}^2} \Bigl(\widehat{A} \wedge \widehat{F} - \frac{1}{3} \widehat{A} \wedge
\widehat{A} \wedge \widehat{A} \Bigr) \wedge \Omega \nonumber \\
&=& \mu_3 k_3 \mathrm{vol} (\mathbb{S}^5)
\end{eqnarray}
where $\mu_3$ is the basic unit of D3-brane charge and $k_3$ is a coefficient depending
on the normalization convention. In the AdS/CFT correspondence, $F_5$ is the self-dual RR five-form of
$N$ D3-branes given by
\begin{equation}\label{rr-5form}
F_5 = k_3 \bigl( \mathrm{vol} (AdS_5) + \mathrm{vol} (\mathbb{S}^5) \bigr) = dC_4.
\end{equation}
Although we do not pin down the origin of the self-duality, the self-duality is necessary
for the conjecture to be true because it implies that the topological charge of NC $U(1)$ instantons
can be interpreted as the RR-charge of D3-branes, i.e.,
\begin{equation}\label{d3-rrc}
\mu_3 \int_{\mathbb{S}^5} F_5 = \mu_3 \int_{AdS_5} dC_4 = \mu_3 \int_W C_4
\end{equation}
where $W = \partial (AdS_5)$. Besides the background instanton gauge fields,
there exist worldvolume $U(1)$ gauge fields and they can induce a well-known topological instanton coupling
given by
\begin{equation}\label{w-instanton}
\frac{\chi}{8\pi} \int_W f \wedge f.
\end{equation}
Combining these two couplings leads to a modest (and tentative) suggestion
for the Wess-Zumino coupling in Eq. \eq{wz-action} given by \cite{schwarz}
\begin{equation}\label{scs2}
S_2 = \mu_3 \int_W C_4 + \frac{\chi}{8\pi} \int_W f \wedge f.
\end{equation}
Now we will include the Majorana-Weyl fermion $\widehat{\Psi}(Y)$ in the HEA.
This means that we are considering a supersymmetric D9-brane which respects the
local $\kappa$-symmetry \cite{sdbi1,sdbi12,sdbi3,sdbi4,sdbi5,sdbi6}.
Thus we use the $\kappa$-symmetry to eliminate half of $(\psi_1, \psi_2)$ coordinates
where $\psi_{1,2}$ are two Majorana-Weyl spinors of the same chirality.
We adopt the gauge choice, $\psi_1 = 0$, used in Refs. \cite{sdbi1,sdbi12} and rename $\psi_2 := \psi$.
It was shown in \cite{sdbi1,sdbi12} that in this gauge the supersymmetric extension of the 10-dimensional DBI action
has a surprisingly simple form. The supersymmetric case also respects the identity \eq{dbi-idif}
with the following replacement
\begin{eqnarray}\label{susy-rep1}
&& \mathfrak{F}_{MN} \to \mathfrak{F}_{MN} + i \overline{\psi} \Gamma_M \partial_N \psi -
\frac{\kappa}{4} \overline{\psi} \Gamma^P \partial_M \psi \overline{\psi} \Gamma_P \partial_N \psi \equiv
\mathfrak{F}_{MN} + \Upsilon_{MN}, \\
\label{susy-rep2}
&& \widetilde{f}_{MN} \to \widetilde{f}_{MN} + i \overline{\psi} \widetilde{\Gamma}_M
\widetilde{\partial}_N \psi - \frac{\kappa}{4} \overline{\psi} \widetilde{\Gamma}^P \widetilde{\partial}_M
\psi \overline{\psi} \widetilde{\Gamma}_P \widetilde{\partial}_N \psi \equiv \widetilde{f}_{MN} + \xi_{MN},
\end{eqnarray}
where $\widetilde{\Gamma}_M = \Gamma_P \frac{\partial X^P}{\partial Z^M}$ and $\widetilde{\partial}_M = \frac{\partial }{\partial Z^M}$. Again we can apply the Darboux transformation $\phi_1: Y^M \mapsto Z^M =
\Theta^{MN} \bigl( B_{NP} Y^P + \widehat{a}_N (Y) \bigr)$ satisfying $\phi_1^* (B + \widetilde{f}) = B$.
Then it leads to the following identity
\begin{equation}\label{finter-dar}
\mathcal{G}_{MN} + \kappa (B + \widetilde{f} + \xi)_{MN} = \bigl( \mathfrak{G}_{PQ}
+ \kappa (B + \widetilde{\xi})_{PQ} \bigr)
\frac{\partial Y^P}{\partial Z^M} \frac{\partial Y^Q}{\partial Z^N}
\end{equation}
where
\begin{equation}\label{finter-metric}
\widetilde{\xi}_{MN} = \xi_{PQ} \frac{\partial Z^P}{\partial Y^M}
\frac{\partial Z^Q}{\partial Y^N} = \Upsilon_{PQ} \frac{\partial X^P}{\partial Y^M}
\frac{\partial X^Q}{\partial Y^N}.
\end{equation}
The above identity \eq{finter-dar} leads to the following equivalence between DBI actions:
\begin{equation} \label{dbi-fcc}
\frac{1}{g_s} \int d^{10} Z \sqrt{- \det \bigl(\mathcal{G} + \kappa (B + \widetilde{f} + \xi) \bigr)}
= \frac{1}{g_s} \int d^{10} Y \sqrt{- \det \bigl( \mathfrak{G} + \kappa (B + \widetilde{\xi}) \bigr)}.
\end{equation}
Let us expand the RHS of Eq. \eq{dbi-fcc} around the background $B$-field as the bosonic case \eq{exp-dbi}:
\begin{equation}\label{exp-fdbi}
\sqrt{- \det \bigl( \mathfrak{G} + \kappa (B + \widetilde{\xi}) \bigr)}
= \sqrt{- \det(\kappa B)} \sqrt{\det \Bigl(1 + \frac{M}{\kappa} \Bigr)}
\end{equation}
where
\begin{equation}\label{susy-m}
{M_N}^Q = \bigl( \mathfrak{G} + \kappa \widetilde{\xi} \bigr)_{NP} \Theta^{PQ}
= \bigl( g + \kappa \Upsilon \bigr)_{RS} \frac{\partial X^R}{\partial Y^N}
\frac{\partial X^S}{\partial Y^P} \Theta^{PQ}.
\end{equation}
Note that $\mathrm{Tr} M \neq 0$ unlike the bosonic case.
Using the formula, $\det (1+A) = \exp {\sum_{k=1}^\infty \frac{(-)^{k+1}}{k} \mathrm{Tr} A^k}$,
it is not difficult to show that
\begin{equation}\label{susy-detid}
\det \Bigl(1 + \frac{M}{\kappa} \Bigr) = \det \Bigl(1 + \frac{1}{\kappa}
\bigl( g + \kappa \Upsilon \bigr) \mathfrak{P} \Bigr)
\end{equation}
where
\begin{equation}\label{gp-matrix}
\bigl( \Upsilon \mathfrak{P} \bigr)_M^{~~N} = - i \bigl( \delta^P_M + \frac{i \kappa}{4}
\overline{\psi} \Gamma^P \partial_M \psi \bigr) \overline{\psi} \Gamma_P \{ X^N, \psi \}_\Theta.
\end{equation}
In matrix notation, the matrix on the RHS of Eq. \eq{susy-detid} can be written as
\begin{eqnarray} \label{eval-matrix}
1 + \frac{1}{\kappa} \bigl( g + \kappa \Upsilon \bigr) \mathfrak{P}
&=& B \bigl(1 + \kappa G^{-1} ( \widehat{\mathcal{F}} - B )
+ \Theta \Upsilon \mathfrak{P} B \bigr) \Theta \nonumber \\
&=& B G^{-1} \bigl(G + \kappa ( \widehat{\mathcal{F}} - B )
+ G \Theta \Upsilon \mathfrak{P} B \bigr) \Theta
\end{eqnarray}
where the NC field strengths $\widehat{\mathcal{F}}_{MN}$
including an instanton background are given by Eqs. \eq{f-fluc44}-\eq{conn-fluc66}.
Using the result \eq{gp-matrix}, one can calculate the fermionic term, $G \Theta \Upsilon \mathfrak{P} B
= - \kappa^2 B g^{-1} \Upsilon \mathfrak{P} B$, which takes the form
\begin{eqnarray}\label{fermi-term}
- i \kappa^2 (Bg^{-1})_M^{~~P} \bigl( \delta^Q_P + \frac{i \kappa}{4} \overline{\psi}
\Gamma^Q \partial_P \psi \bigr) \overline{\psi} \Gamma_Q D_N \psi
&\equiv& - \kappa^2 (Bg^{-1})_M^{~~P} \widehat{\Upsilon}_{PN} \nonumber \\
&\approx & - i \kappa \overline{\psi} \mathbf{\Gamma}_M D_N \psi + \mathcal{O}(\kappa^2)
\end{eqnarray}
where $\mathbf{\Gamma}_M \equiv \kappa B_{MN} g^{NP} \Gamma_P$ obey the Dirac
algebra $\{ \mathbf{\Gamma}_M, \mathbf{\Gamma}_N \} = 2G_{MN}$ and
\begin{equation}\label{fermi-covd}
D_N \psi = \partial \psi/\partial Y^N + \{ \widehat{A}_N + \widehat{a}_N, \psi \}_\Theta.
\end{equation}
In the end, we get the supersymmetric version of Eqs. \eq{dbi-idif} and \eq{dbi-idin}:
\begin{eqnarray} \label{susy-dbi-idif}
&&\frac{1}{g_s} \int d^{10} X \sqrt{-\det \bigl( g + \kappa (\mathfrak{F} + \Upsilon) \bigr)}
= \frac{1}{g_s} \int d^{10} Z \sqrt{-\det \bigl(\mathcal{G} + \kappa (B + \widetilde{f} + \xi) \bigr)} \\
\label{susy-dbi-idin}
&& \hspace{3cm} = \frac{1}{G_s} \int d^{10} Y \sqrt{-\det \bigl(G + \kappa (\widehat{\mathcal{F}}
+ \Phi ) - \kappa^2 Bg^{-1} \widehat{\Upsilon} \bigr) }.
\end{eqnarray}
Let us redefine the fermion field, $\Psi \equiv (\kappa T_9)^{\frac{1}{2}} \psi$, and
use the approximation \eq{fermi-term} to perform an expansion as in Eq. \eq{det-exp}.
With this normalization, we correctly reproduce the action \eq{10dsym-action} at leading orders.
As before, we consider the limit $\Theta^{MN} \to (\zeta^{\mu\nu} = 0, \theta^{ab} \neq 0)$.
Then it is easy to see that, at nontrivial leading orders,
Eq. \eq{susy-dbi-idin} reproduces the 10-dimensional $\mathcal{N}=1$ supersymmetric NC $U(1)$
gauge theory \eq{10dsym-action} in the instanton background \eq{inst-vacuum}.
As we demonstrated in section 2, the action \eq{10dsym-action} is equivalent to the 4-dimensional
$\mathcal{N} = 4$ superconformal field theory on the Coulomb branch. And we argued in this section that
fluctuations in $AdS_5 \times \mathbb{S}^5$ background geometry are described by the
10-dimensional $\mathcal{N}=1$ supersymmetric NC $U(1)$ gauge theory in the background of NC
Hermitian $U(1)$ instantons obeying Eq. \eq{hym-eq}.
According to our construction, we thus declare that the RHS of Eq. \eq{susy-dbi-idif} has to describe
the fluctuations in $AdS_5 \times \mathbb{S}^5$ geometry.
Therefore we expect that the supersymmetric HEA for the $\mathcal{N} = 4$ superconformal field theory
on the Coulomb branch would be derived from a dimensional reduction of the RHS of Eq. \eq{susy-dbi-idif}
similar to Eq. \eq{hea-red}.
Before proceeding further, let us first address some subtle issues regarding the equivalence in
Eqs. \eq{susy-dbi-idif} and \eq{susy-dbi-idin}. The first one is that an interpretation for
the factor $\bigl( \delta^Q_P + \frac{i \kappa}{4} \overline{\psi} \Gamma^Q \partial_P \psi \bigr)$
in $\widehat{\Upsilon}_{PN}$ is not clear from the point of view of NC $U(1)$ gauge theory.
Note that $\partial_P \psi = \partial \psi/\partial X^P$ and the Darboux transformations did not
touch the factor. Hence this factor behaves like a background part induced from the backreaction of fermions
at higher orders. Therefore a plausible picture from the viewpoint of NC $U(1)$ gauge fields is
to interpret this factor as vielbeins $\mathfrak{E}_M^A = \bigl( \delta^A_M
- \frac{i \kappa}{4} \overline{\psi} \Gamma^A \partial_M \psi \bigr)$ with an effective
metric $\mathfrak{G}_{MN} = \mathfrak{E}_M^A \mathfrak{E}_N^B g_{AB}$ and write
\begin{equation}\label{interp-dirac}
\kappa^2 (Bg^{-1})_M^{~~P} \widehat{\Upsilon}_{PN} = i \kappa \overline{\psi} \mathfrak{T}_M D_N \psi
\end{equation}
where
\begin{equation}\label{gamma-t}
\mathfrak{T}_M \equiv \kappa B_{MN} g^{NP} \mathfrak{E}_P^A \Gamma_A.
\end{equation}
Then the gamma matrices $\mathfrak{T}_M$ satisfy the Dirac algebra
\begin{equation}\label{mod-diracalg}
\{ \mathfrak{T}_M, \mathfrak{T}_N \} = - 2\kappa^2 (Bg^{-1}\mathfrak{G}g^{-1} B)_{MN} \equiv
2 \mathbb{G}_{MN}.
\end{equation}
Of course, if we ignore the backreaction from the fermions, we recover the previous Dirac
term \eq{fermi-term} in flat spacetime. Another issue is how to glue local Darboux charts that now
involve fermions as well as bosons. We argued before that the global metric \eq{global-metric}
can be constructed via the globalization in terms of the gluing of local Darboux charts described
by Eqs. \eq{glue-g} and \eq{glue-d}. Or the local frames in the metric \eq{tr-abelif} are replaced
by global vielbeins \cite{mypaper}:
\begin{equation}\label{global-frame}
\frac{\partial X^A}{\partial Z^M} \to E_M^A.
\end{equation}
Then the gamma matrices in Eq. \eq{susy-rep2} will also be replaced by $\Gamma_M
\equiv E^A_M \Gamma_A$ and $\Gamma^M \equiv E_A^M \Gamma^A$.\footnote{They should not be confused with
the gamma matrices in Eq. \eq{susy-rep1} which are defined on the flat spacetime $\mathbb{R}^{9,1}$
while those in Eq. \eq{susy-rep2} are now defined on a curved spacetime.}
Now it is also necessary to glue the fermions defined on local Darboux patches by local
Lorentz transformations
\begin{equation}\label{lorentz}
\psi^{(j)} = S_{(ji)} \psi^{(i)}
\end{equation}
acting on fermions on an intersection $U_i \cap U_j$. As usual, we introduce a spin connection
$\omega_M = \frac{1}{2} \omega_{M AB} \Gamma^{AB}$ to covariantize the local gluing \eq{lorentz}.
This means that the fermionic terms in Eq. \eq{susy-rep2} are now given by
\begin{equation}\label{spin-conn}
\xi_{MN} \to
i \overline{\psi} E_M^A \Gamma_A \nabla_N \psi - \frac{\kappa}{4} \overline{\psi} \Gamma^A \nabla_M
\psi \overline{\psi} \Gamma_A \nabla_N \psi,
\end{equation}
where the covariant derivative is defined by
\begin{equation}\label{spin-cov}
\nabla_M \psi = (\partial_M + \omega_M )\psi.
\end{equation}
The spin connections $\omega_M$ are determined by the metric \eq{asd2s5}.
Therefore the block matrix \eq{bdi-10metric} for the supersymmetric case is replaced by
\begin{equation}\label{susybdi}
\mathcal{G}_{MN} + \kappa \bigl(B + \widetilde{f} + \xi \bigr)_{MN} \approx \left(
\begin{array}{cc}
\lambda^2 \eta_{\mu\nu} + \kappa (f_{\mu\nu} + \xi_{\mu\nu}) & \kappa (\partial_\mu \varphi_a + \xi_{\mu a}) \\
- \kappa (\partial_\mu \varphi_a - \xi_{a \mu}) & \lambda^2 \delta_{ab} + \kappa (B_{ab} + \xi_{ab}) \\
\end{array}
\right).
\end{equation}
Since we are interested in the HEA of the four-dimensional supersymmetric gauge theory defined
on the boundary of $AdS_5$ space, a dimensional reduction similar to Eq. \eq{lowel-u1} is adopted
for the fermionic excitations too, i.e.,
\begin{equation} \label{xi4}
\begin{array}{ll}
\xi_{\mu\nu} = i \overline{\psi} \Gamma_\mu \nabla_\nu \psi,
\qquad \xi_{ab} = i \overline{\psi} \Gamma_{a} \omega_{b} \psi, \\
\xi_{\mu a} = i \overline{\psi} \Gamma_\mu \omega_a \psi,
\qquad \; \xi_{a \mu} = i \overline{\psi} \Gamma_a \nabla_\mu \psi,
\end{array}
\end{equation}
where $\Gamma_M = E^A_M \Gamma_A$ and we ignored the quartic term in Eq. \eq{spin-conn}.
In order to get a four-dimensional picture after the dimensional reduction \eq{hea-red},
it is convenient to decompose the 16 components of the Majorana-Weyl spinor $\psi$ into
the four Majorana-Weyl gauginos $\lambda^i \; (i=1, \cdots, 4)$ as follows
\begin{eqnarray} \label{n=4spinors}
&& \psi = \left(
\begin{array}{c}
P_+ \lambda^i \\
P_- \widetilde{\lambda}_i \\
\end{array}
\right)
\quad \mathrm{with} \; P_\pm = \frac{1}{2}
(I_4 \pm \gamma_5) \; \mathrm{and} \; \widetilde{\lambda}_i = - C \overline{\lambda}^{iT}, \nonumber\\
&& \Gamma^A =(\gamma^{\hat{\mu}} \otimes I_8, \gamma_5 \otimes \gamma^{\hat{a}}), \qquad
\Gamma_{11} = \gamma_5 \otimes I_8,
\end{eqnarray}
where $C$ is the four-dimensional charge conjugation operator and the hat is used to indicate
tangent space indices. We take the four- and six-dimensional
Dirac matrices in the chiral representation
\begin{eqnarray} \label{4gamma-matrix}
&& \gamma^{\hat{\mu}} = \left(
\begin{array}{cc}
0 & i \sigma^{\hat{\mu}} \\
-i \overline{\sigma}^{\hat{\mu}} & 0 \\
\end{array}
\right), \qquad \sigma^{\hat{\mu}} = (I_2, \vec{\sigma})
= (\sigma^{\hat{\mu}})_{\alpha\dot{\beta}}, \quad
\overline{\sigma}^{\hat{\mu}} = (-I_2, \vec{\sigma})= (\overline{\sigma}^{\hat{\mu}})^{\dot{\alpha}\beta}, \\
\label{6gamma-matrix}
&& \gamma^{\hat{a}} = \left(
\begin{array}{cc}
0 & \Sigma^{\hat{a}} \\
\overline{\Sigma}^{\hat{a}} & 0 \\
\end{array}
\right), \qquad \Sigma^{\hat{a}} = (\vec{\eta}, i \vec{\overline{\eta}}) = \Sigma^{{\hat{a}},ij},
\quad \overline{\Sigma}^{\hat{a}}
= (\Sigma^{\hat{a}})^\dagger = (-\vec{\eta}, i \vec{\overline{\eta}}) = \overline{\Sigma}^{\hat{a}}_{ij},
\end{eqnarray}
where $\vec{\sigma}$ are Pauli matrices and the $4 \times 4$ matrices
$(\vec{\eta}, \vec{\overline{\eta}})$ are self-dual and anti-self-dual 't Hooft symbols.
Then the fermion bilinear terms in Eq. (\ref{xi4}) read as
\begin{equation} \label{fbil4}
\begin{array}{l}
\xi_{\mu\nu} = i v^{-1} \big( \overline{\lambda}_i \overline{\sigma}_{\hat{\mu}} \nabla_\nu \lambda^i
- \lambda^i \sigma_{\hat{\mu}} \nabla_\nu \overline{\lambda}_i \big), \\
\xi_{ab} = \partial_c v^{-1} \big( \overline{\lambda} \Sigma_{\hat{a}} \overline{\Sigma}_{\hat{b}\hat{c}} \overline{\lambda} - \lambda \overline{\Sigma}_{\hat{a}} \Sigma_{\hat{b}\hat{c}} \lambda \big), \\
\xi_{\mu a} = 2i \partial_b v^{-1} \big( \overline{\lambda} \overline{\sigma}_{\hat{\mu}}
\Sigma_{\hat{a}\hat{b}} \lambda \big), \\
\xi_{a \mu} = v^{-1} \big( \overline{\lambda} \Sigma_{\hat{a}} \nabla_\mu \overline{\lambda}
- \lambda \overline{\Sigma}_{\hat{a}} \nabla_\mu \lambda \big),
\end{array}
\end{equation}
where
\begin{equation}\label{sigma-lg}
\overline{\Sigma}^{\hat{a}\hat{b}} \equiv \frac{1}{2} \big( \overline{\Sigma}^{\hat{a}} \Sigma^{\hat{b}} - \overline{\Sigma}^{\hat{b}} \Sigma^{\hat{a}} \big), \qquad
\Sigma^{\hat{a}\hat{b}} \equiv \frac{1}{2} \big( \Sigma^{\hat{a}} \overline{\Sigma}^{\hat{b}} -
\Sigma^{\hat{b}} \overline{\Sigma}^{\hat{a}} \big)
\end{equation}
and the spin connection for the background geometry (\ref{10vacgeo}) is given by
\begin{equation}\label{spin-back}
\omega_\mu = - \Gamma^{\hat{\mu}\hat{a}}\partial_a \ln v, \qquad
\omega_a = - \Gamma^{\hat{a}\hat{b}} \partial_b \ln v.
\end{equation}
Since we are considering the HEA of the four-dimensional supersymmetric gauge theory
defined on the boundary of the $AdS_5$ space where $v \to \infty$ and so the $\mathbb{S}^5$
shrinks to a point, we can ignore $\xi_{ab}$ and $\xi_{\mu a}$ in Eq. (\ref{fbil4})
as well as the spin connections $\omega_M \to 0$.
After applying the formula \eq{det-block} to the matrix \eq{susybdi}, it is straightforward to obtain
the supersymmetric completion of the bosonic HEA in Eq. \eq{hea-red}, which is given by
\begin{eqnarray} \label{prod-sdet}
&& \sqrt{- \det \bigl(\mathcal{G} + \kappa (B + \widetilde{f} + \xi) \bigr)} \nonumber \\
&=& \sqrt{\det (\lambda^2 + \kappa B )} \sqrt{- \det \Bigl( \lambda^2 \eta_{\mu\nu}
+ \kappa (f_{\mu\nu} + \xi_{\mu\nu})
+ \kappa^2 \partial_\mu \varphi_a \Bigl(\frac{1}{\lambda^2 + \kappa B} \Bigr)^{ab}
(\partial_\nu \varphi_b - \xi_{b\nu}) \Bigr) } \nonumber \\
&\approx& \lambda^6 \sqrt{- \det \Bigl( h_{\mu\nu} + \kappa (f_{\mu\nu} + \xi_{\mu\nu}
- v^{-2} \partial_\mu \phi^a \xi_{a\nu}) \Bigr) }.
\end{eqnarray}
One may drop the last term since it is of $\mathcal{O} (v^{-3})$.
As in the bosonic case (\ref{hea-red}), the 10-dimensional supersymmetric DBI action (\ref{susy-dbi-idif})
in $AdS_5 \times \mathbb{S}^5$ geometry is thus reduced to a 4-dimensional supersymmetric DBI
action given by
\begin{eqnarray}\label{shea-red}
&& - T_{D9} \int d^{10} Z \sqrt{- \det_{10} \bigl(\mathcal{G} + \kappa (B + \widetilde{f} + \xi) \bigr)}
\nonumber \\
&=& \Bigl( \frac{g_s N}{4\pi}\Bigr)^{\frac{3}{2}} L(\epsilon, R)
\left[ - T_{D3} \int_W d^4 x \sqrt{- \det_4 \big(h_{\mu\nu} + \kappa (f_{\mu\nu} + \xi_{\mu\nu}
- v^{-2} \partial_\mu \phi^a \xi_{a\nu}) \big)} \right].
\end{eqnarray}
If the quartic term in Eq. (\ref{spin-conn}) is included, it contributes an extra term
given by $\frac{\kappa^2 v^2}{4} (\xi_{\lambda\mu} {\xi^\lambda}_\nu + \xi_{a\mu} {\xi^a}_\nu)$
inside the determinant. Since the metric (\ref{metric-sch}) becomes flat when $v=1$,
the result in this case should be equal to the action of a supersymmetric D3-brane
in flat spacetime. One can see that the action (\ref{shea-red}) indeed reduces to it;
see Eq. (88) in Ref. \cite{sdbi12}. According to the identity (\ref{susy-dbi-idif}),
the LHS of Eq. (\ref{shea-red}) is equal to the world-volume
action of a BPS D9-brane of type IIB string theory after fixing
the $\kappa$-symmetry, which is invariant under the supersymmetry transformations given
by Eqs. (90) and (91) in Ref. \cite{sdbi12}. Since Eq. (\ref{susy-dbi-idif}) is a mathematical identity,
the action on the LHS of Eq. (\ref{shea-red}) will also be supersymmetric.
Its supersymmetry transformations basically take the form replacing the ordinary derivatives
in Eqs. (90) and (91) in Ref. \cite{sdbi12} by covariant derivatives on the $AdS_5 \times \mathbb{S}^5$ space.
But an explicit check of supersymmetry is somewhat lengthy though straightforward.
Its detailed exposition from the perspective of the HEA deserves a separate work,
which will be reported elsewhere.
Note that, after the gauge fixing, $\psi_1 = 0$, for the $\kappa$-symmetry, the Wess-Zumino term
for the supersymmetric case is the same as the bosonic one \eq{scs2} \cite{sdbi12}.
The final result can be interpreted as the worldvolume action of a supersymmetric probe D3-brane
in the $AdS_5 \times \mathbb{S}^5$ background geometry.
According to the conjecture in Ref. \cite{schwarz}, it can be reinterpreted as
the HEA of four-dimensional $\mathcal{N}=4$ superconformal field theory on the Coulomb branch.
We emphasize that we directly derived the HEA from the four-dimensional $\mathcal{N}=4$
superconformal field theory on the Coulomb branch defined by the NC space \eq{n=4moyal}.
\section{Discussion}
We want to emphasize that NC spacetime should be regarded as
a more fundamental concept from which classical spacetime is derived, just as quantum
mechanics is the more fundamental theory from which classical phenomena emerge.
NC spacetime then requires a radical departure from 20th-century physics.
First of all, it introduces a new kind of duality, known as the gauge/gravity duality,
as formalized by the identity \eq{rexp-dbi}.
But we have to recall that quantum mechanics has already illustrated such a novel duality
where the NC phase space obeying the commutation relation $[x^i, p_j] = i\hbar \delta^i_j$ is
responsible for the so-called wave-particle duality. Remarkably there exists
a novel form of the equivalence principle stating that
the electromagnetic force can always be eliminated by a local coordinate
transformation as far as spacetime admits a symplectic structure.
The novel equivalence principle is nothing but the famous mathematical theorem known as
the Darboux theorem or the Moser lemma in symplectic geometry \cite{sg-book1,sg-book2}.
It proves the equivalence principle for the gravitational force in the context of emergent gravity.
Therefore we may conclude \cite{mypaper,hsy-jhep09} that the NC nature of spacetime is the origin
of the gauge/gravity duality and the first principle for the duality is the equivalence principle
for the electromagnetic force.
The AdS/CFT correspondence \cite{ads-cft1,ads-cft2,ads-cft3} is a well-tested gauge/gravity duality
and a typical example of emergent gravity and emergent space.
But we do not understand yet why the duality should work.
We argued that the AdS/CFT correspondence is a particular example of emergent gravity
from NC $U(1)$ gauge fields and the duality between large $N$ gauge fields and
a higher-dimensional gravity is simply a consequence of the novel equivalence principle
for the electromagnetic force. We note \cite{mypaper,hsy-jhep09} that the emergent gravity
from NC $U(1)$ gauge fields is an inevitable conclusion as far as spacetime admits a symplectic structure,
in other words, a microscopic spacetime becomes NC.
Moreover the emergent gravity is much more general than the AdS/CFT correspondence
because it holds for general background spacetimes as exemplified by the identity \eq{dbi-ccc}.
Therefore we believe that the emergent gravity from NC gauge fields provides a lucid avenue
to understand the gauge/gravity duality or large $N$ duality.
For example, it is interesting to notice that the transformation \eq{rexp-dbi} between NC $U(1)$
gauge fields and an emergent gravitational metric holds even locally.
Thus one may imagine an (infinitesimal) open patch $U$ where the field strength $F_U$ of
fluctuating $U(1)$ gauge fields has a maximal rank such that $(U, F_U)$ is a symplectic
Darboux chart. Then one can apply the Darboux theorem on the local patch
to transform the local $U(1)$ gauge fields into a corresponding local spacetime geometry supported on $U$.
But this local geometry is not yet mature enough to materialize into a classical spacetime geometry.
Hence this kind of immature geometry describes a bubbling geometry or spacetime foam which
intrinsically corresponds to a quantum geometry. We may even consider fluctuating $U(1)$ gauge fields
on a local patch $U$ whose field strengths $F_U$ do not have maximal rank.
The dimension of emergent bubbling geometry will be determined by the rank of $F_U$ on $U$.
This implies that the dimension of quantum geometries is not fixed but fluctuates.
This picture is in a sense a well-known folklore in quantum gravity.
Then one may raise the question of why NC spacetime reproduces all the results in string theory.
The connection between string theory and symplectic geometry becomes most manifest through
Gromov's $J$-holomorphic curves. See section 7 in Ref. \cite{mypaper} for this discussion.
The $J$-holomorphic curve for a given symplectic structure is nothing but the minimal worldsheet
in string theory embedded in a target spacetime. Moreover $\alpha'$-corrections in string theory
correspond to derivative corrections in NC gauge theory. In this sense the string theory can be
regarded as a stringy realization of symplectic geometry or more generally Poisson geometry.
But the NC spacetime provides a more elegant framework for the background independent formulation
of quantum gravity in terms of matrix models \cite{hsy-jhep09,hsy-jpcs12} which is still elusive
in string theory.
We showed that the worldvolume effective action of a supersymmetric probe D3-brane in $AdS_5
\times \mathbb{S}^5$ geometry can be directly derived from the four-dimensional $\mathcal{N} = 4$
supersymmetric Yang-Mills theory on the Coulomb branch defined by the NC space \eq{extra-nc2n}.
Since our result, for example, described by the identity \eq{dbi-ccc} should be true for general
$U(1)$ gauge fields in an arbitrary background geometry, the remaining problem is to identify
a corresponding dual (super)gravity whose solution coincides with the emergent
metric $\mathfrak{G}_{MN}$. One may use the method in Refs. \cite{dbisugra1,dbisugra2}
to attack this problem. See also \cite{dbisugra3}. It was shown there that the worldvolume
effective action of a probe D3-brane is a solution to the Hamilton-Jacobi equation of
type IIB supergravity defined by the ADM formalism adopting the radial coordinate as time
for type IIB supergravity reduced on $\mathbb{S}^5$. In particular the radial time corresponds to
the vev of the Higgs field in the dual Yang-Mills theory, as in our case.
It will be interesting to find the relation between the DBI action obtained in
Refs. \cite{dbisugra1,dbisugra2} and the HEA derived in this paper.
Also there are several works \cite{cfs-p4,hea-col1,hea-col2,hea-col3,ferrari} to address
the relation of the HEA with the low-energy effective actions of $\mathcal{N}=4$
super Yang-Mills theory on the Coulomb branch. Thus it may be a vital project to understand
any relation between our approach based on the Coulomb branch defined by the NC space
and other approaches for the HEA cited above.
Recently there have been some developments \cite{asakawa-gcg,schupp-gcg} that describe D-branes
in the framework of generalized geometry. A D-brane including fluctuations in a static gauge
is identified with a leaf of the foliation generated by the Dirac structure of a generalized tangent
bundle, and the scalar fields and vector fields on the D-brane are unified as
a generalized connection \cite{asakawa-gcg}. It was also argued in \cite{schupp-gcg} that
the equivalence between commutative and NC DBI actions is naturally encoded in the generalized geometry
of D-branes. In particular, when considering a D-brane as a symplectic leaf of the Poisson structure,
describing the noncommutativity, the SW map is naturally interpreted in terms of the corresponding
Dirac structure. Thus NC gauge theories can be naturally interpreted within generalized geometry.
Since the Darboux transformation relating the deformation of a symplectic structure
with diffeomorphism symmetry is one of the pillars for emergent gravity, we think that the emergent
gravity from NC gauge fields can be formulated in a natural way within the framework of generalized geometry.
It will be interesting to inquire further into this idea.
\section*{Acknowledgments}
The author thanks Hikaru Kawai and Shinji Shimasaki for warm hospitality and helpful discussions
during his visit to Kyoto University where a part of the work was done.
This work was supported by the National Research Foundation of Korea (NRF) grant funded
by the Korea government (MOE) (No. 2011-0010597).
This work was also supported by the National Research Foundation of Korea (NRF) grant funded
by the Korea government (MSIP) through the Center for Quantum Spacetime (CQUeST) of Sogang University
with grant number 2005-0049409.
\section{Introduction}
In light-by-light scattering processes, nonlinearities of the electromagnetic interaction have been discovered and experimentally confirmed \cite{ATLAS:2017fur}. The exploration of nonlinear electrodynamics (NLE) theories in the framework of general relativity (GR) was inspired by attempts to resolve the singularity problem of black holes. Usually the NLE theories are derived from Lagrangians constructed with the quadratic electromagnetic invariants $F_{\mu\nu} F^{\mu\nu}$ and $F_{\mu\nu} { }^{*}F^{\mu\nu}$. There is an NLE depending only on the invariant $F_{\mu\nu} F^{\mu\nu}$ proposed by Max Born in \cite{Born:1934ji}, albeit the one depending on both invariants proposed in \cite{Born:1934gh}, known as the Born--Infeld theory, which preserves the electromagnetic duality invariance of the Maxwell equations, seems to be more prevalent. Moreover, a one-loop QED correction to Maxwell's theory incorporating vacuum polarization effects was derived by Heisenberg and Euler \cite{Heisenberg:1936nmg}. However, none of these preserve the conformal invariance enjoyed by the Maxwell theory. Recently, a nonlinear extension of the source-free Maxwell electrodynamics which preserves both conformal invariance and $SO(2)$ electromagnetic duality was proposed in \cite{Bandos:2020jsw,Kosyakov:2020wxv}. Following the convention used in \cite{BallonBordo:2020jtw}, in this paper we call it the {\it{conformal electrodynamics}}, though in \cite{Bandos:2020jsw,Flores-Alfonso:2020euz,Bokulic:2021dtz} it is also named the {\it{ModMax theory}}. The conformal electrodynamics is characterized by a dimensionless parameter $\gamma$, with the Maxwell theory being recovered in the $\gamma\to 0$ limit. For $\gamma>0$ the polarization mode is subluminal and for $\gamma<0$ it is superluminal, so in this paper we consider the former case.
The no-hair theorem states that a black hole holds no hairs other than mass, electric charge, and angular momentum \cite{Ruffini:1971bza}. A minimally coupled scalar hair cannot be supported in the Einstein-scalar theory \cite{Herdeiro:2015waa}; however, as a counterexample, the Bocharova-Bronnikov-Melnikov-Bekenstein (BBMB) black hole in the Einstein-conformal scalar vacuum carries a scalar hair \cite{bocharova1970exact} that is unbounded at the horizon but not physically troublesome \cite{bekenstein1975black}. The Einstein-Maxwell-conformally coupled scalar (EMCS) theory has attracted much attention in the community \cite{Anabalon:2012tu,Simovic:2020dke,Zou:2020zxq}, which, besides the BBMB black hole, also admits a three-dimensional black hole with the scalar field being regular everywhere \cite{Martinez:1996gn} and a scalar hairy black hole with a constant scalar field \cite{Astorino:2013sfa}. For the latter, the emission rate of charged particles was investigated in \cite{Chowdhury:2019uwi}, the weak cosmic censorship conjecture was tested in \cite{Jiang:2020btc}, and it turned out to be stable against full perturbations \cite{Chowdhury:2018pre,Zou:2019ays}.
Recently, soon after the proposal of the conformal electrodynamics, its applications in Einstein gravity were put forward \cite{BallonBordo:2020jtw,Flores-Alfonso:2020euz}. It was shown that there is a screening factor that shields the actual charges of the black hole, and the electric and magnetic fields change qualitatively compared to the Einstein-Maxwell counterpart. In this paper, we will seek a Taub-NUT-like black hole solution in the Einstein-conformal electromagnetic system with a conformally coupled scalar field, in which the Maxwell field of the EMCS theory is replaced by the conformal electrodynamic field. The reason we choose the Taub-NUT-like spacetime is that it provides a representative candidate for which magnetic fields emerge and the conformal electrodynamics becomes non-trivial. We want to investigate how the conformal electrodynamics and the conformally coupled scalar field interact to modify the Einstein gravity. One may also wonder how the conformal electrodynamics deviates from linear electrodynamics \cite{BallonBordo:2020jtw} in the situation where the conformally coupled scalar field is added. Thus we will exhibit the strong gravitational effects of the solution by investigating the innermost stable circular orbits (ISCOs) of charged massive particles around the black hole as well as by studying the shadow of the object. To these ends, the NUTty dyons solution together with its thermodynamics will be given in Sec. \ref{sec1}. The gravitational effects of the black hole will be elaborated in Sec. \ref{sec2}, where the ISCOs of charged massive particles will be studied, and in Sec. \ref{sec3}, where the shadow of the black hole will be explored. Throughout this paper the units are chosen to be ${\rm{c}}={\rm{\hbar}}={\rm{G}}=1$. Notice that in this paper ${\rm{e}}$ denotes Euler's number while $e$ is the electric charge parameter.
\section{Conformally scalar NUTty dyons solution in conformal electrodynamics}\label{sec1}
\subsection{Solution}
The theory we consider consists of the Einstein gravity, conformally coupled scalar field, and conformal electrodynamics, whose bulk action reads
\begin{equation}\label{act}
I=\int {\rm{d}}^{4} x\, \mathcal{L}=I_{{\rm{G}}}+I_{{\rm{CS}}}+I_{{\rm{CE}}},
\end{equation}
where
\begin{equation}
I_{{\rm{G}}}=\frac{1}{2 \kappa} \int {\rm{d}}^{4} x \sqrt{-g}R,
\end{equation}
\begin{equation}
I_{{\rm{CS}}}=-\frac{1}{2} \int {\rm{d}}^{4} x \sqrt{-g}\left[g^{\mu \nu} \nabla_{\mu} \Psi \nabla_{\nu} \Psi+\xi_{D} R\Psi^{2}\right],
\end{equation}
\begin{equation}
I_{{\rm{CE}}}=-\frac{1}{4\pi}\int {\rm{d}}^4 x\sqrt{-g}\,\mathcal{L}_{{\rm{CE}}},
\end{equation}
with
\begin{equation}
\mathcal{L}_{{\rm{CE}}}=-\frac{1}{2}\left(\mathcal{S} \cosh \gamma-\sqrt{\mathcal{S}^{2}+\mathcal{P}^{2}} \sinh \gamma\right),
\end{equation}
\begin{equation}
\mathcal{S}\equiv\frac{1}{2} F_{\mu \nu} F^{\mu \nu}, \quad \mathcal{P}\equiv\frac{1}{2} F_{\mu \nu}({ }^{*}F)^{\mu \nu},
\end{equation}
$\kappa=8\pi$, $R$ is the Ricci scalar, $\Psi$ is the conformally coupled scalar field, and the electromagnetic field strength $F_{\mu \nu}$ is given by $F_{\mu \nu}=\nabla_{\mu} A_{\nu}-\nabla_{\nu} A_{\mu}$, with $A_\mu$ the vector potential. $\mathcal{S}$ and $\mathcal{P}$ are gauge-invariant Lorentz invariants of the electromagnetic field, which both vanish in the trivial Minkowski vacuum. $\gamma$ is a dimensionless parameter characterizing the NLE. When $\gamma=0$, $\mathcal{L}_{{\rm{CE}}}$ reduces to the Maxwell theory; as $\gamma$ increases, the extent of the NLE's deviation from the Maxwell theory also increases. The value of $\xi_D$ is chosen to be $\xi_{D}=(D-2) /(4D-4)$ with $D=4$ the spacetime dimension, such that $I_{{\rm{CS}}}$ together with the equation of motion for the scalar field is invariant under the conformal transformations
\begin{equation}\label{ctmet}
g_{\mu \nu} \rightarrow \Omega^{2} g_{\mu \nu}, \Psi \rightarrow \Omega^{1-D / 2} \Psi,
\end{equation}
with $\Omega$ a transformation function; this is the reason the scalar is said to be conformally coupled, though the full action is not necessarily conformally invariant \cite{Martinez:1996gn,Cunha:2016bpi}.
$\mathcal{L}_{{\rm{CE}}}$ possesses both $SO(2)$ duality-rotation (or electromagnetic duality) invariance and conformal invariance, and it is a generalization of the Maxwell theory. To see this, consider the Euler–Lagrange equation and the Bianchi identity
\begin{equation}\label{feq1}
\nabla_{\mu} E^{\mu \nu}=0,
\end{equation}
\begin{equation}\label{feq2}
\nabla_{\mu}{ }^{*} F^{\mu \nu}=0,
\end{equation}
where the strength tensor is defined by
\begin{equation}
E_{\mu \nu}=\frac{\partial \mathcal{L}}{\partial F^{\mu \nu}}=2\left(\mathcal{L}_{\mathcal{S}} F_{\mu \nu}+\mathcal{L}_{\mathcal{P}}\, { }^{*} F_{\mu \nu}\right),
\end{equation}
with
\begin{equation}
{\cal L_S}= \pdv{\mathcal{L}}{\mathcal{S}}
=\frac{1}{2}\left(\frac{\cal S}{\sqrt{\mathcal{S}^2 + \mathcal{P}^2}}\sinh \gamma-\cosh\gamma\right),
\end{equation}
\begin{equation}
{\cal L_P}=\pdv{\mathcal{L}}{\mathcal{P}}
=\frac{1}{2}\frac{\cal P}{\sqrt{\mathcal{S}^2 + \mathcal{P}^2}}\sinh \gamma\,.
\end{equation}
Under the electromagnetic duality rotation, we have
\begin{equation}
E_{\mu \nu}^{\prime}=E_{\mu \nu} \cos \theta+{ }^{*} F_{\mu \nu} \sin \theta,
\end{equation}
\begin{equation}
{ }^{*} F_{\mu \nu}^{\prime}={ }^{*} F_{\mu \nu} \cos \theta-E_{\mu \nu} \sin \theta,
\end{equation}
which means that the pair $(E_{\mu \nu},\,{ }^{*} F_{\mu \nu})$ rotates as an $SO(2)$ doublet, leaving the field equations invariant. On the other hand, under the conformal transformation (\ref{ctmet}), the field equations (\ref{feq1}) and (\ref{feq2}) are also invariant, as ${ }^{*}F\to { }^{*}F$ and $E\to E$.
Varying the action (\ref{act}) with respect to the metric $g^{\mu\nu}$ and the scalar field $\Psi$, we obtain
\begin{equation}\label{eosmet}
R_{\mu \nu}-\frac{R}{2} g_{\mu \nu}=\kappa\left(T_{\mu \nu}^{(S)}+T_{\mu \nu}^{(E M)}\right),
\end{equation}
\begin{equation}\label{eossc}
\square \Psi-\frac{1}{6} R \Psi=0,
\end{equation}
where we denoted $\square=\nabla_\mu\nabla^\mu$, the energy-momentum tensor of the scalar field is
\begin{equation}
\begin{aligned}
T_{\mu \nu}^{(S)}=&\nabla_{\mu} \Psi \nabla_{\nu} \Psi-\frac{1}{2} g_{\mu \nu} \nabla_{\sigma} \Psi \nabla^{\sigma} \Psi\\&+\frac{1}{6}\left[g_{\mu \nu} \square-\nabla_{\mu} \nabla_{\nu}+G_{\mu \nu}\right] \Psi^{2},
\end{aligned}
\end{equation}
and the traceless stress-energy tensor of the conformal electromagnetic field is
\begin{equation}
\begin{aligned}
T_{\mu\nu}^{{\rm{(EM)}}}&=-\frac{1}{4\pi}\left(2F_{\mu\sigma}F_{\nu}{}^\sigma {\cal L_S}+{\cal P}{\cal L_P} g_{\mu\nu}-{\cal L}g_{\mu\nu}\right)\\&=\frac{1}{4\pi}\left({\cal S} g_{\mu\nu}-2F_{\mu\sigma}F_{\nu}{}^\sigma \right){\cal L_S},
\end{aligned}
\end{equation}
where the criterion for conformal invariance
\begin{equation}
\mathcal{L}_{\mathcal{S}} \mathcal{S}+\mathcal{L}_{\mathcal{P}} \mathcal{P}=\mathcal{L}
\end{equation}
was used in the second step \cite{kosyakov2007introduction}.
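This criterion is easy to verify for the Lagrangian at hand; the following short symbolic check (a sketch assuming the {\tt sympy} library; the variable names are ours) confirms it.
\begin{verbatim}
# Symbolic check (sympy assumed) of the conformal-invariance criterion
# L_S*S + L_P*P = L for the conformal (ModMax) Lagrangian.
import sympy as sp

S, P, gamma = sp.symbols('S P gamma', real=True)
L = -sp.Rational(1, 2)*(S*sp.cosh(gamma) - sp.sqrt(S**2 + P**2)*sp.sinh(gamma))

criterion = sp.diff(L, S)*S + sp.diff(L, P)*P - L
print(sp.simplify(criterion))  # expected output: 0
\end{verbatim}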
We take the metric ansatz
\begin{equation}\label{metric}
ds^2 ={ -f\left[{\rm{d}}t + 2n \cos\theta {\rm{d}} \phi\right]^2}+\frac{{\rm{d}}r^2}{f} + (r^2+n^2){\rm{d}} \Omega_2^{2}\,,\
\end{equation}
with $f=f(r)$ the blackening factor, $n$ the NUT parameter, ${\rm{d}}\Omega_2^2$ the metric on the unit sphere, and the electromagnetic potential
\begin{equation}\label{epa}
A = a\left({\rm{d}}t + 2n \cos\theta {\rm{d}}\phi\right)\,,
\end{equation}
where $a=a(r)$. We will seek solutions for $f(r)$ and $a(r)$ in the theory described by the action (\ref{act}), which gives the equations of motion (\ref{feq1}) and (\ref{feq2}) for the conformal electromagnetic field, as well as those for the spacetime and the scalar field in (\ref{eosmet}) and (\ref{eossc}).
For the electromagnetic field under the spacetime ansatz, we can obtain the following quantities
\begin{equation}
F = -a'\, {\rm{d}}t\wedge {\rm{d}}r + 2na' \cos\theta\, {\rm{d}}r\wedge {\rm{d}}\phi
- 2n a\sin\theta\, {\rm{d}}\theta\wedge {\rm{d}}\phi,
\end{equation}
\begin{equation}
\begin{aligned}
{ }^{*}F= &{-}\frac{2n a}{n^2+r^2}{\rm{d}}t\wedge {\rm{d}}r {+}\frac{4n^2a}{n^2 + r^2} \cos\theta {\rm{d}}r\wedge {\rm{d}}\phi\\&+(n^2 + r^2)a' \sin\theta {\rm{d}}\theta \wedge {\rm{d}}\phi,
\end{aligned}
\end{equation}
\begin{equation}
{\cal S}= -{ a'^2}+\frac{4 n^2 a^2}{(n^2+r^2)^2},
\end{equation}
\begin{equation}
{\cal P} = {-}\frac{4 n a a'}{n^2+r^2},
\end{equation}
\begin{equation}
\begin{aligned}
E =&{a' {\rm{e}}^{\gamma}}{\rm{d}}t\wedge {\rm{d}}r {-} 2n a'{\rm{e}}^\gamma \cos\theta {\rm{d}}r\wedge {\rm{d}}\phi\\\quad&+2n a {\rm{e}}^{-\gamma} \sin\theta {\rm{d}}\theta \wedge {\rm{d}}\phi\,,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
{ }^{*}E =& {\frac{2an {\rm{e}}^{-\gamma}}{n^2+r^2} {\rm{d}}t\wedge {\rm{d}}r -\frac{4n^2a {\rm{e}}^{-\gamma}}{n^2+r^2}\cos\theta {\rm{d}}r\wedge {\rm{d}}\phi}\\ \quad&{-(n^2+r^2)a'{\rm{e}}^\gamma \sin\theta} {\rm{d}}\theta \wedge {\rm{d}}\phi\,,
\end{aligned}
\end{equation}
where the ${}^{\prime}$ denotes derivative with respect to $r$. Then the field equation gives
\begin{align}
-{\rm{e}}^{\gamma } \Bigl[\left(n^2+r^2\right) a''+2 r a'\Bigr]-\frac{4 {\rm{e}}^{-\gamma } n^2 a}{n^2+r^2}=0\,,
\end{align}
from which we obtain the general solution for $a$,
\begin{equation}
a(r) = c_1 \sin\left(2{\rm{e}}^{-\gamma} \arctan\frac{r}{n}\right)+c_2 \cos\left(2{\rm{e}}^{-\gamma} \arctan\frac{r}{n}\right),
\end{equation}
where $c_1$ and $c_2$ are integration constants fixed by the asymptotic conditions
\begin{equation}\label{asye}
\lim_{r\to\infty} q_e = e= \frac{1}{4\pi} \int_{\infty}{ }^{*}E,
\end{equation}
\begin{equation}\label{asym}
\lim_{r\to\infty} q_m = -2n g=\frac{1}{4\pi}\int_{\infty} F,
\end{equation}
meaning that the asymptotic electric charge and magnetic charge are $e$ and $-2n g$, respectively. As a result, we get the values of the constants $c_1$ and $c_2$, dependent on the asymptotic charges, as
\begin{align}
c_1 &= -g \cos\left(2 {\rm{e}}^{-\gamma}\pi\right)- \frac{ e \cos(2 {\rm{e}}^{-\gamma}\pi)}{2n}\,,\\
c_2 &= -g \sin\left(2 {\rm{e}}^{-\gamma}\pi\right)+ \frac{ e \sin(2 {\rm{e}}^{-\gamma}\pi)}{2n}\,.
\end{align}
Thus, the electromagnetic gauge potential is
\begin{equation}
\begin{aligned}
a =& -g\cos\Bigl[ {\rm{e}}^{-\gamma}\left(\pi-2\arctan\frac{r}{n}\right)\Bigr]\\&- \frac{e}{2n} \sin\Bigl[ {\rm{e}}^{-\gamma} \left(\pi - 2\arctan\frac{r}{n}\right)\Bigr]\,.
\end{aligned}
\end{equation}
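As a consistency check, one can verify symbolically that this potential solves the radial field equation and reproduces the asymptotic electric charge of Eq. (\ref{asye}). A minimal sketch (assuming the {\tt sympy} library; the helper names are ours) reads:
\begin{verbatim}
# Check (sympy assumed) that a(r) solves the radial field equation and that
# e^gamma (n^2+r^2) a'(r) -> e as r -> infinity (asymptotic condition above).
import sympy as sp

r, n, g, e, gamma = sp.symbols('r n g e gamma', positive=True)
w = sp.exp(-gamma)*(sp.pi - 2*sp.atan(r/n))
a = -g*sp.cos(w) - e/(2*n)*sp.sin(w)

field_eq = -sp.exp(gamma)*((n**2 + r**2)*sp.diff(a, r, 2) + 2*r*sp.diff(a, r)) \
           - 4*sp.exp(-gamma)*n**2*a/(n**2 + r**2)
print(sp.simplify(field_eq))  # expected: 0

q_e = sp.exp(gamma)*(n**2 + r**2)*sp.diff(a, r)
print(sp.limit(q_e, r, sp.oo))  # expected: e
\end{verbatim}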
From Eq. (\ref{eossc}), we have
\begin{equation}
\frac{3}{2} \Psi\left[\frac{2 \left(-1+f+2 r f^{\prime}\right)}{n^{2}+r^{2}}+f^{\prime \prime}\right]=0,
\end{equation}
for which either $\Psi=0$ or
\begin{equation}
f=1+\frac{c_{3}}{n^{2}+r^{2}}+\frac{r c_{4}}{n^{2}+r^{2}}
\end{equation}
solves it. As the former solution is trivial, we consider only the latter. The specific values of $c_3$ and $c_4$ are fixed by Eq. (\ref{eosmet}), which yields
\begin{equation}\label{blf}
f(r)=\frac{r^{2}-n^{2}}{n^{2}+r^{2}}+\frac{{\rm{e}}^{-\gamma}\left(e^{2}+4 g^{2} n^{2}\right)\left(e^{2}+\alpha\right)}{e^{2}\left(n^{2}+r^{2}\right)}-\frac{2 m r}{n^{2}+r^{2}},
\end{equation}
\begin{equation}
\Psi=\sqrt{\frac{\alpha}{12\pi\left(e^{2}+\alpha\right)}},
\end{equation}
where $\alpha$ is the conformal scalar parameter, rendering the scalar field $\Psi$ constant. This scalar hair does not vanish even when the electric charge is absent. Notice that in this paper we will only consider a real scalar field, so that $\alpha>0$ or $\alpha<-e^2$. In the former parameter range the black hole tends to be Reissner-Nordström-like, while in the latter it tends to be Schwarzschild-like; in the $\gamma\to 0$ and $n\to 0$ limit, this solution reduces to the one in the Maxwell case obtained in \cite{Astorino:2013sfa}.
\subsection{Cohomogeneity Thermodynamics}
Thermodynamics of black holes with NUT charge has been studied recently in \cite{Mann:2020wad,Abbasvandi:2021nyv,Hennigar:2019ive,Bordo:2019slw,BallonBordo:2020mcs,BallonBordo:2019vrn,Awad:2020dhy}, and especially in \cite{BallonBordo:2020jtw} for the Taub-NUT solution in the Einstein case with conformal electrodynamics; these are the main references for our study here. The event horizon of the NUTty dyon black hole generated by the Killing vector $\xi=\partial_t$ is located at
\begin{equation}
r_+ =m+\frac{\sqrt{{\rm{e}}^{\gamma}\left(m^{2}+n^{2}\right)-\alpha-4 g^{2} n^{2}-e^{2}-4 g^{2} n^{2} \alpha/e^{2}}}{{\rm{e}}^{\gamma / 2}}.
\end{equation}
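Numerically, one can spot-check that this expression is indeed a root of the blackening factor; a small sketch (assuming the {\tt numpy} library; the parameter values are arbitrary test values) is:
\begin{verbatim}
# Spot check (numpy assumed) that the quoted r_+ is a root of f(r);
# the parameter values below are arbitrary test values.
import numpy as np

m, n, e, g, alpha, gamma = 1.0, 0.3, 0.5, 0.4, 0.2, 0.1

def f(r):
    return ((r**2 - n**2)
            + np.exp(-gamma)*(e**2 + 4*g**2*n**2)*(e**2 + alpha)/e**2
            - 2*m*r)/(n**2 + r**2)

rp = m + np.sqrt(np.exp(gamma)*(m**2 + n**2) - alpha - 4*g**2*n**2
                 - e**2 - 4*g**2*n**2*alpha/e**2)/np.exp(gamma/2)
print(f(rp))  # expected: ~0 up to round-off
\end{verbatim}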
The temperature, entropy, and mass of the black hole can be obtained as
\begin{equation}
\begin{aligned}
T&=\left.\frac{1}{4\pi}\frac{{\rm{d}}f(r)}{{\rm{d}}r}\right|_{r=r_+}\\&=\frac{1}{4 \pi r_{+}}\left[1-\frac{{\rm{e}}^{-\gamma}\left(4 g^{2} n^{2}+\alpha\right)}{n^{2}+r_{+}^{2}}-\frac{{\rm{e}}^{-\gamma}\left(e^{4}+4 g^{2} n^{2} \alpha\right)}{e^{2}\left(n^{2}+r_{+}^{2}\right)}\right],
\end{aligned}
\end{equation}
\begin{equation}
S=-2 \pi \oint {\rm{d}}^{2} x \sqrt{\hat{h}} \frac{\partial \mathcal{L}}{\partial R_{a b c d}} \hat{\epsilon}_{a b} \hat{\epsilon}_{c d}=\frac{\pi e^{2}(r_+^2+n^2)}{e^{2}+\alpha},
\end{equation}
\begin{equation}
M=\frac{e^2 m}{e^2+\alpha},
\end{equation}
where $\hat{\epsilon}_{a b}$ is a normal bivector satisfying $\hat{\epsilon}_{a b} \hat{\epsilon}^{a b}=-2$, and $\hat{h}$ is the determinant of the line element induced from $g_{\mu\nu}$ on the hypersurface $t={\rm{const.}}$, $r=r_{+}$. The mass can be obtained by the Euclidean method \cite{Martinez:1996gn,Ashtekar:2003jh}, as we will show in what follows.
As mentioned above in Eqs. (\ref{asye}) and (\ref{asym}), the asymptotic electric and magnetic charges of the black hole are
\begin{equation}
Q=e, \quad Q_{m}=-2 g n,
\end{equation}
and they are related by the electromagnetic duality
\begin{equation}\label{duality}
e \leftrightarrow-2 n g, \quad 2 n g \leftrightarrow e.
\end{equation}
At the event horizon, the charges become
\begin{equation}
Q_e^{+}=q_{e}\left(r_{+}\right)={\rm{e}}^{\gamma}\left(n^{2}+r_{+}^{2}\right) a^{\prime}\left(r_{+}\right),
\end{equation}
\begin{equation}
Q_{m}^{+}=q_{m}\left(r_{+}\right)=2 n a\left(r_{+}\right).
\end{equation}
The gauge electric potential can be calculated by contracting the Killing vector with the vector potential,
\begin{equation}
\begin{aligned}
\varphi&=-\left(\left.\xi_{\mu} A^{\mu}\right|_{r=r_{+}}-\left.\xi_{\mu} A^{\mu}\right|_{r\to\infty}\right)\\&=-a\left(r_{+}\right)-g \\
&=g\left[\cos \left({\rm{e}}^{-\gamma}\left(\pi-2 \arctan \frac{r_{+}}{n}\right)\right)-1\right] \\
&\quad+\frac{e}{2 n} \sin \left[{\rm{e}}^{-\gamma}\left(\pi-2 \arctan \frac{r_{+}}{n}\right)\right] .
\end{aligned}
\end{equation}
The magnetic potential
\begin{equation}
\begin{aligned}
\varphi_{m}=& \frac{e}{2 n}\left(\cos \left({\rm{e}}^{-\gamma}\left(\pi-2 \arctan \frac{r_{+}}{n}\right)\right)-1\right) \\
&-g \sin \left({\rm{e}}^{-\gamma}\left(\pi-2 \arctan \frac{r_{+}}{n}\right)\right)
\end{aligned}
\end{equation}
follows directly from the electric potential by using the electromagnetic duality (\ref{duality}).
The Gibbs free energy can be obtained by the Euclidean action \cite{Lee:2018hrd,Bueno:2018xqc,Sebastiani:2017rxr,Monteiro:2009tc,Mann:2020wad}
\begin{equation}
\begin{aligned}
\mathcal{I}=& I+I_{\mathrm{GH}} \\
=&- \frac{1}{16 \pi} \int_{M} {\rm{d}}^{4} x \sqrt{-g}R \\
&-\frac{1}{4 \pi} \int {\rm{d}}^{4} x \sqrt{-g} \mathcal{L}_{{\rm{CE}}}\\& -\frac{1}{8 \pi} \int_{\partial M} {\rm{d}}^{3} x \sqrt{-h}(1-8\pi\xi_D\Psi^2)\left(K-K_0\right),
\end{aligned}
\end{equation}
where $K_0=2/r$ is the extrinsic curvature of the background flat spacetime. Notice that the Wick rotations $n\to -i n,\,e\to -i e,\,g\to -i g$ (or $n\to i n,\,e\to i e,\,g\to i g$) should be performed to calculate the action, and finally the reverse procedure should be carried out (for the $\mathcal{L}_{{\rm{CE}}}$ term, one can first directly calculate $\mathcal{L}_{{\rm{CE}}}$, then Wick rotate to perform the integral, and finally rotate back). Then we have the specific expression for the Gibbs energy,
\begin{equation}
\begin{aligned}
G&=\mathcal{I}/\beta\\&=\frac{e\left(e^{2} g+e m+g \alpha\right)}{2\left(e^{2}+\alpha\right)}\\&\quad-\frac{1}{2} eg \cos \left[2 {\rm{e}}^{-\gamma}\left(\pi-2 \operatorname{arctan}\frac{r_+}{n}\right)\right]\\&\quad+\frac{1}{8} \left(4 g^2 n-\frac{e^2}{n}\right) \sin \left[2 {\rm{e}}^{-\gamma } \left(\pi -2 \arctan\frac{r_+}{n}\right)\right],
\end{aligned}
\end{equation}
where $\beta$ is the inverse of the temperature. The Gibbs function satisfies
\begin{equation}
\begin{aligned}
G=M-T S-\varphi Q-\psi N,
\end{aligned}
\end{equation}
where $N$ is the Misner charge conjugate to the Misner potential $\psi$. The conformal scalar, though it is a primary hair, does not enter the first law of the black hole, which reads
\begin{equation}
\delta G=-S \delta T-N \delta \psi-Q \delta \varphi+\varphi_{m} \delta Q_{m}^{+}.
\end{equation}
After taking the Misner potential $\psi$ as
\begin{equation}
\psi=\frac{\kappa_\pm}{4\pi}=\frac{1}{8\pi n},
\end{equation}
where $\kappa_\pm$ are the surface gravities corresponding to the Killing vectors
\begin{equation}
k_\pm=\partial_t\pm\frac{1}{2n}\partial_\phi,
\end{equation}
the integrated Smarr relation for the black hole can then be written as
\begin{equation}\label{smax}
M=2 T S+\varphi Q+\varphi_{m} Q_{m}^{+}+2 \psi N.
\end{equation}
Note that $\psi$ can also be given the physical interpretation of the angular velocity of the Misner string, as discussed in \cite{Durka:2019ajz,Clement:2019ghi}. The quantity $N$ conjugate to this angular velocity is then interpreted as the string angular momentum. By the method of Komar integration proposed in \cite{Clement:2019ghi}, an alternative Smarr relation can be derived, with a ``reduced string angular momentum''. But one can prove that it reduces to Eq. (\ref{smax}) by identifying the string angular velocity with the Misner potential and the string angular momentum with the Misner charge \cite{BallonBordo:2020mcs,Clement:2019ghi,BallonBordo:2019vrn}.
In the above, we have not set $a(r_+)=0$. Correspondingly, the electromagnetic potential $A$ does not vanish at the horizon, and neither does the magnetic charge. If instead the regularity condition $A(r_+)=0$ is imposed, as in the Einstein case, we have the electric first law
\begin{equation}
\delta M=T \delta S+\varphi \delta Q+\psi \delta N,
\end{equation}
together with the supplementary Smarr relation
\begin{equation}
M=2(T S+\psi N)+\varphi Q.
\end{equation}
In this situation the magnetic parameter is encoded into the electric parameter by the relation
\begin{equation}
g=-\frac{e}{2 n} \tan \left[{\rm{e}}^{-\gamma}\left(\pi-2 \arctan \frac{r_{+}}{n}\right)\right].
\end{equation}
\section{Circular motions of massive particles around the NUTty dyons}\label{sec2}
In this section, we will study the effects of the NLE dimensionless parameter $\gamma$ and the conformal scalar parameter $\alpha$ on the motion of charged massive particles, with emphasis on circular motion around the NUTty dyons and on the ISCO of the particles. Our investigations in this section benefit from Refs. \cite{Carter:1968rr,Cebeci:2015fie,Lim:2021ejg,Lim:2020bdj}. The Lagrangian describing the motion of a charged massive particle reads
\begin{equation}
\mathcal{L}=\frac{1}{2} g_{\mu \nu} \dot{x}^{\mu} \dot{x}^{\nu}+q A_{\mu} \dot{x}^{\mu},
\end{equation}
where the overdot denotes the derivative with respect to the affine parameter $\lambda$, which is related to the proper time through $\tau=\mu\lambda$, and $q$ is the charge of the particle. The normalization condition for the charged particle can thus be written as
\begin{equation}
g_{\mu \nu} \dot{x}^{\mu} \dot{x}^{\nu}=-\mu^{2},
\end{equation}
where $\mu=0,\,1$ for a massless photon and a massive particle, respectively. The momentum of the charged massive particle is
\begin{equation}
P_{\mu}=\frac{\partial \mathcal{L}}{\partial \dot{x}^{\mu}}=g_{\mu \nu} \dot{x}^{\nu}+q A_{\mu},
\end{equation}
and the Hamiltonian can be obtained as
\begin{equation}
H=P_{\mu} \dot{x}^{\mu}-\mathcal{L}=\frac{1}{2} g^{\mu \nu}\left(P_{\mu}-q A_{\mu}\right)\left(P_{\nu}-q A_{\nu}\right).
\end{equation}
To solve the equation of motion for the charged massive particle, we employ the Hamilton-Jacobi method, with the Hamilton-Jacobi equation written as
\begin{equation}\label{hj1}
\frac{\partial S}{\partial \lambda}=H=\frac{1}{2} g^{\mu \nu}\left(P_{\mu}-q A_{\mu}\right)\left(P_{\nu}-q A_{\nu}\right),
\end{equation}
where $S$ is the Jacobian action which can be written in the variable-separated form as
\begin{equation}\label{hj2}
S=-\frac{1}{2} \lambda+J \phi-E t+S_{r}(r)+S_{\theta}(\theta),
\end{equation}
with $J=P_\phi$ and $E=-P_t$ the angular momentum and the energy of the charged particle measured at spatial infinity, which are constants of motion due to the symmetries of the geometry. $S_r(r)$ and $S_\theta (\theta)$ are functions of $r$ and $\theta$ to be determined. With the help of Eqs. (\ref{hj1}) and (\ref{hj2}), the separated equations fulfilled by $S_r$ and $S_\theta$ read
\begin{equation}
-f(r) [{\rm{d}}S_r (r)/{\rm{d}}r]^2=\frac{K}{n^2+r^2}-\frac{[q a(r)+E]^2}{f(r)}+\mu^2,
\end{equation}
\begin{equation}
[{\rm{d}}S_\theta (\theta)/{\rm{d}}\theta]^2=K-\left[J \csc (\theta )+2 E n \cot (\theta )\right]^2,
\end{equation}
where $K$ is a separation constant.
\begin{figure*}[htpb!]
\begin{center}
\includegraphics[width=3.4in,angle=0]{rWithVaryingGamma.pdf}
\includegraphics[width=3.5in,angle=0]{thetaWithVaryingGamma.pdf}
\end{center}
\vspace{-5mm}
\caption {Variations of the radius $r_i$ and the latitude $\theta_i$ for the massive particle on the ISCO circular orbits with respect to the NLE parameter $\gamma$ for $m=1\,,n=1,\,e=1/2\,,g=1/2,\,q=1/2$. The solid purple, dot-and-dash blue, dashed cyan, and dotted green lines are for the $\alpha=0.1>0$, $\alpha=0$, $\alpha=-0.26\lessapprox -e^2$ and $\alpha=-0.5<-e^2$ cases, respectively. The green dot denotes the extreme point on the green curve.}\label{pic1}
\end{figure*}
According to the relations
\begin{equation}
P_{r}=\frac{\partial S}{\partial r}, P_{\theta}=\frac{\partial S}{\partial \theta},
\end{equation}
we further have
\begin{equation}
P_r=f(r)^{-1}\sqrt{R(r)},
\end{equation}
\begin{equation}
P_\theta=\sqrt{\Theta(\theta)},
\end{equation}
where we have denoted
\begin{equation}\label{ep1}
R(r)=[E+q a(r)]^2-\left(\frac{K}{n^2+r^2}+\mu^2\right)f(r),
\end{equation}
\begin{equation}\label{ep2}
\Theta(\theta)=K-(2 n E \cot\theta+J\csc\theta)^2,
\end{equation}
which are the radial and latitudinal effective potentials of the particles, respectively. As a result, the Jacobian action solving the Hamilton-Jacobi equation can be written in the form
\begin{equation}
\begin{aligned}
S=-\frac{1}{2} \mu^{2} \tau-E t+J \phi +\int^{\theta}\sqrt{\Theta(\theta)} {\rm{d}} \theta+\int^{r} \frac{\sqrt{R(r)}}{f(r)} {\rm{d}} r.
\end{aligned}
\end{equation}
Differentiating the Jacobian action with respect to the constants of motion $K,\,\mu,\,E,\,J$, we obtain the integrated forms of the geodesic equations,
\begin{equation}
\int^{\theta} \frac{{\rm{d}} \theta}{\sqrt{\Theta}}=\int^{r} \frac{{\rm{d}} r}{(n^2+r^2)\sqrt{R}},
\end{equation}
\begin{equation}
\tau=\int^{r} \frac{{\rm{d}} r}{\sqrt{R}},
\end{equation}
\begin{equation}
\begin{aligned}
t=&\int^{\theta} \frac{-2n\cot\theta\left(2 n E\cot\theta+J\csc\theta\right) {\rm{d}} \theta}{\sqrt{\Theta(\theta)}}\\&+\int^r \frac{E+q a(r)}{f(r)\sqrt{R(r)}}{\rm{d}}r,
\end{aligned}
\end{equation}
\begin{equation}
\phi=\int^\theta \frac{2nE\cot\theta+J\csc\theta}{\sqrt{\Theta(\theta)}\sin\theta}{\rm{d}}\theta.
\end{equation}
Their first-order forms read explicitly
\begin{equation}
\frac{{\rm{d}}t}{{\rm{d}}\tau}=\frac{-2n\cot\theta(2nE\cot\theta+J\csc\theta)}{n^2+r^2}+\frac{E+qa(r)}{f(r)},
\end{equation}
\begin{equation}
\frac{{\rm{d}}r}{{\rm{d}}\tau}=\sqrt{R(r)},
\end{equation}
\begin{equation}
\frac{{\rm{d}}\theta}{{\rm{d}}\tau}=\frac{\sqrt{\Theta(\theta)}}{n^2+r^2},
\end{equation}
\begin{equation}
\frac{{\rm{d}}\phi}{{\rm{d}}\tau}=\frac{2nE\cot\theta+J\csc\theta}{(n^2+r^2)\sin\theta}.
\end{equation}
With these equations of motion at hand, we can find the locations of the ISCO for charged massive particles around the NUTty dyons. To that end, we require
\begin{equation}
R\left(r_{i}\right)=0, \quad \left.\frac{{\rm{d}}R\left(r\right)}{{\rm{d}}r}\right|_{r=r_i}=0, \quad \left.\frac{{\rm{d}}^2 R\left(r\right)}{{\rm{d}}r^2}\right|_{r=r_i}=0.
\end{equation}
The first condition is satisfied at a radial turning point; the second together with the first produces a circular orbit of constant radius; the last one requires the circular orbit to be marginally stable, or in other words, it singles out the innermost stable circular orbit. Besides, the latitudinal conditions
\begin{equation}
\Theta\left(\theta_{i}\right)=0, \quad \left.\frac{{\rm{d}}\Theta\left(\theta\right)}{{\rm{d}}\theta}\right|_{\theta=\theta_i}=0, \quad \left.\frac{{\rm{d}}^2 \Theta\left(\theta\right)}{{\rm{d}}\theta^2}\right|_{\theta=\theta_i}=0
\end{equation}
should also be satisfied, which ensure that the particle stays at a position of constant latitude. We numerically solve for the ISCO of the massive particles and obtain the related parameters, of which two representative ones, the radius and the latitude, are shown in Fig. \ref{pic1}. When the NLE parameter is large enough, all parameters tend to constants. This is easy to understand once we glance at the blackening factor Eq. (\ref{blf}), which $\gamma$ enters only through the factor ${\rm{e}}^{-\gamma}$. Besides, from the diagrams we can extract other important properties. First, the trend of the ISCO radius depends on the range of the conformal scalar parameter $\alpha$. That is, if $\alpha\geqslant 0$, the ISCO radius increases with the increasing NLE parameter $\gamma$; otherwise, if $\alpha$ belongs to the other branch where $\alpha<-e^2$, the ISCO radius behaves oppositely. Secondly, due to the presence of the NUT parameter $n$, the ISCO does not lie on the equatorial plane. Here we see how the conformal scalar parameter and the NLE parameter interplay to change the orbital plane. With a non-negative conformal scalar parameter, the latitude of the ISCO plane decreases monotonically with the increasing NLE parameter, and this behavior is partly shared by the case where $\alpha<-e^2$. However, if $\alpha$ is small enough, the effect of the NLE parameter $\gamma$ changes: there is an extreme point beyond which the effect of the NLE parameter $\gamma$ dominates. Lastly, when the NLE parameter is kept fixed and the conformal scalar parameter increases, the ISCO radius increases while the ISCO latitude decreases. In other words, the closer the ISCO is to the equatorial plane, the larger the ISCO radius. One possible numerical scheme for locating the ISCO is sketched below.
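The following is a minimal sketch (assuming the {\tt numpy} and {\tt scipy} libraries; the seed values and the finite-difference steps are our choices, and the scheme is one possible implementation rather than necessarily the one used for the figures):
\begin{verbatim}
# One possible numerical scheme (numpy/scipy assumed) for the ISCO: solve
# R = R' = R'' = 0 and Theta = Theta' = 0 for (r_i, theta_i, E, J, K).
import numpy as np
from scipy.optimize import fsolve

m, n, e, g, q, alpha, gamma, mu = 1.0, 1.0, 0.5, 0.5, 0.5, 0.1, 1.0, 1.0

def f(r):
    return ((r**2 - n**2)
            + np.exp(-gamma)*(e**2 + 4*g**2*n**2)*(e**2 + alpha)/e**2
            - 2*m*r)/(n**2 + r**2)

def a(r):
    w = np.exp(-gamma)*(np.pi - 2*np.arctan(r/n))
    return -g*np.cos(w) - e/(2*n)*np.sin(w)

def R(r, E, J, K):
    return (E + q*a(r))**2 - (K/(n**2 + r**2) + mu**2)*f(r)

def Theta(th, E, J, K):
    return K - (2*n*E/np.tan(th) + J/np.sin(th))**2

def system(x, h=1e-4):
    r, th, E, J, K = x
    dR = (R(r + h, E, J, K) - R(r - h, E, J, K))/(2*h)
    d2R = (R(r + h, E, J, K) - 2*R(r, E, J, K) + R(r - h, E, J, K))/h**2
    dTh = (Theta(th + h, E, J, K) - Theta(th - h, E, J, K))/(2*h)
    return [R(r, E, J, K), dR, d2R, Theta(th, E, J, K), dTh]

# the seed below is a guess and generally needs tuning for each parameter set
sol = fsolve(system, x0=[6.0, 1.8, 0.95, 3.0, 10.0])
print(sol)  # (r_i, theta_i, E, J, K)
\end{verbatim}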
\section{Shadows of the NUTty dyons}\label{sec3}
In this section, we will explore the effects of the NLE dimensionless parameter $\gamma$ and the conformal scalar parameter $\alpha$ on the shadow of the conformally scalar NUTty dyons. Refs. \cite{Cunha:2016bpi,Konoplya:2019sns,Grenzebach:2014fha,Zhang:2020xub,Grenzebach:2015oea,Perlick:2021aok,Wei:2018xks,Li:2020drn} are important references for our work here. For simplicity, we set $m=1, E=1$ in what follows. Notice that $\mu=0$ for the photon, whose radial and latitudinal effective potentials are given by
Eqs. (\ref{ep1}) and (\ref{ep2}), respectively. First we should obtain the circular orbits of the photons around the NUTty dyon black hole, which demands
\begin{equation}
R\left(r_{p}\right)=0, \quad \left.\frac{{\rm{d}}R\left(r\right)}{{\rm{d}}r}\right|_{r=r_p}=0, \quad \left.\frac{{\rm{d}}^2 R\left(r\right)}{{\rm{d}}r^2}\right|_{r=r_p}>0,
\end{equation}
where the last condition means that the circular orbit of the photon is radially unstable. We should also have
\begin{equation}
\Theta\left(\theta_{p}\right)=0, \quad \left.\frac{{\rm{d}}\Theta\left(\theta\right)}{{\rm{d}}\theta}\right|_{\theta=\theta_p}=0, \quad \left.\frac{{\rm{d}}^2 \Theta\left(\theta\right)}{{\rm{d}}\theta^2}\right|_{\theta=\theta_p}<0,
\end{equation}
where the last condition ensures that the orbit is latitudinally stable. Using these, we obtain the parameters characterizing the photons on the circular orbit:
\begin{equation}
K_p=4 n^{2} \tan^{2}\theta_p,
\end{equation}
\begin{equation}
J_p=-2 n \sec\theta_p,
\end{equation}
\begin{equation}
\tan\theta_p=\frac{\sqrt{n^2+r_p^2}}{2 n \sqrt{f(r_p)}}.
\end{equation}
Besides, the radius of the photon orbit is determined by the equation
\begin{equation}
\begin{aligned}
-f'(r_p) +\frac{2 r_p f(r_p)}{n^2+r_p^2}=0.
\end{aligned}
\end{equation}
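For concrete parameter values this equation can be solved by bracketing; a minimal sketch (assuming the {\tt numpy} and {\tt scipy} libraries; the root bracket and parameter values are our choices) reads:
\begin{verbatim}
# Sketch (numpy/scipy assumed) locating the photon orbit radius r_p and the
# orbital plane theta_p for one test parameter set.
import numpy as np
from scipy.optimize import brentq

m, n, e, g, alpha, gamma = 1.0, 0.1, 0.5, 0.5, 0.5, 0.3

def f(r):
    return ((r**2 - n**2)
            + np.exp(-gamma)*(e**2 + 4*g**2*n**2)*(e**2 + alpha)/e**2
            - 2*m*r)/(n**2 + r**2)

def photon_eq(r, h=1e-6):
    fp = (f(r + h) - f(r - h))/(2*h)       # numerical f'(r)
    return -fp + 2*r*f(r)/(n**2 + r**2)

rp = brentq(photon_eq, 2.0, 5.0)           # bracket chosen by inspection
theta_p = np.arctan(np.sqrt(n**2 + rp**2)/(2*n*np.sqrt(f(rp))))
print(rp, theta_p)
\end{verbatim}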
\begin{figure*}[htpb!]
\begin{center}
\includegraphics[width=4in,angle=0]{gamma.pdf}
\end{center}
\vspace{-5mm}
\caption {Variations of the shadow radius with respect to the NLE parameter $\gamma$ for $m=1\,,n=1/10,\,e=1/2,\,g=1/2$. The solid purple, dot-and-dash blue, dashed cyan, and dotted green lines are for the $\alpha=0.5>0$, $\alpha=0$, $\alpha=-0.26\lessapprox -e^2$ and $\alpha=-0.5<-e^2$ cases, respectively. Note that the cyan curve is not horizontal; it shows that $R_s$ decreases slowly with increasing $\gamma$.}\label{pic2}
\end{figure*}
The basis $\left\{\hat{e}_{(t)}, \hat{e}_{(r)}, \hat{e}_{(\theta)}, \hat{e}_{(\phi)}\right\}$ of the observer can be projected onto the coordinate basis $\left\{\partial_{t}, \partial_{r}, \partial_{\theta}, \partial_{\phi}\right\}$ of the spacetime. A commonly used orthonormal tetrad for the observer reads \cite{Cunha:2016bpi}
\begin{equation}
\begin{aligned}
\hat{e}_{(t)} &=\sqrt{\frac{g_{\phi \phi}}{g_{t \phi}^{2}-g_{t t} g_{\phi \phi}}}\left(\partial_{t}-\frac{g_{t \phi}}{g_{\phi \phi}} \partial_{\phi}\right), \\
\hat{e}_{(r)} &=\frac{1}{\sqrt{g_{r r}}} \partial_{r}, \\
\hat{e}_{(\theta)} &=\frac{1}{\sqrt{g_{\theta \theta}}} \partial_{\theta}, \\
\hat{e}_{(\phi)} &=\frac{1}{\sqrt{g_{\phi \phi}}} \partial_{\phi},
\end{aligned}
\end{equation}
which corresponds to a zero-angular-momentum-observer (ZAMO). An observer in this frame moves with an angular velocity $-g_{t \phi} / g_{\phi \phi}$ relative to spatial infinity, due to the dragging effect of the black hole.
The locally measured four-momentum of the photon can be obtained by projecting it onto the tetrad,
\begin{equation}
\begin{aligned}
&p^{(t)}=-p_{\mu} \hat{e}_{(t)}^{\mu}, \\
&p^{(i)}=p_{\mu} \hat{e}_{(i)}^{\mu},
\end{aligned}
\end{equation}
with $i=r, \theta, \phi$.
For the massless photon, we have
\begin{equation}
\left[p^{(t)}\right]^{2}=\left[p^{(r)}\right]^{2}+\left[p^{(\theta)}\right]^{2}+\left[p^{(\phi)}\right]^{2}.
\end{equation}
So the observation angles can be defined as
\begin{equation}
\begin{aligned}
&p^{(r)}=p^{(t)} \cos \tilde{\alpha} \cos \beta, \\
&p^{(\theta)}=p^{(t)} \sin \tilde{\alpha}, \\
&p^{(\phi)}=p^{(t)} \cos \tilde{\alpha} \sin \beta.
\end{aligned}
\end{equation}
Explicitly, the angular coordinates can be written as
\begin{equation}
\sin \tilde{\alpha}=\frac{p^{(\theta)}}{p^{(t)}},
\end{equation}
\begin{equation}
\tan \beta=\frac{p^{(\phi)}}{p^{(r)}}.
\end{equation}
The perimeter radius of a circumference at constant $\theta$ and $r$ can be defined by
\begin{equation}
\tilde{r} \equiv \frac{1}{2 \pi}\int_{0}^{2 \pi} \sqrt{g_{\phi \phi}} {\rm{d}} \phi=\sqrt{g_{\phi \phi}}.
\end{equation}
Then the Cartesian coordinate on the sky plane of the observer can be written as
\begin{equation}
\begin{aligned}
x &\equiv-\tilde{r} \beta=-\tilde{r}\arctan \left[\frac{p^{(\phi)}}{p^{(r)}}\right]\\&=\left.\sqrt{g_{\phi \phi}}\arctan\frac{f(r)\sqrt{g_{rr}}}{\sqrt{R(r)g_{\phi\phi}}}\right|_{(r_o\,,\theta_o)},
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
y &\equiv \tilde{r} \tilde{\alpha}=\tilde{r}\arctan \left[\frac{p^{(\theta)}}{p^{(r)}}\right]\\&=\sqrt{\left(n^2+r^2\right)\sin ^2\theta -4 n^2 f(r) \cos ^2\theta} \\&\left.\quad\times \arcsin \frac{\sqrt{\Theta (\theta )}}{(\zeta -\iota J) \sqrt{n^2+r^2}}\right|_{(r_o\,,\theta_o)},
\end{aligned}
\end{equation}
where $r_o$ and $\theta_o$ are the radial coordinate and inclination angle of the observer, respectively. Here we have denoted $\zeta \equiv \hat{e}_{(t)}^{t}, \iota \equiv \hat{e}_{(t)}^{\phi}$, which are evaluated at the photon orbit.
At very large distance, we have
\begin{equation}
\lim_{r_o\to \infty}x \equiv X=-J,
\end{equation}
\begin{equation}
\lim_{r_o\to \infty}y\equiv Y=\sin\theta_o\sqrt{\Theta(\theta_o)}.
\end{equation}
To directly exhibit the effects of the conformal scalar parameter and the NLE parameter on the shadow of the black hole, we place the observer on the plane of the circular photon orbit, so that $Y=0$. Then the shadow radius of the black hole is
\begin{equation}\begin{aligned}
R_s=|X|=|2 n \sec\theta_p|=2 n\sqrt{1+\frac{n^2+r_p^2 }{4 n^2 f(r_p)}}.
\end{aligned}\end{equation}
One can check that in the limit $n\to 0$ with $\gamma=g=\alpha=0$ and $e\to 0$, $R_s$ reduces to $3\sqrt{3}$, the shadow radius of the Schwarzschild black hole (this limit is verified numerically below). To visually see the effect of the interplay between the NLE parameter and the conformal scalar parameter on the shadow radius of the black hole, we plot Fig. \ref{pic2}, from which we can see that: (1) if $\alpha\geqslant 0$, the shadow radius increases with the increasing NLE parameter, but the trend is reversed if $\alpha<-e^2$; (2) the shadow radius decreases as the conformal scalar parameter $\alpha$ increases.
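The following sketch (assuming the {\tt numpy} and {\tt scipy} libraries; the small values of $n$ and $e$ are our choices for taking the limit numerically) verifies this:
\begin{verbatim}
# Numerical check (numpy/scipy assumed) of the Schwarzschild limit: for
# gamma = g = alpha = 0 and small n, e, the shadow radius approaches 3*sqrt(3).
import numpy as np
from scipy.optimize import brentq

m, n, e = 1.0, 1e-4, 1e-4   # small NUT parameter and electric charge

def f(r):
    return (r**2 - n**2 + e**2 - 2*m*r)/(n**2 + r**2)  # gamma = g = alpha = 0

def photon_eq(r, h=1e-7):
    fp = (f(r + h) - f(r - h))/(2*h)
    return -fp + 2*r*f(r)/(n**2 + r**2)

rp = brentq(photon_eq, 2.5, 3.5)
Rs = 2*n*np.sqrt(1 + (n**2 + rp**2)/(4*n**2*f(rp)))
print(Rs, 3*np.sqrt(3))  # the two values should agree closely
\end{verbatim}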
\section{Conclusions}
In this paper, we first found a NUTty dyon black hole carrying NUT charge, electric and magnetic charges, as well as the conformal scalar parameter and the NLE parameter. This is a nontrivial conformally scalar black hole solution in the conformal electrodynamics, incorporating the characteristics of conformally coupled scalar gravity together with the conformal invariance and $SO(2)$ electromagnetic duality of the ModMax theory. The Euclidean method was used to calculate the Gibbs free energy of the black hole, and the cohomogeneity thermodynamics for the asymptotically flat black hole was formulated. The conformal scalar enters the mass and entropy of the black hole but does not enter the first law as an independent variable, though it is a primary hair, as it exists even when the electric charge vanishes.
To visualize the strong gravitational effects of the conformal scalar hair and the NLE and their interplay, we further studied the ISCO of charged massive particles around the black hole as well as the shadow formed by photons. We calculated the characteristic radius and latitude of the ISCO for the charged massive particle. We showed that if the black hole is Reissner-Nordström-like, corresponding to a non-negative conformal scalar parameter, the radius increases but the latitude decreases with the increasing NLE parameter. If the black hole is Schwarzschild-like, endowed with a negative scalar hair, the radius of the ISCO decreases with the increasing NLE parameter, but the latitude of the ISCO may first increase and then decrease. This is because the nonlinearity of the electromagnetic field counterbalances the effect of the scalar hair. Beyond that, we found that the greater the conformal scalar parameter, the larger the ISCO radius and the closer the orbital plane is to the equatorial plane.
Choosing the ZAMO, we obtained the shadow radius of the NUTty dyon black hole. On the one hand, unlike most other black holes, whose unstable circular photon orbits lie on the equatorial plane, the circular orbit of the photons around the NUTty dyon black hole deviates from the equator; on the other hand, $-g_{t \phi} / g_{\phi \phi}\neq 0$ for the black hole, so we calculated its shadow with a method similar to the one for the Kerr black hole, albeit the black hole has vanishing angular momentum, with the result that the formula for its shadow radius is similar to the Schwarzschild one. This is quite unexpected! When the nonlinearity of the conformal electrodynamics increases, the shadow radius of the NUTty dyon with positive conformal scalar parameter also increases, but the one with nonpositive parameter decreases. Besides, the greater the conformal scalar parameter, the smaller the shadow.
In summary, we found a NUTty dyon solution with conformal scalar hair in conformal electrodynamics and exhibited the strong gravitational effects of the interplay between the conformal scalar hair and the nonlinear electrodynamics. It is worthwhile to further explore the gravitational effects of the novel conformal electrodynamics in other theories beyond GR, and our work here may be helpful for that.
\section*{Acknowledgements}
M. Z. is supported by the National Natural Science Foundation of China (Grant No. 12005080) and Young Talents Foundation of Jiangxi Normal University (Grant No. 12020779). J. J. is supported by the National Natural Science Foundation of China (Grants No. 11775022 and No. 11873044).
\section{Introduction}
\subsection{Background and Main Results}
Our starting point is a theorem of Bishop and Jones, stated below, which roughly says that a connected subset of ${\mathbb{R}}^{2}$ that is uniformly non-flat in every ball centered upon it (or in other words, is very ``wiggly"), must have large dimension. We measure flatness with Jones' $\beta$-numbers: if $K$ is a subset of a Hilbert space ${\mathscr{H}}$, $x\in K$ and $r>0$, we define
\begin{equation}
\beta(x,r)=\beta_{K}(x,r)=\frac{1}{r}\inf_{L}\sup\{\mbox{dist}(y,L):y\in K\cap B(x,r)\}
\label{e:euclidean-beta}
\end{equation}
where the infimum is taken over all lines $L\subseteq {\mathscr{H}}$.
\begin{theorem}(\cite[Theorem 1.1]{BJ91-wiggly}) There is a constant $c>0$ such that the following holds. Let $K\subseteq {\mathbb{R}}^{2}$ be a compact connected set and suppose that there is $r_{0}>0$ such that for all $r\in (0,r_{0})$ and all $x\in K$, $\beta_{K}(x,r)>\beta_{0}$. Then the Hausdorff dimension\footnote{See \Section{prelims} for the definition of Hausdorff dimension and other definitions and notation.} of $K$ satisfies $\mbox{dim} K\geq 1+c\beta_{0}^{2}$.
\label{t:BJ}
\end{theorem}
There are also analogues of \Theorem{BJ} for surfaces of higher topological dimension, see for example \cite{Guy04}.
Our main theorem extends this result to the metric space setting using an alternate definition of $\beta$. Before stating our results, however, we discuss the techniques and steps involved in proving \Theorem{BJ} to elucidate why the original methods don't immediately carry over, and to discuss how they must be altered for the metric space setting.\\
The main tool in proving \Theorem{BJ} is the {\it Analyst's Traveling Salesman Theorem}, which we state below. First recall that for a metric space $(X,d)$, a {\it maximal $\varepsilon$-net} is a maximal collection of points $X'\subseteq X$ such that $d(x,y)\geq \varepsilon$ for all $x,y\in X'$.
\begin{theorem}(\cite[Theorem 1.1]{Schul-TSP}) Let $K$ be a compact subset of a Hilbert space ${\mathscr{H}}$ and let $X_{n}\subseteq X_{n+1}$ be a nested sequence of maximal $2^{-n}$-nets in $K$. For $A>1$, define
\begin{equation}
\beta_{A}(K):=\text{diam} K+\sum_{n\in{\mathbb{Z}}}\sum_{x\in X_{n}} \beta_K^{2}(x,A2^{-n} )2^{-n}.
\label{e:betaK}
\end{equation}
There is $A_{0}$ such that for $A>A_{0}$ there is $C_{A}>0$ (depending only on $A$) so that for any $K$, $\beta_{A}(K)<\infty$ implies there is a connected set $\Gamma$ such that $K\subseteq \Gamma$ and
\[{\mathscr{H}}^{1}(\Gamma)\leq C_{A} \beta_{A}(K).\]
Conversely, if $\Gamma$ is connected and ${\mathscr{H}}^{1}(\Gamma)<\infty$, then for any $A>1$,
\begin{equation}
\beta_{A}(\Gamma)\leq C_{A} {\mathscr{H}}^{1}(\Gamma).
\label{e:beta_gamma}
\end{equation}
\label{t:TST}
\end{theorem}
At the time of \cite{BJ91-wiggly}, this was only known for the case ${\mathscr{H}}={\mathbb{R}}^2$, due to Jones \cite{Jones-TSP}. This was subsequently generalized to ${\mathbb{R}}^{n}$ by Okikiolu \cite{O-TSP} and then to Hilbert space by Schul \cite{Schul-TSP}.
The proof of \Theorem{BJ} goes roughly as follows: one constructs a {\it Frostman measure} $\mu$ supported on $K$ satisfying \begin{equation}
\mu(B(x,r))\leq C r^{s}
\label{e:frostmann}
\end{equation}
for some $C>0$, $s=1+c \beta_{0}^{2}$ and for all $x\in K$ and $r>0$. This easily implies that the Hausdorff dimension of $K$ is at least $s$ (see \cite[Theorem 8.8]{Mattila} and that section for a discussion on Frostman measures). One builds such a measure on $K$ inductively by deciding the values $\frac{\mu(Q_n)}{\mu(Q)}$ for each dyadic cube $Q$ intersecting $K$ and for each $n$-th generation descendant $Q_n$ intersecting $K$, where $n$ is some large number that will depend on $\beta_{0}$. If the number of such $n$-th generation descendants is large enough, we can choose the ratios and hence disseminate the mass $\mu(Q)$ amongst the descendants $Q_{n}$ in such a way that the ratios will be very small and \eqn{frostmann} will be satisfied. To show that there are enough descendants, one looks at the skeletons of the $n$-th generation descendants of $Q$ and uses the second half of \Theorem{TST} coupled with the non-flatness condition in the statement of \Theorem{BJ} to guarantee that the total length of this skeleton (and hence the number of cubes) will be large.
In the metric space setting, however, no such complete analogue of \Theorem{TST} exists, and it is not even clear what the appropriate analogue of a $\beta$-number should be. Note, for example, that it does not make sense to estimate the length of a metric curve $\Gamma$ using the original $\beta$-number, even if we consider $\Gamma$ as lying in some Banach space. A simple counterexample is when $\Gamma\subseteq L^{1}([0,1])$ is the image of $s:[0,1]\rightarrow L^{1}([0,1])$ defined by $t\mapsto \mathds{1}_{[0,t]}$. This is a geodesic, so in particular, it is a rectifiable curve of finite length. However, $\beta_{\Gamma}(x,r)$ (i.e. the width of the smallest tube containing $\Gamma\cap B(x,r)$ in $L^{1}$, rescaled by a factor $r$) is uniformly bounded away from zero, and in particular, $\beta_{A}(\Gamma)=\infty$.
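To see the geodesic property concretely, note that $\|\mathds{1}_{[0,t]}-\mathds{1}_{[0,s]}\|_{L^{1}}=|t-s|$, so lengths add along the curve; the following discretized check (a sketch assuming the {\tt numpy} library; the grid resolution is our choice) illustrates it numerically.
\begin{verbatim}
# Discretized check (numpy assumed) that t -> 1_{[0,t]} is a geodesic in
# L^1([0,1]): the L^1 distance between 1_{[0,t]} and 1_{[0,s]} equals |t-s|.
import numpy as np

grid = np.linspace(0, 1, 100001)
dx = grid[1] - grid[0]

def indicator(t):
    return (grid <= t).astype(float)

def l1_dist(t, s):
    return np.sum(np.abs(indicator(t) - indicator(s)))*dx

for t, s in [(0.1, 0.7), (0.25, 0.3), (0.0, 1.0)]:
    print(l1_dist(t, s), abs(t - s))  # each pair agrees up to grid error
\end{verbatim}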
In \cite{Hah05}, Hahlomaa gives a good candidate for a $\beta$-number for a general metric space $X$ using Menger curvature and uses it to show that if the sum in \eqn{betaK} is finite for $K=X$ (using his definition of $\beta_{X}$), then it can be contained in the Lipschitz image of a subset of the real line (analogous to the first half of \Theorem{TST}). An example of Schul \cite{Schul-survey}, however, shows that the converse of \Theorem{TST} is false in general: \eqn{beta_gamma} with Hahlomaa's $\beta_{X}$ does not hold with the same constant for all curves in $\ell^{1}$. We refer to \cite{Schul-survey} for a good summary on the Analyst's Traveling Salesman Problem.
To generalize \Theorem{BJ}, we use a $\beta$-type quantity that differs from both Jones' and Hahlomaa's definitions. It is inspired by one defined by Bishop and Tyson in \cite{BT01-antenna} that measures the deviation of a set from a geodesic in a metric space: if $X$ is a metric space, $B_{X}(x,r)=\{y\in X:d(x,y)<r\}$, and $y_{0},...,y_{n}\in B_{X}(x,r)$ is an ordered sequence, define
\begin{equation}
\d(y_{0},...,y_{n})=\sum_{i=0}^{n-1}d(y_{i},y_{i+1}) -d(y_{0},y_{n}) +\sup_{z\in B_{X}(x,r)}\min_{i=1,...,n}d(z,y_{i})
\label{e:d}
\end{equation}
and define
\begin{equation}
\hat{\beta}_{X}(x,r)= \inf_{\{y_{i}\}\subseteq B_{X}(x,r)} \frac{\d(y_{0},...,y_{n})}{d(y_{0},y_{n})}
\label{e:bd}
\end{equation}
where the infimum is over all finite ordered sequences in $B_{X}(x,r)$ of any length $n$.
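For a finite point configuration, the quantity being infimized is elementary to compute; the following sketch (assuming the {\tt numpy} library; the function names and toy data are ours, and we take the covering minimum over all of the $y_{i}$) illustrates the definition.
\begin{verbatim}
# Sketch (numpy assumed) of the geodesic deviation d(y_0,...,y_n) for a finite
# configuration, with the sup metric of R^2 standing in for a general metric.
import numpy as np

def metric(p, q):
    return np.max(np.abs(np.asarray(p, float) - np.asarray(q, float)))

def geodesic_deviation(ys, ball_points):
    # ys: ordered sequence y_0,...,y_n; ball_points: the points of B_X(x,r)
    path_len = sum(metric(ys[i], ys[i + 1]) for i in range(len(ys) - 1))
    spread = max(min(metric(z, y) for y in ys) for z in ball_points)
    return path_len - metric(ys[0], ys[-1]) + spread

ball = [(0, 0), (1, 0.2), (2, 0)]                     # a bent three-point "curve"
print(geodesic_deviation([ball[0], ball[-1]], ball))  # coarse path: deviation 1.0
print(geodesic_deviation(ball, ball))                 # refined path: deviation 0.0
\end{verbatim}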
In \cite{BT01-antenna}, Bishop and Tyson ask whether, for a compact connected metric space $X$, \eqn{bd} being uniformly larger than zero is enough to guarantee that $\mbox{dim} X>1$. We answer this in the affirmative.
\begin{theorem}
There is $\kappa>0$ such that the following holds. If $X$ is a compact connected metric space and $\hat{\beta}_{X}(x,r)>\beta>0$ for all $x\in X$ and $r\in(0,r_{0})$ for some $r_{0}>0$, then $\mbox{dim} X\geq 1+\kappa\beta^{4}$.
\label{t:BT-answer}
\end{theorem}
Instead of $\hat{\beta}$, however, we work with a different quantity, which we define here for a general compact metric space $X$. First, by the Kuratowski embedding theorem, we may assume $X$ is a subset of $\ell^{\infty}$, whose norm we denote by $|\cdot |$. Let $B(x,r)=B_{\ell^{\infty}}(x,r)$ and define
\begin{equation}
\beta_{X}'(x,r) =\inf_{s} \frac{\ell(s)-|s(0)-s(1)| + \sup_{z\in X\cap B(x,r)}\mbox{dist} (z,s([0,1]))}{|s(0)-s(1)|}
\end{equation}
where the infimum is over all curves $s:[0,1]\rightarrow B(x,r)\subseteq \ell^{\infty}$ and
\[\ell(s)= \sup_{\{t_{i}\}_{i=0}^{n}} \sum_{i=0}^{n-1} |s(t_{i})-s(t_{i+1})|\]
is the length of $s$, where the supremum is over all partitions $0=t_{0}<t_{1}<\cdots <t_{n}=1$. In general, if $s$ is defined on a union of disjoint open intervals $\{I_{j}\}_{j=1}^{\infty}$, we set
\[\ell(s|_{\bigcup I_{j}})=\sum_{j} \ell(s|_{I_{j}}).\]
The case in which $s$ is just a straight line segment through the center of the ball with length $2r$ gives the estimate $\beta_{X}'(x,r)\leq \frac{1}{2}$.
The quantity $\beta'(x,r)$ measures how well $X\cap B(x,r)$ may be approximated by a geodesic. To see this, note that if, for some $s:[0,1]\rightarrow\ell^{\infty}$, the $\frac{\beta'(x,r)}{2}|s(0)-s(1)|$-neighborhood of $s([0,1])$ contains $X\cap B(x,r)$, then the length of $s$ must be at least $(1+\frac{\beta'(x,r)}{2})|s(0)-s(1)|$, which is $\frac{\beta'(x,r)}{2}|s(0)-s(1)|$ more than the length of any geodesic connecting $s(0)$ and $s(1)$. The quantity $\hat{\beta}$ similarly measures how well the portion of $X\cap B(x,r)$ may be approximated by a geodesic polygonal path with vertices in $X$. In Figure \ref{f:betas}, we compare the meanings of $\beta,\hat{\beta},$ and $\beta'$.
We will refer to the quantities $\ell(s)$ and $\d(y_{0},...,y_{n})$ as the {\it geodesic deviation} of $s$ and $\{y_{0},...,y_{n}\}$ respectively. We will also say $\hat{\beta}_{X}(x,r)$ and $\beta_{X}'(x,r)$ measure the {\it geodesic deviation} of $X$ inside the ball $B(x,r)$.
\begin{figure}[t!]
\begin{picture}(100,300)(130,0)
\put(0,0){\includegraphics[width=360pt]{betas.pdf}}
\put(55,200){ $\beta(x,r)2r$}
\put(45,105){${ B(y_{i},\beta|y_{0}-y_{n}|)}$}
\put(85,20){${ |y_{0}-y_{n}|}$}
\put(240,15){${ |s(0)-s(1)|}$}
\put(220,85){$<\beta|s(0)-s(1)|$}
\put(285,38){${ s([0,1])}$}
\put(275,200){${B= B(x,r)}$}
\put(90,165){$X$}
\end{picture}
\caption{ In each of the three figures above is a ball $B=B(x,r)$ containing a portion of a curve $X$. In the first picture, $\beta(x,r)2r$ is the width of the smallest tube containing $X\cap B(x,r)$. In the second, we see that $\hat{\beta}(x,r)$ is such that for $\beta>\hat{\beta}(x,r)$, there are $y_{0},...,y_{n}\in X\cap B$ so that balls centered on the $y_{i}$ of radius $\beta|y_{0}-y_{n}|$ cover $X\cap B$, and so that the geodesic deviation (that is, the length of the path minus $|y_{0}-y_{n}|$) is at most $\beta|y_{0}-y_{n}|$. In the last, we show that if $\beta'(x,r)<\beta$, there is $s:[0,1]\rightarrow \ell^{\infty}$ whose geodesic deviation and whose distance from any point in $X\cap B$ are both at most $\beta|s(0)-s(1)|$.}
\label{f:betas}
\end{figure}
Note that for the image of $t\mapsto\mathds{1}_{[0,t]}\in L^{1}([0,1])$ described earlier, it is easy to check that $\hat{\beta}(x,r)=\beta'(x,r)=0$ for all $x\in X$ and $r>0$, even though $\beta_{X}(x,r)$ is bounded away from zero. This, of course, makes the terminology ``wiggly'' rather misleading in metric spaces, since there are certainly non-flat or highly ``wiggly'' geodesics in $L^{1}$; we use this terminology only to be consistent with the literature. Later on in \Proposition{bb''}, however, we will show that in a Hilbert space we have for some $C>0$,
\begin{equation}
\beta'(x,r)\leq \beta(x,r) \leq C \beta'(x,r)^{\frac{1}{2}}.
\label{e:bb'-intro}
\end{equation}
That the two should be correlated in this setting seems natural as $\beta(x,r)$ is measuring how far $X$ is deviating from a straight line, which are the only geodesics in Hilbert space.
In \Lemma{beta'-bhat} below, we will also show that for some $C>0$,
\[
\beta'(x,r)\leq \hat{\beta}(x,r) \leq C \beta'(x,r)^{\frac{1}{2}}
\]
so that \Theorem{BT-answer} follows from the following theorem, which is our main result.
\begin{theorem}
There is $c_{0}>0$ such that the following holds. If $X$ is a compact connected metric space and $\beta'_{X}(x,r)>\beta>0$ for all $x\in X$ and $r\in(0,r_{0})$ for some $r_{0}>0$, then $\mbox{dim} X\geq 1+c_{0}\beta^{2}$.
\label{t:main}
\end{theorem}
We warn the reader, however, that the quadratic dependence on $\beta$ appears in \Theorem{main} and \Theorem{BJ} for completely different reasons. In \Theorem{BJ}, it comes from using \Theorem{TST}, or ultimately from the Pythagorean theorem, which of course does not hold in general metric spaces; in \Theorem{main}, it seems to be an artifact of the construction and can perhaps be improved.
Our approach to proving \Theorem{main} follows the original proof of \Theorem{BJ} described earlier: to show that a metric curve $X$ has large dimension, we approximate it by a polygonal curve, estimate its length from below, and use this estimate to construct a Frostman measure, but without the aid of a traveling salesman theorem. (In fact, taking $\beta'(x,A2^{-n})$ instead of $\beta(x,A2^{-n})^2$ in \Theorem{TST} does not lead to a metric version of \Theorem{TST}, for a similar reason that Hahlomaa's $\beta$-number doesn't work; one need only consider Schul's example \cite[Section 3.3.1]{Schul-survey}.)\\
\subsection{An Application to Conformal Dimension}
The original context of Bishop and Tyson's conjecture, and the motivation for \Theorem{main}, concerned conformal dimension. Recall that a {\it quasisymmetric map} $f:X\rightarrow Y$ between two metric spaces is a map for which there is an increasing homeomorphism $\eta:(0,\infty)\rightarrow(0,\infty)$ such that for any distinct $x,y,z\in X$,
\[\frac{|f(x)-f(y)|}{|f(z)-f(y)|}\leq \eta\ps{\frac{|x-y|}{|z-y|}}.\]
The {\it conformal dimension} of a metric space $X$ is
\[\mbox{C-dim} X=\inf_{f}\mbox{dim} f(X)\]
where the infimum ranges over all quasisymmetric maps $f:X\rightarrow f(X)$. For more information, references, and recent work on conformal dimension, see for example \cite{conformal-dimension}.
In \cite{BT01-antenna}, it is shown that the antenna set has conformal dimension one yet every quasisymmetric image of it into any metric space has dimension strictly larger than one. The {\it antenna set} is a self similar fractal lying in ${\mathbb{C}}$ whose similarities are the following:
\[f_{1}(z)=\frac{z}{2},\;\; f_{2}(z)=\frac{z+1}{2}, \;\; f_{3}(z)=i\alpha z+\frac{1}{2},\;\; f_{4}(z)=-i\alpha z+\frac{1}{2}+i\alpha\]
where $\alpha\in (0,\frac{1}{2})$ is some fixed parameter (see Figure \ref{f:antenna}).
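A quick way to visualize the attractor is the standard ``chaos game''; the sketch below (assuming the {\tt numpy} library; {\tt matplotlib} could then be used for plotting) generates sample points for $\alpha=\frac{1}{4}$.
\begin{verbatim}
# Chaos-game sampler (numpy assumed) for the antenna set with alpha = 1/4.
import numpy as np

alpha = 0.25
maps = [lambda z: z/2,
        lambda z: (z + 1)/2,
        lambda z: 1j*alpha*z + 0.5,
        lambda z: -1j*alpha*z + 0.5 + 1j*alpha]

rng = np.random.default_rng(0)
z = 0.0 + 0.0j
points = []
for _ in range(100000):
    z = maps[rng.integers(4)](z)
    points.append(z)
# points now samples the attractor; scatter-plot real vs. imaginary
# parts to reproduce the antenna picture below.
\end{verbatim}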
\begin{figure}[h]
\includegraphics[width=\textwidth]{antenna.pdf}
\caption{The antenna set with $\alpha=\frac{1}{4}$.}
\label{f:antenna}
\end{figure}
To show that conformal dimension $1$ is never attained by any quasisymmetric image of the antenna set, the authors show by hand that any quasisymmetric map of the antenna set naturally induces a Frostman measure of dimension larger than one. At the end of the paper, however, the authors suggested another way of showing the same result: proving an analogue of \Theorem{BJ} for a $\beta$-number which is uniformly large for the antenna set as well as for any quasisymmetric image of it.
\Theorem{main} doesn't just give a much longer proof of Bishop and Tyson's result, but it lends itself to more general sets lacking any self-similar structure.
\begin{definition}
Let $c>0$, $Y=[0,e_{1}]\cup [0,e_{2}]\cup [0,e_{3}]\subseteq {\mathbb{R}}^{3}$, where $e_{j}$ is the $j$th standard basis vector in ${\mathbb{R}}^{3}$, and let $X$ be a compact connected metric space. For $x\in X$, $r>0$, we say $B_{X}(x,r)$ has a {\it $c$-antenna} if there is a homeomorphism $h:Y\rightarrow h(Y)\subseteq B_{X}(x,r)$ such that the distance between $h(e_{i})$ and $h([0,e_{j}]\cup [0,e_{k}])$ is at least $cr$ for all permutations $(i,j,k)$ of $(1,2,3)$. We say $X$ is {\it $c$-antenna-like} if $B_{X}(x,r)$ has a $c$-antenna for every $x\in X$ and $r<\frac{\text{diam} X}{2}$.
\end{definition}
Clearly, the classical antenna set in ${\mathbb{R}}^{2}$ is $c$-antenna-like for some $c>0$ depending on $\alpha$.
\begin{theorem}
Let $ X$ be a compact connected metric space in $\ell^{\infty}$.
\begin{enumerate}
\item If $B_{X}(x,r)$ has a $c$-antenna, then $\beta'(x,r)>\frac{c}{7}$. Hence, if $ X$ is $c$-antenna-like, we have $\mbox{dim} X\geq 1+\frac{c_{0}}{49} c^{2}$.
\item Any quasisymmetric image of an antenna-like set into any metric space is also antenna-like and hence has dimension strictly larger than one.
\end{enumerate}
\label{t:antenna-like}
\end{theorem}
Note that this result doesn't say the conformal dimension of an antenna-like set is larger than one, only that no quasisymmetric image of it has dimension equal to one. However, see \cite{Mackay10}, where the author bounds the conformal dimension of a set from below using a different quantity.
\subsection{Outline}
In \Section{prelims}, we go over some necessary notation and tools before proceeding to the proof of \Theorem{main} in \Section{proof}. In \Section{antenna}, we prove \Theorem{antenna-like}, and in \Section{betas} we compare $\beta',\hat{\beta},$ and $\beta$.
\subsection{Acknowledgements}
The author would like to thank Steffen Rohde, Tatiana Toro, and Jeremy Tyson for their helpful discussions, and Matthew Badger, John Garnett, Raanan Schul, and the anonymous referee for their helpful comments on the manuscript. Part of this manuscript was written while the author was at the IPAM long program Interactions Between
Analysis and Geometry, Spring 2013.
\section{Preliminaries}
\label{s:prelims}
\subsection{Basic notation}
Since we are only dealing with compact metric spaces, by the Kuratowski embedding theorem, we will implicitly assume that all our metric spaces are contained in $\ell^{\infty}$, whose norm we will denote $|\cdot|$.
For $x\in \ell^{\infty}$ and $r>0$, we will write
\[B(x,r)=\{y\in\ell^{\infty}:|x-y|<r\}\subseteq\ell^{\infty}.\]
If $B=B(x,r)$ and $\lambda>0$, we write $\lambda B$ for $B(x,\lambda r)$.
For a set $A\subseteq \ell^{\infty}$ and $\delta>0$, define
\[A_{\delta}=\{x\in \ell^{\infty}:\mbox{dist}(x,A)<\delta\} \;\; \mbox{ and } \;\;\text{diam} A=\sup\{|x-y|:x,y\in A\}\]
where
\[\mbox{dist}(A,B)=\inf\{|x-y|: x\in A,y\in B\}, \;\;\; \mbox{dist}(x,A)=\mbox{dist}(\{x\},A).\]
For a set $E\subseteq {\mathbb{R}}$, let $|E|$ denote its Lebesgue measure. For an interval $I\subseteq {\mathbb{R}}$, we will write $a_{I}$ and $b_{I}$ for its left and right endpoints respectively. For $s>0$, $\delta\in (0,\infty]$ and $A\subseteq \ell^{\infty}$, define
\[{\mathscr{H}}_{\delta}^{s}(A)=\inf\ck{ \sum(\text{diam} A_{j})^{s}: A\subseteq \bigcup A_{j}, \text{diam} A_{j}<\delta},\]
\[{\mathscr{H}}^{s}(A)=\lim_{\delta\rightarrow 0} {\mathscr{H}}_{\delta}^{s}(A).\]
The {\it Hausdorff dimension} of a set $A$ is
\[\mbox{dim} A:=\inf\{s:{\mathscr{H}}^{s}(A)=0\}.\]
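As a quick sanity check of these definitions: for $A=[0,1]\subseteq{\mathbb{R}}$ we have ${\mathscr{H}}^{s}(A)=\infty$ for $s<1$, ${\mathscr{H}}^{1}(A)=1$, and ${\mathscr{H}}^{s}(A)=0$ for $s>1$, so $\mbox{dim} A=1$. The same holds for any curve of finite positive length, which is why lower bounds of the form $\mbox{dim} X\geq 1+c_{0}\beta^{2}$ in \Theorem{main} quantify a failure of rectifiability.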
\subsection{Cubes}
In this section, we construct a family of subsets of $\ell^{\infty}$, tailored to a metric space $X$, that have properties similar to dyadic cubes in Euclidean space. These cubes appeared in \cite{Schul-TSP} (where they were alternatively called ``cores") and are similar to the so-called Christ-David Cubes (\cite{David88,Christ-T(b)}) in some respects, although they are not derived from them.
Fix $M>0$ and $c\in (0,\frac{1}{8})$. Let $X_{n}\subseteq X$ be a nested sequence of maximal $M^{-n}$-nets in $X$. Let
\[{\mathscr{B}}_{n}=\{B(x,M^{-n}): x\in X_{n}\}, \;\; {\mathscr{B}}=\bigcup_{n} {\mathscr{B}}_{n}.\]
For $B=B(x,M^{-n})\in {\mathscr{B}}_{n}$, define
\[Q_{B}^{0}=cB, \;\; Q_{B}^{j}=Q_{B}^{j-1}\cup\bigcup\{cB': B'\in \bigcup_{m\geq n} {\mathscr{B}}_{m},\; cB'\cap Q_{B}^{j-1}\neq\emptyset\}, \;\; Q_{B}=\bigcup_{j=0}^{\infty} Q_{B}^{j}.\]
Basically, $Q_{B}$ is the union of all balls $cB'$ that may be connected to $cB$ by a chain $\{cB_{j}\}$ with $B_{j}\in {\mathscr{B}}$, $\text{diam} B_{j}\leq \text{diam} B$, and $cB_{j}\cap cB_{j+1}\neq\emptyset$ for all $j$.
For such a cube $Q$ constructed from $B(x,M^{-n})$, we let $x_{Q}=x$ and $B_{Q}=B(x,cM^{-n})$.
Let
\[\Delta_{n}=\{Q_{B}:B\in {\mathscr{B}}_{n}\}, \;\; \Delta=\bigcup \Delta_{n}.\]
Note that, for $Q\in \Delta_{n}$, $x_{Q}\in X_{n}$.
\begin{lemma}
If $c<\frac{1}{8}$, then for $X$ and $\Delta$ as above, the family of cubes $\Delta$ satisfies the following properties.
\begin{enumerate}
\item If $Q,R\in \Delta$ and $Q\cap R\neq\emptyset$, then $Q\subseteq R$ or $R\subseteq Q$.
\item For $Q\in \Delta$,
\begin{equation}
B_{Q}\subseteq Q\subseteq (1+8M^{-1})B_{Q}.
\label{e:1+2N^-1}
\end{equation}
\end{enumerate}
\label{l:cubes}
\end{lemma}
The proof is essentially in \cite{Schul07}, but with slightly different parameters. So that the reader need not perform the needed modifications, we provide a proof here.
\begin{proof}
Part 1 follows from the definition of the cubes $Q$. To prove Part 2, we first claim that if $\{B_{j}\}_{j=0}^{n}$ is a chain of balls with centers $x_{j}$ for which $cB_{j}\cap cB_{j+1}\neq\emptyset$, then for $C=\frac{1}{1-2M^{-1}}$,
\begin{equation}
\sum_{j=0}^{n} \text{diam} cB_{j} \leq C \max_{j=0,...,n} \text{diam} cB_{j}.
\label{e:ballchain}
\end{equation}
We prove \eqn{ballchain} by induction. Recall that $x_{j}$ denotes the center of $B_{j}$. If $n=1$ and $\text{diam} B_{0}\leq \text{diam} B_{1}$, then $\text{diam} B_{0}\leq M^{-1}\text{diam} B_{1}$, since otherwise $B_{0},B_{1}\in {\mathscr{B}}_{N}$ for some $N$ and
\[ M^{-N}\leq |x_{0}-x_{1}|\leq \frac{\text{diam} cB_{0}}{2} + \frac{\text{diam} cB_{1}}{2}= 2cM^{-N}<M^{-N}\]
since $c<\frac{1}{8}$, which is a contradiction. Hence,
\[ \text{diam} cB_{0}+\text{diam} cB_{1}\leq (1+2M^{-1}) \text{diam} cB_{1} \leq C \text{diam} cB_{1}.\]
Now suppose $n>1$. Let $j_{0}\in \{0,...,n\}$ and $N$ be an integer so that
\begin{equation}
\text{diam} B_{j_{0}}=\max_{j=0,...,n} \text{diam} B_{j}=2M^{-N}.
\label{e:maxball}
\end{equation}
Recall that all balls in ${\mathscr{B}}$ have radii that are powers of $M^{-1}$, so there exists an $N$ so that the above happens.
Note that, when $j_{0}>0$, $B_{j_{0}-1}$ and $B_{j_{0}}$ cannot have the same diameter (which follows from the $n=1$ case we proved earlier). Since $B_{j_{0}}$ has the maximum diameter of all the $B_{j}$, we in fact know that $\text{diam} B_{j_{0}-1}\leq M^{-1}\text{diam} B_{j_{0}}$ (again, recall that all balls have radii that are powers of $M^{-1}$).
Let $i_{0}\leq j_{0}$ be the minimal integer for which $\text{diam} B_{j}\leq M^{-1} \text{diam} B_{j_{0}}$ for all $i_{0}\leq j<j_{0}$ (which exists by the previous discussion) and let $k_{0}\geq j_{0}$ be the maximal integer such that $\text{diam} B_{j}\leq M^{-1} \text{diam} B_{j_{0}}$ for all $j_{0}<j\leq k_{0}$. By the induction hypothesis,
\[\sum_{j=j_{0}+1}^{k_{0}} \text{diam} cB_{j}\leq C \max_{j_{0}<j\leq k_{0}}\text{diam} cB_{j}\leq CM^{-1} \text{diam} cB_{j_{0}}\]
and
\begin{equation}
\sum_{j=i_{0}}^{j_{0}-1} \text{diam} cB_{j}\leq C\max_{i_{0}\leq j<j_{0}}\text{diam} cB_{j}\leq CM^{-1} \text{diam} cB_{j_{0}}
\label{e:itoj-1}
\end{equation}
so that
\begin{equation}
\sum_{j=i_{0}}^{k_{0}}\text{diam} cB_{j} \leq (1+2CM^{-1})\text{diam} cB_{j_{0}}=C\text{diam} c B_{j_{0}}.
\label{e:itok}
\end{equation}
{\bf Claim: } $i_{0}=0$. Note that if $i_{0}>0$, then
\begin{align*}
|x_{i_{0}-1}-x_{j_{0}}|
& \leq \sum_{i=i_{0}-1}^{j_{0}}\text{diam} cB_{i}
\leq \text{diam} c B_{i_{0}-1}+\text{diam} c B_{j_{0}} + \sum_{i=i_{0}}^{j_{0}-1} \text{diam} cB_{i} \\
& \stackrel{\eqn{maxball} \atop \eqn{itoj-1}}{\leq}2\text{diam} c B_{j_{0}} + CM^{-1}\text{diam} c B_{j_{0}}\\
& = (2c+cCM^{-1})\text{diam} B_{j_{0}}=(2c+cCM^{-1})2M^{-N}
<M^{-N}
\end{align*}
for $c<\frac{1}{4}$ and $M>4$ (this makes $C<2$). Since $x_{j_{0}}\in X_{N}$ and points in $X_{N}$ are $M^{-N}$-separated, we must have $x_{i_{0}-1}\not\in X_{N}$, hence $B_{i_{0}-1}\not\in {\mathscr{B}}_{N}$. Thus,
\[\text{diam} B_{i_{0}-1}\leq M^{-1}\text{diam} B_{j_{0}},\]
which contradicts the minimality of $i_{0}$, hence $i_{0}=0$. We can prove similarly that $k_{0}=n$, and this with \eqn{itok} proves \eqn{ballchain}. This in turn implies that for any $N\in{\mathbb{N}}$, if $Q\in \Delta_{N}$, then $\text{diam} Q\leq C\text{diam} B_{Q}$, hence
\begin{align*}
Q
& \subseteq B(x_{Q},cM^{-N}+(C-1)\text{diam} B_{Q})
= B\ps{x_{Q}, c\ps{1+\frac{4M^{-1}}{1-2M^{-1}}}M^{-N}}\\
& \subseteq (1+8M^{-1})B_{Q}.
\end{align*}
\end{proof}
For $M$ large enough, this means we can pick our cubes so that they don't differ much from balls. We will set $8M^{-1}=\varepsilon\beta$ for some $\varepsilon\in (0,1)$ to be determined later, so that
\begin{equation}
B_{Q}\subseteq Q\subseteq (1+\varepsilon\beta)B_{Q}.
\label{e:1+veb}
\end{equation}
\begin{remark}
There are a few different constructions of families of metric subsets with properties similar to dyadic cubes; see \cite{David88}, \cite{Christ-T(b)}, and \cite{HK12} for example, and the references therein. Readers familiar with any of these references will see that Schul's ``cores'' we have just constructed are very different from the cubes constructed in the aforementioned references. In particular, each $\Delta_{n}$ does not partition the metric space in the way that dyadic cubes (half-open or otherwise) would partition Euclidean space (not even up to a set of measure zero). However, for each $n$ we do have
\begin{equation}
X\subseteq \bigcup \{ c^{-1}Q:Q\in \Delta_{n}\},
\label{e:1/cQ}
\end{equation}
and we still have the familiar intersection properties in \Lemma{cubes}. The reason for the ad hoc construction is the crucial ``roundness" property \eqn{1+veb}.
\end{remark}
\begin{lemma}
Let $\gamma:[0,1]\rightarrow \ell^{\infty}$ be a piecewise linear function, so that its image $\Gamma=\gamma([0,1])$ is a finite union of line segments, and let $\Delta$ be the cubes from \Lemma{cubes} tailored to $\Gamma$. Then for any $Q\in \Delta$, ${\mathscr{H}}^{1}(\Gamma\cap\d Q)=0$ and $|\gamma^{-1}(\d Q)|=0$.
\label{l:zero-boundary}
\end{lemma}
\begin{proof}
Note that since $\Gamma$ is a finite polygonal curve, $\mu={\mathscr{H}}^{1}|_{\Gamma}$ is {\it doubling} on $\Gamma$, meaning there is a constant $C$ so that $\mu(B(x,Mr))\leq C\mu(B(x,r))$ for all $x\in \Gamma$ and $r>0$. If $x\in\d Q$ for some $Q\in \Delta$, then there is a sequence $x_{n}\in X_{n}$ such that $|x_{n}-x|<M^{-n}$ since the $X_{n}$ are maximal $M^{-n}$-nets. To each $x_{n}$ corresponds a ball $B_{n}=B(x_{n},M^{-n})\in {\mathscr{B}}_{n}$. Let $N$ be such that $Q\in \Delta_{N}$. Since $cB_{n}\subseteq Q_{B_{n}}\in\Delta_{n}$, we have by \Lemma{cubes} that either $cB_{n}\subseteq Q$ (if $Q_{B_{n}}\cap Q\neq\emptyset$) or $cB_{n}\subseteq R$ for some $R\in \Delta_{N}$ with $Q\cap R=\emptyset$. In either case, since cubes don't contain their boundaries (since they are open), we have that $cB_{n}\cap \d Q=\emptyset$. This implies that $\d Q$ is porous, and it is well known that porous sets are null for doubling measures. More precisely, the doubling condition on $\mu$ guarantees that $\lim_{n\rightarrow\infty} \frac{\mu(\d Q\cap B(x,M^{-n}))}{\mu(B(x,M^{-n}))}=1$ for $\mu$-a.e.\ $x\in \d Q$ (see \cite[Theorem 1.8]{Heinonen}), but if $x\in \d Q$ and $B_{n}$ is as above, then one can show using the doubling property of $\mu$ that
\[\limsup_{n\rightarrow\infty} \frac{\mu(\d Q\cap B(x,M^{-n}))}{\mu(B(x,M^{-n}))}
\leq \limsup_{n\rightarrow\infty} \frac{\mu(B(x,M^{-n}) \backslash cB_{n})}{\mu(B(x,M^{-n}))}<1,\]
and thus $\mu(\d Q)=0$.
The last part of the lemma follows easily since $\gamma$ is piecewise affine.
\end{proof}
The following lemma will be used frequently.
\begin{lemma}
Let $I\subseteq {\mathbb{R}}$ be an interval, $s:I\rightarrow \ell^{\infty}$ be continuous and $I'\subseteq I$ a subinterval. Then
\begin{equation}
\ell(s|_{I'})-|s(a_{I'})-s(b_{I'})|\leq \ell(s|_{I})-|s(a_{I})-s(b_{I})|.
\label{e:subarc}
\end{equation}
\label{l:subarc}
\end{lemma}
\begin{proof}
We may assume $\ell(s|_{I})<\infty$, otherwise \eqn{subarc} is trivial. We estimate
\begin{multline*}
\ell(s|_{I'})-|s(a_{I'})-s(b_{I'})|
= \ell(s|_{I})-\ell(s|_{I\backslash I'}) -|s(a_{I'})-s(b_{I'})|\\
\leq \ell(s|_{I})- (|s(a_{I})-s(a_{I'})|+|s(b_{I})-s(b_{I'})|)-|s(a_{I'})-s(b_{I'})|\\
\leq \ell(s|_{I})-|s(a_{I})-s(b_{I})|.
\end{multline*}
\end{proof}
\section{Proof of \Theorem{main}}
\label{s:proof}
\subsection{Setup}
For this section, we fix a compact connected set $X$ satisfying the conditions of \Theorem{main}. The main tool is the following Lemma, which can be seen as a very weak substitute for \Theorem{TST}.
\begin{lemma}
Let $c'<\frac{1}{8}$. We can pick $M$ large enough (by picking $\varepsilon>0$ small enough) and pick $\beta_{0},\kappa>0$ such that, for any $X$ satisfying the conditions of \Theorem{main} for some $\beta\in (0,\beta_{0})$, the following holds. If $X_{n}$ is any nested sequence of $M^{-n}$-nets in $X$, there is $n_{0}=n_{0}(\beta)$ such that for $x_{0}\in X_{n}$ with $M^{-n}<\min\ck{r_{0},\frac{\text{diam} X}{2}}$,
\begin{equation}
\#(X_{n+n_{0}}\cap B(x_{0},c'M^{-n}))\geq M^{(1+\kappa\beta^{2})n_{0}}.
\label{e:main-ineq}
\end{equation}
\label{l:lemma-main}
\end{lemma}
We will prove this in Section \ref{s:lemma-main}, but first, we'll explain why this proves \Theorem{main}.
\begin{proof}[Proof of \Theorem{main}]
Without loss of generality, we may assume $r_{0}>2$ by scaling $X$ if necessary. We first consider the case that $\beta<\beta_{0}$. Let $\Delta$ be the cubes from \Lemma{cubes} tailored to the metric space $X$ with $c=c'$ and define inductively,
\[\Delta_{0}'=\Delta_{0}, \;\;\; \Delta_{n+1}'=\{R\in \Delta_{(n+1)n_{0}}: R\subseteq Q\mbox{ for some }Q\in \Delta_{n}\}.\]
By \Lemma{lemma-main}, for any $Q\in \Delta_{n}'$, if $B_{Q}=B(x_{Q},cM^{-N})$, then
\begin{equation}
\# \{R\in \Delta_{n+1}':R\subseteq Q\} \geq \# (X_{N+n_{0}}\cap Q) \geq \#(X_{N+n_{0}}\cap B_{Q}) \geq M^{(1+\kappa \beta^{2})n_{0}}
\label{e:enough}
\end{equation}
and moreover, since $c'<\frac{1}{8}$,
\begin{equation}
2B_{Q}\cap 2B_{R}=\emptyset \mbox{ for distinct }Q,R\in \Delta_{n}.
\label{e:doubles}
\end{equation}
Define a probability measure $\mu$ inductively by picking $Q_{0}\in \Delta_{0}'$, setting $\mu(Q_{0})=1$, and requiring, for $Q\in \Delta_{n}'$ and $R\in\Delta_{n+1}'$ with $R\subseteq Q$,
\begin{equation}
\frac{\mu(R)}{\mu(Q)}
= \frac{1}{\# \{S\in\Delta_{n+1}':S\subseteq Q\}} \stackrel{\eqn{enough}}{ \leq} M^{-(1+\kappa \beta^{2})n_{0}}.
\label{e:frost}
\end{equation}
Let $x\in X$, $r\in (0,\frac{r_{0}}{M})$. Pick $n$ so that
\begin{equation}
M^{-n_{0}(n+1)}\leq r<M^{-n_{0}n}.
\label{e:r<M}
\end{equation}
{\bf Claim: } There is at most one $y\in X_{(n-1)n_{0}}$ such that
\begin{equation}
B(y,c'M^{-(n-1)n_{0}})\cap B(x,r)\neq\emptyset\;\; \mbox{ and } \;\; Q=Q_{B(y,c'M^{-(n-1)n_{0}})}\in \Delta_{n-1}'.
\label{e:BcapB}
\end{equation}
Indeed, if there were another such $y'\in X_{(n-1)n_{0}}$ with $B(y',c'M^{-(n-1)n_{0}})\cap B(x,r)\neq\emptyset$, then
\begin{multline*}
M^{-(n-1)n_{0}} \leq |y'-y|\\
\leq c'M^{-(n-1)n_{0}}+\mbox{dist}\ps{B(y,c'M^{-(n-1)n_{0}}), B(y',c'M^{-(n-1)n_{0}})} +c'M^{-(n-1)n_{0}}\\
\leq 2c'M^{-(n-1)n_{0}}+\text{diam} B(x,r)
\leq 2c'M^{-(n-1)n_{0}}+2r \\
\stackrel{\eqn{r<M}}{\leq} 2M^{-(n-1)n_{0}}(c'+M^{-n_{0}})
<4c'M^{-(n-1)n_{0}}
<M^{-(n-1)n_{0}}
\end{multline*}
since $c'<\frac{1}{8}$ and we can pick $\varepsilon<\frac{c'}{8}$ so that $M^{-n_{0}}\leq M^{-1}<c'$, which gives a contradiction and proves the claim.
Now, assuming we have $y\in X_{(n-1)n_{0}}$ satisfying \eqn{BcapB},
\begin{align*}
B(x,r)
& \subseteq B(y,c'M^{-(n-1)n_{0}}+2r)
\stackrel{\eqn{r<M}}{\subseteq} B(y,c'M^{-(n-1)n_{0}}+2M^{-nn_{0}})\\
& \subseteq B(y,2c'M^{-(n-1)n_{0}})
=2B_{Q}
\end{align*}
for $M$ large enough (that is, for $2M^{-1}<c'$, which is possible by picking $\varepsilon<\frac{c'}{16}$). If $Q\not\in \Delta_{n-1}'$, then \eqn{doubles} implies $2B_{Q}\cap 2B_{R}=\emptyset$ for all $R\in \Delta_{n-1}'$, and so
\[\mu(B(x,r))
\leq \mu(2B_{Q})=0.\]
Otherwise, if $Q\in\Delta_{n-1}'$, then $Q\subseteq Q_{0}$, so that
\begin{align*}
\mu(B(x,r))
& \leq \mu(2B_{Q})
\stackrel{\eqn{doubles}}{=}\mu(Q)
\stackrel{\eqn{frost}}{\leq} M^{-(1+\kappa \beta^{2})n_{0}(n-1)}\mu(Q_{0}) \stackrel{\eqn{r<M}}{\leq} M^{2n_{0}(1+\kappa\beta^{2})} r^{1+\kappa\beta^{2}}
\end{align*}
thus $\mu$ is a $(1+\kappa\beta^{2})$-Frostman measure supported on $X$, which implies $\mbox{dim} X\geq 1+\kappa\beta^{2}$ (cf. \cite[Theorem 8.8]{Mattila}).
Now we consider the case when $\beta\geq\beta_{0}$. Trivially, $\beta'(x,r)\geq \beta\geq \beta_{0}$ for all $x\in X$ and $r<r_{0}$, and our previous work gives $\mbox{dim} X\geq 1+\kappa t^{2}$ for all $t<\beta_{0}$, hence $\mbox{dim} X\geq 1+\kappa \beta_{0}^{2}$. Since $\beta'\leq \frac{1}{2}$, we must have $\beta,\beta_{0}\leq \frac{1}{2}$, and so
\[\mbox{dim} X\geq 1+\kappa\beta_{0}^{2}\geq 1+4\kappa\beta_{0}^{2}\beta^{2}\]
and the theorem follows with $c_{0}=4\kappa\beta_{0}^{2}$.
\end{proof}
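For completeness, we recall the mass distribution principle invoked in the last step (this is the easy direction of Frostman's lemma): if $\mu$ is a probability measure on $X$ with $\mu(B(x,r))\leq Cr^{s}$ for all $x\in X$ and all small $r>0$, then for any cover $\{A_{j}\}$ of $X$ by sets of small diameter,
\[1=\mu(X)\leq \sum_{j}\mu(A_{j})\lesssim C\sum_{j}(\text{diam} A_{j})^{s},\]
since each $A_{j}$ meeting $X$ is contained in a ball of comparable radius; hence ${\mathscr{H}}^{s}(X)\gtrsim C^{-1}>0$ and $\mbox{dim} X\geq s$.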
To show \Lemma{lemma-main}, we will approximate $X$ by a tree containing a sufficiently dense net in $X$ and estimate its length from below. The following lemma relates the length of this tree to the number of net points in $X$.
\begin{lemma}
Let $X_{n_{0}}$ be a maximal $M^{-n_{0}}$-net for a connected metric space $X$ where $n_{0}$ is so that $4M^{-n_{0}}<\frac{\text{diam} X}{4}$. Then we may embed $X$ into $\ell^{\infty}$ so that there is a connected union of finitely many line segments $\Gamma_{n_{0}}\subseteq \ell^{\infty}$ containing $X_{n_{0}}$ such that for any $x\in X_{n_{0}}$ and $r\in (4M^{-n_{0}}, \frac{\text{diam} X}{4})$,
\begin{equation}
{\mathscr{H}}^{1}\ps{\Gamma_{n_{0}}\cap B\ps{x,\frac{r}{2}}}\leq 8M^{-n_{0}} \# (X_{n_{0}}\cap B(x,r)).
\label{e:length-points}
\end{equation}
\label{l:tree}
\end{lemma}
\begin{proof}
Embed $X$ isometrically into $\ell^{\infty}({\mathbb{N}})$ so that for any $x\in X$, the first $\#X_{n_{0}}$ coordinates are all zero. Construct a sequence of trees $T_{j}$ as follows. Enumerate the elements of $X_{n_{0}}=\{x_{1},...,x_{\# X_{n_{0}}}\}$. For two points $x$ and $y$, let
\[A_{xy,i}=\{tx+(1-t)y+\min\{t,1-t\}|x-y|e_{i}:t\in [0,1]\}\]
where $e_{i}$ is the standard basis vector in $\ell^{\infty}({\mathbb{N}})$ (i.e. it is equal to $1$ in the $i$th coordinate and zero in every other coordinate).
Now construct a sequence of trees $T_{j}$ in $\ell^{\infty}(\mathbb{N})$ inductively by setting $T_{0}=\{x_{1}\}$ and letting $T_{j+1}$ be the union of $T_{j}$ and $S_{j+1}:=A_{x_{j+1}x_{j+1}',j+1}$, where $x_{j+1}'\in \{x_{1},...,x_{j}\}$ and $x_{j+1}\in X_{n_{0}}\backslash \{x_{1},...,x_{j}\}$ are such that
\[|x_{j+1}-x_{j+1}'|=\mbox{dist} ( X_{n_{0}}\backslash \{x_{1},...,x_{j}\},\{x_{1},...,x_{j}\}).\]
Since $ X$ is connected, $|x_{j+1}-x_{j+1}'|\leq 2M^{-n_{0}}$, so that
\[{\mathscr{H}}^{1}(S_{j})={\mathscr{H}}^{1}(A_{x_{j},x_{j}',j})\leq 2|x_{j}-x_{j}'|\leq 2\cdot 2M^{-n_{0}}=4M^{-n_{0}}\leq 8M^{-n_{0}}.\]
Then $\Gamma_{n_{0}}:=T_{\# X_{n_{0}}}$ is a tree contained in $\ell^{\infty}({\mathbb{N}})$ containing $X_{n_{0}}$ (the reason we made the arcs $S_{j}$ reach into an alternate dimension is to guarantee that the branches of the tree don't intersect except at the points $X_{n_{0}}$).
To prove \eqn{length-points}, note that since $\frac{r}{2}>2M^{-n_{0}}$ and
\[x_{j}\in S_{j}\subseteq B(x_{j},2M^{-n_{0}}),\]
we have
\begin{align*}
{\mathscr{H}}^{1}\ps{\Gamma_{n_{0}}\cap B\ps{x,\frac{r}{2}}}
\leq \sum_{S_{j}\cap B(x,\frac{r}{2})\neq\emptyset} {\mathscr{H}}^{1}(S_{j})
& \leq \sum_{x_{j}\in B(x,\frac{r}{2}+2M^{-n_{0}})} 8M^{-n_{0}}\\
& \leq 8M^{-n_{0}}\# (X_{n_{0}}\cap B(x,r)).
\end{align*}
\end{proof}
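We remark that the inductive choice of $x_{j+1}$ and $x_{j+1}'$ above is just the greedy step of Prim's minimum spanning tree algorithm run on the finite set $X_{n_{0}}$; the only twist is that each chosen edge is realized by the tent-shaped arc $A_{x_{j+1}x_{j+1}',j+1}$, which bends into an unused coordinate direction precisely so that distinct arcs meet only at points of $X_{n_{0}}$.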
\subsection{Proof of \Lemma{lemma-main}}
\label{s:lemma-main}
We now dedicate ourselves to the proof of \Lemma{lemma-main}. Again, let $ X$ be a connected metric space satisfying the conditions of \Theorem{main}. Without loss of generality, $n=0$, so that $\text{diam} X>2$. Embed $ X$ into $\ell^{\infty}$ as in \Lemma{tree}. Fix $n_{0}\in {\mathbb{N}}$. Let $\Gamma_{n_{0}}$ be the tree from \Lemma{tree} containing the $M^{-n_{0}}$-net $X_{n_{0}}\subseteq X$.
Since $\Gamma_{n_{0}}$ is a tree of finite length that is a union of finitely many line segments, it is not hard to show that there is a piecewise linear constant-speed path $\gamma:[0,1]\rightarrow \Gamma_{n_{0}}$ of length $2{\mathscr{H}}^{1}(\Gamma_{n_{0}})$ that traverses almost every point in $\Gamma_{n_{0}}$ at most twice (except at the discrete set of points $X_{n_{0}}$). The proof is similar to that of its graph-theoretic analogue.
Let $\Delta$ be the cubes from \Lemma{cubes} tailored to $\Gamma_{n_{0}}$ and fix $Q_{0}\in \Delta_{0}$. We will adjust the values of $c>0$ in \Lemma{cubes} and the value $\varepsilon>0$ in the definition of $M$ as we go along the proof. Note that $\text{diam} X>2$ implies $\text{diam} \Gamma_{n_{0}}>1>(1+\varepsilon\beta)c$ if $c<\frac{1}{8}$, and so $\Gamma_{n_{0}}\not\subseteq Q_{0}$.
\def\tilde{\cL}{\tilde{{\mathscr{L}}}}
For $Q,R\in \Delta$, write $R^{1}=Q$ if $R$ is a maximal cube in $\Delta$ properly contained in $Q$. For $n\geq 0$ and $Q\in \Delta$, define
\[{\mathscr{L}}_{1}(Q)=\{R\in \Delta: R^{1}=Q\}, \;\;\; {\mathscr{L}}_{n}(Q)=\bigcup_{R\in {\mathscr{L}}_{n-1}(Q)}{\mathscr{L}}_{1}(R),\]
\[ \tilde{\cL}_{n}(Q)={\mathscr{L}}_{n}(Q)\cap \bigcup_{j=0}^{n_{0}-1}\Delta_{j}, \;\;\; \tilde{\cL}(Q)=\bigcup \tilde{\cL}_{n}(Q)\]
\[\tilde{\cL}_{n}=\tilde{\cL}_{n}(Q_{0}), \;\;\tilde{\cL}=\tilde{\cL}(Q_{0}).\]
For $Q\in \Delta$, let
\[\lambda(Q)=\{[a,b]: (a,b)\mbox{ is a connected component of }\gamma^{-1}(Q)\}\]
and for $n\leq n_{0}$, define $\gamma_{n}$ to be the continuous function such that for all $Q\in {\mathscr{L}}_{n}(Q_{0})$ and $[a,b]\in \lambda(Q)$,
\[\gamma_{n}|_{[a,b]}(at+(1-t)b)=t\gamma(a)+(1-t)\gamma(b)\mbox{ for }t\in [0,1],\]
that is, $\gamma_{n}$ is linear in all cubes in $\Delta_{n}$ and agrees with $\gamma$ on the boundaries of the cubes (see Figure \ref{f:cubes}).
\begin{figure}[h]
\begin{picture}(100,200)(60,0)
\put(0,0){\scalebox{.23}{\includegraphics{cubes.pdf}}}
\put(65,105){(a)}
\put(0,0){(b)}
\put(125,0){(c)}
\put(45,170){$Q$}
\put(170,175){$R\in {\mathscr{L}}_{1}(Q)$}
\put(195,105){$\gamma_{n+1}|_{I}$}
\put(20,110){$\gamma|_{I}$}
\put(105,15){$\gamma_{n}|_{I}$}
\end{picture}
\caption{In (a), we have a typical cube $Q\in \Delta_{n}$, and some of its children in ${\mathscr{L}}_{1}(Q)$. Note that their sizes can be radically different. In (b) are the components $\gamma|_{\gamma^{-1}(Q)}$, where in this case $\gamma^{-1}(Q)$ consists of two intervals, and we've pointed at a particular component $\gamma|_{I}$ for some $I\in \lambda(Q)$. In (c), the dotted lines represent the components of $\gamma_{n}|_{\gamma^{-1}(Q)}$, which is affine in cubes in $\Delta_{n}$, and hence is affine in $Q$, and the solid piecewise-affine curves represent the components of $\gamma_{n+1}|_{\gamma^{-1}(Q)}$, which are affine in the children of $Q$ (since they are in $\Delta_{n+1}$). }
\label{f:cubes}
\end{figure}
\Lemma{lemma-main} will follow from the following two lemmas:
\begin{lemma}
There is $K\in (0,1)$ and $\beta_{0}>0$ (independent of $n_{0}$ above) such that if $\beta\in (0,\beta_{0})$, $n<n_{0}$, and $Q\in \tilde{\cL}_{n}$, either
\begin{equation}
\sum_{I\in \lambda(Q)}(\ell(\gamma_{n+1}|_{I})- \ell(\gamma_{n}|_{I})) \geq \frac{\varepsilon\beta}{4}\text{diam} Q
\label{e:eb/4}
\end{equation}
or $Q\in \Delta_{Bad}$, where
\begin{equation}
\Delta_{Bad}=\{R\in\tilde{\cL}: {\mathscr{H}}^{1}_{\infty}(\Gamma_{n_{0}}\cap R) \geq (1+K\beta)\text{diam} R\}
\label{e:bad-def}
\end{equation}
\label{l:good-or-bad}
\end{lemma}
\begin{lemma}
With $\Delta_{Bad}$ defined as above, we have
\begin{equation}
\sum_{Q\in \Delta_{Bad}}\beta\text{diam} Q \leq \frac{2}{K} {\mathscr{H}}^{1}(\Gamma_{n_{0}}).
\label{e:bad}
\end{equation}
\label{l:bad}
\end{lemma}
We'll prove these in sections \ref{s:good-or-bad} and \ref{s:lemma-bad} respectively, but first let us finish the proof of \Lemma{lemma-main}. \\
For $Q\in \tilde{\cL}$, let $n(Q)$ be such that $Q\in {\mathscr{L}}_{n}$ and define
\[d(Q)= \sum_{I\in \lambda(Q)} \ps{\ell(\gamma_{n(Q)+1}|_{I})-\ell(\gamma_{n(Q)}|_{I})}.\]
By telescoping sums and \Lemma{zero-boundary}, we have
\begin{align}
\sum_{Q\in \tilde{\cL}}d(Q)
& =\sum_{n=0}^{n_{0}-1}\sum_{Q\in\tilde{\cL}_{n}}\sum_{I\in \lambda(Q)}\ps{\ell(\gamma_{n+1}|_{I})-\ell(\gamma_{n}|_{I})} \notag \\
& =\sum_{n=0}^{n_{0}-1} \ps{\ell(\gamma_{n+1}|_{\gamma^{-1}(Q_{0})})-\ell(\gamma_{n}|_{\gamma^{-1}(Q_{0})})} \notag \\
& \leq \ell(\gamma|_{\gamma^{-1}(Q_{0})})= 2{\mathscr{H}}^{1}(\Gamma_{n_{0}}\cap Q_{0}).
\label{e:sumdQ}
\end{align}
Note that $\text{diam} (\Gamma_{n_{0}}\cap Q_{0})\geq c$ since $B_{Q_{0}}\subseteq Q_{0}$ has radius $c$, $\text{diam} \Gamma_{n_{0}}>1>2c$, and $\Gamma_{n_{0}}$ is connected. This, \Lemma{good-or-bad}, and \Lemma{bad} imply
\begin{align*}
\frac{10}{K\varepsilon}{\mathscr{H}}^{1} & (\Gamma_{n_{0}}\cap Q_{0})
\geq \frac{2}{K\varepsilon} {\mathscr{H}}^{1}(\Gamma_{n_{0}}\cap Q_{0}) + \frac{8}{\varepsilon}{\mathscr{H}}^{1}(\Gamma_{n_{0}}\cap Q_{0}) \notag \\
& \stackrel{\eqn{bad} \atop \eqn{sumdQ}}{\geq} \sum_{Q\in \Delta_{Bad}}\beta \text{diam} Q + \frac{4}{\varepsilon}\sum_{Q\in \tilde{\cL}\backslash\Delta_{Bad}} d(Q) \notag \\
& \stackrel{\eqn{eb/4}}{\geq} \sum_{Q\in \Delta_{Bad}}\beta \text{diam} Q+ \sum_{Q\in \tilde{\cL}\backslash \Delta_{Bad}}\beta \text{diam} Q =\sum_{Q\in\tilde{{\mathscr{L}}}}\beta\text{diam} Q \notag \\
& = \sum_{n=0}^{n_{0}-1}\sum_{Q\in \tilde{\cL}\cap\Delta_{n}}\beta\text{diam} Q
\geq \sum_{n=0}^{n_{0}-1}\sum_{Q\in \tilde{\cL}\cap\Delta_{n}}\beta\text{diam} B_{Q} \notag \\
& = \sum_{n=0}^{n_{0}-1}c\sum_{Q\in \tilde{\cL}\cap\Delta_{n}}\beta\text{diam} \frac{1}{c}B_{Q}
\stackrel{\eqn{1/cQ}}{\geq}cn_{0} \beta \text{diam} (\Gamma_{n_{0}}\cap Q_{0})
\geq c^{2}n_{0}\beta
\end{align*}
so that
\[\frac{Kc^{2}n_{0}\beta \varepsilon}{10}\leq {\mathscr{H}}^{1}(\Gamma_{n_{0}}\cap Q_{0}).\]
By \Lemma{tree}, and since $B_{Q_{0}}$ has radius $c$,
\begin{align*}
{\mathscr{H}}^{1}(\Gamma_{n_{0}}\cap Q_{0})
& \leq {\mathscr{H}}^{1}(\Gamma_{n_{0}}
\cap (1+\varepsilon\beta)B_{Q_{0}})
\leq {\mathscr{H}}^{1}(\Gamma_{n_{0}}\cap B(x,2c)) \\
& \leq 8 \#(X_{n_{0}}\cap B(x,4c))M^{-n_{0}}.
\end{align*}
Combining these two estimates, we have, for $c<\frac{c'}{4}$, that
\[ \delta n_{0} M^{n_{0}} \beta \leq \#(X_{n_{0}}\cap B(x_{0},c')), \;\;\; \delta=\frac{Kc^{2}\varepsilon}{80}. \]
Pick $n_{0}=\ceil{\frac{8}{\delta\beta^{2}\varepsilon}}$. Since $\frac{8}{\varepsilon\beta}=M$, we get
\begin{multline*}
\#(X_{n_{0}}\cap B(x_{0},c'))
\geq \delta n_{0}M^{n_{0}}\beta
=n_{0}\ps{\frac{\delta \varepsilon \beta^{2}}{8}}M^{n_{0}}\frac{8}{\varepsilon\beta}
\geq M^{n_{0}+1}\\
=M^{n_{0}(1+\frac{1}{n_{0}})}
\geq M^{n_{0}(1+\frac{1}{\frac{8}{\delta\varepsilon\beta^{2}}+1})}
\geq M^{n_{0}(1+\frac{\delta\varepsilon}{16}\beta^{2})}
\end{multline*}
since $\frac{8}{\delta\varepsilon\beta^{2}}\geq 1$, and this proves \Lemma{lemma-main} with $\kappa=\frac{\delta\varepsilon}{16}$.\\
\begin{remark}
By inspecting the proof of \Lemma{good-or-bad} below, one can solve for explicit values of $\varepsilon,c,\beta_{0}$, and $K$. In particular, one can choose $\varepsilon<\frac{1}{12288}$, $K<\frac{1}{4096}$, $c<\frac{1}{64}$, and $\beta_{0}=\frac{1}{356}$, so that the supremum of permissible values of $\kappa$ is at least $ 2^{-41}$, and is by no means tight.
\end{remark}
In the next two subsections, we prove \Lemma{good-or-bad} and \Lemma{bad}.
\subsection{Proof of \Lemma{good-or-bad}}
\label{s:good-or-bad}
Fix $Q$ as in the statement of the lemma. For any $I\in \lambda(Q)$,
\begin{align*}
\ell(\gamma_{n+1}|_{I})-\ell(\gamma_{n}|_{I})
& \geq \ell(\gamma_{n+1}|_{I})- |\gamma_{n}(a_{I})-\gamma_{n}(b_{I})|\\
& = \ell(\gamma_{n+1}|_{I})- |\gamma_{n+1}(a_{I})-\gamma_{n+1}(b_{I})|\geq 0.
\end{align*}
Hence, to prove the lemma, it suffices to show that either $Q\in\Delta_{Bad}$ or there is an interval $I\in \lambda(Q)$ for which
\[\ell(\gamma_{n+1}|_{I})-\ell(\gamma_{n}|_{I})\geq \frac{\varepsilon\beta}{4}\text{diam} Q.\]
Fix $N$ so that $Q\in \Delta_{N}$. Let $\tilde{Q}\in \Delta_{N+1}$ be such that
\[x_{Q}\in \tilde{Q}\subset\tilde{Q}^{1}=Q\]
and pick $I\in\lambda(Q)$ such that $\gamma_{n+1}(I)\cap \tilde{Q}\neq\emptyset$. Note that $\gamma_{n}(I)\subseteq Q$ is a segment with the same endpoints as $\gamma_{n+1}(I)$, hence
\begin{align}
\ell(\gamma_{n}|_{I})
& ={\mathscr{H}}^{1}(\gamma_{n}(I))
=\text{diam} \gamma_{n}(I)
=|\gamma_{n}(a_{I})-\gamma_{n}(b_{I})|\notag \\
& =|\gamma_{n+1}(a_{I})-\gamma_{n+1}(b_{I})| \leq \text{diam} Q
\label{e:allthesame}
\end{align}
Before proceeding, we'll give a rough idea of how the proof will go. We will consider a few cases, which are illustrated in Figure \ref{f:cases} below.\\
\begin{figure}[h]
\begin{picture}(100,240)(80,0)
\put(0,0){{\includegraphics[width=240pt]{cases.pdf}}}
\put(55,145){$\gamma_{n}(I)$}
\put(70,175){$\gamma_{n+1}(I)$}
\put(40,190){$\tilde{Q}$}
\put(10,220){$Q$}
\put(52,85){$z$}
\put(35,95){{}\color{gray} $ X$}
\put(200,80){$\rho$}
\put(0,125){Case 1}
\put(125,125){Case 2a}
\put(0,0){Case 2b}
\put(185,85){$z$}
\put(125,0){Case 2b cont.}
\end{picture}
\caption{Illustrations of cases 1,2a, and 2b.}
\label{f:cases}
\end{figure}
In the first case, we assume the diameter of $\gamma_{n}(I)$ is small with respect to $Q$; since $\gamma_{n+1}|_{I}$ has the same endpoints as $\gamma_{n}|_{I}$ and intersects the center cube $\tilde{Q}$, there must be a large difference in length between $\gamma_{n+1}(I)$ and $\gamma_{n}(I)$ since the former must enter $Q$, hit $\tilde{Q}$, and then exit $Q$, and so \eqn{eb/4} will hold. For the next two cases, we assume $\gamma_{n}(I)$ has large diameter. The second case (2a) assumes that $\gamma_{n+1}(I)$ contributes more length than $\gamma_{n}(I)$, again implying \eqn{eb/4} trivially. (It is possible to combine this case with (1), but we found this split to be somewhat convenient.) In the final case (2b) we assume the difference in length between $\gamma_{n+1}(I)$ and $\gamma_{n}(I)$ is small. Since $\beta_{X}'(B_{Q})>\beta$, we can show this implies the existence of $z\in X$ far away from $\gamma_{n+1}(I)$ (since $\gamma_{n+1}|_{I}$ has small geodesic deviation, so it can't approximate all of $ X$ in $B_{Q}$). Since $\Gamma_{n_{0}}$ approximates $ X$, we can find a large curve $\rho\subseteq \Gamma_{n_{0}}$ entering $B_{Q}$, approaching $z$, and then leaving $B_{Q}$. The presence of both $\gamma(I)$ and $\rho$ inside $Q$ implies that the total length of $\Gamma_{n_{0}}\cap Q$ must be large, which means $Q\in \Delta_{Bad}$. \\
Now we proceed with the actual proof.
{\bf Case 1:} Suppose $\ell(\gamma_{n}(I))<\frac{\text{diam} Q}{4}$. Since $\gamma_{n+1}|_{I}$ is a path entering $Q$, hitting $\tilde{Q}$, and then leaving $Q$, we can estimate
\begin{align}
\ell(\gamma_{n+1}|_{I})
& \geq 2\mbox{dist}(\tilde{Q},Q^{c})
\stackrel{\eqn{1+veb}}{\geq} 2\mbox{dist} ((1+\varepsilon\beta)B_{\tilde{Q}},B_{Q}^{c}) \notag \\
& =2(cM^{-N}-(1+\varepsilon\beta)cM^{-N-1})
=2cM^{-N}(1-(1+\varepsilon\beta)M^{-1})\notag \\
& \geq \text{diam} B_{Q} \ps{1-\frac{\varepsilon\beta}{8}-\frac{\varepsilon^{2}\beta^{2}}{8}}
>(1-\varepsilon\beta)\text{diam} B_{Q} \notag \\
& \stackrel{\eqn{1+veb}}{\geq} \frac{1-\varepsilon\beta}{1+\varepsilon\beta}\text{diam} Q =\ps{\frac{1+\varepsilon\beta}{1+\varepsilon\beta}-\frac{2\varepsilon\beta}{1+\varepsilon\beta}}\text{diam} Q \geq (1-2\varepsilon\beta)\text{diam} Q.
\label{e:ln+1}
\end{align}
Thus,
\[
\ell(\gamma_{n+1}|_{I})-\ell(\gamma_{n}|_{I})
\stackrel{\eqn{ln+1}}{\geq} (1-2\varepsilon\beta)\text{diam} Q-\frac{\text{diam} Q}{4}
\geq \frac{\text{diam} Q}{8}
\]
if $\varepsilon<\frac{1}{16}$, which implies the lemma in this case.\\
{\bf Case 2:} Suppose
\begin{equation}
\ell(\gamma_{n}|_{I})\geq \frac{\text{diam} Q}{4}
\label{e:case2}
\end{equation}
We again split into two cases.\\
{\bf Case 2a:} Suppose
\[ \ell(\gamma_{n+1}|_{I})\geq (1+\varepsilon \beta) \ell(\gamma_{n}|_{I}).\]
Then
\[\ell(\gamma_{n+1}|_{I}) - \ell(\gamma_{n}|_{I}) \geq \varepsilon \beta \ell(\gamma_{n}|_{I}) \stackrel{\eqn{case2}}{\geq} \frac{\varepsilon\beta}{4}\text{diam} Q.\]
{\bf Case 2b:} Now suppose
\begin{equation}
\ell(\gamma_{n+1}|_{I})< (1+\varepsilon \beta) \ell(\gamma_{n}|_{I}).
\label{e:2b}
\end{equation}
Note that in this case, we have a better lower bound on $\ell(\gamma_{n}|_{I})$, namely,
\begin{equation}
\ell(\gamma_{n}|_{I})
\stackrel{\eqn{2b}}{\geq} \frac{\ell(\gamma_{n+1}|_{I})}{1+\varepsilon\beta}
\stackrel{\eqn{ln+1}}{\geq} \frac{1-2\varepsilon\beta}{1+\varepsilon\beta}\text{diam} Q
\geq (1-3\varepsilon\beta)\text{diam} Q.
\label{e:ln}
\end{equation}
Let $C\in (0,1)$ (we will pick its value later).
\begin{sublemma}
Assuming the conditions in case 2b, let $I'\subseteq I$ be the smallest interval with
\[\gamma_{n+1}(a_{I'}),\gamma_{n+1}(b_{I'})\in \d ((1-C\beta) B_{Q})\]
and $\gamma_{n+1}(I')\cap \tilde{Q}\neq\emptyset$. Then
\begin{equation}
\ell(\gamma_{n+1}|_{I'})-|\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})| \leq 2\varepsilon\beta |\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|
\label{e:betaI'}
\end{equation}
\end{sublemma}
\begin{proof}
Since $\gamma_{n+1}$ enters $(1-C\beta)B_{Q}$, hits $\tilde{Q}$, and then leaves $(1-C\beta)B_{Q}$, we have
\begin{align}
\ell(\gamma_{n+1}|_{I'})
&\geq 2\mbox{dist}(\tilde{Q},(1-C\beta)B_{Q}^{c})
\stackrel{\eqn{1+veb}}{\geq} 2\mbox{dist} ((1+\varepsilon\beta)B_{\tilde{Q}},(1-C\beta)B_{Q}^{c}) \notag \\
& = 2( (1-C\beta)cM^{-N}-(1+\varepsilon\beta)cM^{-N-1}) \notag \\
& =2cM^{-N}(1-C\beta-(1+\varepsilon\beta)M^{-1}) > \text{diam} B_{Q}(1-C\beta-2M^{-1}) \notag \\
& = (1-C\beta-\frac{\varepsilon\beta}{4})\text{diam} B_{Q}
\stackrel{\eqn{1+veb}}{\geq} \frac{1-C\beta-\frac{\varepsilon\beta}{4}}{1+\varepsilon\beta}\text{diam} Q \notag \\
& = \ps{\frac{1+\varepsilon\beta}{1+\varepsilon\beta}-\frac{C\beta+\frac{5\varepsilon\beta}{4}}{1+\varepsilon\beta}}\text{diam} Q
> (1-C\beta-2\varepsilon\beta)\text{diam} Q
\label{e:ln+1'}
\end{align}
Hence,
\begin{align}
|\gamma_{n+1} & (a_{I}) -\gamma_{n+1}(b_{I})| - |\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})| \notag \\
& \leq |\gamma_{n+1}(a_{I})-\gamma_{n+1}(a_{I'})|+|\gamma_{n+1}(b_{I})-\gamma_{n+1}(b_{I'})| \notag \\
& \leq \ell(\gamma_{n+1}|_{I\backslash I'}) =\ell(\gamma_{n+1}|_{I})-\ell(\gamma_{n+1}|_{I'}) \notag \\
& \stackrel{\eqn{2b} \atop \eqn{ln+1'}}{\leq} (1+\varepsilon\beta)\ell(\gamma_{n}|_{I})-(1-C\beta-2\varepsilon\beta)\text{diam} Q \notag \\
& \stackrel{\eqn{allthesame}}{\leq} (1+\varepsilon\beta)\text{diam} Q-(1-C\beta-2\varepsilon\beta)\text{diam} Q \notag \\
& = (3\varepsilon\beta+C\beta)\text{diam} Q \label{e:3veb+Cb}
\stackrel{\eqn{allthesame}\atop\eqn{case2}}{ \leq} 4(3\varepsilon\beta +C\beta)|\gamma_{n+1}(a_{I})-\gamma_{n+1}(b_{I})|
\end{align}
Thus,
\begin{align}
|\gamma_{n+1}(a_{I})-\gamma_{n+1}(b_{I})| & \leq \frac{|\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|}{1-4(3\varepsilon\beta +C\beta)}\notag \\
& \leq 2|\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|
\label{e:aba'b'}
\end{align}
if we pick $\varepsilon<\frac{1}{24}$ and $\beta<\frac{1}{8}$ (recall $C\in(0,1)$). By \Lemma{subarc},
\begin{multline*}
\ell(\gamma_{n+1}|_{I'})-|\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|
\stackrel{\eqn{subarc}}{\leq} \ell(\gamma_{n+1}|_{I})- |\gamma_{n+1}(a_{I})-\gamma_{n+1}(b_{I})|\\
\stackrel{\eqn{2b}}{<} \varepsilon\beta |\gamma_{n+1}(a_{I})-\gamma_{n+1}(b_{I})|
\stackrel{\eqn{aba'b'}}{\leq} 2\varepsilon\beta |\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|
\end{multline*}
which proves \eqn{betaI'}.
\end{proof}
By the main assumption in \Theorem{main}, and because we're assuming $n=0$ so that $M^{-n}=1<r_{0}$,
\begin{multline*}
\beta
<\beta_{X}'(x_{Q},(1-C\beta)cM^{-N})\\
\leq \frac{\ell(\gamma_{n+1}|_{I'})-|\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})| + \sup_{z\in (1-C\beta)B_{Q}\cap X}\mbox{dist}(z,\gamma_{n+1}(I'))}{|\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|}\\
\stackrel{\eqn{betaI'}}{\leq} \frac{2\varepsilon\beta |\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|+\sup_{z\in (1-C\beta)B_{Q}\cap X}\mbox{dist}(z,\gamma_{n+1}(I'))}{|\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|}\\
= 2\varepsilon\beta+\frac{\sup_{z\in (1-C\beta)B_{Q}\cap X}\mbox{dist}(z,\gamma_{n+1}(I'))}{|\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})|}
\end{multline*}
so there is $z\in X\cap (1-C\beta)B_{Q}$ with
\begin{align}
\mbox{dist}(z,\gamma_{n+1}(I'))
& \geq (\beta-2\varepsilon\beta) |\gamma_{n+1}(a_{I'})-\gamma_{n+1}(b_{I'})| \notag \\
& \stackrel{\eqn{aba'b'}}{\geq} \frac{\beta-2\varepsilon\beta}{2}|\gamma_{n+1}(a_{I})-\gamma_{n+1}(b_{I})| \notag \\
& \stackrel{\eqn{case2}}{\geq} \frac{\beta-2\varepsilon\beta}{8}\text{diam} Q
\geq \frac{\beta}{16}\text{diam} Q
\label{e:b/16}
\end{align}
if $\varepsilon<\frac{1}{4}$.
Since $\gamma_{n+1}([0,1])$ hits every cube in ${\mathscr{L}}_{1}(Q)$, which all have diameter at most $2(1+\varepsilon\beta)cM^{-N-1}$ by \eqn{1+veb} (recall $N$ was chosen so that $Q\in \Delta_{N}$),
\[
\Gamma_{n_{0}}\cap Q\subseteq (\gamma_{n+1}([0,1]))_{2(1+\varepsilon\beta)cM^{-N-1}} \subseteq (\gamma_{n+1}([0,1]))_{4cM^{-N-1}} \]
Note that since $Q\in \tilde{\cL}_{n}$, we have $N<n_{0}$. Since $X_{n_{0}}\subseteq \Gamma_{n_{0}}\cap X$ and $N<n_{0}$,
\begin{align*}
X\cap (1-C\beta)B_{Q}
& \subseteq X \cap Q
\subseteq (\Gamma_{n_{0}}\cap Q)_{2M^{-n_{0}}}
\subseteq (\gamma_{n+1}([0,1]))_{4cM^{-N-1}+ 2M^{-n_{0}}}\\
& \subseteq (\gamma_{n+1}([0,1]))_{(4cM^{-N-1} + 2M^{-N-1})}
= (\gamma_{n+1}([0,1]))_{(2+\frac{1}{c})M^{-1}2cM^{-N}}\\
& =(\gamma_{n+1}([0,1]))_{(2+\frac{1}{c})M^{-1}\text{diam} B_{Q}}
\subseteq (\gamma_{n+1}([0,1]))_{\frac{2}{c}M^{-1}\text{diam} B_{Q}}
\end{align*}
since $c<\frac{1}{8}$. Since $z\in X\cap (1-C\beta)B_{Q}$, there is $t\in [0,1]$ such that
\begin{equation}
|\gamma_{n+1}(t)-z|<\frac{2}{c}M^{-1} \text{diam} B_{Q} \leq \frac{\varepsilon\beta}{4c}\text{diam} Q
\label{e:gt-z}
\end{equation}
and so
\begin{multline}
\mbox{dist}(\gamma_{n+1}(t),\gamma_{n+1}(I'))
\geq \mbox{dist} (z,\gamma_{n+1}(I'))-|\gamma_{n+1}(t)-z|\\
\stackrel{\eqn{b/16} \atop \eqn{gt-z}}{\geq} \ps{\frac{\beta}{16}-\frac{\varepsilon\beta}{4c}}\text{diam} Q\geq \frac{\beta}{32} \text{diam} Q
\label{e:b/32}
\end{multline}
for $\varepsilon<\frac{c}{8}$. Also, since $z\in (1-C\beta)B_{Q}$, we know that
\begin{align}
B_{Q} &
\supseteq B\ps{z,\frac{C\beta}{2} \text{diam} B_{Q}}
\hspace{-3pt} \stackrel{\eqn{1+veb}}{\supseteq} B\ps{z,\frac{C\beta}{2(1+\varepsilon\beta)}\text{diam} Q} \notag \\
& \supseteq B\ps{z, \frac{C\beta}{4} \text{diam} Q} \hspace{-5pt}\stackrel{\eqn{gt-z}}{\supseteq} B\ps{\gamma_{n+1}(t),\ps{\frac{C\beta}{4}-\frac{\varepsilon\beta}{4c}}\text{diam} Q} \notag \\
& \supseteq B\ps{\gamma_{n+1}(t),\frac{C\beta}{8}\text{diam} Q}
\end{align}
for $\varepsilon<\frac{Cc}{2}$. In particular, $t\in \gamma_{n+1}^{-1}(B_{Q})$. Note
\begin{align*}
\mbox{dist} & (\gamma_{n+1}(t),\gamma_{n+1}(I)) \notag \\
& \geq \mbox{dist}(\gamma_{n+1}(t),\gamma_{n+1}(I'))
-\max\{\text{diam}\gamma_{n+1}([a_{I},a_{I'}]),\text{diam} \gamma_{n+1}([b_{I'},b_{I}])\} \notag \\
& \geq \mbox{dist}(\gamma_{n+1}(t),\gamma_{n+1}(I'))-\ell(\gamma_{n+1}|_{I\backslash I'})
\stackrel{\eqn{3veb+Cb} \atop \eqn{b/32}}{\geq} \frac{\beta}{32}\text{diam} Q-(3\varepsilon\beta+C\beta)\text{diam} Q \notag \\
& \geq \frac{\beta}{64}\text{diam} Q\end{align*}
for $\varepsilon<\frac{1}{384}$ and $C<\frac{1}{128}$. Thus, since of course $\frac{C}{8}<\frac{1}{128}$, we have
\[B\ps{\gamma_{n+1}(t),\frac{C\beta}{8}\text{diam} Q}\subseteq Q\backslash (\gamma_{n+1}(I))_{\frac{\beta}{128}\text{diam} Q}\]
In particular, $\gamma_{n+1}(t)\in Q$, and so by construction, $t\in [a,b]$ for some $[a,b]\in \lambda(Q)$, where $\gamma_{n+1}(a)$ and $\gamma_{n+1}(b)$ are both in $\Gamma_{n_{0}}$. Moreover, $\gamma_{n+1}((a,b))$ is a line segment in a cube $R\in \tilde{\cL}_{1}(Q)$. Letting $\zeta:=\gamma_{n+1}(a)\in \Gamma_{n_{0}}$, we then have
\begin{align}
|\zeta & -\gamma_{n+1}(t)|
\leq \text{diam} R \stackrel{\eqn{1+veb}}{\leq} (1+\varepsilon\beta)\text{diam} B_{R}
=2(1+\varepsilon\beta)cM^{-N-1} \notag \\
& \leq (1+\varepsilon\beta)M^{-1}\text{diam} Q
= (1+\varepsilon\beta)\frac{\varepsilon\beta}{8}\text{diam} Q
\leq \frac{\varepsilon\beta}{4}\text{diam} Q
\leq \frac{C\beta}{16}\text{diam} Q
\label{e:zeta-gamma}
\end{align}
for $\varepsilon<\frac{C}{4}$, and so
\begin{equation}
B\ps{\zeta,\frac{C\beta}{16}\text{diam} Q}\subseteq B\ps{\gamma_{n+1}(t),\frac{C\beta}{8}\text{diam} Q}\subseteq Q\backslash (\gamma_{n+1}(I))_{\frac{\beta}{128}\text{diam} Q}.
\label{e:b/128}
\end{equation}
%
Thus, since $\Gamma_{n_{0}}$ is connected and $\text{diam} \Gamma_{n_{0}}>\text{diam} Q_{0}>\frac{C\beta}{16}\text{diam} Q$, we know there is a curve $\rho\subseteq \Gamma_{n_{0}}\cap B(\zeta,\frac{C\beta}{16}\text{diam} Q)$ connecting $\zeta$ to $B(\zeta,\frac{C\beta}{16}\text{diam} Q)^{c}$, and hence has diameter at least $\frac{C\beta}{16}\text{diam} Q$. Hence,
\[{\mathscr{H}}^{1}_{\infty}(\rho) \geq \text{diam} \rho \geq \frac{C\beta}{16}\text{diam} Q.\]
Moreover,
\[
{\mathscr{H}}^{1}_{\infty}(\gamma(I))
\geq \text{diam} \gamma(I)
\geq |\gamma(a_{I})-\gamma(b_{I})|
\stackrel{\eqn{allthesame}}{=} |\gamma_{n}(a_{I})-\gamma_{n}(b_{I})|
\stackrel{\eqn{ln}}{\geq} (1-3\varepsilon\beta)\text{diam} Q.
\]
Hence, since any cube in ${\mathscr{L}}_{1}(Q)$ intersecting $\rho$ has diameter at most $\frac{\varepsilon\beta}{4}\text{diam} Q<\frac{\beta}{128}\text{diam} Q$ by \eqn{zeta-gamma}, such cubes are disjoint from those intersecting $\gamma(I)$ by \eqn{b/128} if we choose $\varepsilon<\frac{1}{128}$ (since if they intersect $\gamma(I)$, they also intersect $\gamma_{n+1}(I)$ by the definition of $\gamma_{n+1}$). Thus, we have
\[{\mathscr{H}}^{1}_{\infty}(\Gamma_{n_{0}}\cap Q)\geq \frac{C\beta}{16}\text{diam} Q+(1-3\varepsilon\beta)\text{diam} Q\geq \ps{1+\frac{C\beta}{32}}\text{diam} Q\]
for $\varepsilon<\frac{C}{96}$. Hence, by picking $K=\frac{C}{32}$, we see that $Q\in \Delta_{Bad}$, which finishes the proof of \Lemma{good-or-bad}.
\subsection{Geometric martingales and the proof of \Lemma{bad}}
\label{s:lemma-bad}
For $Q\in \Delta$, define $k(Q)$ to be the number of cubes in $\Delta_{Bad}$ that properly contain $Q$, and set
\[\Delta_{Bad,j}=\{Q\in \Delta_{Bad}: k(Q)=j\},\]
\[Bad_{j}(Q)=\{R\subseteq Q:k(R)=k(Q)+j\},\]
\[G(Q)=(\Gamma_{n_{0}}\cap Q)\backslash \bigcup_{R\in Bad_{1}(Q)}R.\]
We will soon define, for each $Q\in\Delta_{Bad}$, a nonnegative weight function $w_{Q}:\Gamma_{n_{0}}\rightarrow [0,\infty)$, defined ${\mathscr{H}}^{1}|_{\Gamma_{n_{0}}}$-a.e.\ in a martingale fashion as a limit of a sequence $w_{Q}^{j}$. Each $w_{Q}^{j}$ will be constant on various subsets of $\Gamma_{n_{0}}$ that partition $\Gamma_{n_{0}}$. We will actually decide the value of $w_{Q}^{j}$ on an element $A$ of the partition, say, by declaring the value of
\[w_{Q}^{j}(A):=\int_{\Gamma_{n_{0}}\cap A }w_{Q}^{j}d{\mathscr{H}}^{1}.\]
Then we will define $w_{Q}^{j+1}$ to be constant on sets in a partition subordinate to the previous partition so that, on sets $A$ in the $j$th partition, $w_{Q}^{j+1}(A)=w_{Q}^{j}(A)$, and so forth. We do this in such a way that we disseminate the mass of the weight function $w_{Q}$ so that $w_{Q}$ is supported in $Q$, has integral $\text{diam} Q$, and satisfies $w_{Q}(x)\leq \frac{1}{(1+K\beta)^{k(x)-k(Q)}}$, where $k(x)$ is the total number of bad cubes containing $x$. By geometric series, this will mean that $\sum_{Q\in \Delta_{Bad}}w_{Q}\mathds{1}_{Q}$ is a bounded function, so that its total integral is at most a constant times ${\mathscr{H}}^{1}(\Gamma_{n_{0}})$. However, the integral of each of the functions $w_{Q}$ is $\text{diam} Q$, and so this total integral is also equal to $\sum_{Q\in \Delta_{Bad}}\text{diam} Q$, which gives us \eqn{bad}. This method appears in \cite{Schul-TSP}. Now we proceed with the proof.
First set
\begin{equation}
w_{Q}^{0}(Q)=\text{diam} Q, \;\;\; w_{Q}^{0}|_{Q^{c}}\equiv 0
\label{e:wQQ}
\end{equation}
and construct $w_{Q}^{j+1}$ from $w_{Q}^{j}$ as follows:
\begin{enumerate}
\item If $R\in Bad_{j}(Q)$ for some $j$, and $S\in Bad_{1}(R)$, set $w_{Q}^{j+1}$ to be constant in $S$ so that
\begin{equation}
w_{Q}^{j+1}(S)=w_{Q}^{j}(R)\frac{\text{diam} S}{\sum_{T\in Bad_{1}(R)} \text{diam} T+{\mathscr{H}}^{1}(G(R))}.
\label{e:wQS}
\end{equation}
\item Set $w_{Q}^{j+1}$ to be constant in $G(R)$ so that
\begin{equation}
w_{Q}^{j+1}(G(R))=w_{Q}^{j}(R)-\sum_{S\in Bad_{1}(R)}w_{Q}^{j+1}(S).
\label{e:wQG}
\end{equation}
\item For points $x$ not in any $R\in Bad_{j}(Q)$, set $w_{Q}^{j+1}(x)=w_{Q}^{j}(x)$.
\end{enumerate}
Like a martingale, we have by our construction that, if $R\in Bad_{j}(Q)$, then $w_{Q}^{i}(R)=w_{Q}^{j}(R)$ for all $i\geq j$, and in particular, $w_{Q}^{j}(Q)=\text{diam} Q$ for all $j\geq 0$. \\
We will need the following inequality:
\begin{equation}
\sum_{T\in Bad_{1}(R)} \text{diam} T+{\mathscr{H}}^{1}(G(R))
\geq {\mathscr{H}}^{1}_{\infty}(R\cap\Gamma_{n_{0}})\geq (1+K\beta)\text{diam} R.
\label{e:>delta}
\end{equation}
The first inequality comes from the fact that if $\delta>0$ and $A_{i}$ is a cover of $G(R)$ by sets so that $\sum\text{diam} A_{i}<{\mathscr{H}}^{1}(G(R))+\delta$, then $\{A_{i}\}\cup Bad_{1}(R)$ is a cover of $R$ (up to a set of ${\mathscr{H}}^{1}$-measure zero by \Lemma{zero-boundary}), and so
\begin{align*}
\sum_{T\in Bad_{1}(R)}\text{diam} T+{\mathscr{H}}^{1}(G(R))+\delta &
>\sum\text{diam} A_{i}+\sum_{T\in Bad_{1}(R)}\text{diam} T \\
& \geq {\mathscr{H}}^{1}_{\infty}(R\cap \Gamma_{n_{0}})
\end{align*}
which gives the first inequality in \eqn{>delta} by taking $\delta\rightarrow 0$. The last inequality in \eqn{>delta} is from the definition of $\Delta_{Bad}$.
For $S\in Bad_{1}(R)$ and $R\in Bad_{j}(Q)$, by induction we have
\begin{align}
\frac{w_{Q}^{j+1}(S)}{\text{diam} S}
& \stackrel{\eqn{wQS}}{=} \frac{w_{Q}^{j}(R)}{\sum_{T\in Bad_{1}(R)} \text{diam} T+{\mathscr{H}}^{1}(G(R))}
\stackrel{\eqn{>delta}}{\leq} \frac{w_{Q}^{j}(R)}{\text{diam} R}\frac{1}{1+K\beta} \notag \\
& \leq \frac{w_{Q}^{0}(Q)}{\text{diam} Q}\frac{1}{(1+K\beta)^{j+1}}
\stackrel{\eqn{wQQ}}{=}\frac{1}{(1+K\beta)^{j+1}}
\label{e:1+Kvebj}
\end{align}
Hence, since $w_{Q}^{j+1}$ is constant in $S$, for $x\in S\cap \Gamma_{n_{0}}$,
\begin{align}
w_{Q}^{j+1}(x)
& \stackrel{\eqn{wQS}}{=}w_{Q}^{j}(R) \frac{\text{diam} S}{\sum_{T\in Bad_{1}(R)}\text{diam} T+{\mathscr{H}}^{1}(G(R))}\frac{1}{{\mathscr{H}}^{1}(S\cap \Gamma_{n_{0}})} \notag \\
& \stackrel{\eqn{>delta}}{ \leq} \frac{w_{Q}^{j}(R) }{\sum_{T\in Bad_{1}(R)}\text{diam} T+{\mathscr{H}}^{1}(G(R))}\frac{1}{1+K\beta} \notag \\
& \stackrel{\eqn{>delta}}{ \leq} \frac{w_{Q}^{j}(R)}{\text{diam} R}\frac{1}{(1+K\beta)^{2}} \stackrel{\eqn{1+Kvebj}}{\leq} \frac{w_{Q}^{0}(Q)}{\text{diam} Q}\frac{1}{(1+K\beta)^{j+2}}
=\frac{1}{(1+K\beta)^{j+2}}.
\label{e:wonR}
\end{align}
Moreover, if $x\in G(R)$,
\begin{align}
w_{Q}^{j+1}(x)
&=\frac{w_{Q}^{j+1}(G(R))}{{\mathscr{H}}^{1}(G(R))}
\stackrel{\eqn{wQG}}{=} \frac{w_{Q}^{j}(R)-\sum_{S\in Bad_{1}(R)}w_{Q}^{j+1}(S)}{{\mathscr{H}}^{1}(G(R))} \notag \\
& \stackrel{\eqn{wQS}}{=}\frac{w_{Q}^{j}(R)}{{\mathscr{H}}^{1}(G(R))}\ps{1-\sum_{S\in Bad_{1}(R)}\frac{\text{diam} S}{\sum_{T\in Bad_{1}(R)}\text{diam} T+{\mathscr{H}}^{1}(G(R))}} \notag \\
& =\frac{w_{Q}^{j}(R)}{{\mathscr{H}}^{1}(G(R))}\frac{{\mathscr{H}}^{1}(G(R))}{\sum_{T\in Bad_{1}(R)}\text{diam} T+{\mathscr{H}}^{1}(G(R))} \notag \\
& = \frac{w_{Q}^{j}(R)}{\sum_{T\in Bad_{1}(R)}\text{diam} T+{\mathscr{H}}^{1}(G(R))}
\stackrel{\eqn{>delta}}{<}\frac{w_{Q}^{j}(R)}{\text{diam} R}\frac{1}{1+K\beta} \\
&
\stackrel{\eqn{1+Kvebj}}{\leq}\frac{1}{(1+K\beta)^{j+1}}
\label{e:wonG}
\end{align}
Since $\Delta_{Bad}\subseteq \bigcup_{j=0}^{n_{0}}\Delta_{j}$, and ${\mathscr{H}}^{1}(\bigcup_{Q\in \Delta}\d Q)=0$, almost every point $x\in Q_{0}\cap \Gamma_{n_{0}}$ is contained in at most finitely many cubes in $\Delta_{Bad}$, and hence the value of $w_{Q}^{j+1}(x)$ changes only finitely many times in $j$, thus the limit $w_{Q}=\lim_{j}w_{Q}^{j}$ is well defined almost everywhere. For $x\in Q\cap \Gamma_{n_{0}}$, set $k(x)=k(R)$ where $R\subseteq Q$ is the smallest cube in $\Delta_{Bad}$ containing $x$. Then \eqn{wonR} and \eqn{wonG} imply
\[w_{Q}(x)\leq \frac{1}{(1+K\beta)^{k(x)-k(Q)}}\]
and so
\[\sum_{x\in Q\in \Delta_{Bad}}w_{Q}(x) \leq \sum_{j=0}^{k(x)}\frac{1}{(1+K\beta)^{j}}
\leq \sum_{j=0}^{\infty}\frac{1}{(1+K\beta)^{j}}
= \frac{1+K\beta}{K\beta}\leq \frac{2}{K\beta}\]
since $K\beta< 1$. Hence,
\begin{align*}
\sum_{Q\in \Delta_{Bad}}\text{diam} Q
& =\sum_{Q\in \Delta_{Bad}}\int_{Q}w_{Q}(x)d{\mathscr{H}}^{1}(x)
=\int_{\Gamma_{n_{0}}}\ps{\sum_{x\in Q\in \Delta_{Bad}} w_{Q}(x)}d{\mathscr{H}}^{1}(x)\\
& \leq \frac{2}{K\beta}{\mathscr{H}}^{1}(\Gamma_{n_{0}})
\end{align*}
which finishes the proof of \Lemma{bad}.
\section{Antenna-like sets}
\label{s:antenna}
This section is devoted to the proof of \Theorem{antenna-like}.
It is easy to verify using the definitions that being antenna-like is a quasisymmetric invariant quantitatively, so by \Theorem{main}, it suffices to verify that, if $ X$ is $c$-antenna-like, then any ball $B(x,r)$ with $x\in X$ and $0<r<\frac{\text{diam} X}{2}$ has $\beta'(x,r)>\frac{c}{7}$.
Fix such a ball, so there is a homeomorphism $h:\bigcup_{i=1}^{3}[0,e_{i}]\rightarrow X\cap B(x,r)$ so that
\begin{equation}
\mbox{dist} (h(e_{i}),h([0,e_{j}]\cup [0,e_{k}]))\geq cr
\label{e:hy}
\end{equation}
for all permutations $(i,j,k)$ of $(1,2,3)$ (see Figure \ref{f:antenna-beta}).
Let $s:[0,1]\rightarrow B(x,r)$ satisfy
\[\ell(s|_{[0,1]})-|s(0)-s(1)|+\sup_{z\in X\cap B(x,r)}\mbox{dist}(z,s([0,1]))<2\beta'(x,r)|s(0)-s(1)|=:\beta.\]
Set $x_{i}=h(e_{i})$ for $i=1,2,3$ and let
\[
t_{1}=\inf s^{-1}\ps{\bigcup_{i=1}^{3}B(x_{i},\beta)}.
\]
This always exists since $X\cap B(x,r)\subseteq (s([0,1]))_{\beta}$. Without loss of generality, assume $s(t_{1})\in B(x_{1},\beta).$ Similarly, let
\begin{equation}
t_{2}=\inf s^{-1}\ps{\bigcup_{i=2}^{3}B(x_{i},\beta)}
\label{e:t_2}
\end{equation}
and again, without loss of generality, assume $s(t_{2})\in B(x_{2},\beta)$.
\begin{figure}[h]
\begin{picture}(100,180)(45,0)
\put(0,0){\includegraphics[width=200pt]{antenna-beta.pdf}}
\put(37,77){$s(\zeta_{2})$}
\put(60,60){$z$}
\put(13,35){$x_{1}$}
\put(2,17){$s(t_{1})$}
\put(175,33){$s(t_{2})$}
\put(75,30){$s(\zeta_{1})$}
\put(75,177){$x_{3}$}
\put(175,65){$x_{2}$}
\put(80,110){${\color{gray} h(Y)\subseteq X\cap B(x,r)}$}
\end{picture}
\caption{}
\label{f:antenna-beta}
\end{figure}
Note that $h([0,e_{1}]\cup [0,e_{3}])$ is a path connecting $x_{1}$ to $x_{3}$; by our choices of $t_{1}$ and $t_{2}$, the former point is contained in $(s([t_{1},t_{2}]))_{\beta}$ but the latter is not; otherwise, there would be $t\in [t_{1},t_{2}]$ such that $s(t)\in B(x_{3},\beta)$, contradicting the minimality of $t_{2}$. Since $h([0,e_{1}]\cup [0,e_{3}])$ is connected and $(s([t_{1},t_{2}]))_{\beta}$ contains $x_{1}$ but not $x_{3}$, we can pick a point $z\in h([0,e_{1}]\cup [0,e_{3}])$ so that $\mbox{dist} (z,s([t_{1},t_{2}]))=\beta$. Pick $\zeta_{1}\in [t_{1},t_{2}]$ and $\zeta_{2}\in (t_{2},1]$ so that
\begin{equation}
|s(\zeta_{1})-z|=\mbox{dist} (z,s([t_{1},t_{2}]))=\beta \;\;\; \mbox{ and } \;\;\; |s(\zeta_{2})-z|<\beta.
\label{e:zeta12}
\end{equation}
Then by \Lemma{subarc},
\begin{align*}
2\beta'(x,r)|s(0)-s(1)|
& >\ell(s|_{[0,1]})-|s(0)-s(1)| \geq \ell(s|_{[\zeta_{1},\zeta_{2}]})-|s(\zeta_{1})-s(\zeta_{2})|\\
& \geq \ell(s|_{[\zeta_{1},t_{2}]}) + \ell(s|_{[t_{2}, \zeta_{2}]}) -|s(\zeta_{1})-z|-|z-s(\zeta_{2})|\\
& \hspace{-3pt}\stackrel{\eqn{zeta12}}{>} |s(\zeta_{1})-s(t_{2})|+|s(t_{2})-s(\zeta_{2})|-\beta-\beta\\
& \geq |z-x_{2}|-|s(\zeta_{1})-z|-|x_{2}-s(t_{2})| \\
& \hspace{13pt}+|x_{2}-z|-|x_{2}-s(t_{2})|-|s(\zeta_{2})-z|-2\beta\\
& \hspace{-13pt}\stackrel{\eqn{hy},\eqn{zeta12}}{ \geq} cr-\beta-\beta+cr-\beta-\beta-2\beta \\
& =2cr-6\beta\geq c|s(0)-s(1)|-12\beta'(x,r)|s(0)-s(1)|
\end{align*}
which yields $\beta'(x,r)\geq \frac{c}{14}$; choosing $s$ with value within a factor $1+\eta$ (rather than $2$) of the infimum and letting $\eta\rightarrow 0$ improves this to $\beta'(x,r)\geq \frac{c}{7}$, which completes the proof of \Theorem{antenna-like}.
\section{Comparison of the $\beta$-numbers}
\label{s:betas}
For quantities $A$ and $B$, we will write $A\lesssim B$ if there is a universal constant $C$ so that $A\leq CB$, and $A\sim B$ if $A\lesssim B\lesssim A$.
\begin{lemma}
Let $X\subseteq \ell^{\infty}$ be a compact connected set, $x\in X$, and $0<r<\frac{\text{diam} X}{2}$. Then
\begin{equation}
\beta'(x,r) \leq \hat{\beta}(x,r)\lesssim \beta'(x,r)^{\frac{1}{2}}.
\end{equation}
\label{l:beta'-bhat}
\end{lemma}
\begin{proof}
The first inequality follows trivially from the definitions, since each sequence $y_{0},...,y_{n}\in X$ induces a finite polygonal Lipschitz path $s$ in $\ell^{\infty}$ for which
\[\ell(s)-|s(0)-s(1)|=\sum_{i=0}^{n-1}|y_{i}-y_{i+1}|-|y_{0}-y_{n}|.\]
For the opposite inequality, let $s:[0,1]\rightarrow\ell^{\infty}$ be such that
\begin{equation}
\frac{\ell(s)-|s(0)-s(1)| + \sup_{z\in B(x,r)\cap X}\mbox{dist}(z,s([0,1]))}{|s(0)-s(1)|}\leq 2\beta'(x,r)=:\beta.
\label{e:infs}
\end{equation}
Let
\[A=s^{-1}\ps{(X\cap B(x,r))_{2\beta|s(0)-s(1)|}}\]
which is a relatively open subset of $[0,1]$. Let $a=\inf A$, $b=\sup A$, and define $a=t_{0}<t_{1}<\cdots<t_{n}\leq 1$ inductively by setting
\[t_{i+1}=\inf\{t\in A\cap (t_{i},b]: \mbox{dist} (s(t),s([t_{0},t_{i}]))>\beta^{\frac{1}{2}}|s(0)-s(1)|\}.\]
We claim that
\begin{equation}
n\sim \beta^{-\frac{1}{2}}.
\label{e:nb1/2}
\end{equation}
To see this, note that since $|s(t_{i})-s(t_{i+1})|\geq \beta^{\frac{1}{2}}|s(0)-s(1)|$ , the sets $B(s(t_{i}), \frac{\beta^{\frac{1}{2}}}{2}|s(0)-s(1)|)$ are disjoint, so that
\[n\frac{\beta^{\frac{1}{2}}}{2}|s(0)-s(1)|\leq \ell(s)\leq (1+\beta)|s(0)-s(1)|\leq 2|s(0)-s(1)|\]
which gives $n\leq 4\beta^{-\frac{1}{2}}$. On the other hand, the balls $B(s(t_{i}), 2\beta^{\frac{1}{2}}|s(0)-s(1)|)$ cover $s([0,1])$, and so
\begin{align*}
|s(0)-s(1)|
& \leq \ell(s)
\leq \sum_{i=0}^{n} \text{diam} B(s(t_{i}),2\beta^{\frac{1}{2}}|s(0)-s(1)|)\\
& \leq (n+1)4\beta^{\frac{1}{2}}|s(0)-s(1)|
\leq 8n\beta^{\frac{1}{2}}|s(0)-s(1)|
\end{align*}
which gives $n\geq (8\beta^{\frac{1}{2}})^{-1}$, and this proves \eqn{nb1/2}.
By the definition of $A$, there are
\[y_{i}\in X\cap\cnj{B(s(t_{i}),2\beta|s(0)-s(1)|)}.\]
Then
\begin{multline*}
\sum_{i=0}^{n-1}|y_{i}-y_{i+1}|-|y_{0}-y_{n}|
\leq \sum_{i=0}^{n-1}|s(t_{i})-s(t_{i+1})|+4n\beta|s(0)-s(1)|-|s(t_{0})-s(t_{n})|\\
\stackrel{\eqn{nb1/2}}{\leq} \ell(s|_{[t_{0},t_{n}]})-|s(t_{0})-s(t_{n})|+C\beta^{\frac{1}{2}}|s(0)-s(1)|\\
\stackrel{\eqn{infs}}{\leq} \beta|s(0)-s(1)|+C\beta^{\frac{1}{2}}|s(0)-s(1)|\lesssim \beta^{\frac{1}{2}}|s(0)-s(1)|.
\end{multline*}
{\bf Claim:} $|s(0)-s(1)|\lesssim |s(t_{0})-s(t_{n})|$.
Since $X$ is connected and $r<\frac{\text{diam} X}{2}$, there is a connected subset of $X$ joining $x$ to $\d B(x,r)$, which necessarily has diameter at least $r$, hence
\begin{align*}
|s(0)-s(1)| & \leq 2 r\leq 2(\ell(s|_{[t_{0},t_{n}]})-4\beta|s(0)-s(1)|) \\
& \leq 2|s(t_{0})-s(t_{n})|+C\beta^{\frac{1}{2}}|s(0)-s(1)|,\end{align*}
which, if $\beta^{\frac{1}{2}}$ is small enough, implies
\[|s(0)-s(1)|\leq 4|s(t_{0})-s(t_{n})|=4|y_{0}-y_{n}|\]
so that the above estimates imply
\begin{equation} \sum_{i}|y_{i}-y_{i+1}|-|y_{0}-y_{n}|\lesssim \beta^{\frac{1}{2}}|s(0)-s(1)| \leq 4\beta^{\frac{1}{2}}|y_{0}-y_{n}|
\label{e:yb1/21}
\end{equation}
Moreover,
\begin{align}
X\cap B(x,r)
& \subseteq (s([0,1]))_{\beta|s(0)-s(1)|}
\subseteq \bigcup_{i} B(s(t_{i}),(2\beta^{\frac{1}{2}} +\beta)|s(0)-s(1)|) \notag \\
& \subseteq \bigcup_{i}B(y_{i},(2\beta^{\frac{1}{2}} +\beta+2\beta)|s(0)-s(1)|) \notag \\
& \subseteq \bigcup_{i}B(y_{i},5\beta^{\frac{1}{2}}|s(0)-s(1)|)
\subseteq \bigcup_{i}B(y_{i},20\beta^{\frac{1}{2}}|y_{0}-y_{n}|)
\label{e:yb1/22}
\end{align}
Thus \eqn{yb1/21} and \eqn{yb1/22} imply $\hat{\beta}(x,r)\leq 20\beta^{\frac{1}{2}}= 20\sqrt{2}\beta'(x,r)^{\frac{1}{2}}$.
\end{proof}
\begin{proposition}
If $ X$ is a compact connected subset of some Hilbert space, then
\[\beta''(x,r)\leq \beta(x,r)\lesssim \beta''(x,r) \mbox{ for }x\in X \mbox{ and }r<\frac{\text{diam} X}{2}\]
where
\[\beta''(x,r) =\inf_{s} \ps{\ps{\frac{\ell(s)-|s(0)-s(1)|}{|s(0)-s(1)|}}^{\frac{1}{2}} + \frac{\sup_{z\in B(x,r)\cap X}\mbox{dist} (z,s([0,1]))}{|s(0)-s(1)|}}.\]
In particular,
\begin{equation}
\beta'(x,r)\leq \beta(x,r)\lesssim \beta'(x,r)^{\frac{1}{2}}.
\label{e:bb''-prop}
\end{equation}
\label{p:bb''}
\end{proposition}
Note that \eqn{bb''-prop} is tight in the sense that if $ X\subseteq {\mathbb{C}}$, $0\in X$, and $B(0,1)\cap X=[-1,1]\cup [0,i\varepsilon]$, then by \Theorem{antenna-like} and \eqn{bb''-prop}, for all $\varepsilon>0$,
\[ \beta(0,1)\leq \varepsilon \leq 7\beta'(0,1)\leq 7\beta(0,1)\leq 7\varepsilon.\]
However, if $ X\cap B(0,1)=[-1,0]\cup [0,e^{i\varepsilon}]$, then for all $\varepsilon>0$, again by \eqn{bb''-prop} (and estimating $\beta''(0,1)$ by letting $s$ be the path traversing the segments $[-1,0]\cup [0,e^{i\varepsilon}]$),
\[\beta(0,1)^{2}\sim \varepsilon^{2}\gtrsim \beta'(0,1)\gtrsim \beta(0,1)^{2}.\]
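To make the last comparison concrete (a back-of-the-envelope computation, with constants not optimized): taking $s$ to traverse $[-1,0]\cup[0,e^{i\varepsilon}]$, we get $\ell(s)=2$, the supremum term vanishes since $s([0,1])$ is the whole set, and
\[|s(0)-s(1)|=|e^{i\varepsilon}-(-1)|=2\cos\frac{\varepsilon}{2}=2-\frac{\varepsilon^{2}}{4}+O(\varepsilon^{4}),\]
so $\ell(s)-|s(0)-s(1)|\sim \varepsilon^{2}$ and hence $\beta'(0,1)\lesssim\varepsilon^{2}$, whereas the distance of the tip $e^{i\varepsilon}$ from any line through the set is of order $\varepsilon$, so $\beta(0,1)\sim\varepsilon$.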
\begin{proof}
For the first inequality, simply let $s:[0,1]\rightarrow {\mathscr{H}}$ be the line segment spanning $L\cap B(x,r)$ where $L$ is some line passing through $B(x,\frac{r}{2})$. Then $\ell(s)={\mathscr{H}}^{1}(L\cap B(x,r))\geq r$ and hence
\[
\beta''(x,r)
\leq \frac{\sup_{z\in B(x,r)\cap X}\mbox{dist} (z,s([0,1]))}{|s(0)-s(1)|}
\leq \frac{\sup_{z\in B(x,r)\cap X}\mbox{dist} (z,L)}{r}.\]
Since $x\in X$, the range of admissible lines in the infimum in \eqn{euclidean-beta} can be taken to be lines intersecting $B(x,\frac{r}{2})$. Using this fact and infimizing the above inequality over all such lines proves the first inequality in \eqn{bb''-prop}.
For the opposite inequality, let $s$ satisfy
\[\ps{\frac{\ell(s)-|s(0)-s(1)|}{|s(0)-s(1)|}}^{\frac{1}{2}} + \frac{\sup_{z\in B(x,r)\cap X}\mbox{dist} (z,s([0,1]))}{|s(0)-s(1)|} \leq 2\beta''(x,r)=:\beta.\]
Let
\[\beta(s):=\sup_{t\in [0,1]}\mbox{dist} (s(t),[s(0),s(1)]).\]
Then by the Pythagorean theorem, there is $c>0$ so that
\[(1+c\beta(s)^{2})|s(0)-s(1)|\leq \ell(s)\leq (1+\beta^{2})|s(0)-s(1)|\]
so that $\beta(s)\leq c^{-\frac{1}{2}}\beta$. Hence, if $L$ is the line passing through $s(0)$ and $s(1)$,
\begin{multline*}
\beta(x,r)
\leq \sup_{z\in B(x,r)\cap X}\mbox{dist} (z,L)
\leq \sup_{z\in B(x,r)\cap X}\mbox{dist} (z,[s(0),s(1)])\\
\leq \beta(s)+\sup_{z\in B(x,r)\cap X}\mbox{dist} (z,s([0,1]))
\leq c^{-\frac{1}{2}}\beta+\beta
\lesssim \beta .
\end{multline*}
\end{proof}
Early in the theory of fracture, Griffith\cite{g} used Inglis' stress
analysis\cite{inglis} of an elliptical flaw in a linear elastic material to
predict the critical stress under which a crack irreversibly grows, causing
the material to fracture. Conversely, for a stressed solid the Griffith
criterion determines the crack nucleation barrier: if the material has
micro-cracks due to disorder or (less commonly) thermal fluctuations, how
long does a micro-crack have to be to cause failure under a given load? In
a sense, a solid under stretching is similar to a supercooled gas: the point
of zero external stress plays the role of the liquid-gas condensation point.
Fisher's\cite{fisher} theory of the condensation point predicts that the
free energy of the system develops an essential singularity at the
transition point. In this paper we develop a framework for the
field-theoretical calculations of the thermodynamics of linear elastic
theory with cracks (voids) that naturally incorporates the quadratic
fluctuations, and we calculate the analogue of Fisher's essential
singularity. As is well known, the imaginary part of this essential
singularity can be used to give the lifetime to fracture: what is the rate
per unit volume of micro-crack fluctuations large enough to nucleate
failure?
There is much work on thermal fluctuations leading to failure at rather high
tensions, near the threshold for instability (the spinodal
point)\cite{selinger}; there is also work on the role of disorder in
nucleating cracks at low tensions\cite{k1}. We are primarily interested in
the thermal statistical mechanics of cracks under {\it small} tension. We
must admit and emphasize that, practically speaking, there are no thermal
crack fluctuations under small tension --- our calculations are of no
practical significance. Why are we studying thermal cracks in this formal
limit? First, for sufficiently small tension, the bulk of the material
(excluding regions near the crack tips) obeys linear elastic theory, thus
making an analytical treatment of the fracture thermodynamics tractable.
Second, the {\it real} part of our essential singularity implies that
nonlinear elastic theory is not convergent! Just as in quantum
electrodynamics\cite{dyson} and other field theories\cite{zj}, for all
finite temperatures, nonlinear elastic theory is an asymptotic expansion,
with zero radius of convergence at zero pressure. We will calculate the
high-order terms in the perturbation expansion governing the response of a
system to infinitesimal tension. We find it intriguing that Hooke's law is
actually the first term in this asymptotic series.
The paper is organized as follows. In the next section, following methods
known in the crack community\cite{b}---\cite{sl}, we carefully examine the
thermodynamic limit of an equilibrium linear elastic theory with voids. We
consider a crack as a special case of a void. We specify the class of
boundary conditions which insure that the energy release is independent of
the shape of the material boundary at infinity and independent of the
prescribed boundary conditions. This is extremely important for the
investigation of the singular structure of the free energy, for the latter
can develop singularities only in the thermodynamic limit \cite{g1}. Using
the complex variable method in a two-dimensional elastic theory, we
calculate the energy release of an arbitrary curvy crack to quadratic order
in kink angles in section \uppercase\expandafter{\romannumeral3}. In
section \uppercase\expandafter{\romannumeral4} we find the spectrum and the
normal modes of the boundary fluctuations (surface phonons) of a straight
cut under uniform isotropic tension at infinity. Section
\uppercase\expandafter{\romannumeral5} is devoted to the calculation of the
imaginary part of the free energy. The calculation of the contribution of
thermal fluctuations depends on the ``molecular structure'' of our material
at short length scales --- in field theory language, it is {\it
regularization} {\it dependent}. We calculate the imaginary part of the free
energy both for a $\zeta$-function and for a particular lattice regularization,
and determine the temperature dependent renormalization of the surface
tension. Earlier we showed\cite{we} that the thermal instability of an
elastic material with respect to fracture results in non-analytical behavior
of the elastic constants (e.g. the bulk modulus) at zero applied stress. In
section \uppercase\expandafter{\romannumeral6} we extend the
calculation\cite{we} of the high order expansion of the inverse bulk modulus
by including quadratic fluctuations. We show there that the asymptotic
ratio of the high order elastic coefficients, written in terms of the
renormalized surface tension, is {\it independent} of regularization (for
the cases we have studied), and we argue also that they are independent of
nonlinear effects near the crack tips. (The asymptotic {\it nonlinear}
coefficients depend only on the {\it linear} elastic moduli.) In section
\uppercase\expandafter{\romannumeral7} we perform the simplified calculation
(without fluctuations) in several more general contexts: anisotropic strain
(nonlinear Young's modulus), cluster nucleation and dislocation nucleation,
and three-dimensional brittle fracture. We also discuss the effects of vapor
pressure --- nonperturbative effects when bits detach from the crack!
Finally, we summarize our results in section
\uppercase\expandafter{\romannumeral8}.
\section{The thermodynamic limit of the energy release}
Elastic materials under a stretching load can relieve deformation energy
through the formation of cracks and voids. The famous Griffith criterion
\cite{g} for crack propagation is based on the balance between the energy
release and the increase in the material surface energy due to extending the
crack. For a finite size system the energy release is a well defined
quantity that depends on the shape of the material boundary. The situation
becomes more subtle in the case of an infinite elastic medium. In principle one
can calculate the energy release by analyzing stress fields near the crack tips
and thus avoid the necessity of worrying about infinite-sized media. This
method, developed by Irwin in the 1950s, is known as the stress intensity
approach\cite{ewalds}. Despite its enormous practical importance in
numerical calculations, it is usually of little help in analytical
calculations. To apply the stress intensity factor approach, one has to be
able to compute the stresses near the crack tips, which is possible only in
several simple cases. (The extension of Irwin's method through the use of
the path-independent J-integral\cite{rice}, for example, is applicable only
for cracks with flat surfaces.) Alternatively, the energy release can be
calculated by considering the system as a whole. In this approach, to compute
the energy release one has to evaluate the work done by external forces and
the change in the energy of elastic deformation. The change in the energy of
the elastic deformation involves the difference between two infinitely large
quantities for an infinite material; the latter thus requires some sort of
infinite-volume limit. In this section we discuss the energy release due to
the relaxation of the boundaries of a finite number of voids (cracks) in the
limit of an infinitely large stressed material. The methods of this section
will be used again later in the paper.
We focus on the energy release calculation for a void formed in an infinite
two-dimensional elastic material. The result is then extended to the case of
a finite number of voids (remember that a crack can be considered as a
degenerate void) and to the energy of void formation in a three-dimensional
elastic material. The energy release for a finite size system with a crack
has been calculated by Bueckner\cite{b} and later generalized by
Rice\cite{rice2} for void formation. Their analysis allows for a large class
of boundary conditions, mixing regions of fixed displacements and fixed
stresses at the perimeter. In what follows we will use a slight modification
of Bueckner's argument.
\begin{figure}
\centerline{
\psfig{figure=figure1.ps,width=3truein}}
{\caption{To calculate the energy release in the thermodynamic
limit we specify displacements along $O_U$ and stresses along $O_{\sigma}$.}
\label{fr}}
\end{figure}
We define the energy release due to the formation of a void in an infinite
material as the energy in a previously stretched material, released by
cutting out a hole in it and letting the hole boundary relax the stress: the
deformation energy of the discarded piece is excluded. Let's consider an
infinite linear elastic material subject to stress fields
$\sigma_{xx}^{\infty}$, $\sigma_{yy}^{\infty}$ and $\sigma_{xy}^{\infty}$ at
infinity. We want to calculate the energy release $E_{\rm release}$ due to
cutting out a hole with boundary $\Gamma_h$. We assume that the hole
boundary is stress free and non self-intersecting as a result of the stress
relaxation. $\Gamma_b$ denotes a regularization boundary with a
characteristic size $L$. Before the void formation, displacements along
$\Gamma_b$ are $\vec{U}_1^b=(u_1^b,v_1^b)$, while those along the
prospective void contour $\Gamma_h$ are correspondingly
$\vec{U}_1^h=(u_1^h,v_1^h)$. Let $O_{\sigma}$ and $O_U$ be nonintersecting
parts of $\Gamma_b$, such that $O_{\sigma}\cup O_U=\Gamma_b$. Along
$O_{\sigma}$ we fix the stresses $\sigma_{xx}^{\infty}$,
$\sigma_{yy}^{\infty}$ and $\sigma_{xy}^{\infty}$ as a function of position;
along $O_U$ we fix the displacements to be $\vec{U}_1^b$ (Figure \ref{fr}).
In analogy with the usage in field theory, we call each choice of boundary
conditions $O_\sigma$, $\sigma^{\infty}$, $O_U$, $U^b_1$ a {\it regularization}.
For a fixed $L$ we calculate the energy release $E_{\rm release}^L$. The
thermodynamic limit of the energy release is then given by
\begin{eqnarray}
E_{\rm release}=\lim_{L\to\infty} E_{\rm release}^L .
\label{3}
\end{eqnarray}
Physically the described regularization means that we compute
first the energy release due to the formation of the void $\Gamma_h$ in a
finite size material with the boundary $\Gamma_b$, and then push the outer
(regularization) boundary to infinity. We will show that defined as above,
$E_{\rm release}$ is independent of a particular choice of $O_{\sigma}$ and
$O_U$, even if they are themselves functions of $L$. The boundary condition
with $O_{\sigma}=\Gamma_b$ is known as a ``fixed tension boundary
condition'', while that with $O_U=\Gamma_b$ is referred to as a ``fixed grip
boundary condition''.
Let's first consider a straight cut. As shown by explicit
calculation\cite{good}, the ``fixed tension'' and the ``fixed grip''
boundary conditions for the special case of a straight cut of length $\ell$
opened by a uniform isotropic tension $T$, give the same energy release
\begin{equation}
E_{\rm release}={{\pi T^2 {\ell}^2 (\chi+1)}\over{32\mu}}
={{\pi T^2 {\ell}^2}\over {4 Y}} .
\label{st}
\end{equation}
The material elastic constants $\mu$ and $\chi$ can be expressed
through its Young's modulus $Y$ and Poisson ratio $\sigma$ as follows
\begin{eqnarray}
\mu&=&{Y\over {2 (1+\sigma)}} \label{26}\\
\chi&=&{{3-\sigma} \over {1+ \sigma} } .\nonumber
\end{eqnarray}
(The given value for $\chi$ corresponds to plane stress in a three
dimensional elastic theory; for plane strain one should use
$\chi=3-4\sigma$.) Note that (\ref{st}) coincides with Griffith's result.
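(As a quick consistency check of the second equality in (\ref{st}), substitute (\ref{26}):
\begin{displaymath}
{{\chi+1}\over{32\mu}}={1\over{32}}\,{{4}\over{1+\sigma}}\,
{{2(1+\sigma)}\over{Y}}={1\over{4Y}} ,
\end{displaymath}
so this combination of elastic constants depends only on the Young's modulus.)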
This result is not trivial! If one calculates the energy change in a region
$\Gamma_b$ embedded in an infinite medium (i.e., neither fixing the
displacements nor allowing them to relax at fixed stress), the energy
release does depend on the shape of the boundary. The relaxations at the
boundary scale like $1/L$: even as $L\to\infty$, integrated over the
perimeter they must be included for a sensible thermodynamic limit.
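A rough power counting, which we add for orientation, shows why: the
perturbation of the stress field due to the cut falls off like
$\delta\sigma\sim T\ell^2/L^2$ at distance $L$, the corresponding relaxation
displacements along $\Gamma_b$ are of order $\delta u\sim T\ell^2/(\mu L)$,
and the work done by the boundary tractions
\begin{displaymath}
\oint_{\Gamma_b} T\,\delta u\,d\ell\sim
T\,{{T\ell^{2}}\over{\mu L}}\,L\sim{{T^{2}\ell^{2}}\over{\mu}}
\end{displaymath}
stays of the same order as the energy release itself as $L\to\infty$.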
For a general void, the energy release is a sum of the work performed by
the external $(W_e)$ and the internal $(W_i)$ forces as a result of the
void formation
\begin{eqnarray}
E_{\rm release}^L=W_e+W_i .
\label{4}
\end{eqnarray}
Let $e_{ij}^{(1)}$, $\sigma_{ij}^{(1)}$ be the strain and
stress fields of the first state before the cut is made and $e_{ij}^{(2)}$,
$\sigma_{ij}^{(2)}$ define the fields of the second state with the void;
finally displacements along $O_{\sigma}$ for the second state are
$\vec{U}_2^b=(u_2^b,v_2^b)$ and displacements along the void boundary
$\Gamma_h$ are given by $\vec{U}_2^h=(u_2^h,v_2^h)$. As the void
boundary relaxes, the external forces do work
\begin{eqnarray}
W_e=\int_{O_{\sigma}} \vec{F}_n (\vec{U}_2^b-\vec{U}_1^b) d{\ell}_1
\label{5}
\end{eqnarray}
where $\vec{F}_n=(F_{n_x},F_{n_y})$ is the traction along $O_{\sigma}$
defined through the asymptotic stress fields
\begin{eqnarray}
F_{n_x}=\sigma_{xx}^{\infty} n_{x} +\sigma_{xy}^{\infty} n_{y}
\label{6} \\
F_{n_y}=\sigma_{xy}^{\infty} n_{x} +\sigma_{yy}^{\infty} n_{y} \nonumber
\end{eqnarray}
with $(n_{x},n_{y})$ being the outward normal to $O_{\sigma}$.
The work done by the internal forces is the change in the energy of
the elastic deformation of the first and the second states
\begin{eqnarray}
W_i={1\over2}{\int\int}_A \sigma_{ij}^{(1)} e_{ij}^{(1)} dA -
{1\over2}{\int\int}_A \sigma_{ij}^{(2)} e_{ij}^{(2)} dA .
\label{7}
\end{eqnarray}
The integration in (\ref{7}) is performed over the area $A$ of the
material excluding the void; summation over repeated indices
is assumed. As a consequence of Hooke's law,
\begin{eqnarray}
{\int\int}_A \sigma_{ij}^{(1)} e_{ij}^{(2)} dA=
{\int\int}_A \sigma_{ij}^{(2)} e_{ij}^{(1)} dA
\label{8}
\end{eqnarray}
so we can rewrite (\ref{7}) as
\begin{eqnarray}
W_i={1\over2}{\int\int}_A (\sigma_{ij}^{(1)} +\sigma_{ij}^{(2)})
(e_{ij}^{(1)}-e_{ij}^{(2)}) dA .
\label{9}
\end{eqnarray}
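Equation (\ref{8}) is simply the reciprocity built into Hooke's law: writing
$\sigma_{ij}=c_{ijkl}e_{kl}$ with an elastic modulus tensor obeying
$c_{ijkl}=c_{klij}$, we have
\begin{displaymath}
\sigma_{ij}^{(1)} e_{ij}^{(2)}=c_{ijkl}\,e_{kl}^{(1)}e_{ij}^{(2)}
=c_{klij}\,e_{ij}^{(2)}e_{kl}^{(1)}=\sigma_{ij}^{(2)} e_{ij}^{(1)} .
\end{displaymath}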
Let's introduce a plus state specified by the stresses
$\sigma^{+}_{ij}=\sigma_{ij}^{(1)} +\sigma_{ij}^{(2)}$ and a minus state
defined by the strain fields $e^{-}_{ij}=e_{ij}^{(1)}-e_{ij}^{(2)}$.
Then the work of the internal forces $(\ref{9})$ is a mixed energy of
the plus and the minus states. According to Betti's theorem \cite{betti},
the mixed energy equals one half the work done by the stresses
of one state over the displacements of the other, no matter from
what state the stresses or displacements are taken. With the stresses
from the plus state and the displacements from the minus state we obtain
\begin{eqnarray}
W_i&=&{1\over2}\int_{O_{\sigma}} \vec{F}^{+}_n\vec{U}^{-}_b d{\ell}_1 +
{1\over2}\oint_{\Gamma_h} \vec{F}^{+}_h\vec{U}^{-}_h d{\ell}_h
\label{10}\\
\nonumber
&=&{1\over2}\int_{O_{\sigma}} (\vec{F}_n+\vec{F}_n)
(\vec{U}_1^b-\vec{U}_2^b) d{\ell}_1\nonumber\\
&&+{1\over2}\oint_{\Gamma_h} (\vec{F}_h+\vec{0})(\vec{U}_1^h-\vec{U}_2^h)
d{\ell}_h \nonumber\\
&=&\int_{O_{\sigma}} \vec{F}_n(\vec{U}_1^b-\vec{U}_2^b) d{\ell}_1 +
{1\over2}\oint_{\Gamma_h}\vec{F}_h(\vec{U}_1^h-\vec{U}_2^h) d{\ell}_h
\nonumber
\end{eqnarray}
where the traction of the first state $\vec{F}_h$ is
defined as in (\ref{6}) through the stresses of the first state
along $\Gamma_h$. From (\ref{4}), (\ref{5}) and (\ref{10}) we find
the regularized energy release\cite{b},\cite{rice2}
\begin{eqnarray}
E_{\rm release}^L={1\over2}\oint_{\Gamma_h}\vec{F}_h(\vec{U}_1^h-
\vec{U}_2^h) d{\ell}_h\label{11} .
\end{eqnarray}
The thermodynamic limit (\ref{3}) is
then taken by simply replacing $\vec{U}_2^h$ with its value
\begin{eqnarray}
\vec{U}_{\infty}^h=\lim_{L\to\infty}\vec{U}_2^h
\label{12}
\end{eqnarray}
for the infinite medium. One can check (using for example the
complex variable method described below) that the difference between
the two is at most of order $O(1/L)$, thus according to (\ref{11})
and (\ref{3}) not contributing to $E_{\rm release}$. So we conclude that
\begin{eqnarray}
E_{\rm release}={1\over2}\oint_{\Gamma_h}\vec{F}_h(\vec{U}_1^h-
\vec{U}_{\infty}^h) d{\ell}_h .
\label{13}
\end{eqnarray}
Thus the {\it total} energy release is half the work
done at the cut boundary.
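As an illustration, for the straight cut of (\ref{st}) the traction of the
first state on the cut line is just the uniform tension $T$ normal to the cut,
while the relaxed faces open into an ellipse of semi-opening
$v(x)=[T(1+\chi)/4\mu]\sqrt{\ell^2/4-x^2}$ [this is (\ref{104}) below,
rewritten with $x=(\ell/2)\cos\alpha$]; (\ref{13}) then gives
\begin{displaymath}
E_{\rm release}={1\over2}\,T\int_{-\ell/2}^{\ell/2}2\,v(x)\,dx
={{T^{2}(1+\chi)}\over{4\mu}}\,{{\pi\ell^{2}}\over{8}}
={{\pi T^{2}\ell^{2}(1+\chi)}\over{32\mu}} ,
\end{displaymath}
recovering (\ref{st}).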
The above result is explicitly independent of a particular
regularization from the discussed class as well as of the shape of
the regularization contour $\Gamma_b$.
It has a straightforward extension to a finite number of voids
in a medium. For a medium with $n$ holes with contours
$\lbrace \Gamma_{h_i} \rbrace$, (\ref{13}) generalizes to
\begin{eqnarray}
E_{\rm release}={1\over2}\sum_{i=1}^n \oint_{\Gamma_{h_i}}\vec{F}_{h_i}
(\vec{U}_1^{h_i}-\vec{U}_{\infty}^{h_i})d{\ell}_{h_i} .
\label{18}
\end{eqnarray}
The same class of regularizations gives a well-defined thermodynamic limit in
the three-dimensional case as well. The arguments are practically the same,
resulting in the
energy release (\ref{13}), with the only change being the use of
surface integrals rather than contour integrals over the regularization
and void boundaries.
In order to use (\ref{13}) one has to know the displacements along the void
contour for the second state. For all but a few simple cases this is a
very complicated problem to approach analytically: at best, one can hope to
get the asymptotic behavior of the displacement and stress fields. So for
practical purposes we have to find an asymptotic analog of the above
expression that would allow the calculation of the energy release from the
asymptotic behavior of the displacement fields. To do this, it is
convenient to specify the stresses at infinity --- the regularization with
$O_U=\emptyset$ and $O_{\sigma}=\Gamma_b$. As we have already shown, this
does not restrict the applicability of the result. To get the asymptotic
expression for the energy release we first note that elastic states one and
two are each in equilibrium, so from Clapeyron's theorem\cite{betti}
\begin{eqnarray}
{1\over2}{\int\int}_A \sigma_{ij}^{(1)}e_{ij}^{(1)} dA &=&
{1\over2}\oint_{\Gamma_b}
\vec{F}_n\vec{U}_1^b d{\ell}_b +{1\over2}\oint_{\Gamma_h}\vec{F}_h
\vec{U}_1^h d{\ell}_h
\nonumber\\
\label{14}
{1\over2}{\int\int}_A \sigma_{ij}^{(2)}e_{ij}^{(2)} dA &=&
{1\over2}\oint_{\Gamma_b}\vec{F}_n\vec{U}_2^b d{\ell}_b .
\end{eqnarray}
This, combined with (\ref{4}), (\ref{5}) and (\ref{7}), gives
\begin{eqnarray}
E_{\rm release}^L&=&{1\over2}\oint_{\Gamma_b}\vec{F}_n\vec{U}_2^b
d{\ell}_b -{1\over2}\oint_{\Gamma_b}\vec{F}_n\vec{U}_1^b
d{\ell}_b\nonumber\\
&&+{1\over2}\oint_{\Gamma_h}\vec{F}_h\vec{U}_1^h d{\ell}_h .
\label{15}
\end{eqnarray}
Next, we break $\vec{U}_2^b$ into two pieces: displacements of
the second elastic state for the infinite medium along $\Gamma_b$,
$\vec{U}_f^b$, and the {\it boundary relaxation} displacements,
$\vec{U}_r^b$,
\begin{equation}
\vec{U}_2^b=\vec{U}_f^b+\vec{U}_r^b.
\label{15f}
\end{equation}
The {\it boundary relaxation} displacements come from relaxing
the stresses along $\Gamma_b$ for the infinite material to
comply with the thermodynamic limit prescription: the ``fixed
tension boundary condition''. For the energy release calculation
it is sufficient to relax the stresses to $O(1/L^3)$ --- the
stresses of order $O(1/L^3)$ generate the relaxation displacements
along $\Gamma_b$ of order $O(1/L^2)$ and thus do not contribute to the
energy release in the limit $L\to\infty$.
So, from (\ref{3}), (\ref{15}) and (\ref{15f})
\begin{eqnarray}
E_{\rm release}=\lim_{L\to\infty}\biggl\lbrace
&&{1\over2}\oint_{\Gamma_b}\vec{F}_n(\vec{U}_f^b
+\vec{U}_r^b) d{\ell}_b
-{1\over2}\oint_{\Gamma_b}\vec{F}_n\vec{U}_1^b d{\ell}_b\nonumber\\
&&+{1\over2}\oint_{\Gamma_h}\vec{F}_h\vec{U}_1^h d{\ell}_h
\biggr\rbrace .
\label{17}
\end{eqnarray}
This is the desired expression. One can see that (\ref{17}) is nothing but
the difference between the regularized elastic deformation energy of the
infinite medium with the void (the first integral) and the elastic energy
stored in the same medium before the formation of the void, not accounting for
the elastic energy of the void itself (the remaining two integrals).
Although the asymptotic expression for the energy release may look more
complicated than (\ref{13}) and actually requires solving two elastic
problems, its use is justified because it is difficult to find displacements
of the second state along the void contour.
\section{Energy release of ``slightly'' curvy cuts}
Armed with the knowledge that the energy release is independent of the shape
of the regularization boundary, we now turn to the calculation of the energy
release due to the opening of a ``slightly'' curvy cut $\Gamma_h$ in a two
dimensional isotropic linear elastic infinite media subject to a uniform
isotropic tension $T$ at infinity. We find an amazingly simple answer: the
energy release of a curvy cut coincides to cubic order with the energy
release of the straight cut with the same end points.
An elastic state is completely defined
once displacements $(u,v)$ are known everywhere. Rather than
considering these two functions, Muskhelishvili\cite{m} introduces two
complex functions $\phi(z)$ and $\psi(z)$ that in equilibrium
should be the functions of only one complex
variable $z$ (i.e.\ do not depend on $\overline{z}$).
Moreover in our case (a uniform isotropic tension at infinity)
$\phi(z)$ decomposes as
\begin{eqnarray}
\phi(z)&=&{1\over2}T z + \phi_0(z) \label{35}
\end{eqnarray}
The functions $\phi_0(z)$ and $\psi(z)$ are holomorphic
in the complex $z$ plane including infinity but excluding the cut
contour. This description associates the components
of stress $( \sigma_{xx},\sigma_{yy},\sigma_{xy})$ and
displacement $(u,v)$ to $(\phi,\psi)$ by the following relations
\begin{eqnarray}
\sigma_{xx}+\sigma_{yy}&=&2 (\phi'(z)+\overline{\phi'(z)})\label{36}\\
\sigma_{yy}-\sigma_{xx}+2i\sigma_{xy}&=&2(\overline{z}\phi''(z)+\psi'(z))
\nonumber\\
2\mu(u+iv)&=&\chi\phi(z)-z\overline{\phi'(z)}-\overline{\psi(z)}
\nonumber
\end{eqnarray}
(The detailed discussion of the change of ``variables'' $(u,v)\to(\phi,\psi)$
along with the derivation of (\ref{35})-(\ref{36}) can be found in \cite{m}.)
Using a circular regularization contour ($\Gamma_b$ is a circle of radius
$L$) and an expression analogous to (\ref{17}), Sih and Liebowitz\cite{sl}
explicitly computed stresses along the outer boundary $\Gamma_b$ and found
the energy release
\begin{eqnarray}
E_{\rm release}=-{{\pi T}\over{4\mu}}(1+\chi)\rm{Re}[y_1]
\label{60}
\end{eqnarray}
with $y_1$ being the residue of $\psi(z)$ at infinity (the $1/z$ coefficient
in the expansion about $z=\infty$).
Of course, to determine $y_1$ we still need to find the large-$z$ asymptotics
of the solution of the elasticity problem.
To illustrate the correspondence between the energy release of a curvy cut
and the straight one with the same end points, let's consider a rare
example where it is possible to find an exact analytical solution.
Suppose a material with a
``smile'' cut (an arc of a circle ABC) of total arc length
$\ell$ (Figure \ref{f1})
is subject to a uniform isotropic stretching $T$ at infinity. Expanding
the exact answer in \cite{m} about $z=\infty$, we find
\begin{equation}
\psi_{\rm ABC}(z)=-{{T \ell^2}\over {8
z}}{{8\sin^2\theta/2}
\over{\theta^2(3-\cos\theta)}}+O\biggl({{1}\over{z^2}}\biggr)
\label{61}
\end{equation}
which according to (\ref{60}) gives the energy release $E_{\rm ABC}$
\begin{equation}
E_{\rm ABC}={{\pi T^2 \ell^2}\over {32
\mu}}(1+\chi){{8\sin^2\theta/2}
\over{\theta^2(3-\cos\theta)}} .
\label{62}
\end{equation}
On the other hand, for a straight cut AC of length
$(\ell/\theta)\sin\theta$, the holomorphic function $\psi_{\rm AC}(z)$
has asymptotic behavior \cite{m}
\begin{equation}
\psi_{\rm AC}(z)=-{{T \ell^2}\over {8 z}}{{\sin^2\theta}
\over{\theta^2}}+O\biggl({{1}\over{z^2}}\biggr)
\label{63}
\end{equation}
resulting in the energy release $E_{\rm AC}$
\begin{equation}
E_{\rm AC}={{\pi T^2 \ell^2}\over {32
\mu}}(1+\chi){{\sin^2\theta}\over{\theta^2}} .
\label{64}
\end{equation}
For small $\theta$ we find from (\ref{62}) and (\ref{64}) the advertised
result: the energy release of ABC coincides with that of AC
to cubic order in $\theta$, but not to quartic order!
\begin{eqnarray}
E_{\rm ABC}&=&{{\pi T^2 \ell^2}\over {32
\mu}}(1+\chi)\biggl(1-{{\theta^2}\over3}+{{77\theta^4}\over{720}}
+O(\theta^6)\biggr)
\label{65}\\
E_{\rm AC}&=&{{\pi T^2 \ell^2}\over {32
\mu}}(1+\chi)\biggl(1-{{\theta^2}\over3}+{{2\theta^4}\over{45}}
+O(\theta^6)\biggr) . \nonumber
\end{eqnarray}
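(A quick numerical illustration: the bracketed series in (\ref{65}) differ at
leading order by $(77/720-2/45)\,\theta^{4}=\theta^{4}/16$, which for
$\theta=\pi/6$ is a relative difference of only about half a percent; the
difference becomes appreciable only for strongly curved cuts, e.g.\ comparing
(\ref{62}) and (\ref{64}) at $\theta=\pi/2$ gives exactly
$E_{\rm ABC}/E_{\rm AC}=4/3$.)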
\begin{figure}
\centerline{
\psfig{figure=figure2.ps,width=3truein}}
{\caption{The ``smile''-like cut ABC and the straight
cut AC result in the same energy release to cubic order
in $\theta$.}
\label{f1}}
\end{figure}
We proceed now with the general proof. First, an arbitrary cut is
approximated by a finite number of line segments, parameterized by the kink
angles $\alpha_i$ --- the angles between consecutive segments. The exact shape
of the cut is then restored as the length of each link goes to zero (as
their number goes to infinity). The energy release is evaluated to cubic
order in the kink angles, e.g.\ for the $n$-kink regularization, the energy
release $E_n(\lbrace\alpha_i\rbrace)$ for a curvy cut with a fixed
separation $\ell_p$ between the endpoints is approximated as
\begin{eqnarray}
E_n(\lbrace\alpha_i\rbrace) &=& E^{(0)} + \sum_{i=1}^n E_i^{(1)}\alpha_i
+ \sum_{i=1}^n\sum_{j=1}^n E^{(2)}_{ij} \alpha_i\alpha_j\nonumber\\
&&+ \sum_{i=1}^n\sum_{j=1}^n \sum_{m=1}^n E^{(3)}_{ijm} \alpha_i\alpha_j
\alpha_m+O(\alpha_i^4)
\label{66}
\end{eqnarray}
where $E^{(0)}$ is the energy release for a straight cut of
length $\ell_p$ and the coefficients $E_i^{(1)}$, $E^{(2)}_{ij}$ and
$E^{(3)}_{ijm}$ depend only on the positions of the kinks along the cut. We
claim that all coefficients up to cubic order are zero, and thus the energy
of a curvy cut and the straight one with the same endpoints can differ only
at $O(\alpha_{i}^4)$.
That $E_i^{(1)}$ and $E_{ijm}^{(3)}$ (in fact, all terms odd in the kink
angles) are zero follows from a symmetry argument: cuts (having the same
number of segments with the corresponding segments being of the same length)
with kink angles $\lbrace\alpha_i\rbrace$ and $\lbrace -\alpha_i\rbrace$
respectively, are mirror images of each other with respect to the first
link. The boundary condition for our problem (a uniform tension at infinity)
is reflection invariant, so
\begin{eqnarray}
E_n(\lbrace\alpha_i\rbrace)=E_n(\lbrace -\alpha_i\rbrace)
\label{67}
\end{eqnarray}
which requires that all energy release terms odd in the kink angles vanish.
To calculate $E_{ij}^{(2)}$ for a given pair of indices we can set all kink
angles to zero except for $\alpha_{i}$ and $\alpha_{j}$, reducing the
$n$-kink problem to a two-kink one. From now on we will consider only the
two-kink problem to quadratic order in the kink angles.
\begin{figure}
\centerline{
\psfig{figure=figure3.ps,width=3truein}}
{\caption{The two-kink cut ABCD can be considered as a
{\it deformation} of a straight cut AD.}
\label{f2}}
\end{figure}
We choose the coordinate system $XY$ in the complex
$z$ plane in such a way that the ends of the two-kink cut are on the
$X$ axis, symmetric with respect to the $Y$ axis (Figure \ref{f2}).
Assuming a uniform isotropic tension $T$ at infinity we rewrite
(\ref{60}), explicitly indicating the dependence of the energy release on
the kink angles
\begin{eqnarray}
E_2(\alpha_1,\alpha_2)=-{{\pi T}\over{4\mu}}(1+\chi)\rm{Re}
[y_1(\alpha_1,\alpha_2)]
\label{68}
\end{eqnarray}
where $y_1(\alpha_1,\alpha_2)$ is the $1/z$ coefficient in the expansion of the
function $\psi(z)$ at infinity. As discussed earlier in the section,
$\psi(z)$ is a holomorphic function in the complex $z$ plane including
infinity (the extended complex plane) but excluding the two-kink cut. The
other function $\phi(z)$ that is necessary for the specification of the
equilibrium elastic state satisfies (\ref{35}) with $\phi_0(z)$ holomorphic
in the same region as $\psi(z)$. The analytical functions $\phi(z)$ and
$\psi(z)$ must provide a stress free cut boundary, which following\cite{m}
can be expressed as
\begin{equation}
i f(\phi,\psi)=\bigg[\phi(z)+z\overline{\phi'(z)}+\overline{\psi(z)}
\bigg]\Bigg|_W^X=0
\label{70}
\end{equation}
where $f=F_x+iF_y$ is the complex analog of the force acting on the
portion of the cut boundary between points $W$ and $X$.
It is important to note that any two pairs of functions
$(\phi_0^1(z),\psi^1(z))$ and $(\phi_0^2(z),\psi^2(z))$
that are holomorphic in the
extended $z$ plane excluding the same curvy cut,
and which provide the stress free cut boundaries to
$O(\alpha^3)$ (\ref{70}),
can differ only by $O(\alpha^3)$ everywhere:
\begin{eqnarray}
\delta\phi(z)&=&\phi_0^1(z)-\phi_0^2(z)=O(\alpha^3)\label{71}\\
\delta\psi(z)&=&\psi^1(z)-\psi^2(z)=O(\alpha^3) .\nonumber
\end{eqnarray}
This follows explicitly from Cauchy's theorem, but also follows from
the elastic theory.
Each pair $(\phi_0^1(z)+T z/2,\psi^1(z))$ or
$(\phi_0^2(z)+T z/2, \psi^2(z))$ defines the equilibrium elastic state
with stresses of order $O(\alpha^3)$
along the cut boundary and uniform isotropic stretching $T$ at
infinity. So, $(\delta\phi(z),\delta\psi(z))$ corresponds to the
equilibrium state with the specified stresses of order
$O(\alpha^3)$ along the cut boundary and zero tension at infinity.
Thus (\ref{71}) follows because the response to this force within
linear elastic theory must be linear.
The above argument guarantees that once we find
$\phi_0(z)$ and $\psi(z)$ that satisfy
the discussed constraints to $O(\alpha^3)$, we can use them to calculate
the energy release of the curvy cut to quadratic order.
Let functions $\phi^s(z)$ and $\psi^s(z)$ define the
equilibrium elastic state of a material with a straight cut AD
subject to a uniform tension $T$ at infinity. $\phi_0^s(z)=\phi^s(z)-T z/2$
and $\psi^s(z)$ should then be holomorphic in the extended
complex $z$ plane excluding the straight cut and should provide
stress free boundaries along AD.
Muskhelishvili finds\cite{m}
\begin{eqnarray}
{{d\phi^s(z)}\over{dz}}&=&{T\over2} {z\over{\sqrt{z^2-{\ell_p}^2/4}}}
\label{72}\\
{{d\psi^s(z)}\over{dz}}&=&{T\over8}{{z{\ell_p}^2}\over{
(z^2-{\ell_p}^2/4)\sqrt{z^2-{\ell_p}^2/4}}} .
\nonumber
\end{eqnarray}
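As a check, which we record here because the residue is used below, expanding
(\ref{72}) about $z=\infty$ gives
\begin{displaymath}
{{d\psi^s(z)}\over{dz}}={{T{\ell_p}^2}\over{8z^{2}}}
+O\biggl({1\over{z^{4}}}\biggr) ,\qquad
\psi^s(z)=-{{T{\ell_p}^2}\over{8z}}+O\biggl({1\over{z^{3}}}\biggr) ,
\end{displaymath}
so ${\rm Re}[y_1]=-T{\ell_p}^2/8$ and (\ref{60}) reproduces the straight cut
energy release (\ref{st}) with $\ell\to\ell_p$ (this is also the $\theta\to 0$
limit of (\ref{63})).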
(To obtain $\phi^s(z)$ and $\psi^s(z)$ we integrate (\ref{72}); the
arbitrariness in the integration constants reflects the ambiguity in the
displacements up to a rigid motion of the material as a whole.) Note that
$\phi_0^s(z)$ and $\psi^s(z)$ can be ``made'' holomorphic everywhere in the
complex $z$ plane excluding the two-kink cut ABCD, and thus can serve as a
good starting point for the construction of $\phi_0(z)$ and $\psi(z)$. The
process of an analytical continuation is demonstrated by Figure \ref{f3}.
$\phi_0^s(z)$ (or equivalently $\psi^s(z)$) is holomorphic in the $z$ plane
excluding the straight cut AD (4a). Removing the region ABCDA (4b), we make
it holomorphic in the complex plane excluding ABCDA. Now we analytically
continue $\phi_0^s(z)$ from the link AD (4c) into the removed region (the
continuation is possible explicitly using (\ref{72})). The obtained
function becomes holomorphic everywhere in the complex $z$ plane excluding
the two-kink cut ABCD (4d), moreover the original function and the one
obtained through the analytical continuation coincide outside ABCDA.
\begin{figure}
\centerline{
\psfig{figure=figure4.ps,width=3truein}}
{\caption{Functions $\phi_0^s(z)$ and $\psi^s(z)$ holomorphic
in the complex $z$ plane excluding the straight cut AD (a),
can be ``made'' holomorphic in the complex $z$ plane
excluding the two-kink cut ABCD (d).}
\label{f3}}
\end{figure}
The idea of constructing the holomorphic functions $\phi_0(z)$ and $\psi(z)$
is simple: we start with the functions $\phi_0^s(z)$ and $\psi^s(z)$ and
calculate to quadratic order the stresses along the two-kink cut boundary
ABCD under the analytical continuation as described by Figure \ref{f3}. The
stresses along the curvy cut boundary (Figure \ref{f5}) are then compensated
up to quadratic order in the kink angles by introducing counter-forces along
the original (straight) cut, leading to corrected functions
$\delta\phi^c(z)$ and $\delta\psi^c(z)$, where
$\phi(z)=\phi^s(z)+\delta\phi^c(z)+O(\alpha^3)$ and
$\psi(z)=\psi^s(z)+\delta\psi^c(z)+O(\alpha^3)$. For the calculation of the
energy release (\ref{68}) we need the real part of the residue of $\psi(z)$
at infinity: we will show that the residue of $\delta\psi^c(z)$ at infinity
is zero and thus the residues of $\psi(z)$ and $\psi^s(z)$ at $z=\infty$ are
the same --- which means that the energy release for the curvy cut ABCD is
the same as that of for the straight cut AD.
\begin{figure}
\centerline{
\psfig{figure=figure5.ps,width=3truein}}
{\caption{The stress free boundary of a two-kink
ABCD cut (b) can be mimicked by applying the tangential
force to the previously unstressed (a) straight cut boundary AD.}
\label{f5}}
\end{figure}
Let's assume that points $W$ and $X$ are on the upper boundary of the link
AB. From Figure \ref{f2}, $z=t+i\beta(t+\ell_p/2)+O(\alpha^3)$, where $t\in
{\rm AB'}$ and $\beta=(1-k_1)\alpha_1+(1-k_2)\alpha_2+O(\alpha^3)$. Using
(\ref{70}) we find
\begin{eqnarray}
&i& f(\phi_0^s,\psi^s)=\bigg[{\phi^s(t)}^++t\overline{{{\phi^s(t)}'}^+}+
\overline{{\psi^s(t)}^+}\bigg]\Bigg|_W^X
\nonumber\\
&&+i\beta(t+\ell_p/2)\biggl({{\phi^s(t)}'}^++\overline{{{\phi^s(t)}'}^+}
\nonumber\\
&&\qquad -t\overline{{{\phi^s(t)}''}^+}-\overline{{{\psi^s(t)}'}^+}\biggr)
\bigg|_W^X\nonumber
\\&&-{{\beta^2(t+\ell_p/2)^2}\over2}\biggl({{\phi^s(t)}''}^+
+t\overline{{{\phi^s(t)}'''}^+}-2\overline{{{\phi^s(t)}''}^+}
\nonumber\\&&\qquad +\overline{{{\psi^s(t)}''}^+}\biggr)\bigg|_W^X
+O(\alpha^3)\nonumber\\
&=&2\beta^2(t+\ell_p/2)^2 {{\phi^s(t)}''}^+\bigg|_W^X
+O(\alpha^3) ,\label{73}
\end{eqnarray}
where $t$ runs along ${\rm AB'}$; the $+$ superscript
means that the values of $\phi^s(t)$ and $\psi^s(t)$ should
be taken at the upper boundary of the straight cut.
To obtain the second expression in (\ref{73})
one can plug in the explicit form (\ref{72}),
or, more elegantly, note that for $t\in{\rm AB'}$,
$\phi^s(t)$ is pure imaginary and ${\psi^s}'(z)=-z\,{\phi^s}''(z)$.
Either way, it follows that the functions $\delta\phi^c(z)$
and $\delta\psi^c(z)$ satisfy
\begin{eqnarray}
i\delta f&=&\bigg[{\delta\phi^c(t)}^++t\overline{{{\delta\phi^c(t)}'}^+}+
\overline{{\delta\psi^c(t)}^+}\bigg]\Bigg|_W^X\nonumber\\
&=&-2\beta^2(t+\ell_p/2)^2 {{\phi^s(t)}''}^+\bigg|_W^X .\label{74}
\end{eqnarray}
This is the force we need to add along the straight cut just below segment
AB to cancel the stress along the curvy cut. Similar expressions can be
found for the forces needed below BC and CD. To find $\delta\phi^c(z)$ and
$\delta\psi^c(z)$ we have to solve the elasticity problem for the material
with the straight cut AD, subject to these applied forces $i\delta f$ along
the cut boundary. Fortunately, this problem allows a closed analytical
solution\cite{m}. Expanding the exact expression for $\delta\psi^c(z)$
in\cite{m} we find
\begin{equation}
\delta\psi^c(z)={{\ell_p}\over{4 \pi iz}}\oint_{\gamma}
{\rm Re}\biggl[i\delta f(x(\sigma))\biggr]d\sigma
+O\biggl({1\over {z^2}}\biggr)\label{75}
\end{equation}
where the integration is along the unit circle $\gamma$ in the complex
plane, and $i\delta f$ is a function of a variable point
$x(\sigma)=\ell_p(\sigma+1/\sigma)/4$ along the straight cut boundary AD.
Notice from (\ref{74}) that $i \delta f$ is pure imaginary evaluated on the
upper boundary of the link AB: in this case $|t|<\ell_p/2$, and so the
argument of the square root in (\ref{72}) is negative, resulting in pure
imaginary ${\phi^s(t)}'$, and thus ${\phi^s(t)}''$ is also pure imaginary. In
fact, as can be checked explicitly, ${\rm Re}\biggl[i\delta f\biggr]=0$
for arbitrary $W$ and $X$ along the cut boundary. So we conclude from
(\ref{75}) that the residue of $\delta\psi^c(z)$ at infinity is zero and thus
the energy release for the curvy cut ABCD is the same as for the straight
cut AD. The underlying physical reason for this seemingly remarkable
coincidence is that to imitate the stress free curvy cut to quadratic order
in the kink angles we have to apply only tangential forces along the straight
cut (pure imaginary $i\delta f$ means $F_y=0$), which do no work because the
faces of a straight cut under a uniform isotropic tension at infinity open up
without any tangential displacement\cite{m}.
We find that the energy release $E(\ell_p)$ of the curvy cut with
projected distance
$\ell_p$ between the endpoints is the same to quadratic order in the kink
angles as the energy release of the straight cut of length $\ell_p$.
The latter is given by the second formula in (\ref{65}) with
$\theta=0$ (it also coincides with Griffith's result (\ref{st}))
\begin{equation}
E(\ell_p)={{\pi T^2\ell_p^2}\over{32\mu}}(1+\chi) .
\label{c1}
\end{equation}
The natural variables to describe the curvy cut are its total length $\ell$
and its curvature $k(x),\ x\in [0,\ell]$. In what follows we express
(\ref{c1}) in these variables and find the normal modes of the curvature
that diagonalize the energy release.
For the two-kink cut ABCD (Figure \ref{f2}) of total length $\ell$
one can find
\begin{eqnarray}
\ell_p&=&\ell\biggl(1-{{x_1(\ell-x_1)}\over{2\ell^2}}\alpha_1^2
-{{x_1(\ell-x_2)}\over{\ell^2}}\alpha_1\alpha_2\nonumber\\
&&-{{x_2(\ell-x_2)}\over{2\ell^2}}\alpha_2^2\biggr)+O(\alpha^3)
\label{c2}
\end{eqnarray}
where $x_1$ and $x_2$ parameterize the kink positions:
the length of the link AB is assumed to be $x_1$ and the
length of the segment ABC equals $x_2$.
Similarly, for the $n$-kink cut of total length
$\ell$ with the kink angles $\lbrace\alpha_i\rbrace$ parameterized
by their distance $\lbrace x_i\rbrace$ from the cut end
\begin{eqnarray}
\ell_p&=&\ell\biggl(1-{1\over 2}\sum_{i,j=1}^n\alpha_i\alpha_j
\biggl[-{{x_ix_j}\over{\ell^2}}+{{{\rm min}(x_i,x_j)}\over\ell}
\biggr]\biggr)\nonumber\\
&&+O(\alpha^3) .\label{c3}
\end{eqnarray}
Expressing the kink angles through the local curvature of the curve,
$\alpha_i=k(x_i)\Delta x_i/\lambda$, we find the continuous
limit of (\ref{c3})
\begin{eqnarray}
\ell_p&=&\ell\biggl(1-{1\over 2}\int_{0}^{\ell}\int_{0}^{\ell}
{{dxdy}\over{\lambda^2}} k(x)M(x,y)k(y)\biggr)\nonumber\\
&&+O({k(x)}^3) ,\label{c4}
\end{eqnarray}
with
\begin{equation}
M(x,y)=-{{x y}\over{\ell^2}}+{{{\rm min}(x,y)}\over\ell}
\label{c5}
\end{equation}
and the scale $\lambda$ is introduced to make the curvature dimensionless
($\lambda$ can be associated with the ultraviolet cutoff of the theory ---
roughly the interatomic distance). Substituting (\ref{c5}) into (\ref{c1})
we find the energy release $E(\ell,k(x))=E(\ell_p)$ of the curvy cut in its
intrinsic variables
\begin{eqnarray}
E(\ell&,&k(x))=\nonumber\\
&&{{\pi T^2\ell^2}\over{32\mu}}(1+\chi)
\biggl(1-\int_{0}^{\ell}\int_{0}^{\ell}{{dxdy}\over{\lambda^2}}
k(x)M(x,y)k(y)\biggr)\nonumber\\
&&+O({k(x)}^3) .\label{c6}
\end{eqnarray}
To find the normal modes of the curvature we have to find the eigenvalues
and eigenvectors of the operator $M(x,y)$. If $k_n(x)$ is
an eigenvector of $M(x,y)$ with eigenvalue $\lambda_n$, then
\begin{equation}
\lambda_n k_n(x)=\int_{0}^{\ell}{{dy}\over\lambda}M(x,y)k_n(y) .
\label{c7}
\end{equation}
From (\ref{c5}), $M(0,y)=0$ and $M(\ell,y)=0$ for arbitrary
$y\in[0,\ell]$, so from (\ref{c7}) the eigenvectors of
$M(x,y)$ must be zero at $x=0$ and $x=\ell$: $k_n(0)=k_n(\ell)=0$.
An arbitrary function $k_n(x)$ with this property is given by the
Fourier series
\begin{equation}
k_n(x)=\sum_{m=1}^{\infty}c_m\sqrt{{{2\lambda}\over{\ell}}}\sin
{{\pi m x}\over{\ell}}
\label{c8}
\end{equation}
where the overall constant $\sqrt{2\lambda/\ell}$ is introduced to
normalize the Fourier modes with the integration measure $dx/\lambda$
over $x\in[0,\ell]$. One can explicitly check from (\ref{c7}) that each
Fourier mode $\sqrt{2\lambda/\ell}\sin(\pi m x/\ell)$ is in fact an
eigenvector of $M(x,y)$ with the eigenvalue $\lambda_m
=\ell/(\pi^2 m^2\lambda)$. In terms of the amplitudes
of the normal modes $\lbrace c_n\rbrace$, (\ref{c6}) is rewritten as
\begin{equation}
E(\ell,\lbrace c_n\rbrace)={{\pi T^2\ell^2}\over{32\mu}}(1+\chi)
\biggl(1-\sum_{n=1}^{\infty}{{\ell}\over{\pi^2 n^2\lambda}}
c_n^2\biggr)+O(c_n^3) .\label{c9}
\end{equation}
(\ref{c9}) is the main result of the section: we've calculated the
energy release of an arbitrary curvy cut in its intrinsic variables ---
the total length $\ell$ and the curvature $k(x),\ x\in[0,\ell]$ ---
to quadratic order in $k(x)$ and found the normal modes of the curvature
that diagonalize the energy release.
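The eigenvalue check quoted after (\ref{c8}) is a one-line computation, which
we record for completeness: $\ell M(x,y)=\min(x,y)-xy/\ell$ is the Green's
function of $-d^{2}/dy^{2}$ on $[0,\ell]$ with Dirichlet boundary conditions,
and the Fourier modes satisfy $-k_m''=(\pi m/\ell)^{2}k_m$, so
\begin{displaymath}
\int_{0}^{\ell}{{dy}\over{\lambda}}\,M(x,y)\,
\sqrt{{{2\lambda}\over{\ell}}}\sin{{\pi m y}\over{\ell}}
={1\over{\lambda\ell}}\biggl({{\ell}\over{\pi m}}\biggr)^{2}
\sqrt{{{2\lambda}\over{\ell}}}\sin{{\pi m x}\over{\ell}}
={{\ell}\over{\pi^{2}m^{2}\lambda}}\,
\sqrt{{{2\lambda}\over{\ell}}}\sin{{\pi m x}\over{\ell}} .
\end{displaymath}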
In conclusion we mention that the measure in the kink angle space is
Cartesian --- $\prod_i d\alpha_i$ --- (and thus the functional measure $D
k(x)\equiv D \alpha(x)$ is Cartesian), so the measure in the vector space of
the amplitudes of the normal modes $\lbrace c_n\rbrace$ is also Cartesian
--- $\prod_{n=1}^{\infty} dc_n$, because the Fourier transformation $\lbrace
k(x)\rbrace\to\lbrace c_n\rbrace$ is orthonormal. This will be important in
section V, where we will be integrating over crack shapes.
\section{Surface phonons}
In the previous sections we have extensively discussed the calculation of
the energy release due to the equilibrium opening of a cut in an elastic
material. Since our goal is to deal with cracks as thermal fluctuations, we
must also deal with the more traditional elastic fluctuations --- phonons,
or sound. We find here that the bulk fluctuations decouple from the new
surface phonon modes introduced by the cut. We discuss the quadratic
fluctuations for linear elastic material with a straight cut of length
$\ell$ subject to a uniform isotropic tension $T$ at infinity; more
specifically, we calculate the energy release for the material with an
arbitrary opening of the straight cut and we find collective coordinates
(normal modes) that diagonalize the change in the energy.
An elastic state of the material can be defined through the specification of
its displacements $\vec{U}=(u,v)$ at every point $(x,y)$. For the material
with a cut, the fields $u(x,y)$ and $v(x,y)$ can in principle have a
discontinuity along the cut: assuming that the cut is an interval
$(x,y)=([-\ell/2,\ell/2],0)$,
\begin{equation}
2 g_x(x)=u(x,0+)-u(x,0-), \mbox{\hspace{0.1in} }
{\rm for} \mbox{\hspace{0.1in} }x\in[-\ell/2,\ell/2]
\label{78}
\end{equation}
and
\begin{equation}
2 g_y(x)=v(x,0+)-v(x,0-), \mbox{\hspace{0.1in} }
{\rm for} \mbox{\hspace{0.1in} }x\in[-\ell/2,\ell/2]
\label{79}
\end{equation}
may be nonzero. It is clear that an arbitrary state $\vec{U}$ can be
decomposed into the superposition of two states $\vec{U}_g=(u_g,v_g)$ and
$\vec{U}_c=(u_c,v_c)$, where $\vec {U_g}$ is the equilibrium state for given
displacement discontinuity $(g_x,g_y)$ at the cut boundary
(\ref{78}-\ref{79}) and tension $T$ at infinity that maximizes the energy
release, and $\vec{U}_c$ given by $(u_c,v_c)=(u-u_g,v-v_g)$ is a continuous
displacement field everywhere. Recall that the energy release is the sum of
the work done by the external forces and the work done by the internal
forces (\ref{4}). We define the energy release $E$ for the elastic state
$\vec{U}$ with respect to the equilibrium state of the material
$\vec{U}_0=(u_0,v_0)$ without the cut under the same loading at infinity, as
a limit of this difference for finite size samples with boundary $\Gamma_b$
and enclosed area $A$. We find, following (\ref{4}), (\ref{5}) and (\ref{6}),
\begin{equation}
E=\oint_{\Gamma_b} T \vec{n}(\vec{U}-\vec{U}_0) d\ell
+{1\over 2}\int\int_A (\sigma_{ij}^0 e_{ij}^0 -\sigma_{ij} e_{ij}) d A
\label{81}
\end{equation}
where $\sigma_{ij}^0$ and $e_{ij}^0$ are the stresses and strains of the
equilibrium elastic state of the uncracked material $\vec{U}_0$;
$\sigma_{ij}$ and $e_{ij}$ are the stresses and strains of the elastic state
of material with the straight cut and displacement field $\vec{U}$; and
$\vec{n}$ is a unit normal pointing outwards from the regularization
boundary $\Gamma_b$. This argument is similar to that in the second section,
but the elastic state $\vec{U}$ is not an equilibrium one and so the
arguments there are not directly applicable. We rewrite the energy release
(\ref{81}) making use of the decomposition $\vec{U}=\vec{U}_g+\vec{U}_c$ to
get
\begin{eqnarray}
E&=&\oint_{\Gamma_b} T \vec{n}(\vec{U}_g-\vec{U}_0) d\ell
+{1\over 2}\int\int_A (\sigma_{ij}^0 e_{ij}^0 -\sigma_{ij}^g e_{ij}^g) d A
\nonumber\\
&&-{1\over 2}\int\int_A \sigma_{ij}^c e_{ij}^c d A\nonumber\\
&&+\oint_{\Gamma_b} T \vec{n}\vec{U}_c d\ell
&&-{1\over 2}\int\int_A(\sigma_{ij}^g e_{ij}^c+\sigma_{ij}^c e_{ij}^g) dA .
\label{82}
\end{eqnarray}
The first two integrals in (\ref{82}) give the energy release for the
equilibrium elastic state with the specified cut opening $(g_x,g_y)$ and
tension $T$ at infinity. According to our decomposition this energy release
is maximum for given $g_x(x)$ and $g_y(x)$ and thus cannot increase
linearly by tuning $\vec{U}_c$. The latter is true only if the last two
integrals on the RHS of (\ref{82}), linear in $\vec{U}_c$, cancel each
other. (This can be verified explicitly by integrating by parts the last
integral on the RHS of (\ref{82}) and using the fact that $\vec{U}_g$ is an
equilibrium state.) Thus
\begin{eqnarray}
E&=&\oint_{\Gamma_b} T \vec{n}(\vec{U}_g-\vec{U}_0) d\ell
+{1\over 2}\int\int_A (\sigma_{ij}^0 e_{ij}^0 -\sigma_{ij}^g e_{ij}^g) d A
\nonumber\\
&&-{1\over 2}\int\int_A \sigma_{ij}^c e_{ij}^c d A :
\label{83}
\end{eqnarray}
the energy factors, and the last term, representing the
continuous degrees of freedom, does not ``feel''
the presence of the cut and thus will have exactly the
same spectrum as that of the uncracked material.
Although the elastic state $\vec{U}_g$ is an equilibrium one, the cut
boundary is in general stressed, and so we still have to modify the result
of the second section for the energy release (\ref{17}). From the first
equation in (\ref{14}), the elastic energy of the uncracked material is
given by
\begin{equation}
{1\over 2}\int\int_A \sigma_{ij}^0 e_{ij}^0 d A=
{1\over 2}\oint_{\Gamma_b}T\vec{n}\vec{U}_0 d\ell .
\label{sp1}
\end{equation}
The elastic energy of the material with the cut (the second equation in
(\ref{14})) is modified to incorporate the stressed cut boundary
\begin{equation}
{1\over 2}\int\int_A \sigma_{ij}^g e_{ij}^g d A=
{1\over 2}\oint_{\Gamma_b}T\vec{n}\vec{U}_g d\ell+{1\over 2}\oint_
{\Gamma_h}\vec{F}_h\vec{U}_g d\ell .
\label{sp2}
\end{equation}
The second integral in (\ref{sp2}) is over the cut boundary $\Gamma_h$ and
$\vec{F_h}$ is the force we have to apply to the cut boundary to ensure its
displacements satisfy (\ref{78}-\ref{79}). With this change, following
(\ref{83}-\ref{sp2}) we find that the energy release as given by (\ref{17})
decreases by $\delta E$
\begin{equation}
\delta E={1\over 2}\oint_{\Gamma_h}\vec{F_h}\vec{U}_g d\ell .
\label{84}
\end{equation}
In the spirit of the third section, the equilibrium elastic
state $\vec{U}_g$ can be described by the analytical functions
$\phi(z)$ and $\psi(z)$; $\phi_0(z)=\phi(z)-T z/2$ and $\psi(z)$
are holomorphic in the extended
complex $z$ plane excluding the straight cut and are constrained
to provide displacement discontinuity $(g_x,g_y)$.
The energy release $E_g$ is then smaller than the one given by (\ref{60})
by $\delta E$
\begin{eqnarray}
E_g&=&-{{\pi T}\over{4\mu}}(1+\chi){\rm Re}[y_1^g]-\delta E\label{sh3}
\end{eqnarray}
where $y_1^g$ is the $1/z$ coefficient in the expansion of $\psi(z)$
at infinity.
To determine the functions $\phi_0(z)$ and $\psi(z)$ we conformally
map the complex $z$ plane with the cut to the outside of the
unit circle $\gamma$ (Figure \ref{f4})
\begin{equation}
z=\omega(\zeta)={{\ell}\over 4}\biggl(\zeta+{1\over{\zeta}}\biggr)
\label{85}
\end{equation}
so that the unit circle in the $\zeta$ plane is mapped to the straight
cut boundary $\Gamma_h$ in the original plane, $\Gamma_h=\omega(\gamma)$.
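(Explicitly, on the unit circle $\sigma=e^{i\alpha}$ the map (\ref{85}) gives
$\omega(e^{i\alpha})=(\ell/2)\cos\alpha\in[-\ell/2,\ell/2]$, traversing the cut
twice: the upper half of $\gamma$, $\alpha\in(0,\pi)$, parameterizes one face
of the cut and the lower half the other, which is why $\sigma$ and
$1/\sigma=\overline{\sigma}$ below represent the same point of the cut on
opposite faces.)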
\begin{figure}
\centerline{
\psfig{figure=figure6.ps,width=3truein}}
{\caption{The determination of the holomorphic functions
describing the equilibrium elastic state of the material with
a straight cut is simplified in the conformal plane $\zeta$,
where the unit circle $\gamma$ corresponds to cut boundary $\Gamma_h$
in the original $z$ plane.}
\label{f4}}
\end{figure}
The elasticity problem is reformulated in the conformal plane as follows:
we have to find analytical functions $\phi_g(\zeta)=\phi(\omega(\zeta))$ and
$\psi_g(\zeta)=\psi(\omega(\zeta))$, such that
${\phi_g}_0(\zeta)=\phi_g(\zeta)-\ell\zeta/4$ and $\psi_g(\zeta)$ are
holomorphic in the extended complex $\zeta$ plane outside the unit circle
and give the maximum energy release with displacement discontinuity
$(g_x,g_y)$ at the cut boundary. We introduce
\begin{equation}
g(\sigma)=u_g+iv_g\bigg|_{\gamma}
\label{86}
\end{equation}
where $\sigma=\exp(i\alpha)$, $\alpha\in[0,2\pi)$, is a parameterization
of the unit circle $\gamma$.
Since $\sigma$ and $1/\sigma$ represent opposite points across the cut,
(\ref{78}-\ref{79}) require
\begin{equation}
g(\sigma)-g(1/\sigma)=2[g_x(\omega(\sigma))+ig_y(\omega(\sigma))],
\ \alpha\in[0,\pi) .
\label{sp4}
\end{equation}
It is important to note that the equilibrium elastic state that maximizes
the energy release for given displacement discontinuity $(g_x,g_y)$
is unique; on the other hand (\ref{sp4}) determines only the
asymmetric modes $g^{\rm asym}(\sigma)$ of the crack opening
displacement for this state
\begin{eqnarray}
2 g^{\rm asym}(\sigma)&=&[u_g(\sigma)+iv_g(\sigma)]-
[u_g(1/\sigma)+iv_g(1/\sigma)]\label{sp5}\\
&=&2[g_x(\omega(\sigma))+ig_y(\omega(\sigma))] .\nonumber
\end{eqnarray}
The symmetric modes $g^{\rm sym}(\sigma)$
\begin{equation}
2 g^{\rm sym}(\sigma)=[u_g(\sigma)+iv_g(\sigma)]+
[u_g(1/\sigma)+iv_g(1/\sigma)]\label{sp6}
\end{equation}
left unconstrained by (\ref{sp4}), should then be relaxed
to provide the maximum energy release for given $g^{\rm asym}(\sigma)$.
Thus, to calculate the energy release $E_g$ of the elastic state
$\vec{U}_g$ we first find the energy release
$E(g)=E(g^{\rm asym}+g^{\rm sym})$ for the equilibrium state with an
arbitrary displacement along the cut boundary $g(\sigma)$ and then
maximize the result with respect to $g^{\rm sym}$
\begin{equation}
E_g=\max\limits_{g^{\rm sym}} E(g^{\rm asym}+g^{\rm sym}) .
\label{sp7}
\end{equation}
In what follows we will use $\phi_{\zeta}(\zeta)$ and $\psi_{\zeta}(\zeta)$
to describe the equilibrium elastic state with an arbitrary
displacement $g(\sigma)$ along the cut boundary and
tension $T$ at infinity.
(The energy release for arbitrary $g(\sigma)$ is still given by
(\ref{84}-\ref{sh3}).)
Making the change of variables $z\to \omega(\zeta)$
in (\ref{36}) and putting $\zeta=\sigma$ we obtain a constraint on
$\phi_{\zeta}(\zeta)$ and $\psi_{\zeta}(\zeta)$ that guarantees
the displacements along $\gamma$ to be $g(\sigma)$
\begin{equation}
\chi\phi_{\zeta}(\sigma)-{{\omega( \sigma)}\over{\overline{\omega'(\sigma)}}}
\overline{\phi_{\zeta}'(\sigma)}-\overline{\psi_{\zeta}(\sigma)}
=2\mu g(\sigma) .\label{88}
\end{equation}
Once the solution $(\phi_{\zeta},\psi_{\zeta})$ of the elasticity problem is
found, we can compute the correction (\ref{84}) to the energy release.
Introducing the polar coordinates $(\rho,\theta)$ (Figure \ref{f4}) in the
complex $\zeta$ plane, $\vec{F_h}=F_{\rho}\vec{\rho}+F_{\theta}\vec{\theta}$
and $\vec{U}_g=v_{\rho}
\vec{\rho}+v_{\theta}\vec{\theta}$, and using
$d\ell=|\omega'(\sigma)||d\sigma|$ we find
\begin{eqnarray}
\delta E&=&{1\over 2}\oint_{\gamma}\biggl(F_{\rho}v_{\rho}
+F_{\theta}v_{\theta}\biggr)|\omega'(\sigma)||d\sigma|
\label{89}\\
&=&{1\over 2}\oint_{\gamma}\biggl(\sigma_{\rho\rho}v_{\rho}
+\sigma_{\rho\theta}v_{\theta}\biggr)|\omega'(\sigma)||d\sigma|
\nonumber
\end{eqnarray}
where in the second equality we express the force
through the stress tensor components: $F_{\rho}=\sigma_{\rho\rho}$
and $F_{\theta}=\sigma_{\rho\theta}$.
The stress tensor components $\sigma_{\rho\rho}$ and
$\sigma_{\rho\theta}$ are given in terms of the $\phi_{\zeta}(\zeta)$ and
$\psi_{\zeta}(\zeta)$ functions that, as we already mentioned, completely
determine the equilibrium elastic state. Muskhelishvili finds\cite{m}
\begin{eqnarray}
\sigma_{\rho\rho}-i\sigma_{\rho\theta}&=&
{{\phi_{\zeta}'(\zeta)}\over{\omega'(\zeta)}}
+{{\overline{\phi_{\zeta}'(\zeta)}}\over{\overline{\omega'(\zeta)}}}
-{{\zeta^2}\over{\rho^2
\overline{\omega'(\zeta)}}}\biggl\lbrace
{{\overline{\omega(\zeta)}}\over{\omega'(\zeta)}}
\phi_{\zeta}''(\zeta)\nonumber\\
&&-{{\overline{\omega(\zeta)} \omega''(\zeta)}
\over{{\omega'(\zeta)}}^2}\phi_{\zeta}'(\zeta)
+\psi_{\zeta}'(\zeta)\biggr\rbrace .
\label{90}
\end{eqnarray}
Noting that the transformation of the displacements
along the unit circle from the Cartesian components $(u_g,v_g)$,
$g=u_g+iv_g$, to the polar components $(v_{\rho},v_{\theta})$ is\cite{m}
\begin{equation}
v_{\rho}+i v_{\theta}={1\over{\sigma}}{{\overline{\omega'(\sigma)}}
\over{|\omega'(\sigma)|}}(u_g+iv_g)={1\over{\sigma}}{{\overline{\omega'
(\sigma)}}\over{|\omega'(\sigma)|}}g(\sigma) ,
\label{91}
\end{equation}
we conclude from (\ref{89})
\begin{eqnarray}
\delta E&=&{1\over 2}{\rm Re}\oint_{\gamma}(\sigma_{\rho\rho}-
i\sigma_{\rho\theta})(v_{\rho}+i v_{\theta})|\omega'(\sigma)||d\sigma|
\label{92}\\
&=&{1\over 2}{\rm Re}\oint_{\gamma}(\sigma_{\rho\rho}-
i\sigma_{\rho\theta}){{\overline{\omega'(\sigma)}}\over{\sigma}} g(\sigma)
{{d\sigma}\over{i\sigma}}\nonumber
\end{eqnarray}
where $\sigma_{\rho\rho}-i\sigma_{\rho\theta}$ is given by (\ref{90})
with $\zeta\to\sigma$ ($\rho\to 1$).
From (\ref{sh3}) and (\ref{92}) we find the energy release $E(g)$
\begin{eqnarray}
E(g)&=&-{{\pi T}\over{4\mu}}(1+\chi){\rm Re} [y_1(g)]\nonumber\\
&&-{1\over 2}{\rm Re}\oint_{\gamma}(\sigma_{\rho\rho}-i\sigma_{\rho\theta}){
{\overline{\omega'(\sigma)}}\over{\sigma}} g(\sigma)
{{d\sigma}\over{i\sigma}}\label{93}
\end{eqnarray}
where $y_1(g)$ is the $1/z$ coefficient in the expansion
of $\psi_{\zeta}(\omega^{-1}(z))$ at $z=\infty$.
The equilibrium elastic problem for the material with the straight cut allows
a closed analytical solution for an arbitrary specified displacement
$g(\sigma)$ along the unit circle in the conformal plane $\zeta$. Using the
fact that ${\phi_{\zeta}}_0(\zeta)$ and $\psi_{\zeta}(\zeta)$ are
holomorphic functions outside the unit circle that satisfy (\ref{88}),
Muskhelishvili finds\cite{m}
\begin{eqnarray}
\phi_{\zeta}(\zeta)&=&{{T\ell\zeta}\over 8}-{{2\mu}\over\chi}{1\over{2\pi i}}
\oint_{\gamma}{{g(\sigma) d\sigma}\over{\sigma-\zeta}}+{{T\ell}\over
{8\chi\zeta}}\label{94}\\
\psi_{\zeta}(\zeta)&=&{{\mu}\over{\pi i}}\oint_{\gamma}{{\overline{g(\sigma)}
d\sigma}\over{\sigma-\zeta}}+{{T\ell}\over 8}\biggl({\chi\over\zeta}-
{{2\zeta}\over{\zeta^2-1}}\biggr)\nonumber\\
&&-\zeta{{1+\zeta^2}\over{\zeta^2-1}}
\biggl(\phi_{\zeta}'(\zeta)-{{T\ell}\over 8}\biggr)-{{\mu}\over{\pi i}}
\oint_{\gamma}{{\overline{g(\sigma)}d\sigma}\over{\sigma}} .
\nonumber
\end{eqnarray}
Assuming that $g(\sigma)$ is smooth, we represent it by a convergent
Fourier series
\begin{equation}
g(\sigma)=\sum_{n=-\infty}^{+\infty}(a_n+ib_n)\sigma^n .\label{95}
\end{equation}
Using representation (\ref{95}) for $g(\sigma)$ we find from
(\ref{93}) the energy release $E$
\begin{eqnarray}
E(g)&=&{{\pi T\ell(1+\chi)}\over{4\chi}}(
a_{-1}+\chi a_1)\label{97}\\
&&-2\pi\mu\sum_{n=1}^{+\infty}
n\biggl(a_n^2+b_n^2+{{a_{-n}^2+b_{-n}^2}\over\chi}\biggr)
\nonumber\\
&&-{{\pi T^2{\ell}^2(1+\chi)}\over{128\mu}}\biggl(\chi-2+
{1\over\chi}\biggr) .\nonumber
\end{eqnarray}
(The computations are tedious, but straightforward: first
we substitute (\ref{95}) into (\ref{94}) to find the solution
of the elasticity problem in terms of the Fourier amplitudes
$\lbrace a_n,b_n\rbrace$, then we calculate the stress tensor
components at the unit circle using (\ref{90}), and finally plugging
the result into (\ref{93}) we obtain (\ref{97}).)
The next step is to relax the symmetric modes in the crack opening
displacement given by $g(\sigma)$.
From $(\ref{95})$ and $(\ref{86})$ we find
\begin{eqnarray}
u_g&=&\sum_{n=1}^{+\infty}(a_n+a_{-n})\cos n\alpha +(b_{-n}-b_n)\sin n\alpha
\label{z1}\\
v_g&=&\sum_{n=1}^{+\infty}(b_n+b_{-n})\cos n\alpha +(a_n-a_{-n})\sin n\alpha
\nonumber
\end{eqnarray}
which with the change of variables
\begin{eqnarray}
u_n&=&b_{-n}-b_n\label{z2}\\
v_n&=&a_n-a_{-n}\nonumber\\
\tilde{u}_n&=&b_n+b_{-n}\nonumber\\
\tilde{v}_n&=&a_n+a_{-n}\nonumber
\end{eqnarray}
is rewritten as
\begin{eqnarray}
u_g&=&\sum_{n=1}^{+\infty}\tilde{v}_n\cos n\alpha +u_n\sin n\alpha
\label{z3}\\
v_g&=&\sum_{n=1}^{+\infty}\tilde{u}_n\cos n\alpha +v_n\sin n\alpha .
\nonumber
\end{eqnarray}
It is clear now that the asymmetric modes of the crack opening
displacement are described by $\lbrace u_n,v_n\rbrace$, while
the symmetric ones are specified by $\lbrace \tilde{u}_n,\tilde{v}_n\rbrace$.
(Recall that points parameterized by $\sigma$ and $1/\sigma$
(or equivalently $\alpha$ and $-\alpha$) lie opposite one
another across the cut.)
The amplitudes $\lbrace u_n,v_n\rbrace$ are uniquely determined
for the given $(g_x,g_y)$. From (\ref{sp5}) and (\ref{z3})
\begin{equation}
g_x(\ell/2\cos\alpha)+ig_y(\ell/2\cos\alpha)=
\sum_{n=1}^{+\infty}(u_n+iv_n)\sin n\alpha\label{z4}
\end{equation}
where $\alpha\in[0,\pi]$.
Using the transformation inverse to (\ref{z2}) we can express the
energy release (\ref{97}) in terms of
$\lbrace u_n,v_n,\tilde{u}_n,\tilde{v}_n\rbrace$. The obtained expression
is maximum for
\begin{eqnarray}
\tilde{u}_n&=&u_n{{\chi-1}\over{1+\chi}}\label{z5}\\
\tilde{v}_n&=&v_n{{1-\chi}\over{1+\chi}},
\mbox{\hspace{0.1in} }n\ne 1\nonumber\\
\tilde{v}_1&=&v_1{{1-\chi}\over{1+\chi}}+{{T\ell(\chi-1)}\over{8\mu}}
\nonumber
\end{eqnarray}
and gives the energy release $E_g$
\begin{equation}
E_g={{T\ell\pi}\over 2}v_1 -{{2\pi\mu}\over{\chi+1}}\sum_{n=1}^{+\infty}
n\biggl(u_n^2+v_n^2\biggr) .\label{z6}
\end{equation}
Finally, the maximum of (\ref{z6}) is achieved for
\begin{eqnarray}
u_n^{\rm max}&=&0\label{z7}\\
v_n^{\rm max}&=&0, \mbox{\hspace{0.1in} }n\ne 1\nonumber\\
v_1^{\rm max}&=&{{T\ell(1+\chi)}\over{8\mu}}\nonumber
\end{eqnarray}
and
\begin{equation}
E_g^{\rm max}={{\pi T^2\ell^2(1+\chi)}\over{32\mu}}
\label{101}
\end{equation}
which, as one might expect, corresponds to the equilibrium opening
of the cut\cite{m} and the energy release associated
with this opening (\ref{st}).
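As a quick consistency check of (\ref{z7}) and (\ref{101}), the
maximization of (\ref{z6}) over the single mode $v_1$ can be carried out
symbolically; a minimal sketch (in Python with the {\tt sympy} library;
an illustration, not part of the derivation above):
\begin{verbatim}
import sympy as sp

# check of (z7) and (101): maximize (z6) over v_1 alone
T, l, mu, chi, v1 = sp.symbols('T ell mu chi v1',
                               positive=True)
E_g = sp.pi*T*l/2*v1 - 2*sp.pi*mu/(chi + 1)*v1**2
v1_max = sp.solve(sp.diff(E_g, v1), v1)[0]
print(v1_max)               # T*ell*(chi + 1)/(8*mu)
print(sp.simplify(E_g.subs(v1, v1_max)))
# -> pi*T**2*ell**2*(chi + 1)/(32*mu), i.e. (101)
\end{verbatim}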
Expanding (\ref{z6}) about $\lbrace u_n^{\rm max},
v_n^{\rm max}\rbrace$, $\lbrace u_n,v_n\rbrace\to
\lbrace u_n^{\rm max}+u_n,v_n^{\rm max}+v_n\rbrace$,
we find
\begin{equation}
E_g={{\pi T^2\ell^2(1+\chi)}\over{32\mu}}-{{2\pi\mu}\over{1+\chi}}
\sum_{n=1}^{+\infty}n \biggl(u_n^2+v_n^2\biggr) .
\label{102}
\end{equation}
Expression (\ref{102}) is the desired result:
we find that the crack opening displacements (specified on the unit
circle in the conformal plane)
\begin{eqnarray}
\lbrace u,v\rbrace=\bigg\lbrace &&v_n{{1-\chi}\over{1+\chi}}\cos n\alpha+
u_n\sin n\alpha,\nonumber\\
&& u_n{{\chi-1}\over{1+\chi}}\cos n\alpha+v_n\sin
n\alpha \bigg\rbrace\label{103}
\end{eqnarray}
imposed on the saddle point cut opening
\begin{equation}
\lbrace u^{\rm max},v^{\rm max}\rbrace=\lbrace 0,
{{T\ell(1+\chi)}\over{8\mu}}\sin\alpha\rbrace\label{104}
\end{equation}
diagonalize the energy release
and thus are the normal modes; with the excitation of the $n$-th normal
mode with the amplitude $\lbrace u_n,v_n\rbrace$ the energy release
decreases by $2\pi\mu n (u_n^2+v_n^2)/(1+\chi)$.
Although (\ref{z6}) has been derived for the material under
uniform isotropic stretching at infinity, it can be reinterpreted
to describe the minimum increase in the energy $\Delta E$ of the material
under a uniform isotropic compression (pressure) $P$ at infinity,
due to the opening of the straight cut with specified displacement
discontinuity along its boundary. For the displacement discontinuity
given by (\ref{z4}) we find similar to (\ref{z6})
\begin{equation}
\Delta E={{P\ell\pi}\over 2}v_1 +{{2\pi\mu}\over{\chi+1}}\sum_{n=1}^{+\infty}
n\biggl(u_n^2+v_n^2\biggr) .\label{zz6}
\end{equation}
One can use the same arguments that lead to (\ref{83}) to show that the crack
opening normal modes (\ref{103}) decouple from all continuous
modes (that are present in the uncracked material) and thus
leave their spectrum unchanged.
The saddle point is, however, unphysical in this case:
as follows from (\ref{104}) (with $T$ replaced by $-P$),
it corresponds to a configuration where the material overlaps itself.
\section{The imaginary part of the partition function}
Elastic materials at finite temperature undergo a
phase transition to fracture at zero applied stress, similar to
the first order phase transition in spin systems below the critical
temperature at zero magnetic field.
The free energy of an elastic material under a stretching load develops
an imaginary part which determines the material lifetime with respect to
fracture. The imaginary part of the free energy has an essential singularity
at zero applied stress. In this section we calculate this singularity
at low temperatures in a saddle point approximation including quadratic
fluctuations.
Consider an infinite two-dimensional elastic material
subject to a uniform isotropic stretching $T$ at infinity. Creation of a
straight cut of length $\ell$ will increase the energy
by $2 \alpha \ell$, where $\alpha$ is the surface tension
(the energy per unit length of edge), with a factor of $2$ because of
the two free surfaces. On the other hand, the cut will open up because
of elastic relaxation. Using (\ref{101}) for the energy release we find the
total energy $E(\ell)$ of the straight cut in equilibrium under
stretching tension $T$:
\begin{equation} E(\ell)=2 \alpha \ell -{{\pi T^2\ell^2(1+\chi)} \over
{32\mu}} .
\label{105}
\end{equation}
Introducing
\begin{equation}
\ell_c={{32\mu\alpha} \over {\pi T^2 (1+\chi)}}
\label{106}
\end{equation}
we can rewrite the energy of the crack as
\begin{equation}
E(\ell)=2\alpha \ell -\alpha {\ell^2 \over \ell_c} .
\label{107}
\end{equation}
It follows that cracks with $\ell>\ell_c$ will grow, giving rise to
the fracture of the material, while those with $\ell<\ell_c$ will heal
--- a result first obtained by Griffith\cite{g}. At finite temperature
a crack of any size can appear as a thermal fluctuation, which
means that for arbitrarily small stretching $T$ the true ground
state of the system is fractured into pieces and so the free
energy of the material cannot be analytic at $T=0$.
Because the energy $E(\ell_c) = \alpha \ell_c$
grows as $1/ T^2$ as $T \rightarrow 0$, interactions between
thermally nucleated cracks are unimportant at small $T$ and low temperatures
(allowing us to use the ``dilute gas approximation'').
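The barrier top at $\ell=\ell_c$ with height $E(\ell_c)=\alpha\ell_c$, and
the quadratic expansion used in (\ref{107b}) below, follow from (\ref{107})
by elementary calculus; a minimal symbolic sketch (Python with the
{\tt sympy} library; an illustration, not part of the derivation):
\begin{verbatim}
import sympy as sp

alpha, lc = sp.symbols('alpha ell_c', positive=True)
l, d = sp.symbols('ell Delta', real=True)
E = 2*alpha*l - alpha*l**2/lc          # (107)
print(sp.solve(sp.diff(E, l), l))      # [ell_c]
print(E.subs(l, lc))                   # alpha*ell_c
print(sp.expand(E.subs(l, lc + d)))
# -> alpha*ell_c - alpha*Delta**2/ell_c
\end{verbatim}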
The thermodynamic properties of a macroscopic system can be obtained
from its partition function $Z$:
\begin{equation}
Z= \sum_{N=0}^{\infty}\sum_n \exp (-\beta E_{nN})\label{107a}
\end{equation}
where the summation $N$ is over all possible numbers of particles
(cracks in our case) and the summation $n$ is over all states of the
system with $N$ cracks.
To begin with, let's consider the partition function of the material with
one cut $Z_1$
\begin{equation}
Z_1=\sum_E \exp (-\beta E)
\label{107c}
\end{equation}
where the summation is over all energy states of the material with a
single cut. The calculation of the imaginary part of the partition function is
dominated by a saddle point, which in our case is a straight cut of
length $\ell_c$. The straight cut is the saddle point because
it gains the most elastic relaxation energy for a given number of broken
bonds (we explicitly show in section III that curving a cut
reduces the energy release).
For now we neglect all fluctuations of the critical droplet
(the cut of length $\ell_c$) except for its uniform
contraction or expansion --- fluctuations in the length of the straight
cut. Introducing the deviation $\Delta\ell$ in the cut
length from the critical length $\ell_c$, $\Delta\ell=\ell-\ell_c$,
we find from (\ref{107})
\begin{equation}
E=\alpha\ell_c -\alpha{{{\Delta\ell}^2}\over{\ell_c}} .
\label{107b}
\end{equation}
The fact that this degree of freedom has a negative eigenvalue means that
direct computation of the partition function yields a divergent result. A
similar problem for the three-dimensional Ising model was solved by
Langer\cite{langer}: one has to compute the partition function in a stable
state $P=-T$ (compression), and then do an analytical continuation in
parameter space to the state of interest. The free energy develops an
imaginary part in the unstable state, related to the decay rate for
fracture\cite{langer2}: the situation is similar to that of barrier
tunneling in quantum mechanics \cite{affleck}, where the imaginary part in
the energy gives the decay rate of a resonance. We have explicitly
implemented this prescription for the simplified calculation of the
imaginary part of the free energy\cite{we}: for the elastic material under
a uniform isotropic compression at infinity allowing for the nucleation of
straight cuts of an arbitrary length with an arbitrary elliptical opening
(mode $v_1$ in (\ref{zz6})), we calculated the free energy in a dilute gas
approximation. We carefully performed the analytical continuation to the
metastable state describing the elastic material under the uniform isotropic
stretching $T$ at infinity and found the imaginary part of the free energy
\begin{equation}
{\rm Im} F^{\rm simple}(T)={2 \over {\beta^2 T\lambda^2} }
\biggl ({ \pi {{A} \over {\lambda ^2}}} \biggr )\exp{
\biggl\lbrace {{-32 \beta\mu
\alpha^2 } \over { \pi T^2 (\chi+1)} } \biggr\rbrace}
\label{w1}
\end{equation}
where $A$ is the area of the material and $\lambda$ is the ultraviolet
cutoff of the theory. (The version of equation (\ref{w1})
derived in\cite{we} overcounts the contribution
from the zero-restoring-force modes $(2\pi A/\lambda^2)$ by a factor of $2$.
Because cracks tilted by $\theta$ and $\pi+\theta$ are identical,
the proper contribution from rotations must be $\pi$, rather than $2\pi$.)
The alternative to this analytical continuation approach is to deform
the integration contour over the amplitude of the unstable
(negative eigenvalue) mode from the saddle point $\Delta\ell=0$ along
the path of the steepest descent\cite{langer}. More precisely, we
regularize the direct expression for the partition function
\begin{equation}
Z_1=Z_0\biggl(\pi{A\over{\lambda^2}}\biggr)
\int_{-\ell_c}^{\infty}{{d\Delta\ell}\over\lambda}\exp\biggl\lbrace-\beta
\biggl(\alpha\ell_c -\alpha{{{\Delta\ell}^2}\over{\ell_c}}\biggr)\biggr
\rbrace\label{i1}
\end{equation}
(which diverges at large $\Delta\ell$)
by bending the $\Delta\ell$ integration contour
from the saddle into the complex plane:
\begin{eqnarray}
Z_1&=&Z_0\biggl(\pi{A\over{\lambda^2}}\biggr)
\int_{-\ell_c}^{0}{{d\Delta\ell}\over\lambda}\exp\biggl\lbrace-\beta
\biggl(\alpha\ell_c -\alpha{{{\Delta\ell}^2}\over{\ell_c}}\biggr)\biggr
\rbrace\label{i2}\\
&&+Z_0\biggl(\pi{A\over{\lambda^2}}\biggr)\exp(-\beta\alpha\ell_c)
\int_{0}^{\pm i\infty}{{d\Delta\ell}\over\lambda}
\exp\biggl\lbrace\beta\alpha{{{\Delta\ell}^2}\over{\ell_c}}\biggl
\rbrace\nonumber
\end{eqnarray}
In (\ref{i1}-\ref{i2}) the factor $( \pi A/{\lambda^2})$ comes
from the zero-restoring-force modes for rotating and translating the cut,
and $Z_0$ is the partition function for the uncracked material
(unity for the present simplified calculation).
The second integral in (\ref{i2}) generates the imaginary part of the
partition function
\begin{equation}
{\rm Im}Z_1=\pm{1\over 2}Z_0\biggl(\pi{A\over{\lambda^2}}
\biggr)\exp(-\beta\alpha\ell_c){\biggl({{\pi\ell_c}\over{
\beta\alpha\lambda^2}}\biggr)}^{1/2}
\label{i3}
\end{equation}
with the $\pm$ sign corresponding to the analytical continuation
to either side of the branch cut of the partition function.
(We showed in\cite{we} that the partition function is an analytic function
in the complex $T$ plane with a branch cut along the line $T\in[0,+\infty)$.)
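The imaginary part in (\ref{i3}) comes from the second integral in
(\ref{i2}); the substitution $\Delta\ell=it$ makes this explicit and is
easy to check symbolically (a sketch in Python with {\tt sympy}):
\begin{verbatim}
import sympy as sp

beta, alpha, lc, t = sp.symbols(
    'beta alpha ell_c t', positive=True)
# second leg of (i2): Delta_ell = i*t
leg = sp.I*sp.integrate(
    sp.exp(-beta*alpha*t**2/lc), (t, 0, sp.oo))
print(sp.simplify(leg))
# -> (I/2)*sqrt(pi*ell_c/(alpha*beta)):
#    the 1/2 and square-root factors of (i3)
\end{verbatim}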
In a dilute gas approximation the partition function for the material
with $N$ cuts $Z_N$ is given by
\begin{equation}
Z_N=Z_0{{{({Z_1}/ Z_0)}^N}\over{N!}}
\label{119}
\end{equation}
which from (\ref{107a}) determines the material free
energy
\begin{equation}
F=-{1\over\beta}\ln Z=-{1\over\beta}\ln\sum_{N=0}^{\infty}Z_N =
-{1\over\beta}\ln Z_0-{1\over\beta}{{Z_1}\over{Z_0}} .
\label{120}
\end{equation}
Following (\ref{i3}) and (\ref{120}) we find the imaginary part
of the free energy
\begin{eqnarray}
{\rm Im} F^{\rm simple}(T)=\pm&&{2 \over {\beta^2 T\lambda^2} }
{\biggl({{2\beta\mu\lambda^2}\over{\chi+1}}\biggr)}^{1/2}
\biggl ({ \pi {{A} \over {\lambda ^2}}} \biggr )\nonumber\\
&&\exp{ \biggl\lbrace {{-32 \beta\mu
\alpha^2 } \over { \pi T^2 (\chi+1)} } \biggr\rbrace}
\label{i4}
\end{eqnarray}
Equation (\ref{i4}) differs from (\ref{w1}) only because for the calculation
of the imaginary part of the free energy in\cite{we} we used two degrees of
freedom: the length of the cut and its elliptical opening, while in the
current calculation there is only one degree of freedom. One can immediately restore
(\ref{w1}) by adding the $v_1$ mode of (\ref{102}) to the energy of the
elastic material (\ref{107b}) and integrating it out. From (\ref{102}), the
$v_1$ mode generates an additional multiplicative contribution $Z_{v_1}$ to
the partition function for a single crack $Z_1$, and thus from (\ref{120})
changes the imaginary part of the free energy for multiple cracks $F^{\rm
simple}$, ${\rm Im}F^{\rm simple}\to Z_{v_1} {\rm Im}F^{\rm simple}$
\begin{equation}
Z_{v_1}=\int_{-\infty}^{+\infty}{{dv_1}\over{\lambda}}\exp\biggl\lbrace
-{{2\pi\mu\beta}\over{1+\chi}}v_1^2\biggr\rbrace={\biggl({{1+\chi}
\over{2\mu\beta\lambda^2}}\biggr)}^{1/2}
\label{ii4}
\end{equation}
which cures the discrepancy between (\ref{i4}) and (\ref{w1}).
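The Gaussian integral (\ref{ii4}) itself is elementary; for completeness,
a one-line symbolic check (Python/{\tt sympy}):
\begin{verbatim}
import sympy as sp

mu, beta, chi, lam, v1 = sp.symbols(
    'mu beta chi lambda v1', positive=True)
Zv1 = sp.integrate(
    sp.exp(-2*sp.pi*mu*beta*v1**2/(1 + chi)),
    (v1, -sp.oo, sp.oo))/lam
print(sp.simplify(Zv1**2))
# -> (chi + 1)/(2*beta*lambda**2*mu): (ii4) squared
\end{verbatim}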
Although the analytical continuation method is theoretically
more appealing, the calculation of the imaginary part through
the deformation of the integration contour of the unstable mode
is more convenient once we include the quadratic fluctuations.
It is clear that both methods (properly implemented) must give the
same results.
We have already emphasized that the above calculation ignores
the quadratic fluctuations about the saddle point (except for the uniform
contraction or extension of the critical droplet),
which may change the prefactor in the expression (\ref{i4}) for the
imaginary part of the free energy and may renormalize the surface tension
$\alpha$. There are three kinds of quadratic fluctuations we have to deal
with. (I) {\it Curvy cuts} --- changes in the shape of the tear in the
material: deviations of the broken bonds from a straight-line configuration.
(II) {\it Surface phonons} --- thermal fluctuations of the free surface
of the crack about its equilibrium opening. (III) {\it Bulk phonons} ---
thermal fluctuations of the elastic media that are continuous at the cut
boundary. To incorporate these fluctuations we have to integrate out the
quadratic deviation from the saddle point energy coming from their degrees
of freedom (as we did for the surface phonon $v_1$ above).
In all cases the answer will depend upon the microscopic lattice-scale
structure of the material. In field-theory language, our
theory needs regularization: we must decide exactly how
to introduce the ultraviolet cut-off $\lambda$. Here we discuss the lattice
regularization, where the cut-off is explicitly introduced by the
interatomic distance, and $\zeta$-function
regularization, common in field theory. We find that the precise form
of the surface tension
renormalization and the prefactor in the imaginary part of the free energy
depends on the regularization prescription, but certain important quantities
appear regularization independent.
The partition function of the elastic material with one cut
$Z_1$ in the saddle point approximation (\ref{i2}),
will develop a multiplicative factor $Z_f$ upon inclusion of the quadratic
fluctuations $Z_1\to Z_f Z_1$ with
\begin{equation}
Z_f=\sum_{\Delta E}\exp(-\beta \Delta E) .
\label{131}
\end{equation}
A deviation $\Delta E$ from the saddle point energy is decomposed
into three parts, with each part describing fluctuations of one
of the mentioned three types
\begin{eqnarray}
\Delta E&=&{{\alpha\ell_c^2}\over{\pi^2\lambda}}\sum_{n=1}^{\infty}{1\over{n^2}}
c_n^2+{{2\pi\mu}\over{1+\chi}}\sum_{n=1}^{\infty}n(u_n^2+v_n^2)\nonumber\\
&&+\Delta E_{\rm continuous} .\label{132}
\end{eqnarray}
The first term in (\ref{132}) accounts for the decrease in the
energy release due to the curving of the saddle point cut of
length $\ell_c$ with the curvature
\begin{equation}
k(x)=\sum_{n=1}^{\infty}c_n\sqrt{{{2\lambda}\over{\ell_c}}}\sin
{{\pi n x}\over{\ell_c}},\ x\in[0,\ell_c] .
\label{133}
\end{equation}
(The first term in (\ref{132}) follows from (\ref{c9}) with $\ell=\ell_c$
given by (\ref{106}).)
The second term in (\ref{132}) describes the asymmetric modes in the
thermal fluctuations of the free surface of the saddle point crack
about its equilibrium opening shape
\begin{equation}
u^{\rm asym}(t)+i v^{\rm asym}(t)=
\sum_{n=1}^{\infty}(u_n+iv_n)\sin n\vartheta,\ \vartheta\in[-\pi,\pi)
\label{134}
\end{equation}
where a point at the cut boundary is parameterized by its
distance $t=\ell_c(1+\cos\vartheta)/2$ from the cut end;
$\vartheta\in[-\pi,0)$ parameterize the lower boundary displacements
and $\vartheta\in[0,\pi)$ parameterize the displacements of the
upper boundary points. The symmetric modes of the crack
opening about its equilibrium opening shape are assumed to relax
providing the minimum increase in the elastic energy for a given
$\lbrace u_n,v_n\rbrace$. The latter guarantees that all additional
modes with the continuous displacement at the cut boundary
(the ones which give $\Delta E_{\rm continuous}$ --- the last
term in (\ref{132}) describing the bulk phonons)
decouple from $\lbrace c_n,u_n,v_n\rbrace$
and are the same as the ones for the uncracked material.
(The arguments here are the same as those that were used in derivation of
(\ref{83}).) Since the curvature modes $\lbrace c_n\rbrace$ give the
equilibrium energy of the curvy cut, the response of the surface
phonons to such a curving is already incorporated, so
the quadratic fluctuations $\lbrace c_n\rbrace$ can be calculated
independently from the quadratic fluctuations $\lbrace u_n,v_n\rbrace$.
The latter means that there is no coupling between the $\lbrace c_n\rbrace$
and $\lbrace u_n,v_n\rbrace$ modes in (\ref{132}), and the
spectrum of $\lbrace u_n,v_n\rbrace$ modes is the same as that for
the straight cut of length $\ell_c$ (\ref{102}).
The last thing we have to settle before the calculation of $Z_f$
is the proper integration measure for the surface phonon modes
$\lbrace u_n,v_n\rbrace$. (We argued in the conclusion of section
III that the integration measure for the modes $c_n$ is Cartesian ---
$\prod_{n=1}^{\infty}dc_n$.) Here we show that because the functional measure
in the displacement fields $(u(x,y),v(x,y))$ defined at each point
of the material $(x,y)$ is naturally Cartesian ---
$D[u(x,y)/\lambda]D[v(x,y)/\lambda]$,
the integration measure for the modes $\lbrace u_n,v_n\rbrace$
must be of the form
$\prod_{n=1}^{\infty}(1/2\pi)du_n dv_n/\lambda^2$.
An arbitrary elastic displacement field for the material with a curvy cut
is defined by specifying its bulk part $(u_{\rm bulk}(x,y),v_{\rm bulk}(x,y))$
(point (x,y) can be anywhere except at the cut boundary) and the cut part
$(u_{\rm cut}^+(t),u_{\rm cut}^-(t),v_{\rm cut}^+(t),v_{\rm cut}^-(t))$
(the cut displacements are defined along the cut and are parameterized
by the distance $t=\ell_c(1+\cos\vartheta)/2,\ \vartheta\in[0,\pi)$
from the cut end; the $+$ and $-$ superscripts are correspondingly
the displacements at the upper and the lower boundary of the cut).
It is helpful to visualize the introduction of the cut into the material
as splitting in half each of the atoms of the material along the
cut boundary.
Then, the bulk part of the displacement field combines degrees of
freedom of all atoms left untouched by splitting and the cut part
describes the displacements of the split ones. Note that the splitting
increases the total number of the degrees of freedom.
The original measure is naturally
\begin{eqnarray}
D[u_{\rm bulk}(x,y)/\lambda]&&
D[v_{\rm bulk}(x,y)/\lambda]D[u_{\rm cut}(t)^+u_{\rm cut}(t)^-/\lambda^2]
\nonumber\\
&&D[v_{\rm cut}(t)^+v_{\rm cut}(t)^-/\lambda^2]\nonumber
\end{eqnarray}
First we separate the symmetric and asymmetric parts in the crack
opening displacement
\begin{eqnarray}
u^{\rm asym}(t)&=&{1\over 2}\biggl(u_{\rm cut}^+(t)-u_{\rm cut}^-(t)\biggr)
\label{135}\\
v^{\rm asym}(t)&=&{1\over 2}\biggl(v_{\rm cut}^+(t)-v_{\rm cut}^-(t)\biggr)
\nonumber\\
u^{\rm sym}(t)&=&{1\over 2}\biggl(u_{\rm cut}^+(t)+u_{\rm cut}^-(t)
\biggr)\nonumber\\
v^{\rm sym}(t)&=&{1\over 2}\biggl(v_{\rm cut}^+(t)+v_{\rm cut}^-(t)
\biggr)\nonumber
\end{eqnarray}
Because the Jacobian of the transformation
\begin{eqnarray}
&&(u_{\rm cut}(t)^+,u_{\rm cut}(t)^-,v_{\rm cut}(t)^+,v_{\rm cut}(t)^-)
\to\nonumber\\
&&(u^{\rm asym}(t),v^{\rm asym}(t),u^{\rm sym}(t),v^{\rm sym}(t))\nonumber
\end{eqnarray}
is constant
\begin{equation}
\Bigg|{{\partial(u_{\rm cut}(t)^+,
u_{\rm cut}(t)^-,v_{\rm cut}(t)^+,
v_{\rm cut}(t)^-)}\over{\partial(u^{\rm asym}(t),v^{\rm asym}(t),
u^{\rm sym}(t),v^{\rm sym}(t))}}\Bigg|={1\over 4} ,
\label{136}
\end{equation}
the integration measure remains Cartesian:
\begin{eqnarray}
D[u_{\rm bulk}(x,y)/\lambda]
&&D[v_{\rm bulk}(x,y)/\lambda]D[u^{\rm sym}(t)v^{\rm sym}(t)/\lambda^2]
\nonumber\\&&D[u^{\rm asym}(t)v^{\rm asym}(t)/4\lambda^2] .\nonumber
\end{eqnarray}
Now we can combine the bulk and the symmetric cut part of the measure
by introducing the continuous displacement fields $(u_c(x,y),v_c(x,y))$
everywhere, including the cut boundary. (In our atomic picture, the symmetric
modes of the cut part of the displacement fields represent the displacements
of the split atoms as if they were whole, and so it is natural
to combine these degrees of freedom with the bulk ones. The continuous
degrees of freedom obtained as a result of this combination are
indistinguishable from the degrees of freedom of the uncracked material.)
The integration measure
becomes
$$D[u_c(x,y)/\lambda]D[v_c(x,y)/\lambda]D[u^{\rm asym}(t)
v^{\rm asym}(t)/4\lambda^2] .$$ According to our decomposition,
we specify the asymmetric cut opening and
find the equilibrium displacement fields that minimize the increase
in the elastic energy. In other words, given
$(u^{\rm asym}(t),v^{\rm asym}(t))$ determine
$(u_c^{\rm min}(u^{\rm asym},v^{\rm asym}),v_c^{\rm min}(u^{\rm asym},
v^{\rm asym}))$. The transformation
\begin{eqnarray}
u_c(x,y)&=&u_c^{\rm min}(u^{\rm asym},v^{\rm asym})+\tilde{u}_c(x,y)
\label{137}\\
v_c(x,y)&=&v_c^{\rm min}(u^{\rm asym},v^{\rm asym})+\tilde{v}_c(x,y)
\nonumber
\end{eqnarray}
then completely decouples the surface phonon modes from the continuous modes
that contribute to $\Delta E_{\rm continuous}$ in (\ref{132}).
The Jacobian of the transformation
\begin{eqnarray}
&&(u_c(x,y),v_c(x,y),u^{\rm asym}(t),
v^{\rm asym}(t))\to\nonumber\\&&(\tilde{u}_c(x,y),\tilde{v}_c(x,y),
u^{\rm asym}(t),v^{\rm asym}(t))\nonumber
\end{eqnarray}
is unity (the transformation is
just a functional shift) and so the measure remains unchanged
$$D[\tilde{u}_c(x,y)/\lambda]D[\tilde{v}_c(x,y)/\lambda]D[u^{\rm asym}(t)
v^{\rm asym}(t)/4\lambda^2] .$$
The Fourier transformation (\ref{134}) is orthogonal, but the Fourier
modes are not normalized:
\begin{equation}
\int_0^{\pi}d\vartheta\ \sin^2 n\vartheta ={\pi\over 2} .
\label{137a}
\end{equation}
The latter means that at the final stage of the change of variables
$(u^{\rm asym}(t),v^{\rm asym}(t))\to \lbrace u_n,v_n\rbrace$
there appears the Jacobian $\prod_{n=1}^{\infty}(2/\pi)$, and so we end up
with the integration measure
$\prod_{n=1}^{\infty}(1/2\pi)du_n dv_n/\lambda^2$.
From (\ref{131}-\ref{132}) with the proper integration measure over
the surface phonon modes we find
\begin{eqnarray}
Z_f&=&\prod_{n=1}^{\infty}\int_{-\infty}^{+\infty}dc_n
\exp\biggl\lbrace -\beta{{\alpha\ell_c^2}\over{\pi^2\lambda n^2}}c_n^2
\biggr\rbrace\label{145}\\
&&\prod_{n=1}^{\infty}{\int\int}_{-\infty}^{+\infty}
{1\over{ 2\pi}}{{du_n dv_n}\over{\lambda^2}}
\exp\biggl\lbrace -\beta{{2\pi\mu n}\over{1+\chi}}(u_n^2+v_n^2)
\biggr\rbrace\nonumber\\
&&Z_{\rm continuous}\nonumber\\
&=&\prod_{n=1}^{\infty}{\biggl({{\pi^3\lambda n^2}\over{\beta\alpha
\ell_c^2}}\biggr)}^{1/2}\ \prod_{n=1}^{\infty}{{1+\chi}\over
{4\pi\beta\mu\lambda^2 n}}\ Z_{\rm continuous}
\label{145a}
\end{eqnarray}
where
\begin{equation}
Z_{\rm continuous}=\sum_{\Delta E_{\rm continuous}}
\exp(-\beta\Delta E_{\rm continuous}) .
\label{146}
\end{equation}
Because $\Delta E_{\rm continuous}$ corresponds to the degrees of freedom
of the uncracked material (with the same energy spectrum),
$Z_{\rm continuous}$ contributes to the partition function $Z_0$ of
the material without the crack, which according to (\ref{120}) drops out
from the calculation of the imaginary part of the free energy.
All the products over $n$ in these expressions diverge: we need a
prescription for cutting off the modes at short wavelengths (an ultraviolet
cutoff).
First we'll consider the $\zeta$-function regularization.
In this regularization prescription\cite{ramond},
the infinite product of the type $D=\prod_{n=1}^{\infty}\lambda_n$
is evaluated by introducing the function $D_{\zeta}(s)$
\begin{equation}
D_{\zeta}(s)=\sum_{n=1}^{\infty}{1\over{\lambda_n^s}}
\label{151}
\end{equation}
so that
\begin{equation}
D=\exp(-D_{\zeta}'(0)) .
\label{152}
\end{equation}
It is assumed that the sum (\ref{151}) is convergent in some region of the
complex $s$ plane and that it is possible to analytically continue
$D_{\zeta}(s)$ from that region to $s=0$.
From (\ref{145a}) we find
\begin{equation}
Z_f=D_1 D_2 Z_{\rm continuous}
\label{153}
\end{equation}
where $D_1$ and $D_2$ are obtained following (\ref{152})
from the corresponding $\zeta$-functions: ${D_1}_{\zeta}(s)$
and ${D_2}_{\zeta}(s)$
\begin{eqnarray}
{D_1}_{\zeta}(s)&=&{\biggl({{\beta\alpha\ell_c^2}\over{\pi^3\lambda}}
\biggr)}^{s/2}\sum_{n=1}^{\infty}{1\over{n^s}}=
{\biggl({{\beta\alpha\ell_c^2}\over{\pi^3\lambda}}
\biggr)}^{s/2}\zeta_R(s)\label{154}\\
{D_2}_{\zeta}(s)&=&{\biggl({{4\pi\beta\mu\lambda^2}\over{1+\chi}}
\biggr)}^s\sum_{n=1}^{\infty}n^s={\biggl({{4\pi\beta\mu\lambda^2}\over{1+\chi}}
\biggr)}^s\zeta_R(-s) .\nonumber
\end{eqnarray}
$\zeta_R(s)$ in (\ref{154}) is the standard Riemann $\zeta$-function,
holomorphic everywhere in the complex $s$ plane except
at $s=1$. Noting that $\zeta_R(0)=-1/2$ and
$\zeta_R'(0)=-(\ln 2\pi)/2$ we find from (\ref{152}-\ref{154})
\begin{equation}
Z_f={\biggl({{4\beta\alpha\ell_c^2}\over
{\pi\lambda}}\biggr)}^{1/4}{\biggl({{2\beta\mu\lambda^2}\over
{1+\chi}}\biggr)}^{1/2} Z_{\rm continuous} .\label{155}
\end{equation}
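The two regularized products in (\ref{154})-(\ref{155}) can also be checked
numerically; a sketch (Python with the {\tt mpmath} library; the numerical
values of the dimensionless combinations $A$ and $B$ below are arbitrary
placeholders):
\begin{verbatim}
import mpmath as mp

A = 3.7  # beta*alpha*ell_c^2/(pi^3*lambda)
B = 0.9  # 4*pi*beta*mu*lambda^2/(1+chi)

z0 = mp.zeta(0)                 # -1/2
zp0 = mp.zeta(0, derivative=1)  # -ln(2*pi)/2

D1 = mp.exp(-(mp.log(A)/2*z0 + zp0))
D2 = mp.exp(-(mp.log(B)*z0 - zp0))

print(D1, (4*mp.pi**2*A)**0.25)  # agree: first factor
print(D2, mp.sqrt(B/(2*mp.pi)))  # agree: second factor
\end{verbatim}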
From (\ref{120}) and (\ref{155}) we find the imaginary part of the
free energy in the $\zeta$-function regularization
\begin{equation}
{\rm Im}F^{\zeta}={\biggl({{16\beta^3\alpha\mu^2\lambda^5}\over
{\pi{(1+\chi)}^2}}\biggr)}^{1/4}{\biggl({{\ell_c}\over{\lambda}}\biggr)}
^{1/2}{\rm Im}F^{\rm simple}
\label{156}
\end{equation}
where ${\rm Im}F^{\rm simple}$ is given by (\ref{i4}).
Second, we consider the lattice regularization, which is more elaborate.
We represent a curvy cut by $N+1=\ell_c/\lambda$ segments of equal
length parameterized by the
kink angles $\lbrace\alpha_i\rbrace,\ i\in[1,N]$. With our conventional
parameterization of the cut, $t=\ell_c(1+\cos\vartheta)/2$,
$\vartheta\in[-\pi,\pi)$,
the asymmetric modes of the crack opening displacements
$\lbrace u^{\rm asym}(t), v^{\rm asym}(t)\rbrace$ are a piecewise-linear
interpolation of the given
asymmetric displacements of the ``split'' kink atoms $\lbrace u^{\rm asym}_i,
v^{\rm asym}_i\rbrace,\ i\in[1,N]$. More precisely, if $t_i$ and $t_{i+1}$
parameterize the adjacent kinks, we assume
\begin{eqnarray}
u^{\rm asym}(t)&=&u^{\rm asym}_i+{{u^{\rm asym}_{i+1}-u^{\rm asym}_i}
\over{t_{i+1}-t_i}}(t-t_i)
\label{152a}\\
v^{\rm asym}(t)&=&v^{\rm asym}_i+{{v^{\rm asym}_{i+1}-v^{\rm asym}_i}
\over{t_{i+1}-t_i}}(t-t_i)\nonumber
\end{eqnarray}
for $t\in[t_i,t_{i+1}]$.
From the integration measure arguments for the
$\zeta$-function regularization, it is clear that the integration
measure in this case must be $\prod_{i=1}^N d\alpha_i \prod_{i=1}^{N}du^
{\rm asym}_i dv^{\rm asym}_i/4\lambda^2$.
Now we have to write down the lattice regularization of the
quadratic deviation $\Delta E$ from the saddle point energy (\ref{132}).
From (\ref{c1}) and (\ref{c3}), the curving of the critical cut $\ell_c$ will
reduce the energy release by $\Delta E_c$
\begin{eqnarray}
\Delta E_c &=&{{T^2\pi(1+\chi)\ell_c^2}\over{32\mu}}\sum_{i,j=1}^N
\alpha_i\alpha_j M^c_{ij}\nonumber\\
&=&\alpha\ell_c\sum_{i,j=1}^N\alpha_i\alpha_j M^c_{ij}
\label{153a}
\end{eqnarray}
where
\begin{equation}
M^c_{ij}=-{{i j}\over{{(N+1)}^2}}+{{{\rm min}(i,j)}\over{N+1}} .
\label{154a}
\end{equation}
From (\ref{132}) the surface phonon contribution to $\Delta E$ is given by
\begin{equation}
\Delta E_p={{2\pi\mu}\over{1+\chi}}\sum_{n=1}^{+\infty}n(u_n^2+v_n^2)
\label{155a}
\end{equation}
where from (\ref{134})
\begin{eqnarray}
u_n&=&{2\over\pi}\int_0^{\pi}d\vartheta\ u^{\rm asym}(t(\vartheta))
\sin n\vartheta\label{156a}\\
v_n&=&{2\over\pi}\int_0^{\pi}d\vartheta\ v^{\rm asym}(t(\vartheta))
\sin n\vartheta .\nonumber
\end{eqnarray}
In principle, for a given piecewise approximation of the asymmetric
modes (\ref{152a}) determined by $\lbrace u^{\rm asym}_i,
v_i^{\rm asym}\rbrace$, $i\in[1,N]$, one could calculate the Fourier
amplitudes according to (\ref{156a}) and then plug the result into
(\ref{155a}) to obtain $\Delta E_p$ in terms of
$\lbrace u^{\rm asym}_i,v_i^{\rm asym}\rbrace$. We will use another
approach. Using $ u^{\rm asym}(0)=u^{\rm asym}(\ell_c)=
0$ (with the same equalities for $v^{\rm asym}(t)$) we integrate
(\ref{156a}) by parts to obtain
\begin{eqnarray}
u_n&=&{2\over\pi}\int_0^{\pi}d\vartheta\ {{d u^{\rm asym}(t(\vartheta))
}\over{d\vartheta}}
{{\cos n\vartheta}\over n}\label{157a}\\
v_n&=&{2\over\pi}\int_0^{\pi}d\vartheta\ {{d v^{\rm asym}(t(\vartheta))
}\over{d\vartheta}}{{\cos n\vartheta}\over n} .\nonumber
\end{eqnarray}
Substituting (\ref{157a}) into (\ref{155a}) we find
\begin{eqnarray}
\Delta E_p&=&{{8\mu}\over{\pi(1+\chi)}}\int_0^{\pi}\int_0^{\pi}
d\vartheta_1 d\vartheta_2 \biggl[{{d u^{\rm asym}(\vartheta_1)}
\over{d\vartheta_1}}{{d u^{\rm asym}(\vartheta_2)}\over{d\vartheta_2}}
\nonumber\\
&&+{{d v^{\rm asym}(\vartheta_1)}\over{d\vartheta_1}}
{{d v^{\rm asym}(\vartheta_2)
}\over{d\vartheta_2}}\biggr]K(\vartheta_1,\vartheta_2)
\label{158}
\end{eqnarray}
where
\begin{eqnarray}
K(\vartheta_1,\vartheta_2)&=&\sum_{n=1}^{\infty}{{\cos n\vartheta_1 \cos
n\vartheta_2}\over n} .
\label{158a}
\end{eqnarray}
Following\cite{rushic}
\begin{equation}
\sum_{k=1}^{\infty}{{\cos k x}\over{k}}={1\over 2}\ln{1\over{2(1-\cos x)}} ,
\label{159a}
\end{equation}
we find an analytical expression for the kernel (\ref{158a})
\begin{equation}
K(\vartheta_1,\vartheta_2)=-{1\over 2}\ln 2-{1\over 2}\ln|\cos\vartheta_1-
\cos\vartheta_2| .
\label{159}
\end{equation}
Finally, introducing $M_{ij}^p$ from
\begin{eqnarray}
\sum_{i,j=1}^N&& \biggl(u^{\rm asym}_i u^{\rm asym}_j+
v^{\rm asym}_i v^{\rm asym}_j \biggr) M_{ij}^p\nonumber\\
&=&\int_0^{\pi}\int_0^{\pi}
d\vartheta_1 d\vartheta_2 \biggl[{{d u^{\rm asym}(\vartheta_1)}\over
{d\vartheta_1}}{{d u^{\rm asym}(\vartheta_2)}\over{d\vartheta_2}}\nonumber\\
&&+{{d v^{\rm asym}(\vartheta_1)}\over{d\vartheta_1}}
{{d v^{\rm asym}(\vartheta_2)}\over{d\vartheta_2}}
\biggr]K(\vartheta_1,\vartheta_2)\label{160}
\end{eqnarray}
we obtain
\begin{equation}
\Delta E_p={{8\mu}\over{\pi(1+\chi)}}\sum_{i,j=1}^N \biggl(
u^{\rm asym}_i u^{\rm asym}_j+ v^{\rm asym}_i v^{\rm asym}_j
\biggr) M_{ij}^p .
\label{161}
\end{equation}
To calculate $M_{ij}^p$ we substitute (\ref{152a}) directly into the
RHS of (\ref{160}) and read off the corresponding coefficient,
given by the following three equations:
\begin{eqnarray}
&&M_{ij}^p=f_2(i,j)+f_2(i+1,j+1)-f_2(i+1,j)\nonumber\\
&&\qquad \quad -f_2(i,j+1)\nonumber\\\nonumber\\
&&f_2(i,j)={3\over 4}+{1\over{(\cos\vartheta_{i-1}-\cos\vartheta_i)
(\cos\vartheta_{j-1}-\cos\vartheta_j)}}\biggl[\nonumber\\
&&f_1(i,j)+f_1(i-1,j-1)-f_1(i-1,j)-f_1(i,j-1)\biggr]\nonumber\\
\nonumber\\&&f_1(i,j)=\nonumber\\
&&\cases{{{{(\cos\vartheta_i-\cos\vartheta_j)}^2}\over 4}
\ln\bigg|\sin{{\vartheta_i-\vartheta_j}\over 2}
\sin{{\vartheta_i+\vartheta_j}\over 2}\bigg|,&if $i\ne j$;\cr
0,&otherwise,\cr}\nonumber\\
\label{162}
\end{eqnarray}
where $\vartheta_0=0$, $\vartheta_{N+1}=\pi$, and $\vartheta_i$
parameterizes the $i$-th kink (kinks are equally spaced in real space):
\begin{equation}
\vartheta_i=\arccos\biggl(1-{{2 i}\over{N+1}}\biggr),\ i\in[1,N] .
\end{equation}
From (\ref{153a}) and (\ref{161}) we find the quadratic deviation
from the saddle point energy
\begin{eqnarray}
\Delta E &=& \Delta E_c +\Delta E_p +\Delta E_{\rm continuous}
\label{164}\\
&=&\alpha\ell_c\sum_{i,j=1}^N\alpha_i\alpha_j M^c_{ij}+
{{8\mu}\over{\pi(1+\chi)}}\sum_{i,j=1}^N \biggl(
u^{\rm asym}_i u^{\rm asym}_j\nonumber\\
&&+ v^{\rm asym}_i v^{\rm asym}_j
\biggr) M_{ij}^p+\Delta E_{\rm continuous}.\nonumber
\end{eqnarray}
Thus the multiplicative factor $Z_f$ to the partition function of
the elastic material with one cut in the lattice regularization
is given by
\begin{eqnarray}
Z_f&=&\prod_{n=1}^N\int_{-\infty}^{+\infty}d\alpha_n\exp
\biggl\lbrace -\beta\alpha\ell_c\sum_{i,j=1}^N
\alpha_i\alpha_j M^c_{ij}\biggr\rbrace\label{165}\\
&&\prod_{n=1}^N\int\int_{-\infty}^{+\infty}{{du_n^{\rm asym}
dv_n^{\rm asym}}\over{4\lambda^2}}\exp\biggl\lbrace -\beta
{{8\mu}\over{\pi(1+\chi)}}\nonumber\\
&&\sum_{i,j=1}^N \biggl(
u^{\rm asym}_i u^{\rm asym}_j+ v^{\rm asym}_i v^{\rm asym}_j
\biggr) M_{ij}^p\biggr\rbrace\nonumber\\
&& Z_{\rm continuous}\nonumber\\
&=& {\biggl({{\pi}\over{\beta\alpha\ell_c}}\biggr)}^{N/2}
{\det}^{-1/2} M_{ij}^c\ {\biggl({{\pi^2 (1+\chi)}\over{32\beta\mu\lambda^2}}
\biggr)}^N\nonumber\\ &&{\det}^{-1} M_{ij}^p\ Z_{\rm continuous}\nonumber
\end{eqnarray}
where $Z_{\rm continuous}$ is given by (\ref{146}).
The determinant coming from the curvy cuts $M_{ij}^c$ can be calculated
analytically. In section III we show that $\sin \pi n x/\ell$ are
eigenvectors of the operator (\ref{c5}), the continuous analog of $M_{ij}^c$.
One can explicitly check that for $n\in[1,N]$, vectors $\vec{m}_n=
\lbrace\sin\pi n i/(N+1)\rbrace$ are in fact eigenvectors of $M_{ij}^c$
with eigenvalues
\begin{equation}
\lambda_n={1\over{4(N+1)}}\ \sin^{-2} {{\pi n}\over{2(N+1)}}
\label{166}
\end{equation}
and so
\begin{equation}
\det M_{ij}^c =\prod_{n=1}^N\lambda_n={\biggl(N+1\biggr)}^{-(N+1)} .
\label{167}
\end{equation}
(To obtain (\ref{167}), we take the limit $x\to 0$ of
\begin{equation}
\sin 2(N+1) x=2^{2 N+1} \prod_{k=0}^{2 N+1}\sin\biggl(x+{{k\pi}\over{2(N+1)
}}\biggr)\label{167a}
\end{equation}
\cite{rushic}, to get
\begin{equation}
N+1=4^N\prod_{k=1}^{2 N+1}\sin{{k\pi}\over{2(N+1)
}}=4^N\prod_{k=1}^N\sin^2{{k\pi}\over{2(N+1)}} .\label{167b}
\end{equation}
With (\ref{167b}), the calculation in (\ref{167}) becomes straightforward.)
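Equations (\ref{154a}), (\ref{166}), (\ref{167}) and (\ref{167b}) are also
easy to confirm numerically; a sketch (Python/{\tt numpy}):
\begin{verbatim}
import numpy as np

N = 20
i = np.arange(1, N + 1)
Mc = (-np.outer(i, i)/(N + 1)**2
      + np.minimum.outer(i, i)/(N + 1))      # (154a)

n = np.arange(1, N + 1)
lam = 1/(4*(N + 1)
         * np.sin(np.pi*n/(2*(N + 1)))**2)   # (166)
print(np.allclose(np.sort(np.linalg.eigvalsh(Mc)),
                  np.sort(lam)))             # True
print(np.linalg.det(Mc),
      (N + 1.0)**(-(N + 1)))                 # (167)
print(4.0**N
      * np.prod(np.sin(np.pi*n/(2*(N + 1)))**2))
# -> N + 1, i.e. (167b)
\end{verbatim}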
Recalling that $N+1=\ell_c/\lambda$, we can rewrite (\ref{165})
making use of (\ref{167})
\begin{eqnarray}
Z_f&=&\sqrt{{\beta\alpha\lambda}\over\pi}{\biggl({\pi\over{\beta\alpha\lambda}}
\biggr)}^{\ell_c/2\lambda}{\biggl({{\ell_c}\over\lambda}\biggr)}^{1/2}\
\sqrt{{32\beta\mu\lambda^2}\over{\pi^2 (1+\chi)}}\nonumber\\
&&{\biggl({{\pi^2 (1+\chi)}\over{32\beta\mu\lambda^2}}
\biggr)}^{\ell_c/2\lambda}{\det}^{-1} M_{ij}^p\ Z_{\rm continuous} .
\label{168}
\end{eqnarray}
Note that the first three factors on the RHS of (\ref{168})
(coming from the curvy cut fluctuations) have the asymptotic form,
$N\to\infty$,
\begin{equation}
\sqrt{{\beta\alpha\lambda}\over\pi}{\biggl({\pi\over{\beta\alpha\lambda}}
\biggr)}^{\ell_c/2\lambda}{\biggl({{\ell_c}\over\lambda}\biggr)}^{1/2}
\approx N^{c_2} \exp\lbrace c_0+c_1 N\rbrace
\label{168a}
\end{equation}
with $c_0=0$, $c_1=\ln(\pi/\beta\alpha\lambda)/2$ and $c_2=1/2$.
We were unable to obtain an analytical expression for the surface phonon
determinant $\det M^p_{ij}$. For $N=2\ldots 100$ kinks we calculate
the determinant numerically and fit its logarithm
with $f(N)= p_0+p_1 N+ p_2 \ln N$ (Figure \ref{f6}),
\begin{equation}
\det M^p_{ij}= N^{p_2} \exp\lbrace p_0+p_1 N\rbrace .
\label{169}
\end{equation}
We find $p_0=0.09\pm 0.02$, $p_1=0.166\pm 0.002$ and $p_2=0.24\pm 0.05$.
(We expect that the surface phonon fluctuations contribute to $Z_f$
similarly to the curvy cut fluctuations (\ref{168a}) --- hence the
form of the fitting curve for $\det M^p_{ij}$.)
\begin{figure}
\centerline{
\psfig{figure=figure7.ps,width=3truein}}
{\caption{We calculate numerically the logarithm of the
surface phonon determinant, $\det M^p_{ij}$,
for $N=2\ldots 100$ kinks and fit the result with $f(N)=p_0+p_1 N+ p_2\ln N$.
}
\label{f6}}
\end{figure}
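For readers who wish to reproduce Figure \ref{f6}, a sketch of the
construction of $M^p_{ij}$ from (\ref{162}) and of the fit (Python with
{\tt numpy}; this assumes our reading of the index conventions in
(\ref{162}) and should roughly reproduce the quoted values of $p_0$, $p_1$
and $p_2$):
\begin{verbatim}
import numpy as np

def th(k, N):           # theta_0 = 0, theta_{N+1} = pi
    return np.arccos(1.0 - 2.0*k/(N + 1))

def f1(i, j, N):
    if i == j:
        return 0.0
    ti, tj = th(i, N), th(j, N)
    s = abs(np.sin((ti - tj)/2)*np.sin((ti + tj)/2))
    return (np.cos(ti) - np.cos(tj))**2/4*np.log(s)

def f2(i, j, N):
    d = 4.0/(N + 1)**2  # kinks equally spaced in t
    return 0.75 + (f1(i, j, N) + f1(i-1, j-1, N)
                   - f1(i-1, j, N) - f1(i, j-1, N))/d

def Mp(N):
    return np.array(
        [[f2(i, j, N) + f2(i+1, j+1, N)
          - f2(i+1, j, N) - f2(i, j+1, N)
          for j in range(1, N + 1)]
         for i in range(1, N + 1)])

Ns = np.arange(2, 101)
ld = np.array([np.linalg.slogdet(Mp(N))[1]
               for N in Ns])
basis = np.column_stack(
    [np.ones_like(Ns, float), Ns, np.log(Ns)])
p = np.linalg.lstsq(basis, ld, rcond=None)[0]
print(p)   # approx (0.09, 0.166, 0.24)
\end{verbatim}
(The double loop is the transparent rather than the fastest implementation;
for a quick run one can lower the upper limit on $N$.)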
From (\ref{168}-\ref{169})
\begin{eqnarray}
Z_f&=&\exp\lbrace p_1-p_0\rbrace{\biggl({{32\beta^2\mu\alpha
\lambda^3}\over{\pi^3(1+\chi)}}\biggr)}^{1/2}{\biggl({{\ell_c}\over
\lambda}\biggr)}^{-p_2+1/2}\nonumber\\
&&{\biggl({{\pi^3(1+\chi)\exp\lbrace -2 p_1\rbrace}
\over{32\beta^2\mu\alpha\lambda^3}}\biggr)}^{\ell_c/2\lambda}
\ Z_{\rm continuous}\label{170}
\end{eqnarray}
which following (\ref{120}) gives the imaginary part of the free energy
in the lattice regularization
\begin{eqnarray}
{\rm Im}F^{\rm lattice}&=&\exp\lbrace p_1-p_0\rbrace
{\biggl({{32\beta^2\mu\alpha
\lambda^3}\over{\pi^3(1+\chi)}}\biggr)}^{1/2}{\biggl({{\ell_c}\over
\lambda}\biggr)}^{-p_2+1/2}\nonumber\\
&&{\biggl({{\pi^3(1+\chi)\exp\lbrace -2 p_1\rbrace}
\over{32\beta^2\mu\alpha\lambda^3}}\biggr)}^{\ell_c/2\lambda}
{\rm Im}F^{\rm simple}\label{171}
\end{eqnarray}
with ${\rm Im}F^{\rm simple}$ from (\ref{i4}).
Throughout the calculation we have ignored the kinetic terms in the energy of
the elastic material: their behavior is quite trivial, as momenta and positions
decouple. Because we introduce new degrees of freedom with our ``splitting
atoms'' model for the crack, we discuss the effects of the corresponding new
momenta. Before ``splitting'', $(\ell_c/\lambda-1)$ atoms along the cut
contribute
\begin{eqnarray}
&&Z_{\rm kinetic}^u=\nonumber\\
&&\prod_{i=1}^{\ell_c/\lambda-1}\int\int dp_{i_x}dp_{i_y}
{\biggl({{\lambda}\over{2\pi\hbar}}\biggr)}^2
\exp\biggl\lbrace-\beta\sum_{j=1}^{\ell_c/\lambda-1}
{{p_{j_x}^2+p_{j_y}^2}\over{2 m}}
\biggr\rbrace\nonumber\\
\label{k1}
\end{eqnarray}
to the partition function of the uncracked material $Z_0$.
(We do not consider the contribution
to the partition function from the bulk atoms --- they contribute in
a same way to $Z_0$ as they do to $Z_1$, and thus drop out from the
calculation of the imaginary part (\ref{120}).)
The phase space integration measure for a classical statistical
system is $dx dp/2\pi\hbar$; because we integrated out the displacements
with the weight $1/\lambda$ to make them dimensionless,
the momentum integrals have measure $dp \lambda/2\pi\hbar$, (\ref{k1}).
The formation of the cut increases the number of the kinetic degrees
of freedom by $(\ell_c/\lambda-1)$ (the number of split atoms).
The split atoms contribute
\begin{eqnarray}
Z_{\rm kinetic}^s&=&\prod_{i=1}^{2(\ell_c/\lambda-1)}\int\int dp_{i_x}dp_{i_y}
{\biggl({{\lambda}\over{2\pi\hbar}}\biggr)}^2\nonumber\\
&&\exp\biggl\lbrace
-\beta\sum_{j=1}^{2(\ell_c/\lambda-1)}{{p_{j_x}^2+p_{j_y}^2}\over{ m}}
\biggr\rbrace\label{k2}
\end{eqnarray}
to the partition function of the material with the cut.
From (\ref{120}), (\ref{k1}) and (\ref{k2}) the kinetic energy of
the elastic material modifies the imaginary part of the free energy
by a factor $Z_k$:
\begin{eqnarray}
{\rm Im}F^{\rm lattice}&\to& Z_k{\rm Im}F^{\rm lattice} \label{k3}
\end{eqnarray}
with
\begin{equation}
Z_k={{Z_{\rm kinetic}^s}\over{Z_{\rm kinetic}^u}}=
{\biggl({{m\lambda^2}\over{8\pi\beta\hbar^2}}\biggr)}^{\ell_c/\lambda-1}.
\label{k4}
\end{equation}
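The per-atom bookkeeping behind (\ref{k4}) can be checked symbolically: a
whole atom of mass $m$ is replaced by two half-atoms of mass $m/2$, whose
kinetic energy $p^2/m$ appears in (\ref{k2}). A sketch (Python/{\tt sympy}):
\begin{verbatim}
import sympy as sp

m, lam, beta, hbar, p = sp.symbols(
    'm lambda beta hbar p', positive=True)
g2 = (lam/(2*sp.pi*hbar))**2
Zw = g2*sp.integrate(sp.exp(-beta*p**2/(2*m)),
                     (p, -sp.oo, sp.oo))**2
Zh = g2*sp.integrate(sp.exp(-beta*p**2/m),
                     (p, -sp.oo, sp.oo))**2
print(sp.simplify(Zh**2/Zw))
# -> m*lambda**2/(8*pi*beta*hbar**2):
#    the per-split-atom factor in (k4)
\end{verbatim}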
Note that for both ($\zeta$-function and lattice) regularizations
the effect of the quadratic fluctuations can be absorbed into a
renormalization of the prefactor of the imaginary part of the free energy
calculated in the simplified model without the quadratic fluctuations,
(\ref{i4}), and of the material surface tension $\alpha$: the multiplicative
factor to the imaginary part of the free energy has the generic form
\begin{equation}
{\rm Im} F^{\rm simple}\to n_0{\biggl({{\ell_c}\over\lambda}\biggr)}^{n_1}
\exp\biggl\lbrace n_2 {{\ell_c}\over\lambda}\biggr\rbrace {\rm Im}
F^{\rm simple}\label{172}
\end{equation}
where the first two factors renormalize
the prefactor of ${\rm Im} F^{\rm simple}$ and the exponential factor
can be absorbed into ${\rm Im} F^{\rm simple}$ through an effective
renormalization of the surface tension
\begin{equation}
\alpha\to\alpha_r=\alpha+{1\over{2\beta\lambda}}n_2 .
\label{173}
\end{equation}
From (\ref{k3}), (\ref{k4}) it follows that in case of the lattice
regularization, the inclusion of the kinetic energy
of the elastic material shifts the constants $n_0$ and $n_2$, thus
preserving (\ref{172}):
\begin{eqnarray}
n_0&\to& n_0 \biggl({{8\pi\beta\hbar^2}\over{m\lambda^2}}\biggr)\label{k5}\\
n_2&\to& n_2 +\ln{{m\lambda^2}\over{8\pi\beta\hbar^2}}\nonumber
\end{eqnarray}
The calculation of the kinetic terms in the $\zeta$-function regularization
is more complicated. We however have no reason to believe that
it will change the form (\ref{172}).
\section{The asymptotic behavior of the inverse bulk modulus}
In our earlier work\cite{we}, we discussed how the thermal instability
of elastic materials with respect to fracture under infinitesimal
stretching load determines the asymptotic behavior of the high order elastic
coefficients. Specifically, for the inverse bulk modulus $K(P)$
in two dimensions (material under compression)
\begin{eqnarray}
{1 \over K(P)} &=& -{1 \over A} \left({\partial A \over \partial P}\right)_{
\beta}= c_0 + c_1 P + \cdots + c_n P^n + \cdots
\label{179}
\end{eqnarray}
we found within linear elasticity and ignoring the quadratic
fluctuations,
\begin{equation}
{c_{n+1} \over c_n}\rightarrow - n^{1/2}
{\biggl({{ \pi (\chi+1)}\over {64 \beta\mu \alpha^2 } } \biggr)}^{1/2}
\mbox{\hspace{0.1in} as {\it n}} \rightarrow\infty,
\label{180}
\end{equation}
which indicates that the high--order terms $c_n$ roughly grow as $(n/2)!$
and so the perturbative expansion for the inverse bulk modulus
is an asymptotic one.
In this section we show that, except for the temperature dependent
renormalization of the surface tension $\alpha\to\alpha_r=\alpha+
O(1/\beta)$, (\ref{180}) remains true even if we include the quadratic
fluctuations around the saddle point (the critical crack); moreover,
we argue that (\ref{180}) is also unchanged by the nonlinear corrections
to the linear elastic theory near the crack tips.
We review how one can calculate the high order coefficients of the
inverse bulk modulus\cite{we}. The free energy $F(T)$ of the elastic material
is presumably an analytic function in the complex $T$ plane for small
$T$, except for a branch
cut along $T\in[0,+\infty)$ --- the axis of stretching. (We show this explicitly
in the calculation within linear elastic theory without the
quadratic fluctuations\cite{we}.) It is assumed here that neither nonlinear
effects near the crack tips nor the quadratic fluctuations
change the analyticity domain of the free energy for reasonably
small $T$ (i.e. $T\leq Y$).
One can then use Cauchy's theorem to express
the free energy of the material under compression $F(-P)$,
(Figure \ref{f7}):
\begin{equation}
F(-P)={1 \over{2 \pi i}}\oint\limits_{\gamma}{{F(T)\over{T+P}}} dT .
\label{181}
\end{equation}
\begin{figure}
\centerline{
\psfig{figure=figure8.ps,width=3truein}}
{\caption{The free energy of the elastic material $F(T)$ is analytical
in the complex $T$ plane except for a branch cut
$T\in[0,+\infty)$. This allows a Cauchy representation for
the free energy $F(-P)$ of the material under compression.}
\label{f7}}
\end{figure}
The contribution to (\ref{181}) from the arc EFA goes to zero as the
latter shrinks to a point. In this limit we have
\begin{eqnarray}
F(-P)&=&{1 \over{2 \pi i}}\int\limits_0^B{{F(T+i 0)-F(T-i 0)}\over{T+P}} dT
\nonumber\\
&&+{1 \over{2 \pi i}}\oint\limits_{\rm BCD}{{F(T)}\over{T+P}} dT\nonumber\\
&=&{1 \over{\pi}}\int\limits_0^B{{{\rm Im}F(T)}\over{T+P}} dT
+{1 \over{2 \pi i}}\oint\limits_{\rm BCD}{{F(T)}\over{T+P}} dT .
\label{181a}
\end{eqnarray}
As was first established for similar problems in field
theory\cite{f1}--\cite{f3}, (\ref{181a}) determines the high-order
terms in the expansion of the free energy $F(-P)=\sum_n {f_n P^n}$
\begin{equation}
f_n={{(-1)}^n \over \pi}\int\limits_0^B{{{\rm Im} F(T)} \over
{T^{n+1}}} dT+{{(-1)}^n \over {2\pi i}}\oint\limits_{\rm BCD}{{F(T)} \over
{T^{n+1}}} dT .
\label{182}
\end{equation}
The second integral on the RHS of (\ref{182}) produces a
convergent series and is hence unimportant for the asymptotics:
the radius of convergence, by the ratio test, is of
the order of the radius of the circle BCD (i.e. larger than $P$ by
construction). The first integral generates the asymptotic divergence
of the inverse bulk modulus expansion:
\begin{equation}
f_n\to{{(-1)}^n \over \pi}\int\limits_0^B{{{\rm Im} F(T)} \over
{T^{n+1}}} dT
\mbox{\hspace{0.1in} as {\it n}} \rightarrow\infty .
\label{182a}
\end{equation}
Once a perturbative expansion for the free energy is known, one can calculate
the power series expansion for the inverse bulk modulus using
the thermodynamic relation
\begin{equation}
{1 \over K(P) }={1 \over {P A }}{\biggl ({{\partial F(-P)} \over
{\partial P}}\biggr)_{\beta}}
\label{183}
\end{equation}
so that
\begin{equation}
{{c_{n+1}}\over{c_n}}={{(n+3)f_{n+3}}\over{(n+2) f_{n+2}}} .
\label{184}
\end{equation}
Note that because the saddle point calculation becomes more and more accurate
as $T \to 0$, and because the integrals in equation (\ref{182a}) are dominated
by small $T$ as $n \to \infty$, using the saddle--point form for the
imaginary part of the free energy yields the correct $n\to\infty$ asymptotic
behavior of the high-order coefficients $f_n$ in the free energy.
Following (\ref{i4}) and (\ref{172}) the imaginary part of the
free energy including the quadratic fluctuations is given by
\begin{eqnarray}
{\rm Im} F(T)&=&{{2 n_0} \over {\beta^2 T\lambda^2} }
\biggl(1-{{n_1 n_2}\over{2\beta\lambda\alpha_r}}\biggr)
{\biggl({{2\beta\mu\lambda^2}\over{\chi+1}}\biggr)}^{1/2}
\nonumber\\&&{\biggl( {{32 \mu
\alpha_r } \over { \pi T^2 (\chi+1)\lambda} } \biggr)}^{n_1}
\biggl ({ \pi {{A} \over {\lambda ^2}}} \biggr )\exp{
\biggl\lbrace {{-32 \beta\mu
\alpha_r^2 } \over { \pi T^2 (\chi+1)} } \biggr\rbrace}
\nonumber\\ \label{185}
\end{eqnarray}
where $\alpha_r$ is given by (\ref{173}). Note that
$n_0$, $n_1$, $n_2$ and $\alpha_r$ in (\ref{185}) are regularization
dependent coefficients, by our calculations in the previous section.
From (\ref{182a}) and (\ref{184}) we find
\begin{equation}
{{c_{n+1}}\over{c_n}}\to -{\biggl({{ \pi (\chi+1)}\over {32 \beta\mu
\alpha_r^2 } } \biggr)}^{1/2}{{(n+3)\Gamma(n/2+n_1+2)}\over
{(n+2)\Gamma(n/2+n_1+3/2)}} .\label{186}
\end{equation}
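The $\Gamma$-functions in (\ref{186}) arise from the moments of the
essential singularity in (\ref{185}); the underlying integral is easy to
check numerically (a sketch with Python and the {\tt mpmath} library; $c$
stands for $32\beta\mu\alpha_r^2/\pi(\chi+1)$, and the values of $c$, $n_1$
and $n$ below are arbitrary placeholders):
\begin{verbatim}
import mpmath as mp

c, n1, n = 2.3, 0.7, 6  # placeholder values
lhs = mp.quad(lambda T: T**(-(n + 2 + 2*n1))
              * mp.exp(-c/T**2), [0, mp.inf])
rhs = (mp.mpf(1)/2*c**(-(n + 1 + 2*n1)/2)
       * mp.gamma((n + 1 + 2*n1)/2))
print(lhs, rhs)   # agree
\end{verbatim}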
(In the limit $n\to\infty$ (\ref{186}) is independent of $B$
in (\ref{182a}).) Using
\begin{equation}
{{\Gamma(n/2+n_1+2)}\over{\Gamma(n/2+n_1+3/2)}}\to\sqrt{n\over 2}
\mbox{\hspace{0.1in} as {\it n}} \rightarrow\infty
\label{187}
\end{equation}
we conclude from (\ref{186}) that
\begin{equation}
{{c_{n+1}}\over{c_n}}\to -n^{1/2}{\biggl({{ \pi (\chi+1)}\over {64 \beta\mu
\alpha_r^2 } } \biggr)}^{1/2}
\mbox{\hspace{0.1in} as {\it n}} \rightarrow\infty .
\label{188}
\end{equation}
Equation (\ref{188}) is a very powerful result: it shows that
apart from the temperature dependent (regularization dependent)
correction to the surface tension (\ref{173}), the asymptotic ratio of
the high order coefficients of the inverse bulk modulus is unchanged
by the inclusion of the quadratic fluctuations (at least for the
regularizations we have tried).
One would definitely expect the surface tension to be
regularization dependent: the energy to break an atomic bond explicitly
depends on the ultraviolet (short scale) physics, which is excluded in the
thermodynamic description of the system.
This has analogies with calculations in field theory, where physical
quantities calculated in different regularizations give the same
answer when expressed in terms of the renormalized masses and charges of
the particles\cite{zj}. Here only some physical quantities appear
regularization independent.
The analysis that leads to (\ref{188}) is based on linear elastic
theory, which is known to predict unphysical singularities
near the crack tips. From\cite{m}, the stress tensor component
$\sigma_{yy}$, for example, has a square root divergence
\begin{equation}
\sigma_{yy}\sim T\sqrt{{\ell_c}\over{4 r}}, \mbox{\hspace{0.1in}
as {\it r}} \rightarrow 0
\label{189}
\end{equation}
as one approaches the crack tip. One might expect that the proper
nonlinear description of the crack tips changes the asymptotic behavior
of the high order elastic coefficients. We argue here that the linear
analysis nevertheless gives the correct asymptotic ratio (\ref{188}): the
{\it linear} elastic behavior dominates over the {\it nonlinear}
corrections in the asymptotics within our model.
It is clear that the vital question is how the energy
release of the saddle point (critical) crack is changed by nonlinear
processes (microcracking, emission of dislocations, etc.) in the vicinity
of the crack tips as $T\to 0$. Following\cite{fbs} we
distinguish in the crack system two well-defined zones: the outer zone,
consisting exclusively of linear elastic material, transmits the applied
traction to the inner, crack tip zone where the nonlinear processes take
place (Figure \ref{f8}). Such separation introduces
two length scales to the problem: $r_{\rm nl}$ and $r_{\rm cross}$.
The first scale determines the size of the nonlinear process zone near
the crack tips. It can be readily
estimated from (\ref{189}) by requiring the stresses at
the boundary of the nonlinear zone to be of the order of the atomic
ones, $\sigma_{ij}\sim Y$:
\begin{equation}
r_{\rm nl}\sim\ell_c{\biggl({{T}\over {Y}}\biggr)}^2\sim{\alpha\over Y} .
\label{190}
\end{equation}
The second length scale
is a crossover length $r_{\rm cross}$ where the elastic fields near a
crack tip deviate from the inner zone $\sqrt{r}$ strain asymptotics
to depend on the outer-zone boundary conditions (i.e. the length of
the crack in our case). Normally, $r_{\rm cross}$ is only
a few times smaller than the crack length\cite{ewalds},
\cite{jenif} --- for the present calculation
we assume $r_{\rm cross}\sim \ell_c\sim Y\alpha/T^2$, (\ref{106}).
\begin{figure}
\centerline{
\psfig{figure=figure9.ps,width=3truein}}
{\caption{In the crack system there are two well-defined
zones: the outer zone,
consisting exclusively of linear elastic material, and
the inner, crack tip zone where the nonlinear processes take
place. Such separation introduces two length scales: the
first $(r_{\rm nl})$ determines the size of the nonlinear zone, and
the second $(r_{\rm cross})$ gives the scale where the elastic
fields near the crack tip deviate from the ones predicted by small scale
yielding (SSY) to comply with the outer zone boundary conditions.}
\label{f8}}
\end{figure}
First, let's consider the energy in the nonlinear zone.
The saddle point energy is $\alpha\ell_c$ and diverges
as $1/T^2$ as $T\to 0$, while the elastic energy in the nonlinear zone
$E_{\rm nl}$ is bounded by the linear value
\begin{equation}
E_{\rm nl}\sim \int_{0}^{r_{\rm nl}}dr\ r \sigma_{ij}^2(r)/Y
\sim \alpha^2/Y.
\label{191}
\end{equation}
Since $E_{\rm nl}$ is fixed as $T\to 0$, it renormalizes $n_0$ in
(\ref{172}) and hence does not affect the asymptotics (\ref{188}).
Second, we consider how the existence of the inner (nonlinear) zone
changes the energy in the outer (linear) zone.
The elastic equations around the crack tip
allow many solutions\cite{jenif}; in each, the stresses
$\sigma_{ij}$ have the form $C_b r^b f_b(\theta)$, $r_{\rm nl}\ll
r\ll r_{\rm cross}$, in polar coordinates $(r,\theta)$ centered
at the crack tip, where $b$ is a half-integer, the $C_b$ are constants,
and the $f_b$ are known trigonometric functions.
Linear fracture mechanics predicts $b=-1/2$ to be the most singular
solution (compare with (\ref{189})) only because modes with $b<-1/2$
would give rise to singular displacements at the crack tip.
Incorporation of the nonlinear zone $r<r_{\rm nl}$, however,
removes this constraint. In other words,
the nonlinear zone introduces new boundary conditions for linear
elasticity solutions, allowing them to be more singular.
The dominance of the $b=-1/2$ solution is known as the small scale yielding
(SSY) approximation. Analyzing the mode III anti-plane shear fracture,
Hui and Ruina argued\cite{hui} that the SSY approximation becomes more and
more accurate as $\epsilon=r_{\rm nl}/r_{\rm cross}\to 0$.
(They expect that the same result can be extended to mode I fracture.)
Clearly, in our case $\epsilon\to 0$ as $T\to 0$; thus
the dominant contribution still comes from the $b=-1/2$ solution.
In fact, following \cite{jenif} we expect
\begin{eqnarray}
\sigma_{ij}\lbrace C_n\rbrace&=&T\biggl(C_{-1/2}\sqrt{{\ell_c}\over{r}}
\nonumber\\ &&+\sum_{n=\lbrace
\cdots,\ -7/2,\ -5/2,\ -3/2\rbrace}C_n{\biggl({r\over {r_{\rm nl}}}\biggr)}^n
\nonumber\\ &&+\sum_{n=\lbrace \ 1/2,\ 3/2,\ 5/2,\cdots\rbrace} C_n
{\biggl({r\over {l_c}}\biggr)}^n\biggr).
\label{191a}
\end{eqnarray}
The inelastic stresses at the outer boundary of the nonlinear zone
$r\sim r_{\rm nl}$ are of order $Y$; thus from (\ref{191a}), for $n<-1/2$,
$C_n=O(\epsilon^{-1/2})$
(recall that $\epsilon=r_{\rm nl}/r_{\rm cross}\sim (T/Y)^2$).
These more singular terms in turn generate corrections to $C_n$
with $n\ge 1/2$ of order $O(\epsilon)$. (One can see this from the fact
that the dominant contribution from the more singular terms at
$r\sim \ell_{c}$ is $C_{-3/2} ({\ell_c}/r_{\rm nl})^{-3/2}\sim \epsilon$.)
The dependence of $C_n$ in (\ref{191a}) on the polar angle $\theta$ is implied.
There is a formal analogy between the arguments presented here for
the stress fields in the crossover zone and the quantum mechanical
problem of the bound states of the hydrogen atom.
When we treat the hydrogen nucleus as a point charge, for each orbital
quantum number, the electron wave function has two solutions near
the origin (the position of the nucleus): one is finite as $r\to 0$
and the other one is divergent\cite{ll,ps}.
In the point charge problem one immediately discards the divergent
solution because it cannot be normalized and thus cannot
represent a bound state. However,
in a finite-size nucleus model one notices that the electron wave
function outside the nucleus is a mixture of the finite and the
divergent solutions of the point charge problem. The normalization
problem is resolved because inside the nucleus the electron wave function
satisfies a different equation and becomes finite.
The radius of the nucleus serves as a short-distance cutoff similar
to $r_{\rm nl}$ in the crack problem.
The change in the contribution to the saddle point energy
from the outer zone as a result of the introduction
of the nonlinear zone, $\delta E_{\rm outer}$, is
given by
\begin{equation}
\delta E_{\rm outer}\sim \int_{r_{\rm nl}}^{\ell_c} dr\ r{{
\sigma_{ij}^2\lbrace C_n\rbrace-\sigma_{ij}^2\lbrace C_n^{\rm linear}
\rbrace}\over Y}.
\label{192a}
\end{equation}
The dominant contribution to (\ref{192a}) comes from the cross term between
$n=-1/2$ and $n=-3/2$ corrections in (\ref{191a}):
\begin{equation}
\delta E_{\rm outer}\sim {{\alpha^2}\over Y} \ln{{T\over Y}}:
\label{192}
\end{equation}
the correction renormalizes the $n_1$
coefficient in the imaginary part of the free energy (\ref{185})
(regularization dependent in the first place), leaving the asymptotic
ratio (\ref{188}) intact.
It is no surprise that the nonlinear effects do not change
the generic form of the imaginary part (\ref{185}). The
detailed nonlinear description of the crack tips is a specification
of the ultraviolet (short scale) physics and thus is nothing but
another choice of the regularization. From our experience with
the $\zeta$-function and lattice regularizations, we
naturally expect that this {\it nonlinear} regularization
preserves the form of the imaginary part (\ref{185}).
Finally, let's consider the enhanced nucleation of secondary cracks
in the high-strain outer-zone region --- a possible cause for breakdown
of the ``dilute gas'' approximation. Inside the nonlinear zone of
the saddle point crack, the critical crack length for a second crack
is of the order $\alpha/Y$ (from (\ref{106}) with $T\sim Y$) and thus such
microcracks can be easily created. In fact, the nucleation of
these microcracks may well be the dominant mechanism of the main crack
propagation. Micro-crack nucleation in the nonlinear zone will
change the stress fields near the crack tips, but as we discuss above,
has little impact on the saddle point energy (as the total energy in the
nonlinear zone is finite). We show now that such secondary crack nucleation
is exponentially confined to the nonlinear zone of the main crack.
The probability $W(r_0)$ that a second crack is nucleated somewhere at
$r>r_0\sim r_{\rm nl}$ ($r=0$ corresponds to a crack tip) is given by
\begin{equation}
W(r_0)\sim \int\limits_{r_0}^{+\infty}{{r dr}\over{\lambda^2}}
\exp\lbrace -\beta\alpha\ell(r)\rbrace
\label{195}
\end{equation}
where $\ell(r)$ is a critical crack length at distance $r$ from the
tip of the critical crack. From (\ref{106}) with $T$ replaced with
the stress field near the crack tip given by (\ref{189}), we find
\begin{equation}
\ell(r)\sim {{\alpha Y}\over{{\sigma_{ij}(r)}^2}}
\sim r
\label{196}
\end{equation}
Combining (\ref{195}) with (\ref{196}) gives
\begin{equation}
W(r_0)\sim \int\limits_{r_0}^{+\infty}{{r dr}\over{\lambda^2}}
\exp\lbrace -\beta\alpha r\rbrace = {{1+\beta\alpha r_0}\over{
{(\beta\alpha\lambda)}^2}}\exp\lbrace -\beta\alpha r_0\rbrace
\label{198}
\end{equation}
The exponential dependence of $W(r_0)$ on the boundary
of the nonlinear zone $r_0\sim r_{\rm nl}$ in equation (\ref{198})
means that the nucleation of another crack (in addition to the
saddle point one) is exponentially confined to the nonlinear zone,
justifying the dilute gas approximation.
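The closed form (\ref{198}) follows from an elementary integral. As a
sanity check, the sketch below compares a direct numerical evaluation of
(\ref{195}), with $\ell(r)\sim r$ from (\ref{196}), against the closed
form; the parameter values are arbitrary illustrative choices (in units
where the proportionality constants are unity):
\begin{verbatim}
# Sketch: check W(r0) = (1 + b r0) exp(-b r0) / (b lam)^2,
# with b = beta*alpha, against numerical integration of (195).
from math import exp
from scipy.integrate import quad

b, lam = 2.0, 0.3   # b = beta*alpha; arbitrary illustrative values

def W_numeric(r0):
    val, _ = quad(lambda r: r / lam**2 * exp(-b * r),
                  r0, 60.0 / b)   # upper limit effectively infinity
    return val

def W_closed(r0):
    return (1.0 + b * r0) * exp(-b * r0) / (b * lam)**2

for r0 in (0.1, 1.0, 5.0):
    print(r0, W_numeric(r0), W_closed(r0))  # the two columns agree
\end{verbatim}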
\section{Other geometries, stresses, and fracture mechanisms}
In this section we discuss generalizations of our model,
more precisely its simplified version without the quadratic fluctuations.
We will do five things.
In {\bf (A)} we calculate the imaginary part of the free energy for
arbitrary uniform loading and find the high-order nonlinear corrections
to Young's modulus. We discuss the effects of dislocations and
vacancy clusters (voids) in {\bf (B)} and {\bf (C)}. Part {\bf (D)} deals with
three dimensional fracture through the nucleation of penny-shaped
cracks: we calculate the imaginary part of the free energy
and the asymptotic ratio of the successive coefficients of the
inverse bulk modulus. Finally, in {\bf (E)} we consider a non-perturbative
effect: the vapor pressure of a solid gas of bits fractured from
the crack surfaces, and show how it affects the saddle point calculation.
\subsection{Anisotropic uniform stress and the high order corrections
to Young's modulus.}
We calculated the essential singularity of the free energy
at zero tension only for uniform {\it isotropic} loads at infinity.
Within the approximation of ignoring the quadratic fluctuations,
we can easily generalize to any uniform loading. In general, consider
an infinite elastic material subject to a uniform asymptotic tension
with $\sigma_{\rm yy}=T$, $\sigma_{\rm xx}=\epsilon T$ ($0\le\epsilon < 1$)
and $\sigma_{\rm xy}=0$.
Using the strain-stress analysis of\cite{m} and following
(\ref{36})-(\ref{60}), we find the energy $E_{\rm release}$,
released from the
creation of the straight cut of length $\ell$ tilted by angle $\theta$ from
the $x$ axis
\begin{equation}
E_{\rm release}={{\pi T^2\ell^2(1+\chi)}\over{64\mu}}\biggl[
(1+\epsilon)+(1-\epsilon)\cos 2\theta\biggr].
\label{d2}
\end{equation}
(The isotropic result (\ref{st}) is restored for $\epsilon=1$.)
The important new feature that comes into play is that the crack rotation
ceases to be a zero-restoring-force mode.
Treating the crack rotation to quadratic order in $\theta$ from the
saddle point value $\theta=0$, we obtain the total energy
of the crack $E(\Delta\ell,\theta)$ similar to (\ref{105}-\ref{107}),
(\ref{107b})
\begin{equation}
E(\Delta\ell,\theta)=\alpha\ell_c-{{\alpha\Delta\ell^2}\over{\ell_c}}
+\alpha\ell_c (1-\epsilon)\theta^2.
\label{d4}
\end{equation}
As before, $\Delta\ell$ is the deviation of the crack length
from the saddle point value $\ell_c$, still given by (\ref{106}).
Following (\ref{i1})-(\ref{120}), the imaginary part of the free energy for a
dilute gas of straight cuts, excluding all quadratic fluctuations
except for the uniform contraction-expansion (mode $\Delta\ell$)
and the rotation (mode $\theta$) of the critical droplet, is given by
\begin{eqnarray}
{\rm Im} F^{\rm simple}(T,\epsilon)&=&\pm{\pi \over {2\beta^2\alpha\lambda} }
\biggl ({ {{A} \over {\lambda ^2}}} \biggr ){\biggl({1\over{1-\epsilon}}
\biggr)}^{1/2}\nonumber\\
&&\exp{ \biggl\lbrace {{-32 \beta\mu
\alpha^2 } \over { \pi T^2 (\chi+1)} } \biggr\rbrace}.
\label{d5}
\end{eqnarray}
One immediately notices an intriguing fact: the $\epsilon$-dependence of
the imaginary part is only in the prefactor, which, as we already know,
is regularization dependent anyway.
In particular, the latter means that the inverse Young's modulus
--- the elastic coefficient corresponding to the transition
with path $\epsilon=0$ --- will have the same asymptotic behavior
as that of the inverse bulk modulus (\ref{188}): the asymptotic ratio of the
high-order elastic coefficients of the inverse Young's modulus $Y(P)$
\begin{eqnarray}
{1 \over Y(P)} &=& -{1 \over A} \left({\partial A \over \partial P}\right)_{
\beta}= Y_0 + Y_1 P + \cdots + Y_n P^n + \cdots
\label{d6a}
\end{eqnarray}
($P$ in (\ref{d6a}) is a uniaxial compression) is given by
\begin{equation}
{{Y_{n+1}}\over{Y_n}}\to -n^{1/2}{\biggl({{ \pi (\chi+1)}\over {64 \beta\mu
\alpha^2 } } \biggr)}^{1/2}
\mbox{\hspace{0.1in} as {\it n}} \rightarrow\infty.
\label{d6}
\end{equation}
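The ratio (\ref{d6}) can be checked numerically. If, as in the
dispersion-relation argument that leads to (\ref{188}), the high-order
coefficients are proportional to inverse-tension moments of the essential
singularity, $Y_n\propto\int_0^\infty dT\, e^{-B/T^2}/T^{n+2}$ with
$B=32\beta\mu\alpha^2/[\pi(\chi+1)]$ the exponent in (\ref{d5}), then
$|Y_{n+1}/Y_n| = B^{-1/2}\,\Gamma\bigl(\frac{n+2}{2}\bigr)/
\Gamma\bigl(\frac{n+1}{2}\bigr)\to\sqrt{n/2B}$, which is the magnitude in
(\ref{d6}). A minimal numerical sketch, with $B$ an arbitrary stand-in value:
\begin{verbatim}
# Sketch: moments M_n = int_0^inf dT exp(-B/T^2) / T^(n+2)
#       = (1/2) B^(-(n+1)/2) Gamma((n+1)/2),
# so |M_{n+1}/M_n| -> sqrt(n/(2B)) at large n, as in (d6).
from math import sqrt, exp, lgamma

B = 3.0   # hypothetical value of the exponent scale

def ratio(n):
    return exp(lgamma((n + 2) / 2) - lgamma((n + 1) / 2)) / sqrt(B)

for n in (10, 100, 1000):
    print(n, ratio(n), sqrt(n / (2 * B)))  # converge to each other
\end{verbatim}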
\subsection{Dislocations}
We have forbidden dislocation nucleation and plastic flow in our
model. Dislocation emission is crucial for ductile fracture, but
by restricting ourselves to brittle fracture of defect-free
materials we have escaped many complications.
Dislocations are in principle important: the nucleation\cite{nelson}
barrier $E_{\rm dis}$ for two edge dislocations in an isotropic
linear-elastic material under a uniform tension $T$ with equal and
opposite Burgers vectors $\vec b$ is
\begin{equation}
E_{\rm dis}={{Y b^2}\over {4 \pi (1-\sigma^2)} }\ln {Y\over T} + E_0
\label{d1}
\end{equation}
where $E_0$ is a $T$ independent part that includes the dislocation
core energy.
The fact that $E_{\rm dis}$ grows like $\ln (1/T)$ as $T\to 0$ (much more
slowly than the corresponding barrier for cracks) tells us that
in more realistic models dislocations and the resulting plastic
flow\cite{ambegaokar} cannot be ignored.
While dislocations may not themselves lead to a catastrophic instability
in the theory (and thus to an imaginary part in the free energy?),
they will strongly affect the dynamics of crack nucleation (e.g., crack
nucleation on grain boundaries and dislocation tangles)\cite{ewalds,fbs}.
\subsection{Vacancy clusters}
We ignore void formation. It would seem natural to associate the
negative pressure (tension) $(-T)$ times the unit cell size with the
chemical potential $\mu$ of a vacancy. At negative chemical potentials,
the dominant fracture mechanism becomes the nucleation of vacancy
clusters or voids (rather than Griffith-type microcracks),
as noted by Golubovi\'c and collaborators\cite{golubovic}.
If we identify the chemical potential of a vacancy with $-T$,
we find the total energy of creating a circular vacancy of radius $R$,
$E_{\rm vac}(R)$, to be
\begin{equation}
E_{\rm vac}(R)=2\pi R\alpha- T\pi R^2.
\label{d7}
\end{equation}
From (\ref{d7}) the radius of the critical vacancy is $R_c=\alpha/T$ and
its energy is given by $E_{\rm vac}(R_c)=\pi\alpha^2/T$.
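This saddle point is elementary to verify symbolically; a minimal sketch:
\begin{verbatim}
# Sketch: saddle point of E_vac(R) = 2 pi R alpha - T pi R^2, eq. (d7)
import sympy as sp

R, alpha, T = sp.symbols('R alpha T', positive=True)
E = 2 * sp.pi * R * alpha - T * sp.pi * R**2
Rc = sp.solve(sp.diff(E, R), R)[0]
print(Rc)                           # alpha/T
print(sp.simplify(E.subs(R, Rc)))   # pi*alpha**2/T
\end{verbatim}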
The saddle point is a circular void because a circular void gains the most
energy ($\sim$ area of the void) for a given perimeter length.
In principle, the exact shape of the critical cluster is also affected
by the elastic energy release. The latter, however,
\begin{eqnarray}
E_{\rm release}(R_c)={{\pi T^2 R_c^2 (3\chi+1)}\over{8\mu}}
={{\pi \alpha^2 (3\chi+1)}\over{8\mu}}
\label{d8}
\end{eqnarray}
is fixed as $T\to 0$, and thus the energy of the vacancy is
dominated by $E_{\rm vac}(R_c)$ for small $T$.
(To obtain (\ref{d8}) we used the strain-stress analysis of\cite{m}
and expression (\ref{60}) for the energy release.)
Using the framework developed for crack nucleation, we find that in the
case of voids (again, ignoring the positive-frequency quadratic fluctuations)
the imaginary part of the free energy is given by
\begin{equation}
{\rm Im} F^{\rm simple}_{\rm vacancy}
(T)=\pm{1 \over {2\beta} }
\biggl ({ {{A} \over {\lambda ^2}}} \biggr ){\biggl({1\over{\beta T\lambda^2}}
\biggr)}^{1/2}\exp{ \biggl\lbrace {{-\pi \beta
\alpha^2 } \over T } \biggr\rbrace}.
\label{d9}
\end{equation}
(The special feature of the calculation (\ref{d9}) is that translations
are the only zero modes: the rotation of a circular vacancy cluster
does not represent a new state of the system.)
From (\ref{d9}), following (\ref{182}) and (\ref{184}), we obtain
the asymptotic ratio of the high-order coefficients of the inverse
bulk modulus
\begin{equation}
{{c_{n+1}}\over{c_n}}\to -{n\over {\pi \beta \alpha^2}}.
\label{d10}
\end{equation}
The divergence of the inverse bulk modulus is much stronger in
this case: the high-order coefficients grow as $c_n\sim n!$, rather
than as $(n/2)!$ (as for fracture through crack nucleation).
Whether (\ref{d10}) is a realistic result is an open
question. Fracture through vacancy cluster nucleation is
an unlikely mechanism for highly brittle materials:
the identification of $\mu$ with $(-T)$ demands a mechanism for
relieving elastic tension by the creation of vacancies.
The only bulk mechanism for vacancy formation is
dislocation climb, which must be excluded from consideration ---
the dislocations in highly brittle materials are immobile\cite{fbs}.
Vacancy clusters might be important for the fracture of ductile (non-brittle)
materials. However, the nucleation of vacancies must be considered
in parallel with the nucleation of dislocations. Because at small $T$
dislocations are nucleated much more easily (\ref{d1}) than vacancy clusters,
the dominant bulk mode of failure is much more likely to
be crack nucleation at a dislocation tangle or grain boundary --- as
indeed is observed in practice.
\subsection{Three dimensional fracture}
Our theory can be extended to describe a three dimensional fracture
transition as well. Studying elliptical cuts, Sih and Liebowitz\cite{sl}
found that a penny-shaped
cut in a three-dimensional elastic medium subject to a uniform isotropic
tension $T$ relieves the most elastic energy for a given area of the cut.
The energy to create a penny-shaped cut of radius $R$, $E_{\rm penny}(R)$,
is given by\cite{sl}
\begin{equation}
E_{\rm penny}(R)=2\pi\alpha R^2 - {{4(1-\sigma)R^3 T^2}\over{3\mu}}.
\label{d11}
\end{equation}
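The same symbolic exercise locates the saddle point of (\ref{d11}); the
resulting barrier scales as $T^{-4}$, which is the origin of the $T^{-4}$
in the exponent of (\ref{d12}) below. A sketch:
\begin{verbatim}
# Sketch: saddle of E_penny(R) = 2 pi alpha R^2
#                 - 4 (1-sigma) R^3 T^2 / (3 mu), eq. (d11)
import sympy as sp

R, alpha, T, mu, s = sp.symbols('R alpha T mu sigma', positive=True)
E = 2 * sp.pi * alpha * R**2 - 4 * (1 - s) * R**3 * T**2 / (3 * mu)
Rc = [r for r in sp.solve(sp.diff(E, R), R) if r != 0][0]
print(Rc)                           # pi*alpha*mu/((1 - sigma)*T**2)
print(sp.simplify(E.subs(R, Rc)))   # barrier, proportional to T**(-4)
\end{verbatim}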
The zero modes contribute in this case a factor $2\pi V/\lambda^3$ ---
$2\pi$ coming from the distinct rotations of the cut,
and $V/\lambda^3$ coming
from the translations of the cut.
Here we find the imaginary part of the free energy to be
\begin{eqnarray}
{\rm Im} F^{\rm simple}_{\rm penny}
(T)&=&\pm{1 \over {2\beta} }
\biggl ({2\pi {V \over {\lambda ^3}}} \biggr )
{\biggl({1\over{2\beta \alpha\lambda^2}}
\biggr)}^{1/2}\nonumber\\
&&\exp{ \biggl\lbrace {{- 2\beta\mu^2\pi^3\alpha^2
} \over {{3 (1-\sigma)}^2 T^4 } } \biggr\rbrace}
\label{d12}
\end{eqnarray}
and the asymptotic ratio of the high-order elastic coefficients of the
inverse bulk modulus
\begin{equation}
{{c_{n+1}}\over{c_n}}\to -{\biggl({{{3 (1-\sigma)}^2
}\over{2\beta\mu^2\pi^3\alpha^2}}\biggr)}^{1/4}{\biggl({n\over 4}
\biggr)}^{1/4}.\label{d13}
\end{equation}
\subsection{Vapor pressure}
The approach we used to calculate the imaginary part of
the free energy is a perturbative one. In a sense, nothing prohibits
us from considering cubic, quartic, etc. deviations from the saddle
point energy. In fact, it is possible to develop an analog of the
Feynman diagram technique (as in quantum electrodynamics\cite{ll} or
quantum field theory\cite{zj}) and calculate
the contribution to the imaginary part to any finite order.
It is important to realize that even if we did this, the result would
still be incomplete: we would miss interesting and important
physics coming from nonperturbative effects.
Here we discuss one such nonperturbative effect, namely the ``vapor''
pressure of a solid gas of bits fractured from the crack surface.
We find that including the vapor pressure, the essential singularity
shifts from $T=0$ to $T=-P_{\rm vapor}$.
Consider a dilute gas of straight cuts of arbitrary length with an
elliptical opening (mode $v_1$ in (\ref{z6})) and a solid gas of
fractured bits from the crack surface. Following\cite{we}, the
partition function of the material with one cut $Z_1$ under
a uniform isotropic tension $T$ is
\begin{eqnarray}
Z_1 &=& Z_0\biggl(\pi{A\over{\lambda^2}}\biggr)\int_{0}^{
\infty}{{d \ell}\over{\lambda}}\int_{0}^{\infty}
{{dv_1}\over{\lambda}} \exp\biggl\lbrace -\beta \biggl(
2\alpha \ell\label{vp1}\\
&&+{{2\pi \mu}\over {\chi+1}}v_1^2 -{{\pi T\ell}\over{2}} v_1\biggr)
\biggr\rbrace Z_{\rm gas}(\ell,v_1)\nonumber,
\end{eqnarray}
where $Z_{\rm gas}(\ell,v_1)$ is the partition function
of the gas of fractured
bits inside the crack of area $\pi\ell v_1/2$. It costs $2\alpha\lambda$
to fracture one bit of size $\lambda\times \lambda$ from a crack
step, so in an ideal gas approximation, the partition
function of the gas is determined by
\begin{eqnarray}
Z_{\rm gas}(\ell,v_1)&=&\sum_{N=0}^{\infty}\exp(-2\beta\alpha\lambda N)
{\biggl({{\pi\ell v_1}\over{2\lambda^2}}\biggr)}^N {1\over {N!}}
\label{vp2}\\
&=&\exp\biggl\lbrace {{\pi\ell v_1}\over{2\lambda^2}}\exp
(-2\beta\alpha\lambda)\biggr\rbrace.\nonumber
\end{eqnarray}
From (\ref{vp1}) and (\ref{vp2}) it follows that the partition
function of the gas effectively increases the tension $T$ by
the vapor pressure $P_{\rm vapor}$, $T\to T+P_{\rm vapor}$, where
\begin{equation}
P_{\rm vapor}={1\over{\beta\lambda^2}}\exp(-2\beta\alpha\lambda).
\label{vp3}
\end{equation}
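Both the resummation in (\ref{vp2}) and the resulting shift
$T\to T+P_{\rm vapor}$ can be checked directly; a sketch with arbitrary
illustrative parameter values:
\begin{verbatim}
# Sketch: (i) the sum (vp2) equals exp(x) with
#   x = (pi l v1 / (2 lam^2)) exp(-2 beta alpha lam);
# (ii) rewriting x = beta (pi l v1 / 2) P_vap shows the shift
#   T -> T + P_vap with P_vap from (vp3).
from math import exp, pi, factorial

beta, alpha, lam = 1.0, 1.5, 0.8   # illustrative values
l, v1 = 4.0, 0.6

x = (pi * l * v1 / (2 * lam**2)) * exp(-2 * beta * alpha * lam)
Z_sum = sum(x**N / factorial(N) for N in range(60))
print(Z_sum, exp(x))               # the two agree

P_vap = exp(-2 * beta * alpha * lam) / (beta * lam**2)
print(x, beta * pi * l * v1 / 2 * P_vap)  # identical: the shift
\end{verbatim}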
In particular, the essential singularity
of the free energy shifts from zero tension to minus the ``vapor''
pressure. This shift is clearly a nonperturbative
effect. We were able to describe it only by allowing topologically
different excitations in the system: a state of the elastic
material with a bit completely detached from the crack surface
{\it may not} be obtained by the continuous
deformation of the crack surface (surface phonons) or the cut shape
(curvy cuts).
At zero external pressure, our material is in the gas (fractured) phase
--- the solid becomes stable only at pressures above $P_{\rm vapor}$.
\section{Summary}
In this paper we studied the stress-induced phase transition of
elastic materials under external stress: an elastic compression
``phase'' under positive pressure goes to a fractured ``phase''
under tension. We showed that in a properly formulated
thermodynamic limit, the free energy of an infinite elastic material
with holes of predetermined shapes is independent of the shape of the
outer boundary as the latter goes to infinity. Under a
stretching load the free energy develops an imaginary part with an essential
singularity at vanishing tension. To calculate the essential singularity
of the free energy including quadratic fluctuations we
determined the spectrum and normal modes of
surface fluctuations of a straight cut, and proved that
under the uniform isotropic tension a
curvy cut releases the same elastic energy (to cubic order) as the
straight one with the same end-points.
The imaginary part of the free energy determines the asymptotic
behavior of the high-order nonlinear correction to the
inverse bulk modulus\cite{we}. We find that although the prefactor
and the renormalization of the surface tension are both regularization
dependent (once we include the quadratic fluctuations), the
asymptotic ratio of the high-order successive coefficients of the inverse
bulk modulus apparently is a regularization-independent result.
Within our model, the asymptotic ratio is unchanged by the inclusion
of the nonlinear effects near the crack tips.
We generalized the simplified model (without the quadratic fluctuations)
to anisotropic uniform stress and calculated the asymptotic behavior
of the high order nonlinear coefficients of the inverse Young's
modulus. We computed the imaginary part of the free energy
(and the corresponding divergence of the high-order coefficients of
the inverse bulk modulus) for the fracture mechanism through void
nucleation which dominates at small external pressures: we argue
that it may not occur in brittle fracture and should be preempted by
dislocation motion in ductile fracture.
We find that the simplified model applied to three-dimensional
fracture predicts a $(n/4)!$ divergence of the nonlinear coefficients
of the inverse bulk modulus.
Our results can be viewed as a straightforward extension to the
solid-gas sublimation point of Langer \cite{langer,langer2} and
Fisher's \cite{fisher} theory of the essential singularities at the
liquid-gas transition. Indeed, if we allow for vapor pressure in our
model, then our system will be in the gas phase at $P=0$,
as noted in section VII(E). The essential
singularity we calculate shifts from $P=0$ to the vapor pressure.
If we measure the nonlinear bulk modulus as an expansion about (say)
atmospheric pressure, it should converge --- but the radius of convergence
would be bounded by the difference between the point of expansion and the
vapor pressure.
\section*{Acknowledgment}
We acknowledge the support of DOE Grant DE-FG02-88-ER45364.
We would like to thank Yakov Kanter, Eugene Kolomeisky, Paul Houle, Tony
Ingraffea, Paul Wawrzynek, Lisa Wickham, Herbert Hui, Ken Burton and
Robb Thompson for useful conversations.
\section{Introduction}
The properties of the observable Universe are precisely constrained at redshift $z=1100$ by observations of the cosmic microwave background (CMB) \cite{Planck18I}, and at $z\lesssim 1$ by galaxy surveys.
In between, line intensity mapping (LIM) is a promising approach to fill the gap and study galaxy evolution and cosmology \cite{Kovetz17}.
Several promising lines, such as HI (21 cm), Ly-$\alpha$ (121.6 nm), H$\alpha$ (656.28 nm), [CII] (158 $\mu$m), and CO 1-0 (2.6 mm), are being targeted by ongoing and upcoming LIM experiments to map out the 3D large-scale structure (LSS) of the Universe at high redshift.
However, some periods of the Universe's history, such as the Dark Ages when it was mostly neutral, will remain very challenging to probe.
For instance, probing the Dark Ages with 21cm will require peering through overwhelmingly large foregrounds \cite{Haslam82, Rengelink97, Santos05}.
The lensing of the CMB contains information about the high-redshift Universe, including the epoch of reionization and the dark ages \cite{Lewis06}, and will be measured to sub-percent precision by upcoming experiments \cite{SO19, CMBS419}.
However, the contribution to CMB lensing from, e.g., the Dark Ages, is dwarfed by that from the low-redshift ($z\lesssim 1$) Universe.
Subtracting this low-redshift contribution could in principle be done with tracers of the matter density (galaxy surveys and LIM surveys) \cite{McCarthy21}; however, these would need to overlap on the sky and span the whole redshift range from $z=0$ to the redshift of reionization, without any gap.
This therefore appears unfeasible in practice.
Instead, a futuristic approach could be to reconstruct lensing from a LIM survey \cite{Zahn06, Pourtsidou14, Pourtsidou15, Pourtsidou16, Schaan18, Foreman18, Chakraborty19, Feng19} at high redshift, e.g., $z= 5$.
Combining LIM lensing with galaxy shear at $z= 1$, such as from the Rubin Observatory\footnote{\url{http://www.lsst.org}} \cite{LSSTScienceBook}, one can exactly null the contribution of $z \leq 1$ to the LIM lensing, thus delivering a unique probe of the matter distribution at $z=1-5$.
This redshift range is extremely difficult to probe any other way.
Combining instead LIM lensing with CMB lensing at $z=1100$, one can selectively extract the projected matter density field at $z=5-1100$, covering the epoch of reionization, cosmic dawn and the dark ages.
Again, this redshift range is difficult to observe any other way, and doing so with lensing would enable testing how much of the fluctuations in future 21 cm maps during reionization/the dark ages arise from density fluctuations as opposed to ionization or spin-temperature variations (see \cite{Doux16} for an analogous approach with CMB lensing and the Lyman-$\alpha$ forest).
To do this, we extend the so-called ``nulling'' method from the galaxy lensing tomography literature \cite{Huterer05, Bernardeau14, Barthelemy20}, and we generalize it to LIM lensing and CMB lensing below.
This method allows one not only to suppress, but to exactly null, the otherwise dominant low-redshift contribution to the lensing kernels.
LIM lensing has other applications, beyond enabling lensing tomography at high redshift.
For instance, continuum foregrounds typically render the modes perpendicular to the line of sight (LOS), i.e. with $k_\parallel \simeq 0$, unusable for cosmology.
This can prevent us from measuring the cross-correlation of LIMs with 2D fields, such as CMB lensing.
However, by reconstructing the lensing from LIMs, one obtains a field, $\hat{\kappa}_\text{LIM}$, where the modes with $k_\parallel \simeq 0$ are present, enabling cross-correlations with 2D fields like CMB lensing \cite{Foreman18, Schaan18}.
This therefore offers an alternative to tidal reconstruction \cite{Foreman18, Zhu18}, in order to enable these cross-correlations.
The prospect of measuring LIM lensing remains futuristic, because of several challenges.
Recent work \citep[e.g.][]{Foreman18, Schaan18} has shown that the non-Gaussian nature of LIMs (due to non-linear gravitational evolution at low redshifts) biases LIM lensing.
This bias can be avoided or subtracted to some extent with ``bias hardening'' \cite{Foreman18}, a method inspired from CMB lensing \cite{Osborne14, Namikawa13, Planck13XVII, Sailer20} which makes use of our knowledge of the LIM non-Gaussianity.
Another major challenge to LIM lensing is the fact that the observed LIMs are contaminated by foregrounds.
Continuum foregrounds like the cosmic infrared background (CIB) or Milky-Way emission can be highly dominant over the target line signal.
Thanks to their smooth spectral energy distributions, continuum foregrounds can typically be avoided by discarding the 3D Fourier modes with low $k_\parallel$, i.e. almost perpendicular to the line of sight (LOS).
However, line interlopers cannot be avoided in this way. These are galaxies at a different redshift, emitting in a different line which redshifts to the same observed frequency as the target line.
Methods exist to remove part of the interloper contamination, or to quantify it (see \cite{Kovetz17, Pullen13} for a summary). Techniques such as
bright voxel masking \cite{Gong14, Breysee15, Yue15, Silva15, Sun18}, secondary line identification \cite{Cheng20}, spectral deconfusion \cite{Cheng20}, and cross-correlating the LIM with a template of the contaminant \cite{Silva15} alleviate the issue.
Measuring the anisotropy in the 3D power spectrum, analogous to the Alcock-Paczynski effect \cite{Visbal10, Cheng16, Liu16, Lidz16, Gong20}, allows one to quantify the residual contamination.
While these methods reduce the amount of interloper emission, they do not completely remove it.
In this paper, we quantify the bias to LIM lensing from interlopers for the first time, and propose a new method to avoid them entirely, without any assumption other than their redshifts.
We derive a new ``LIM-pair'' quadratic estimator for LIM lensing, relying on a pair of LIMs, from two lines $X$ and $Y$ emitted at the same redshift but with uncorrelated interloper foregrounds.
This method is analogous in spirit to the gradient-cleaned estimators of CMB lensing \cite{Madhavacheril18, Darwish21}.
We forecast the signal-to-noise ratio for this estimator for one example line pair.
We compute the various foreground biases to its auto-spectrum, and show that its cross-spectrum with CMB lensing is exactly free of LIM foregrounds.
Furthermore, this cross-correlation of LIM line-pair lensing with CMB lensing has higher SNR than the auto-power spectrum of the LIM-line pair lensing, making it the first one to be detectable in the future.
This paper constitutes a step towards bias-free lensing reconstruction from LIM.
The ``nulling'' method, applied to LIM-pair lensing, constitutes a new potential probe of the Dark Ages in the future.
\section{Lensing tomography and ``Nulling''}
Similarly to galaxies and the CMB, LIMs constitute source images, emitted at cosmological distances from us, which are lensed by all the intervening matter distribution.
In the weak lensing regime and the Born approximation, this lensing is entirely determined by one scalar field for each source image (LIM or CMB), the lensing convergence $\kappa$.
In all cases, the lensing convergence is a projection of the matter overdensity field along the line of sight (LOS),
\beq
\kappa(\vec{n}) = \int d\chi \, W_\kappa(\chi) \, \delta (\chi \vec{n}, z(\chi)),
\label{eq:kappa_from_delta}
\eeq
weighted by the lensing kernel $W_\kappa$.
For an image source at a single redshift or distance $\chi_S$, the lensing kernel is given by
\beq
W_{\kappa} (\chi, \chi_S) = \frac{3}{2} \left( \frac{H_0}{c} \right)^2 \frac{\Omega_m^0}{a} \; \chi \left( 1 - \frac{\chi}{\chi_S} \right).
\label{eq:lensing_kernel_single_source}
\eeq
Here $H_0$ and $\Omega_m^0$ are the Hubble parameter and the matter fraction today, $c$ is the speed of light, $a$ is the scale factor, $\chi_S$ the distance of the source (image being lensed) and $\chi$ the distance of the lens (mass causing the lensing).
This lensing kernel is appropriate for CMB lensing, where the source redshift is $z=1100$, and for a thin redshift slice of LIM.
For extended source redshift distributions $dn/dz_S$, e.g., for a galaxy lensing tomographic bin or a LIM with a large redshift coverage, the lensing kernel is simply the redshift-average of the single-source lensing kernel, weighted by the source redshift distribution:
\beq
W_{\kappa} (\chi) = \int dz_S \; \frac{1}{n}\frac{dn}{dz_S} \; W_\kappa (\chi, \chi(z_S)),
\eeq
where
$n \equiv \int dz_S\ dn/dz_S$.
In this paper, we consider LIMs coming from a single redshift, or a thin redshift slice, making this last integral unnecessary.
In practice though, LIM lensing analyses will likely be performed in 3D \cite{Foreman18, Chakraborty19}, in order to discard the low $k_\parallel$ modes most affected by continuum foregrounds.
In what follows, we will therefore not address the question of contamination from continuum foregrounds, and we will assume that this problem is solved by the $k_\parallel$ cuts applied to the LIM.
In what follows, we derive the new ``LIM-pair'' estimator in 2D rather than 3D, to avoid technical distractions.
We also do not implement the bias-hardening weights.
Our 2D estimator generalizes trivially to 3D and to the bias-hardening case, while retaining its insensitivity to interloper foregrounds, since it relies on using a pair of lines.
From Eq.~\eqref{eq:kappa_from_delta}, we infer all the auto- and cross-spectra of LIM lensing, galaxy lensing and CMB lensing, in the flat sky and Limber approximations:
\beq
C_\ell^{\kappa \kappa'}
=
\int d\chi \;
\frac{W_\kappa(\chi) W_{\kappa'}(\chi)}{\chi^2}
P_m\left( k=\frac{\ell+1/2}{\chi}, z(\chi)\right).
\eeq
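A minimal numerical sketch of this Limber integral, with the kernels, the matter power spectrum and the distance-redshift relation supplied as callables (the placeholders below are hypothetical toys for shape only, not fits to any cosmology):
\begin{verbatim}
# Sketch: C_ell = int dchi W(chi) W'(chi) / chi^2
#                 * P_m(k = (ell + 1/2)/chi, z(chi))
import numpy as np
from scipy.integrate import simpson

def limber_cl(ell, W1, W2, Pm, z_of_chi,
              chi_min=1e-2, chi_max=8.0, n=2048):
    chi = np.linspace(chi_min, chi_max, n)
    k = (ell + 0.5) / chi
    integrand = W1(chi) * W2(chi) / chi**2 * Pm(k, z_of_chi(chi))
    return simpson(integrand, x=chi)

# Hypothetical placeholders, for illustration only:
W = lambda chi: chi * np.clip(1.0 - chi / 8.0, 0.0, None)
Pm = lambda k, z: 1.0 / (1.0 + k**3)     # toy power spectrum
z_of_chi = lambda chi: chi               # toy distance map

print(limber_cl(100.0, W, W, Pm, z_of_chi))
\end{verbatim}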
As shown in Fig.~\ref{fig:schematic_nulling} for CMB lensing and LIM lensing at redshifts 5 and 6, these lensing kernels span the whole redshift range between the source and the observer.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/schematic_nulling_gal.pdf}
\includegraphics[width=\columnwidth]{Figures/schematic_nulling_cmb.pdf}
\centering
\caption{
While the Universe's properties are very well constrained at low redshift from galaxy surveys and at high redshift with the CMB, many parts of its history remain unexplored.
Top: By combining LIM lensing (dashed black) at $z=5$ with galaxy lensing at $z=1, 1.5$ (dashed blue and green), we construct a linear combination sensitive only to $z=1-5$.
Bottom: By combining CMB lensing (dashed black) and lensing from two LIMs (e.g.,from $z=5$ in green and $z=6$ in blue), one can construct a linear combination which exactly nulls the signal from low redshift ($\kappa_\text{Null}$ in red).
This offers a potential new probe of the Dark Ages, complementary to 21~cm.
However, achieving these futuristic goals requires controlling the foregrounds in LIM, which is the goal of this paper.
}
\label{fig:schematic_nulling}
\end{figure}
However, interestingly, Eq.~\eqref{eq:lensing_kernel_single_source} shows that the lensing kernels have a very simple dependence on the lens distance $\chi$: apart from the common overall scale factor, they are second order polynomials in $\chi$.
Such a polynomial is only determined by three coefficients.
An appropriate linear combination of three lensing kernels is therefore sufficient to null these three coefficients, thereby exactly nulling the combined lensing kernel out to the redshift of the closest source \cite{Huterer05, Bernardeau14, Barthelemy20}.
More specifically, for three sources at distances
$\chi_1 < \chi_2 < \chi_3$,
the linear combination
\beq
W_{\kappa}(\chi, \chi_3)
+ \alpha W_{\kappa}(\chi, \chi_2)
- (1+\alpha) W_{\kappa}(\chi, \chi_1)
\eeq
with
\beq
\alpha=\frac{1/\chi_3 - 1/\chi_1}{1/\chi_1 - 1/\chi_2}
\eeq
is mathematically null for
$\chi \leq \chi_1$.
In other words, the linear combination
$\kappa_3
+ \alpha \kappa_2
- (1+\alpha) \kappa_1$
is only sensitive to the matter distribution from $\chi > \chi_1$.
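This cancellation is straightforward to verify numerically. The sketch below implements the $\chi$-dependence of the single-source kernel of Eq.~\eqref{eq:lensing_kernel_single_source}, multiplied by an arbitrary function standing in for the prefactor $\frac{3}{2}(H_0/c)^2\,\Omega_m^0/a(\chi)$ (which is common to all three kernels and drops out of the cancellation), and checks that the weighted combination vanishes for $\chi \leq \chi_1$; the source distances are illustrative numbers:
\begin{verbatim}
# Sketch: verify W(chi,chi3) + a W(chi,chi2) - (1+a) W(chi,chi1) = 0
# for chi <= chi1, with a = (1/chi3 - 1/chi1)/(1/chi1 - 1/chi2).
import numpy as np

def W(chi, chi_S):
    g = 1.0 + 0.3 * chi       # arbitrary stand-in for common prefactor
    return g * chi * (1.0 - chi / chi_S)

chi1, chi2, chi3 = 3.4, 4.4, 8.9       # illustrative distances
a = (1 / chi3 - 1 / chi1) / (1 / chi1 - 1 / chi2)

chi = np.linspace(0.0, chi1, 200)
combo = W(chi, chi3) + a * W(chi, chi2) - (1 + a) * W(chi, chi1)
print(np.max(np.abs(combo)))           # ~ 1e-16: nulled below chi1
\end{verbatim}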
Fig.~\ref{fig:schematic_nulling} illustrates two applications of the nulling method, using LIMs at high redshift.
First, we use one LIM at $z=5$ and two galaxy lensing tomographic bins at $z=1, 1.5$ from e.g., Rubin Observatory.
The nulling combination of these three allows one to exactly null any contribution to lensing from $z\leq 1$, providing a probe of the $z=1-5$ Universe.
This probe is valuable because of its redshift range, difficult to access otherwise.
Because this gives the projected matter density field directly, it avoids the need to model the galaxy-halo connection (e.g., galaxy bias).
The second application shown in Fig.~\ref{fig:schematic_nulling} uses two LIMs at $z=5,6$ and CMB lensing.
The nulling combination allows to extract selectively the $z=5-1100$ Universe, exactly nulling any contribution from $z\leq 5$.
This disentangles the contribution from the dark ages, cosmic dawn and the epoch of reionization from the otherwise-dominant low-redshift Universe, yielding a unique probe of the pre-reionization Universe.
In either case, whether we construct $\kappa_\text{Null}$ from LIM and galaxy lensing, or from LIM and CMB lensing, we will be cross-correlating $\kappa_\text{Null}$ with CMB lensing.
Indeed, the CMB lensing kernel fully overlaps with the nulled lensing kernel, such that $\langle \kappa_\text{Null} \kappa_\text{CMB} \rangle$ is non-zero and probes the same exact redshift range as $\kappa_\text{Null}$.
Furthermore, we will show that this combination is free of interloper bias, when LIM lensing is measured with the LIM-pair estimator.
In the rest of this paper, we focus on a necessary step towards this futuristic prospect: suppressing interloper contamination in LIM.
We show that a cross-power spectrum of the form
$C_L^{\hat{\kappa}_\text{LIM} \hat{\kappa}_\text{CMB}}$
can be measured without interloper bias, thanks to the LIM-pair estimator.
As a result, the cross-spectrum
$C_L^{\hat{\kappa}_\text{Null} \hat{\kappa}_\text{CMB}}$
can also be measured free of interloper bias.
These cross-spectra probe exclusively the high-redshift Universe.
In what follows, we focus on CMB lensing rather than galaxy lensing, but all the results apply identically.
\section{Interloper emission and line pairs}
Throughout this paper, we consider two different lines with widely separated rest-frame frequencies.
We denote by $X$ and $Y$ intensity maps in these two target lines, from galaxies at the same redshift.
Since $X$ and $Y$ trace the large-scale structure distribution of matter at the same redshift, they are correlated and have a non-zero cross-spectrum $C_l^{XY}$.
The two intensity maps $X$ and $Y$ are affected by interloper foregrounds.
However, we assume that the target lines and redshift of $X$ and $Y$ have been selected such that their interlopers do not originate from the same redshift, and are therefore statistically independent.
While our formalism applies identically to any pair of such lines $X$ and $Y$, we focus on a specific example below.
We consider intensity maps in [C{\sc ii}] and Ly-$\alpha$ at redshift $z=5$ as our intensity maps $X$ and $Y$.
The [C{\sc ii}] LIM is contaminated by CO and C{\sc i} rotational lines from various redshifts.
Similarly, the Ly-$\alpha$ LIM is contaminated by H$\alpha$ and H$\beta$ interlopers at low redshift.
Crucially, as illustrated in Fig.~\ref{fig:dI_dz}, the interlopers for [C{\sc ii}] and Ly-$\alpha$ do not overlap in redshift, such that they are indeed statistically independent.
For concreteness, in what follows, we focus on CO (J=4-3) and H$\alpha$ lines as interlopers to the target [C{\sc ii}] and Ly-$\alpha$ lines respectively. Our analysis, however, is equally applicable to all the interloper lines simultaneously, since they do not overlap in redshift.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Figures/dI_dz_2.pdf}
\centering
\caption{
Although our formalism applies to any pair of LIMs $X$ and $Y$,
we consider the specific example of [C{\sc ii}] and Ly-$\alpha$ LIMs from redshift 5.
Although each LIM is contaminated by interlopers (CO and C{\sc i} for [C{\sc ii}], and H$\alpha$ and H$\beta$ for Ly-$\alpha$), these interlopers do not overlap in redshift, and are therefore uncorrelated.
As a result, they do not bias the LIM-pair lensing estimator, as we show below.
Neither axis is to scale in this schematic.}
\label{fig:dI_dz}
\end{figure}
A key input to the LIM-pair lensing estimator below is the auto- and cross-spectra of the LIMs $X$ and $Y$.
Computing the effect of interlopers on the bias and variance of this estimator further requires modeling the bispectra and trispectra of these LIMs.
For all this, we use the halo model formalism from \cite{Schaan21a, Schaan21b}, based on conditional luminosity functions, and use the publicly available code
\texttt{HaloGen}\footnote{\url{https://github.com/EmmanuelSchaan/HaloGen/tree/LIM}},
as described in App.~\ref{app:halo_model}.
\section{Line-pair lensing quadratic estimators}
To derive the LIM-pair lensing quadratic estimator, we follow Ref.~\cite{Hu02}.
We seek an estimator of the form
\beq
\hat{\kappa}_{XY}(\boldsymbol{L})
=
\int \frac{d^2 l_1}{(2 \pi)^2}
\frac{d^2 l_2}{(2 \pi)^2} \
\delta^D_{\boldsymbol{l}_1+\boldsymbol{l}_2-\boldsymbol{L}} \
F_{XY}(\boldsymbol{l}_1, \boldsymbol{l}_2) \ X_{\boldsymbol{l}_1} Y_{\boldsymbol{l}_2} \ ,
\eeq
where $\boldsymbol{L} = \boldsymbol{l}_1 + \boldsymbol{l}_2$ and the Dirac delta enforces the Fourier mode constraint,
and
$F_{XY}$
is uniquely determined by requiring $\hat{\kappa}_{XY}$ to be unbiased (to first order in the true $\kappa$) and to have minimum variance.
As shown in App.~\ref{sec:derivation_lensing_qe}, the solution is
\beq
\bal
&F_{XY}(\boldsymbol{l}_1, \boldsymbol{l}_2)
= \lambda_{XY}(L) \times\\
&\quad\quad\frac{C_{l_1}^{YY} C_{l_2}^{XX} f_{XY}(\boldsymbol{l}_1, \boldsymbol{l}_2) - C_{l_1}^{XY} C_{l_2}^{XY} f_{XY}(\boldsymbol{l}_2, \boldsymbol{l}_1)}{C_{l_1}^{XX} C_{l_2}^{YY}C_{l_1}^{YY} C_{l_2}^{XX} - \left(C_{l_1}^{XY} C_{l_2}^{XY}\right)^2} \, \eal
,
\eeq
where the Lagrange multiplier $\lambda_{XY}(L)$ is given by Eq.~\eqref{eq:lagrange_xy}.
In what follows, we compare this estimator to the ones built on LIM $X$ (denoted $\hat{\kappa}_{XX}$) or $Y$ (denoted $\hat{\kappa}_{YY}$) alone,
where $F_{XX}$ and $F_{YY}$ are given by Eq.~\eqref{eq:F_XX}.
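A direct transcription of these weights, with the spectra and the lensing response $f_{XY}$ supplied as callables, is sketched below (the normalization $\lambda_{XY}(L)$ of Eq.~\eqref{eq:lagrange_xy} is not reproduced, and the toy spectra and response are hypothetical placeholders reduced to multipole moduli for brevity):
\begin{verbatim}
# Sketch: unnormalized LIM-pair weights F_XY(l1, l2); the
# normalization lambda_XY(L) is omitted.  Cxx, Cyy, Cxy and the
# response f_xy are user-supplied callables.
def F_XY_unnorm(l1, l2, Cxx, Cyy, Cxy, f_xy):
    num = (Cyy(l1) * Cxx(l2) * f_xy(l1, l2)
           - Cxy(l1) * Cxy(l2) * f_xy(l2, l1))
    den = (Cxx(l1) * Cyy(l2) * Cyy(l1) * Cxx(l2)
           - (Cxy(l1) * Cxy(l2))**2)
    return num / den

# Hypothetical toy spectra and response, for shape only:
Cxx = lambda l: 1.0 / l**2
Cyy = lambda l: 2.0 / l**2
Cxy = lambda l: 0.8 / l**2   # |Cxy| < sqrt(Cxx Cyy): den > 0
f = lambda l1, l2: Cxy(l1) + Cxy(l2)

print(F_XY_unnorm(100.0, 300.0, Cxx, Cyy, Cxy, f))
\end{verbatim}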
\section{Gaussian noise bias $N^{(0)}$}
Similarly to all quadratic lensing estimators,
the LIM-pair estimator is affected by the Gaussian lensing reconstruction noise $N^{(0)}$, given in Eq.~\eqref{eq:variance}.
In particular, the lensing noise for the LIM-pair estimator $\hat{\kappa}_{XY}$
receives contributions not only from the cross-spectrum $C_\ell^{XY}$,
but also from the auto-spectra $C_\ell^{XX}$ and $C_\ell^{YY}$.
Interloper foregrounds, which do not affect the cross-spectrum, do enhance the auto-spectra, thus increasing the lensing noise.
As a result, the lensing noise for $\hat{\kappa}_{XY}$
is not significantly reduced compared to those of $\hat{\kappa}_{XX}$ and $\hat{\kappa}_{YY}$.
This makes sense intuitively: although the interlopers are nulled in the cross-spectrum, they are still present in the LIMs, acting as a source of noise.
This lensing noise $N^{(0)}$ receives contributions from the power spectra of the target line itself, the detector noise, and potential foregrounds.
However, the $N^{(0)}$ noise only takes into account the Gaussian part of these components.
If the interloper foregrounds were Gaussian random fields, they would be fully described by $N^{(0)}$, and would thus be automatically subtracted by the standard $N^{(0)}$ subtraction.
Thus, they would not be a concern.
In the next section, we thus focus on the non-Gaussianity of interloper foregrounds, to compute their bias to LIM lensing.
\section{Non-Gaussian interloper biases can overwhelm the standard lensing estimator}
Similarly to CMB lensing, interloper foregrounds cause a bias in LIM lensing because they are non-Gaussian and correlated with the true lensing field we seek to reconstruct.
In this section, we follow the CMB lensing derivation from \cite{vanEngelen14, Osborne14, Ferraro18, Schaan19} and adapt it to the case of LIM interlopers.
We leave the detailed derivation to App.~\ref{app:biases} and instead discuss the intuitive origin of the various terms, shown in Fig.~\ref{fig:HO02_ind} for the standard (non LIM-pair) lensing estimator $\hat{\kappa}_{XX}$.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Figures/HO02_curves_onlyLya_allcomp_Ha_0_11_addedto_Lya_5_1halotrispec.pdf}
\centering
\caption{
For the standard LIM lensing estimator (here $\hat{\kappa}_{XX}$ with $X=$Ly-$\alpha$ at $z=5$), the lensing noise $N^{(0)}$ (light blue) is comparable to the lensing signal (solid black).
However, the interloper contamination (here H$\alpha$ at $z=0.12$) produces a dominant bias to the lensing power spectrum.
This non-Gaussian bias is the sum of the primary bispectrum (blue dot-dashed) and the trispectrum (blue dashed) terms.
We do not show the secondary bispectrum here as it is negligible with respect to the primary bispectrum and trispectrum biases.
This motivates the need for the new LIM lensing estimator we derive in this paper.
}
\label{fig:HO02_ind}
\end{figure}
Because the lensing estimators $\hat{\kappa}$ considered here are quadratic in the LIMs, the estimated power spectrum $C_L^{\hat{\kappa}\hat{\kappa}}$ is quartic in the LIMs.
One therefore naturally expects a bias coming from four powers of the interlopers.
As we discussed above, the Gaussian part of this term is already included in the $N^{(0)}$ term, and therefore automatically subtracted by the $N^{(0)}$ subtraction.
Thus the remaining bias comes from the connected, non-Gaussian four point function of the interlopers, i.e. their trispectrum.
This trispectrum bias is shown with a dashed line in Fig.~\ref{fig:HO02_ind}.
Not only are the interlopers non-Gaussian, leading to the trispectrum bias above, but they are also correlated with the true lensing signal we seek to reconstruct.
Indeed, the interlopers trace the large-scale mass distribution, which contributes to the true lensing of the target LIMs.
In other words, the target LIM is lensed in part by the interloper, which contaminates the observed LIM.
This effect, called ``self-lensing'' in \cite{Schaan18}, originates from the bispectrum between two powers of the interlopers and the true lensing potential.
It can be split into two terms, the so-called primary and secondary bispectrum interloper biases.
When the two target-line factors contributing to the reconstructed $\kappa$ that enters the bispectrum with two interlopers carry the multipole pairs $\boldsymbol{l}_1, \boldsymbol{l}_2$ and $\boldsymbol{l}_3, \boldsymbol{l}_4$ for which the lensing weights $F_{XX}(\boldsymbol{l}_1, \boldsymbol{l}_2)$ and $F_{XX}(\boldsymbol{l}_3, \boldsymbol{l}_4)$ optimize the quadratic estimator, we obtain the primary bispectrum bias. When that is not the case, e.g. when one target line comes from $\boldsymbol{l}_1$ and the other from $\boldsymbol{l}_3$ while the lensing weights remain $F_{XX}(\boldsymbol{l}_1, \boldsymbol{l}_2)$ and $F_{XX}(\boldsymbol{l}_3, \boldsymbol{l}_4)$, the $\kappa$ reconstruction entering the bispectrum is inefficient, and the resulting term is called the secondary bispectrum. This is discussed in detail in App.~\ref{app:biases}.
In this analysis, we consider only the 1-halo term of the trispectrum and bispectrum biases, giving a lower bound to the total interloper bias.
We find the primary bispectrum to be smaller than the lensing signal (dot-dashed line in Fig.~\ref{fig:HO02_ind}), and that the secondary bispectrum is negligible.
However, the trispectrum bias term (dashed line in Fig.~\ref{fig:HO02_ind}) for $\hat{\kappa}_{XX}$ is comparable to the lensing signal for $L\lesssim 200$ and dominant at higher lensing multipoles.
In consequence, the standard LIM lensing reconstruction method is highly biased by interlopers, and another method is needed to control them.
\section{Avoiding all biases with the LIM-pair $\times$ CMB lensing cross-spectrum}
\subsection{Avoiding all interloper biases with LIM-only lensing?}
We have shown that the lensing power spectrum estimated from
$\hat{\kappa}_{XX}\hat{\kappa}_{XX}$
is biased by the primary, secondary and trispectrum terms.
We may instead try to use different combinations of the $X$
and $Y$ LIMs, to reconstruct the lensing power spectrum.
The combination
$\hat{\kappa}_{XX}\hat{\kappa}_{YY}$
avoids the interloper trispectrum, since the interlopers in $X$ and $Y$ originate from different redshifts, and are therefore independent.
This combination also avoids the secondary bispectrum bias.
However, it is not free of primary bispectrum bias, making it still largely biased by interlopers.
The combination
$\hat{\kappa}_{XX}\hat{\kappa}_{XY}$
is free of trispectrum bias, but not of primary or secondary bispectrum biases.
For this lensing cross-spectrum, the interloper bias is dominant and comes mostly from the secondary bispectrum.
Finally, the combination
$\hat{\kappa}_{XY}\hat{\kappa}_{XY}$
avoids the trispectrum and primary bispectrum terms,
but still suffers from the secondary bispectrum bias.
However, we find that the secondary bias for $\hat{\kappa}_{XY}\hat{\kappa}_{XY}$ is small and can potentially be neglected.
In short, combinations from two LIMs $X$ and $Y$ cannot suppress all the interloper bias terms,
but the auto-spectrum of the ``LIM-pair'' lensing estimator appears to sufficiently reduce them.
While the bias to $\hat{\kappa}_{XY}\hat{\kappa}_{XY}$ appears negligible (secondary bispectrum only), our bispectrum calculation only includes the 1-halo term, such that it is only a lower limit.
Furthermore, the secondary bias may be larger when considering different pairs of lines.
The interloper biases for the various combinations are shown in Fig.~\ref{fig:biases}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Figures/biasto_kappaxkappa.pdf}
\centering
\caption{
Even with two LIMs $X$=Ly-$\alpha$ and $Y$=[C{\sc ii}] at $z=5$, whose interlopers are independent, one cannot avoid all the interloper biases.
The combinations
$\hat{\kappa}_{XX}\hat{\kappa}_{XX}$ (green),
$\hat{\kappa}_{XX}\hat{\kappa}_{XY}$ (red)
and
$\hat{\kappa}_{XY}\hat{\kappa}_{XY}$ (cyan)
are dominated by the residual secondary bispectrum term.
The combinations
$\hat{\kappa}_{XX}\hat{\kappa}_{YY}$ (blue)
and
$\hat{\kappa}_{XX}\hat{\kappa}_\text{CMB}$ (grey)
are dominated by the residual primary bias.
However, the cross-correlation of the LIM-pair estimator and CMB lensing,
i.e.
$\hat{\kappa}_{XY}\hat{\kappa}_\text{CMB}$ (purple)
is entirely free of interloper bias.
This is the main result of this paper.}
\label{fig:biases}
\end{figure}
Interestingly, in Fig.~\ref{fig:biases}, the interloper bias to lensing is very different for $\hat{\kappa}_{XX}\hat{\kappa}_{XY}$ and $\hat{\kappa}_{XY}\hat{\kappa}_{XY}$, even though they are both dominated by secondary bispectrum-like terms.
We explain this in App.~\ref{app:biases}.
Using three LIMs $X$, $Y$ and $Z$ from the same redshift, with independent interlopers, still does not avoid all the interloper biases.
If four LIMs $X$, $Y$, $Z$ and $W$ were available from the same redshift, with independent interlopers, the combination
$\hat{\kappa}_{XY}\hat{\kappa}_{ZW}$
would be entirely free of interloper bias.
Although one may hope to use CO, [C{\sc ii}], Ly-$\alpha$ and 21~cm LIMs from the same redshift, this prospect remains futuristic.
\subsection{Avoiding all the biases via CMB lensing cross-correlation}
In order to further suppress interloper biases, we now turn to cross-correlations of LIM-lensing with CMB lensing.
The combination
$\hat{\kappa}_{XX} \hat{\kappa}_\text{CMB}$
is free of trispectrum and secondary bispectrum bias,
but it still suffers from the primary bispectrum.
As a result, it does not reduce the interloper bias, as illustrated in Fig.~\ref{fig:biases}.
On the other hand, the combination
$\hat{\kappa}_{XY} \hat{\kappa}_\text{CMB}$
is entirely free of interloper biases: it is not affected by the primary and secondary bispectra, nor the trispectrum.
This is the main result of this paper:
LIM lensing can be measured without any interloper bias, when cross-correlating the LIM-pair estimator with CMB lensing.
Given the uncertain and potentially large interloper biases for the standard LIM lensing estimators, this constitutes dramatic progress.
\subsection{Detectability: Signal-to-noise ratio}
In this section, we address the detectability of
$C_L^{\hat{\kappa}_\text{LIM} \hat{\kappa}_\text{CMB}}$ and $C_L^{\hat{\kappa}_\text{null} \hat{\kappa}_\text{CMB}}$, i.e. the
cross-spectra of CMB lensing with the LIM-pair estimator and with the ``nulled'' estimator respectively, by computing their expected SNR.
We consider an idealized and futuristic experiment, signal-dominated in the LIMs out to $\ell_\text{max LIM} = 300 - 1500$.
Our SNR calculation is described in detail in App.~\ref{app:snr_lensing_cross}.
While it is technically an upper limit, we expect it to also be a good approximation to the truth.
In short, we adopt the Gaussian SNR formula, including the lensing noise $N^{(0)}$ as well as the non-Gaussian terms $\mathcal{B}^p$, $\mathcal{B}^s$, and $\mathcal{T}$ from interlopers in the noise for $C_L^{\hat{\kappa}_\text{LIM} \hat{\kappa}_\text{LIM}}$.
As $\hat{\kappa}_\text{Null}$ is constructed through a combination of $\hat{\kappa}_{XY}$ and $\hat{\kappa}_{\rm CMB}$, the $XY$ part adds a secondary bispectrum bias which, as we show in Fig.~\ref{fig:biases}, is quite small and can be neglected here. Thus we consider only the $N^{(0)}$ terms for the $C_L^{\hat{\kappa}_\text{Null} \hat{\kappa}_\text{CMB}}$ SNR calculation.
The various angular resolutions assumed are conservative for the lines we consider (Ly-$\alpha$ and [C{\sc ii}]).
For instance, an experiment like CONCERTO \cite{Concerto20} should measure the [C{\sc ii}] line at $z=5$ with $0.24'$ resolution, significantly higher than assumed here.
SPHEREx \cite{Dore14, Dore18} is expected to produce a Ly-$\alpha$ LIM at $z=5$ with $6''$ resolution, which is higher still.
As Fig.~\ref{fig:cumsnr_difflmax} shows, the SNR on $C_L^{\hat{\kappa}_\text{LIM} \hat{\kappa}_\text{CMB}}$ may reach several 10s of~$\sigma$, allowing for a significant detection of the LIM $\times$ CMB lensing cross-power spectrum.
At the same time, the SNR for $C_L^{\hat{\kappa}_\text{Null} \hat{\kappa}_\text{CMB}}$ is slightly lower, as expected, but it may still be significantly detected with an experiment like the one considered here. Regarding detector noise, an experiment with the sensitivity of CONCERTO over a large sky fraction would be required for such a detection, whereas the sensitivity of a SPHEREx-like experiment may not be sufficient.
As for any LIM forecast, the theoretical uncertainty on the LIM power spectra at high redshift is very large, which may affect our conclusions.
We relied on the halo model predictions from \cite{Schaan21a, Schaan21b}, whose LIM power spectra were found in agreement with the literature.
While upcoming experiments may be limited by sensitivity and sky coverage, a futuristic experiment such as the one we considered here can thus detect LIM lensing with the LIM-pair lensing and the null combination, in cross-correlation with CMB lensing.
This therefore offers a powerful way to probe the high redshift Universe.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/cumSNR_ClkappaXYkappaCMB_ClkappanullkappaCMB_diff_lmax_fsky0_4.pdf}
\centering
\caption{
Including Gaussian noise and the noise from the non-Gaussian interlopers, the cumulative SNR for the LIM $\times$ CMB lensing cross-power spectrum, where $X=$~Ly-$\alpha$ and $Y=$~[C{\sc ii}], is shown as dashed lines for different $\ell_\text{max LIM}$.
Solid lines show the corresponding cumulative SNR for the ``nulled'' $\hat{\kappa}_\text{Null}$ $\times$ CMB lensing power spectrum, which has contributions only from $z > 5$.
Both the power spectra are detectable for a simple idealized experiment where the LIMs are signal dominated over detector noise out to $\ell_\text{max LIM}\sim 1000$ at $z=5$.
The SNR is calculated with $f_{\rm sky} = 0.4$, and the CMB lensing is assumed to be noiseless out to $L=1500$ as appropriate for Simons Observatory (SO) \cite{Ade_19}.
For different $\ell_\text{LIM max}$ values, we provide the minimum angular scale the beam will have to resolve (calculated simply as $180^\circ/\ell_\text{LIM max}$) in arcminutes.
}
\label{fig:cumsnr_difflmax}
\end{figure}
\section{Conclusion}
Lensing from LIMs has the potential to allow lensing tomography at higher redshift than galaxy surveys, and to provide a new probe of the high-redshift Universe.
We show that the nulling technique allows us to selectively extract the matter density field at $z=1-5$ in combination with galaxy lensing, and at $z>5$ in combination with CMB lensing.
However, interloper foregrounds contaminating LIMs are a major hurdle to LIM lensing.
In this paper, we quantified the lensing bias from interlopers for the first time, showing it to be very significant for the standard LIM lensing estimators.
We derived a new LIM-pair lensing estimator, based on two LIMs in different lines, from the same redshift, with independent interlopers.
In cross-correlation with CMB lensing, it exactly nulls all the interloper bias terms, which would otherwise dominate.
When using the standard lensing estimator, the non-Gaussian interlopers can also largely enhance the lensing noise.
This enhancement is uncertain because it depends on our modeling of LIM bispectra and trispectra.
In contrast, the LIM-pair lensing estimator, in cross-correlation with CMB lensing, is exactly free of interloper bias and insensitive to these modeling uncertainties, making it dramatically more reliable.
We have shown that a simple, idealized LIM experiment can detect LIM lensing at $z=5$, provided that the detector noise is subdominant to the target lines in the pair estimator (here Ly-$\alpha$ and [C{\sc ii}]).
We have not addressed the biases to LIM lensing from the non-Gaussianity of the target lines, rather than their interlopers.
These were studied in \cite{Schaan18, Foreman18} and a bias-hardened estimator was derived to control these biases \cite{Foreman18}.
We have also not addressed the LIM lensing biases from continuum foregrounds, and assumed that they can be controlled by discarding the low $k_\parallel$ modes in the LIMs.
Finally, we have not quantified the bias due to the fact that the interlopers are themselves lensed.
Similarly to the case of CMB lensing \cite{Mishra19}, we expect this bias to be small.
Combining interloper removal techniques such as voxel masking \cite{Kovetz17, Pullen13} with the LIM-pair estimator will further help; we leave this study for future work.
If future studies improve upon interloper cleaning in the LIMs, the quadratic estimator which we propose here could potentially detect $C_L^{\hat{\kappa}_\text{LIM} \hat{\kappa}_\text{CMB}}$ with even higher SNR.
\acknowledgments
We thank Yacine Ali-Ha\"imoud, Patrick Breysse, Yun-Ting Cheng, Simone
Ferraro, Simon Foreman, Adam Lidz, Adrian Liu and Martin White for their helpful feedback on an early version of the manuscript.
E.S. thanks Francis Bernardeau for a helpful discussion of nulling in the context of galaxy lensing, and Simone Ferraro for helpful discussions on the sensitivity of CMB lensing to very early matter density fluctuations.
E.S. is supported by the Chamberlain fellowship at Lawrence Berkeley National Laboratory. A.R.P. was supported by NASA under award numbers 80NSSC18K1014 and NNH17ZDA001N.
\bibliographystyle{prsty.bst}
\section{Introduction}
People are highly emotional, and expressions of emotion can be found throughout their lives. People also possess a strong ability to recognize these emotions in order to generate appropriate responses. Similarly, machines that can recognize emotions become more human-like and can serve many application scenarios \cite{zhou2018emotional,polignano2021towards,ayata2020emotion}. People use multimodal information when perceiving emotions, such as text, speech, vision and motion. In this paper, we focus our research on the text and speech modalities.
Text is a highly relevant modality for emotion recognition, because the meanings of words and their relations express one's emotion \cite{alswaidan2020survey}. The main challenge of using emotion datasets directly for text emotion recognition is the lack of large-scale data \cite{pepino21_interspeech}.
This is mainly due to the difficulty posed by the subjective nature of emotion labeling. One common solution to this problem is to leverage transfer learning-based approaches. For natural language processing, recently proposed pre-trained models such as Bidirectional Encoder Representations from Transformers (BERT) \cite{devlin2018bert}, Generative Pre-training 2 (GPT-2) \cite{radford2019language}, and Text-to-Text Transfer Transformer (T5) \cite{raffel2019exploring} have become popular because of their excellent performance on a number of important tasks. The difference between GPT-2, T5 and BERT is that GPT-2 and T5 are better at sequence generation while BERT is more appropriate for extracting embeddings \cite{raffel2019exploring,miller2019leveraging,klein2019learning}. For text-based emotion recognition, extracting embeddings is the more important capability given the characteristics of the task, so some previous works have leveraged BERT for text emotion recognition and achieved promising results \cite{acheampong2021transformer,adoma2020comparative}.
In addition to text, speech is also commonly recognized as a modality of high importance for emotion recognition. Compared to text, information conveyed by speech, such as intonation and pitch, can be used to recognize emotions. The use of speech for emotion recognition has therefore also been popular, but it faces a similar problem of insufficient data. Similarly, this problem can be mitigated through the use of speech-based pre-trained models such as wav2vec \cite{schneider2019wav2vec}, VQ-wav2vec \cite{baevski2019vq}, and wav2vec~2.0 \cite{baevski2020wav2vec}. Recently, wav2vec 2.0 has become popular for speech processing tasks such as ASR \cite{zhang2020pushing} and speaker verification \cite{fan2020exploring}, and it has also shown promising performance in speech emotion recognition \cite{pepino21_interspeech}.
Given the importance of both text and speech for emotion recognition tasks, it is natural to leverage multimodal models that combine text and speech information to improve emotion recognition performance, and a number of works have explored this \cite{liu20b_interspeech,makiuchi2021multimodal,chen20b_interspeech,n20_interspeech,kumar21d_interspeech,LIU20221}. However, to the best of our knowledge, few of them have explored how to effectively combine pre-trained models for multimodal emotion recognition. The recent work in~\cite{makiuchi2021multimodal} considers both wav2vec 2.0 and BERT for emotion recognition. However, it mainly focuses on disentanglement representation learning rather than on multimodal fusion, and the only fusion method considered is score fusion. In this paper, we explore extensively the fusion of wav2vec 2.0 and BERT-based embeddings and models for multimodal emotion recognition. Among other recent papers with state-of-the-art results, \cite{kumar21d_interspeech} utilizes an attention-based Gated Recurrent Unit for speech and BERT for text, and then concatenates them to get the final prediction. Furthermore, \cite{LIU20221} proposes a method combining a self-attentional bidirectional contextual LSTM and a self-attentional multi-channel CNN with a multi-scale fusion framework including feature-level fusion and decision-level fusion. We summarize our major contributions as follows:
\begin{itemize}
\item We explore the multi-level fusion of text and speech for emotion recognition, leveraging both wav2vec 2.0 and BERT with early fusion, late fusion, and their combination. For the early fusion, both simple embedding concatenation and more sophisticated attention mechanism-based fusion methods~\cite{han2021bi, siriwardhana2020jointly} are investigated. To the best of our knowledge, this is the first work that explores the multi-level fusion of state-of-the-art pre-trained models for the multimodal emotion recognition task.
\item We propose a novel multi-granularity fusion framework which makes use not only of frame-level speech embeddings but also of segment-level speech embeddings, including word, syllable and phone-level embeddings, to further improve the recognition performance.
\item The proposed models are evaluated on the popular IEMOCAP dataset~\cite{busso2008iemocap}. Experimental results show that they outperform existing state-of-the-art multimodal approaches using both speech and text modalities.
\end{itemize}
\section{Proposed Methods}
For emotion recognition, we mainly explore two fusion mechanisms, late fusion and early fusion, which can also be combined to further boost performance. In all three models we utilize the multi-granularity framework to extract embeddings. It is worth noting that in this section we discuss the scenario with all the considered segment-level embeddings, while in the experiments we explore the performance of both single embeddings and their combinations.
\subsection{Multi-granularity Framework}
In this paper, we explore a multi-granularity framework to extract speech embeddings at multiple levels of granularity. As illustrated in the blue part of Figure \ref{1}, the embeddings extracted by wav2vec~2.0 are normally frame-level embeddings, which have been shown to be effective at capturing abundant frame-level information. However, they lack the ability to capture segment-level information, which is useful for emotion recognition. Thus, in addition to frame-level embeddings, we introduce segment-level embeddings, including word, phone and syllable-level embeddings, which are closely related to prosody \cite{arciuli2007and,du2021mixture,alex2020attention}. Prosody can convey characteristics of an utterance such as the emotional state \cite{wu2010emotion,rao2013emotion} because it contains information about the cadence of the speech signal. As a result, segment-level embeddings may be helpful for multimodal emotion recognition.
Using forced alignment, the temporal boundaries of the phonemes can be obtained, and these can then be grouped to get the boundaries of the syllables. Forced alignment information is provided in \cite{busso2008iemocap}. The speech segments corresponding to those units can be extracted thereafter. The segment-level embeddings are then obtained by:
\begin{equation}
\textbf{u}_k^{i,l,n} = \frac{{\sum\nolimits_{f = s_k^{i,l,n}}^{e_k^{i,l,n}} {\textbf{u}_f^{F,l,n}} }}{{e_k^{i,l,n} - s_k^{i,l,n}}}, i \in \rm{\{P,W,S\}}
\nonumber
\end{equation}
Here ${\rm{\textbf{u}}}_k^{i,l,n}$ is the $k$\textsuperscript{th} segment of the segment-level embedding from the $l$\textsuperscript{th} layer for the $n$\textsuperscript{th} sample, $f$ indexes the frames of the frame-level embedding, $s$ is the starting frame, $e$ is the end frame, and $\rm{F}$, $\rm{P}$, $\rm{W}$, $\rm{S}$ represent frame, phone, word and syllable-level speech embeddings respectively. Thereafter, we apply a weighted average over layers, following \cite{pepino21_interspeech}, which took the embeddings from all 12 transformer encoder layers in wav2vec~2.0 as the frame-level embeddings and learned a weight $w_l$ for each layer. The weighted embeddings are obtained by:
\begin{equation}
\begin{split}
{\textbf{u}}_k^{i,n} = {\rm{LN}}\left( {\frac{{\sum\nolimits_l {w_l^i{\textbf{u}}_k^{i,l,n}} }}{{\sum\nolimits_l {w_l^i} }}} \right),i \in \{ {\rm{P}},{\rm{W}},{\rm{S}},{\rm{F}},{\rm{T}}\}
\label{U}
\end{split}
\end{equation}
A similar approach is applied for extracting text embeddings. Here $\rm{LN}$ denotes Layer Normalization \cite{ba2016layer}, $w_l^i$ is the learnable weight for layer $l$ and $\rm{T}$ represents text embeddings.
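As an illustration of this extraction, the sketch below (Python/PyTorch; a minimal sketch under our own naming, assuming the wav2vec~2.0 layer outputs and the forced-alignment frame boundaries are precomputed) averages the frames inside each segment and then forms the learnable weighted average over layers:
\begin{verbatim}
import torch

def segment_embeddings(frames, boundaries):
    # frames: [L, T, D] features (L encoder layers, T frames, D dims)
    # boundaries: list of (start, end) frame indices per segment,
    # taken from forced alignment (end exclusive)
    segs = [frames[:, s:e].mean(dim=1) for s, e in boundaries]
    return torch.stack(segs, dim=1)            # [L, K, D], K segments

class WeightedLayerAverage(torch.nn.Module):
    # one instance per embedding level i, so each level learns
    # its own layer weights w_l^i
    def __init__(self, num_layers=12, dim=768):
        super().__init__()
        self.w = torch.nn.Parameter(torch.ones(num_layers))
        self.ln = torch.nn.LayerNorm(dim)

    def forward(self, u):                      # u: [L, K, D]
        w = self.w / self.w.sum()              # normalized layer weights
        return self.ln((w[:, None, None] * u).sum(dim=0))  # [K, D]
\end{verbatim}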
\subsection{Late Fusion}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{IEEEtran/latefusion.eps}
\caption{Proposed late fusion model.}
\label{1}
\end{figure}
The orange part of Figure \ref{1} illustrates the late fusion model. There are five branches in the model, and every branch uses the same structure as that in \cite{pepino21_interspeech}. First, the extracted embeddings $\textbf{u}_k^{i,n}$ obtained from Equation \ref{U} are passed through two feed-forward layers:
\begin{equation}
{\rm{\textbf{h}}}_k^{i,n}{{ = }}{\rm{Relu}}({{\textbf{W}_1^{i}}}\textbf{u}_k^{i,n} + {b_1^{i}}),i \in \rm{\{ P,W,S,F,T\}}
\nonumber
\end{equation}
\begin{equation}
\begin{split}
\textbf{l}_k^{i,n} = {\rm{Relu}}({\textbf{W}_2^{i}}{\rm{\textbf{h}}}_k^{i,n} + {b_2^{i}}),
i \in \rm{\{ P,W,S,F,T\}}
\nonumber
\end{split}
\end{equation}
Then a global average module is applied to fuse the embeddings across segments, frames or wordpieces:
\begin{equation}
{\textbf{l}^{i,n}} = \frac{{\sum\nolimits_k {\textbf{l}_k^{i,n}} }}{{{l^{i,n}}}}
\label{ga}
\end{equation}
Here $l^{i,n}$ is the sequence length. Finally, the embeddings are sent to a last feed-forward layer to generate logits, and the logits from the different branches are added together to generate the prediction ${\tilde {\textbf{y}}}^n$ of the late fusion model:
\begin{equation}
{{\tilde {\textbf{y}}}^n} = {\rm{softmax}}\left( {\sum\limits_i {\left( {{{\textbf{W}}_3^{i}}{{\textbf{l}}^{i,n}} + {b_3^{i}}} \right)} } \right)
\label{latefusionequation}
\end{equation}
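A minimal sketch of one branch and of the logit-level combination (hidden sizes here are illustrative assumptions, not tuned values from this work) is:
\begin{verbatim}
import torch
import torch.nn as nn

class Branch(nn.Module):
    # one late-fusion branch: two ReLU feed-forward layers, a global
    # average over the sequence, and a linear layer producing logits
    def __init__(self, dim=768, hidden=128, num_classes=4):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, hidden), nn.ReLU())
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, u):                  # u: [K, dim]
        return self.out(self.ff(u).mean(dim=0))

def late_fusion(branches, inputs):
    # sum the logits of the five branches (P, W, S, F, T), then softmax
    logits = sum(b(u) for b, u in zip(branches, inputs))
    return torch.softmax(logits, dim=-1)
\end{verbatim}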
\subsection{Early Fusion}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{IEEEtran/earlyfusion.eps}
\caption{Proposed coattention-based early fusion model.}
\label{3}
\vspace{-0.5cm}
\end{figure}
Figure \ref{3} illustrates the coattention-based early fusion model we adopt. Here the phone, word, syllable and frame-level speech embeddings are each sent to the coattention model together with the text embeddings:
\begin{equation}
\begin{split}
{\textbf{c}^{i,n}} = {\rm{GA}}\left( {{\textbf{U}^{\rm{T},n}} \otimes {\textbf{U}^{i,n}}} \right),i \in \rm{\{ P,W,S,F\}}
\nonumber
\end{split}
\end{equation}
where the coattention operation is denoted as $\otimes$, $\rm{GA}$ represents the same global average as in Equation \ref{ga}, and ${\textbf{U}^{i,n}} = \left[ {\textbf{u}_1^{i,n},\textbf{u}_2^{i,n},...,\textbf{u}_{{l^{i,n}}}^{i,n}} \right]$ is of size $[seq\_length,embed\_size]$, where $\textbf{u}_k^{i,n}$ is calculated using Equation \ref{U}. $\textbf{c}^{i,n}$ is the embedding of the $n$\textsuperscript{th} sample produced by the coattention model and the global average module, of size $[embed\_size]$. The embeddings are then concatenated together as:
\begin{equation}
{{\bf{c}}^n} = {{\bf{c}}^{{\rm{P}},n}} \oplus {{\bf{c}}^{{\rm{W}},n}} \oplus {{\bf{c}}^{{\rm{S}},n}} \oplus {{\bf{c}}^{{\rm{F}},n}}
\nonumber
\end{equation}
where the concatenation operation is denoted as $\oplus$. Finally, the concatenated embedding is sent to feed-forward layers for multi-granularity fusion, which output the estimated posterior probability of the emotion classes ${{\tilde {\textbf{y}}}^n}$:
\begin{equation}
{{\textbf{g}}^n} = {\rm{LN}}\left( {{\rm{Relu}}\left( {{{\textbf{W}}_4}{{\textbf{c}}^n} + {b_4}} \right) + {{\textbf{c}}^n}} \right)
\nonumber
\end{equation}
\begin{equation}
{{\tilde {\textbf{y}}}^n} = {\rm{softmax}}\left( {{{\textbf{W}}_5}{{\textbf{g}}^n} + {b_5}} \right)
\label{earlyfusioneqation}
\end{equation}
The structure of the coattention model of Figure \ref{3} is illustrated in Figure \ref{2}. There are two branches in one coattention layer, each of which has the same structure but uses different modalities as Q or as K and V \cite{lu2019vilbert}. Without loss of generality, the calculation for single-head attention is given here \cite{vaswani2017attention}:
\begin{equation}
{{\textbf{Q}}}^n = {\textbf{W}_6^{i}}\textbf{U}^{i,n} + {b_6^{i}}
\nonumber
\end{equation}
\begin{equation}
\textbf{K}^n = {\textbf{W}_7^{j}}\textbf{U}^{j,n} + {b_7^{j}}
\nonumber
\end{equation}
\begin{equation}
\textbf{V}^n = {\textbf{W}_8^{j}}\textbf{U}^{j,n} + {b_8^{j}}
\nonumber
\end{equation}
where the notations $i$ and $j$ indicate that they are from different modalities. Then we can get the processed embeddings as:
\begin{equation}
{\rm{Attention}}\left( {{{\textbf{Q}}^n},{{\textbf{K}}^n},{{\textbf{V}}^n}} \right) = {\rm{softmax}}\left( {\frac{{{{\textbf{Q}}^n}{{\textbf{K}}^n}^ \top }}{{\sqrt d }}} \right){{\textbf{V}}^n}
\nonumber
\end{equation}
where $d$ is the dimension of the embeddings. Multi-head attention performs this process multiple times in order to learn information from various representation subspaces. After the attention, the model follows the classic transformer encoder structure in each branch. In our configuration, the coattention layer is repeated three times. Each branch output therefore incorporates enough information from the other modality, and as a result the two outputs are almost identical in our experiments, so we simply calculate their mean as the final output of the coattention model in order to reduce the consumption of computational resources.
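A sketch of one coattention branch, built on PyTorch's stock multi-head attention (layer sizes are illustrative), is:
\begin{verbatim}
import torch.nn as nn

class CoattentionBlock(nn.Module):
    # one branch of a coattention layer: queries from one modality,
    # keys/values from the other, followed by the usual transformer
    # feed-forward sublayer with residual connections
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                nn.Linear(4 * dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, y):           # x supplies Q; y supplies K, V
        a, _ = self.attn(x, y, y)      # cross-modal attention
        x = self.ln1(x + a)
        return self.ln2(x + self.ff(x))
\end{verbatim}
Stacking three such blocks per branch and averaging the two branch outputs reproduces the configuration described above.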
Concatenation-based early fusion is also investigated for comparison. For this fusion approach, we concatenate the text and frame-level speech embeddings before feeding them into feed-forward layers, in the same way as one branch of our late fusion model.
\subsection{The Combination of Early Fusion and Late Fusion}
We have introduced the early fusion and late fusion approaches in the previous two subsections. In this subsection, we describe their combination, which achieves better performance. The motivation for the combination is that early fusion merges the two modalities at the level of low-level pre-trained features, while late fusion merges them at the level of high-level features. These two fusion schemes thus make predictions based on embeddings from different levels, and it is expected that they provide complementary information that can be leveraged to boost performance.
In this work, score combination of the best early-fused and late-fused models is considered: their output logits are averaged to form the combined prediction.
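In code, the score combination is a one-liner; a sketch (names are ours):
\begin{verbatim}
def combine_scores(early_logits, late_logits):
    # average the output logits of the two fusion models and
    # return the predicted class indices
    return ((early_logits + late_logits) / 2).argmax(dim=-1)
\end{verbatim}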
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{IEEEtran/co.eps}
\caption{Coattention model.}
\label{2}
\vspace{-0.3cm}
\end{figure}
\section{Experiments and Discussion}
\subsection{Datasets}
The IEMOCAP dataset \cite{busso2008iemocap} is an acted, multimodal and multispeaker dataset with five sessions. Following \cite{pepino21_interspeech}, we use four emotion classes: anger, happiness, sadness and neutral. We relabeled excitement utterances as happiness and discarded utterances from the other classes, which is common practice for IEMOCAP. As a result, the numbers of utterances labeled angry, happy, sad and neutral were 1103, 1636, 1084 and 1708 respectively. A 5-fold cross-validation (CV) configuration, leaving one session out as the test set in each fold, was used to evaluate our models, since this is the standard evaluation protocol for IEMOCAP. It is worth mentioning that there is no fixed manner of determining the validation set \cite{DBLP:journals/corr/abs-1802-05630}, so we randomly selected 10\% of the utterances in the training set as the validation set in each fold.
\subsection{Settings and Metrics}
Cross-entropy loss was adopted as our loss function, and the Adam optimizer~\cite{kingma2014adam} was applied with a learning rate of 1e-3 for the feed-forward layer-based models and 5e-5 for the transformer-based models. The models were trained with a batch size of 32, and early stopping was applied. Dropout~\cite{srivastava2014dropout} with a probability of 0.2 was applied after every feed-forward layer except the output layer to prevent overfitting. We use the wav2vec~2.0-base and BERT-base uncased models, both of which produce 768-dimensional embeddings. All speech samples are normalized by global normalization, which is a frequently used setting for this dataset.
We adopted the widely used unweighted accuracy (UA) as our evaluation metric. The reported results are averages over 25 experiments (5-fold cross-validation repeated 5 times).
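UA is commonly computed as the recall averaged over classes, so that each emotion contributes equally regardless of its number of utterances; a sketch consistent with that definition:
\begin{verbatim}
import numpy as np

def unweighted_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [(y_pred[y_true == c] == c).mean()
               for c in np.unique(y_true)]
    return float(np.mean(recalls))
\end{verbatim}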
\begin{table}[]
\caption{Unweighted accuracy (UA) of the 5-fold CV results using single modality.}
\label{single}
\centering
\begin{tabular}{c|c|c}
\toprule
\textbf{Embeddings} & \textbf{Linear} & \textbf{Transformer Encoder} \\ \midrule
Text & \textbf{68.12}\% & \textbf{67.29}\% \\
Frame & 65.44\% & 62.85\% \\
Phone & 64.24\% & 62.79\% \\
Syllable & 63.86\% & 63.26\% \\
Word & 64.48\% & 62.38\% \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Results of the Single Modality Models}
Table \ref{single} shows the single-modality results. "Linear" denotes the linear model used in \cite{pepino21_interspeech}, which is also adopted in every branch of our late fusion model. The "Transformer Encoder" has three transformer encoder layers followed by two feed-forward layers, and is used for comparison with our coattention-based early fusion model. We observe that the linear models outperform the transformer encoder models for every embedding type. This is consistent with the observation in~\cite{siriwardhana2020jointly}, and we suspect it is caused by overfitting. It can also be seen that the text embeddings always outperform the speech embeddings. This is because this dataset contains few complex contexts, such as sarcasm, in which speech is more effective. Among the speech embeddings, frame-level embeddings perform better in the linear models, while syllable-level embeddings perform better in the transformer encoder models. The segment-level speech embeddings carry more prosody-related information, and can therefore provide additional information when combined with the frame-level speech embeddings and text embeddings; this will be validated in the next subsection. It is worth noting that the configuration using frame-level speech embeddings reproduces the approach of~\cite{pepino21_interspeech}, and its UA (65.44\%) is very close to the value (65.80\%) reported in~\cite{pepino21_interspeech}.
\begin{table}[]
\caption{Unweighted accuracy (UA) of the 5-fold CV results of coattention-based early fusion and late fusion models with different input embeddings. F - frame, P - phone, S - syllable, W - word, Coattention - coattention-based early fusion}
\label{main_result}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{\textbf{Embeddings}} & \multicolumn{2}{c}{\textbf{Fusion}} \\ \midrule
\multicolumn{1}{c|}{\textbf{Text}} & \textbf{Speech} & \multicolumn{1}{c|}{\textbf{Coattention}} & \textbf{Late Fusion} \\ \midrule
\multicolumn{1}{c|}{\multirow{8}{*}{BERT}} & F & \multicolumn{1}{c|}{74.28\%} & 74.88\% \\ \cline{2-4}
\multicolumn{1}{c|}{} & F+P & \multicolumn{1}{c|}{\textbf{74.57}\%} & 75.15\% \\
\multicolumn{1}{c|}{} & F+S & \multicolumn{1}{c|}{74.05\%} & \textbf{75.80}\% \\
\multicolumn{1}{c|}{} & F+W & \multicolumn{1}{c|}{74.27\%} & 75.22\% \\ \cline{2-4}
\multicolumn{1}{c|}{} & F+P+S & \multicolumn{1}{c|}{73.35\%} & 74.67\% \\
\multicolumn{1}{c|}{} & F+P+W & \multicolumn{1}{c|}{73.96\%} & 75.52\% \\
\multicolumn{1}{c|}{} & F+S+W & \multicolumn{1}{c|}{73.53\%} & 75.59\% \\ \cline{2-4}
\multicolumn{1}{c|}{} & F+P+S+W & \multicolumn{1}{c|}{73.64\%} & 74.60\% \\
\bottomrule
\end{tabular}}
\vspace{-0.5cm}
\end{table}
\subsection{Results of the Multi-level Fusions}
Table~\ref{main_result} shows the results of our coattention-based early fusion model using Equation~\ref{earlyfusioneqation} and our late fusion model using Equation~\ref{latefusionequation} with different combinations of segment-level speech embeddings. Compared to the unimodal results in Table~\ref{single}, any combination yields a substantial performance improvement, demonstrating that the two modalities are indeed complementary on this task. It can also be seen from Table~\ref{main_result} that the late fusion models generally give better results than the early fusion models. The multi-granularity framework works quite well in the late fusion model, and in the early fusion model the best segment-level configuration also achieves better results than inputs without segment-level speech embeddings. For both fusions, adding more embeddings does not always lead to better performance; thus the improvement brought by the introduction of segment-level speech embeddings does not result from an ensemble-learning effect, but from the introduction of prosodic information that is relevant to emotion recognition.
\begin{table}[]
\caption{Performance comparison between our models and
state-of-the-art multimodal models on the IEMOCAP dataset. F - frame, P - phone, S - syllable, W - word, Concatenation - concatenation-based early fusion, Coattention - coattention-based early fusion}
\label{sota}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccc|c}
\toprule
\multicolumn{3}{c|}{\textbf{Proposed Methods}} & \multirow{2}{*}{\textbf{UA}} \\ \cline{1-3}
\multicolumn{2}{c|}{\textbf{Embeddings}} & \textbf{Fusion} & \\ \midrule
\multicolumn{1}{c|}{\multirow{4}{*}{BERT}} & \multicolumn{1}{c|}{F+P} & Concatenation & 71.29\% \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{F+P} & Coattention & 74.57\% \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{F+S} & Late Fusion & 75.80\% \\
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{F+P+S} & Coattention and Late Fusion & \textbf{76.31}\% \\ \toprule
\multicolumn{3}{c|}{\textbf{Baseline Methods}} & \textbf{UA} \\ \midrule
\multicolumn{3}{l|}{GBAN (Liu et al. \cite{liu20b_interspeech}, 2020)} & \multicolumn{1}{l}{70.08\%} \\
\multicolumn{3}{l|}{STSER (Chen et al.\cite{chen20b_interspeech}, 2020)} & \multicolumn{1}{l}{72.05\%} \\
\multicolumn{3}{l|}{Krishna et al. \cite{n20_interspeech}, 2020} & \multicolumn{1}{l}{72.82\%} \\
\multicolumn{3}{l|}{Makiuchi et al. \cite{makiuchi2021multimodal}, 2021} & \multicolumn{1}{l}{73.00\%} \\
\multicolumn{3}{l|}{Kumar et al. \cite{kumar21d_interspeech}, 2021} & \multicolumn{1}{l}{75.00\%} \\
\multicolumn{3}{l|}{Liu et al. \cite{LIU20221}, 2022} & \multicolumn{1}{l}{75.05\%} \\ \bottomrule
\end{tabular}}
\vspace{-0.5cm}
\end{table}
\subsection{Comparison of State-of-the-art Approaches and the Proposed Approaches}
This subsection further validates the effectiveness of the proposed approaches. The first block of Table~\ref{sota} first validates the coattention-based early fusion approach: with the frame-level and phone-level speech embeddings, coattention gives a clear performance gain over concatenation-based fusion, improving the UA by about 3\%. We provide the comparison between coattention and concatenation-based early fusion only for the frame-level plus phone-level configuration due to space limitations, but the results are similar in the remaining cases. We also copy the best results of coattention-based early fusion (74.57\%) and late fusion (75.80\%) from Table \ref{main_result}. Combining the best configurations of the early (F+P) and late (F+S) fusions, so that the embeddings used are F+P+S, gives a further 0.5\% UA increase (76.31\%) over the best-performing late fusion model. This shows that combining the multimodal models at multiple levels improves the performance.
In the second block of Table~\ref{sota}, the proposed approaches are compared with other state-of-the-art multimodal speech-text emotion recognition approaches. For a fair comparison, all the approaches are based on the 5-fold cross-validation configuration and use the same IEMOCAP preprocessing. Our model using text and only frame-level speech embeddings already surpasses most baseline approaches, demonstrating the advantage of combining wav2vec~2.0 and BERT. In addition, the best multi-level fusion surpasses the best baseline approach by 1.3\% UA.
\section{Conclusion and Future Work}
In this paper, we have proposed to leverage state-of-the-art wav2vec~2.0 and BERT embeddings within a multi-level fusion framework to mitigate the issue of data sparsity in multimodal emotion recognition, and we have also explored a multi-granularity framework. Our best fusion configuration achieves 76.31\% UA for 5-fold CV on IEMOCAP. In the future, we plan to explore fine-tuning wav2vec 2.0 and BERT within our model, whereas in this paper they are kept fixed and used only as feature extractors.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec1}
Chiral Yukawa models on the lattice
are interacting fermion-Higgs field theories
with a chiral symmetry, which are useful for studying the Standard Model
on the lattice in the absence of gauge couplings
\cite{Shige-lat}.
The phase structure of such models, in particular
the location of fixed points and second order phase transition lines, is
of interest for the definition of corresponding continuum field theories.
The action of a chiral Yukawa model is of the generic form
\begin{eqnarray}
S &=& -\frac k2 \sum_{x,\mu} \mbox{Tr}\ [\Phi_x^+ \Phi_{x+\hat\mu}
+ \mbox{h.c.} ]
\nonumber \\
&&\mbox{}+ y \sum_x \overline\Psi_x (P_L \Phi_x^+
+ P_R \Phi_x) \Psi_x
\label{S} \\
&&\mbox{+ fermion kinetic term + Higgs potential} , \nonumber
\end{eqnarray}
the detailed definition of these terms varying from one model to the
other.
With `triviality' in mind, we take the bare Higgs
self-coupling equal to $\infty$, which fixes the radial mode of the
Higgs field $\Phi$.
The phase diagram also depends on the details of the model, but the
phase structure in the small to intermediate-$y$ region is similar
in most models.
We shall concentrate on this part of the phase diagram (see Fig.~\ref{fig1}),
which is sufficient for our purposes, but the considerations
presented here are also valid for critical points at large-$y$,
where applicable.
There are paramagnetic (PM), ferromagnetic (FM), antiferromagnetic (AM)
and possibly ferrimagnetic (FI) phases, characterized by the vacuum
expectation values of the spatial average and staggered average (this
means that fields at odd sites contribute with a minus sign) of
the Higgs field and the fermion condensate.
In the context of chiral Yukawa models, phases where both the Higgs field
and its staggered analogue have non-zero expectation value
are known as FI phases \cite{Shigemitsu}.
In recent years, various kinds of Yukawa models, based on different
lattice fermion actions and with different symmetry groups, have been
studied both using mean-field methods and numerically \cite{Shige-lat}.
Here we shall concentrate on the mean-field approach \cite{MF}.
Because of the difficulty of dealing with the fermion determinant exactly,
mean-field techniques are often applied in combination with small or
large-$y$ expansions.
In Ref.\ \cite{Zaraphase} this type of analysis was extended to FI phases.
Although FI phases start off at values of $y$ where one might question the
validity of the small-$y$ expansion involved, it was argued that
the real expansion parameter is the quantity
$y\langle\overline\Psi\Psi\rangle$, which remains small as long as one keeps
close to the PM phase.
Furthermore, the phase structure of the models studied, obtained from
numerical simulation, appeared to be well-described by the mean-field results.
In a recent publication \cite{tomzen},
mean-field calculations were presented for a
general class of chiral Yukawa models with different symmetry groups
and lattice fermion actions.
(The details of this approach differ somewhat from other
applications of mean-field methods in the truncation of the fermion
determinant, but this is of no importance here.)
The authors claim that, in general,
if the PM--FM and PM--AM phase transition lines
intersect at a point $A$ (cf.\ Fig.~\ref{fig1}),
then they must necessarily continue beyond the point $A$ with smooth first
derivatives at $A$. There would always be
a FI phase, with the PM--FM line continuing as an AM--FI phase transition
line and the PM--AM line as an FM--FI line.
(Everything within the mean-field approximation.)
This is in contrast with the results of Refs.\ \cite{Zaraphase,ZaraphaseR},
where discontinuities were observed in the slopes of the phase transition
lines at $A$, with indications in one model \cite{ZaraphaseR} that a FI
phase is absent altogether. (Again, within the mean-field approximation.)
These discrepancies motivated us to carry out the present study,
which may however have a wider applicability.
We analyse the phase structure around the point $A$ from a general
point of view, in the mean-field approximation.
It is found that
the first derivatives of the second order phase transition lines,
assuming that they intersect, are in general {\em discontinuous\/}
at $A$, as in Fig.~\ref{fig1}, and no conclusion can
be drawn {\em a priori\/} about the existence of a FI phase.
After presenting a general demonstration in Sec.\ \ref{sec2}, we
illustrate the results with a simple example in Sec.\ \ref{sec3}.
For definiteness we keep in mind a phase structure as in
Fig.~\ref{fig1}.
\section{Phase structure at $A$}
\label{sec2}
In the mean-field approximation,
the phase of the system at a point $(y,k)$ is determined by
minimizing the free energy $F$ with respect to a number of mean fields
$h^i$ and staggered mean fields $h_s^i$ $(i=1,\ldots,N)$,
collectively denoted as $h$, $h_s$.
($F$ is a function of $h$ and $h_s$ with coefficients depending on $y$ and
$k$.)
This gives the mean-field equations
\begin{equation}
\frac{\partial F}{\partial h} \ =\ 0,\ \ \ \ \ \ \
\frac{\partial F}{\partial h_s} \ =\ 0.
\label{mfeqs}
\end{equation}
A solution of these equations corresponds to a (local) minimum
of $F$ if the matrix $F''$
of second derivatives at the solution is positive definite.
A second order phase transition occurs when one of the eigenvalues of
this matrix goes through zero: a negative mode develops, destabilizing
the original solution and replacing it by one with a lower free energy,
belonging to a different phase.
Therefore, the condition for a second order phase transition line is
given by the additional equation
\begin{equation}
\det F'' \ =\ 0.
\label{detzero}
\end{equation}
If there is only one mean field $h$ and one staggered mean field $h_s$
this becomes\footnote{There is a square missing in the corresponding
formula (17) of Ref.~\cite{tomzen}.}
\begin{equation}
\frac{\partial^2 F}{\partial h^2} \
\frac{\partial^2 F}{\partial h_s^2} \ -\
\left( \frac{\partial^2 F}{\partial h \; \partial h_s} \right)^2 \ =\ 0.
\label{detzero1}
\end{equation}
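As a quick symbolic check of this condition, one can form the Hessian with a computer algebra system; the following sketch (sympy, with a generic $F$) reproduces the left-hand side of Eq.\ (\ref{detzero1}), including the square on the mixed derivative noted in the footnote:
\begin{verbatim}
import sympy as sp

h, hs = sp.symbols('h h_s')
F = sp.Function('F')(h, hs)

# Hessian of F with respect to (h, h_s); its determinant vanishing
# is the second order transition condition
H = sp.Matrix([[sp.diff(F, h, 2), sp.diff(F, h, hs)],
               [sp.diff(F, h, hs), sp.diff(F, hs, 2)]])
print(H.det())
\end{verbatim}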
In the chiral Yukawa models under consideration, the free energy $F$
is symmetric under a sign change of all the
fields $h^i$ at the same time or all the fields $h_s^i$ at the same time.
This implies the absence of terms containing odd powers of $h$ or $h_s$ in
the Taylor expansion of $F$, such as $h h_s$.
Furthermore, at the four phase transition lines under study here,
either all the $h^i=0$ or all the $h_s^i=0$ or both.
Hence $\partial^2 F/\partial h^i \partial h_s^j = 0$ there
and Eq.\ (\ref{detzero}) simplifies,
\begin{equation}
\det \left( \frac{\partial^2 F}{\partial h^2} \right) \
\det \left( \frac{\partial^2 F}{\partial h_s^2} \right) \ =\ 0.
\label{detzero2}
\end{equation}
Which of the two factors in this expression is zero depends on which
phase transition line one considers; at the intersection point $A$
both factors are zero.
For definiteness we shall focus on the PM--FM and AM--FI transitions.
Then condition (\ref{detzero2}) becomes
\begin{equation}
\det\frac{\partial^2 F}{\partial h^2} (h=0, h_s) \ =\ 0,
\label{detzero3}
\end{equation}
since in this case the unstable mode is in the direction of the $h$ fields.
Along the PM--FM line, $h_s=0$ and the solution to Eq.\ (\ref{detzero3})
is readily obtained.
Along the AM--FI line, however, $h_s\neq 0$. One has to solve
Eqs.\ (\ref{mfeqs})
and (\ref{detzero3}) simultaneously,
to find $h_s(y,k)$ and the function $k_c(y)$ parametrizing the line.
This procedure was applied numerically for phase diagrams of chiral Yukawa
models based on the Zaragoza proposal \cite{Zaraprop} with the `most local'
and `Roma I' \cite{RomaI} fermion actions \cite{Zaraphase,ZaraphaseR}.
Here, we are especially interested in
the behaviour of the free energy $F$
in the vicinity of the point $A$. In this region, close to the PM phase,
the mean fields are
small and one can study the phase structure by considering an expansion
in $h$ and $h_s$.
In Ref.~\cite{tomzen} it is claimed that it is
sufficient for this purpose
to know $F$ up to quadratic terms in $h$ and $h_s$.
The following calculation of the derivatives of the phase transition
lines at $A$ will show, however, that quartic terms
(or, in their absence, higher order terms) are indispensable;
neglecting them leads to incorrect statements
about a possible FI phase near $A$.
The PM--FM and AM--FI phase transition lines are given by a continuous
function $k_c(y)$ whose derivative we wish to determine.
Defining
\begin{equation}
f \ \equiv \
\det\left.\frac{\partial^2 F}{\partial h^2}\right|_{h=0},
\label{fdef}
\end{equation}
we can write Eq.\ (\ref{detzero3}) for these lines as
\begin{equation}
f(y,k_c(y),h_s^2(y,k_c(y))) = 0.
\label{fcond}
\end{equation}
Taking the derivative of this equation in the direction tangential to
the line we find
\begin{equation}
0 = \frac{df}{dy} = \frac{\partial f}{\partial y}
+ \frac{\partial f}{\partial k} \frac{dk_c}{dy}
+ \frac{\partial f}{\partial h^2_s} \left(
\frac{\partial h^2_s}{\partial y}
+ \frac{\partial h^2_s}{\partial k} \frac{dk_c}{dy} \right),
\label{dfcond}
\end{equation}
leading to a slope
\begin{equation}
\frac{dk_c}{dy} = -\left(
\frac{\partial f}{\partial y}
+ \frac{\partial f}{\partial h^2_s}
\frac{\partial h^2_s}{\partial y} \right) \left/ \left(
\frac{\partial f}{\partial k}
+ \frac{\partial f}{\partial h^2_s}
\frac{\partial h^2_s}{\partial k} \right) \right. .
\label{slope}
\end{equation}
On the left hand (PM--FM) side of point $A$, $h^2_s=0$
and Eq.\ (\ref{slope}) reduces to
\begin{equation}
\left( \frac{dk_c}{dy} \right)_{\rm PM-FM} \ =\
-\frac{\partial f}{\partial y} \left/
\frac{\partial f}{\partial k} \right. .
\label{slope2}
\end{equation}
On the AM--FI side, however, it is reasonable to
assume that $h_s^2$ vanishes linearly in
$y-y_A$ and $k-k_A$ as $A$ is approached, in accordance with a mean-field critical exponent
of $1/2$ for $h_s$ in this region
(cf.\ the example in Sec.\ \ref{sec3}).
It follows that both the numerator and the denominator of Eq.\ (\ref{slope})
receive an additional non-zero contribution on this side, so that
$dk_c/dy$ is discontinuous at $A$.
A similar analysis can be carried out for the change of slope between
the PM--AM and FM--FI lines.
We emphasize, however, that the AM--FI and FM--FI lines thus obtained
(at least, infinitesimally close to $A$)
remain `candidate phase transition lines' only until free energy
considerations establish that the phases on both sides of the lines
correspond to absolute minima of the free energy.
The slope of the AM--FI line suggested by these calculations may,
for example, come out bigger than that of the FM--FI line.
(This does in fact happen, at least to lowest order in the Yukawa
coupling $y$, in a chiral Yukawa model based
on the Roma I action \cite{ZaraphaseR}.)
In that case,
a comparison of free energy values should indicate which
of the calculated lines do or do not correspond to a true second order
phase transition.
\section{A simple example}
\label{sec3}
We would like to illustrate these results with a simple model of
one mean field $h$ with its staggered analogue $h_s$,
described by a quartic free energy $F$,
\begin{equation}
F = -\frac{1}{2} a h^2 - \frac{1}{2} b h_s^2 + \frac{1}{4} c h^2 h_s^2
+ \frac1{24} d h^4 + \frac1{24} e h_s^4 ,
\label{Fexp}
\end{equation}
where the parameters $a,\ldots,e$ are well-behaved
functions of $y$ and $k$.
We assume the presence of PM, FM and AM phases as in Fig.~\ref{fig1},
{\em i.e.}, $a$ increases and
$b$ decreases with increasing $k$, and $a(y,k)$ and $b(y,k)$
are such that the PM--FM and PM--AM lines meet in a point $A$.
Furthermore, for stability of $F$, we require $d>0$, $e>0$, $c>-\sqrt{de}/3$.
Apart from these stability conditions, one can consider this model as the
expansion of a general free energy in the neighbourhood of the point $A$
up to quartic terms in the fields.
At those points of the $(y,k)$-plane where
both $a<0$ and $b<0$, the mean-field equations (\ref{mfeqs}) imply that
the system is in a PM phase with $h=h_s=0$ and free energy normalized to
zero.
If $a>0$ or $b>0$ we are in one of the broken phases, FM, AM or FI.
Condition (\ref{detzero3}) for the PM--FM and AM--FI second order
phase transition lines becomes
\begin{equation}
-a + \frac{1}{2} c h_s^2 = 0.
\label{detzero4}
\end{equation}
Along the PM--FM line this becomes simply $a=0$, whereas along the AM--FI
line we need the value of $h_s$ which follows from the mean-field equations
(\ref{mfeqs}), taken at $h=0$,
\begin{equation}
-b h_s + \frac16 e h_s^3 = 0.
\label{hseq}
\end{equation}
At the point $A$,
both $a=0$ and $b=0$,
whereas in general $c$, $d$
and $e$ are non-zero. Close to $A$ we can therefore write,
\begin{eqnarray}
a(y,k) &\ =\ & a_y (y-y_A) + a_k (k - k_A) + h.o., \nonumber \\
b(y,k) &\ =\ & - b_y (y-y_A) - b_k (k - k_A) + h.o., \nonumber \\
c(y,k) &\ =\ & c_0 + h.o., \nonumber \\
d(y,k) &\ =\ & d_0 + h.o., \nonumber \\
e(y,k) &\ =\ & e_0 + h.o.,
\label{abcde-exp}
\end{eqnarray}
where $h.o.$ stands for higher orders in the $y-y_A,k-k_A$ expansion.
The signs in front of $a_k$ and $b_k$ have been chosen such that both
are positive.
In terms of these coefficients, the slopes of the PM--FM and PM--AM lines
at $A$ are
(cf.\ Eq.\ (\ref{slope2}))
\begin{eqnarray}
R_{\rm PM-FM}
&\ \equiv\ &\left( \frac{dk_c}{dy} (A) \right)_{\rm PM-FM}
\ =\ -\frac{a_y}{a_k},
\label{rFMPM} \\
R_{\rm PM-AM} &\ =\ & -\frac{b_y}{b_k}.
\label{rAMPM}
\end{eqnarray}
In order to find the AM--FI slope at $A$ we solve
Eqs.\ (\ref{detzero4}) and (\ref{hseq}) to lowest order in $y-y_A$ and $k-k_A$.
We find
\begin{equation}
h_s^2 = -\frac6{e_0} (b_y (y-y_A) + b_k (k-k_A)) + h.o.,
\label{hssol}
\end{equation}
corresponding to a critical exponent $1/2$ for $h_s$
as mentioned earlier,
and
\begin{equation}
R_{\rm AM-FI} = -\frac{a_y + 3(c_0/e_0) b_y}{a_k + 3(c_0/e_0) b_k}
\label{rAMFI}
\end{equation}
(cf.\ Eq.\ (\ref{slope})).
It is interesting to express Eq.\ (\ref{rAMFI}) as a `weighted average' of
the PM--FM and PM--AM slopes:
\begin{equation}
R_{\rm AM-FI} = \frac{e_0 a_k R_{\rm PM-FM} + 3 c_0 b_k
R_{\rm PM-AM}}{e_0 a_k + 3 c_0 b_k}.
\label{rAMFI2}
\end{equation}
Similarly, we find for the FM--FI slope
\begin{equation}
R_{\rm FM-FI} = \frac{3c_0 a_k R_{\rm PM-FM} + d_0 b_k
R_{\rm PM-AM}}{3c_0 a_k + d_0 b_k}.
\label{rFMFI2}
\end{equation}
The discontinuities in the slopes are evident.
Only for $c=0$ are the first derivatives of these phase transition lines
continuous (ignoring special cases like $c\neq 0$, $c_0=0$).
This corresponds to the trivial case that there is no coupling
between $h$ and $h_s$, as mentioned earlier.
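These slopes can be verified with a short computer algebra sketch (sympy; variable names are ours and the point $A$ is shifted to the origin), which reproduces the AM--FI slope (\ref{rAMFI}):
\begin{verbatim}
import sympy as sp

y, k = sp.symbols('y k')
ay, ak, by, bk, c0, e0 = sp.symbols('a_y a_k b_y b_k c_0 e_0',
                                    positive=True)
a = ay * y + ak * k            # expansions around A
b = -by * y - bk * k

# AM--FI side: h_s^2 from the staggered mean-field equation at h = 0,
# then -a + c0*h_s^2/2 = 0 defines the critical line k_c(y)
hs2 = 6 * b / e0
kc = sp.solve(sp.Eq(-a + c0 * hs2 / 2, 0), k)[0]
print(sp.simplify(sp.diff(kc, y)))   # the AM--FI slope quoted above
\end{verbatim}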
Note that in the limit $e_0 \rightarrow 0$ Eq.\ (\ref{rAMFI2}) tends
to $R_{\rm AM-FI} =
R_{\rm PM-AM}$, while in the limit $d_0 \rightarrow 0$, which can be taken
simultaneously, Eq.\ (\ref{rFMFI2}) leads to $R_{\rm FM-FI} = R_{\rm PM-FM}$.
In this case the AM--FI and FM--FI lines would be `interchanged'
compared with the $c=0$ case.
This obviously indicates that at least one of the calculated `candidate
transition lines' does not represent a genuine phase transition, and
additional free energy considerations must determine the real
nature of the transitions, as discussed before.
It is also instructive to consider the AM--FI and FM--FI slopes as a
function of $c_0$.
In the limit that $c_0$ approaches its `stability lower bound'
$-\sqrt{d_0 e_0}/3$, these lines form a 180 degree angle. Upon
increasing $c_0$ this angle decreases, until it vanishes
for $c_0 = \sqrt{d_0 e_0}/3$. For still larger values of $c_0$,
corresponding to strong coupling between $h$ and $h_s$ in the free energy,
the FI phase disappears (at least in a neighbourhood of the point $A$;
higher order terms in Eqs.\ (\ref{abcde-exp}) may give rise to a FI phase
a little farther out, as in Fig.~\ref{fig2}), and instead there is a
first order transition separating the AM and FM phases.
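The qualitative picture can also be checked by brute-force minimization of the quartic free energy; the sketch below (our own code, with illustrative parameter values $d=e=6$) classifies the phase at a given $(a,b)$ and shows the FI phase disappearing once $c$ exceeds $\sqrt{de}/3$:
\begin{verbatim}
import numpy as np

def phase(a, b, c, d=6.0, e=6.0, n=200):
    # brute-force minimization of F over a grid of (h, h_s) >= 0,
    # which suffices because F is even in h and in h_s separately
    g = np.linspace(0.0, 3.0, n)
    H, Hs = np.meshgrid(g, g)
    F = (-0.5 * a * H**2 - 0.5 * b * Hs**2
         + 0.25 * c * H**2 * Hs**2 + d * H**4 / 24 + e * Hs**4 / 24)
    i, j = np.unravel_index(F.argmin(), F.shape)
    hm, hsm = H[i, j], Hs[i, j]
    if hm > 1e-3 and hsm > 1e-3:
        return 'FI'
    return 'FM' if hm > 1e-3 else ('AM' if hsm > 1e-3 else 'PM')

# sqrt(d*e)/3 = 2 here: a FI phase for c = 1, none for c = 10
print(phase(0.5, 0.5, c=1.0), phase(0.5, 0.5, c=10.0))
\end{verbatim}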
\section{Conclusion}
\label{sec4}
For determining the phase structure close to the point $A$ where several
phases meet, in a mean-field approximation, it is not sufficient to
expand the free energy up to terms quadratic in the mean fields.
Quartic terms are crucial for determining the slopes of
the transition lines enclosing the FI phase, and in general the slopes
are discontinuous at $A$.
Such behaviour was observed in mean-field studies
of phase diagrams of chiral Yukawa models \cite{Zaraphase,ZaraphaseR},
where the (candidate) transition
lines bordering the FI phase were determined numerically.
These results explain the discrepancy signalled in Sec.\ \ref{sec1}.
The authors of Ref.~\cite{tomzen} incorrectly assumed that it is sufficient
to consider the free energy up to quadratic terms in the mean fields.
As a consequence, the conclusions drawn there about the presence of
FI phases and the
transition lines separating them from the FM and AM phases are incorrect.
However, the results for the PM--FM and PM--AM lines are not affected.
We emphasize that these conclusions do not depend on the approximations
made in the treatment of the fermion determinant in chiral Yukawa models.
The actual location and slopes of the various phase transition lines do,
however, because different approximations lead to different
`effective' free energies.
Hence, improved approximations of the fermion determinant may change
the conclusion about the existence of a FI phase, apart from the
limitations of the mean-field approximation.\\
This work was supported by EC contracts ERBCHBICT941067 and
CHRX-CT92-0051, by DGICYT (Spain) and by Acci\'on Integrada
Hispano-francesa HF94-150B.
\section{Introduction}
In recent years higher dimensional gravity has been attracting much interest. Apart from the fact
that higher dimensional gravity is interesting in its own right, the increasing number of works
devoted to the study of higher dimensional spacetimes is inspired by string theory and
the brane-world scenario with large extra dimensions.
Gravity in higher dimensions exhibits much richer dynamics and a richer spectrum of
solutions than in four dimensions. One of the most reliable routes towards a better understanding of higher
dimensional gravity and related topics is through exact solutions. For example, the recently discovered
exact black ring solutions with unusual horizon topology \cite{ER1,ER2} demonstrated explicitly
that 5D Einstein gravity
exhibits unexpected features completely absent in four dimensions. It was shown in \cite{ER2} that both
the black hole and the black ring can carry the same conserved charges, the mass and a single angular
momentum, and therefore there is no uniqueness theorem in five dimensions.
Moreover, black rings can also carry nonconserved charges which can be varied continuously without
altering the conserved charges. This fact leads to
continuous (classical) non-uniqueness \cite{EMP}.
The higher dimensional solutions found so far are not numerous. To the best of our knowledge,
no exact EM solutions describing rotating charged black holes in higher dimensions have yet been
found in the literature, although some numerical solutions were recently constructed in \cite{Kunz1} (see also \cite{Kunz2}).
Moreover, the systematic construction
of new solutions in higher dimensions has not been accomplished to the same extent as in the 4D case.
It is well known that both the vacuum and electrovacuum 4D Einstein equations are completely integrable
when restricted to spacetimes with a two-dimensional Abelian group of isometries \cite{RG}-\cite{N}. This nice property
is also shared by some effective string gravity models (or certain sectors of them), which allows us
to find many families of physically interesting exact solutions \cite{B}-\cite{YU}.
The $D$-dimensional vacuum
Einstein equations with a $(D-2)$-dimensional Abelian group of isometries are completely integrable, too \cite{DM2},\cite{POM}.
The aim of this work is to make a step towards the systematic construction of exact solutions in 5D
EM gravity. We show here that a certain sector of EM gravity is completely integrable.
We also present an explicit method for generating exact 5D EM solutions from known solutions of the 5D vacuum Einstein equations.
As an illustration of the method we derive explicitly a new rotating
six-parameter 5D EM solution which includes the dipole black ring solution as a particular case.
\section{Dimensional reduction, coset presentation and complete integrability }
The 5D EM gravity is described by the field equations
\begin{eqnarray}\label{EMFE}
&&R_{\mu\nu} = {1\over 2} \left(F_{\mu\lambda}F_{\nu}^{\,\lambda}
- {1\over 6} F_{\sigma\lambda}F^{\sigma\lambda} g_{\mu\nu}\right), \\
&&\nabla_{\mu} F^{\mu\nu} = 0 \nonumber.
\end{eqnarray}
In this paper we consider 5D EM gravity in spacetimes with three commuting Killing vectors:
one timelike Killing vector $T$ and two spacelike Killing vectors $K_{1}$ and $K_{2}$. We also assume
that the Killing vector $K_{2}$ is hypersurface orthogonal.
In adapted coordinates in which $K_{2}=\partial/\partial Y$, the spacetime
metric can be written in the form
\begin{equation}
ds^2 = e^{2u}dY^2 + e^{-u} h_{ij}dx^idx^j
\end{equation}
where $h_{ij}$ is a $4$-dimensional metric with Lorentz signature. Both $u$ and $h_{ij}$
depend on the coordinates $x^i$ only. The electromagnetic field is taken in the form\footnote{Throughout this paper
we denote the Killing vectors and their naturally corresponding 1-forms by the same letter. }
\begin{equation}
F = dA_{Y}\wedge dY.
\end{equation}
After a dimensional reduction along the Killing vector $K_{2}$, the field equations (\ref{EMFE})
are reduced to the following effective 4D theory:
\begin{eqnarray}
&&{\cal D}_{i}{\cal D}^{i}u =
- {1\over 3} e^{-2u}h^{ij}{\cal D}_{i}A_{Y} {\cal D}_{j}A_{Y},\\
&&{\cal D}_{i}\left(e^{- 2u}{\cal D}^{i}A_{Y} \right) = 0, \\
&&R(h)_{ij}= {3\over 2}\partial_{i}u\partial_{j}u
+ {1\over 2}e^{-2u}\partial_{i}A_{Y}\partial_{j}A_{Y}.
\end{eqnarray}
Here ${\cal D}_{i}$ and $R(h)_{ij}$ are the covariant derivative and Ricci tensor with respect
to the Lorentz metric $h_{ij}$. Let us introduce the symmetric matrix $M_{1}$ given by
\begin{eqnarray}
M_{1} = \left(%
\begin{array}{cc}
e^{u} + {1\over 3}e^{-u}A^2_{Y} & {1\over \sqrt{3}} e^{-u}A_{Y} \\
{1\over \sqrt{3}} e^{-u}A_{Y} & e^{-u} \\\end{array}%
\right)
\end{eqnarray}
with $\det M_{1}=1$. Then the dimensionally reduced EM equations become
\begin{eqnarray}
&&{\cal D}_{i}\left[{\cal D}^{i}M_{1}M^{-1}_{1}\right]=0 ,\\
&&R_{ij}(h) = -{3\over 4} Tr\left[\partial_{i}M_{1}\partial_{j}M_{1}^{-1}\right].
\end{eqnarray}
These equations are yielded by the action
\begin{eqnarray}
S = {1\over 16\pi} \int d^4x \sqrt{h} \left[R(h) + {3\over 4}h^{ij} Tr\left(\partial_{i}M_{1}\partial_{j}M_{1}^{-1}\right) \right].
\end{eqnarray}
Clearly the action is invariant under the $SL(2,R)$ group where the group action is given by
\begin{eqnarray}
M_{1} \to GM_{1}G^{T},
\end{eqnarray}
$G \in SL(2,R)$. In fact the matrices $M_{1}$ parameterize an $SL(2,R)/SO(2)$ coset, so we obtain a non-linear
$\sigma$-model coupled to 4D Einstein gravity.
The next step is to further reduce the effective 4D theory along the Killing vectors $T$ and $K_{1}$.
In this connection it is useful to introduce the twist $\omega$ of the Killing vector $T$ defined by
\begin{eqnarray}\label{TD}
\omega = {1\over 2} \star (h)\left(T\wedge dT \right)
\end{eqnarray}
where $\star(h)$ is the Hodge dual with respect to the metric $h_{ij}$.
One can show that the Ricci 1-form ${\Re}_{h}[T]$ defined by
\begin{equation}
{\Re}_{h}[T] = R_{ij}(h)T^{j}dx^{i} ,
\end{equation}
satisfies
\begin{equation}
\star(h)\left( T\wedge {\Re}_{h}[T] \right) = d\omega .
\end{equation}
Obviously, in our case we have ${\Re}_{h}[T]=0$, i.e. $d\omega=0$. Therefore there exists (locally) a potential $f$ such that
\begin{equation}\label{EFORM}
\omega = df.
\end{equation}
In adapted coordinates for the Killing vectors $T=\partial/\partial t$ and $K_{1}=\partial/\partial X$,
and in the canonical coordinates $\rho$ and $z$ for the transverse space, the 4D metric $h_{ij}$ can be written in the form
\begin{eqnarray}
h_{ij}dx^idx^j = -e^{2U}\left(dt + {\cal A} dX \right)^2 + e^{-2U}\rho^2 dX^2 + e^{-2U}e^{2\Gamma}(d\rho^2 + dz^2).
\end{eqnarray}
For this form of the metric $h_{ij}$, combining (\ref{TD}) and (\ref{EFORM}), and after some algebra we find
that the twist potential $f$ satisfies
\begin{eqnarray}\label{TPS}
\partial_{\rho}f &=& -{1\over 2} {e^{4U}\over \rho} \partial_{z}{\cal A} ,\\
\partial_{z} f &=& {1\over 2} {e^{4U}\over \rho} \partial_{\rho}{\cal A}.
\end{eqnarray}
Before writing the 2D reduced equations we shall introduce the symmetric matrix
\begin{eqnarray}
M_{2} = \left(%
\begin{array}{cc}
e^{2U} + 4f^2e^{-2U} & 2fe^{-2U} \\
2fe^{-2U} & e^{-2U} \\\end{array}%
\right)
\end{eqnarray}
with $\det M_{2}=1$. Then the 2D reduced EM equations read
\begin{eqnarray}
&&\partial_{\rho}\left(\rho\partial_{\rho}M_{1} M^{-1}_{1} \right)
+ \partial_{z}\left(\rho\partial_{z}M_{1} M^{-1}_{1} \right) = 0 ,\\
&&\partial_{\rho}\left(\rho \partial_{\rho}M_{2}M^{-1}_{2} \right)
+ \partial_{z}\left(\rho \partial_{z}M_{2}M^{-1}_{2} \right) = 0 ,\\
\rho^{-1} \partial_{\rho}\Gamma &=&
- {1\over 8} \left[Tr\left(\partial_{\rho}M_{2}\partial_{\rho}M^{-1}_{2}\right)
- Tr\left(\partial_{z}M_{2}\partial_{z}M^{-1}_{2}\right) \right] \nonumber \\
&&- {3\over 8} \left[Tr\left(\partial_{\rho}M_{1}\partial_{\rho}M^{-1}_{1}\right)
- Tr\left(\partial_{z}M_{1}\partial_{z}M^{-1}_{1}\right) \right] ,\\
\rho^{-1} \partial_{z}\Gamma &=& - {1\over 4} Tr\left(\partial_{\rho}M_{2}\partial_{z}M^{-1}_{2}\right)
\nonumber \\
&& - {3\over 4} Tr\left(\partial_{\rho}M_{1}\partial_{z}M^{-1}_{1}\right).
\end{eqnarray}
As a result we find that the "field variables" $M_{1}$ and $M_{2}$ satisfy the equations of two
$SL(2,R)/SO(2)$ $\sigma$-models in two dimensions, modified by the presence of the factor $\rho$.
The system of equations for $\Gamma$
can be integrated once a pair of solutions of the two $\sigma$-models is known. Therefore,
the problem of generating solutions of the 5D EM equations with the described symmetries reduces to
solving the two $\sigma$-models.
It is well-known that the $\sigma$-model equations are completely integrable \cite{BZ1,BZ2}. This is a consequence
of the fact that the $\sigma$-model equations can be considered as the compatibility condition of
the linear differential equations (Lax-pair presentation)\cite{BZ1,BZ2}
\begin{eqnarray}\label{LPP}
D_{\rho} \Psi &=& {\rho {\cal U}+ \lambda V\over \lambda^2 + \rho^2} \Psi ,\\
D_{z} \Psi &=& {\rho V - \lambda {\cal U}\over \lambda^2 + \rho^2} \Psi \nonumber ,
\end{eqnarray}
where
\begin{eqnarray}
D_{\rho} = \partial_{\rho} + {2\lambda\rho \over \lambda^2 + \rho^2}\partial_{\lambda}, \,\,\,\,
D_{z} = \partial_{z} - {2\lambda^2 \over \lambda^2 + \rho^2}\partial_{\lambda}.
\end{eqnarray}
Here $V=\rho\partial_{z}M M^{-1}$, ${\cal U}=\rho\partial_{\rho}M M^{-1}$ and $\lambda$ is the complex
spectral parameter. The "wave function" $\Psi(\rho,z,\lambda)$ is
a complex matrix. The $\sigma$-model equations then follows from the compatibility condition
\begin{equation}
[D_{\rho}, D_{z}]\Psi = 0.
\end{equation}
The matrix $M$ can be found from the "wave function" $\Psi$ as $M(\rho,z)=\Psi(\rho,z,\lambda=0)$.
The inverse scattering transform (IST) method can be directly applied to (\ref{LPP}) to generate multisoliton
solutions. The dressing procedure allows us to generate new solutions from known ones. Since this dressing
technique is well known we will not discuss it here and refer the reader to \cite{BZ1,BZ2}.
In this paper we will not apply the IST method. In the next section we present a new and simple
solution-generating method which allows us to generate new 5D EM solutions from known solutions of the
5D vacuum Einstein equations.
\section{Solution construction}
Let us consider two solutions $M_{1}=M^{(1)}$ and $M_{2}=M^{(2)}$ of the $\sigma$-model equations
\begin{equation}
\partial_{\rho}\left(\rho\partial_{\rho}M M^{-1} \right)
+ \partial_{z}\left(\rho\partial_{z}M M^{-1} \right) = 0 .\\
\end{equation}
In addition let us denote by $\gamma^{(i)}$ the solution of the system
\begin{eqnarray}
\rho^{-1} \partial_{z}\gamma^{(i)} &=&
-{1\over 4} Tr\left(\partial_{\rho}M^{(i)}\partial_{z}{M^{(i)}}^{-1} \right), \\
\rho^{-1} \partial_{\rho}\gamma^{(i)} &=&
-{1\over 8} \left[Tr\left(\partial_{\rho}M^{(i)}\partial_{\rho}{M^{(i)}}^{-1} \right)
- Tr\left(\partial_{z}M^{(i)}\partial_{z}{M^{(i)}}^{-1} \right) \right].
\end{eqnarray}
Then we find for the metric function $\Gamma$
\begin{equation}\label{GGGM}
\Gamma = \gamma^{(2)} + 3\gamma^{(1)}.
\end{equation}
From a practical point of view it is more convenient to associate the $\sigma$-model solutions
$M^{(i)}$ with solutions of the vacuum Einstein equations\footnote{From now on all quantities
with subscript or superscript "E" correspond to the vacuum case.}
\begin{eqnarray}
ds^2_{E(i)} = e^{2u^{(i)}_{E}}dY^2 + e^{-u^{(i)}_{E}}\left [-e^{2{U^{(i)}_{E}}}\left(dt + {\cal A}^{(i)}_{E} dX \right)^2
\right. \\ \left. + e^{-2{U^{(i)}_{E}}}\rho^2 dX^2 + e^{-2{U^{(i)}_{E}}}e^{2\Gamma^{(i)}_{E}}(d\rho^2 + dz^2)\right]\nonumber ,
\end{eqnarray}
which correspond to the matrixes
\begin{eqnarray}
M^{(i)} = \left(%
\begin{array}{cc}
e^{2U^{(i)}_{E}} + 4\left(f^{(i)}_{E}\right)^2e^{-2U^{(i)}_{E}} & 2f^{(i)}_{E}e^{-2U^{(i)}_{E}} \\
2f^{(i)}_{E}e^{-2U^{(i)}_{E}} & e^{-2U^{(i)}_{E}} \\\end{array}%
\right) .
\end{eqnarray}
The metric function $\Gamma^{(i)}_{E}$ for the vacuum Einstein equations can be found from the equations
of $\Gamma$ by setting $A_{Y}=0$ in the matrix $M_{1}$. So we obtain
\begin{equation}\label{GGGMO}
\Gamma^{(i)}_{E} = \gamma^{(i)} + \Omega^{(i)}_{E}
\end{equation}
where $\Omega^{(i)}_{E}$ is a solution to the system
\begin{eqnarray}\label{OS}
\rho^{-1}\partial_{\rho}\Omega^{(i)}_{E} &=& {3\over 4}\left[\left(\partial_{\rho} u^{(i)}_{E}\right)^2
- \left(\partial_{z} u^{(i)}_{E}\right)^2 \right],\\
\rho^{-1}\partial_{z}\Omega^{(i)}_{E} &=& {3\over 2} \partial_{\rho} u^{(i)}_{E}\partial_{z} u^{(i)}_{E}.
\end{eqnarray}
We then find from (\ref{GGGM}) and (\ref{GGGMO}) that
\begin{equation}
\Gamma = \Gamma^{(2)}_{E} - \Omega^{(2)}_{E} + 3\left[\Gamma^{(1)}_{E} - \Omega^{(1)}_{E} \right].
\end{equation}
Comparing the matrixes $M_{1}$ and $M^{(1)}$ we obtain
\begin{eqnarray}
e^{2u} = e^{4U^{(1)}_{E}} ,\\
A_{Y} = 2\sqrt{3} f^{(1)}_{E},
\end{eqnarray}
where $f^{(i)}_{E}$ satisfies\footnote{Clearly, these equations are the restriction of (\ref{TPS}) to the vacuum case. }
\begin{eqnarray}
\partial_{\rho}f^{(i)}_{E} &=& -{1\over 2} {e^{4U^{(i)}_{E}}\over \rho} \partial_{z}{\cal A}^{(i)}_{E} ,\\
\partial_{z} f^{(i)}_{E} &=& {1\over 2} {e^{4U^{(i)}_{E}}\over \rho} \partial_{\rho}{\cal A}^{(i)}_{E}.
\end{eqnarray}
Once having the metric function $e^{2u}=g_{YY}$ we can write the EM metric
\begin{eqnarray}
ds^2 = e^{4U^{(1)}_{E}} dY^2 + e^{-2U^{(1)}_{E}} \left[ -e^{2U^{(2)}_{E}}\left(dt + {\cal A}^{(2)}_{E}dX \right)^2
+ e^{-2U^{(2)}_{E}}\rho^2 dX^2 \right. \nonumber \\
+\left. \left(e^{2\Gamma^{(1)}_{E}} \over e^{2\Omega^{(1)}_{E} + {2\over 3}\Omega^{(2)}_{E} } \right)^3
e^{-2U^{(2)}_{E}} e^{2\Gamma^{(2)}_{E}} (d\rho^2 + dz^2) \right].
\end{eqnarray}
Taking into account that
\begin{eqnarray}
g^{E(i)}_{00} &=& - e^{-u^{(i)}_{E}}e^{2U^{(i)}_{E}},\\
{\tilde g}^{E(i)}_{XX} &=& g^{E(i)}_{XX} - g^{E(i)}_{00}({\cal A}^{(i)}_{E})^2 = e^{-u^{(i)}_{E}}e^{-2U^{(i)}_{E}}\rho^2,\\
g^{E(i)}_{\rho\rho} &=& e^{-u^{(i)}_{E}}e^{-2U^{(i)}_{E}} e^{2\Gamma^{(i)}_{E}} ,
\end{eqnarray}
and
\begin{eqnarray}
e^{4U^{(i)}_{E}} &=& (g^{E(i)}_{00})^2 g^{E(i)}_{YY},\\
e^{2\Gamma^{(i)}_{E}} &=& |g^{E(i)}_{00}|g^{E(i)}_{YY} g^{E(i)}_{\rho\rho},
\end{eqnarray}
the metric can be presented in a more elegant form
\begin{eqnarray}
ds^2 &=& \left[|g^{E(1)}_{00}|\sqrt{g^{E(1)}_{YY}} \right]^2 dY^2
+ \left[\sqrt{g^{E(2)}_{YY}} \over |g^{E(1)}_{00}|\sqrt{g^{E(1)}_{YY}} \right]
\left[g^{E(2)}_{00}\left(dt + {\cal A}^{(2)}_{E}dX \right)^{2} +
{\tilde g}^{E(2)}_{XX}dX^2 \nonumber \right. \\ && \left. +
\left(|g^{E(1)}_{00}|g^{E(1)}_{YY} g^{E(1)}_{\rho\rho}
\over e^{2\Omega^{(1)}_{E}
+ {2\over 3}\Omega^{(2)}_{E}} \right)^3 g^{E(2)}_{\rho\rho} (d\rho^2 + dz^2) \right] .
\end{eqnarray}
Summarizing, we obtain the following important result formulated as a
proposition.
{\bf Proposition.} {\it Let us consider two solutions of the vacuum
5D Einstein equations }
\begin{eqnarray}
ds_{E(i)}^2 = g^{E(i)}_{YY} dY^2 + g^{E(i)}_{00}\left(dt + {\cal A}^{(i)}_{E}dX \right)^{2} +
{\tilde g}^{E(i)}_{XX}dX^2 + g^{E(i)}_{\rho\rho} (d\rho^2 + dz^2)
\end{eqnarray}
{\it Then the following give a solution to the 5D EM equations\footnote{More generally
we can take $A_{Y}=\pm 2\sqrt{3} f^{(1)}_{E} + const$ which is
obvious.} }
\begin{eqnarray}
ds^2 &=& \left[|g^{E(1)}_{00}|\sqrt{g^{E(1)}_{YY}} \right]^2 dY^2
+ \left[\sqrt{g^{E(2)}_{YY}} \over |g^{E(1)}_{00}|\sqrt{g^{E(1)}_{YY}} \right]
\left[g^{E(2)}_{00}\left(dt + {\cal A}^{(2)}_{E}dX \right)^{2} +
{\tilde g}^{E(2)}_{XX}dX^2 \nonumber \right. \\ && \left. +
\left(|g^{E(1)}_{00}|g^{E(1)}_{YY} g^{E(1)}_{\rho\rho}
\over e^{2\Omega^{(1)}_{E}
+ {2\over 3}\Omega^{(2)}_{E}} \right)^3 g^{E(2)}_{\rho\rho} (d\rho^2 + dz^2) \right] ,\\
A_{Y} &=& 2\sqrt{3} f^{(1)}_{E} ,
\end{eqnarray}
{\it where $f^{(1)}_{E}$ is a solution to the system }
\begin{eqnarray}\label{TPS1}
\partial_{\rho}f^{(1)}_{E} &=& -{1\over 2} {(g^{E(1)}_{00})^2 g^{E(1)}_{YY}\over \rho} \partial_{z}{\cal A}^{(1)}_{E} ,\\
\partial_{z} f^{(1)}_{E} &=& {1\over 2} {(g^{E(1)}_{00})^2 g^{E(1)}_{YY}\over \rho} \partial_{\rho}{\cal A}^{(1)}_{E},
\end{eqnarray}
{\it and $\Omega^{(i)}_{E}$ satisfy }
\begin{eqnarray}\label{OS1}
\rho^{-1}\partial_{\rho}\Omega^{(i)}_{E} &=& {3\over 16}\left[\left(\partial_{\rho} \ln\left( g^{E(i)}_{YY}\right)\right)^2
- \left(\partial_{z} \ln \left(g^{E(i)}_{YY}\right)\right)^2 \right],\\
\rho^{-1}\partial_{z}\Omega^{(i)}_{E} &=&
{3\over 8} \partial_{\rho} \ln \left(g^{E(i)}_{YY}\right)\partial_{z}\ln\left( g^{E(i)}_{YY}\right).
\end{eqnarray}
Let us also note that, in general, the exchange of the two sigma models $M^{(1)} \longleftrightarrow M^{(2)}$
leads to different EM solutions.
The presented proposition gives us a tool to generate new 5D EM solutions in a simple way from
known solutions of the vacuum 5D Einstein equations. The technical difficulties are
concentrated in finding $\Omega_{E}$ from (\ref{OS}) and $f_{E}$ from (\ref{TPS}).
Through the use of the proposition we can generate the "5D EM images" of all known solutions of the vacuum
5D Einstein equations with the symmetries we consider here.
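As a schematic illustration of how the proposition is used in practice, the sketch below (Python; all names are ours, the seed metric functions are assumed to be supplied as callables of $(\rho,z)$, and the potentials $f_{E}$, $\Omega_{E}$ are assumed to have been integrated beforehand) assembles the EM metric components and the gauge potential:
\begin{verbatim}
import numpy as np

def em_solution(seed1, seed2, f1, Om1, Om2, rho, z):
    # seed1, seed2: dicts of callables g_YY, g_00, g_XX_tilde,
    # g_rhorho, A for the two vacuum seeds; f1, Om1, Om2: the
    # integrated potentials f_E^(1), Omega_E^(1), Omega_E^(2)
    s1 = {k: fn(rho, z) for k, fn in seed1.items()}
    s2 = {k: fn(rho, z) for k, fn in seed2.items()}
    W = abs(s1['g_00']) * np.sqrt(s1['g_YY'])
    conf = (abs(s1['g_00']) * s1['g_YY'] * s1['g_rhorho']
            / np.exp(2 * Om1(rho, z) + 2 * Om2(rho, z) / 3)) ** 3
    return {
        'g_YY': W**2,
        'g_00': np.sqrt(s2['g_YY']) / W * s2['g_00'],
        'A_X': s2['A'],                      # the (t, X) mixing
        'g_XX_tilde': np.sqrt(s2['g_YY']) / W * s2['g_XX_tilde'],
        'g_rhorho': np.sqrt(s2['g_YY']) / W * conf * s2['g_rhorho'],
        'A_Y': 2 * np.sqrt(3) * f1(rho, z),
    }
\end{verbatim}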
It is not possible to present explicitly here the "EM images" of all
known vacuum Einstein solutions. Instead we demonstrate the application of the proposition for the case of rotating
neutral black rings, generating in this way a new rotating six-parameter EM solution
which includes the EM rotating dipole black ring solution as a particular case.
\section{ Derivation of the rotating dipole black ring\\ solution}
The rotating dipole black ring solutions in 5D Einstein-Maxwell-dilaton (EMd) gravity were
given in \cite{EMP} without any derivation. What is said in \cite{EMP} is that these solutions can be
obtained from the generalized $C$-metric \cite{EMP1} by a double Wick rotation and analytic continuation of the parameters.
As far as we are aware, no explicit derivation of the dipole black ring solutions exists in the literature.
Here we generate a new (six-parameter) rotating EM solution
and, as a byproduct, give an explicit derivation of the EM rotating dipole black ring solution\footnote{This
solution can be obtained from the EMd dipole black ring solutions in the limit when the dilaton coupling
parameter is zero.}.
The first step in deriving the EM dipole black ring solution is to
choose two known solutions of the vacuum 5D Einstein equations and present them in canonical coordinates.
As should be expected, we take two copies of the neutral black ring solution with different parameters:
the first solution is with parameters $\{\lambda_{1},\nu_{1},{\cal R}_{1}\}$ while the second is parameterized
by $\{\lambda_{2},\nu_{2},{\cal R}_{2}\}$. It should be also noted that in the case under consideration
the Killing vectors are denoted by
\begin{equation}
K_{1} = {\partial/\partial \psi} , \,\,\,\, K_{2} = {\partial/\partial \phi}.
\end{equation}
The neutral black ring solution has already been written in canonical coordinates in \cite{HAR},
so we present here only the final formulas (the coordinate $\Phi$ below plays the role of the
hypersurface-orthogonal direction $Y$):
\begin{eqnarray}
|g^{E(i)}_{00}| &=& {(1+\lambda_{i})(1-\nu_{i})R^{(i)}_{1} + (1-\lambda_{i})(1+\nu_{i})R^{(i)}_{2}
-2(\lambda_{i} - \nu_{i})R^{(i)}_{3}
- \lambda_{i}(1-\nu_{i}^2){\cal R}_{i}^2 \over (1+\lambda_{i})(1-\nu_{i})R^{(i)}_{1}
+ (1-\lambda_{i})(1+\nu_{i})R^{(i)}_{2}
-2(\lambda_{i} - \nu_{i})R^{(i)}_{3}
+ \lambda_{i}(1-\nu_{i}^2){\cal R}_{i}^2 } , \nonumber \\
g^{E(i)}_{\Phi\Phi} &=& {(R^{(i)}_{3}+z - {1\over 2}{\cal R}_{i}^2 )(R^{(i)}_{2} - z
+ {1\over 2}{\cal R}_{i}^2\nu_{i})
\over R^{(i)}_{1} - z - {1\over 2}{\cal R}_{i}^2\nu_{i} } \nonumber \\ &=& {(R^{(i)}_{1} + R^{(i)}_{2}
+ \nu_{i}{\cal R}^2_{i}) (R^{(i)}_{1} - R^{(i)}_{3} + {1\over 2}(1+ \nu_{i}){\cal R}^2_{i}) (R^{(i)}_{2}
+ R^{(i)}_{3} - {1\over 2}(1 - \nu_{i}){\cal R}^2_{i})
\over {\cal R}^2_{i} ((1-\nu_{i})R^{(i)}_{1} - (1+\nu_{i})R^{(i)}_{2} -2\nu_{i} R^{(i)}_{3}) } \nonumber \\
g^{E(i)}_{\rho\rho} &=& [(1+\lambda_{i})(1-\nu_{i})R^{(i)}_{1} + (1-\lambda_{i})(1+\nu_{i})R^{(i)}_{2}
-2(\lambda_{i} - \nu_{i})R^{(i)}_{3}
+ \lambda_{i}(1-\nu_{i}^2){\cal R}_{i}^2 ] \nonumber \\
&& \times {(1-\nu_{i})R^{(i)}_{1} + (1+\nu_{i})R^{(i)}_{2} + 2\nu_{i} R^{(i)}_{3}
\over 8(1-\nu_{i }^2)^2 R^{(i)}_{1}R^{(i)}_{2}R^{(i)}_{3}} , \\
{\cal A}^{(i)}_{E} &=& {-2 C(\nu_{i},\lambda_{i}) {\cal R}_{i} (1-\nu_{i})
[R^{(i)}_{3} -R^{(i)}_{1} + {1\over 2}{\cal R}_{i}^2 (1+\nu_{i})] \over
(1+\lambda_{i})(1-\nu_{i})R^{(i)}_{1} + (1-\lambda_{i})(1+\nu_{i})R^{(i)}_{2} -2(\lambda_{i} - \nu_{i})R^{(i)}_{3}
- \lambda_{i}(1-\nu_{i}^2){\cal R}_{i}^2 } \nonumber
\end{eqnarray}
where
\begin{eqnarray}
R^{(i)}_{1} =\sqrt{\rho^2 + (z + {\nu_{i}\over 2}{\cal R}_{i}^2)^2 } , \\
R^{(i)}_{2} =\sqrt{\rho^2 + (z - {\nu_{i}\over 2}{\cal R}_{i}^2)^2 }, \\
R^{(i)}_{3} = \sqrt{\rho^2 + (z - {1\over 2}{\cal R}_{i}^2)^2 },\\
C(\nu_{i},\lambda_{i}) = \sqrt{\lambda_{i}(\lambda_{i} -\nu_{i}) {1+\lambda_{i}\over 1- \lambda_{i} }} .
\end{eqnarray}
The next step is to find the functions $\Omega^{(i)}_{E}$ and $f^{(1)}_{E}$. After straightforward but tedious
calculations we obtain
\begin{eqnarray}
e^{{8\over 3} \Omega_{E}^{(i)}} &=& { [(1-\nu_{i})R^{(i)}_{1} + (1+\nu_{i})R^{(i)}_{2}
+ 2\nu_{i} R^{(i)}_{3}]^2\over 8(1-\nu_{i}^2)^2R^{(i)}_{1}R^{(i)}_{2}R^{(i)}_{3} }
g^{E(i)}_{\Phi\Phi} ,\\
f^{(i)}_{E} &=& {(1-\nu_{i}) {\cal R}_{i} C(\nu_{i},\lambda_{i}) [R^{(i)}_{1} - R^{(i)}_{3} +
{1\over 2}(1+ \nu_{i}) {\cal R}_{i}^2 ] \over (1+\lambda_{i})(1-\nu_{i})R^{(i)}_{1} +
(1-\lambda_{i})(1+\nu_{i})R^{(i)}_{2} + 2(\nu_{i}-\lambda_{i})R^{(i)}_{3}
+ \lambda_{i}(1-\nu_{i}^2){\cal R}_{i}^2 } \nonumber .
\end{eqnarray}
Once we have the functions $\Omega^{(i)}_{E}$ and $f^{(1)}_{E}$ in explicit form, we can immediately apply
the proposition to obtain a new EM solution, presented explicitly in canonical coordinates. The resulting EM solution
depends on six parameters $\{\lambda_{i}, \nu_{i}, {\cal R}_{i},\; i=1,2 \}$ and is, unsurprisingly,
very complicated. Its detailed study requires a separate investigation, which we postpone to a future publication.
Here we will consider only the particular case when
\begin{eqnarray}
\nu_{1}=\nu_{2}=\nu ,\,\,\, {\cal R}_{1} = {\cal R}_{2}={\cal R}.
\end{eqnarray}
In this case we also have
\begin{equation}
\Omega^{(1)}_{E} = \Omega^{(2)}_{E} ,\,\,\, R^{(1)}_{a} = R^{(2)}_{a} , a= 1,2,3
\end{equation}
which considerably simplifies the solution.
Even in this particular case the
solution looks complicated in canonical coordinates, so it is more convenient to
present it in coordinates where it takes a simpler form. Such coordinates are the so-called
$C$-metric coordinates, given by
\begin{eqnarray}
\rho = {{\cal R}^2 \sqrt{-G(x)G(y)}\over (x-y)^2 } ,\,\,\,
z = {1\over 2} {{\cal R}^2(1-xy)(2+\nu x + \nu y )\over (x-y)^2 }
\end{eqnarray}
where
\begin{eqnarray}
G(x) = (1-x^2)(1+\nu x),\\
-1\le x \le 1,\,\,\,\, y\le -1.
\end{eqnarray}
Performing this coordinate change, and writing $F_{\lambda}(\xi) = 1 + \lambda \xi$ for the standard black ring structure function, we find
\begin{eqnarray}
ds^2 &=&
\left[{F_{\lambda_{1}}(y) \over F_{\lambda_{1}}(x) }\right]^2 {{\cal R}^2 G(x)\over (x-y)^2} d\phi^2
+ \left[{F_{\lambda_{1}}(x) \over F_{\lambda_{1}}(y) }\right]
\left[ - {F_{\lambda_{2}}(y) \over F_{\lambda_{2}}(x)}
\left(dt + C(\nu,\lambda_{2}){\cal R} {1 + y\over F_{\lambda_{2}}(y) }d\psi \right)^2 \right. \nonumber \\
&& \left. - {{\cal R}^2 F_{\lambda_{2}}(x)\over (x-y)^2 } {G(y)\over F_{\lambda_{2}}(y) }d\psi^2
+ F^3_{\lambda_{1}}(y) {{\cal R}^2 F_{\lambda_{2}}(x) \over (x-y)^2} \left({dx^2\over G(x)}
- {dy^2\over G(y) }\right)\right] ,\\
A_{\phi} &=& \pm \sqrt{3} C(\nu,\lambda_{1}) {\cal R} {1+ x\over F_{\lambda_{1}}(x)} + const .
\end{eqnarray}
The metric can be rearranged into the form
\begin{eqnarray}
ds^2 &=& -{F_{\lambda_{2}}(y) \over F_{\lambda_{2}}(x)} {F_{\lambda_{1}}(x) \over F_{\lambda_{1}}(y) }
\left(dt + C(\nu,\lambda_{2}){\cal R} {1 + y\over F_{\lambda_{2}}(y) }d\psi \right)^2 \\
&+& \left[F_{\lambda_{1}}(x) F^2_{\lambda_{1}}(y)\right] {{\cal R}^2 F_{\lambda_{2}}(x)\over (x-y)^2 }
\left[- {G(x)\over F^3_{\lambda_{1}}(y) F_{\lambda_{2}}(y) } d\psi^2 + {dx^2\over G(x)}
- {dy^2\over G(y) } + {G(x)\over F^3_{\lambda_{1}}(x) F_{\lambda_{2}}(x) } d\phi^2 \right] \nonumber .
\end{eqnarray}
Finally, in order to exclude pathological behavior of the metric we must consider only negative $\lambda_{1}$, i.e.
\begin{equation}
\lambda_{1} = - \mu , \qquad 0\le\mu<1 ,
\end{equation}
and positive $\lambda_{2}$ and $\nu$ satisfying
\begin{equation}
0<\nu \le \lambda_{2} <1.
\end{equation}
One can easily see that the generated 5D EM solution is just the rotating EM dipole black ring solution.
Let us also recall \cite{EMP} that, in order to avoid possible conical singularities at $x=\pm 1$ and $y=-1$,
we must impose
\begin{eqnarray}
&&\Delta \phi = \Delta \psi = 2\pi {(1 + \mu)^{3/2} \sqrt{1-\lambda_{2}}\over 1-\nu } ,\\
&&{1-\lambda_{2}\over 1+\lambda_{2} } \left( {1+ \mu\over 1-\mu } \right)^3 = \left({1-\nu\over 1+\nu } \right)^2 .
\end{eqnarray}
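As a simple numerical illustration (not part of the derivation; the function name and encoding are ours), the second balance condition determines $\nu$ once $\mu$ and $\lambda_{2}$ are fixed, and since its right-hand side is monotonically decreasing in $\nu$ it can be solved by bisection:
\begin{verbatim}
# Numerical illustration only: solve the balance condition
#   (1-l2)/(1+l2) * ((1+mu)/(1-mu))**3 = ((1-nu)/(1+nu))**2
# for nu, given mu in [0,1) and lambda_2 = l2 in (0,1).  The left-hand
# side is a constant; the right-hand side decreases monotonically in nu,
# so bisection suffices whenever a root with 0 < nu <= l2 exists.
def balance_nu(mu, l2, tol=1e-12):
    lhs = (1.0 - l2) / (1.0 + l2) * ((1.0 + mu) / (1.0 - mu)) ** 3
    rhs = lambda nu: ((1.0 - nu) / (1.0 + nu)) ** 2
    if not rhs(l2) <= lhs < 1.0:   # no root compatible with 0 < nu <= l2
        return None
    lo, hi = 0.0, l2               # rhs(lo) = 1 > lhs >= rhs(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rhs(mid) > lhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(balance_nu(mu=0.2, l2=0.7))  # approximately 0.1288
\end{verbatim}
For the balanced values of the parameters obtained in this way, the common period $\Delta\phi=\Delta\psi$ is then fixed by the first relation.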
\section{Conclusion}
In this paper we considered EM gravity in spacetimes admitting three commuting Killing vectors: one timelike and
two spacelike, one of the latter being hypersurface-orthogonal. Assuming also a special ansatz for the electromagnetic
field, we have shown that the EM equations reduce to two $SL(2,R)/SO(2)$ $\sigma$-models and
a separated linear system of first-order partial differential equations. This ensures the existence of a Lax-pair
presentation, and therefore the complete integrability of the considered sector of EM gravity.
The Lax-pair presentation also opens the way to applying the IST method and generating multisoliton
solutions.
Using the two-$\sigma$-model structure of the reduced EM sector, we gave an explicit construction
for generating exact 5D EM solutions from known solutions of the 5D vacuum Einstein equations in the same
symmetry sector. As an explicit example we constructed a six-parameter rotating EM solution which includes
the rotating EM dipole black ring solution as a particular case. In this way we gave, for the first time,
an explicit derivation of the dipole black ring solution.
We conclude with some prospects for future work. Here we have shown that the ``superposition''
of two neutral black ring solutions with certain parameters yields the EM dipole black ring solution,
which schematically can be expressed as
\begin{equation}
\{neutral\,\, black\,\, ring\} + \{neutral\,\, black \,\,ring\} \to \{EM \,\,dipole \,\, black\,\, ring\}.
\end{equation}
It would be interesting to find the EM solutions corresponding to the schemes
\begin{eqnarray}
\{neutral\,\, black\,\, hole\} &+& \{neutral\,\, black\,\, hole\} \to \{ ?\}, \\
\{neutral\,\, black\,\, hole\} &+& \{neutral\,\, black \,\,ring\} \to \{ ?\}, \\
\{neutral\,\, black \,\,ring\} &+& \{neutral\,\, black\,\, hole\} \to \{?\},
\end{eqnarray}
as well as other solutions. Some solutions of the vacuum 5D Einstein equations which could serve as seeds
for new EM solutions are given in \cite{HAR}--\cite{AK}.
It would also be of interest to generalize this work to EM gravity in spacetimes with more than
five dimensions and in the presence of a dilaton field non-minimally coupled to
the electromagnetic field. Some results in these directions have already been obtained \cite{Y1}; they will be presented
elsewhere.
\section*{Acknowledgements}
I would like to thank I. Stefanov for reading the manuscript.
This work was partially supported by the
Bulgarian National Science Fund under Grant MUF04/05 (MU 408)
and the Sofia University Research Fund.
\section{Introduction}
The \textbf{uniform spanning forests} of an infinite, connected, locally finite graph $G$ are defined to be distributional limits of uniform spanning trees of large finite subgraphs of $G$. These limits can be taken with either free or wired boundary conditions, yielding the \textbf{free uniform spanning forest} (FUSF) and \textbf{wired uniform spanning forest} (WUSF) respectively.
Although they are defined as limits of trees, the USFs are not necessarily connected.
Indeed, Pemantle \cite{Pem91} proved that the FUSF and WUSF of $\mathbb Z^d$ coincide for all $d$ (so that we can refer to both simply as the USF of $\mathbb Z^d$), and are a single tree almost surely (a.s.) if and only if $d\leq 4$. A complete characterization of the connectivity of the WUSF was given by Benjamini, Lyons, Peres, and Schramm \cite{BLPS}, who proved that the WUSF of a graph is connected if and only if two independent random walks on $G$ intersect infinitely often a.s.
Extending Pemantle's result, Benjamini, Kesten, Peres, and Schramm \cite{BeKePeSc04} (henceforth referred to as BKPS) discovered the following surprising theorem.
\begin{thm*}[BKPS \cite{BeKePeSc04}]
Let $\mathfrak F$ be a sample of the USF of $\mathbb Z^d$. For each
$x,y \in \mathbb Z^d$, let $N(x,y)$ be the minimal number of edges not in $\mathfrak F$ that must be used by a path from $x$ to $y$ in $\mathbb Z^d$.
Then
\[\max_{x,y \in \mathbb Z^d}N(x,y) = \left\lceil \frac{d-4}{4} \right\rceil\]
almost surely.
\end{thm*}
In particular, this theorem shows that every two trees in the uniform spanning forest of $\mathbb Z^d$ are adjacent almost surely if and only if $d\leq 8$.
Similar results have since been obtained for other models \cite{procaccia2011geometry,rath2010connectivity,broman2016connectedness,li2016percolative,procaccia2016connectivity}.
The purpose of this paper is to show that, once $d\geq 5$,
the uniform spanning forest undergoes qualitative changes to its connectivity \emph{every} time the dimension increases, rather than just every four dimensions.
In order to formulate such a theorem, we introduce the \emph{component graph} of the uniform spanning forest.
Let $G$ be a graph and let $\omega$ be a subgraph of $G$. The \textbf{component graph} $\mathcal{C}_1(\omega)$ of $\omega$ is defined to be the simple graph that has the connected components of $\omega$ as its vertices, and has an edge between two connected components $k_1$ and $k_2$ of $\omega$ if and only if
there exists an edge $e$ of $G$ that has one endpoint in $k_1$ and the other endpoint in $k_2$. More generally, for each $r\geq 1$, we define the \textbf{distance $r$ component graph} $\mathcal{C}_r(\omega)$ to be the graph which has the components of $\omega$ as its vertices, and has an edge between two components $k_1$ and $k_2$ of $\omega$ if and only if there is path in $G$ from $k_1$ to $k_2$ that has length at most $r$.
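To make the definition concrete, here is a minimal computational sketch (plain Python, standard library only; the encoding of $G$ and $\omega$ is our own and purely illustrative): label the components of $\omega$ by union--find, then declare two labels adjacent in $\mathcal{C}_r(\omega)$ whenever a breadth-first search of depth $r$ in $G$ connects them.
\begin{verbatim}
# A minimal sketch: the distance-r component graph C_r(omega) of a
# subgraph omega of a finite graph G with vertices 0,...,n-1.
from collections import deque

def component_graph(n, G_adj, omega_edges, r):
    # Label the components of omega with union-find.
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for u, v in omega_edges:
        parent[find(u)] = find(v)
    comp = [find(u) for u in range(n)]
    # Components k1, k2 are adjacent in C_r(omega) iff some path in G
    # of length at most r joins them: run a depth-r BFS from each vertex.
    edges = set()
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if dist[u] < r:
                for w in G_adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
        for w in dist:
            if comp[w] != comp[s]:
                edges.add(frozenset((comp[s], comp[w])))
    return set(comp), edges
\end{verbatim}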
When formulated in terms of the component graph, the result of BKPS states that
the diameter of $\mathcal{C}_1(\mathfrak F)$ is almost surely $\lceil (d-4)/4\rceil$ for every $d\geq 1$. In particular, it implies that $\mathcal{C}_1(\mathfrak F)$ is almost surely a single point for all $1\leq d \leq 4$ (as follows from Pemantle's theorem), and is almost surely a complete graph on a countably infinite number of vertices for all $5\leq d\leq 8$.
We now introduce the notion of \emph{ubiquitous subgraphs}.
We define a \textbf{graph with boundary} $H=(\partial V, V_\circ,E)=(\partial V(H),V_\circ (H),E(H))$ to be a graph $H=(V,E)$ whose vertex set $V$ is partitioned into two disjoint sets, $V=\partial V \cup V_\circ$, which we call the \textbf{boundary} and \textbf{interior} vertices of $H$, such that $\partial V \neq \emptyset$.
Given a graph $G$, a graph with boundary $H$, and collection of distinct vertices $(x_u)_{u \in \partial V}$ of $G$ indexed by the boundary vertices of $H$,
we say that $H$ is \textbf{present} at $(x_u)_{u \in \partial V}$ if there exists a collection of vertices $(x_u)_{u \in V_\circ }$ of $G$ indexed by the interior vertices of $H$
such that $x_u \sim x_v$ or $x_u=x_v$ for every $u\sim v$ in $H$. (Note that, in this definition, we do \emph{not} require that $x_u$ and $x_v$ are not adjacent in $G$ if $u$ and $v$ are not adjacent in $H$.) We say that $H$ is \textbf{faithfully present} at $(x_u)_{u\in \partial V}$ if there exists a collection of \emph{distinct} vertices $(x_u)_{u \in V_\circ}$ of $G$, disjoint from $(x_u)_{u \in \partial V}$, indexed by the interior vertices of $H$
such that $x_u \sim x_v$ for every $u\sim v$ in $H$. In figures, we will use the convention that boundary vertices are white and interior vertices are black.
We say that $H$ is \textbf{ubiquitous} in $G$ if it is present at every collection of distinct vertices $(x_u)_{u\in \partial V}$ in $G$, and that $H$ is \textbf{faithfully ubiquitous} in $G$ if it is faithfully present at every collection of distinct vertices $(x_u)_{u\in \partial V}$ in $G$.
For example, if $H$ is a path of length $n$ with the endpoints of the path as its boundary, then $H$ is ubiquitous in a graph $G$ if and only if $G$ has diameter less than or equal to $n$. The same graph is \emph{faithfully} ubiquitous in $G$ if and only if every two vertices of $G$ can be connected by a simple path of length \emph{exactly} $n$. If $H$ is a star with $k$ leaves set to be in the boundary and the central vertex set to be in the interior, then $H$ is ubiquitous in a graph $G$ if and only if every $k$ vertices of $G$ share a common neighbour, and in this case $H$ is also faithfully ubiquitous.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{family4.pdf}
\caption{
\label{fig:sepfamilybasic}
Three trees with boundary that can be used to distinguish the component graphs of the uniform spanning forest in dimensions $9,10,11,$ and $12$. Boundary vertices are white, interior vertices are black.}
\vspace{-0.5cm}
\end{figure}
The main result of this paper is the following theorem. We say that a transitive graph $\mathbb G$ is \textbf{$d$-dimensional} if there exist positive constants $c$ and $C$ such that $cn^d \leq |B(x,n)|\leq Cn^d$ for every vertex $x$ of $\mathbb G$ and every $n\geq 1$, where $B(x,n)$ denotes the graph-distance ball of radius $n$ around $x$ in $\mathbb G$. The WUSF and FUSF of any $d$-dimensional transitive graph coincide~\cite{BLPS}, and we speak simply of the USF of $\mathbb G$. Note that the geometry of a $d$-dimensional transitive graph may be very different from that of $\mathbb Z^d$. (Working at this level of generality does not add any substantial complications to the proof, however.)
\begin{thm}\label{thm:mainsimple}
Let $\mathbb G_1$ and $\mathbb G_2$ be transitive graphs of dimension $d_1 $ and $d_2$ respectively, and let $\mathfrak F_1$ and $\mathfrak F_2$ be uniform spanning forests of $\mathbb G_1$ and $\mathbb G_2$ respectively. Then the following claims hold for every $r_1,r_2\geq 1$:
\vspace{0.3em}
\begin{enumerate}[leftmargin=0.9cm]\itemsep0.5em
\item
\emph{(\textbf{Universality and monotonicity.})}
If $d_1 \geq d_2 \geq 9$, then every finite graph with boundary that is ubiquitous in $\mathcal{C}_{r_1}(\mathfrak F_1)$ is also ubiquitous in $\mathcal{C}_{r_2}(\mathfrak F_2)$ almost surely.
\item
\emph{(\textbf{Distinguishability of different dimensions.})}
If $d_1 > d_2 \geq 9$, then there exists a finite graph with boundary $H$ such that $H$ is almost surely ubiquitous in $\mathcal{C}_{r_2}(\mathfrak F_2)$ but not in $\mathcal{C}_{r_1}(\mathfrak F_1)$.
\end{enumerate}
Moreover, the same result holds with `ubiquitous' replaced by `faithfully ubiquitous'.
\end{thm}
In order to prove item $(2)$ of \cref{thm:mainsimple}, it will suffice to consider the case that $H$ is a tree. In this case,
the following theorem allows us to calculate the dimensions for which $H$ is ubiquitous in the component graph of the uniform spanning forest. The corresponding result for general $H$ is given in \cref{thm:main}. Examples of trees that can be used to distinguish between different dimensions using \cref{thm:maintree} are given in \cref{fig:sepfamilybasic,fig:sepfamily}.
\begin{thm}\label{thm:maintree}
Let $\mathbb G$ be a $d$-dimensional transitive graph for some $d >8$, let $\mathfrak F$ be a uniform spanning forest of $\mathbb G$, let $r\geq 1$, and let $T$ be a finite tree with boundary. Then $T$ is almost surely ubiquitous in $\mathcal{C}_r(\mathfrak F)$ if and only if $T$ is almost surely faithfully ubiquitous in $\mathcal{C}_r(\mathfrak F)$, if and only if
\[
\max\left\{\frac{|E(T')|}{|V_\circ(T')|}\,:\, T' \text{ is a subgraph of $T$}\right\} \leq \frac{d-4}{d-8}.
\]
\end{thm}
\medskip
Note that $(d-4)/(d-8)$ is a decreasing function of $d$ for $d>8$. The theorem of BKPS follows as a special case of \cref{thm:maintree} by taking $T$ to be a path.
\cref{fig:sepfamily} gives an example of a family of trees that can be used to deduce item $(2)$ of \cref{thm:mainsimple} from \cref{thm:maintree}. See \cref{fig:unbalanced} for another example application.
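The criterion of \cref{thm:maintree} is easily checked mechanically for small trees. The following brute-force sketch (our own encoding; edges are pairs of vertex labels and \texttt{interior} is the set of interior vertices) computes the maximum of $|E(T')|/|V_\circ(T')|$ over subgraphs and hence the largest dimension in which $T$ is ubiquitous; for a path of length $n$ with its endpoints as boundary it returns $4n+4$, in agreement with the theorem of BKPS.
\begin{verbatim}
# Brute-force check of the subgraph criterion above for a small tree
# with boundary: to maximize |E(T')|/|V_int(T')| it suffices to range
# over edge subsets, charging only the interior endpoints they touch.
from itertools import combinations
from fractions import Fraction

def max_ratio(edges, interior):
    best = Fraction(0)
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            v_int = {v for e in sub for v in e if v in interior}
            if v_int:
                best = max(best, Fraction(len(sub), len(v_int)))
    return best

def max_dimension(edges, interior):
    rho = max_ratio(edges, interior)
    if rho <= 1:
        return None  # ubiquitous for every d > 8
    # rho <= (d-4)/(d-8)  is equivalent to  d <= (8*rho - 4)/(rho - 1)
    return (8 * rho - 4) / (rho - 1)

# Path of length 3 with endpoints 0, 3 as boundary: the maximal ratio
# is 3/2 and the threshold dimension is 16 = 4*3 + 4.
print(max_dimension([(0, 1), (1, 2), (2, 3)], interior={1, 2}))
\end{verbatim}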
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{family2.pdf}
\caption{
\label{fig:sepfamily}
A family of trees with boundary that can distinguish between $d$ and $d+1$ for any $d\geq 9$.
}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.7\textwidth]{unbalanced.pdf}
\caption{
Left: a finite tree with boundary $T$. Right: the subgraph $T'$ of $T$ maximizing $|E(T')|/|V_\circ(T')|$. By \cref{thm:maintree}, $T$ is almost surely faithfully ubiquitous in the component graph of the uniform spanning forest of $\mathbb Z^d$ if and only if $d\leq 9$.
}
\label{fig:unbalanced}
\end{figure}
The next theorem shows that uniform spanning forests in different dimensions between $5$ and $8$ also have qualitatively different connectivity properties.
The result is more naturally stated in terms of \emph{ubiquitous subhypergraphs} in the \emph{component hypergraph} of the USF; see the following section for definitions and \cref{fig:hyper} for an illustration of the relevant hypergraphs.
\begin{thm}[Distinguishing dimensions $5,6,7,$ and $8$.]
\label{thm:5678}
Let $\mathbb G$ be a $d$-dimensional transitive graph and let $\mathfrak F$ be a uniform spanning forest of $\mathbb G$. The following hold almost surely.
\begin{enumerate}[leftmargin=*]
\itemsep0.25em
\item
If $d=5$, then there exists a constant $r_0$ such that for every five trees of $\mathfrak F$, there exists a ball of radius $r_0$ in $\mathbb G$ that is intersected by each of the five trees. On the other hand, if $d\geq 6$, then for every $r\geq 1$, there exists a set of four trees in $\mathfrak F$ such that there does not exist a ball of radius $r$ in $\mathbb G$ intersecting all four trees.
\item
If $d=5$ or $6$, then there exists a constant $r_0$ such that for every three trees of $\mathfrak F$, there exists a ball of radius $r_0$ in $\mathbb G$ that is intersected by each of the three trees. On the other hand, if $d\geq 7$, then for every $r\geq 1$, there exists a set of three trees in $\mathfrak F$ such that there does not exist a ball of radius $r$ in $\mathbb G$ intersecting all three trees.
\item
If $d=5,6,$ or $7$, then there exists a constant $r_0$ such that for every $r\geq r_0$, every set of three pairs of trees of $\mathfrak F$ have the following property: There exist three trees $T_{1},T_{2},T_{3}$ in $\mathfrak F$
such that $T_i$ and the $i$th pair of trees all intersect some ball $B_i$ of radius $r$ in $G$ for each $i=1,2,3$, and the trees $T_1,T_2,T_3$ all intersect some ball $B_0$ of radius $r$ in $G$.
On the other hand, if $d \geq 8$, then for every $r\geq 1$ there exists a set of three pairs of trees of $\mathfrak F$ that do not have this property.
\end{enumerate}
\end{thm}
\begin{figure}
\includegraphics[width=0.775\textwidth]{Hypergraphs5to8.pdf}
\caption{
Three hypergraphs with boundary that can be used to distinguish the component hypergraphs of the uniform spanning forest in dimensions $5,6,7,$ and $8$. Edges are represented by shaded regions.
}
\label{fig:hyper}
\end{figure}
\subsection{Ubiquity of general graphs and hypergraphs in the component graph.}
\label{subsec:introgeneral}
\begin{figure}
\includegraphics[width=0.5\textwidth]{trivial.pdf}
\caption{
\small{
Considering the coarsening in which all edges of a hypergraph are merged into one shows that any connected graph with $|\partial V|\in \{0,1\}$ is faithfully ubiquitous in the component graph of the uniform spanning forest of $\mathbb Z^d$ for every $d>4$, since every subhypergraph of this coarsening has $d$-apparent weight either $-d$ or $-4$.
}
}
\label{fig:degenerate}
\end{figure}
In this section, we extend \cref{thm:maintree} to the case that $H$ is not a tree. In order to formulate this extension, it is convenient to consider the even more general setting in which $H$ is a \emph{hypergraph} with boundary.
Indeed, it is a surprising feature of the resulting theory that one is forced to consider hypergraphs even if one is interested only in graphs.
We define a \textbf{hypergraph} $H=(V,E,\perp)$ to be a triple consisting of a set of vertices $V$, a set of edges $E$, and a binary relation $\perp \subseteq V \times E$ such that the set $\{v\in V : (v,e)\in \perp\}$ is nonempty for every $e\in E$. We write $v \perp e$ or $e \perp v$ and say that $v$ is \textbf{incident} to $e$ if $(v,e)\in \perp$. Note that this definition is somewhat nonstandard, as it allows multiple edges with the same set of incident vertices. We say that a hypergraph is \textbf{simple} if it does not contain two distinct edges whose sets of incident vertices are equal.
Every graph is also a hypergraph.
A \textbf{hypergraph with boundary} $H=(\partial V,V_\circ,E,\perp)$ is defined to be a hypergraph $H=(V,E,\perp)$ together with a partition of $V$ into disjoint subsets, $V=\partial V \cup V_\circ$, the \textbf{boundary} and \textbf{interior} vertices of $H$, such that $\partial V \neq \emptyset$. The degree of a vertex in a hypergraph is the number of edges that are incident to it, and the degree of an edge in a hypergraph is the number of vertices it is incident to.
To lighten notation, we will often write simply $H=(\partial V, V_\circ, E)$ for a hypergraph with boundary, leaving the incidence relation $\perp$ implicit.
If $H=(\partial V, V_\circ,E,\perp)$ is a hypergraph with boundary, a \textbf{subhypergraph} (with boundary) of $H$ is defined to be a hypergraph with boundary of the form $H'=(\partial V',V_\circ',E',\perp')$, where \[
\text{$\partial V' \subseteq \partial V$,\, $V_\circ' \subseteq V_\circ$,\, $E' \subseteq E$,\, $V'=\partial V' \cup V_\circ'$,\, and\, $\perp' = \perp \cap\, (V'\times E')$.}\]
We say that a hypergraph with boundary $H'=(\partial V',V_\circ',E',\perp')$ is a \textbf{quotient} of a hypergraph with boundary $H=(\partial V,V_\circ,E,\perp)$ if there exists a surjective function $\phi_V : V\to V'$ mapping $\partial V$ bijectively onto $\partial V'$ and a bijective function $\phi_E:E\to E'$ such that
\[
\{v' : v' \in V',\, v' \perp' \phi_E(e) \} = \{ \phi_V(v) : v\in V,\, v\perp e\}
\]
for every $e\in E$.
Similarly, we say that $H'$ is a \textbf{coarsening} of $H$ (and call $H$ a \textbf{refinement} of $H'$) if there exists a bijection $\phi_V: V \to V'$ mapping $\partial V$ bijectively onto $\partial V'$ and a surjection $\phi_E : E \to E'$ such that
\[
\{e' : e'\in E',\, e' \perp' \phi_V(v)\}= \{ \phi_E(e) : e\in E,\, e\perp v\}
\]
for every $v\in V$.
In other words, $H'$ is a \emph{quotient} of $H$ if
it can be
obtained from $H$ by merging together some of the \emph{vertices} of $H$, while $H'$ is a \emph{coarsening} of $H$ if it can be obtained by merging together some of the \emph{edges} of $H$.
The following theorem allows us to calculate the dimensions for which an arbitrary finite simple graph $H$ is ubiquitous in the component graph of the uniform spanning forest. It will be used to deduce \cref{thm:mainsimple,thm:maintree}. See \cref{fig:example,fig:degenerate} for example applications. For each finite hypergraph with boundary $H=(\partial V, V_\circ,E)$ and $d \in \mathbb R$, we define the \textbf{weight} of $H$, denoted $\Delta(H)$, and the $d$-\textbf{apparent weight} of $H$, denoted by $\eta_d(H)$, by setting
\[\Delta(H) := \sum_{e\in E} \deg(e) = \sum_{v\in V} \deg(v) \quad \text{ and } \quad \eta_d(H) := (d-4)\Delta-d|E| -(d-4)|V_\circ|\]
respectively.
We say that $H$ is $d$-\textbf{buoyant} if $\eta_d(H) \leq 0$, i.e., if its $d$-apparent weight is non-positive.
If $H$ is a simple graph then $\Delta=2|E|$ and so $\eta_d(H) = (d-8)|E| -(d-4)|V_\circ|$.
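In computations it is convenient to treat $\eta_d$ as a function of an edge list. The following short sketch (our own encoding: each edge is the set of vertices incident to it) computes the apparent weight and verifies the simple-graph specialization above on a small example.
\begin{verbatim}
# eta_d(H) = (d-4)*Delta - d*|E| - (d-4)*|V_int|, Delta = sum of edge degrees.
def eta(d, edges, interior):
    delta = sum(len(e) for e in edges)
    return (d - 4) * delta - d * len(edges) - (d - 4) * len(interior)

# For a simple graph every edge has degree 2, so Delta = 2|E| and
# eta_d = (d-8)|E| - (d-4)|V_int|; check on a path of length 2 with
# one interior vertex at d = 10:
edges, interior, d = [{0, 1}, {1, 2}], {1}, 10
assert eta(d, edges, interior) == (d - 8) * len(edges) - (d - 4) * len(interior)
print(eta(d, edges, interior))  # -2
\end{verbatim}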
\begin{samepage}
\begin{thm}\label{thm:main}
Let $\mathbb G$ be a $d$-dimensional transitive graph for some $d>4$, let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$, let $H$ be a finite simple graph with boundary, and let $r\geq 1$. Then $H$ is faithfully ubiquitous in $\mathcal{C}_r(\mathfrak F)$ almost surely if and only if
\begin{equation}
\label{eq:thmmaincriterion}
\min\left\{ \max \left\{\eta_d(H'') : H'' \text{ is a subhypergraph of $H'$}\right\} : H' \text{ is a coarsening of $H$}\right\}\leq 0,
\end{equation}
that is, if and only if $H$ has a coarsening all of whose subhypergraphs are $d$-buoyant.
Moreover, $H$ is ubiquitous in $\mathcal{C}_r(\mathfrak F)$ if and only if it has a quotient that is faithfully ubiquitous in $\mathcal{C}_r(\mathfrak F)$ almost surely.
\end{thm}
\end{samepage}
The terminology used here arises from the following analogy: we imagine that from each vertex-edge pair $(v,e)$ of $H$ with $v\perp e$ we hang a weight exerting a downward force of $(d-4)$, while to each edge and each interior vertex of $H$ we attach a balloon exerting an upward force of $d$ or $(d-4)$ respectively. The net downward force is equal to the apparent weight. The hypergraph is buoyant (i.e., floats) if the apparent weight is non-positive.
\cref{thm:main} is best understood as a special case of a more general theorem concerning the \emph{component hypergraph}.
Given a subset $\omega$ of a graph $G$ and $r\geq 1$, we define the \textbf{component hypergraph} $\mathcal{C}^{hyp}_r(\omega)$ to be the simple hypergraph that has the components of $\omega$ as vertices, and where a finite set of components $W$ is an edge of $\mathcal{C}^{hyp}_r(\omega)$ if and only if there exists a set of diameter $r$ in $G$ that intersects every component of $\omega$ in the set $W$.
Presence, faithful presence, ubiquity and faithful ubiquity of a hypergraph with boundary $H$ in a hypergraph $G$ are defined similarly to the graph case. For example, we say that a finite hypergraph with boundary $H=(\partial V, V_\circ, E)$ is \textbf{faithfully present} at $(x_u)_{u\in \partial V}$ in $G$ if there exists a collection of distinct vertices $(x_u)_{u \in V_\circ}$ of $G$, disjoint from $(x_u)_{u \in \partial V}$, indexed by the interior vertices of $H$
such that for each $e\in E$ there exists an edge $f$ of $G$ that is incident to all of the vertices in the set $\{x_v : v \perp e\}$.
Given a $d$-dimensional graph $\mathbb G$ and $M\geq 1$, we let $R_\mathbb G(M)$ be minimal such that there exists a set of vertices in $\mathbb G$ of diameter $R_\mathbb G(M)$ that intersects $M$ distinct components of the uniform spanning forest of $\mathbb G$ with positive probability. Given a hypergraph with boundary $H$, we let $R_\mathbb G(H) = R_\mathbb G(\max_{e\in E} \deg(e))$.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{coarsening_and_quotient2.pdf}
\caption{
\small{
The graph with boundary $H_1$, far left, has $d$-apparent weight $\eta_d(H_1)=6d-64$, and is therefore $d$-buoyant if and only if $d \leq 10 + 2/3$. Meanwhile, it has a coarsening $H'_1$, centre left, that has $d$-apparent weight $\eta_d(H'_1) = 5d-64$, so that $H'_1$ is $d$-buoyant if and only if $d \leq 12 + 4/5$. In fact, using \cref{thm:main}, it can be verified that $H_1$ is almost surely faithfully ubiquitous in the component graph of the uniform spanning forest of $\mathbb Z^d$ if and only if $d \leq 12$.
On the other hand, considering the quotient $H''_1$ of $H_1$, centre, and the coarsening $H_1'''$ of $H''_1$, centre right, along with other possible coarsenings of quotients, it can be verified that $H_1$ is almost surely ubiquitous in the component graph of the uniform spanning forest of $\mathbb Z^d$ if and only if $d \leq 16$. Thus, for $13 \leq d \leq 16$ the graph $H_1$ is ubiquitous but not faithfully ubiquitous in the component graph of the uniform spanning forest of $\mathbb Z^d$ a.s. A similar analysis shows that the graph $H_2$, far right, is faithfully ubiquitous in the component graph of the uniform spanning forest of $\mathbb Z^d$ almost surely if and only if $d \leq 9$, and is ubiquitous almost surely if and only if $d\leq 16$.
}
}
\label{fig:example}
\end{figure}
\begin{thm}\label{thm:mainhyper}
Let $\mathbb G$ be a $d$-dimensional transitive graph for some $d>4$, let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$, and let $H$ be a finite hypergraph with boundary. If
\begin{equation}
\label{eq:thmmainhypercriterion}
\min\left\{ \max \left\{\eta_d(H'') : H'' \text{ is a subhypergraph of $H'$}\right\} : H' \text{ is a coarsening of $H$}\right\}\leq 0
\end{equation}
then $H$ is faithfully ubiquitous in $\mathcal{C}^{hyp}_r(\mathfrak F)$ almost surely for every $r\geq R_\mathbb G(H)$. Otherwise, $H$ is not faithfully ubiquitous in $\mathcal{C}^{hyp}_r(\mathfrak F)$ for any $r\geq 1$ almost surely. Moreover, $H$ is ubiquitous in $\mathcal{C}^{hyp}_r(\mathfrak F)$ if and only if it has a quotient that is faithfully ubiquitous in $\mathcal{C}^{hyp}_r(\mathfrak F)$ almost surely.
\end{thm}
$H$ clearly cannot be faithfully ubiquitous in $\mathcal C^{hyp}_r(\mathfrak F)$ if $r < R_\mathbb G(H)$, so this condition is necessary.
Note that $R_\mathbb G(2)=1$ for any $d$-dimensional transitive graph with $d >4$, so that \cref{thm:main} follows as a special case of \cref{thm:mainhyper} as claimed.
\cref{thm:5678} follows immediately by applying \cref{thm:mainhyper} to the hypergraphs pictured in \cref{fig:hyper}. The $\min\max$ problem arising in \eqref{eq:thmmaincriterion} and \eqref{eq:thmmainhypercriterion} is studied in \cref{subsec:optimalcoarsenings}.
\subsection{Organisation}
In \cref{sec:background}, we give background on uniform spanning forests, establish notation, and prove some simple preliminaries that will be used throughout the rest of the paper.
In \cref{sec:notationsandoutline}, we outline some of the key steps in the proof of the main theorems; this section is optional if the reader prefers to go straight to the fully detailed proofs.
\cref{sec:moments} is the computational heart of the paper, where the quantitative estimates needed for the proof of the main theorems are established. In \cref{sec:wrappingup}, we deduce the main theorems from the estimates of \cref{sec:moments} together with the multicomponent indistinguishability theorem of \cite{MulticomponentIndistinguishability}, which is used as a zero-one law. This section is quite short, most of the work having already been done in \cref{sec:moments}.
We conclude with some open problems and remarks in \cref{sec:closing}.
\section{Background, definitions, and preliminaries}
\label{sec:background}
\subsection{Basic notation}
Let $\mathbb G$ be a $d$-dimensional transitive graph with vertex set $\mathbb V$, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$. For each set $W\subseteq \mathbb V$, we
write $\mathscr F(W)$ for the event that the vertices of $W$ are all in the same component of $\mathfrak F$. Let $r \geq 1$ and let $H = (\partial V, V_\circ, E)$ be a finite hypergraph with boundary. We define
\[
\hat \eta_d(H) = \min\left\{\eta_d(H'): H' \text{ is a coarsening of $H$}\right\}.
\]
We write $\preceq,\succeq$, and $\asymp$ for inequalities or equalities that hold up to a positive multiplicative constant depending only on some fixed data that will be clear from the context, usually $\mathbb G, H$, and $r$, and write $\lesssim,\gtrsim$ and $\approx$ for inequalities or equalities that hold up to an additive constant depending only on the same data. In particular
\[a \asymp b \text{ if and only if } \log_2 a \approx \log_2 b.\]
We sometimes write $\exp_2 (a)$ to mean $2^a$.
For each two vertices $x$ and $y$ of $\mathbb G$, we write
$\langle xy \rangle = d_\mathbb G(x,y)+1$,
where $d_\mathbb G$ is the graph metric on $\mathbb G$.
For each vertex $x$ of $\mathbb G$ and $\infty \geq N > n \geq 0$, we define the dyadic shell
\[\Lambda_x(n,N) := \left\{y \in \mathbb V : 2^n \leq \langle x y \rangle \leq 2^N \right\}.\]
If $x=(x_u)_{u\in \partial V}$ is a collection of vertices in $\mathbb G$, we choose one such point $x_0$ arbitrarily and set $\Lambda_x(n,N)=\Lambda_{x_0}(n,N)$ for every $N> n \geq 0$.
Since $\mathbb G$ is $d$-dimensional, we have that
\begin{equation}
\label{eq:LambdaVolume}
\log_2 |\Lambda_x(n,N)| \approx dN
\end{equation}
for all $n \geq 0$ and $N \geq n+1$. The upper bound is immediate, while the lower bound follows because $\Lambda_x(n,N)$ contains both some point $y$ with $\langle x_0y \rangle = 2^{N-1}+2^{N-2}$ and the ball of radius $2^{N-2}$ around this point $y$.
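For completeness, here is the containment behind the lower bound: if $y$ satisfies $\langle x_0 y\rangle = 2^{N-1}+2^{N-2}$ and $z \in B(y,2^{N-2})$, then the triangle inequality gives
\[2^n \leq 2^{N-1} \leq \langle x_0 z \rangle \leq 2^N,\]
so that $B(y,2^{N-2}) \subseteq \Lambda_x(n,N)$ and hence $|\Lambda_x(n,N)| \geq c\, 2^{(N-2)d}$, which yields \eqref{eq:LambdaVolume}.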
\subsection{Uniform spanning forests}
\label{subsec:USFbackground}
Given a finite connected graph $G$, we define $\mathsf{UST}_G$ to be the uniform probability measure on the set of spanning trees of $G$, that is, connected subgraphs of $G$ that contain every vertex of $G$ and do not contain any cycles.
Now suppose that $G=(V,E)$ is an infinite, connected, locally finite graph, and let $(V_i)_{i\geq 1}$ be an \textbf{exhaustion} of $V$ by finite sets, that is, an increasing sequence of finite, connected subsets of $V$ such that $\bigcup_{i\geq1}V_i=V$. For each $i\geq 1$, let $G_i$ be the subgraph of $G$ induced\footnote{Given a graph $G=(V,E)$ and a set of vertices $W \subseteq V$, the subgraph of $G$ \emph{induced} by $W$ is defined to be the graph with vertex set $W$ and with edge set given by the set of edges of $G$ that have both endpoints in $W$.} by $V_i$, and let $G_i^*$ be the graph formed from $G$ by contracting $V \setminus V_i$ down to a single vertex and deleting all of the self-loops that are created by this contraction. The \textbf{free} and \textbf{wired uniform spanning forest} (FUSF and WUSF) measures of $G$, denoted $\mathsf{FUSF}_G$ and $\mathsf{WUSF}_G$, are defined to be the weak limits of the uniform spanning tree measures of $G_i$ and $G_i^*$ respectively. That is, for every finite set $S \subset E$,
\[\mathsf{FUSF}_G\bigl(\{\omega \subseteq E : S \subseteq \omega \}\bigr) = \lim_{i\to\infty} \mathsf{UST}_{G_i}\bigl(\{\omega \subseteq E : S \subseteq \omega \}\bigr)\]
and
\[\mathsf{WUSF}_G\bigl(\{\omega \subseteq E : S \subseteq \omega \}\bigr) = \lim_{i\to\infty} \mathsf{UST}_{G^*_i}\bigl(\{\omega \subseteq E : S \subseteq \omega \}\bigr).\]
Both limits were proven to exist by Pemantle \cite{Pem91} (although the WUSF was not considered explicitly until the work of H\"aggstr\"om \cite{Hagg95}), and do not depend on the choice of exhaustion.
Benjamini, Lyons, Peres, and Schramm \cite{BLPS} proved that the WUSF and FUSF of $G$ coincide if and only if $G$ does not admit harmonic functions of finite Dirichlet energy, from which they deduced that the WUSF and FUSF coincide on any amenable transitive graph. In particular, it follows that the WUSF and FUSF coincide for every transitive $d$-dimensional graph, and in this context we refer to both the FUSF and WUSF measures on $G$ as simply the uniform spanning forest measure, $\mathsf{USF}_G$, on $G$. We say that a random spanning forest of $G$ is a uniform spanning forest of $G$ if it has law $\mathsf{USF}_G$.
\subsection{Wilson's Algorithm}
\label{subsec:Wilson}
\textbf{Wilson's algorithm} \cite{Wilson96} is a way of generating the uniform spanning tree of a finite graph by joining together loop-erased random walks. It was extended to generate the wired uniform spanning forests of infinite, transient graphs by Benjamini, Lyons, Peres, and Schramm \cite{BLPS}.
Recall that, given a path $\gamma = ( \gamma_n )_{n\geq0}$ in a graph $G$ that is either finite or visits each vertex of $G$ at most finitely often, the \textbf{loop-erasure} of $\gamma$ is defined by deleting loops from $\gamma$ chronologically as they are created. The loop-erasure of a simple random walk path is known as \textbf{loop-erased random walk} and was first studied by Lawler \cite{Lawler80}. Formally, we define the loop-erasure of $\gamma$ to be $\mathsf{LE}(\gamma)=( \gamma_{\tau_i} )_{i\geq0}$, where $\tau_i$ is defined recursively by setting $\tau_0=0$ and
\[\tau_{i+1} = 1+ \sup\{t \geq \tau_i : \gamma_t = \gamma_{\tau_i}\}.\]
(If $G$ is not simple, then we also keep track of which edges are used by $\mathsf{LE}(\gamma)$.)
Let $G$ be an infinite, connected, transient, locally finite graph. Wilson's algorithm rooted at infinity allows us to sample the wired uniform spanning forest of $G$ as follows. Let $( v_i )_{i\geq1}$ be an enumeration of the vertices of $G$. Let $\mathfrak F_0=\emptyset$, and define a sequence of random subforests $( \mathfrak F_i )_{i\geq 0}$ of $G$ recursively as follows.
\begin{enumerate}
\item Given $\mathfrak F_i$, let $X^{i+1}$ be a random walk started at $v_{i+1}$, independent of $\mathfrak F_i$.
\item
Let $T_{i+1}$ be the first time $X^{i+1}$ hits the set of vertices already included in $\mathfrak F_i$, where $T_{i+1}=\infty$ if $X^{i+1}$ never hits this set.
Note that $T_{i+1}$ will be zero if $v_{i+1}$ is already included in $\mathfrak F_i$.
\item Let $\mathfrak F_{i+1}$ be the union of $\mathfrak F_i$ with the loop-erasure of the stopped random walk path $( X^{i+1}_n )_{n=0}^{T_{i+1}}$.
\end{enumerate}
Finally, let $\mathfrak F = \bigcup_{i\geq1}\mathfrak F_i$. This is Wilson's algorithm rooted at infinity: the resulting random forest $\mathfrak F$ is a wired uniform spanning forest of $G$.
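For concreteness, here is a self-contained sketch of the finite-graph analogue of this procedure (Wilson's algorithm rooted at a fixed vertex, which samples $\mathsf{UST}_G$ \cite{Wilson96}; the encoding and function names are ours and purely illustrative). The rooted-at-infinity version described above replaces the root by ``escape to infinity'', which is why transience is required.
\begin{verbatim}
# A sketch of Wilson's algorithm on a finite connected graph, rooted at
# a fixed vertex; the output is a uniformly random spanning tree.
import random

def loop_erase(path):
    # Chronological loop-erasure: each time the walk revisits a vertex,
    # delete the loop it has just closed.
    erased, position = [], {}
    for v in path:
        if v in position:                  # a loop has been closed at v
            del erased[position[v] + 1:]
            for u in list(position):
                if position[u] > position[v]:
                    del position[u]
        else:
            position[v] = len(erased)
            erased.append(v)
    return erased

def wilson(adj, root=0):
    in_tree, tree_edges = {root}, []
    for start in adj:
        if start in in_tree:
            continue
        walk, v = [start], start
        while v not in in_tree:            # walk until the tree is hit
            v = random.choice(adj[v])
            walk.append(v)
        branch = loop_erase(walk)
        tree_edges.extend(zip(branch, branch[1:]))
        in_tree.update(branch)
    return tree_edges

# Example: the 4-cycle has four spanning trees; each call returns one
# of them uniformly at random.
print(wilson({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))
\end{verbatim}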
\subsection{The main connectivity estimate}
Let $K$ be a finite set of vertices of $\mathbb G$. Following \cite{BeKePeSc04}, we define the \textbf{spread} of $K$, denoted $\langle K \rangle$, to be
\[\langle K \rangle = \min \Big\{\prod_{\{x,y\}\in E(\tau)} \langle x y \rangle : \tau=(W,E) \text{ is a tree with vertex set $K$}\Big\}. \]
Note that the tree $\tau$ being minimized over in the definition of $\langle K \rangle$ need not be a subgraph of $\mathbb G$. If we enumerate the vertices of $K$ as $x_1,\ldots,x_n$, then we have the simple estimate \cite[Lemma 2.6]{BeKePeSc04}
\begin{equation}
\label{eq:spread}
\langle K \rangle \asymp \prod_{i=1}^n
\min \left\{ \langle x_i x_j \rangle : 1 \leq j<i \right\},
\end{equation}
where the implied constant depends on the cardinality of $K$. In practice we will always use \eqref{eq:spread}, rather than the definition, to estimate the spread.
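Note that $\log \langle K \rangle$ is exactly the weight of a minimum spanning tree of the complete graph on $K$ with edge weights $\log\langle xy\rangle$, so the spread can be computed exactly by Prim's algorithm. The sketch below (our own encoding, with $\langle xy\rangle$ supplied as a function) computes both the spread and the product on the right-hand side of \eqref{eq:spread} for a given enumeration; by \eqref{eq:spread} they agree up to a constant factor depending on $|K|$.
\begin{verbatim}
# The spread <K> is a minimum spanning tree quantity: minimizing the
# product of the positive weights <x y> over trees on K is the same as
# minimizing the sum of their logarithms.  Here dist(x, y) = <x y>.
def spread(points, dist):
    in_tree, prod = {points[0]}, 1
    while len(in_tree) < len(points):      # Prim's algorithm
        w, x = min((dist(x, y), x) for x in points if x not in in_tree
                                   for y in in_tree)
        prod *= w
        in_tree.add(x)
    return prod

def greedy_product(points, dist):
    # The product on the right-hand side of the display above: each new
    # point contributes its distance to the closest earlier point.
    prod = 1
    for i in range(1, len(points)):
        prod *= min(dist(points[i], points[j]) for j in range(i))
    return prod

# Example on Z with <x y> = |x - y| + 1 and K = {0, 1, 10}:
dist = lambda x, y: abs(x - y) + 1
print(spread([0, 1, 10], dist), greedy_product([0, 1, 10], dist))  # 20 20
\end{verbatim}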
The main tool in our analysis of the USF is the following estimate of BKPS. Recall that $\mathscr F(K)$ is the event that every vertex of $K$ is in the same component of the uniform spanning forest $\mathfrak F$.
\begin{thm}[BKPS \cite{BeKePeSc04}]
\label{thm:sdim1}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$, let $\mathfrak F$ be the uniform spanning forest of $G$, and let $K$ be a finite set of vertices of $\mathbb G$. Then there exists a constant $C=C(\mathbb G,|K|)$ such that
\begin{equation} \P\big(\mathscr F(K)\big) \leq C \big\langle K \big\rangle^{-(d-4)}. \end{equation}
\end{thm}
BKPS proved the theorem in the case $\mathbb G=\mathbb Z^d$. The general case follows from the same proof by applying the heat kernel estimates of Hebisch and Saloff-Coste \cite{HebSaCo93} (see \cref{thm:HSCGreen}), as stated in \cite[Remark 6.12]{BeKePeSc04}. These heat kernel estimates imply in particular that the Green's function estimate
\begin{equation}
\label{eq:HSC}
\sum_{n\geq 0} p_n(u,v) \asymp \langle uv \rangle^{-(d-2)}
\end{equation}
holds for every $d$-dimensional transitive graph $\mathbb G$ with $d>2$ and every pair $u,v \in \mathbb V$.
\begin{prop}\label{prop:sdim2}
Let $\mathbb G$ be a $d$-dimensional transitive graph, let $\mathfrak F$ be the uniform spanning forest of $G$, and let $K_i$ be a collection of finite sets of vertices of $\mathbb G$ indexed by some finite set $I$. Then there exists a constant $C=C(\mathbb G,|I|,\{|K_i|:i \in I\})$ such that
\begin{equation} \P\big(\mathscr F(K_i \cup K_j) \text{ if and only if $i=j$}\big) \leq C \prod_{i \in I} \big\langle K_i \big\rangle^{-(d-4)}. \end{equation}
\end{prop}
\begin{proof}
We may assume that $I=\{1,\ldots,k\}$ for some $k\geq 1$.
Given a collection of independent random walks $X^1,\ldots,X^n$, let $A(X^1,\ldots,X^n)$ be the indicator of the event that the forest generated by running the first $n$ steps of Wilson's algorithm using the walks $X^1,\ldots,X^n$, in that order, is connected.
Thus, given a finite set $K \subset \mathbb V$, we have
\[\P(\mathscr F(K)) = \P\left(A\left(X^1,\ldots,X^{|K|}\right) =1\right)\]
where $X^1,\ldots,X^{|K|}$ are independent random walks started at the vertices of $K$. Now suppose that $(K_i)_{i \in I}$ is a collection of finite sets, and suppose we generate a sample $\mathfrak F$ of the USF, starting with independent random walks $X^{1,1},\ldots,X^{1,|K_1|},X^{2,1},\ldots,X^{k,|K_k|}$, where $X^{i,j}$ starts from the $j$th element of $K_i$. Then we observe that
\[\left\{\mathscr F(K_i \cup K_j) \text{ if and only if $i=j$} \right\} \subseteq \bigcap_{i \in I} \left\{ A(X^{i,1},\ldots,X^{i,|K_i|}) =1\right\}, \]
and hence that
\begin{equation}
\label{eq:negdep}\P\big(\mathscr F(K_i \cup K_j) \text{ if and only if $i=j$}\big) \leq \prod_{i \in I}\P(\mathscr F(K_i)).\end{equation}
The claim now follows from \cref{thm:sdim1}. It is also possible to prove \eqref{eq:negdep} using the negative association property of the USF, see e.g.\ \cite{MR2060630}.
\end{proof}
\subsection{Witnesses}
\begin{figure}
\includegraphics[width=0.8\textwidth]{witness_example.pdf}
\caption{Schematic illustration of a witness for the faithful presence of a path of length two with endpoints as boundary points. Let $H$ be the graph with boundary defined by $V(H)=\{v_1,v_2,v_3\}$, $\partial V(H)=\{v_1,v_3\}$ and with edges $e_1=\{v_1,v_2\}$ and $e_2=\{v_2,v_3\}$. The configuration $(\xi_{(v,e)})_{(v,e)\in E_\bullet}$ is a witness for the $1$-faithful presence of $H$ at $(x_{v_1},x_{v_3})$ if
$\{\xi_{(v_1,e_1)},\xi_{(v_2,e_1)}\}$ and $\{\xi_{(v_2,e_2)},\xi_{(v_3,e_2)}\}$ are edges of $\mathbb G$ and
there exist three distinct trees of $\mathfrak F$ each containing one of the sets $\{x_{v_1},\xi_{(v_1,e_1)}\}$, $\{\xi_{(v_1,e_1)},\xi_{(v_2,e_1)}\}$, and $\{\xi_{(v_3,e_2)}, x_{v_3}\}$.}
\label{fig:witness}
\end{figure}
Let $H$ be a finite hypergraph with boundary, let $r\geq 1$, and let $x=(x_v)_{v\in \partial V}$ be a collection of vertices in $\mathbb G$. We say that $H$ is $r$-faithfully present at $x$ if it is faithfully present at the components of $x$ in $\mathcal C_r^{hyp}(\mathfrak F)$. We define $r$-presence of $H$ at $x$ similarly. Let $E_\bullet$ be the set of pairs $(e,v)$, where $e\in E$ is an edge of $H$ and $v\perp e$ is a vertex of $H$ incident to $e$. We say that $\xi= (\xi_{(e,v)})_{(e,v)\in E_\bullet}\in V^{E_\bullet}$ is a \textbf{witness} for the $r$-faithful presence of $H$ at $x$ if the following conditions hold:
\begin{enumerate}[leftmargin=*]
\itemsep0.3em
\item For every $e \in E$ and every $u,v \perp e$ we have that $\langle \xi_{(e,v)} \xi_{(e,u)} \rangle \leq r-1$.
\item For each boundary vertex $v \in \partial V$, every point in the set $\{x_v\} \cup \{\xi_{(e,v)} : e\perp v\}$ is in the same component of $\mathfrak F$,
\item for each interior vertex $v \in V_\circ$, every point in the set $\{\xi_{(e,v)} : e\perp v\}$ is in the same component of $\mathfrak F$, and
\item for any two distinct vertices $u,v \in V$, the components of $\mathfrak F$ containing $\{\xi_{(e,u)} : e\perp u\}$ and $\{\xi_{(e,v)} : e\perp v\}$ are distinct.
\end{enumerate}
See \cref{fig:witness} for an illustrated example.
We write $\mathscr W(x,\xi)=\mathscr W^H_r(x,\xi)$ for the event that $\xi$ is a witness for the $r$-faithful presence of $H$ at $x$.
Thus,
on the event that all the vertices of $x$ are in distinct components of $\mathfrak F$,
$H$ is $r$-faithfully present at $x$ if and only if $\mathscr W_r^H(x,\xi)$ occurs for some $\xi\in V^{E_\bullet}$, and is present at $x$ if and only if $\mathscr W_r^{H'}(x,\xi)$ occurs for some quotient $H'$ of $H$ and some $\xi\in V^{E_\bullet(H')}$.
We say that $H$ is $r$-\textbf{robustly faithfully present} at $x=(x_v)_{v\in V}$ if there is an infinite collection $\{ \xi^i = (\xi^i_{(e,v)})_{(e,v)\in E_\bullet} : i \geq 1 \}$
such that $\xi^i$ is a witness for the $r$-faithful presence of $H$ at $x$ for every $i$, and $\xi^i_{(e,v)} \neq \xi^j_{(e',v')}$ for every $i > j \geq 1$ and $(e,v),(e',v') \in E_\bullet$.
Often, $x$, $r$ and $H$ will be fixed. In this case we will speak simply of `faithful presence' to mean `$r$-faithful presence', `robustly faithfully present' to mean `$r$-robustly faithfully present', `witnesses' to mean `witnesses for the $r$-faithful presence of $H$ at $x$', and so on.
It will be useful to define the following sets in which witnesses must live.
For every $(x_v)_{v\in \partial V}$, $n\geq 0$ and $N > n$, let
\[\Xi_x(n,N) = \Xi^H_x(n,N) = \left(\Lambda_x(n,N)\right)^{E}\]
and let
$\Xi_{\bullet x}(n,N)=\Xi^{H,r}_{\bullet x}(n,N)$ be the set
\begin{multline*}\Xi_{\bullet x}(n,N) =\\ \left\{(\xi_{(e,v)})_{(e,v)\in E_\bullet} \in \left(\Lambda_x(n,N)\right)^{E_\bullet} : \, \langle \xi_{(e,v)} \xi_{(e,u)} \rangle \leq r-1 \text{ for every $e \in E$ and every $u,v \perp e$}\right\},\end{multline*}
so that $\xi \in \Lambda_x(n,N)^{E_\bullet}$ is a witness for the faithful presence of $H$ if and only if $\xi \in \Xi_{\bullet x}(n,N)$ and conditions $(2)$, $(3)$, and $(4)$ in the definition of witnesses, above, hold.
\subsection{Indistinguishability of tuples of trees}
In this section we provide background on the notion of \emph{indistinguishability theorems}, including the indistinguishability theorem of \cite{MulticomponentIndistinguishability} which will play a major role in the proofs of our main theorems.
Indistinguishability theorems tell us that, roughly speaking, `all infinite components look alike'. The first such theorem was proven in the context of Bernoulli percolation by Lyons and Schramm \cite{LS99}. Indistinguishability of components in uniform spanning forests was conjectured by Benjamini, Lyons, Peres, and Schramm \cite{BLPS} and proven by Hutchcroft and Nachmias \cite{HutNach2016a}. (Partial progress was made independently at the same time by Tim\'ar \cite{timar2015indistinguishability}.) All of the results just mentioned apply to \emph{individual} components. In this paper, we will instead apply the indistinguishability theorem of \cite{MulticomponentIndistinguishability}, which yields a form of indistinguishability for \emph{multiple} components in the uniform spanning forest.
We will use this theorem as a zero-one law that allows us to pass from an estimate showing that certain events occur with positive probability to knowing that these events must occur with probability one.
We now give the definitions required to state this theorem.
Let $G=(V,E)$ be a graph, and let $k\geq 1$.
We define $\Omega_k(G) =\{0,1\}^E \times V^k$, which we equip with its product $\sigma$-algebra and think of as the set of subgraphs of $G$ rooted at an ordered $k$-tuple of vertices. A measurable set $\mathscr A \subseteq \Omega_k(G)$ is said to be a $k$-\textbf{component property} if
\[ (\omega,(u_i)_{i=1}^k)\in \mathscr A \Longrightarrow (\omega,(v_i)_{i=1}^k)\in \mathscr A \,\,\,
\begin{array}{l}
\text{for all } (v_i)_{i=1}^k \in V^k \text{ such that $u_i$ is} \\\text{connected to $v_i$ in $\omega$ for each $i=1,\ldots,k$}.
\end{array}
\]
That is, $\mathscr A$ is a $k$-component property if it is stable under replacing the root vertices with other root vertices from within the same components.
Given a $k$-component property $\mathscr A$, we say that a $k$-tuple of components $(K_1,\ldots,K_k)$ of a configuration $\omega \in \{0,1\}^E$ \textbf{has property} $\mathscr A$ if $(\omega,(u_i)_{i=1}^k) \in \mathscr A$ whenever $u_1,\ldots,u_k$ are vertices of $G$ such that $u_i \in K_i$ for every $1 \leq i \leq k$.
Given a vertex $v$ of $G$ and a configuration $\omega \in \{0,1\}^E$, let $K_\omega(v)$ denote the connected component of $\omega$ containing $v$. We say that a $k$-component property $\mathscr A$ is a \textbf{tail} $k$-component property if
\[ (\omega,(v_i)_{i=1}^k)\in \mathscr A \Longrightarrow (\omega',(v_i)_{i=1}^k)\in \mathscr A \,\,\,
\begin{array}{l}
\forall \omega' \in \{0,1\}^E \text{ such that } \omega \hspace{.1em}\triangle\hspace{.1em} \omega' \text{ is finite and }\\
K_\omega(v_i)\hspace{.1em}\triangle\hspace{.1em} K_{\omega'}(v_i) \text{ is finite for every $ i =1,\ldots,k$,}
\end{array}
\]
where $\hspace{.1em}\triangle\hspace{.1em}$ denotes the symmetric difference.
In other words, tail multicomponent properties are stable under finite modifications to $\omega$ that result in finite modifications to each of the components of interest $K_\omega(v_1),\ldots,K_\omega(v_k)$.
\begin{thm}[\cite{MulticomponentIndistinguishability}]
\label{thm:indist}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$ and with vertex set $\mathbb V$, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$.
Then for each $k\geq 1$ and each tail $k$-component property $\mathscr A \subseteq \Omega_k(\mathbb G)$, either every $k$-tuple of distinct connected components of $\mathfrak F$ has property $\mathscr A$ almost surely or no $k$-tuple of distinct connected components of $\mathfrak F$ has property $\mathscr A$ almost surely.
\end{thm}
We say that $\mathscr A$ is a \textbf{multicomponent property} if it is a $k$-component property for some $k\geq 1$.
For our purposes, the key example of a tail multicomponent property is the property that some finite hypergraph with boundary $H$ is $r$-robustly faithfully present at $(x_v)_{v\in \partial V}$. Applying \cref{thm:indist}, we will deduce that if $H$ is $r$-robustly faithfully present at some $(x_v)_{v\in \partial V}$ with positive probability then it must be almost surely $r$-robustly faithfully present at \emph{every} $(x_v)_{v\in \partial V}$ for which the vertices $\{x_v\}_{v\in \partial V}$ are all in distinct components of $\mathfrak F$.
\subsection{Optimal Coarsenings}
\label{subsec:optimalcoarsenings}
In this section we study the $\min\max$ problem appearing in \cref{thm:main,thm:mainhyper}, proving the following.
\begin{lemma}
\label{lem:maxminswap}
Let $H$ be a finite hypergraph with boundary and let $d\geq 4$. Then
\begin{multline}
\label{eq:maxminswap}
\max\left\{ \min \left\{\eta_d (H'') : H'' \text{ is a coarsening of $H'$}\right\} : H' \text{ is a subhypergraph of } H\right\} =\\
\min\left\{ \max \left\{\eta_d (H'') : H'' \text{ is a subhypergraph of $H'$}\right\} : H' \text{ is a coarsening of } H\right\}.
\end{multline}
In particular, $H$ has a coarsening all of whose subhypergraphs are $d$-buoyant if and only if every subhypergraph of $H$ has a $d$-buoyant coarsening.
\end{lemma}
Given a hypergraph with boundary $H=(\partial V, V_\circ, E,\perp)$ and an equivalence relation $\bowtie$ on $E$, we can form a coarsening $\coarse{H}{\bowtie}$ of $H$ by taking $\partial V(\coarse{H}{\bowtie})=\partial V(H)$ and $V_\circ (\coarse{H}{\bowtie})=V_\circ(H)$, taking $E(H/\bowtie)$ to be the set of equivalence classes of $\bowtie$, and defining
\[
\perp\!(H/\bowtie) = \Bigl\{(v,[e]) : v\in V,\, [e] \in E(H/\bowtie), \text{ and } \exists f\in E \text{ such that } [f]=[e] \text{ and } v \perp f\Bigr\},
\]
where $[e]$ denotes the equivalence class of $e$ under $\bowtie$.
It is easily seen that every coarsening of $H$ can be uniquely represented in this way. We say that a coarsening $\coarse{H}{\bowtie}$ of a hypergraph with boundary $H$ is \textbf{proper} if there exist at least two non-identical edges of $H$ that are related under $\bowtie$.
Let $H=(\partial V, V_\circ,E,\perp)$ be a finite hypergraph with boundary.
We say that a subhypergraph $H'$ of $H$ is \textbf{subordinate} to an equivalence relation $\bowtie$ on $E$ if every equivalence class of $\bowtie$ is either contained in or disjoint from the edge set of $H'$.
Given an equivalence relation $\bowtie$ on $E$ and a subhypergraph $H'$ of $H$, we write $\coarse{H'}{\bowtie}$ for the coarsening $\coarse{H'}{\,(\,\bowtie |_{E'})}$, where $\bowtie|_{E'}$ is the restriction of $\bowtie$ to the edge set of $H'$.
The function $H' \mapsto \coarse{H'}{\bowtie}$ is a bijection from subhypergraphs of $H$ subordinate to $\bowtie$ to subhypergraphs of $\coarse{H}{\bowtie}$.
We say that an equivalence relation $\bowtie$ on $E$ is $d$\textbf{-optimal} if
\[ \eta_d(\coarse{H}{\bowtie}) = \min \bigl\{\eta_d(\coarse{H}{\bowtie'}) : \text{$\coarse{H}{\bowtie'}$ a coarsening of $H$} \bigr\}. \]
We call a coarsening $H'=\coarse{H}{\bowtie}$ of $H$ $d$-optimal if $\bowtie$ is $d$-optimal. We say that a subhypergraph $H'=(\partial V',V_\circ',E',\perp')$ of $H$ is \textbf{full} if $\{v\in V: v \perp e\}\subseteq V'$ for every $e \in E'$.
\begin{lemma}
\label{lem:optimal}
Let $H$ be a finite hypergraph with boundary, let $d\in \mathbb R$, and let $\coarse{H}{\bowtie}$ be a $d$-optimal coarsening of $H$. Then $\coarse{H'}{\bowtie}$ is a $d$-optimal coarsening of $H'$ for every full subhypergraph $H'$ of $H$ subordinate to $\bowtie$.
\end{lemma}
Recall from \cref{subsec:introgeneral} that $\eta_d(H)$ is defined to be $(d-4)\Delta(H)-d|E|-(d-4)|V_\circ|$.
\begin{proof}
Let $H'$ be a subhypergraph of $H$ subordinate to $\bowtie$. Let $\bowtie'$ be an equivalence relation on $E'$, and let $\bowtie''$ be the equivalence relation on $E$ defined by
\begin{equation*}
e \, \bowtie'' \, e'
\iff (\text{$e,e' \in E\setminus E'$ and $e \bowtie e'$}) \text{ or } (\text{$e,e' \in E'$ and $e \bowtie' e'$}).
\end{equation*}
Thus, $H'$ is subordinate to $\bowtie''$ and $\coarse{H'}{\bowtie''}=\coarse{H'}{\bowtie'}$. It is easily verified that, since $\bowtie$ and $\bowtie''$ differ only on edges of $H'$ and $H'$ is subordinate to $\bowtie$,
\begin{align*}
|V_\circ(\coarse{H'}{\bowtie'})| - |V_\circ(\coarse{H'}{\bowtie})| &= |V_\circ(\coarse{H}{\bowtie''})| - |V_\circ(\coarse{H}{\bowtie})|,\\
|E(\coarse{H'}{\bowtie'})| - |E(\coarse{H'}{\bowtie})| &= |E(\coarse{H}{\bowtie''})| - |E(\coarse{H}{\bowtie})|, \quad \text{ and }\\
\Delta(\coarse{H'}{\bowtie'}) - \Delta(\coarse{H'}{\bowtie}) &= \Delta(\coarse{H}{\bowtie''}) - \Delta(\coarse{H}{\bowtie}).
\end{align*}
We deduce that, since $\coarse{H}{\bowtie}$ is $d$-optimal,
\[\eta_d(\coarse{H'}{\bowtie'}) - \eta_d(\coarse{H'}{\bowtie}) = \eta_d(\coarse{H}{\bowtie''}) - \eta_d(\coarse{H}{\bowtie}) \geq 0,
\vspace{0.2em}
\]
and the result follows since $\bowtie'$ was arbitrary. \qedhere
\end{proof}
\begin{lem}
\label{lem:optimalisbest}
Let $H$ be a finite hypergraph with boundary, let $d\geq 4$, and let $\coarse{H}{\bowtie}$ be a $d$-optimal coarsening of $H$. Then
\begin{multline}
\label{eq:optimalisbest}
\max \left\{ \min \left\{\eta_d(\coarse{H'}{\bowtie'}) : \coarse{H'}{\bowtie'}
\text{ a coarsening of $H'$}\right\}: H'
\text{ a subhypergraph of $H$} \right\}\\
=
\max\left\{\eta_d(\coarse{H'}{\bowtie}) : H' \text{ a subhypergraph of $H$} \right\}.
\end{multline}
\end{lem}
\begin{proof}
Let $\coarse{H}{\bowtie}$ be a $d$-optimal coarsening of $H$.
Let $H'$ be a subhypergraph of $H$, and let $H''$ be the smallest full subhypergraph of $H$ that contains $H'$ and is subordinate to $\bowtie$. That is, $H''$ is obtained from $H'$ by adding every edge of $H$ that is contained in equivalence class of $\bowtie$ intersecting $E'$ and every vertex of $H$ that is incident to either an edge of $H'$ or one of these added edges.
Writing $\deg_{\coarse{H''}{\;\bowtie}}(v)$ for the degree of a vertex in $\coarse{H''}{\bowtie}$, we compute that
\[\eta_d(\coarse{H''}{\bowtie}) - \eta_d(\coarse{H'}{\bowtie}) = (d-4)\sum_{v\in V(H'') \setminus V(H') } \left(\deg_{\coarse{H''}{\;\bowtie}}(v) -1\right) \geq 0.\]
It follows that
\begin{multline}
\label{eq:optimalisbest2}
\min\{ \eta_d(\coarse{H'}{\bowtie'}) : \text{$\coarse{H'}{\bowtie'}$ a coarsening of $H'$}\} \leq
\eta_d(\coarse{H'}{\bowtie}) \leq \eta_d(\coarse{H''}{\bowtie})\\ = \min \{ \eta_d(\coarse{H''}{\bowtie'}) : \coarse{H''}{\bowtie'} \text{ is a coarsening of $H''$}\},
\end{multline}
where the equality on the second line follows from \cref{lem:optimal}. Taking the maximum over $H'$, we obtain that
\begin{align*}
\max &\left\{ \min \left\{\eta_d(\coarse{H'}{\bowtie'}) : \coarse{H'}{\bowtie'}
\text{ a coarsening of $H'$}\right\}: H'
\text{ a subhypergraph of $H$} \right\}\\
&\leq
\max \left\{ \eta_d(\coarse{H'}{\bowtie})
: H'
\text{ a subhypergraph of $H$} \right\}\\
&\leq
\max \left\{ \min \left\{\eta_d(\coarse{H''}{\bowtie'}) : \coarse{H''}{\bowtie'}
\text{ a coarsening of $H''$}\right\}:
\begin{array}{l}\text{$H''$ a subhypergraph of}\\ \text{$H$ subordinate to $\bowtie$} \end{array}\right\},
\end{align*}
where the second inequality follows from \eqref{eq:optimalisbest2}.
The final line of this display is clearly less than or equal to the first line, so that all the lines must be equal, completing the proof.
\end{proof}
\begin{proof}[Proof of \cref{lem:maxminswap}]
It follows immediately from \cref{lem:optimalisbest} that
\begin{multline*}
\max \left\{ \min \left\{\eta_d(\coarse{H'}{\bowtie'}) : \coarse{H'}{\bowtie'}
\text{ a coarsening of $H'$}\right\}: H'
\text{ a subhypergraph of $H$} \right\}\\
\geq \min \left\{ \max\left\{\eta_d(\coarse{H'}{\bowtie'}) : H' \text{ a subhypergraph of $H$} \right\} : \coarse{H}{\bowtie'} \text{ a coarsening of $H$}\right\},
\end{multline*}
and the reverse inequality is trivial. \qedhere
\end{proof}
\begin{remark}
\cref{lem:optimalisbest} yields a brute force algorithm for computing the value of the relevant $\max \min$ problem that is exponentially faster than the trivial brute force algorithm, although still taking superexponential time in the number of edges of $H$.
\end{remark}
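To make the preceding remark concrete, here is a minimal illustrative sketch (not part of the proof) of the brute force computation, written in Python under a hypothetical encoding in which each edge is a \texttt{frozenset} of its incident vertices and a coarsening is a partition of the list of edges. By \cref{lem:optimalisbest}, one may compute a single $d$-optimal coarsening of $H$ by minimizing over all partitions once, and then maximize $\eta_d(\coarse{H'}{\bowtie})$ over subhypergraphs $H'$, rather than minimizing over coarsenings separately for every subhypergraph.
\begin{verbatim}
def partitions(edges):
    # Generate all partitions of a list of edges; each partition
    # encodes a coarsening (its blocks are the merged edges).
    if not edges:
        yield []
        return
    first, rest = edges[0], edges[1:]
    for part in partitions(rest):
        yield [[first]] + part                 # `first` in its own block
        for i in range(len(part)):             # or merged into a block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def eta(d, blocks, n_interior):
    # eta_d of a coarsening: (d-4)*Delta - d*|E| - (d-4)*|V_interior|,
    # where Delta counts vertex-edge incidences after merging.
    delta = sum(len(frozenset().union(*block)) for block in blocks)
    return (d - 4) * delta - d * len(blocks) - (d - 4) * n_interior

def eta_hat(d, edges, n_interior):
    # Minimum of eta_d over all (Bell-many) coarsenings.
    return min(eta(d, blocks, n_interior)
               for blocks in partitions(list(edges)))
\end{verbatim}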
\section{Sketch of the proof}
\label{sec:notationsandoutline}
In this section we give a detail-free overview of the most important components of the proof. This section is completely optional; all the arguments and definitions mentioned here will be repeated in full detail later on.
\subsection{Non-ubiquity in high dimensions}
\label{subsec:nonubiqsketch}
Let $\mathbb G$ be a $d$-dimensional transitive graph, let $H$ be a finite hypergraph with boundary, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$.
We wish to show that
if every coarsening of $H$ has a subhypergraph that is not $d$-buoyant, then
$H$ is not faithfully ubiquitous in $\mathcal C^{hyp}_r(\mathfrak F)$ for any $r \geq 1$ a.s. By \cref{lem:maxminswap}, this condition is equivalent to there existing a subhypergraph of $H$ none of whose coarsenings are $d$-buoyant. If $H$ is faithfully ubiquitous then so are all of its subhypergraphs, and so it suffices to consider the case that $H$ does not have any $d$-buoyant coarsenings, i.e., that $\hat \eta_d(H) >0$.
To show that $H$ is not faithfully ubiquitous, it would suffice to show that if the vertices $x=(x_v)_{v\in \partial V}$ are far apart from each other, then the expected total number of witnesses for the faithful presence of $H$ at $x$ is small. As it happens, we are not able to control the total number of witnesses without making further assumptions on $H$.
Nevertheless, the most important step in our argument is to show
that if $x$ is contained in $\Lambda_x(0,n-1)$, then the expected number of witnesses in $\Lambda_x(n,n+1)$ is exponentially small as a function of $n$.
Once we have done this, we will control the expected number of witnesses that occur `at the same scale' as $x$ by a similar argument. We are not finished at this point, of course, since we have not ruled out the existence of witnesses that are spread out across multiple scales. However, given the single-scale estimates, we are able to handle multi-scale witnesses of this form via an inductive argument on the size of $H$ (\cref{lem:inductionestimate,lem:firstmoment2,lem:firstmoment3}), which allows us to reduce from the multi-scale setting to the single-scale setting.
Let us briefly discuss how the single-scale estimate is attained. Write $\Xi = \Xi_x(n,n+1)$. \cref{prop:sdim2} implies that the expected number of witnesses in $\Lambda_x(n,n+1)$ is at most a constant multiple of
\[
\sum_{\xi \in \Xi}
W(x,\xi),\]
where
\[ W(x,\xi) = \prod_{u\in \partial V}\langle x_u, \{\xi_e: e \perp u\} \rangle^{-(d-4)} \prod_{u \in V_\circ} \langle \{\xi_e: e \perp u\} \rangle^{-(d-4)}. \]
To control this sum, we split it as follows. Let $L$ be the set of symmetric functions $\ell:E^2 \to \{0,\ldots,n\}$ such that $\ell(e,e)=0$ for every $e\in E$.
For each $\ell \in L$, let
\[\Xi_\ell = \left\{\xi \in \Xi :
\begin{array}{l}
2^{\ell(e,e')} \leq \langle \xi_e \xi_{e'} \rangle \leq 2^{\ell(e,e')+2} \text{ for all $e,e' \in E$}
\end{array}
\right\},\]
so that $\Xi = \bigcup_{\ell \in L} \Xi_\ell$. The advantage of this decomposition is that $W$ is approximately constant on each set $\Xi_\ell$:
\[\log_2 W(x,\xi) \approx -(d-4)|\partial V| n -(d-4)\sum_{i=2}^{|E|}\sum_{u \perp e_i}\min \left\{\ell(e_i,e_j) : j<i,\, e_j \perp u\right\}\]
for every $\xi \in \Xi_\ell$.
On the other hand, by considering the number of choices we have for $\xi_{e_i}$ at each step given our previous choices, it follows that
\begin{align}\log_2 |\Xi_\ell| \lesssim
dn+ d\sum_{i=2}^{|E|}\min\left\{\hat \ell(e_i,e_j) : j<i\right\},
\end{align}
where $\hat \ell$ is the largest ultrametric on $E$ that is dominated by $\ell$. ($\Xi_\ell$ could be much smaller than this of course -- it could even be empty.) We deduce that
\begin{multline*}
\sum_{\xi \in \Xi}
W(x,\xi) \preceq \exp_2\left( dn-(d-4)|\partial V|n\right) \\ \cdot \sum_{\ell \in L} \exp_2 \left[ d\sum_{i=2}^{|E|}\min\left\{\hat \ell(e_i,e_j) : j<i\right\} -(d-4)\sum_{i=2}^{|E|}\sum_{u \perp e_i}\min \left\{\ell(e_i,e_j) : j<i,\, e_j \perp u\right\} \right]
\end{multline*}
and hence that
\begin{multline*}
\log_2 \sum_{\xi \in \Xi}
W(x,\xi) \lesssim \log_2|L| + dn-(d-4)|\partial V|n
\\ + \max_{\ell \in L} \left[ d\sum_{i=2}^{|E|}\min\left\{\hat \ell(e_i,e_j) : j<i\right\} -(d-4)\sum_{i=2}^{|E|}\sum_{u \perp e_i}\min \left\{\ell(e_i,e_j) : j<i,\, e_j \perp u\right\} \right].
\end{multline*}
We have that $\log_2 |L| \leq |E|^2 \log_2(n+1)$, which will be negligible compared with the rest of the expression in the case that $\hat \eta_d(H) >0$. From here, the problem is to identify the $\ell \in L$ achieving the maximum above. We will argue, by invoking a general lemma (\cref{lem:ultrametric1}) about optimizing linear combinations of minima of distances on the ultrametric polytope, that there is an $\ell \in L$ maximizing the expression such that $\ell$ is an ultrametric and $\ell(e,e') \in \{0,n\}$ for every $e,e' \in E$. The set of such functions $\ell$ is in bijection with the set of coarsenings $H'$ of $H$, where two edges $e,e'$ of $H$ are identified in $H'$ if and only if $\ell(e,e')=0$. Choosing such a coarsening optimally, it is not hard to deduce that
\[ \log_2 \sum_{\xi \in \Xi}
W(x,\xi) \lesssim - \hat \eta_d(H)\, n + |E|^2\log_2 n, \]
giving the desired exponential decay.
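As a sanity check (an illustrative special case, not needed later): taking $\ell(e,e')=n$ for all $e \neq e'$, which corresponds to the coarsening in which no edges of $H$ are merged, and interpreting empty minima as zero (and assuming every vertex of $H$ is incident to at least one edge), the bracketed maximand above evaluates to $d(|E|-1)n - (d-4)(\Delta-|V|)n$, so that together with the prefactor $\exp_2\left(dn - (d-4)|\partial V|n\right)$ we obtain the exponent
\[
d|E|\, n - (d-4)\Delta\, n + (d-4)|V_\circ|\, n = -\eta_d(H)\, n,
\]
as expected.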
\subsection{Ubiquity in low dimensions}
We now sketch the proof of ubiquity in low dimensions. Here we will only discuss the case in which $d/(d-4)$ is not an integer (i.e., $d\notin \{5,6,8\}$). The case in which $d/(d-4)$ is an integer raises several additional technical complications; see \cref{sec:speciald}.
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d\in \{7\} \cup \{9,10,\ldots\}$, let $H$ be a finite hypergraph with boundary, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$. Recall the definition of $R_\mathbb G(H)$ from \cref{subsec:introgeneral}.
Working in the opposite direction to the previous subsection, we wish to prove that if $H$ has a coarsening all of whose subhypergraphs are $d$-buoyant, then $H$ is faithfully ubiquitous in the component hypergraph $\mathcal C^{hyp}_{r}(\mathfrak F)$ for every $r \geq R_\mathbb G(H)$ a.s.
We say that $H$ is $r$-\textbf{robustly faithfully present} at $x=(x_v)_{v\in \partial V}$ if there are infinitely many disjoint witnesses for the faithful presence of $H$ at $x$. The event that $H$ is $r$-robustly faithfully present at $x$ is a tail $|\partial V|$-component property. Thus, by \cref{thm:indist}, it suffices to prove that there exists an $x$ such that, with positive probability, the points of $x$ are all in different components of $\mathfrak F$ and $H$ is $R_\mathbb G(H)$-robustly faithfully present at $x$.
Let us suppose for now that every subhypergraph of $H$ is $d$-buoyant
(i.e., that we do not have to pass to a coarsening for this to be true).
To prove that $H$ has a positive probability of being robustly faithfully present at some $x$, we perform a first and second moment analysis on the number of witnesses in dyadic shells. Suppose that $x$ is contained in $\Lambda_x(0,n-1)$. Since we are now interested in existence rather than nonexistence, we can make things easier for ourselves by considering only $\xi$ that are both contained in a dyadic shell $\Lambda_x(n,n+1)$, and such that $\langle \xi_{(e,u)} \xi_{(e',u')} \rangle \geq 2^{n-C_1}$ whenever $e\neq e'$, for some appropriately chosen constant $C_1$. Furthermore, for each $e\in E$ the points $\{ \xi_{(e,u)} : u \perp e\}$ must be sufficiently well separated that there are no local obstructions to $\xi$ being a witness -- this is where we need that $r \geq R_\mathbb G(H)$. Call such a $\xi$ \textbf{good}, and denote the set of good $\xi$ by $\Omega_x(n)$. We then argue that for good $\xi$, the probability that $\xi$ is a witness is comparable to
\begin{multline*}
W(x,\xi) = \prod_{u \in \partial V} \langle x_u, \{\xi_e: e \perp u\} \rangle^{-(d-4)} \prod_{u \in V_\circ} \langle \{\xi_e: e \perp u\} \rangle^{-(d-4)}\\ \asymp \exp_2 \left[ -(d-4)(\Delta -|V_\circ|) \, n \right],
\end{multline*}
where $\xi_e$ is chosen arbitrarily from $\{\xi_{(e,u)} : u \perp e\}$ for each $e$, and hence that
the expected number of witnesses in $\Omega_x(n)$ is comparable to $2^{-\eta_d(H) n}$.
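Indeed, the count behind this is straightforward: heuristically $|\Omega_x(n)| \asymp 2^{d|E|n}$, since each of the $|E|$ points $\xi_e$ ranges over a shell of volume $\asymp 2^{dn}$ and the separation conditions only remove a constant fraction of the choices, so that
\[
\mathbb E\left[\#\{\text{good witnesses in } \Lambda_x(n,n+1)\}\right] \asymp 2^{d|E|n} \cdot \exp_2\left[-(d-4)(\Delta-|V_\circ|)\, n\right] = \exp_2\left[-\eta_d(H)\, n\right].
\]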
In other words, we have that the upper bound on the probability that $\xi$ is a witness provided by \cref{prop:sdim2} is comparable to the true probability when $\xi$ is good. Our proof of this estimate appears in \cref{Sec:technical}; unfortunately it is quite long.
Taking this lower bound on trust for now, the rest of the analysis proceeds similarly to that sketched in \cref{subsec:nonubiqsketch}, and is in fact somewhat simpler thanks to our restriction to good configurations. The bound implies that the expected number of good witnesses in $\Lambda_x(n,n+1)$ is comparable to
$\exp_2\left[ - \eta_d(H) \, n \right]$. Estimating the second moment is equivalent to estimating the expected number of pairs $\xi,\zeta$ such that $\xi$ and $\zeta$ are both good witnesses. Observe that if $\xi$ and $\zeta$ are both good witnesses then the following hold:
\begin{enumerate}
\item For each $v \in V$, there is at most one $v' \in V$ such that $\xi_{(e,v)}$ and $\zeta_{(e',v')}$
are in the same component of $\mathfrak F$ for some (and hence every) $e \perp v$ and $e' \perp v'$.
\item For each $e \in E$, there is at most one $e'\in E$ such that $\langle \xi_e \zeta_{e'} \rangle \leq 2^{n-C_1-1}$.
\end{enumerate}
To account for the degrees of freedom given by (1), we define $\Phi$ to be the set of functions $\phi: V_\circ \to V_\circ \cup \{\star\}$ such that the preimage $\phi^{-1}(v)$ has at most one element for each $v\in V_\circ$.
(Here and elsewhere, we use $\star$ as a dummy symbol so that we can encode partial bijections by functions.)
For each $\phi \in \Phi$, we define $\tilde \mathscr W_\phi(\xi,\zeta)$ to be the event that $\xi$ and $\zeta$ are both witnesses, and that $\xi_{(e,v)}$ and $\zeta_{(e',v')}$
are in the same component of $\mathfrak F$ if and only if $v'=\phi(v)$.
Thus, to control the expected number of pairs of good witnesses, it suffices to control
\[\sum_{\phi \in \Phi} \sum_{\xi,\zeta \text{ good}} \P\left(\tilde \mathscr W_\phi(\xi,\zeta)\right) \preceq \max_{\phi \in \Phi} \sum_{\xi,\zeta \text{ good}} \P\left(\tilde \mathscr W_\phi(\xi,\zeta)\right).\]
Next, to account for the degrees of freedom given by (2), we define $\Psi$ to be the set of functions $\psi: E \to E \cup \{\star\}$ such that the preimage $\psi^{-1}(e)$ has at most one element for each $e\in E$. For each $\psi \in \Psi$ and $k = (k_e)_{e \in E} \in \{0,\ldots,n\}^{E}$, let
\begin{multline*}\Omega^{\psi,k} = \\\left\{(\xi,\zeta) \in (\Omega_x(n))^2 :
\begin{array}{l} 2^{n-k_e} \leq \langle \zeta_e \xi_{\psi(e)} \rangle \leq 2^{n-k_e+2} \text{ for all $e\in E$ such that $\psi(e) \neq \star$,}
\vspace{0.3em} \\
\text{and }\langle \zeta_e \xi_{e'} \rangle \geq 2^{n-C_1-2} \text{ for all $e,e'\in E$ such that $e' \neq \psi(e)$}
\end{array}
\right\}.
\end{multline*}
We can easily upper bound the volume of this set: each of the $2|E|$ coordinates of $(\xi,\zeta)$ lies in a shell of volume $\preceq 2^{dn}$, while each coordinate $\zeta_e$ with $\psi(e)\neq\star$ is further constrained to a ball of volume $\preceq 2^{d(n-k_e)}$ around $\xi_{\psi(e)}$, so that
\begin{equation*}\log_2|\Omega^{\psi,k}| \lesssim 2d|E|n - d\sum_{\psi(e)\neq\star} k_e. \end{equation*}
Using this together with \cref{prop:sdim2}, it is straightforward to calculate that
\begin{multline*}
\log_2 \sum_{(\xi,\zeta) \in \Omega^{\psi,k}}\P\left(\tilde \mathscr W_\phi(\xi,\zeta)\right) \lesssim
-2\eta_d(H)\, n - (d-4)|\{ u \in V_\circ : \phi(u) \neq \star\}|\,n
\\+ (d-4)\sum_{\psi(e)\neq \star}|\{u \perp e : \phi(u) \perp \psi(e)\}|k_e - d \sum_{\psi(e)\neq\star}k_e
\end{multline*}
for every $\phi\in \Phi$, $\psi \in \Psi$ and $k \in \{0,\ldots,n\}^E$.
We now come to some case analysis. Observe that for every $\phi \in \Phi$, $\psi\in \Psi$, and $e\in E$, we have that
\begin{multline*}
\sum_{k_e=0}^n \exp_2\left[\bigl((d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| - d\bigr)k_e\right] \\\preceq \begin{cases}
\exp_2\left[\bigl((d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| - d\bigr)n\right] & \text{ if $(d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| >d$}\\
n & \text{ if $(d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| =d$}\\
1 & \text{ if $(d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| <d$}.
\end{cases}
\end{multline*}
Since $d/(d-4)$ is not an integer, the middle case cannot occur and we obtain that
\begin{multline*}
\log_2 \sum_{k}\sum_{(\xi,\zeta) \in \Omega^{\psi,k}} \P(\tilde \mathscr W_\phi(\xi,\zeta))
\lesssim
-2\eta_d(H)\, n - (d-4)|\{ u \in V_\circ : \phi(u) \neq \star\}|\,n \\
+\sum_{e}\left[(d-4)|\{u \perp e : \phi(u) \perp \psi(e)\}|-d\right]
\mathbbm{1}\left(|\{u \perp e : \phi(u) \perp \psi(e)\}| > d/(d-4) \right) n.
\end{multline*}
From here, our task is to show that the expression on the right hand side is maximized when $\phi \equiv \star$ and $\psi \equiv \star$, in which case the second and third terms vanish and it is equal to $-2 \eta_d(H)\, n$. To do this, we identify optimal choices of $\phi$ and $\psi$ with subhypergraphs of $H$, and use the assumption that every subhypergraph of $H$ is $d$-buoyant. This should be compared to how, in the proof of non-ubiquity sketched in the previous subsection, we identified optimal choices of $\ell$ with coarsenings of~$H$.
Once we have this, since there are only a constant number of choices for $\phi$ and $\psi$, we deduce that the second moment of the number of good witnesses is comparable to the square of the first moment. Thus, it follows from the Cauchy-Schwarz inequality that the probability of there being a good witness in each sufficiently large dyadic shell is bounded from below by some $\varepsilon>0$, and we deduce from Fatou's lemma that there are good witnesses in infinitely many dyadic shells with probability at least $\varepsilon$. This completes the proof that robust faithful presence occurs with positive probability.
It remains to remove the simplifying assumption we placed on $H$, i.e., to allow ourselves to pass to a coarsening of $H$ all of whose subhypergraphs are $d$-buoyant before proving faithful ubiquity. To do this,
we introduce the notion of \emph{constellations of witnesses}. These are larger collections of points, defined in such a way that every constellation of witnesses for $H$ contains a witness for each refinement of $H$. In the actual, fully detailed proof we will work with constellations from the beginning. This does not add many complications.
\section{Moment Estimates}
\label{sec:moments}
\subsection{Non-ubiquity in high dimensions}\label{sec:1stupper}
The goal of this section is to prove the following.
\begin{prop}\label{prop:nonubiquity}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$, let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$, let $H$ be a finite hypergraph with boundary, and let $r\geq 1$. Then the following hold:
\begin{enumerate}[leftmargin=*]
\itemsep0.2em
\item
If $H$ has a subhypergraph that does not have any $d$-buoyant coarsenings,
then $H$ is not faithfully ubiquitous in $\mathcal{C}^{hyp}_r(\mathfrak F)$ almost surely.
\item
If every quotient $H'$ of $H$ such that $R_\mathbb G(H') \leq r$ has a subhypergraph that does not have any $d$-buoyant coarsenings,
then $H$ is not ubiquitous in $\mathcal{C}^{hyp}_r(\mathfrak F)$ almost surely.
\end{enumerate}
\end{prop}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d > 4$, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$. Let $H=(\partial V,V_\circ,E)$ be a finite hypergraph with boundary such that $E \neq \emptyset$, and let $r\geq 1$. Recall that $\mathscr W(x,\xi)$ is defined to be the event that $\xi$ is a witness for the faithful presence of $H$ at $x$.
For each $N> n$, we define
\[
S^H_x(n,N) = \sum_{\xi \in \Xi_{\bullet x}(n,N)} \mathbbm{1}\left[\mathscr W(x,\xi)\right].
\]
For each $(\xi_e )_{e\in E} \in \mathbb V^{E}$, we also define
\[
W^H(x,\xi) = \prod_{u \in \partial V} \langle x_u, \{\xi_e: e \perp u\} \rangle^{-(d-4)} \prod_{u \in V_\circ} \langle \{\xi_e: e \perp u\} \rangle^{-(d-4)}
\]
and
\[
\mathbb{W}^H_x(n,N) = \sum_{\xi\in \Xi_x(n,N)} W^H(x,\xi),
\]
so that, if we choose a vertex $u(e) \perp e$ arbitrarily for each $e\in E$ and set $(\xi_e)_{e\in E} = (\xi_{(e,u(e))})_{e\in E}$, it follows from \cref{prop:sdim2} that
\[
\mathbb E\left[ S^H_{x}(n,N)\right]
=
\sum_{\xi\in \Xi_{\bullet x}(n,N)} \P(\mathscr W(x,\xi)) \preceq \sum_{\xi\in \Xi_{x}(n,N)} W^H(x,\xi) = \mathbb W^H_{x}(n,N)
\]
for every $x$, $n$, and $N$.
To avoid trivialities, in the case that $H$ does not have any edges we define $\mathbb W^H_x(n,N)=1$ for every $x\in \mathbb V^{\partial V}$ and $N>n$.
\medskip
In order to prove \cref{prop:nonubiquity}, it will suffice to show that if $H$ has a subhypergraph with boundary that does not have any $d$-buoyant coarsenings, then for every $\varepsilon>0$ there exists a collection of vertices $(x_u)_{u\in \partial V}$ such that the vertices $x_u$ are all in different components of $\mathfrak F$ with probability at least $1/2$ (which, by \cref{thm:sdim1}, will be the case if the vertices are all far away from each other), but $\P(H$ is faithfully present at $x)= \P(S^H_{x}(0,\infty) >0) \leq \varepsilon$.
In order to prove this, we seek to obtain upper bounds on the quantity $\mathbb W^H_{x}(n,N)$. We begin by considering the case of a single distant scale. That is, the case that $|N-n|$ is a constant and all the points of $x$ are contained in $\Lambda_x(0,n-1)$. Recall that
$\hat \eta_{d}(H)$ is defined to be $\min \{\eta_{d}(H') : \text{ $H'$ is a coarsening of $H$}\}$.
\begin{lem}[A single distant scale]
\label{lem:firstmoment}
Let $\mathbb G$ be a $d$-dimensional transitive graph and let $H$ be a finite hypergraph with boundary. Then for every $m \geq 0$, there exists a constant $c=c(\mathbb G,H,m)$ such that
\begin{equation*}
\log_2\mathbb W^H_x(n,n+m) \leq -\hat \eta_d(H) \, n + |E|^2\log_2 n + c
\end{equation*}
for all $x=(x_u)_{u\in \partial V} \in \mathbb V^{\partial V}$ and all $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for all $u,v \in \partial V$.
\end{lem}
It will be useful for applications in \cref{Sec:technical} to prove a more general result. A graph $\mathbb G$ is said to be \textbf{$d$-Ahlfors regular} if there exists a positive constant $c$ such that $c^{-1} r^d \leq |B(x,r)| \leq cr^d$ for every $r\geq 1$ and every $x \in \mathbb V$ (in which case we say $\mathbb G$ is $d$-Ahlfors regular with constant $c$).
Given $\alpha>0$ and a finite hypergraph with boundary $H$, we define
\[
\eta_{d,\alpha}(H) = (d-2\alpha)\Delta - d|E| - (d-2\alpha)|V_\circ|,
\]
where we recall that $\Delta =\sum_{e\in E}\deg(e) = \sum_{v\in V} \deg(v)$, and define $\hat \eta_{d,\alpha}(H) = \min \{\eta_{d,\alpha}(H') : \text{ $H'$ is a coarsening of $H$}\}$.
Given a graph $\mathbb G$, a finite hypergraph with boundary $H=(\partial V, V_\circ, E)$, and points $(x_v)_{v\in \partial V}$, $(\xi_e)_{e\in E}$ we also define
\[
W_\alpha^H(x,\xi) = \prod_{u \in \partial V} \langle x_u, \{\xi_e: e \perp u\} \rangle^{-(d-2\alpha)} \prod_{u \in V_\circ} \langle \{\xi_e: e \perp u\} \rangle^{-(d-2\alpha)}
\]
and, for each $N> n$,
\[
\mathbb{W}^{H,\alpha}_{x}(n,N) = \sum_{\xi\in \Xi_x(n,N)} W_\alpha^H(x,\xi).
\]
Note that $\eta_d=\eta_{d,2}$ and $\mathbb W_x^H=\mathbb W_x^{H,2}$, so that \cref{lem:firstmoment} follows as a special case of the following lemma.
\begin{lem}[A single distant scale, generalised]
\label{lem:firstmomentgeneral}
Let $\mathbb G$ be a $d$-Ahlfors regular graph with constant $c'$, let $H$ be a finite hypergraph with boundary, and let $\alpha \in \mathbb R$ be such that $d\geq 2\alpha$. Then for every $m \geq 0$, there exists a constant $c=c(c',H,\alpha,d,m)$ such that
\begin{equation*}
\log_2\mathbb W^{H,\alpha}_{x}(n,n+m) \leq -\hat \eta_{d,\alpha}(H) \, n + |E|^2\log_2 n + c
\end{equation*}
for all $x=(x_u)_{u\in \partial V} \in \mathbb V^{\partial V}$ and all $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for all $u,v \in \partial V$.
\end{lem}
Before proving this lemma, we will require a quick detour to analyze a relevant optimization problem.
\subsubsection*{Optimization on the ultrametric polytope}
\label{subsec:ultrametric}
Recall that a (semi)metric space $(X,d)$ is an \textbf{ultrametric} space if $d(x,y) \leq \max \{d(x,z),d(z,y)\}$ for every three points $x,y,z\in X$. For each finite set $A$, the \textbf{ultrametric polytope} on $A$ is defined to be
\[ \mathcal{U}_A = \left\{(x_{a,b})_{a,b \in A} \in [0,1]^{A^2} :
\begin{array}{l}
x_{a,a}=0 \text{ for all $a \in A$},\, x_{a,b}=x_{b,a} \text{ for all $a,b\in A$},
\vspace{0.25em}
\\
\text{and } x_{a,b} \leq \max\left\{x_{a,c},x_{c,b}\right\} \text{ for all $a,b,c \in A$}
\end{array}
\right\}, \]
which is a closed convex subset of $\mathbb R^{A^2}$.
We consider $\mathcal U_A$ to be the set of all ultrametrics on $A$ with distances bounded by $1$.
We write $\mathcal P(A^2)$ for the set of subsets of $A^2$.
\begin{lemma}\label{lem:ultrametric1}
Let $A$ be a finite non-empty set, and let $F:\mathbb R^{A^2}\to \mathbb R$ be of the form
\[
F(x) = \sum_{k=1}^K c_k \min\{x_{a,b} : (a,b) \in W_k\},
\]
where $K<\infty$, $c_1,\ldots,c_K \in \mathbb R$, and $W_1,\ldots, W_K \in \mathcal P(A^2)$. Then the maximum of $F$ on $\mathcal U_A$ is obtained by an ultrametric for which all distances are either zero or one. That is,
\[
\max\{F(x) : x \in \mathcal U_A\} = \max\left\{F(x) : x \in \mathcal U_A,\, x_{a,b} \in \{0,1\} \text{ for all $a,b\in A$}\right\}.
\]
\end{lemma}
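For instance (a trivial illustrative case), if $A = \{a,b\}$ then $\mathcal U_A$ is determined by the single coordinate $x_{a,b} = x_{b,a} \in [0,1]$, every $F$ of the above form is an affine function of this coordinate, and so the maximum is attained at $x_{a,b} \in \{0,1\}$.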
\begin{proof}
We prove the claim by induction on $|A|$. The case $|A|=1$ is trivial. Suppose that the claim holds for all sets with cardinality less than that of $A$.
We may assume that $(a,a) \notin W_k$ for every $1\leq k \leq K$ and $a \in A$, since if $(a,a)\in W_k$ for some $1\leq k \leq K$ then the term $c_k \min \{x_{a,b} : (a,b) \in W_k\}$ is identically zero on $\mathcal U_A$.
We write $\mathbf{1}$ for the vector
\[\mathbf{1}_{(a,b)}=\mathbbm{1}(a\neq b).\]
It is easily verified that
\[F(\lambda x) = \lambda F(x)
\quad \text{ and } \quad F(x+\alpha\mathbf{1}) = F(x) + \alpha F(\mathbf{1}) \]
for every $x\in \mathbb R^{A^2}$, every $\lambda\geq 0$, and every $\alpha \in \mathbb R$.
Suppose $y\in\mathcal U_A$ is such that $F(y) = \max_{x\in \mathcal U_A} F(x)$.
We may assume that $F(y)>F(\mathbf{1})$ and that $F(y)>F(0)=0$, since otherwise the claim is trivial.
Let $m = \min \{ y_{a,b} : a,b \in A, a\neq b\}$, which is less than one by assumption.
We have that
\[\frac{y}{1-m}- \frac{m}{1-m}\mathbf{1} \in \mathcal U_A\]
and
\[F\left(\frac{y}{1-m}- \frac{m}{1-m}\mathbf{1}\right) = \frac{F(y)}{1-m} - \frac{mF(\mathbf{1})}{1-m} = F(y) + \frac{m}{1-m}(F(y)-F(\mathbf{1})), \]
and so we must have $m=0$ since $y$ maximizes $F$.
Define an equivalence relation $\bowtie$ on $A$ by letting $a$ and $b$ be related if and only if $y_{a,b}=0$. We write $\hat a$ for the equivalence class of $a$ under $\bowtie$. Let $C$ be the set of equivalence classes of $\bowtie$, and let $\phi: \mathcal U_C \to \mathcal U_A$ be the function defined by
\[\phi(x)_{a,b} = x_{\hat a, \hat b}\]
for every $x\in \mathcal U_C$.
For each $1\leq k \leq K$, let $\hat W_k$ be the set of pairs $\hat a, \hat b \in C$ such that $(a,b) \in W_k$ for some $a$ in the equivalence class $\hat a$ and $b$ in the equivalence class $\hat b$. Let
$\hat F : \mathcal U_C \to \mathbb R$ be defined by
\[\hat F(x) = \sum_{k=1}^K c_k \min \{x_{\hat a, \hat b} : (\hat a, \hat b) \in \hat W_k \}. \]
We have that $\hat F = F \circ \phi$, and that $y=\phi(\hat y)$, where $\hat y \in \mathcal U_C$ is defined by $\hat y_{\hat a, \hat b} = y_{a,b}$ (this is well-defined since $y$ is an ultrametric). Since $m=0$, some equivalence class of $\bowtie$ contains at least two elements, so that $|C|<|A|$. Thus, since $y$ maximizes $F$, we deduce from the induction hypothesis that
\begin{align*}\max\{F(x): x \in \mathcal U_A\} &= \max\{\hat F(x) : x \in \mathcal U_C\}\\ &= \max\{\hat F(x) : x \in \mathcal U_C,\, x_{\hat a, \hat b} \in \{0,1\} \text{ for all $\hat a, \hat b \in C$}\},\end{align*}
completing the proof.
\end{proof}
We will also require the following generalisation of \cref{lem:ultrametric1}.
For each finite collection of disjoint finite sets $\{A_i\}_{i\in I}$ with union $A = \bigcup_{i\in I} A_i$, we define
\[ \mathcal{U}_{\{A_i\}_{i\in I}} = \{ x \in \mathcal U_{A} : x_{a,b}=1 \text{ for every distinct $i,j \in I$ and every $a \in A_i$ and $b \in A_j$}\}. \]
\begin{lemma}\label{lem:ultrametric2}
Let $\{A_i\}_{i\in I}$ be a finite collection of disjoint, finite, non-empty sets with union $A = \bigcup_{i\in I}A_i$, and let $F:\mathbb R^{A^2}\to \mathbb R$ be of the form
\[
F(x) = \sum_{k=1}^K c_k \min\{x_{a,b} : (a,b) \in W_k\},
\]
where $K<\infty$, $c_1,\ldots,c_K \in \mathbb R$, and $W_1,\ldots, W_K \in \mathcal P(A^2)$. Then the maximum of $F$ on $\mathcal U_{\{A_i\}_{i\in I}}$ is obtained by an ultrametric for which all distances are either zero or one. That is,
\[
\max\{F(x) : x \in \mathcal U_{\{A_i\}_{i\in I}}\} = \max\left\{F(x) : x \in \mathcal U_{\{A_i\}_{i\in I}},\, x_{a,b} \in \{0,1\} \text{ for all $a,b\in A$}\right\}.
\]
\end{lemma}
\begin{proof}
We prove the claim by fixing the index set $I$ and inducting on $|A|$. The case $|A|=|I|$ is trivial. Suppose that the claim holds for all collections of finite disjoint sets indexed by $I$ with total cardinality less than that of $A$.
We may assume that $(a,a) \notin W_k$ for every $1\leq k \leq K$ and $a \in A$, since if $(a,a)\in W_k$ for some $1\leq k \leq K$ then the term $c_k \min \{x_{a,b} : (a,b) \in W_k\}$ is identically zero on $\mathcal U_{\{A_i\}_{i\in I}}$. Furthermore, we may assume that for each $1 \leq k \leq K$ the set $W_k$ contains a pair $(a,b)$ with $a$ and $b$ in the same set $A_i$, since otherwise the term $c_k \min \{x_{a,b} : (a,b) \in W_k\}$ is equal to the constant $c_k$ on $\mathcal U_{\{A_i\}_{i\in I}}$.
We write $\mathbf{1}$ and $\mathbf{i}$ for the vectors
\[\mathbf{1}_{a,b} = \mathbbm{1}(a\neq b)\]
and
\[\mathbf{i}_{a,b} = \mathbbm{1}(\text{$a\neq b$, and $a,b\in A_i$ for some $i \in I$}).\]
It is easily verified that
\[F(\lambda x) = \lambda F(x)
\quad \text{ and } \quad F(x+\alpha\mathbf{i}) = F(x) + \alpha F(\mathbf{1}) \]
for every $x\in \mathcal U_{\{A_i\}_{i\in I}}$, every $\lambda\geq 0$, and every $\alpha \in \mathbb R$ such that $x + \alpha \mathbf{i} \in \mathcal U_{\{A_i\}_{i\in I}} $.
The rest of the proof is similar to that of \cref{lem:ultrametric1}. \qedhere
\end{proof}
\subsubsection*{Back to the uniform spanning forest}
We now return to the proofs of \cref{prop:nonubiquity} and \cref{lem:firstmomentgeneral}.
\begin{proof}[Proof of \cref{lem:firstmomentgeneral}]
In this proof, implicit constants will be functions of $c',H,\alpha,d$ and $m$. The case that $E = \emptyset$ is trivial (by the assumption that $d \geq 2 \alpha$), so we may assume that $|E|\geq 1$.
Write $\Xi=\Xi_x(n,n+m)$.
First, observe that
\[
\langle x_u, \{\xi_e: e \perp u\} \rangle \asymp 2^{n} \langle \{\xi_e: e \perp u\} \rangle \]
for every $\xi \in \Xi$ and $u \in \partial V$, and hence that
\begin{align*}
\mathbb W^{H,\alpha}_{x}(n,n+m) &= \sum_{\xi\in \Xi} \prod_{u \in \partial V} \langle x_u, \{\xi_e: e \perp u\} \rangle^{-(d-2\alpha)} \prod_{u \in V_\circ} \langle \{\xi_e: e \perp u\} \rangle^{-(d-2\alpha)} \\
&\preceq
2^{-(d-2\alpha)|\partial V|n} \sum_{\xi \in \Xi} \prod_{u \in V} \langle \{\xi_e: e \perp u\} \rangle^{-(d-2\alpha)}.
\end{align*}
Let $L$ be the set of symmetric functions $\ell:E^2 \to \{0,\ldots,n\}$ such that $\ell(e,e)=0$ for every $e\in E$.
For each $\ell \in L$, let
\[
\Xi_\ell = \left\{\xi \in \Xi :
\begin{array}{l}
2^{\ell(e,e')} \leq \langle \xi_e \xi_{e'} \rangle \leq 2^{\ell(e,e')+m+1} \text{ for all $e,e' \in E$}
\end{array}
\right\},
\]
so that $\Xi = \bigcup_{\ell \in L} \Xi_\ell$.
For each $\ell$ in $L$, let
\[
\hat \ell(e,e') = \min \{\ell(e,e')\}\cup\left\{\max \{\ell(e,e_1),\ldots,\ell(e_k,e')\}: k\geq 1 \text{ and } e_1,\ldots,e_k \in E \right\}.
\]
In other words, $\hat \ell$ is the largest ultrametric on $E$ that is dominated by $\ell$.
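For example, if $E = \{e_1,e_2,e_3\}$ with $\ell(e_1,e_2) = \ell(e_2,e_3) = 1$ and $\ell(e_1,e_3) = 5$, then $\hat\ell(e_1,e_3) = 1$, realized by the chain $e_1,e_2,e_3$, while $\hat \ell$ agrees with $\ell$ on the remaining pairs.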
Observe that for every $\ell \in L$, every $\xi \in \Xi_\ell$, and every $e,e',e'' \in E$, we have that
\begin{align*}
\log_2 \langle \xi_e \xi_{e'} \rangle &\leq \log_2\left[ \langle \xi_e \xi_{e''} \rangle + \langle \xi_{e''} \xi_{e'} \rangle\right]
\leq \log_2\max\{\langle \xi_e \xi_{e''} \rangle,\, \langle \xi_{e''} \xi_{e'} \rangle\} +1\\
& \leq \max\{\ell(e,e''),\, \ell(e'',e')\}+2m +3,
\end{align*}
and hence, by induction, that
\[
\log_2 \langle \xi_e \xi_{e'} \rangle \leq \hat \ell(e,e')+(2m+3)|E| \approx \hat \ell(e,e').
\]
Let $e_1,\ldots,e_{|E|}$ be an enumeration of $E$.
For every $\ell \in L$, every $1 \leq j < i \leq |E|$ and every $\xi \in \Xi_\ell$ we have that
\[
\xi_{e_i} \in B\left(\xi_{e_j},\, 2^{\hat\ell(e_i,e_j)+(2m+3)|E|}\right) \text{ and } \left|B\left(\xi_{e_j},\, 2^{\hat\ell(e_i,e_j)+(2m+3)|E|}\right)\right| \preceq 2^{d\hat\ell(e_i,e_j)}.
\]
By considering the number of choices we have for $\xi_{e_i}$ at each step given our previous choices, it follows that
\begin{align}
\log_2 |\Xi_\ell| \lesssim
dn+ d\sum_{i=2}^{|E|}\min\left\{\hat \ell(e_i,e_j) : j<i\right\}.
\label{eq:Lambdaell}
\end{align}
Now, for every $\xi \in \Xi_\ell$, we have that
\begin{align}
\log_2 \prod_{u \in V} \langle \{\xi_e: e \perp u\} \rangle^{-(d-2\alpha)}
&\approx -(d-2\alpha)\sum_{u \in V}\sum_{i=2}^{|E|}\mathbbm{1}(e_i \perp u) \min \left\{\ell(e_i,e_j) : j<i,\, e_j \perp u\right\}
\nonumber
\\
&=
-(d-2\alpha)\sum_{i=2}^{|E|}\sum_{u \perp e_i}\min \left\{\ell(e_i,e_j) : j<i,\, e_j \perp u\right\}.
\label{eq:Lambdaellsfriend}
\end{align}
Thus, from \eqref{eq:Lambdaell} and \eqref{eq:Lambdaellsfriend} we have that
\begin{multline}
\log_2 \sum_{\xi \in \Xi_\ell} \prod_{u \in V} \langle \{\xi_e: e \perp u \} \rangle^{-(d-2\alpha)}\\ \lesssim
dn+ \sum_{i=2}^{|E|}\left[ d\min\{\hat \ell(e_i,e_j) : j<i\} -(d-2\alpha) \sum_{u \perp e_i}\min \{\ell(e_i,e_j) : j<i,\, e_j \perp u\}\right]. \label{eq:Qfirstmoment1}
\end{multline}
Let $Q: L \to \mathbb R$ be defined to be the expression on the right hand side of \eqref{eq:Qfirstmoment1}.
We clearly have that $Q(\hat \ell) \geq Q(\ell)$ for every $\ell \in L$, and so there
exists $\ell\in L$ maximizing $Q$ such that $\ell$ is an ultrametric. It follows from \cref{lem:ultrametric1} (applied to the normalized ultrametric $\ell/n$) that there exists $\ell\in L$ maximizing $Q$ such that $\ell$ is an ultrametric and every value of $\ell$ is in $\{0,n\}$.
Fix one such $\ell$, and define an equivalence relation $\bowtie$ on $E$ by letting $e \bowtie e'$ if and only if $\ell(e,e')=0$, which is an equivalence relation since $\ell$ is an ultrametric.
Observe that, for every $2 \leq i \leq |E|$,
\[
\min \{\ell(e_i,e_j) : j < i\} = \mathbbm{1}[\text{$e_j$ is not in the equivalence class of $e_i$ for any $j<i$}]\, n,
\]
and hence that
\[
n + \sum_{i=2}^{|E|} \min \{\ell(e_i,e_j) : j < i\} = |\{\text{equivalence classes of $\bowtie$}\}| \, n.
\]
Similarly, we have that, for every vertex $u$ of $H$,
\[
\sum_{i=2}^{|E|} \mathbbm{1}(e_i \perp u) \min \{\ell(e_i,e_j) : j < i,\, e_j \perp u\} = \left(|\{\text{equivalence classes of $\bowtie$ incident to $u$}\}|-1\right)\,n,
\]
where we say that an equivalence class of $\bowtie$ is incident to $u$ if it contains an edge that is incident to $u$. Thus, we have that
\begin{multline}
\label{eq:Qequiv}
Q(\ell) =
d|\{\text{equivalence classes of $\bowtie$}\}|\, n
\\-(d-2\alpha)\sum_{u \in V} (|\{\text{equivalence classes of $\bowtie$ incident to $u$}\}|-1)\, n.
\end{multline}
Let $H'=\coarse{H}{\bowtie}$ be the coarsening of $H$ associated to $\bowtie$ as in \cref{subsec:optimalcoarsenings}.
We can rewrite \eqref{eq:Qequiv} as
\begin{align*}
Q(\ell) &= d|E(H')|\, n -(d-2\alpha)\Delta(H')\, n+(d-2\alpha)|V(H)|\, n
= -\eta_{d,\alpha}(H')\, n+(d-2\alpha)|\partial V| \, n,
\end{align*}
where we have used that $\sum_{u\in V}|\{\text{equivalence classes of $\bowtie$ incident to $u$}\}| = \Delta(H')$ and that $|V(H)|=|V_\circ|+|\partial V|$.
Since $|L| \leq (n+1)^{|E|^2}$, we deduce that
\begin{multline*}
\log_2\mathbb W^{H,\alpha}_{x}(n,n+m) \lesssim -(d-2\alpha)|\partial V|\, n + \log_2 \sum_{\ell \in L} \exp_2 Q(\ell)
\\
\leq
\max_{\ell\in L}Q(\ell) - (d-2\alpha)|\partial V|\, n + \log_2|L| \lesssim - \hat \eta_{d,\alpha}(H)\, n + |E|^2 \log_2 n
\end{multline*}
as claimed. \qedhere
\end{proof}
Next, we consider the case that the points $x_v$ are roughly equally spaced and we are summing over points $\xi$ that are on the same scale as the spacing of the $x_v$.
\begin{lem}[The close scale]
\label{lem:firstmomentclose}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$ and let $H$ be a finite hypergraph with boundary. Let $m_1,m_2\geq 0$. Then there exists a constant $c=c(\mathbb G,H,m_1,m_2)$ such that
\begin{equation*}\log_2\mathbb W^H_x(0,n+m_2) \leq -\hat \eta_d(H)\, n + |E \cup \partial V|^2\log_2 n + c \end{equation*}
for every $n \geq 1$ and every $x=(x_u)_{u\in \partial V} \in \mathbb V^{\partial V}$ such that $2^{n-m_1} \leq \langle x_u x_v \rangle \leq 2^n$ for every distinct $u,v\in \partial V$.
\end{lem}
\begin{proof}
We may assume that $E \neq \emptyset$, the case $E=\emptyset$ being trivial.
For notational convenience, we will write $\xi_v=x_v$, and consider $v \perp v$ for every vertex $v\in \partial V$. Write $\Xi=\Xi_x(0,n+m_2)$, and observe that for each $\xi \in \Xi$ and $e \in E$ there exists at most one $v\in \partial V$ for which $\log_2 \langle \xi_e \xi_v\rangle < n-m_1-1$. To account for these degrees of freedom, we define $\Phi$ to be the set of functions $\phi:E \cup \partial V \to \partial V \cup \{\star\}$ such that $\phi(v)=v$ for every $v\in \partial V$. For each $\phi \in \Phi$,
let $L_\phi$ be the set of symmetric functions $\ell: (E \cup \partial V)^2 \to \{0,\ldots,n\}$ such that $\ell(e,e)=0$ for every $e\in E \cup \partial V$ and $\ell(e,e')=n$ for every $e,e' \in E \cup \partial V$ such that $\phi(e)\neq \phi(e')$.
For each $\phi \in \Phi$ and $\ell \in L_\phi$,
let
\begin{multline*}\hspace{-0.25cm}\Xi_{\phi,\ell} = \left\{\xi \in \Xi :
\begin{array}{l}
\ell(e,e')- m_1 - 1 \leq \log_2 \langle \xi_e \xi_{e'} \rangle \leq \ell(e,e') + m_2+1 \text{ for every $e,e' \in E \cup \partial V$}
\end{array}
\hspace{-0.15cm} \right\},\end{multline*}
and observe that
$\Xi = \bigcup_{\phi\in \Phi} \bigcup_{\ell \in L_\phi} \Xi_{\phi,\ell}$.
Now, for each $\phi \in \Phi$ and $\ell \in L_\phi$, let $\hat \ell$ be the largest ultrametric on $E \cup \partial V$ that is dominated by $\ell$. Observe that $\hat \ell \in L_\phi$, and that, as in the previous lemma, we have that
\[\log_2 \langle \xi_e \xi_{e'} \rangle \lesssim \hat \ell(e,e')\]
for every $e,e' \in E \cup \partial V$.
Let $e_1,\ldots,e_{|E|}$ be an enumeration of $E$, and let $e_0,e_{-1},\ldots,e_{-|\partial V|+1}$ be an enumeration of $\partial V$. As in the proof of the previous lemma, we have the volume estimate
\begin{align}\log_2 |\Xi_{\phi,\ell}| \lesssim
d\sum_{i=1}^{|E|}\min\{\hat \ell(e_i,e_j) : j<i\}.
\label{eq:Lambdaellclose}\end{align}
Now, for every $\xi \in \Xi_{\phi,\ell}$, we have that, similarly to the previous proof,
\begin{align*}
\log_2 W^H(x,\xi)
&\approx
-(d-4)\sum_{i=1}^{|E|}\sum_{u \perp e_i}\min \{\ell(e_i,e_j) : j<i,\, e_j \perp u\}.
\end{align*}
(Recall that we are considering $u\perp u$ for each $u \in \partial V$.)
Thus, we have
\begin{multline}\log_2 \sum_{\xi \in \Xi_{\phi,\ell}} W^H(x,\xi)\\ \lesssim
\sum_{i=1}^{|E|}\left[ d\min\{\hat \ell(e_i,e_j) : j<i\} -(d-4) \sum_{u \perp e_i}\min \{\ell(e_i,e_j) : j<i,\, e_j \perp u\}\right]. \label{eq:Qfirstmoment}
\end{multline}
Let $Q: L_\phi \to \mathbb R$ be defined to be the expression on the right hand side of \eqref{eq:Qfirstmoment}.
Similarly to the previous proof but applying \cref{lem:ultrametric2} instead of \cref{lem:ultrametric1}, there is an $\ell\in L_\phi$ maximizing $Q$ such that $\ell$ is an ultrametric and $\ell(e,e') \in \{0,n\}$ for all $e,e' \in E \cup \partial V$.
Fix one such $\ell$, and define an equivalence relation $\bowtie$ on $E \cup \partial V$ by letting $e \bowtie e'$ if and only if $\ell(e,e')=0$, which is an equivalence relation since $\ell$ is an ultrametric.
Similarly to the proof of the previous lemma, we can compute that
\begin{multline*}Q(\ell) =
dn\left|\{\text{equivalence classes of $\bowtie$ that are contained in $E$}\}\right|
\\
-(d-4)n\sum_{u \in \partial V} \left|\{\text{equivalence classes of $\bowtie$ incident to $u$ that do not contain $u$}\}\right|
\\
-(d-4)n \sum_{u \in V_\circ} \left(\left|\{\text{equivalence classes of $\bowtie$ incident to $u$}\}\right| -1\right).
\end{multline*}
Since $d>4$ and each equivalence class of $\bowtie$ can contain at most one vertex of $\partial V$, we see that $Q$ increases if we remove a vertex $v\in \partial V$ from its equivalence class. Since $\ell$ was chosen to maximize $Q$, we deduce that we may take the equivalence class of $v$ under $\bowtie$ to be a singleton for every $v\in \partial V$. Thus, there exists an ultrametric $\ell \in L_\phi$ maximizing $Q$ such that $\ell(e,e')\in \{0,n\}$ for every $e,e' \in E$ and $\ell(e,v)=n$ for every $e\in E$ and $v\in \partial V$. Letting $\bowtie'$ be the equivalence relation on $E$ (rather than $E \cup \partial V$) corresponding to such an optimal $\ell$, we have
\begin{multline}
\label{eq:Qell2}
Q(\ell) =
dn\left|\{\text{equivalence classes of $\bowtie'$}\}\right|
\\
-(d-4)n\sum_{u \in \partial V} \left|\{\text{equivalence classes of $\bowtie'$ incident to $u$}\}\right|
\\
-(d-4)n \sum_{u \in V_\circ} \left(\left|\{\text{equivalence classes of $\bowtie'$ incident to $u$}\}\right| -1\right).
\end{multline}
The rest of the proof is similar to the proof of \cref{lem:firstmomentgeneral}.
\qedhere
\end{proof}
We can now bootstrap from the single scale estimates \cref{lem:firstmoment,lem:firstmomentclose} to a multi-scale estimate. Given a hypergraph with boundary $H=(\partial V, V_\circ, E)$ and a set of edges $E'\subseteq E$, we write
$V_\circ(E')=\bigcup_{e\in E'}\{v\in V_\circ : v \perp e\}$ and define
$H(E') = (\partial V, V_\circ(E'), E')$.
\begin{lem}[Induction estimate]
\label{lem:inductionestimate}
Let $\mathbb G$ be a $d$-dimensional transitive graph and let $H$ be a finite hypergraph with boundary. Then there exists a constant $c=c(\mathbb G,H)$ such that
\begin{multline*}
\log_2\left[\mathbb W^H_x(0,N+|E|+2) - \mathbb W^H_x(0,N)\right] \leq
\\
\max_{E' \subsetneq E}
\left\{
\log_2 \mathbb W^{H(E')}_x(0,N+|E|+2)
- \left[\hat \eta_d(H) -\hat \eta_d(H(E')) \right]\, N + |E\setminus E'|^2\log_2 N
\right\} + c
\end{multline*}
for every $x=(x_u)_{u\in \partial V} \in \mathbb V^{\partial V}$ and every $N$ such that $\langle x_u x_v \rangle \leq 2^{N-1}$ for all $u,v \in \partial V$.
\end{lem}
Note that when $|E|\geq 1$ we must consider the term $E'=\emptyset$ when taking the maximum in this lemma, which gives $-\hat \eta_d(H) N + |E|^2 \log_2 N$.
\begin{proof}
The claim is trivial in the case $E=\emptyset$, so suppose that $|E|\geq 1$.
Let
$\Xi = \Xi_x(0,N+|E|+2) \setminus \Xi_x(0,N)$
so that
\[ \mathbb W^H_x(0,N+|E|+2) - \mathbb W^H_x(0,N) \leq \sum_{\xi \in \Xi} W^H(x,\xi).\]
For each $E'\subsetneq E$ and every $1 \leq m \leq |E|+2$, let
\[ \Xi^{E', m} = \left(\Lambda_x(0,N+m-1)\right)^{E'} \times \left(\Lambda_x(N+m,N+|E|+2)\right)^{E \setminus E'}.
\]
Observe that if $\xi \in \Xi$
then, by the Pigeonhole Principle, there must exist $1 \leq m \leq |E|+2$ such that $\xi_e$ is not in $\Lambda_x(N+m-1,N+m)$ for any $e \in E$, and we deduce that
\[
\Xi = \bigcup \left\{ \Xi^{E',m} : E'\subsetneq E,\, 1\leq m \leq |E|+2 \right\}.
\]
Thus, to prove the lemma it suffices to show that
\begin{equation}
\label{eq:inductionestimate1}
\log_2 \sum_{\xi \in \Xi^{E',m}} W^H(x,\xi) \lesssim
\log_2 \mathbb W^{H(E')}_x(0,N+|E|+2) - \left(\hat \eta_d(H) -\hat \eta_d(H(E')) \right)\, N + |E \setminus E'|^2\log_2 N
\end{equation}
whenever $1\leq m \leq |E|+2$ and $E' \subsetneq E$. If $E'=\emptyset$ then this follows immediately from \cref{lem:firstmoment}, so we may suppose not.
\medskip
To this end, fix $E' \subsetneq E$ with $|E'|\geq 1$ and write $H'=H(E') = (\partial V,V_\circ(E'),E') = (\partial V, V_\circ',E')$. Choose some $v_0 \in \partial V$ arbitrarily, and write $x_v = x_{v_0}$ for every $v \in V_\circ'$. Then for every $\xi \in \Xi^{E',m}$, using the fact that we have the empty scale $\Lambda_x(N+m-1,N+m)$ separating $\{\xi_e : e \in E'\}$ from $\{\xi_e : e \notin E'\}$, we have that
\[
\left\langle \{x_u\} \cup \{\xi_e: e \perp u\} \right\rangle
\asymp \left\langle \{x_u\}\cup \{\xi_e: e\in E',\, e \perp u\} \right\rangle \left\langle \{x_u\}\cup\{\xi_e: e \notin E',\, e \perp u\} \right\rangle
\]
for every vertex $u\in \partial V$,
\[
\left\langle \{\xi_e: e \perp u\} \right\rangle \asymp
\left\langle \{\xi_e: e\in E',\, e \perp u\} \right\rangle \left\langle \{x_u\}\cup \{\xi_e: e \notin E',\, e \perp u\} \right\rangle
\]
for every vertex $u \in V'_\circ$, and that, trivially,
\[
\left\langle \{\xi_e: e \perp u\} \right\rangle =
\left\langle \{\xi_e: e \notin E',\, e \perp u\} \right\rangle
\]
for every vertex $u \in V_\circ \setminus V'_\circ$. Define a hypergraph with boundary $H'' =(\partial V'', V_\circ'', E'',\perp'')$ by setting \begin{multline*}\text{$\partial V'' = \partial V \cup V_\circ'$,\qquad $V_\circ'' = V_\circ \setminus V_\circ',$\qquad $V''= \partial V'' \cup V_\circ''=V$,\qquad $E'' = E \setminus E'$,}\\ \text{and
$\perp''=\perp \cap\, (V'' \times E'')$.}\end{multline*}
For each $\xi \in \Xi^{E',m}$, let $\xi'=(\xi'_e)_{e\in E'}=(\xi_e)_{e\in E'}$ and $ \xi''=( \xi''_e)_{e\in E''} = (\xi_e)_{e\in E''}$.
Then the above displays imply that
\[ W^H(x,\xi) \asymp W^{H'}\left(x,\xi'\right) \cdot W^{H''}\big(x, \xi''\big) \]
for every $\xi \in \Xi^{E',m}$. Thus, summing over $\xi' \in (\Lambda_x(0,N+m-1))^{E'}$ and $ \xi'' \in (\Lambda_x(N+m,N+|E|+2))^{E''}$, we obtain that
\begin{align}
\log_2 \sum_{\xi \in \Xi^{E',m}} W^H(x,\xi) &\lesssim \log_2 \mathbb W^{H'}_x(0,N+m-1) + \log_2 \mathbb W^{H''}_x(N+m,N+|E|+2)
\nonumber
\\
&\lesssim \log_2 \mathbb W^{H'}_x(0,N+|E|+2) - \hat \eta_d(H'') N + |E''|^2 \log_2 N,
\label{eq:inductionestimate2}
\end{align}
where the second inequality follows from \cref{lem:firstmoment}.
To deduce \eqref{eq:inductionestimate1} from \eqref{eq:inductionestimate2}, it suffices to show that
\begin{equation}
\label{eq:HHH}
\hat \eta_d(H) \leq \hat \eta_d(H') + \hat \eta_d(H'').
\end{equation}
To this end, let $\bowtie'$ be an equivalence relation on $E'$ and let $\bowtie''$ be an equivalence relation on $E''$. We can define an equivalence relation $\bowtie$ on $E$ by setting $e \bowtie e'$ if and only if either $e,e' \in E'$ and $e \bowtie' e'$ or $e,e' \in E''$ and $e \, \bowtie'' \,e'$.
We easily verify that
$\Delta(\coarse{H}{\bowtie})$ $=$ $\Delta(\coarse{H'}{\bowtie'})$ $+$ $\Delta(\coarse{H''}{\bowtie''})$,
$|V_\circ(\coarse{H}{\bowtie})| = |V_\circ(\coarse{H'}{\bowtie'})| + |V_\circ(\coarse{H''}{\bowtie''})|$, and
$|E(\coarse{H}{\bowtie})| = |E(\coarse{H'}{\bowtie'})|$ $+ |E(\coarse{H''}{\bowtie''})|$, so that
\begin{align*}
\eta_d(\coarse{H}{\bowtie}) = \eta_d(\coarse{H'}{\bowtie'}) + \eta_d(\coarse{H''}{\bowtie''}),
\end{align*}
and the inequality \eqref{eq:HHH} follows by taking the minimum over $\bowtie'$ and $\bowtie''$. \qedhere
\end{proof}
We now use \cref{lem:inductionestimate} and \cref{lem:firstmomentclose} to perform an inductive analysis of $\mathbb W$. Although we are mostly interested in the non-buoyant case, we begin by controlling the buoyant case.
\begin{lem}[Many scales, buoyant case]\label{lem:firstmoment2}
Let $H$ be a finite hypergraph with boundary. Let $m \geq 1$, and suppose that $x=(x_u)_{u\in \partial V} \in \mathbb V^{\partial V}$ is such that $2^{n-m} \leq \langle x_u x_v \rangle \leq 2^{n-1}$ for all distinct $u,v \in \partial V$.
If every subhypergraph of $H$ has a $d$-buoyant coarsening,
then there exists a constant $c=c(\mathbb G,H,m)$ such that
\[\log_2\mathbb W^H_x(0,N) \leq -{\hat \eta}_d(H) \, N + (|E \cup \partial V|^2+1)\log_2 N + c\]
for all $N\geq n$.
\end{lem}
\begin{proof}
We induct on the number of edges in $H$. The claim is trivial when $E = \emptyset$.
Suppose that $|E|\geq 1$ and that
the claim holds for all finite hypergraphs with boundary that have fewer edges than $H$.
By assumption, $\hat \eta_d(H') \leq 0$ for all subhypergraphs $H'$ of $H$. Thus, it follows from the induction hypothesis that
\[\log_2\mathbb W^{H'}_x(0,N+|E|+2) \lesssim -{\hat \eta}_d(H') \, N + (|E' \cup \partial V'|^2+1)\log_2 N \]
for each proper subhypergraph $H'$ of $H$, and hence that
\begin{multline*}
\log_2 \mathbb W^{H'}_x(0,N+|E|+2)
- \left[\hat \eta_d(H) -\hat \eta_d(H') \right] N + |E\setminus E'|^2\log_2 N \\
\lesssim - \hat \eta_d(H) \, N + (|E' \cup \partial V|^2+1 + |E\setminus E'|^2) \log_2 N.
\end{multline*}
(Note that the implicit constants depending on $H'$ from the induction hypothesis are bounded by a constant depending on $H$ since $H$ has only finitely many subhypergraphs.)
Observe that whenever $E' \subsetneq E$ we have that
\[ |E' \cup \partial V|^2+1 + |E\setminus E'|^2 \leq |E\cup \partial V|^2,\]
and so we deduce that
\begin{multline*}
\log_2 \mathbb W^{H'}_x(0,N+|E|+2)
- \left[\hat \eta_d(H) -\hat \eta_d(H') \right] N + |E\setminus E'|^2\log_2 N
\\ \lesssim - \hat \eta_d(H) \, N + |E \cup \partial V|^2 \log_2 N
\end{multline*}
for every proper subhypergraph $H'$ of $H$. Thus, we have that
\begin{align*}
\log_2 \left[ \mathbb W^H_x(0,N+1) - \mathbb W^H_x(0,N) \right]
&\leq \log_2 \left[ \mathbb W^H_x(0,N+|E|+2) - \mathbb W^H_x(0,N) \right]
\\
&\lesssim - \hat \eta_d(H) \, N + |E \cup \partial V|^2 \log_2 N
\end{align*}
for all $N \geq n$, where we applied \cref{lem:inductionestimate} in the second inequality.
Summing from $n$ to $N$ we deduce that
\begin{align*}
\mathbb W^H_x(0,N) - \mathbb W^H_x(0,n) &\preceq \sum_{i=n}^N \exp_2\left[- \hat \eta_d(H)\,i + |E\cup \partial V|^2\log_2 i\right]\\
&\preceq \exp_2\left[- \hat \eta_d(H)\,N + (|E\cup \partial V|^2+1)\log_2 N \right].
\end{align*}
Using \cref{lem:firstmomentclose} to control the term $\mathbb W^H_x(0,n)$ completes the induction.
\end{proof}
We are now ready to perform a similar induction for the non-buoyant case. Note that in this case the induction hypothesis concerns probabilities rather than expectations. This is necessary because the expectations can grow as $N\to\infty$ for the wrong reasons if $H$ has a buoyant coarsening but has a subhypergraph that does not have a buoyant coarsening (e.g.\ the tree in \cref{fig:unbalanced}).
\begin{lem}[Every scale, non-buoyant case]
\label{lem:firstmoment3}
Let $H$ be a finite hypergraph with boundary such that $E\neq \emptyset$, let $m \geq 1$, and suppose that
$H$ has a subhypergraph that does not have any $d$-buoyant coarsenings.
Then there exist positive constants $c_1=c_1(\mathbb G,H,m)$ and $c_2=c_2(\mathbb G,H,m)$ such that
\[
\vspace{0.25em}
\log_2\P(S^H_{x}(0,\infty) > 0) \leq -c_1 \, n + |E\cup \partial V|^2\log_2 n + c_2\]
for all
$x=(x_u)_{u\in \partial V} \in \mathbb V^{\partial V}$ such that $2^{n-m} \leq \langle x_u x_v \rangle \leq 2^{n-1}$ for all $u,v \in \partial V$.
\end{lem}
\begin{proof}
We induct on the number of edges in $H$. For the base case, suppose that $H$ has a single edge.
In this case we must have that $\eta_d(H)>0$, and we deduce from \cref{lem:firstmoment,lem:firstmomentclose} that
\begin{align*}\mathbb W^H_x(0,N) &\leq \mathbb W^H_x(0,n)+ \sum_{i=n+1}^N \mathbb W^H_x(i-1,i)\\ &\preceq \exp_2\left[-\hat \eta_d(H)\, n + |E\cup \partial V|^2\log_2 n \right] + \sum_{i=n+1}^N \exp_2\left[ -\hat \eta_d(H) \, i + |E|^2\log_2 i \right]\\
&\preceq \exp_2\left[-\hat \eta_d(H) \, n + |E\cup \partial V|^2\log_2 n\right],\end{align*}
so that the claim follows from Markov's inequality.
This establishes the base case of the induction.
Now suppose that $|E|>1$ and that
the claim holds for all finite hypergraphs with boundary that have fewer edges than $H$. If $H$ has a proper subhypergraph $H'$ with $\hat \eta_d(H')>0$, then $S^{H'}_x(0,\infty)$ is positive if $S^{H}_x(0,\infty)$ is, and so the claim follows from the induction hypothesis, letting $c_1(\mathbb G,H,m)=c_1(\mathbb G,H',m)$ and $c_2(\mathbb G,H,m)=c_2(\mathbb G,H',m)$.
Thus, it suffices to consider the case that $\hat \eta_d(H) >0$ but that $\hat \eta_d(H')\leq 0$ for every proper subhypergraph $H'$ of $H$.
In this case, we apply \cref{lem:inductionestimate}
to deduce that
\begin{multline*}
\log_2 \left[ \mathbb W^H_x(0,N+1) - \mathbb W^H_x(0,N) \right] \leq \log_2 \left[ \mathbb W^H_x(0,N+|E|+2) - \mathbb W^H_x(0,N) \right] \\
\lesssim
\max_{E'\subsetneq E}
\left\{
\log_2 \mathbb W^{H(E')}_x(0,N+|E|+2)
- \left[\hat \eta_d(H) -\hat \eta_d(H(E')) \right]\, N + |E\setminus E'|^2\log_2 N
\right\}.
\end{multline*}
Since every subhypergraph of $H(E')$ is a proper subhypergraph of $H$, and therefore has a $d$-buoyant coarsening, \cref{lem:firstmoment2} then yields that
\begin{multline*}
\log_2 \left[ \mathbb W^H_x(0,N+1) - \mathbb W^H_x(0,N) \right]
\lesssim - \hat \eta_d(H) \, N + (|E'\cup \partial V|^2+1 +|E\setminus E'|^2 ) \log_2 N\\
\lesssim - \hat \eta_d(H) \, N + |E\cup \partial V|^2 \log_2 N.
\end{multline*}
Finally, combining this with \cref{lem:firstmomentclose} yields that, since $\hat \eta_d(H)>0$,
\begin{align*}\mathbb W_x^H(0,N) &\preceq \exp_2\left[-\hat \eta_d(H)\, n + |E\cup \partial V|^2\log_2 n \right] + \sum_{i=n}^N \exp_2\left[ -\hat \eta_d(H)\,i+|E\cup \partial V|^2\log_2 i\right]\\
&\preceq \exp_2\left[-\hat \eta_d(H)\, n + |E\cup \partial V|^2\log_2 n\right], \end{align*}
and the claim follows from Markov's inequality.
\end{proof}
\begin{proof}[Proof of \cref{prop:nonubiquity}]
Let $H$ be a finite hypergraph with boundary that has a subhypergraph that does not have any $d$-buoyant coarsenings, so that in particular $H$ has at least one edge.
\cref{lem:firstmoment3} and \cref{prop:sdim2} imply that for every $\varepsilon>0$, there exists $x=(x_v)_{v\in \partial V}$ such that the points $x_v$ are all in different components of $\mathfrak F$ with probability at least $1-\varepsilon$, but $H$ has probability at most $\varepsilon$ to be faithfully present at $x$ in the component hypergraph $\mathcal C^{hyp}_r(\mathfrak F)$. It follows that $H$ is not faithfully ubiquitous in the component hypergraph $\mathcal C^{hyp}_r(\mathfrak F)$ a.s.
Now suppose that $H$ is a hypergraph with boundary such that every quotient $H'$ of $H$ with $R_\mathbb G(H') \leq r$ has a subhypergraph that does not have any $d$-buoyant coarsenings. Note that if $H'$ is a quotient of $H$ such that $R_\mathbb G(H') > r$ then $H'$ is not faithfully present anywhere in $\mathbb G$ a.s.; this follows immediately from the definition of $R_\mathbb G(H')$. On the other hand, \cref{lem:firstmoment3} and \cref{prop:sdim2} imply that for every $\varepsilon>0$, there exists $x=(x_v)_{v\in \partial V}$ such that the points $x_v$ are all in different components of $\mathfrak F$ with probability at least $1-\varepsilon$, but, for each quotient $H'$ of $H$ with $R_\mathbb G(H') \leq r$, the hypergraph $H'$ has probability at most $\varepsilon/|\{\text{quotients of } H\}|$ to be faithfully present at $x$ in the component hypergraph $\mathcal C^{hyp}_r(\mathfrak F)$, since $H'$ must have a subhypergraph none of whose coarsenings are $d$-buoyant by assumption. It follows by a union bound that $H$ has probability at most $\varepsilon$ to be present in $\mathcal C^{hyp}_r(\mathfrak F)$ at this $x$.
It follows as above that $H$ is not ubiquitous in the component hypergraph $\mathcal C^{hyp}_r(\mathfrak F)$ a.s.
\end{proof}
\subsection{Positive probability of robust faithful presence in low dimensions}
\label{sec:2ndmoment}
Recall that if $\mathbb G$ is a $d$-dimensional transitive graph, $H=(\partial V, V_\circ, E)$ is a finite hypergraph with boundary, $r\geq 1$, and $(x_v)_{v\in \partial V}$ is a collection of points in $\mathbb G$, then we say that $H$ is $r$-\textbf{robustly faithfully present} at $x=(x_v)_{v\in \partial V}$ if there is an infinite collection $\{ \xi^i = (\xi^i_{(e,v)})_{(e,v)\in E_\bullet} : i \geq 1 \}$
such that $\xi^i$ is a witness for the $r$-faithful presence of $H$ at $x$ for every $i$, and $\xi^i_{(e,v)} \neq \xi^j_{(e',v')}$ for every $i, j \geq 1$ and $(e,v),(e',v') \in E_\bullet$ such that $i \neq j$. As in the introduction, for each $M\geq 1$ we let $R_\mathbb G(M)$ be minimal such that it is possible for a set of diameter $R_\mathbb G(M)$ to intersect $M$ distinct components of the uniform spanning forest of $\mathbb G$, and let $R_\mathbb G(H) = R_\mathbb G(\max_{e\in E} \deg(e) ).$
\medskip
We say that a set $W \subset \mathbb V$ is \textbf{well-separated} if the vertices of $W$ are all in different components of the uniform spanning forest $\mathfrak F$ with positive probability.
\begin{lemma}
\label{lem:annoyinglemma}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$. Then a finite set $W \subset \mathbb V$ is well-separated if and only if when we start a collection of independent simple random walks $\{X^v : v \in W\}$ at the vertices of $W$, the event that $\{
X^u_i : i\geq 0
\} \cap \{X^v_i :i \geq 0 \} = \emptyset$ for every distinct $u,v\in W$ has positive probability.
\end{lemma}
\begin{proof}
We will be brief since the statement is intuitively obvious from Wilson's algorithm and the details are somewhat tedious.
The `if' implication follows trivially from Wilson's algorithm. To see the reverse implication, suppose that $W$ is well-separated and consider the paths $\{(\Gamma^v_i)_{i\geq 0} : v\in W\}$ from the vertices of $W$ to infinity in $\mathfrak F$. Using Wilson's algorithm and the Green function estimate \eqref{eq:HSC}, it is easily verified that
\begin{equation}
\label{eq:supressed}
\lim_{i\to\infty} \sum_{v\in W} \sum_{u\in W \setminus \{v\}} \left[\langle \Gamma^v_i \Gamma^u_i \rangle^{-d+4} + \sum_{j=0}^{i-1} \langle \Gamma^v_i \Gamma^u_j \rangle^{-d+2}\right] =0
\end{equation}
almost surely on the event that the vertices of $W$ are all in different components of $\mathfrak F$. Let $i\geq 1$ and consider the collection of simple random walks $Y^{v,i}$ started at $\Gamma^v_i$ and conditionally independent of each other and of $\mathfrak F$ given $(\Gamma^v_i)_{v\in W}$, and let $\tilde Y^{v,i}$ be the random path formed by concatenating $(\Gamma^{v}_j)_{j=1}^i$ with $Y^{v,i}$. It follows from \eqref{eq:supressed} and Markov's inequality that
\begin{equation}
\label{eq:supressed2}
\limsup_{i\to\infty}\P\left( \bigl\{\tilde Y^{v,i}_j : j \geq 0\bigr\} \cap
\bigl\{\tilde Y^{u,i}_j: j \geq 0\bigr\}
= \emptyset \text{ for every distinct $u,v\in W$}\right)
= \P(\mathscr F(W))>0,
\end{equation}
where we recall that $\mathscr F(W)$ is the event that all the vertices of $W$ are in different components of $\mathfrak F$. In particular, it follows that the probability appearing on the left hand side of \eqref{eq:supressed2} is positive for some $i_0\geq 0$. The result now follows since the walks $\{X^v : v \in W\}$ have a positive probability of following the paths $\Gamma^v$ for their first $i_0$ steps, and on this event their conditional distribution coincides with that of $\{\tilde Y^{v,i_0} : v\in W\}$.
\end{proof}
The goal of this subsection is to prove criteria for robust faithful presence to occur with positive probability.
We begin with the case that $d/(d-4)$ is not an integer (i.e., $d\notin \{5,6,8\}$), which is technically simpler. The corresponding proposition for $d=5,6,8$ is given in \cref{prop:ubiquityspeciald}.
\begin{prop}\label{prop:ubiquity}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$ such that $d/(d-4)$ is not an integer, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$. Let $H$ be a finite hypergraph with boundary with at least one edge, and suppose that $H$ has a coarsening all of whose subhypergraphs are $d$-buoyant.
Then for every $r\geq R_\mathbb G(H)$ and every well-separated collection of points $(x_v)_{v\in \partial V}$ in $\mathbb V$,
there is a positive probability that
the vertices $x_u$ are all in different components of $\mathfrak F$ and that $H$ is robustly faithfully present at $x$ in $\mathcal{C}^{hyp}_r(\mathfrak F)$.
\end{prop}
The proof of \cref{prop:ubiquity} will employ the notion of \emph{constellations}.
The reason we work with constellations is that a constellation of witnesses for the presence of $H$ (defined below) necessarily contains a witness for every refinement of $H$. This allows us to pass to a coarsening and work in the setting that every subhypergraph of $H$ is $d$-buoyant.
For each finite set $A$, we define the \textbf{rooted powerset} of $A$, denoted $\mathcal P_\bullet(A)$, to be
\[\mathcal P_\bullet(A) := \{(B,b) : B \text{ is a subset of $A$ and $b \in B$}\}.\]
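For example, if $A = \{1,2\}$ then
\[\mathcal P_\bullet(A) = \bigl\{(\{1\},1),\;(\{2\},2),\;(\{1,2\},1),\;(\{1,2\},2)\bigr\},\]
and in general $|\mathcal P_\bullet(A)| = \sum_{B \subseteq A} |B| = |A|\, 2^{|A|-1}$.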
We call a collection of vertices $y=(y_{(B,b)})$ of $\mathbb G$ indexed by $\mathcal P_\bullet(A)$ an $A$\textbf{-constellation}. Given an $A$-constellation $y$, we define $\mathscr A_r(y)$ to be the event that $y_{(B,b)}$ and $y_{(B',b')}$ are connected in $\mathfrak F$ if and only if $b=b'$, and in this case they are connected by a path in $\mathfrak F$ with diameter at most $r$.
We say that an $A$-constellation $y$ in $\mathbb G$ is \textbf{$r$-good} if it satisfies the following conditions.
\begin{enumerate}
\itemsep0.31em
\item
$\langle y_{(B,b)} y_{(B',b')} \rangle \leq r$ for every $(B,b),(B',b') \in \mathcal P_\bullet(A)$.
\item
$\langle y_{(B,b)} y_{(B,b')} \rangle \leq R_{\mathbb G}(|B|) +1$ for every $B \subseteq A$ and $b,b' \in B$, and
\item $\P(\mathscr A_r(y)) \geq 1/r$.
\end{enumerate}
The proof of the following lemma is deferred to \cref{Sec:technical}.
\begin{lemma}\label{lem:constellations}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$.
Let $A$ be a finite set. Then there exists $r=r(|A|)$ such that for every vertex $x$ of $\mathbb G$, there exists an $r$-good $A$-constellation contained in the ball of radius $r$ around $x$.
\end{lemma}
Let $H=(\partial V,V_\circ,E)$ be a finite hypergraph with boundary with at least one edge, and let $r=r(\max_e\deg(e))$ be as in \cref{lem:constellations}. We write $\mathcal P_\bullet(e) = \mathcal P_\bullet(\{v \in V : v \perp e\})$ for each $e\in E$. For each $\xi=(\xi_e)_{e\in E}\in \mathbb V^E$ and each $e\in E$, we let $(\xi_{(e,B,v)})_{(B,v) \in \mathcal P_\bullet(e)}$ be an $r$-good $e$-constellation contained in the ball of radius $r$ about $\xi_e$, whose existence is guaranteed by \cref{lem:constellations}.
For each $x=(x_v)_{v\in \partial V}$ and $\xi=(\xi_e)_{e\in E}$, we define $\tilde \mathscr W(x,\xi)$ to be the event that the following conditions hold:
\begin{enumerate}[leftmargin=*]
\item For each boundary vertex $v \in \partial V$, every point in the set $\{x_v\} \cup \{\xi_{(e,A,v)} : e\in E, (A,v) \in \mathcal P_\bullet(e)\}$ is in the same component of $\mathfrak F$,
\item For each interior vertex $v \in V_\circ$, every point in the set $\{\xi_{(e,A,v)} : e\in E, (A,v) \in \mathcal P_\bullet(e)\}$ is in the same component of $\mathfrak F$, and
\item For any two distinct vertices $u,v \in V$, the components of $\mathfrak F$ containing the sets $\{\xi_{(e,A,u)} : e\in E, (A,u) \in \mathcal P_\bullet(e)\}$ and $\{\xi_{(e,A,v)} : e\in E, (A,v) \in \mathcal P_\bullet(e)\}$ are distinct.
\end{enumerate}
Thus, on the event $\tilde \mathscr W(x,\xi)$ every refinement $H'$ of $H$
is $R_\mathbb G(H')$-faithfully present at $x$: Indeed, letting $\phi_V: V'\to V$ and $\phi_E: E'\to E$ be as in the definition of a coarsening and letting $A(e') = \{v \in V : \phi^{-1}_V(v) \perp' e'\}$ for each $e'\in E'$, the collection $(\xi_{(e',v')})_{(e',v') \in E_\bullet '} =
(\xi_{(\phi_E(e'),\,A(e'),\,\phi_V(v'))})_{(e',v') \in E_\bullet '}$ is a witness for the $R_\mathbb G(H')$-faithful presence of $H'$ at $x$.
\medskip
For each $n\geq 0$, let $\Omega_x(n)$ be the set
\[\Omega_x(n) = \left\{(\xi_{e})_{e\in E } \in \Lambda_x(n,n+1)^{E} : \langle \xi_e \xi_{e'} \rangle \geq 2^{n-C_1} \text{ for all distinct $e,e' \in E$}\right\}, \]
where $C_1=C_1(E)$ is chosen
so that $\log_2|\Omega_x(n)|\approx nd|E|$ for all $n$ sufficiently large and all $x$. It is easy to see that such a constant exists using the $d$-dimensionality of $\mathbb G$: indeed, $|\Lambda_x(n,n+1)| \asymp 2^{dn}$, while for each of the $\binom{|E|}{2}$ pairs $e \neq e'$ the constraint $\langle \xi_e \xi_{e'} \rangle < 2^{n-C_1}$ excludes at most $O(2^{d(n-C_1)}) \cdot |\Lambda_x(n,n+1)|^{|E|-1}$ tuples, so that taking $C_1$ sufficiently large ensures that at least half of the tuples in $\Lambda_x(n,n+1)^{E}$ remain.
For each $n\geq 0$ we define $\tilde S_x(n)$ to be the random variable
\begin{equation*} \tilde S_x(n) :=
\sum_{\xi \in \Omega_x(n)}\mathbbm{1}(\tilde\mathscr W(x,\xi)),\end{equation*}
so that every refinement $H'$ of $H$ is $R_\mathbb G(H')$-faithfully present at $x$ on the event that $\tilde S_x(n)$ is positive for some $n\geq 0$, and every refinement $H'$ of $H$ is $R_\mathbb G(H')$-robustly faithfully present at $x$ on the event that $\tilde S_x(n)$ is positive for infinitely many $n\geq 0$.
\medskip
The following lemma lower bounds the first moment of $\tilde S_x(n)$.
\begin{lem}\label{prop:restrictedfirstmoment}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$.
Let $H$ be a finite hypergraph with boundary with at least one edge, let $\varepsilon>0$, and suppose that $x=(x_v)_{v\in \partial V}$ satisfies
\[
\P\Bigl(\{X^u_i : i \geq 0 \} \cap \{X^v_i : i \geq 0\} =\emptyset \text{ for every distinct $u,v\in \partial V$}\Bigr)\geq \varepsilon
\]
when $\{ X^v : v \in \partial V\}$ are a collection of independent simple random walks started at $(x_v)_{v\in \partial V}$.
Then there exist constants $c=c(\mathbb G,H,\varepsilon)$ and $n_0=n_0(\mathbb G,H,\varepsilon)$ such that if $n\geq n_0$ and $\langle x_u x_v \rangle \leq 2^{n-1}$ for all $u,v \in \partial V$, then
\[
\log_2 \P(\tilde\mathscr W(x,\xi)) \geq -(d-4)\left(\Delta -|V_\circ|\right) \, n -c\]
for every $\xi \in \Omega_x(n)$ and hence that
\[\log_2\mathbb E[\tilde S_x(n)] \geq -\eta_d(H)\, n - c.\]
\end{lem}
\medskip
The proofs of \cref{lem:constellations,prop:restrictedfirstmoment} are unfortunately rather technical, and are deferred to \cref{Sec:technical}. For the rest of this section, we will take these lemmas as given, and use them to prove \cref{prop:ubiquity}. The key remaining step is to upper bound the second moment of the random variable $\tilde S_x(n)$.
\begin{lem}[Restricted second moment upper bound]\label{lem:secondmoment}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$ such that $d/(d-4)$ is not an integer.
Let $H$ be a hypergraph with boundary with at least one edge. Suppose that every subhypergraph of $H$ is $d$-buoyant. Then there exists a positive constant $c=c(\mathbb G,H)$ such that
\vspace{0.3em}
\[
\vspace{0.3em}
\log_2\mathbb E[\tilde S_x(n)^2] \leq -2\eta_d(H)\, n + c
\]
for all $x=(x_u)_{u\in \partial V} \in (\mathbb V)^{\partial V}$ and all $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for all $u,v \in \partial V$.
\end{lem}
\begin{proof}
Observe that if $\xi,\zeta \in \Omega_x(n)$ are such that the events $\tilde \mathscr W(x,\xi)$ and $\tilde \mathscr W(x,\zeta)$ both occur, then the following hold:
\begin{enumerate}
\item For each $v \in V$, there is at most one $v' \in V$ such that $\xi_{(e,A,v)}$ and $\zeta_{(e',A',v')}$
are in the same component of $\mathfrak F$ for some (and hence every) $e,e' \in E$ and $(A,v) \in \mathcal P_\bullet(e)$, $(A',v')\in \mathcal P_\bullet(e')$.
\item For each $e \in E$, there is at most one $e'$ such that $\langle \xi_e \zeta_{e'} \rangle \leq 2^{n-C_1-1}$.
\end{enumerate}
As a bookkeeping tool to account for the first of these degrees of freedom, we define $\Phi$ to be the set of functions $\phi: V_\circ \to V_\circ \cup \{\star\}$ such that the preimage $\phi^{-1}(v)$ has at most one element for each $v\in V_\circ$. We write $\phi^{-1}(v)=\star$ if $v$ is not in the image of $\phi \in \Phi$, and write $\phi(v)=v$ for every $v\in \partial V$. (Here and elsewhere, we use $\star$ as a dummy symbol so that we can encode partial bijections by functions.) For each $\phi \in \Phi$ and $\xi,\zeta \in \mathbb V^E$, we define $\tilde\mathscr W_\phi(\xi,\zeta)$ to be the event that $\tilde\mathscr W(x,\xi)\cap \tilde\mathscr W(x,\zeta)$ occurs, and that
for any two distinct vertices $u,v \in V_\circ$ the components of $\mathfrak F$ containing $\{\xi_{(e,A,u)} : e\in E, (A,u)\in \mathcal P_\bullet(e)\}$ and $\{\zeta_{(e,A,v)}: e\in E, (A,v)\in \mathcal P_\bullet(e) \}$ coincide if and only if $v =\phi(u)$. Thus, we have that
\[\tilde \mathscr W(x,\xi)\cap \tilde \mathscr W(x,\zeta) = \bigcup_{\phi\in \Phi} \tilde \mathscr W_{\phi}(\xi,\zeta)\]
and hence that
\[\tilde S_x(n)^2 = \sum_{\xi,\zeta \in \Omega_x(n)} \mathbbm{1}[\tilde \mathscr W(x,\xi) \cap \tilde \mathscr W(x,\zeta)] \leq \sum_{\phi \in \Phi}\sum_{\xi,\zeta\in \Omega_x(n)} \mathbbm{1}[\tilde \mathscr W_\phi(\xi,\zeta)].\]
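The set $\Phi$ is finite, with cardinality depending only on $|V_\circ|$ and not on $n$, a fact used at the end of this proof when we sum over $\Phi\times\Psi$. The following minimal sketch, whose encoding (the string \texttt{"*"} for the dummy symbol) and function name are ours, enumerates these partial bijections for illustration.
\begin{verbatim}
from itertools import product

def partial_bijections(V):
    # All maps phi : V -> V + {star} whose preimages phi^{-1}(v) have
    # at most one element, i.e. partial bijections of V encoded by
    # functions with a dummy symbol, as in the definition of Phi.
    STAR = "*"
    maps = []
    for values in product(list(V) + [STAR], repeat=len(V)):
        image = [v for v in values if v != STAR]
        if len(image) == len(set(image)):  # injective off the dummy symbol
            maps.append(dict(zip(V, values)))
    return maps

# A 2-element set admits 7 such maps.
print(len(partial_bijections(["u", "v"])))
\end{verbatim}
In general there are $\sum_{k=0}^{|V_\circ|} \binom{|V_\circ|}{k}^2 k!$ such maps, which is all that matters below: $|\Phi|$, like $|\Psi|$, is a constant depending only on $H$.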
\medskip
It follows from \cref{prop:sdim2} that
\vspace{0.2em}
\begin{multline}
\vspace{0.2em}
\P(\tilde \mathscr W_\phi(\xi,\zeta)) \preceq \prod_{u \in \partial V}\langle \{x_u\}\cup\{\xi_{e} : e \perp u\},\{\zeta_{e} : e\perp u\} \rangle^{-(d-4)}
\cdot \prod_{u \in V_\circ,\, \phi(u)=\star}\langle\{\xi_{e} : e \perp u\}\rangle^{-(d-4)}\\
\cdot
\prod_{u \in V_\circ,\, \phi^{-1}(u)=\star}\langle\{\zeta_{e} : e \perp u\}\rangle^{-(d-4)} \prod_{u \in V_\circ,\, \phi(u) \neq \star}\langle\{\xi_{e} : e \perp u\} \cup \{\zeta_e : e \perp \phi(u)\}\rangle^{-(d-4)}.
\label{eq:Hphi}
\end{multline}
We define $R_\phi(\xi,\zeta)$ to be the expression on the right hand side of \eqref{eq:Hphi}, so that
\[\mathbb E\left[\tilde S_x(n)^2\right] \preceq \sum_{\phi\in \Phi}\sum_{\xi,\zeta \in \Omega_x(n)} R_\phi(\xi,\zeta).\]
\medskip
We now account for the second of the two degrees of freedom above.
Let $\Psi$ be the set of functions $\psi: E \to E \cup \{\star\}$ such that the preimage $\psi^{-1}(e)$ has at most one element for every $e\in E$.
For each $\psi \in \Psi$ and $k = (k_e)_{e \in E} \in \{0,\ldots,n\}^{E}$, let
\begin{multline*}\Omega^{\psi,k} = \\\left\{(\xi,\zeta) \in (\Omega_x(n))^2 :
\begin{array}{l} 2^{n-k_e} \leq \langle \zeta_e \xi_{\psi(e)} \rangle \leq 2^{n-k_e+2} \text{ for all $e\in E$ such that $\psi(e) \neq \star$,}
\vspace{0.3em} \\
\text{and }\langle \zeta_e \xi_{e'} \rangle \geq 2^{n-C_1-2} \text{ for all $e,e'\in E$ such that $e' \neq \psi(e)$}
\end{array}
\right\},
\end{multline*}
where $C_1$ is the constant from the definition of $\Omega_x(n)$,
and observe that
\begin{equation}\label{eq:Omegapsikvolume}\log_2|\Omega^{\psi,k}| \lesssim 2d|E|n - d\sum_{\psi(e)\neq\star} k_e. \end{equation}
For each $\xi,\zeta \in \Omega_x(n)$ and $e \in E$, there is at most one $e' \in E$ such that $\langle \zeta_e \xi_{e'} \rangle \leq 2^{n-C_1-2}$, and it follows that
\[\left(\Omega_x(n)\right)^2 = \bigcup_{\psi,k} \Omega^{\psi,k},\]
where the union is taken over $\psi \in \Psi$ and $k \in \{0,\ldots,n\}^E$.
\medskip
Now, for any $(\xi,\zeta) \in \Omega^{\psi,k}$ and $u \in V_\circ$ with $\phi(u)\neq \star$, we have
that
\begin{multline*}
\log_2 \langle\{\xi_e : e \perp u\}\cup\{\zeta_e : e \perp \phi(u)\}\rangle^{-(d-4)} \approx -(d-4)\left(\deg(u)+\deg(\phi(u))-1\right)\, n\\ + (d-4) \sum_{e \perp u} \mathbbm{1}[\psi(e)\perp \phi(u)] \, k_e. \end{multline*}
Meanwhile, we have that
\[\log_2 \langle \{\xi_e : e \perp u\} \rangle^{-(d-4)} \approx
\log_2 \langle \{\zeta_e : e \perp u\} \rangle^{-(d-4)} \approx
-(d-4)(\deg(u)-1)\, n\]
for every $u\in V_\circ$, and
\begin{multline*}
\log_2 \langle x_u,\{\xi_e : e \perp u\},\{\zeta_e : e\perp u\} \rangle^{-(d-4)} \approx -2(d-4)\deg(u)n + (d-4)\sum_{e \perp u}\mathbbm{1}[\psi(e)\perp u]\,k_e
\end{multline*}
for every $u\in \partial V$.
Summing these estimates yields
\begin{multline*}\log_2 R_\phi(\xi,\zeta) \approx -2(d-4)\Delta n + 2(d-4)|V_\circ|n - (d-4)|\{ v \in V_\circ : \phi(v) \neq \star\}|\,n
\\+ (d-4)\sum_{e}|\{u \perp e : \phi(u) \perp \psi(e)\}|k_e.
\end{multline*}
Thus, using the volume estimate \eqref{eq:Omegapsikvolume}, we have that
\begin{multline*}\log_2 \sum_{(\xi,\zeta) \in \Omega^{\psi,k}}R_\phi(\xi,\zeta) \lesssim
-2\eta_d(H)n - (d-4)|\{ u \in V_\circ : \phi(u) \neq \star\}|\,n
\\+ (d-4)\sum_{\psi(e)\neq \star}|\{u \perp e : \phi(u) \perp \psi(e)\}|k_e - d \sum_{\psi(e)\neq\star}k_e.
\end{multline*}
Observe that for every $\psi\in \Psi$ and $e\in E$, we have that
\begin{multline*}
\sum_{k_e=0}^n \exp_2\left(\left[(d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| - d\right]k_e \right) \\\preceq \begin{cases}
\exp_2\left(\left[(d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| - d\right]n\right) & \text{ if $(d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| >d$}\\
n & \text{ if $(d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| =d$}\\
1 & \text{ if $(d-4)|\{u\perp e:\phi(u)\perp \psi(e)\}| <d$}.
\end{cases}
\end{multline*}
Thus, summing over $k$, we see that for every $\psi \in \Psi$ and $\phi \in \Phi$ we have that
\begin{multline}
\log_2 \sum_{k\in\{0,\ldots,n\}^E}\sum_{(\xi,\zeta) \in \Omega^{\psi,k}}R_\phi(\xi,\zeta) \lesssim -2\eta_d(H)\, n - (d-4)|\{ u \in V_\circ : \phi(u) \neq \star\}| n\\
+\sum_{e\in E}\left[(d-4)|\{u \perp e : \phi(u) \perp \psi(e)\}|-d\right]
\mathbbm{1}\left(|\{u \perp e : \phi(u) \perp \psi(e)\}| > d/(d-4) \right) n\\
+\sum_{e\in E}\mathbbm{1}\left(|\{u\perp e : \phi(u) \perp \psi(e)\}| = d/(d-4)\right)\log_2 n.
\label{eq:Rallterms}
\end{multline}
Since $d/(d-4)$ is not an integer, the last term is zero, so that if we define $Q : \Phi \times \Psi \to \mathbb R$ by
\begin{multline}Q(\phi,\psi) = - (d-4)|\{ u \in V_\circ : \phi(u) \neq \star\}|\\
+\sum_{e\in E}\left[(d-4)|\{u \perp e : \phi(u) \perp \psi(e)\}|-d\right]
\mathbbm{1}[|\{u \perp e : \phi(u) \perp \psi(e)\}| > d/(d-4) ],
\label{eq:Qdef2}
\end{multline}
then we have that
\begin{equation*}
\log_2 \sum_{k \in \{0,\ldots,n\}^E} \sum_{(\xi,\zeta)\in \Omega^{\psi,k}}R_\phi(\xi,\zeta)
\lesssim -2\eta_d(H)n + Q(\phi,\psi)n.
\end{equation*}
Thus, since $|\Phi \times \Psi|$ does not depend on $n$, we have that
\begin{multline*}\log_2 \mathbb E[\tilde S_x(n)^2] \lesssim \log_2 \sum_{\phi \in \Phi}\sum_{\psi \in \Psi} \sum_{k \in \{0,\ldots,n\}^E} \sum_{(\xi,\zeta)\in \Omega^{\psi,k}}R_\phi(\xi,\zeta) \\
\lesssim
\max_{\phi,\psi} \log_2 \sum_{k \in \{0,\ldots,n\}^E} \sum_{(\xi,\zeta)\in \Omega^{\psi,k}}R_\phi(\xi,\zeta) \lesssim
-2\eta_d(H)n + \max_{\phi,\psi}Q(\phi,\psi)n,
\end{multline*}
and so it suffices to prove that $Q(\phi,\psi)\leq 0$ for every $(\phi,\psi)\in \Phi\times\Psi$.
To prove this, first observe that we can bound
\begin{multline*}Q(\phi,\psi) \leq \tilde Q(\phi):= - (d-4)|\{ u \in V_\circ : \phi(u) \neq \star\}|\\
+\sum_{e\in E}\left[(d-4)|\{u \perp e : \phi(u) \neq \star \}|-d\right]
\mathbbm{1}[|\{u \perp e : \phi(u) \neq \star \}| > d/(d-4) ].
\end{multline*}
Let $H'$ be the subhypergraph of $H$ with boundary vertices given by the boundary vertices of $H$, edges given by the set of edges of $H$ that have $|\{u\perp e:\phi(u)\neq \star\}|>d/(d-4)$, and interior vertices given by the set of interior vertices $u$ of $H$ for which $\phi(u)\neq \star$ and $\phi(u)\perp e$ for some $e\in E'$. Then we can rewrite
\begin{equation}
\tilde Q(\phi) = \eta_d(H')-(d-4)\bigl|\{v\in V_\circ : \phi(v)\neq \star\}\setminus V'\bigr| \leq 0,
\label{eq:bad2}
\end{equation}
where the inequality follows from the assumption that every subhypergraph of $H$ is $d$-buoyant. This completes the proof.
\end{proof}
\begin{proof}[Proof of \cref{prop:ubiquity}]
Suppose that the finite hypergraph with boundary $H$ has a $d$-optimal coarsening $H'$ all of whose subhypergraphs are $d$-buoyant. Then the lower bound on the square of the first moment of $\tilde S^{H'}_x(n)$ provided by \cref{prop:restrictedfirstmoment} and the upper bound on the second moment of $\tilde S^{H'}_x(n)$ provided by \cref{lem:secondmoment} coincide up to constants, so that the Cauchy-Schwarz inequality implies that
\[\P\left(\tilde S^{H'}_x(n) > 0\right) \geq \myfrac[0.5em]{\mathbb E\left[\tilde S^{H'}_x(n)\right]^2}{\mathbb E\left[\tilde S^{H'}_x(n)^2\right]} \succeq 1\]
for every $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for every $u,v \in \partial V$. It follows from Fatou's lemma that
\[\P\left(\tilde S^{H'}_x(n)>0 \text{ for infinitely many $n$}\right) \geq \limsup_{n\to\infty} \P\left(\tilde S^{H'}_x(n) > 0\right) \succeq 1, \]
so that every refinement of $H'$, and in particular $H$, is robustly faithfully present at $x$ with positive probability as claimed.
\end{proof}
\subsubsection{The cases $d=5,6,8$.}
\label{sec:speciald}
We now treat the cases in which $d/(d-4)$ is an integer; since $d/(d-4)=1+4/(d-4)$, this occurs for $d>4$ precisely when $d-4$ divides $4$, that is, when $d\in\{5,6,8\}$. This requires somewhat more care owing to the possible presence of the logarithmic term in \eqref{eq:Rallterms}. Indeed, we will only treat certain special `building block' hypergraphs directly via the second moment method. We will later build other hypergraphs out of these special hypergraphs in order to prove the main theorems.
Let $H=(\partial V,V_\circ, E)$ be a finite hypergraph with boundary. We say that a subhypergraph $H'=(\partial V',V_\circ',E')$ of $H$ is \textbf{bordered} if $\partial V'=\partial V$ and every vertex $v\in V \setminus V'$ is incident to at most one edge in $E'$. For example, every full subhypergraph containing every boundary vertex is bordered. We say that a subhypergraph of $H$ is \textbf{proper} if it is not equal to $H$ and \textbf{non-trivial} if it has at least one edge. We say that $H$ is $d$\textbf{-basic} if it does not have any edges of degree less than or equal to $d/(d-4)$ and does not contain any proper, non-trivial bordered subhypergraphs $H'$ with $\eta_d(H')=0$.
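To make these combinatorial conditions concrete, the following minimal sketch computes $\eta_d$ and tests the edge-degree part of the $d$-basic condition. It assumes, as the moment bounds above suggest, that $\eta_d(H) = (d-4)(\Delta - |V_\circ|) - d|E|$, where $\Delta$ is the total number of incidences between vertices and edges, and that $d$-buoyancy of $H$ means $\eta_d(H) \leq 0$; the data format is ours, and the enumeration of bordered subhypergraphs required by the full definition is omitted.
\begin{verbatim}
from fractions import Fraction

def eta(d, interior, edges):
    # d-apparent weight, assuming eta_d(H) = (d-4)(Delta - |V_int|) - d|E|,
    # where each edge is given as the set of vertices incident to it
    # and Delta is the total number of incidences.
    delta = sum(len(e) for e in edges)
    return (d - 4) * (delta - len(interior)) - d * len(edges)

def passes_degree_condition(d, edges):
    # Edge-degree part of the d-basic condition: every edge must have
    # degree strictly greater than d/(d-4); exact arithmetic avoids
    # floating-point comparisons.
    return all(len(e) > Fraction(d, d - 4) for e in edges)

# Two boundary vertices u, v joined through one interior vertex w by two
# 2-edges: in d = 5 this gives eta = 1*(4 - 1) - 5*2 = -7 <= 0, so the
# hypergraph is 5-buoyant, but its edges have degree 2 <= 5/1 = 5, so it
# fails the 5-basic degree condition.
edges = [{"u", "w"}, {"w", "v"}]
print(eta(5, {"w"}, edges), passes_degree_condition(5, edges))
\end{verbatim}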
\begin{prop}\label{prop:ubiquityspeciald}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d\in\{5,6,8\}$, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$. Let $H$ be a finite hypergraph with boundary with at least one edge.
Suppose additionally that one of the following assumptions holds:
\begin{enumerate}
\item
$H$ is a refinement of a hypergraph with boundary that has exactly one edge, the unique edge contains exactly $d/(d-4)$ boundary vertices, and every interior vertex is incident to the unique edge; or
\item
$H$ has a $d$-basic coarsening with more than one edge, all of whose subhypergraphs are $d$-buoyant.
\end{enumerate}
Then for every $r\geq R_\mathbb G(H)$ and every well-separated collection of points $(x_v)_{v\in \partial V}$ in $\mathbb V$
there is a positive probability that
the vertices $x_u$ are all in different components of $\mathfrak F$ and that $H$ is
robustly faithfully present at $x$.
\end{prop}
The proof of \cref{prop:ubiquityspeciald} will apply the following lemma, which is the analogue of \cref{lem:secondmoment} in this context.
\begin{lem}
\label{lem:secondmomentspeciald}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d\in\{5,6,8\}$.
Let $H$ be a hypergraph with boundary with at least one edge such that every subhypergraph of $H$ is $d$-buoyant.
\begin{enumerate}
\itemsep0.5em
\item If $H$ has exactly one edge, this unique edge is incident to exactly $d/(d-4)$ boundary vertices, and every interior vertex is incident to this unique edge,
then there exists a constant $c=c(\mathbb G,H)$ such that
\vspace{0.3em}
\[
\vspace{0.3em}
\log_2\mathbb E[\tilde S_x(n)^2] \leq \log_2 n + c
\]
for all $x=(x_u)_{u\in \partial V} \in (\mathbb V)^{\partial V}$ and all $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for all $u,v \in \partial V$.
\item If $H$ is $d$-basic, then there exists a constant $c=c(\mathbb G,H)$ such that
\vspace{0.3em}
\[
\vspace{0.3em}
\log_2\mathbb E[\tilde S_x(n)^2] \leq -2\eta_d(H)\, n +c
\]
for all $x=(x_u)_{u\in \partial V} \in (\mathbb V)^{\partial V}$ and all $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for all $u,v \in \partial V$.
\end{enumerate}
\end{lem}
\begin{proof} Note that in both cases we have that every subhypergraph of $H$ is $d$-buoyant.
We use the notation of the proof of \cref{prop:ubiquity}. As in equation \eqref{eq:Rallterms} of that proof, we have that
\begin{multline}
\log_2 \sum_{k\in\{0,\ldots,n\}^E}\sum_{(\xi,\zeta) \in \Omega^{\psi,k}}R_\phi(\xi,\zeta)\\ \lesssim -2\eta_d(H)\, n +Q(\phi,\psi)n
+\left|\left\{e \in E : \left|\left\{u\perp e : \phi(u) \perp \psi(e)\right\}\right| = d/(d-4)\right\}\right|\log_2 n,
\label{eq:Rallterms2}
\end{multline}
where $Q(\phi,\psi)$ is defined as in \eqref{eq:Qdef2}. Moreover, the same argument used in that proof shows that $Q(\phi,\psi)\leq 0$ for every $(\phi,\psi)\in\Phi\times\Psi$. In case $(1)$ of the lemma, in which $H$ has a single edge, we immediately obtain the desired bound since $\eta_d(H)=0$ and the coefficient of the $\log_2 n$ term is either $0$ or $1$.
Now suppose that $H$ is $d$-basic. Let $L(\phi,\psi)$ be the coefficient of $\log_2 n$ in \eqref{eq:Rallterms2}.
Note that $H$ cannot have an edge whose intersection with $\partial V$ has $d/(d-4)$ elements or more, since otherwise the subhypergraph $H'$ of $H$ with that single edge and with no internal vertices is proper, bordered, and has $\eta_d(H')\geq 0$. Thus, we have that if $\phi_0$ is defined by $\phi_0(v)=\star$ for every $v\in V_\circ$ then
\[L(\phi_0,\psi)\leq
\left|\left\{e \in E : |\psi(e) \cap \partial V| \geq d/(d-4)\right\}\right|
=0\]
for every $\psi \in \Psi$.
Let $\operatorname{Isom} \subseteq \Phi\times \Psi$ be the set of all $(\phi,\psi)$ such that $\phi(u)\perp \psi(e)$ for every $e\in E$ and $u\perp e$. Since $H$ is $d$-basic we have that if $(\phi,\psi)\in \operatorname{Isom}$ then
\[
L(\phi,\psi) =
\left|\left\{e \in E : \deg(e) = d/(d-4)\right\}\right| =0.
\]
We claim that $Q(\phi,\psi)\leq -(d-4)$ unless either $\phi=\phi_0$ or $(\phi,\psi)\in \operatorname{Isom}$.
Once proven this will conclude the proof, since we will then have that
\begin{equation*}
\log_2 \sum_{k\in\{0,\ldots,n\}^E}\sum_{(\xi,\zeta) \in \Omega^{\psi,k}}R_\phi(\xi,\zeta) \lesssim -2\eta_d(H)\, n + \max\{ -(d-4) n + |E|\log_2 n, 0\} \lesssim -2\eta_d(H) n
\end{equation*}
for every $(\phi,\psi)\in\Phi\times \Psi$, from which we can conclude by summing over $\Phi\times\Psi$ as done previously.
We first prove that $Q(\phi,\psi)\leq -(d-4)$ unless either $\phi=\phi_0$ or $\phi(v) \neq \star$ for every $v\in V$.
Note that since $d-4$ divides $d$, the $d$-apparent weight of every hypergraph with boundary is a multiple of $d-4$, and so we must have that $\eta_d(H')\leq -(d-4)$ for every subhypergraph $H'$ of $H$ with $\eta_d(H')<0$.
As in \eqref{eq:bad2}, we have that
$Q(\phi,\psi)\leq \eta_d(H')$,
where $H'=H'(\phi)$ is the subhypergraph of $H$ with boundary vertices given by the boundary vertices of $H$, edges given by the set of edges of $H$ that have $|\{u\perp e:\phi(u)\neq \star\}|\geq d/(d-4)$, and interior vertices given by the set of interior vertices $u$ of $H$ for which $\phi(u)\neq \star$ and $\phi(u)\perp e$ for some $e\in E'$.
We claim that
if $\phi$ is such that $\eta_d(H')=0$ then $H'$ is bordered, and consequently is either equal to $H$ or does not have any edges by our assumptions on $H$. To see this, suppose for contradiction that $H'$ is not bordered, so that there exists a vertex $v\in V_\circ \setminus V_\circ'$ that is incident to more than one edge of $H'$. Let $H''$ be the subhypergraph of $H'$ obtained from $H'$ by adding the vertex $v$. Then we have that $|E(H'')|=|E(H')|$, $|V_\circ(H'')|=|V_\circ(H')|+1$ and $\Delta(H'')\geq \Delta(H')+2$, and consequently that $\eta_d(H'')\geq \eta_d(H')+(d-4)$. Since every subhypergraph of $H$ is $d$-buoyant, we have that $\eta_d(H'')\leq 0$ and consequently that $\eta_d(H')\leq -(d-4)$, a contradiction. This establishes that
$Q(\phi,\psi)\leq -(d-4)$ unless either $\phi=\phi_0$ or $\phi(v) \neq \star$ for every $v\in V$, as claimed.
It remains to show that if $\phi(v) \neq \star$ for every $v\in V$ then $Q(\phi,\psi) \leq -(d-4)$ unless $(\phi,\psi)\in \operatorname{Isom}$.
Since every edge of $H$ has degree strictly larger than $d/(d-4)$, we have that
\begin{multline*}
\left[(d-4)|\{u \perp e : \phi(u) \perp \psi(e)\}|-d\right]
\mathbbm{1}[|\{u \perp e : \phi(u) \perp \psi(e)\}| > d/(d-4) ]\\ \leq
\left[(d-4)\deg(e)-d\right]
-(d-4)
\end{multline*}
for every $e\in E$ and every $(\phi,\psi)\in \Phi\times\Psi$ such that $|\{u\perp e : \phi(u) \perp \psi(e)\}| < \deg(e)$. It follows easily from this and the definition of $Q(\phi,\psi)$ that if $\phi$ has $\phi(v) \neq \star$ for every $v\in V$, then
\begin{align*}
Q(\phi,\psi) \leq
\eta_d(H) - (d-4) |\{ e \in E : |\{u \perp e : \phi(u) \perp \psi(e)\}| < \deg(e) \}|.
\end{align*}
Since $\eta_d(H)\leq 0$ by assumption, it follows that $Q(\phi,\psi)\leq -(d-4)$ unless $(\phi,\psi)\in \operatorname{Isom}$. This concludes the proof.
\end{proof}
\cref{lem:secondmomentspeciald} (together with \cref{prop:restrictedfirstmoment}) is already sufficient to yield case (2) of \cref{prop:ubiquityspeciald}. To handle case (1), we will require the following additional estimate.
\begin{lem}[Different scales are uncorrelated]\label{lem:secondmoment2}
Let $\mathbb G$ be a $d$-dimensional transitive graph with $d>4$.
Let $H$ be a hypergraph with boundary.
Then there exists a positive constant $c=c(\mathbb G,H,r)$ such that
\[\log_2\mathbb E[\tilde S_x(n) \tilde S_{x}(n+m)] \leq -\eta_d(H)\, (2n+m) + c\]
for all $x=(x_u)_{u\in \partial V} \in (\mathbb V)^{\partial V}$, all $m\geq 2$, and all $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for all $u,v \in \partial V$.
\end{lem}
\begin{proof}
Let $\Phi$ and $\tilde \mathscr W_\phi(\xi,\zeta)$ be defined as in the proof of \cref{lem:secondmoment}.
For every $\xi \in \Omega_x(n)$ and $\zeta \in \Omega_x(n+m)$, we have that all distances relevant to our calculations are on the order of either $2^n$ or $2^{n+m}$. That is,
\begin{align*}
\log_2 \langle \xi_e \xi_{e'} \rangle,\, \log_2 \langle \xi_e x_v \rangle \approx n
\quad \text{ and } \quad
\log_2 \langle \xi_e \zeta_{e'} \rangle,\, \log_2 \langle \zeta_e \zeta_{e'} \rangle,\, \log_2 \langle \zeta_e x_v \rangle \approx n+m
\end{align*}
for all $e,e' \in E$ and $v\in \partial V$. Thus, using \eqref{eq:Hphi}, we can estimate
\begin{multline*}\frac{1}{d-4}\log_2\P(\tilde \mathscr W_\phi(\xi,\zeta)) \lesssim -\sum_{u\in\partial V}|\{e \in E : e\perp u\}|\,(2n+m) \\-\sum_{u \in V_\circ,\, \phi(u) =\star} (|\{e\in E : e \perp u\}|-1)\, n
-\sum_{u \in V_\circ,\, \phi^{-1}(u) =\star} (|\{e\in E : e \perp u\}|-1)\, (n+m)
\\ -\sum_{u \in V_\circ,\, \phi(u) \neq \star} \left(\left|\{e\in E : e \perp u\}\right|n-n +\left|\{e\in E : e \perp \phi(u)\}\right|(n+m)\right)\,
\\ = -\Delta (2n+m) + |V_\circ|\,(2n+m) - |\{v \in V_\circ: \phi(v)\neq \star \}|\, (n+m),
\end{multline*}
which is maximized when $\phi(v)=\star$ for all $v\in V_\circ$. Now, since
\[\log_2|\Omega_x(n)\times \Omega_x(n+m)| \lesssim \log_2\left|\Lambda_x(n,n+1)^{E} \times \Lambda_x(n+m,n+m+1)^{E}\right| \lesssim d|E|(2n+m),\]
we deduce that
\begin{align*}
\log_2\mathbb E[\tilde S_x(n)\tilde S_x(n+m)]
&\leq \log_2|\Omega_x(n)\times \Omega_x(n+m)| + \log_2|\Phi| \\&\hspace{2cm}+ \max\{\log_2\P(\tilde \mathscr W_\phi(\xi,\zeta)): \xi \in \Omega_x(n),\zeta\in \Omega_x(n+m), \phi \in \Phi \}
\\ & \lesssim d|E|(2n+m) - (d-4)\Delta(2n+m) +(d-4) |V_\circ|(2n+m)\\ &= -\eta_d(H) (2n+m)
\end{align*}
as claimed.
\end{proof}
\begin{proof}[Proof of \cref{prop:ubiquityspeciald} given \cref{lem:constellations,prop:restrictedfirstmoment}]
The second case, in which $H$ has a $d$-basic coarsening with more than one edge all of whose subhypergraphs are $d$-buoyant, follows from \cref{lem:constellations,prop:restrictedfirstmoment,lem:secondmomentspeciald} exactly as in the proof of \cref{prop:ubiquity}. Now suppose that $H$ is a refinement of a hypergraph with boundary $H'$ that has $d/(d-4)$ boundary vertices and a single edge incident to every vertex. Then $\eta_d(H')=0$ and every subhypergraph of $H'$ is $d$-buoyant. Applying \cref{prop:restrictedfirstmoment,lem:secondmomentspeciald,lem:secondmoment2}, we deduce that
\[\mathbb E\left[\sum_{k=n}^{2n} \tilde S^{H'}_x(2k)\right] \succeq n, \quad \text{ and } \quad
\mathbb E\left[\left(\sum_{k=n}^{2n} \tilde S^{H'}_x(2k)\right)^2\right] \preceq n^2, \]
for every $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for every $u,v \in \partial V$, from which it follows by Cauchy-Schwarz that
\[\P\left( \sum_{k=n}^{2n}\tilde S^{H'}_x(2k) >0 \right)\succeq 1\]
for every $n$ such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for every $u,v \in \partial V$.
The proof can now be concluded as in the proof of \cref{prop:ubiquity}.
\end{proof}
\subsection{Proof of Lemmas \ref{lem:constellations} and
\ref{prop:restrictedfirstmoment}
}
\label{Sec:technical}
In this section we prove \cref{lem:constellations,prop:restrictedfirstmoment}.
We begin with some background on random walk estimates. Given a graph $G$ and a vertex $u$ of $G$, we write $\mathbf{P}_u$ for the law of the random walk on $G$ started at $u$.
Let $G$ be a graph, and let $p_n(x,y)$ be the probability that a random walk on $G$ started at $x$ is at $y$ at time $n$. Given positive constants $c$ and $c'$, we say that $G$ satisfies $(c,c')$\textbf{-Gaussian heat kernel estimates} if
\begin{align} \frac{c}{|B(x,n^{1/2})|}e^{-c d(x,y)^2/n} \leq p_n(x,y) + p_{n+1}(x,y) \leq \frac{c'}{|B(x,n^{1/2})|}e^{-d(x,y)^2/(c' n)}
\label{eq:GHKE}
\end{align}
for every $n\geq 0$ and every pair of vertices $x,y$ in $G$ with $d(x,y)\leq n$. We say that $G$ satisfies Gaussian heat kernel estimates if it satisfies $(c,c')$-Gaussian heat kernel estimates for some positive constants $c$ and $c'$.
\begin{thm}[Hebisch and Saloff-Coste \cite{HebSaCo93}]
\label{thm:HSCGreen}
Let $\mathbb G$ be a $d$-dimensional transitive graph. Then $\mathbb G$ satisfies Gaussian heat kernel estimates.
\end{thm}
Hebisch and Saloff-Coste proved their result only for Cayley graphs, but the general case can be proven by similar methods\footnote{In fact, the general case can also be deduced from the case of Cayley graphs, since if $\mathbb G$ is a $d$-dimensional transitive graph then the product of $\mathbb G$ with a sufficiently large complete graph is a Cayley graph \cite{trofimov1985graphs,godsil1989note}, and taking such a product affects the random walk in only a very trivial way.}; see e.g.\ \cite[Corollary 14.5 and Theorem 14.19]{Woess}.
Now, recall that two graphs $G=(V,E)$ and $G'=(V',E')$ are said to be $(\alpha,\beta)$\textbf{-rough isometric} if there exists a function $\phi:V \to V'$ such that the following conditions hold.
\begin{enumerate}
\item $\phi$ roughly preserves distances: The estimate \[\alpha^{-1} d(x,y) - \beta \leq d'(\phi(x),\phi(y)) \leq \alpha d(x,y) + \beta\] holds for all $x,y \in V$.
\item $\phi$ is roughly surjective: For every $x \in V'$, there exists $y \in V$ such that $d'(x,\phi(y)) \leq \beta$.
\end{enumerate}
The following stability theorem for Gaussian heat kernel estimates follows from the work of Delmotte \cite{delmotte1999parabolic}; see also \cite[Theorem 3.3.5]{KumFlour}.
\begin{thm}\label{thm:GHKEstability}
Let $G$ and $G'$ be $(\alpha,\beta)$-roughly isometric graphs for some positive $\alpha,\beta$, and suppose that the degrees of $G$ and $G'$ are bounded by $M<\infty$ and that $G$ satisfies $(c,c')$-Gaussian heat kernel estimates for some positive $c,c'$. Then there exist $\tilde c = \tilde c(\alpha,\beta,M,c,c')$ and
$\tilde c' = \tilde c'(\alpha,\beta,M,c,c')$ such that $G'$ satisfies $(\tilde c, \tilde c')$-Gaussian heat kernel estimates.
\end{thm}
Recall that a function $h:V\to\mathbb R$ defined on the vertex set of a graph is said to be \textbf{harmonic} on a set $A \subseteq V$ if
\[h(v)=\frac{1}{\deg(v)} \sum_{u \sim v} h(u)\]
for every vertex $v\in A$, where the sum is taken with appropriate multiplicities if there are multiple edges between $u$ and $v$. The graph $G$ is said to satisfy an \textbf{elliptic Harnack inequality} if for every $\alpha>1$, there exists a constant $c(\alpha) \geq 1$ such that \[c(\alpha)^{-1} \leq h(v)/h(u) \leq c(\alpha)\] for every two vertices $u$ and $v$ of $G$ and every positive function $h$ that is harmonic on the set \[\left\{w \in V : \min \{d(u,w),d(w,v)\} \leq \alpha d(u,v)\right\},\]
in which case we say that $G$ satisfies an elliptic Harnack inequality with constants $c(\alpha)$.
The following theorem also follows from the work of Delmotte \cite{delmotte1999parabolic}, and was implicit in the earlier work of e.g.\ Fabes and Stroock \cite{FabStro86}; see also \cite[Theorem 3.3.5]{KumFlour}. Note that these references all concern the \emph{parabolic} Harnack inequality, which is stronger than the elliptic Harnack inequality.
\begin{thm}\label{thm:GHKEimpliesEHI}
Let $G$ be a graph. If $G$ satisfies $(c_1,c_1')$-Gaussian heat kernel estimates, then there exists $c_2(\alpha)=c_2(\alpha,c_1)$ such that $G$ satisfies an elliptic Harnack inequality with constants $c_2(\alpha)$.
\end{thm}
We remark that the elliptic Harnack inequality has recently been shown to be stable under rough isometries in the breakthrough work of Barlow and Murugan \cite{barlow2016stability}.
\medskip
Recall that a graph is said to be \textbf{$d$-Ahlfors regular} if there exists a positive constant $c$ such that $c^{-1} r^d \leq |B(x,r)| \leq cr^d$ for every $r\geq 1$ and every $x \in V$ (in which case we say $G$ is $d$-Ahlfors regular with constant $c$). Ahlfors regularity is clearly preserved by rough isometry, in the sense that if $G$ and $G'$ are $(\alpha,\beta)$-rough isometric graphs for some positive $\alpha,\beta$, and $G$ is $d$-Ahlfors regular with constant $c$, then there exists a constant $c'=c'(\alpha,\beta,c)$ such that $G'$ is $d$-Ahlfors regular with constant $c'$.
\medskip
Observe that if the graph $G$ is $d$-Ahlfors regular for some $d>2$ and satisfies a Gaussian heat kernel estimate, then summing the estimate \eqref{eq:GHKE} yields that
\[1 \leq \sum_{n\geq0} p_n(v,v) \preceq 1 \]
for every vertex $v$, and that
\begin{equation}
\label{eq:***}
\mathbf{P}_u(\text{hit } v) = \frac{\sum_{n\geq0}{p_n(u,v)}}{\sum_{n\geq0} p_n(v,v)} \asymp \langle uv\rangle^{-(d-2)}
\end{equation}
for all vertices $u$ and $v$ of $G$.
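To sketch the summation behind \eqref{eq:***}: $d$-Ahlfors regularity gives $|B(u,n^{1/2})| \asymp n^{d/2}$, so splitting the sum at $n = \langle uv \rangle^2$ and applying \eqref{eq:GHKE} yields
\[
\sum_{n\geq 0} p_n(u,v) \preceq \sum_{1 \leq n \leq \langle uv \rangle^2} n^{-d/2} e^{-\langle uv \rangle^2/(c'n)} + \sum_{n > \langle uv \rangle^2} n^{-d/2} \preceq \langle uv \rangle^{-(d-2)},
\]
where the first sum is dominated by the terms with $n$ comparable to $\langle uv \rangle^2$ and the second is $\asymp (\langle uv \rangle^2)^{1-d/2} = \langle uv \rangle^{-(d-2)}$ since $d>2$; the matching lower bound follows in the same way from the left-hand inequality in \eqref{eq:GHKE}.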
\medskip
We now turn to the proofs of \cref{lem:constellations,prop:restrictedfirstmoment}. The key to both proofs is the following lemma.
\begin{lemma}
\label{lem:firstmomentlowerboundestimate}
Let $\mathbb G$ be a $d$-Ahlfors regular graph with constant $c_0$ for some $d>4$, let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$, and suppose that $\mathbb G$ satisfies $(c_0^{-1},c_0)$-Gaussian heat kernel estimates.
Let $K_1,\ldots,K_N$ be a collection of finite, disjoint sets of vertices, and let $K = \bigcup_{i=1}^N K_i$. Let $\{X^v : v \in K\}$ be a collection of independent simple random walks started from the vertices of $K$. If
\begin{equation}\label{eq:lowerboundhypothesis}\P\Bigl(
\{X^u_i : i \geq 0\} \cap \{ X^v_i : i\geq0\} = \emptyset \text{ for all $u\neq v \in K$}
\Bigr) \geq \varepsilon > 0,\end{equation}
then there exist constants $c=c(d,\varepsilon,|K|,c_0)$ and $C=C(d,\varepsilon,|K|,c_0)$ such that
\begin{multline}
\log_2\P\left(
\begin{array}{l}
\mathscr F(K_i \cup K_j) \text{ if and only if $i=j$, and each two points in $K_i$ are connected}\\
\text{by a path in $\mathfrak F$ of diameter at most $C \operatorname{diam}(K)$ for each $1 \leq i \leq N$}
\end{array}
\right)\\ \geq -(d-4)(|K| -N)\log_2 \operatorname{diam}(K) + c.\end{multline}
\end{lemma}
Here we are referring to the diameter of the path considered as a subset of $\mathbb G$.
Before proving \cref{lem:firstmomentlowerboundestimate}, let us see how it implies \cref{lem:constellations,prop:restrictedfirstmoment}.
\begin{proof}[Proof of \cref{lem:constellations} given \cref{lem:firstmomentlowerboundestimate}]
Let $r'$ be a large constant to be chosen later.
By definition of $R_\mathbb G$ and \cref{lem:annoyinglemma}, there exists $\varepsilon=\varepsilon(|A|)>0$ such that for each set $B \subseteq A$, there exists a set $\{\xi_{(B,b)} : b \in B\} \subset \mathbb V$ of diameter at most $R_\mathbb G(|B|)$ such that if $\{X^{(B,b)}:b\in B\}$ are independent simple random walks started at the points $\{\xi_{(B,b)} : b \in B\}$ then
\[\P\left(\{X^{(B,b)}_i:i\geq 0\} \cap \{X^{(B,b')}_i:i\geq 0\} = \emptyset \text{ for every $b\neq b' \in B$}\right) \geq (2\varepsilon)^{2^{-|A|}}.\]
Take such a set for each $B$ in such a way that the set $\{\xi_{(B,b)} : (B,b) \in \mathcal P_\bullet(A)\}$ is contained in the ball of radius $r'$ around $x$, and for each distinct $B,B' \subseteq A$, the sets $\{\xi_{(B,b)} : b\in B\}$ and
$\{\xi_{(B',b)} : b\in B'\}$ have distance at least $r'/2$ from each other. Clearly this is possible for sufficiently large $r'$. We have by independence that
\[\P\left( \bigcap_{B \subseteq A} \left\{ \{X^{(B,b)}_i:i\geq 0\} \cap \{X^{(B,b')}_i:i\geq 0\} = \emptyset \text{ for every $b\neq b' \in B$}\right\}\right) \geq 2\varepsilon.\]
On the other hand, it follows easily from the Green function estimate \eqref{eq:HSC} that if $r'$ is sufficiently large (depending on $|A|$ and $\varepsilon$) then
\[
\P\biggl( \begin{array}{l} \{X^{(B,b)}_i:i\geq 0\} \cap \{X^{(B',b')}_i:i\geq 0\} \neq \emptyset \text{ for}\\ \text{some $B,B' \subseteq A$, $b \in B$ and $b'\in B'$ with $B\neq B'$} \end{array}\biggr) \leq \varepsilon,
\]
and we deduce that
\[
\P\left( \{X^{(B,b)}_i:i\geq 0\} \cap \{X^{(B',b')}_i:i\geq 0\} = \emptyset \text{ for every distinct $(B,b),(B',b')\in \mathcal P_\bullet(A)$}\right) \geq \varepsilon
\]
for such $r'$. Applying \cref{lem:firstmomentlowerboundestimate}, we deduce that
$\P( \mathscr A_{Cr'}(\xi) ) \geq c$
for some $C=C(\mathbb G,|A|,\varepsilon,r')$ and $c=c(\mathbb G,|A|,\varepsilon)$. It follows that
$(\xi_{(B,b)})_{(B,b) \in \mathcal P_\bullet(A)}$ is an $r$-good $A$-constellation for some $r=r(|A|)$ sufficiently large. \qedhere
\end{proof}
\begin{proof}[Proof of \cref{prop:restrictedfirstmoment} given \cref{lem:firstmomentlowerboundestimate}]
Let $\mathbb G$ be a $d$-dimensional transitive graph for some $d>4$. Let $x=(x_v)_{v\in \partial V}$ be such that $\langle x_u x_v \rangle \leq 2^{n-1}$ for every $u,v \in \partial V$, let $\xi=(\xi_e)_{e\in E} \in \Omega_x(n)$, and let $r=r(H)$ and $(\xi_{(e,A,v)})_{e \in E, (A,v) \in \mathcal P_\bullet(e)}$ be as in \cref{sec:2ndmoment}.
For each edge $e$ of $H$, write $\mathscr A_e(\xi)$ for the event $\mathscr A_r((\xi_{(e,A,v)})_{(A,v) \in \mathcal P_\bullet(e)})$, which has probability at least $1/r$ by definition of the $r$-good constellation $(\xi_{(e,A,v)})_{(A,v) \in \mathcal P_\bullet(e)}$.
Since the number of subtrees of a ball of radius $r$ in $\mathbb G$ is bounded by a constant, it follows that there exists a constant $\varepsilon=\varepsilon(\mathbb G,H)$ and
a collection of disjoint subtrees $(T_{(e,v)}(\xi))_{(e,v) \in E_\bullet}$ of $\mathbb G$ such that the tree $T_{(e,v)}(\xi)$ has diameter at most $r$ and contains each of the vertices $\xi_{(e,A,v)}$ with $(A,v)\in \mathcal P_\bullet(e)$ for every $(e,v)\in E_\bullet$, and the estimate
\[\P\left(\mathscr A_e(\xi) \cap \bigcap_{v\perp e} \{T_{(e,v)}(\xi) \subset \mathfrak F\}\right) \geq (2\varepsilon)^{1/|E|} \]
holds for every $e\in E$.
Fix one such collection $(T_{(e,v)}(\xi))_{(e,v) \in E_\bullet}$ for every $\xi \in \Omega_x(n)$, and for each $e\in E$ let $\mathscr B_e(\xi)$ be the event that $T_{(e,v)}(\xi)$ is contained in $\mathfrak F$ for every $v\perp e$. Let $\mathscr B(\xi) = \bigcap_{e\in E} \mathscr B_e(\xi)$. Generating $\mathfrak F$ using Wilson's algorithm, starting with random walks $\{X^{(e,A,v)} : e \in E,$ $(A,v) \in \mathcal P_\bullet(e)\}$ such that $X^{(e,A,v)}_0=\xi_{(e,A,v)}$ for every $e \in E$ and $(A,v) \in \mathcal P_\bullet(e)$, we observe that
\begin{equation}
\label{eq:tailtrivB2}
\Big|\P\left( \mathscr B(\xi) \right) - \prod_{e\in E} \P\left( \mathscr B_e(\xi) \right)\Big| \leq \P\left( \begin{array}{l} X^{(e,A,v)} \text{ and } X^{(e',A',v')} \text{ intersect for some distinct} \\ \text{$e,e' \in E$ and some $(A,v) \in \mathcal P_\bullet(e)$, $(A',v') \in \mathcal P_\bullet(e')$} \end{array}\right)
\end{equation}
and hence that
\begin{equation}
\label{eq:tailtrivB}
\P\left( \mathscr B(\xi) \right) \geq \frac{1}{2} \prod_{e\in E} \P\left( \mathscr B_e(\xi) \right) \geq \varepsilon
\end{equation}
for all $n$ sufficiently large and $\xi \in \Omega_x(n)$.
\medskip
Let $\mathbb G_\xi$ be the graph obtained by contracting the tree $T_{(e,v)}(\xi)$ down to a single vertex for each $(e,v) \in E_\bullet$. The spatial Markov property of the USF (see e.g.\ \cite[Section 2.2.1]{HutNach2016b}) implies that the law of $\mathfrak F$ given the event $\mathscr B(\xi)$ is equal to the law of the union of $\bigcup_{(e,v) \in E_\bullet} T_{(e,v)}(\xi)$ with the uniform spanning forest of $\mathbb G_\xi$. Observe that $\mathbb G_\xi$ and $\mathbb G$ are rough isometric, with constants depending only on $\mathbb G$ and $H$, and that $\mathbb G_\xi$ has degrees bounded by a constant depending only on $\mathbb G$ and $H$. Thus, it follows from \cref{thm:HSCGreen,thm:GHKEstability,thm:GHKEimpliesEHI} that $\mathbb G_\xi$ is $d$-Ahlfors regular, satisfies Gaussian heat kernel estimates, and satisfies an elliptic Harnack inequality, each with constants depending only on $H$ and $\mathbb G$.
\medskip
Let $E_\star = E \cup \{\star\}$, and let $K = E_\bullet \cup \{(\star,v) : v \in \partial V\}$.
For each $(e,v)\in E_\bullet$, let $x_{(e,v)}$ be the vertex of $\mathbb G_\xi$ that was formed by contracting $T_{(e,v)}(\xi)$, and let $x_{(\star,v)} = x_v$ for each $v\in \partial V$.
For each vertex $v$ of $H$, choose an edge $e_0(v)\perp v$ arbitrarily from $E_\star$, and let $K' = K \setminus \{(e_0(v),v) : v \in V\}$. Let $\mathfrak F_\xi$ be the uniform spanning forest of $\mathbb G_\xi$, and let $\tilde \mathscr W'(x,\xi)$ be the event that for each $(e,v),(e',v') \in K$ the vertices $x_{(e,v)}$ and $x_{(e',v')}$ are in the same component of $\mathfrak F_\xi$ if and only if $v=v'$.
\[\P\left(\tilde \mathscr W(x,\xi)\right) \geq \varepsilon \P\left(\tilde \mathscr W'(x,\xi)\right) \succeq \P\left(\tilde \mathscr W'(x,\xi)\right)\]
whenever $n$ is sufficiently large and $\xi \in \Omega_x(n)$. Thus, applying \cref{lem:firstmomentlowerboundestimate} to $\mathbb G_\xi$ by setting $N=|V|$, enumerating $V=\{v_1,\ldots,v_N\}$, and setting $K_i = \{ x_{(e,v_i)} : (e,v_i) \in K \}$ for each $1 \leq i \leq N$ yields that
\[ \log_2 \P\left(\tilde \mathscr W(x,\xi)\right) \gtrsim \log_2 \P\left(\tilde \mathscr W'(x,\xi)\right) \gtrsim -(d-4)\left(\Delta -|V_\circ|\right) \, n, \]
completing the proof. \qedhere
\end{proof}
\medskip
We now start working towards the proof of \cref{lem:firstmomentlowerboundestimate}.
We begin with the following simple estimate.
\begin{lemma}\label{lem:hitwnotB}
Let $G$ be $d$-Ahlfors regular with constant $c_1$, and suppose that $G$ satisfies $(c_2,c_2')$-Gaussian heat kernel estimates. Then
there exists a positive constant $C=C(d,c_1,c_2,c_2')$ such that
\vspace{0.2em}
\[
\vspace{0.2em}
C^{-1} \langle u w \rangle^{-(d-2)} \leq \mathbf{P}_u\left(\text{hit } w \text{ before }\Lambda_x(n+3c,\infty) \text{, do not hit } \Lambda_x(0,n)\right) \leq C \langle u w \rangle^{-(d-2)}\]
for every $c \geq C$, every vertex $x$, every $n\geq 1$, and every $u,w \in \Lambda_x(n+c,n+2c)$.
\end{lemma}
\begin{proof}
The upper bound follows immediately from \eqref{eq:***}. We now prove the lower bound.
For every $c \geq 1$ and every $u,w \in \Lambda_x(n+c,\infty)$, we have that
\[
\mathbf{P}_u(\text{hit }\Lambda_x(0,n)) = \frac{\mathbf{P}_u(\text{hit } x)}{\mathbf{P}_u(\text{hit } x \mid \text{ hit }\Lambda_x(0,n))} \asymp \frac{\langle u x \rangle^{-(d-2)}}{2^{-(d-2)n}} \preceq 2^{-(d-2)c}.
\]
Thus, we have that
\begin{align*}
\mathbf{P}_u(\text{hit } w \text{ and } \Lambda_x(0,n)) &\leq \mathbf{P}_u(\text{hit $\Lambda_x(0,n)$ after hitting $w$}) +
\mathbf{P}_u(\text{hit $w$ after hitting $\Lambda_x(0,n)$})\\
&\preceq \langle u w \rangle^{-(d-2)}2^{(d-2)n}\langle wx\rangle^{-(d-2)} + 2^{(d-2)n}\langle u x \rangle^{-(d-2)} \langle w x \rangle^{-(d-2)},
\end{align*}
where the second term is bounded by conditioning on the location at which the walk hits $\Lambda_x(0,n)$ and then using the strong Markov property.
By the triangle inequality, we must have that at least one of $\langle u x \rangle$ or $\langle w x \rangle$ is greater than $\frac{1}{2}\langle u w \rangle $. This yields the bound
\begin{align*}
\mathbf{P}_u(\text{hit } w \text{ and } \Lambda_x(0,n))
&\preceq \left(2^{(d-2)n}\langle wx\rangle^{-(d-2)} + 2^{(d-2)n}\left(\min \left\{\langle u x \rangle,\, \langle w x \rangle\right\}\right)^{-(d-2)}\right) \langle uw \rangle^{-(d-2)}\\
&\preceq 2^{-(d-2)c}\langle u w \rangle^{-(d-2)}.
\end{align*}
On the other hand, if $u,w \in \Lambda_x(n+c,n+2c)$ then conditioning on the location at which the walk hits $\Lambda_x(n+3c,\infty)$ yields that
\[ \mathbf{P}_u(\text{hit } w \text{ after } \Lambda_x(n+3c,\infty)) \preceq 2^{-(d-2)(n+3c)} \preceq 2^{-(d-2)c}\langle uw \rangle^{-(d-2)},\]
since $\langle uw \rangle \leq 2^{n+2c+1}$.
The claim now follows easily.
\end{proof}
\begin{proof}[Proof of \cref{lem:firstmomentlowerboundestimate}]
For each $1 \leq i \leq N$, let $x_i$ be chosen arbitrarily from the set $K_i$.
Let $(X^{x})_{x \in K}$ be a collection of independent random walks on $\mathbb G$, where $X^{x}$ is started at $x$ for each $x\in K$, and write $X^i=X^{x_i}$.
Let $K'_i=K_i \setminus \{x_i\}$ for each $1 \leq i \leq N$ and let $K'=\bigcup_{i=1}^N K'_i$. In this proof, implicit constants will be functions of $|K|, N, c_0,$ and $d$. We take $n$ such that $2^{n-1} \leq \textrm{diam}(K) \leq 2^{n}$.
\medskip
Let $c_1,c_2,c_3$ be constants to be determined.
For each $y=(y_{x})_{x\in K} \in (\Lambda(n+c_1,n+c_3))^{K}$, let $\mathscr Y_y$ be the event
\[\mathscr Y_y = \{ X^{x}_{2^{2(n+c_2)}} = y_{x} \text{ for each $x\in K$}\}.\]
Let $\mathscr C(c_2)$ be the event that none of the walks $X^{x}$ intersect each other before time $2^{2(n+c_2)}$, so that $\P(\mathscr C(c_2)) \geq \varepsilon$ for every $c_2 \geq 0$ by assumption.
For each $x\in K$, let $\mathscr D_{x}(c_1,c_3)$ be the event that $X^{x}_{2^{2(n+c_2)}}$ is in $\Lambda(n+c_1,n+c_3)$ and that $X^{x}_m \in \Lambda(n,\infty)$ for all $m \geq 2^{2(n+c_2)}$, and let $\mathscr D(c_1,c_3) = \bigcap_{x\in K} \mathscr D_{x}(c_1,c_3)$.
It follows by an easy application of the Gaussian heat kernel estimates that we can choose $c_2=c_2(\mathbb G,N,\varepsilon)$ and $c_3=c_3(\mathbb G,N,\varepsilon)$ sufficiently large that
\begin{equation}
\label{eq:DgivenY}
\P(\mathscr D(c_1,c_3) \mid \mathscr Y_y) \geq 1- \varepsilon/2
\end{equation}
for every $y=(y_{x})_{x\in K} \in (\Lambda(n+c_1,n+c_3))^{K}$, and in particular
so that $\P(\mathscr C(c_2) \cap \mathscr D(c_1,c_3)) \geq \varepsilon$. We fix some such sufficiently large $c_1,c_2,$ and $c_3$, and also assume that $c_1$ is larger than the constant from \cref{lem:hitwnotB}. We write $\mathscr C=\mathscr C(c_2)$, $\mathscr D_{x}=\mathscr D_{x}(c_1,c_3)$, and $\mathscr D=\mathscr D(c_1,c_3)$.
\medskip
For each $1 \leq i \leq N$ and $x\in K'_i$, we define $\mathscr I_{x}$ to be the event that the walk $X^{x}$ hits the set
\begin{multline*}
L^i_{\text{good}}=\\
\left\{ \mathsf{LE}(X^i)_m : \mathsf{LE}(X^i)_m \in \Lambda(n+2c_3, n+ 4c_3),\, \mathsf{LE}(X^i)_{m'} \in \Lambda(0, n+ 6c_3) \text{ for all $ 0 \leq m' \leq m$} \right\}
\end{multline*}
before hitting $\Lambda(n + 6c_3, \infty)$, and let $\mathscr I = \bigcap_{x\in K'} \mathscr I_{x}$.
\medskip
For each $x$ and $x'$ in $K$, we define $\mathscr E_{x,x'}$ to be the event that the walks $X^{x}$ and $X^{x'}$ intersect, and let
\[\mathscr E = \bigcup\left\{ \mathscr E_{x,x'} : 1 \leq i < j \leq N,\, x \in K_i,\, x'\in K_j \right\} \cup \bigcup \left\{ \mathscr E_{x,x'} : x,x' \in K' \right\}.\]
These events have been defined so that, if we sample $\mathfrak F$ using Wilson's algorithm, beginning with the walks $\{ X^{i} : 1 \leq i \leq N\}$ (in any order) and then the walks $\{ X^{x} : x\in K'\}$ (in any order), we have that
\begin{multline*}
\left\{
\begin{array}{l}
\mathscr F(K_i \cup K_j) \text{ if and only if $i=j$, and each two points in $K_i$ are connected}\\
\text{by a path in $\mathfrak F$ of diameter at most $2^{6c_3} \operatorname{diam}(K)$ for each $1 \leq i \leq N$}
\end{array}\right\}\\ \supseteq (\mathscr C \cap \mathscr D \cap \mathscr I) \setminus \mathscr E.
\end{multline*}
Thus, it suffices to prove that
\[\log_2 \P\left(\left(\mathscr C \cap \mathscr D \cap \mathscr I\right) \setminus \mathscr E\right) \gtrsim -(d-4)\left(|K| -N\right) \, n = -(d-4)|K'|\,n.\]
We break this estimate up into the following two lemmas: one lower bounding the probability of the good event $\mathscr C \cap \mathscr D \cap \mathscr I$, and the other upper bounding the probability of the bad event $\mathscr C \cap \mathscr D \cap \mathscr I \cap \mathscr E$.
\begin{lemma}
\label{lem:Iev}
The estimate
\[\log_2 \P(\mathscr I_{x} \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y) \gtrsim -(d-4)n\] holds for every $x \in K'$ and $y=(y_{x})_{x\in K} \in (\Lambda(n+c_1,n+c_3))^{K}$.
\end{lemma}
The proof uses techniques from \cite{lyons2003markov} and the proof of \cite[Theorem 4.2]{BeKePeSc04}.
\begin{proof}[Proof of \cref{lem:Iev}]
Fix $x \in K'$, and let $1\leq i \leq N$ be such that $x\in K'_i$. Write $Y=X^i$ and $Z=X^{x}$.
Let $L = ( L(k) )_{k\geq 0}$ be the loop-erasure of $(Y_k)_{k\geq 0}$ and, for each $m\geq 0$, let $L_m= (L_m(k))_{k= 0}^{q_m}$ be the loop-erasure of $( Y_k )_{k=0}^m$. Define
\[\tau(m) = \inf\{ 0 \leq r \leq q_m : L_m(r) = Y_k \text{ for some $k \geq m$}\} \]
and
\[
\vspace{0.4em}
\tau(m,\ell) = \inf\{ 0 \leq r \leq q_m : L_m(r) = Z_k \text{ for some $k \geq \ell$}\}.\]
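To illustrate these definitions, suppose that the walk begins $Y = (a,b,c,b,d,\ldots)$ and never returns to $\{a,b,c\}$ after time $4$. Then $L_2 = (a,b,c)$ and $q_2 = 2$, while the loop at $b$ is erased in $L_m$ for every $m \geq 3$, so that $L$ begins $(a,b,d,\ldots)$. Here $\tau(2) = 1$, since $L_2(1) = b = Y_3$ while $a \notin \{Y_k : k \geq 2\}$, and indeed $L_2$ and $L$ agree precisely up to index $1$.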
The definition of $\tau(m)$ ensures that $L_m(k)=L(k)$ for all $k\leq \tau(m)$. We define the indicator random variables
\begin{multline*}
I_{m,\ell} =\\ \mathbbm{1}\left(Y_m = Z_\ell \in \Lambda(n+2c_3,n+4c_3), \text{ and } Y_{m'}, Z_{\ell'} \in \Lambda(0,n+6c_3) \text{ for all $m' \leq m$, $\ell'\leq \ell$}\right)
\end{multline*}
and
\begin{align*}
J_{m,\ell} &= I_{m,\ell} \, \mathbbm{1} \!\big(\tau(m,\ell) \leq \tau(m)\big).
\end{align*}
Observe that
\[\mathscr I_x \supseteq \left\{ J_{m,\ell} =1 \text{ for some } m, \ell \geq 2^{2(n+c_2)} \right\}. \]
Moreover, for every $m,\ell \geq 2^{2(n+c_2)}$ and every $y \in (\Lambda(n+c_1,n+c_3))^{K}$, the walks $(Y_k)_{k\geq m}$ and $(Z_k)_{k\geq \ell}$ have the same distribution conditional on the event \[\mathscr C \cap \mathscr D \cap \mathscr Y_y \cap \{I_{m,\ell}=1\}.\]
Thus, we deduce that
\[\P\left(\tau(m) \geq \tau(m,\ell) \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y \cap \{I_{m,\ell}=1\}\right) \geq 1/2 \]
whenever the event being conditioned on has positive probability,
and therefore that
\begin{equation*}\mathbb E[ I_{m,\ell} \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y] \; \geq \; \mathbb E[J_{m,\ell}\mid \mathscr C \cap \mathscr D \cap \mathscr Y_y] \; \geq \; \frac{1}{2}\mathbb E[ I_{m,\ell} \mid \mathscr C \cap \mathscr D\cap \mathscr Y_y].\end{equation*}
Let
\[I = \sum_{\ell \geq 2^{2(n+c_2)}}\sum_{m \geq 2^{2(n+c_2)}} I_{m,\ell}
\quad \text{ and } \quad
J = \sum_{\ell \geq 2^{2(n+c_2)}}\sum_{m \geq 2^{2(n+c_2)}} J_{m,\ell}, \]
and note that the conditional distribution of $I$ given the event $\mathscr C\cap\mathscr D\cap\mathscr Y_y$ is the same as the conditional distribution of $I $ given the event $\mathscr D \cap \mathscr Y_y $.
For every $y \in (\Lambda(n+c_1,n+c_3))^{K}$, we have that, decomposing $\mathbb E[I \mid \mathscr D \cap \mathscr Y_y]$ according to the location of the intersections and applying the estimate \cref{lem:hitwnotB},
\begin{multline*}
\mathbb E[J \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y] \asymp \mathbb E[I \mid \mathscr D \cap \mathscr Y_y] \succeq\\
\sum_{w \in \Lambda(n+2c_3,n+4c_3)} \mathbf{P}_{y_{x_i}}(\,\text{hit } w \text{ before $\Lambda(n+6c_3,\infty)$} \mid \text{do not hit } \Lambda(0,n))\\\hspace{5cm} \cdot \mathbf{P}_{y_{x}}(\,\text{hit } w \text{ before $\Lambda(n+6c_3,\infty)$} \mid \text{do not hit } \Lambda(0,n))\\
\succeq 2^{-2(d-2)n} | \Lambda(n+2c_3,n+4c_3)| \asymp 2^{-(d-4)n}.
\end{multline*}
On the other hand, we have that
\begin{align*}
\mathbb E[J^2 \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y]
\, \leq \, \mathbb E[I^2 \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y] \, = \,
\mathbb E[I^2 \mid \mathscr D \cap \mathscr Y_y] \preceq \mathbb E[I^2 \mid \mathscr Y_y].
\end{align*}
Meanwhile, decomposing $\mathbb E[I^2 \mid \mathscr Y_y]$ according to the location of the intersections and applying the Gaussian heat kernel estimates yields that
\begin{multline*}
\mathbb E[I^2 \mid \mathscr Y_y]
\preceq \sum_{w,z \in \Lambda(n+2c_3,n+4c_3)} \langle y_{x_i} w \rangle^{-(d-2)}\langle w z \rangle^{-(d-2)}\langle y_{x} w \rangle^{-(d-2)} \langle wz \rangle^{-(d-2)}\\
+
\sum_{w,z \in \Lambda(n+2c_3,n+4c_3)} \langle y_{x_i} w \rangle^{-(d-2)}\langle w z \rangle^{-(d-2)}\langle y_{x} z \rangle^{-(d-2)} \langle z w \rangle^{-(d-2)},
\end{multline*}
where the two different terms come from whether $Y$ and $Z$ hit the points of intersection in the same order or not. With the possible exception of $\langle wz \rangle$, all the distances involved in this expression are comparable to $2^n$. Thus, we obtain that
\[\mathbb E[I^2 \mid \mathscr Y_y] \preceq 2^{-2(d-2)n} \sum_{w,z \in \Lambda(n+2c_3,n+4c_3)} \langle w z \rangle^{-2(d-2)}.\]
For each $w \in \mathbb V$, considering the contributions of dyadic shells centred at $w$ yields that, since $d>4$,
\begin{align*}
\sum_{z\in \mathbb V} \langle w z\rangle^{-2(d-2)} \preceq \sum_{k\geq 0}2^{dk}2^{-2(d-2)k} = \sum_{k\geq 0} 2^{-(d-4)k} \preceq 1,
\end{align*}
and we deduce that
\[\mathbb E[I^2 \mid \mathscr Y_y] \preceq 2^{-2(d-2)n} |\Lambda(n+2c_3,n+4c_3)| \preceq 2^{-(d-4)n}.\]
Thus, the Cauchy-Schwarz inequality implies that
\begin{align*}\P(\mathscr I_{x}\mid \mathscr C \cap \mathscr D \cap \mathscr Y_y) \geq \P(J > 0 \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y) \succeq \myfrac[0.2em]{\mathbb E\left[J \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y\right]^2}{\mathbb E\left[J^2 \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y\right]} \succeq 2^{-(d-4)n}
\end{align*}
as claimed.
\end{proof}
\medskip
We next use the elliptic Harnack inequality to pass from an estimate on $\mathscr I_x$ to an estimate on $\mathscr I$.
\begin{lemma}\label{lem:farintersections}
$\log_2 \P(\mathscr C\cap \mathscr D \cap \mathscr I) \gtrsim -(d-4)|K'|\, n$.
\end{lemma}
\begin{proof}
For each $1\leq i \leq N$, let $x'_i$ be chosen arbitrarily from $K'_i$.
To deduce \cref{lem:farintersections} from \cref{lem:Iev}, it suffices to prove that
\[\P\Bigg(\bigcap_{x \in K'} \mathscr I_{x} \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y \Bigg) \succeq \prod_{i=1}^N \P\left( \mathscr I_{x_i'} \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y\right)^{|K_i'|}\]
for every $y=(y_{x})_{x\in K} \in (\Lambda(n+c_1,n+c_3))^{K}$.
\medskip
Let $\mathcal X$ be the $\sigma$-algebra generated by the random walks $(X^{i})_{i=1}^N$.
Observe that for each $x\in K'$ we have
\begin{align*}\P(\mathscr I_{x} \mid \mathcal X,\, \mathscr C\cap\mathscr D\cap\mathscr Y_y) &= \frac{\mathbf{P}_{y_{x}}
\left( \text{hit $L^i_{\text{good}}$ before $\Lambda(n+6c_3,\infty)$, never leave $\Lambda(n,\infty)$} \right)}{\mathbf{P}_{y_{x}}
\left( \text{never leave $\Lambda(n,\infty)$} \right)}\\
& \asymp \mathbf{P}_{y_{x}}
\left( \text{hit $L^i_{\text{good}}$ before $\Lambda(n+6c_3,\infty)$, never leave $\Lambda(n,\infty)$} \right).
\end{align*}
The right hand side of the second line is a positive harmonic function of $y_{x}$ on $\Lambda(n+c_1,n+c_3+1)$, and so the elliptic Harnack inequality implies that for every $y,y' \in (\Lambda(n+c_1,n+c_3))^{K}$ and every $x\in K'$, we have that
\begin{equation*}\P\left(\mathscr I_{x} \mid \mathcal X,\, \mathscr C \cap \mathscr D \cap \mathscr Y_y\right) \asymp \P(\mathscr I_{x} \mid \mathcal X,\, \mathscr C \cap \mathscr D \cap \mathscr Y_{y'}). \end{equation*}
Furthermore, if $y'$ is obtained from $y$ by swapping $y_{x}$ and $y_{x'}$ for some $1\leq i \leq N$ and $x,x' \in K'_i$, then clearly
\begin{equation*}\P(\mathscr I_{x} \mid \mathcal X,\, \mathscr C\cap\mathscr D\cap\mathscr Y_y) = \P(\mathscr I_{x'} \mid \mathcal X,\, \mathscr C\cap\mathscr D\cap\mathscr Y_{y'}). \end{equation*}
Therefore, it follows that
\begin{equation*}\P(\mathscr I_{x} \mid \mathcal X,\, \mathscr C\cap\mathscr D\cap\mathscr Y_y) \asymp \P(\mathscr I_{x'} \mid \mathcal X,\, \mathscr C\cap\mathscr D\cap\mathscr Y_{y}) \end{equation*}
for all $1\leq i \leq N$ and $x,x' \in K'_i$.
Since the events $\mathscr I_{x}$ are conditionally independent given the $\sigma$-algebra $\mathcal X$ and the event $\mathscr C \cap \mathscr D \cap \mathscr Y_y$, we deduce that
\begin{align*}
\P(\mathscr I \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y) &= \mathbb E\left[ \P(\mathscr I \mid \mathcal X,\, \mathscr C \cap \mathscr D \cap \mathscr Y_y) \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y \right]\\
& = \mathbb E\left[ \prod_{x\in K'}\P(\mathscr I_{x} \mid \mathcal X,\, \mathscr C \cap \mathscr D \cap \mathscr Y_y) \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y\right]\\
& \asymp \mathbb E\left[ \prod_{i=1}^N\P(\mathscr I_{x'_i} \mid \mathcal X,\, \mathscr C \cap \mathscr D \cap \mathscr Y_y)^{|K'_i|} \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y\right].
\end{align*}
Now, the random variables $\P(\mathscr I_{x'_i} \mid \mathcal X,\, \mathscr C \cap \mathscr D \cap \mathscr Y_y)^{|K_i'|}$ are independent conditional on the event $\mathscr C \cap \mathscr D \cap \mathscr Y_y$, and so we have that
\begin{align*}
\P(\mathscr I \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y)
& \asymp \prod_{i=1}^N \mathbb E\left[ \P(\mathscr I_{x'_i} \mid \mathcal X,\, \mathscr C \cap \mathscr D \cap \mathscr Y_y)^{|K'_i|} \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y\right]\\
&\geq \prod_{i=1}^N \P(\mathscr I_{x'_i} \mid \mathscr C \cap \mathscr D \cap \mathscr Y_y)^{|K'_i|},
\end{align*}
as claimed, where the second line follows from Jensen's inequality.
\end{proof}
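\noindent (For the reader's convenience: the final step above uses the conditional Jensen inequality, applied to the convex function $t\mapsto t^{k}$ with $k=|K'_i|\geq 1$, which gives $\mathbb E[Z^{k}\mid \mathscr C\cap\mathscr D\cap\mathscr Y_y]\geq \mathbb E[Z\mid \mathscr C\cap\mathscr D\cap\mathscr Y_y]^{k}$ for the nonnegative random variable $Z=\P(\mathscr I_{x'_i} \mid \mathcal X,\, \mathscr C \cap \mathscr D \cap \mathscr Y_y)$.)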
\medskip
Finally, it remains to show that the probability of getting unwanted intersections in addition to those that we do want is of lower order than the probability of just getting the intersections that we want.
\begin{lemma} \label{lem:ABIC}
We have that
\[\log_2 \P(\mathscr C \cap \mathscr D \cap \mathscr I \cap \mathscr E) \lesssim - \bigl[(d-4)|K'|+2\bigr] \, n + {|K'|^2}\log_2 n.\]
\end{lemma}
\begin{proof}
For each $w \in \mathbb V$ and $x,x' \in K$, let $\mathscr E_{x,x'}(w)$ be the event that $X^{x}$ and $X^{x'}$ both hit $w$.
Let $\zeta=(\zeta_{x})_{x\in K'}$ and let $\sigma = (\sigma_i)_{i=1}^N$ be such that $\sigma_i$ is a bijection from $\{1,\ldots,|K'_i|\}$ to $K'_i$ for each $1 \leq i \leq N$.
We define $\mathscr R_\sigma(\zeta)$ to be the event that for each $1 \leq i \leq N$ the walk $X^{i}$ passes through the points $\{ \zeta_{x} : x \in K'_i\}$ in the order given by $\sigma$ and that for each $x\in K'$ the walk $X^{x}$ hits the point $\zeta_{x}$. We also define
\begin{equation*}
R_\sigma(\zeta) =
\prod_{i=1}^N\left\langle x_i \zeta_{\sigma_i(1)} \right\rangle^{-(d-2)}\prod_{j=2}^{|K'_i|} \left\langle \zeta_{\sigma_i(j-1)} \zeta_{\sigma_i(j)} \right\rangle ^{-(d-2)} \prod_{j=1}^{|K'_i|}\left\langle \sigma_i(j) \zeta_{\sigma_i(j)} \right\rangle^{-(d-2)},
\end{equation*}
so that
$\P(\mathscr R_\sigma(\zeta)) \asymp R_\sigma(\zeta)$ for every $\zeta \in \mathbb V^{K'}$.
Let $\Lambda_\zeta = \Lambda(n+c_1,n+c_1+c_2)^{K'}$, $\Lambda_{w,1}= \Lambda(n,n+c_2+1),$ $\Lambda_{w,2} = \Lambda(n+c_2+1,\infty)$, and $\Lambda_w=\Lambda_{w,1}\cup\Lambda_{w,2}$. (Note that these sets are not functions of $\zeta$ or $w$, but rather are the sets from which $\zeta$ and $w$ will be drawn.)
We also define
\[O=K^2 \setminus \left[ \{(x,x) : x \in K\} \cup \bigcup_{i=1}^N\left[\{(x_i,x) : x \in K_i \} \cup \{(x,x_i) : x \in K_i \} \right] \right]. \]
to be the set of pairs of points at least one of which must have its associated pair of random walks intersect in order for the event $\mathscr E$ to occur.
Define the random variables $M_{\sigma,0}$, $M_{\sigma,1}$, and $M_{\sigma,2}$ to be
\begin{align*}
M_{\sigma,0} &= \sum_{\zeta \in \Lambda_\zeta} \mathbbm{1}\big[\mathscr R_\sigma(\zeta)\big],\\
M_{\sigma,1} &= \sum_{(x,x') \in O}\; \sum_{w \in \Lambda_{w,1}} \sum_{\zeta \in \Lambda_\zeta} \mathbbm{1}\big[\mathscr R_\sigma(\zeta) \cap \mathscr E_{x,x'}(w)\big], \qquad \text{ and}\\
M_{\sigma,2} &= \sum_{(x,x')\in O} \;\sum_{w \in \Lambda_{w,2}} \sum_{\zeta \in \Lambda_\zeta} \mathbbm{1}\big[\mathscr R_\sigma(\zeta) \cap \mathscr E_{x,x'}(w)\big].
\end{align*}
Observe that $\sum_{\sigma}(M_{\sigma,1} + M_{\sigma,2}) \geq 1$ on the event
$\mathscr C \cap \mathscr D\cap\mathscr I\cap\mathscr E$, and so
to prove \cref{lem:ABIC} it suffices to prove that
\begin{equation}
\label{eq:M1M2estimate}
\log_2\mathbb E\left[M_{\sigma,1}+M_{\sigma,2}\right]
\lesssim
-\bigl[(d-4)|K'|+2\bigr]\, n + |K'|^2\log_2 n
\end{equation}
for every $\sigma$. We will require the following estimate.
\begin{lem}
\label{lem:REcases}
The estimate
\begin{equation}\P\left(\mathscr R_\sigma(\zeta) \cap \mathscr E_{x,x'}(w) \right) \preceq R_\sigma(\zeta) \langle w \zeta_{x} \rangle^{-(d-2)}\langle w \zeta_{x'} \rangle^{-(d-2)} \end{equation}
holds for every $(x,x') \in O$, every $\zeta \in \Lambda_\zeta$, every $w \in \Lambda_w$, and every collection $\sigma=(\sigma_i)_{i=1}^N$ where $\sigma_i : \{1,\ldots,|K'_i|\} \to K'_i$ is a bijection for each $1\leq i \leq N$.
\end{lem}
\begin{proof}
Unfortunately, this proof requires a straightforward but tedious case analysis. We will give details for the simplest case, in which both $x,x'\in K'$.
A similar proof applies in the cases that one or both of $x$ and $x'$ is not in $K'$, but there is a larger number of subcases to consider, according to when the intersection takes place. In the case that $x,x' \in K'$, let $\mathscr E^{-,-}(\zeta,w)$, $\mathscr E^{-,+}(\zeta,w)$, $\mathscr E^{+,-}(\zeta,w)$ and $\mathscr E^{+,+}(\zeta,w)$ be the events defined as follows:
\begin{itemize}[leftmargin=2.5cm]
\itemsep1em
\item[$\mathscr E^{-,-}(\zeta,w)$:] The event $\mathscr R_\sigma(\zeta)$ occurs, and $X^{x}$ and $X^{x'}$ both hit $w$ before they hit $\zeta_{x}$ and $\zeta_{x'}$ respectively.
\item[$\mathscr E^{-,+}(\zeta,w)$:] The event $\mathscr R_\sigma(\zeta)$ occurs, $X^{x}$ hits $w$ before hitting $\zeta_{x}$, and $X^{x'}$ hits $w$ after hitting $\zeta_{x'}$.
\item[$\mathscr E^{+,-}(\zeta,w)$:] The event $\mathscr R_\sigma(\zeta)$ occurs, $X^{x}$ hits $w$ after hitting $\zeta_{x}$, and $X^{x'}$ hits $w$ before hitting $\zeta_{x'}$.
\item[$\mathscr E^{+,+}(\zeta,w)$:] The event $\mathscr R_\sigma(\zeta)$ occurs, and $X^{x}$ and $X^{x'}$ both hit $w$ after they hit $\zeta_{x}$ and $\zeta_{x'}$ respectively.
\end{itemize}
We have the estimates
\begin{align*}
\P(\mathscr E^{-,-}(\zeta,w)) &\asymp R_\sigma(\zeta) \frac{\langle x w \rangle^{-(d-2)}\langle w \zeta_{x} \rangle^{-(d-2)} \langle x' w \rangle^{-(d-2)}\langle w \zeta_{x'} \rangle^{-(d-2)}}
{\langle x \zeta_{x} \rangle^{-(d-2)}\langle x' \zeta_{x'} \rangle^{-(d-2)}},
\\\\
\P(\mathscr E^{-,+}(\zeta,w)) &\asymp R_\sigma(\zeta) \frac{\langle x w \rangle^{-(d-2)}\langle w \zeta_{x} \rangle^{-(d-2)}}
{\langle x \zeta_{x} \rangle^{-(d-2)}}
\langle \zeta_{x'} w \rangle^{-(d-2)}, \\\\
%
\P(\mathscr E^{+,-}(\zeta,w)) &\asymp R_\sigma(\zeta)\frac{\langle x' w \rangle^{-(d-2)}\langle w \zeta_{x'} \rangle^{-(d-2)}}{\langle x' \zeta_{x'} \rangle^{-(d-2)}}\langle \zeta_{x} w \rangle^{-(d-2)},\\
\text{and}\hspace{3.5cm}&\\
\P(\mathscr E^{+,+}(\zeta,w)) &\asymp R_\sigma(\zeta)\langle \zeta_{x} w \rangle ^{-(d-2)}\langle \zeta_{x'} w \rangle^{-(d-2)}.\end{align*}
\noindent
In all cases, a bound of the desired form follows since $\langle w x \rangle \succeq \langle \zeta_{x} x \rangle$ and $\langle w x' \rangle \succeq \langle \zeta_{x'} x' \rangle$ for every $x,x'\in K'$, $\zeta\in \Lambda_\zeta$, and $w\in \Lambda_w$, and we conclude by summing these four bounds. \qedhere
\end{proof}
Our aim now is to prove \cref{eq:M1M2estimate} by an appeal to \cref{lem:firstmomentgeneral}. To do this, we will encode the combinatorics of the potential ways that the walks can intersect via hypergraphs.
To this end, let $H_\sigma$ be the finite hypergraph with boundary that
has vertex set
\[V(H_\sigma) = \left(\{1\} \times K\right) \cup \left(\{2\} \times K'\right),\]
boundary set
\[\partial V(H_\sigma) =
\left(\{1\} \times \{x_i : 1 \leq i \leq N \}\right) \cup \left(\{2\}\times K'\right),\]
and edge set
\begin{multline*}
E(H_\sigma) =
\left\{ \left\{(2,\sigma_i(j)), (1,\sigma_i(j)), (1,\sigma_i(j+1))\right\} : 1 \leq i \leq N,\, 1 \leq j \leq |K'_i|-1 \right\}\\ \cup \left\{\left\{(2,\sigma_i(|K'_i|)), (1,\sigma_i(|K'_i|))\right\} : 1 \leq i \leq N\right\}.
\end{multline*}
See \cref{fig:inthyp} for an illustration.
Note that the isomorphism class of $H_\sigma$ does not depend on $\sigma$.
The edge set $E(H_\sigma)$ can be identified with $K'$ by taking the intersection of each edge with the set $\{2\}\times K'$.
Under this identification, the definition of $H_\sigma$ ensures that
\[R_\sigma(\zeta) = W^{H_\sigma,2}(x,\zeta)\]
and consequently that
\[\mathbb E[M_{\sigma,0}] \preceq \mathbb W^{H_\sigma,2}_x(n,n+c_1+c_2).\]
\begin{figure}
\includegraphics[width=0.28\textwidth]{intersectionhypergraph.pdf}
\hspace{1cm}
\includegraphics[width=0.28\textwidth]{intersectionhypedit2.pdf}
\hspace{1cm}
\includegraphics[width=0.28\textwidth]{intersectionhypergraph3.pdf}
\caption{Left: The hypergraph $H_\sigma$ in the case that $N=2$,
$|K_1|=5$, and $|K_2|=4$.
Note that the isomorphism class of $H_\sigma$ does not depend on $\sigma$.
Centre: Letting $K_1=\{x_{1,1},\ldots,x_{1,5}\}$, and $K_2=\{x_{2,1},\ldots,x_{2,4}\}$, this is the hypergraph $H_\sigma(x_{1,2},x_{1,4})$.
Right: The hypergraph $H_\sigma(x_{1,4},x_{2,2})$.
}
\label{fig:inthyp}
\end{figure}
\medskip
We claim that
\begin{equation}
\label{eq:Lprime}
\eta_{d,2}(H_\sigma') \geq \eta_{d,2}(H_\sigma)+2
\end{equation}
for any proper coarsening $H_\sigma'$ of $H_\sigma$, so that
\begin{align*}\hat \eta_{d,2}(H_\sigma) = \eta_{d,2}(H_\sigma) &= (d-2)(3|K'|-|V|) -d|K'| -(d-2)(|K'|-|V|)\\
&= 2(d-2)|K'| - d|K'| = (d-4)|K'|,\end{align*}
and hence that
\begin{equation}
\label{eq:M0estimate}
\log_2 \mathbb E[M_{\sigma,0}] \lesssim -(d-4) |K'| \, n + |K'|^2 \log_2(n)
\end{equation}
by \cref{lem:firstmomentgeneral}.
Indeed, suppose that $\coarse{H_\sigma}{\bowtie}$ is a proper coarsening of $H_\sigma$ corresponding to some equivalence relation $\bowtie$ on $E(H_\sigma)$, and that the edge corresponding to $x=\sigma_i(j) \in K'$ is maximal in its equivalence class in the sense that there does not exist $\sigma_i(j')$ in the equivalence class of $\sigma_i(j)$ with $j' > j$. Clearly such a maximal $x$ must exist in every equivalence class. Moreover, for such a maximal $x = \sigma_i(j)$ there can be at most one edge of $H_\sigma$ that it shares a vertex with and is also in its class, namely the edge corresponding to $\sigma_i(j-1)$. Thus, if $x$ is maximal and its equivalence class is not a singleton, let $\coarse{H_\sigma}{\bowtie'}$ be the coarsening corresponding to the equivalence relation $\bowtie'$ obtained from $\bowtie$ by removing $x$ from its equivalence class. Then we have that $\Delta(\coarse{H_\sigma}{\bowtie'}) \leq \Delta(\coarse{H_\sigma}{\bowtie})+1$ and that $|E(\coarse{H_\sigma}{\bowtie'})| = |E(\coarse{H_\sigma}{\bowtie})|+1$, so that
\begin{equation}
\label{eq:plusfour}
\eta_{d,2}(\coarse{H_\sigma}{\bowtie}) \geq \eta_{d,2}(\coarse{H_\sigma}{\bowtie'}) + d - (d-2)= \eta_{d,2}(\coarse{H_\sigma}{\bowtie'}) + 2,
\end{equation}
and the claim follows by inducting on the number of edges in non-singleton equivalence classes.
\medskip
To obtain a bound on the expectation of $M_{\sigma,2}$, considering the contribution of each shell $\Lambda(m,m+1)$ yields the estimate
\begin{align*}
\sum_{w \in \Lambda_{w,2}} \langle \zeta_{x} w \rangle^{-(d-2)}\langle \zeta_{x'} w \rangle^{-(d-2)}
& \preceq \sum_{m \geq n+ c_2 + 1} 2^{dm} 2^{-2(d-2)m} \preceq 2^{-(d-4)n}
\end{align*}
for every $\zeta \in \Lambda_\zeta$,
and it follows from \cref{lem:REcases} and \eqref{eq:M0estimate} that
\begin{align}
\log_2 \mathbb E[M_{\sigma,2}] &\lesssim \log_2 \mathbb E[M_{\sigma,0}] - (d-4)\, n
\nonumber
\\
&\lesssim -(d-4)(|K'|+1)\, n + |K'|^2 \log_2 n.
\label{eq:M2estimate}
\end{align}
\medskip
It remains to bound the expectation of $M_{\sigma,1}$.
For each two distinct $x,x' \in K'$, let $H_\sigma(x,x')$ be the hypergraph with boundary obtained from $H_\sigma$ by adding a single vertex, $\star$, and adding this vertex to the two edges corresponding to $x$ and $x'$ respectively. These hypergraphs are defined in such a way that, by \cref{lem:REcases},
\[\mathbb E[M_{\sigma,1}] \preceq \sum_{(x,x')\in O} \mathbb W^{H_\sigma(x,x'),\, 2}_x(n+c_1,n+c_1+c_2).\]
We claim that
\begin{equation}
\label{eq:Lprime2}
\hat \eta_{d,2}(H_\sigma(x,x')) \geq \hat \eta_{d,2}(H_\sigma) +2 = (d-4)|K'|+2
\end{equation}
for every two distinct $x,x' \in K'$.
First observe that
coarsenings of $H_\sigma$ and of $H_\sigma(x,x')$ both correspond to equivalence relations on $K'$. Let $\bowtie$ be an equivalence relation on $K'$, and let $H'_\sigma(x,x')$ and $H_\sigma'$ be the corresponding coarsenings.
Clearly $|E(H'_\sigma(x,x'))|=|E(H_\sigma')|$ and $|V_\circ(H'_\sigma(x,x'))|=|V_\circ(H_\sigma')|+1$.
If $x$ and $x'$ are related under $\bowtie$, then we have that $\Delta(H'_\sigma(x,x')) = \Delta(H_\sigma')+1$, while if
$x$ and $x'$ are not related under $\bowtie$, then we have that $\Delta(H'_\sigma(x,x')) = \Delta(H_\sigma')+2$.
We deduce that
\[\eta_{d,2}(H'_\sigma(x,x')) \geq \begin{cases} \eta_{d,2}(H_\sigma') &\text{ if $x \bowtie x'$}\\
\eta_{d,2}(H_\sigma') + 2 &\text{ otherwise.}
\end{cases}
\]
If $x \bowtie x'$ then $H_\sigma'$ must be a proper coarsening of $H_\sigma$, and we deduce from \eqref{eq:Lprime} that the inequality $\eta_{d,2}(H'_\sigma(x,x')) \geq \eta_{d,2}(H_\sigma) +2$ holds for every coarsening $H'_\sigma(x,x')$ of $H_\sigma(x,x')$, yielding the claimed inequality \eqref{eq:Lprime2}.
Using \eqref{eq:Lprime2}, we deduce from \cref{lem:firstmomentgeneral} that
\begin{equation}
\label{eq:M1estimate}
\log_2\mathbb E[M_{\sigma,1}] \lesssim -\bigl[(d-4)|K'|+2\bigr]\, n + |K'|^2\log_2 n.
\end{equation}
Combining \eqref{eq:M2estimate} and \eqref{eq:M1estimate} yields the claimed estimate \eqref{eq:M1M2estimate}, completing the proof.
\qedhere
\end{proof}
\noindent
\emph{Completion of the proof of \cref{lem:firstmomentlowerboundestimate}.}
Since the upper bound given by \cref{lem:ABIC} is of lower order than the lower bound given by \cref{lem:farintersections}, it follows that
there exists $n_0=n_0(|K|,N,d,c_1,c_2)$ such that
\[\P(\mathscr C \cap \mathscr D \cap \mathscr I \cap \mathscr E) \leq \frac{1}{2}\P(\mathscr C\cap\mathscr D\cap \mathscr I)\]
if $n \geq n_0$, and hence that
\[\log_2 \P(\mathscr C \cap \mathscr D \cap \mathscr I \setminus \mathscr E) \gtrsim \log_2 \P(\mathscr C \cap \mathscr D \cap \mathscr I) \gtrsim -(d-4)|K'|\, n \]
for sufficiently large $n$ as claimed. \qedhere
\end{proof}
\section{Proof of the main theorems}
\label{sec:wrappingup}
We now complete the proof of \cref{thm:mainhyper}. We begin with the simpler case in which $d/(d-4)$ is not an integer.
\begin{proof}[Proof of \cref{thm:mainhyper} for $d\notin \{5,6,8\}$]
We begin by analyzing faithful ubiquity.
Let $\mathbb G$ be a $d$-dimensional transitive graph, and let $H$ be a finite hypergraph with boundary. If $H$ has a subhypergraph none of whose coarsenings are $d$-buoyant, then \cref{prop:nonubiquity} implies that $H$ is not faithfully ubiquitous in $\mathcal C^{hyp}_r(\mathfrak F)$ almost surely for any $r\geq 1$.
Otherwise, by \cref{lem:maxminswap}, $H$ has a coarsening all of whose subhypergraphs are $d$-buoyant. If $d/(d-4)$ is not an integer, then it follows from \cref{prop:ubiquity} that
there exist vertices $(x_v)_{v\in \partial V}$ in $\mathbb G$ such that with positive probability, the vertices $x_v$ are in different components of $\mathfrak F$ and $H$ is $R_\mathbb G(H)$-robustly faithfully present at $(x_v)_{v\in \partial V}$. The set
\[\left\{\left(\omega,(x_v)_{ v\in \partial V}\right) \in \{0,1\}^{E(\mathbb G)} \times \mathbb V^{\partial V}: H \text{ is $R_\mathbb G(H)$-robustly faithfully present at $(x_v)_{v\in \partial V}$}\right\}\]
is a tail multicomponent property, and it follows from \cref{thm:indist} that
$H$ is faithfully ubiquitous in $\mathcal{C}^{hyp}_{r}(\mathfrak F)$ for every $r \geq R_\mathbb G(H)$ a.s.
We now turn to ubiquity. Let $r \geq 1$. It follows immediately from the definitions that if $H$ has a quotient that is faithfully ubiquitous in $\mathcal C_r^{hyp}(\mathfrak F)$ almost surely then $H$ is ubiquitous in $\mathcal C_r^{hyp}(\mathfrak F)$ almost surely, and so it suffices to prove the converse.
If every quotient $H'$ of $H$ with $R_\mathbb G(H') \leq r$ has a subhypergraph none of whose coarsenings are $d$-buoyant,
then $H$ is not ubiquitous in $\mathcal C^{hyp}_r(\mathfrak F)$ almost surely by \cref{prop:nonubiquity}. Otherwise, by \cref{lem:maxminswap}, $H$ has a quotient $H'$ with $R_\mathbb G(H') \leq r$ that has a coarsening all of whose subhypergraphs are $d$-buoyant, so that $H'$ is faithfully ubiquitous in $\mathcal C^{hyp}_r(\mathfrak F)$ almost surely and therefore $H$ is ubiquitous in $\mathcal C^{hyp}_r(\mathfrak F)$ almost surely by the above. This concludes the proof.
\qedhere
\end{proof}
\begin{proof}[Proof of \cref{thm:mainhyper} for $d\in \{5,6,8\}$]
The only part of the proof that requires modification in this case is the proof that if $H$ has a coarsening all of whose subhypergraphs are $d$-buoyant then $H$ is faithfully ubiquitous in $\mathcal C^{hyp}_{R_\mathbb G(H)}(\mathfrak F)$ almost surely. To show this, we will prove by induction on $|E(H)|$ that if every subhypergraph of $H$ is $d$-buoyant then every refinement $H'$ of $H$ is faithfully ubiquitous in $\mathcal C^{hyp}_{R_\mathbb G(H')}(\mathfrak F)$ almost surely.
Let us first consider the base case $|E(H)|=1$. Since every subhypergraph of $H$ is $d$-buoyant, the unique edge of $H$ must contain at most $d/(d-4)$ boundary vertices. Let $H'$ be obtained from $H$ by deleting all internal vertices that are not in the unique edge of $H$, and, if necessary, adding additional new boundary vertices to the unique edge so that it contains exactly $d/(d-4)$ boundary vertices. Then it follows from \cref{prop:ubiquityspeciald} and \cref{thm:indist} that every refinement $H''$ of $H'$ is faithfully ubiquitous in $\mathcal C^{hyp}_{R_\mathbb G(H'')}(\mathfrak F)$ almost surely. It is easily verified from the definitions that this implies that every refinement $H'''$ of $H$ is faithfully ubiquitous in $\mathcal C^{hyp}_{R_\mathbb G(H''')}(\mathfrak F)$ almost surely as well. In particular, it follows that for every $n \leq d/(d-4)$, every set of $n$ trees of $\mathfrak F$ is contained in an edge of $\mathcal C^{hyp}_{r}(\mathfrak F)$ for every $r\geq R_\mathbb G(n)$ almost surely.
Let $H$ be a finite hypergraph with boundary all of whose subhypergraphs are $d$-buoyant. Suppose that $|E(H)|\geq 2$ and that the claim has been established for all hypergraphs with fewer edges than $H$.
If $H$ is $d$-basic then we are already done, so assume not. Then at least one of the following must occur:
\begin{enumerate}
\item $H$ has an edge of degree less than or equal to $d/(d-4)$.
\item $H$ has a proper, non-trivial bordered subhypergraph $H'$ with $\eta_d(H')=0$.
\end{enumerate}
Let us first consider the case that $H$ has an edge of degree less than or equal to $d/(d-4)$. Let $e_0$ be an edge of $H$ with $\deg(e_0)\leq d/(d-4)$ and let $H_1$ be the subhypergraph of $H$ with $\partial V(H_1)=\partial V(H)$, $V_\circ(H_1)=V_\circ(H)$, and $E(H_1)=E(H)\setminus\{e_0\}$. By the induction hypothesis, every refinement $H_1'$ of $H_1$ is faithfully ubiquitous in $\mathcal C^{hyp}_{R_\mathbb G(H_1')}(\mathfrak F)$ almost surely.
Let $H_2$ be a refinement of $H$, and let $H_3$ be obtained from $H_2$ by deleting every edge of $H_2$ which corresponds to $e_0$ under the refinement. Then $H_3$ is a refinement of $H_1$, and so is faithfully ubiquitous in $\mathcal C^{hyp}_{R_\mathbb G(H_3)}(\mathfrak F)$ almost surely. On the other hand, every edge of $H_2$ that was deleted to form $H_3$ has degree at most $d/(d-4)$, and since $\mathcal C^{hyp}_{R_\mathbb G(H_2)}(\mathfrak F)$ contains every possible edge of these sizes almost surely, we deduce that $H_2$ is faithfully ubiquitous on $\mathcal C^{hyp}_{R_\mathbb G(H_2)}(\mathfrak F)$ almost surely.
Now suppose that $H$ has a proper, non-trivial bordered subhypergraph $H_1$ with $\eta_d(H_1)=0$.
Let $H_2$ be the hypergraph with boundary that has $\partial V(H_2)=V(H_1)$, $V_\circ(H_2)=V_\circ(H)\setminus V_\circ (H_1)$, and $E(H_2)=E(H)\setminus E(H_1)$. We claim that every subhypergraph of $H_2$ is $d$-buoyant. Indeed, suppose that $H_3$ is a subhypergraph of $H_2$, and let $H_4$ be the subhypergraph of $H$ that includes all the edges and vertices of $H$ that are included in either $H_1$ or $H_3$ (noting that some of the boundary vertices of $H_3$ will become interior vertices of $H_4$). Let $N$ be the number of boundary vertices of $H_3$ that are interior vertices of $H_1$. Then we can compute that
$|E(H_4)|=|E(H_1)|+|E(H_3)|$, $|V_\circ(H_4)|=|V_\circ(H_1)|+|V_\circ(H_3)|+N$, and $\Delta(H_4)=\Delta(H_1)+\Delta(H_3)+N$, so that
\[
\eta_d(H_3)=\eta_d(H_3)+\eta_d(H_1)=\eta_d(H_4) \leq 0
\]
since $\eta_d(H_1)=0$ and every subhypergraph of $H$ is $d$-buoyant.
Thus, we deduce from the induction hypotheses that every refinement $H'$ of either $H_1$ or $H_2$ is faithfully ubiquitous in $\mathcal C^{hyp}_r(\mathfrak F)$ almost surely for every $r\geq R_\mathbb G(H) \geq \max\{R_\mathbb G(H_1),R_\mathbb G(H_2)\}$. It is easily verified that this implies that every refinement $H'$ of $H$ is faithfully ubiquitous in $\mathcal C^{hyp}_r(\mathfrak F)$ for every $r\geq R_\mathbb G(H')$ almost surely. \qedhere
\end{proof}
\begin{proof}[Proof of \cref{thm:maintree}]
We begin by proving the claim about faithful ubiquity. Applying \cref{thm:main,lem:maxminswap}, and since every subgraph of a tree is a forest, it suffices to prove that if $T$ is a finite forest with boundary then $\eta_d(T') \geq \eta_d(T)$ whenever $d\geq4$ and $T'$ is a coarsening of $T$, so that, in particular,
\[\hat \eta_d(T) = \eta_d(T) = (d-8)|E| -(d-4)|V_\circ|\]
for every $d\geq 4$.
Indeed, suppose that $T'=\coarse{T}{\bowtie}$ is a proper coarsening of a finite forest with boundary $T$.
Since $T$ is a finite forest, the subgraph of $T$ spanned by each equivalence class of $\bowtie$ is also a finite forest, and therefore must contain a leaf. Choose a non-singleton equivalence class of $\bowtie$ and an edge $e$ of this equivalence class that is incident to a leaf of the spanned forest.
Thus, $e$ has the property that one of the endpoints of $e$ is not incident to any other edge in $e$'s equivalence class. Let $\bowtie'$ be the equivalence relation obtained from $\bowtie$ by removing $e$ from its equivalence class and placing it in a singleton class by itself. Then we have
that $|E(\coarse{T}{\bowtie'})| = |E(\coarse{T}{\bowtie})|+1$
and $\Delta(\coarse{T}{\bowtie'}) \leq \Delta(\coarse{T}{\bowtie})+1$ so that
\[\eta_d(\coarse{T}{\bowtie'}) \leq \eta_d(\coarse{T}{\bowtie}) -4.\]
Thus, it follows by induction on the number of edges of $T$ in non-singleton equivalence classes that $\eta_d(\coarse{T}{\bowtie}) \geq \eta_d(T)$ for every coarsening $\coarse{T}{\bowtie}$ of $T$ as claimed. This establishes the claim about faithful ubiquity.
We now turn to ubiquity. Let $\mathbb G$ be a $d$-dimensional transitive graph for some $d>8$, let $r\geq 1$, and let $\mathfrak F$ be the uniform spanning forest of $\mathbb G$.
Let $T$ be a finite tree with boundary that is not faithfully ubiquitous in $\mathcal C_r(\mathfrak F)$, and let $T'$ be a subgraph of $T$ such that $(d-8)|E(T')| -(d-4)|V_\circ(T')| >0$, which exists by the previous paragraph. Since
\[(d-8)|E(T')| -(d-4)|V_\circ(T')| = \sum_{\substack{T'' \text{ a connected}\\\text{component of $T'$}}} (d-8)|E(T'')| -(d-4)|V_\circ(T'')|,\]
we deduce that $T'$ has a connected subgraph $T''$ with $(d-8)|E(T'')| -(d-4)|V_\circ(T'')| >0$. Let $H$ be a quotient of $T$, let $H'$ be the image of $T''$ under the quotient map, and let $S$ be a spanning tree of $H'$, so that $|V_\circ(S)|\leq |V_\circ(T'')|$ and $|\partial V(S)| = |\partial V(T'')|$. Since $S$ and $T''$ are both trees, we have that $|E(S)| = |\partial V(S)|+|V_\circ(S)|-1$ and
$|E(T'')| = |\partial V(T'')|+|V_\circ(T'')|-1$. We easily deduce that $\eta_d(S) \geq \eta_d(T'')>0$, and consequently that $S$ is not faithfully ubiquitous in $\mathcal C_r(\mathfrak F)$ almost surely.
On the other hand, since $S$ is a subgraph of $H$, we have that if $H$ is faithfully ubiquitous in $\mathcal C_r(\mathfrak F)$ almost surely then $S$ is also. Since the quotient $H$ was arbitrary, it follows from \cref{thm:main} that $T$ is ubiquitous in $\mathcal C_r(\mathfrak F)$ if and only if it is faithfully ubiquitous in $\mathcal C_r(\mathfrak F)$ almost surely, completing the proof. \qedhere
\end{proof}
\begin{proof}[Proof of \cref{thm:mainsimple}]
To deduce item (1) from \cref{thm:main} and \cref{lem:maxminswap}, we need only prove that
\[ f(d):= \min\left\{ \max \left\{\eta_d (H'') : H'' \text{ is a subhypergraph of $H'$}\right\} : H' \text{ is a coarsening of } H\right\}\]
is a non-decreasing function of $d \geq 4$ for every finite hypergraph with boundary $H$.
Suppose that $H'$ is a subhypergraph of a coarsening of $H$. Let $H''$ be the largest subhypergraph of $H'$ that contains no edges or interior vertices of degree strictly less than $2$. In other words, $H''$ is obtained from $H'$ by recursively deleting edges and interior vertices of $H'$ that have degree strictly less than $2$ until no such edges or vertices remain.
It is easily verified that
deleting edges or interior vertices of degree less than $2$ does not decrease the $d$-apparent weight when $d\geq 4$, and hence that
$\eta_d(H'') \geq \eta_d(H')$.
Thus, we have that
\begin{multline}
\label{eq:etahatdegree2}
f(d)
=\\ \min\left\{ \max \left\{ \eta_d(H''): \begin{array}{l}\text{$H''$ a subhypergraph of $H'$ with no edges}\\ \text{or interior vertices of degree $<2$}\end{array} \right\} : H' \text{ a coarsening of $H$} \right\}
\end{multline}
for every $d\geq 4$.
If $H''$ is a finite hypergraph with boundary such that every edge and interior vertex of $H''$ has degree at least $2$, then $\Delta(H'') \geq 2|E(H'')|$ and $\Delta(H'') \geq 2|V_\circ(H'')|$, so that $\Delta(H'') \geq |E(H'')|+|V_\circ(H'')|$, and hence the coefficient of $d$ in $\eta_d(H'')$ is non-negative. Thus, the claimed monotonicity follows from \eqref{eq:etahatdegree2}.
For item (2) it suffices by \cref{thm:maintree} to construct a family of finite trees with boundary $(T_d)_{d\geq 9}$ such that
\[
\min\left\{\frac{|V_\circ(T'_d)|}{|E(T'_d)|}\,:\, T'_d \text{ is a subgraph of $T_d$}\right\} = \frac{d-8}{d-4}
\]
for each $d\geq 9$. We will use the family of trees pictured in \cref{fig:sepfamily}. Write $d= 4 + 5k + \ell$ where $0 \leq \ell < 5$ and let $T_d$ be the tree that has one vertex of degree five connected to $\ell$ paths of length $k+1$ and $5-\ell$ paths of length $k$. $T_d$ has five leaves, which we declare to be in its boundary, and declare all the other vertices to be in its interior.
Clearly any subgraph $T'_d$ of $T_d$ minimizing $|V_\circ(T'_d)|/|E(T'_d)|$ must be induced by a union of geodesics joining the boundary vertices, and it is easily verified that, amongst these subgraphs, it is the full graph $T_d$ that minimizes $|V_\circ(T'_d)|/|E(T'_d)|$. To conclude, we compute that
\[|V_\circ(T_d)| = 1 + 5(k-1) + \ell = d-8
\quad
\text{ and }
\quad
|E(T_d)| = 5k + \ell = d-4,\]
so that $|V_\circ(T_d)|/|E(T_d)|=(d-8)/(d-4)$ as required.
\end{proof}
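As a quick sanity check of this arithmetic (not part of the proof), the following short script verifies the vertex and edge counts of $T_d$ directly from the construction:
\begin{verbatim}
# Verify |V_int(T_d)| = d - 8 and |E(T_d)| = d - 4 for the trees T_d above.
for d in range(9, 40):
    k, l = divmod(d - 4, 5)                   # d = 4 + 5k + l, 0 <= l < 5
    edges = l * (k + 1) + (5 - l) * k         # l paths of length k+1, 5-l of length k
    interior = 1 + l * k + (5 - l) * (k - 1)  # centre plus non-leaf path vertices
    assert edges == d - 4 and interior == d - 8
\end{verbatim}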
\section{Closing remarks and open problems}
\label{sec:closing}
\subsection{The number of witnesses}
\label{subsec:witnessesdiscussion}
The proof of \cref{thm:mainhyper} also yields the following result.
If $\mathbb G$ is a $d$-dimensional transitive graph, $\mathfrak F$ is the uniform spanning forest of $\mathbb G$, $H=(\partial V, V_\circ, E)$ is a finite hypergraph with boundary, and $r\geq 1$, then the following hold almost surely:
\begin{enumerate}[leftmargin=*]
\itemsep0.5em
\item If $H$ is faithfully ubiquitous in $\mathcal C_r^{hyp}(\mathfrak F)$, then for every collection $(x_u)_{u\in \partial V}$ of distinct vertices of $\mathcal C_r^{hyp}(\mathfrak F)$, there exists a collection $(x^i_u)_{u \in V_\circ}$ of distinct vertices of $\mathcal C_r^{hyp}(\mathfrak F)$ for each $i \geq 1$ such that $\{ x^i_u : u\in V_\circ, u \perp e \} \cup \{x_u : u\in\partial V, u \perp e \}$ is an edge of $\mathcal C_r^{hyp}(\mathfrak F)$ for every $i \geq 1$ and every $e \in E$,
$\{x^i_u : u \in V_\circ\}$ is disjoint from $\{x_u : u \in \partial V\}$ for every $i \geq 1$, and $\{x^i_u : u \in V_\circ\}$ and $\{x^j_u : u \in V_\circ\}$ are disjoint whenever $i> j \geq 1$.
\item
If $H$ is not faithfully ubiquitous in $\mathcal C_r^{hyp}(\mathfrak F)$, then for every collection $(x_u)_{u\in \partial V}$ of distinct vertices of $\mathcal C_r^{hyp}(\mathfrak F)$ there exists a finite set of vertices $A$ of $\mathcal C_r^{hyp}(\mathfrak F)$ such that $\{x_u : u \in V_\circ\}$ intersects $A$ whenever $(x_u)_{u \in V_\circ}$ is a collection of distinct vertices of $\mathcal C_r^{hyp}(\mathfrak F)$ disjoint from $(x_u)_{u\in \partial V}$ with the property that
$\{ x_u : u\in V_\circ, u \perp e \} \cup \{x_u : u\in \partial V, u \perp e \}$ is an edge of $\mathcal C_r^{hyp}(\mathfrak F)$ for every $e \in E$.
\end{enumerate}
Indeed, item (2) is an immediate consequence of \cref{thm:indist}.
This has the following interesting consequence.
For each $d >8$, it follows from \cref{thm:maintree} that the star with $\lceil (d-4)/(d-8) \rceil$ boundary leaves and one internal vertex is not faithfully ubiquitous in the component graph of the uniform spanning forest of $\mathbb Z^d$. Thus, we deduce from item (2), above, that if $d > 8$ then for every collection of $\lceil (d-4)/(d-8) \rceil$ distinct vertices of the component graph, there is almost surely some finite $M$ depending on the collection such that any clique containing the collection has size at most $M$. In particular, we conclude that the component graph of the uniform spanning forest of $\mathbb Z^d$ does not contain an infinite clique whenever $d>8$ a.s. In contrast, we note that the component graph of the uniform spanning forest of $\mathbb Z^d$ \emph{does} contain arbitrarily large cliques almost surely whenever $d \geq 5$. (This follows as a special case of \cref{thm:main} as in \cref{fig:degenerate}, but is also very easy to prove directly.)
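For concreteness, note that $\lceil (d-4)/(d-8)\rceil$ equals $5$ when $d=9$, equals $3$ when $d\in\{10,11\}$, and equals $2$ for every $d\geq 12$.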
\subsection{Further questions about the component graph of the USF.}
It is natural to wonder whether \cref{thm:main} determines the component graph up to isomorphism. It turns out that this is not the case.
Indeed, observe that faithful ubiquity of a finite graph with boundary $H$ can be expressed as a first order sentence in the language of graphs:
\[\text{for all } (x_v)_{v \in \partial V}
\text{ there exists }
(x_v)_{v \in V_\circ} \text{ such that } x_u\sim x_v \text{ for every } u,v \in V \text{ such that } u \sim v.
\]
Ubiquity of $H$ can be expressed similarly. However, even if we knew the almost-sure truth value of \emph{every} first order sentence in the language of graphs, this still would not suffice to determine the graph up to isomorphism. Indeed, recall that a graph $G=(V,E)$ is \textbf{quasi-$k$-transitive} if the action of its automorphism group on $V^k$ has only finitely many orbits. The model-theoretic Ryll-Nardzewski Theorem \cite[Theorem 7.3.1]{HodgesBook} implies that a countably infinite graph is determined up to isomorphism by its first order theory if and only if it is \textbf{oligomorphic}, i.e., quasi-$k$-transitive for every $k\geq 1$.
By considering sizes of cliques as in \cref{subsec:witnessesdiscussion}, it follows from the discussion in that section that the component graph of the uniform spanning forest of $\mathbb Z^d$ is a.s.\ not quasi-$\lceil (d-4)/(d-8) \rceil$-transitive when $d>8$, and hence is a.s.\ not oligomorphic when $d> 8$.
We conjecture that in fact the component graph has very little symmetry indeed.
\begin{conjecture}
Let $\mathbb G$ be a $d$-dimensional transitive graph for some $d > 8$, and let $r\geq 1$. Then $\mathcal C_r(\mathfrak F)$ has no non-trivial automorphisms almost surely. Moreover, there does not exist a deterministic graph $G$ such that $\mathcal C_r(\mathfrak F)$ is isomorphic to $G$ with positive probability.
\end{conjecture}
Although we do not believe the component graphs of the USF on different transitive graphs of the same dimension to be isomorphic, it seems nevertheless that most properties of the component graph should be determined by the dimension. One way of formalizing such a statement would be to axiomatize the entire almost-sure first order theory of the component graph of the uniform spanning forest and show that this first order theory is the same for different transitive graphs of the same dimension. We expect that \cref{thm:main}, or a slightly stronger variation of it, should play an important role in this axiomatization.
See \cite{spencer2001strange} for the development of such a theory in the mean-field setting of Erd\H{o}s-R\'enyi graphs.
In particular, we believe the following.
\begin{conjecture}
Let $\mathbb G_1$ and $\mathbb G_2$ be $d$-dimensional transitive graphs, let $r_1,r_2\geq 1$, and let $\mathfrak F_1$ and $\mathfrak F_2$ be the uniform spanning forests of $\mathbb G_1$ and $\mathbb G_2$ respectively. Then the component graphs $\mathcal C_{r_1}(\mathfrak F_1)$ and $\mathcal C_{r_2}(\mathfrak F_2)$ are elementarily equivalent almost surely. That is, they satisfy the same set of first order sentences in the language of graphs almost surely.
\end{conjecture}
\subsection{Component graphs of other models and other graphs.}
It would be interesting to study ubiquitous subgraphs in component graphs derived from other models on $\mathbb Z^d$. The most tractable of these is likely to be the interlacement process \cite{Sznitman10,rath2010connectivity,procaccia2011geometry}, for which some related results have been proven by Lacoin and Tykesson \cite{lacoin2013easiest}. Here the component graph is defined by considering two trajectories to be adjacent if and only if they intersect.
\begin{question}
Let $d \geq 3$. Which finite graphs with boundary are ubiquitous in the component graph of the random interlacement on $\mathbb Z^d$?
\end{question}
The picture should be quite different from ours since the connection probabilities for more than two points are no longer given by a power of the spread.
\medskip
A much more straightforward extension of our results would be to consider uniform spanning forests generated by long-range random walks on $\mathbb Z^d$. Similarly, one could consider uniform spanning forests on non-transitive, possibly fractal, graphs that are Ahlfors-regular and satisfy sub-Gaussian heat kernel estimates of some order $\beta \geq 2$ (see e.g.\ \cite[Chapter 3]{KumFlour}).
The beginnings of this analysis are already present implicitly in \cref{lem:firstmomentgeneral}.
\subsection*{Acknowledgments}
This work was carried out while TH was an intern at Microsoft Research, Redmond. TH thanks Mathav Murugan for many useful discussions on heat kernel estimates. We thank Omer Angel for his comments on an earlier draft of this manuscript, and thank the anonymous referee for many helpful comments and corrections.
\bibliographystyle{abbrv}
|
2,869,038,155,910 | arxiv | \section{Introduction}
The worldwide race towards direct dark matter detection in the form of
Weakly Interacting Massive Particles (WIMPs) has been dramatically accelerated by the remarkable progress and evolution of liquid xenon time projection chambers
(LXeTPCs). The XENON100 experiment has already placed the most stringent limits on the WIMP-nucleon spin-independent cross section \cite{Aprile:2011xe} and new data have been accrued at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy towards its ultimate sensitivity goal of $2\times 10^{-45}$~cm$^2$. The next phase of the XENON program will use a LXeTPC with about 3 tons of LXe \cite{Xe1t}.
To demonstrate some of the technologies relevant to the realization of such a massive LXe detector, a system was developed and tested. This system, the XENON1T Demonstrator, consists of a LXeTPC, a cryocooler, a gas system with recirculation pump and hot getter, and a heat exchanger (HE) module. The R$\&$D results presented here are especially relevant for addressing the key requirement of ultra-high purity LXe, enabling free electrons to drift over distances larger than 1~m and long scintillation photon absorption lengths. The $\sim 3$ tons of Xe filling XENON1T must contain less than a ppb (part per billion) level of $\mathrm{O_2}$-equivalent electronegative impurities ~\cite{Schmidt:2001},
and $\mathrm{H_2O}$ at a similar low level ~\cite{Baldini2005}.
Materials outgassing is a constant source of electronegative impurities. Although the detector vessel and TPC components can be baked-out in order to accelerate the outgassing, residual outgassing remains and necessitates continuous purification throughout the operation of the experiment. The gas recirculation rate in the XENON100 experiment, filled with 161 kg of LXe, was limited to 5~SLPM by the available cooling power~\cite{Aprile:2011dd}. Nevertheless, this resulted in a significant decrease in electron attenuation over months of operation time, reaching attenuation lengths $>$~1~m. For XENON1T, we plan to increase the recirculation speed to around 100~SLPM, which corresponds to about 800~kg/day. This is required to enable a reasonable commissioning period of the detector with good performance in terms of charge and light yields. The dynamics of gaseous xenon at this speed causes large pressure gradients and requires components that can handle that flow, including tubing, recirculation pump, getter and flow controller.
Xenon gas flow rates in excess of 40~SLPM through a purification system based on a hot getter have not been reported in the literature to-date. While most components used in the cryogenic and gas systems developed for the XENON1T R\&D are commercially available, their suitability for Xe gas must be proven. As an example, the performance of commercially available hot metal getters, such as SAES Monotorr \cite{SAES}, used to remove electronegative impurities from noble gases, is typically tested with argon gas. The impact of the larger xenon density and heat capacity on the purification efficiency and flow characteristics of the getter must be studied.
Membrane-based gas circulation pumps are commercially available as well, but the high density of xenon gas limits the ability of these pumps to work at high flow rates over long time periods, mostly due to the wear of the diaphragms from high pressures and induced heating. In addition, their leak tightness and durability under these conditions must be investigated.
Finally, since the recirculation and purification is done in the gas phase, the Xe gas must be continually re-liquefied, requiring large amounts of available cooling power.
Cooling Xe gas flowing at a rate of 1~SLPM from room temperature down to 175~K requires less than 2~W, out of a total of about 10.6~W needed to cool and liquefy the gas at the same rate.
At 100~SLPM, this translates to more than 1~kW of cooling power, which is not practical as the overall efficiency of the cryocoolers successfully tested within the XENON program is limited due to the high power consumption of the cooling system. An efficient heat exchange to reduce the cooling power cost of compensating the evaporation and heating of Xe gas is essential. The efficiency of a commercial parallel-plate HE, to transfer the heat within the system and use it to cool purified Xe gas before injecting it back into a LXe detector, has already been studied as reported in~\cite{Giboni11}. Here we extend these studies to the high flow rate regime.
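As a rough numerical illustration (a back-of-the-envelope sketch only, using the $\sim$10.6~W/SLPM figure quoted above and neglecting static heat loads), the external cooling power required at a given recirculation speed scales with the fraction of heat that the HE fails to recover:
\begin{verbatim}
# Illustrative cooling budget for closed-loop Xe gas circulation.
# Assumes ~10.6 W per SLPM to cool and liquefy, as quoted above.
W_PER_SLPM = 10.6

def external_cooling_power(flow_slpm, he_efficiency):
    """Cooling power [W] not recovered by the heat exchanger."""
    return (1.0 - he_efficiency) * W_PER_SLPM * flow_slpm

for eff in (0.0, 0.90, 0.96):
    print(eff, external_cooling_power(100.0, eff))  # 1060 W, 106 W, ~42 W
\end{verbatim}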
\section{Experimental Setup}
The XENON1T Demonstrator apparatus, constructed and tested at Columbia University, consists of a liquid xenon detector, a cooling tower with a cryocooler, a gas recirculation system with a pump and a getter, and a HE module. The detector, cooling tower and HE are mounted in three separate vacuum-insulated vessels to reduce heat losses to the ambient air and are super-insulated with 12 layers of aluminized mylar. The cryocooler, an Iwatani PC-150 Pulse Tube Refrigerator (PTR) with a 6.5~kW water-cooled He compressor, delivers 200~W of cooling power at 165~K. This PTR is the same as used in XENON100. A HE is used to cool and liquefy xenon gas returning from the hot getter. The liquid is taken from the detector through the HE, where the latent heat is transferred to the returning xenon gas stream with an efficiency greater than 96\% (e.g. \cite{Giboni11}).
For the measurements reported here, the detector was not implemented as a TPC, but was merely used as a double walled vacuum insulated vessel, filled with a maximum of $\sim$7~liters (about 20~kg) of LXe for these tests.
\subsection{Gas purification system}
The gas system was specifically designed for high speed closed-loop circulation of the Xe gas. The lines are made of 1/2'' stainless steel pipes with VCR fittings, to allow the rapid flow through the circulation loop. Figure \ref{fig:demo_gas_sys} shows a schematic of the gas system, including all the equipment used in the setup. The gas purifier is a SAES Monotorr getter, model PS4MT50R1, rated for purification of rare gases at flows up to 75 SLPM~\cite{SAES}.
A large capacity double-headed diaphragm pump (KNF 1400 series) was selected, nominally capable of flowing $ \sim 200 $ SLPM of air at atmospheric pressure on the input. The pump was modified from double diaphragm to single diaphragm (for each of the two heads) to accommodate our requirement to withstand an output pressure greater than $6$ bar. Water cooling was also added to mitigate heating of the pump heads. The flow is controlled with a Teledyne Hastings Mass Flow Controller (MFC) Model HFC-303 \cite{MFC_man}, calibrated up to 250 ~SLPM of Xe gas. The control valve is situated at the inlet of the pump, since limiting the flow on the outlet may result in too high pressure (10 bars and above) which may damage the pump diaphragms. A buffer volume of about one liter was installed at the outlet of the pump, in order to damp pressure fluctuations.
Four pressure gauges were mounted at different positions along the recirculation loop. $ P_1 $ measures the pressure at the input of the MFC and the output of the HE, $ P_2 $ measures the pressure at the input of the pump and the output of the MFC, $ P_3 $ measures the pressure at the output of the pump and input of the getter and $ P_4 $ measures the pressure at the output of the getter, which is also the pressure going back into the HE. The pressure gradients give an indication of the resistance to the flow, allowing the system design to be optimized for high flow rates.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=10cm]{figure/demonstrator_v4.eps}
\caption{Schematic diagram of the gas system for the XENON1T demonstrator. The green line follows the recirculation path through the gas system.}
\label{fig:demo_gas_sys}
\end{center}
\end{figure}
\subsection{Cryogenics and Heat Exchanger}
Figure \ref{fig:demo} shows a photograph and a CAD drawing of the experimental setup.
The cooling power of the PTR is delivered to the Xe system through a copper cold finger at the top of the cooling tower. There, Xe liquefies on the fins of the cold finger, drips down and is collected by a funnel before flowing into the LXe chamber, located below the cooling tower. The entire system is surrounded by an insulation vacuum to minimize heat leaks into the system, as well as layered aluminized mylar for super-insulation. Due to the narrow temperature margin of only 3.4~K between the liquid and solid phase of Xe (under atmospheric pressure), temperature control is especially important. For temperature control, a copper cup with electrical heaters is inserted between the cold head of the PTR and the cold finger that reaches into the detector volume. The maximum power of these heaters is limited to the cooling power of the PTR as a fail-safe solution to prevent overheating. The temperature of the assembly above and below these heaters is measured with LakeShore PT111 resistors and monitored continuously. A Lakeshore 340 Proportional, Integral and Differential (PID) controller reads the temperature at the cold finger and controls the power to the heaters. The heaters keep the temperature of the cold finger at the set value and thus provide a constant cooling power. The temperature of the liquid is stable to better than 0.04~K over extended periods of continuous operation of the system.
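The control logic just described can be summarized by the following minimal sketch (illustrative only: the gains and time step are placeholders rather than the actual Lakeshore 340 settings, while the power cap reflects the fail-safe limit at the available PTR cooling power):
\begin{verbatim}
# Schematic PID heater control holding the cold-finger temperature.
KP, KI, KD = 50.0, 0.5, 5.0   # placeholder gains [W/K], [W/(K s)], [W s/K]
P_MAX = 200.0                 # fail-safe cap at the PTR cooling power [W]

def pid_step(setpoint_K, temperature_K, state, dt_s):
    error = setpoint_K - temperature_K        # positive when too cold
    state["integral"] += error * dt_s
    derivative = (error - state["last_error"]) / dt_s
    state["last_error"] = error
    power = KP * error + KI * state["integral"] + KD * derivative
    return min(max(power, 0.0), P_MAX)        # clamp the heater power
\end{verbatim}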
Xe purification is done by continuous gas circulation through the hot getter, taking LXe from the detector
into the HE and letting the returning Xe gas cool and liquefy inside the HE on its way back into the detector.
The HE module is located in a separate vacuum insulated vessel.
The heat exchange was achieved with parallel plate HEs that are commercially available, and were already tested in \cite{Giboni11}. Two HEs of different size, both made by GEA \cite{GEA}, were used. The smaller unit consists of 20 plates (model FG3X8-20, measuring $3.3 \times 7.8 \times 2.1$ inches, with a volume of about 0.5~l) and the larger of 60 plates (model FG5X12-60, measuring $4.9 \times 12.2 \times 6$ inches, with a volume of about 3.8~l). Three configurations of HEs were tested: a) the small HE only, b) the large HE only, c) a combination in which the two HEs are connected in series, with the larger one at the bottom connected to the LXe side (the detector) and the smaller one, on top, connected to the gas system. A schematic diagram showing the three configurations is shown in Figure \ref{fig:HE_setups}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm]{figure/demo2.eps}
\includegraphics[width=6cm]{figure/demo_design2.eps}
\caption{(Left): The XENON1T Demonstrator setup at Columbia Nevis Laboratory. \newline (Right): A technical drawing of XENON1T Demonstrator cooling tower with insulation jacket and the PTR.}
\label{fig:demo}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm]{figure/HE_setups.eps}
\caption{The three configurations of HEs tested in this work. The leftmost is configuration a, the middle is configuration b and the rightmost is configuration c (see text).}
\label{fig:HE_setups}
\end{center}
\end{figure}
\section{Results and Discussion}
\subsection{Fast gas circulation}\label{subsec:gas_circ}
In this section we describe the dynamics of the fast recirculation flow of Xe gas, independently of the cryogenics. The flow rate is constant along the circulation path, and is maintained by pressure differences between different points along the path. These pressure differences drive the flow against the restrictions (resistances) of the components: pipes, valves, getter, etc.
{\it Pressure drops and dynamical resistance}: We have measured the pressure drops across the getter and the HE, which proved to be the main sources of dynamical resistance to the flow. Figure \ref{fig:pressures} shows the pressure drops as a function of the flow rate, on the getter and on the HE. The results are shown for the three configurations of HEs. The pressure drop across the different components grows as a function of the flow rate. At a Xe vapor pressure of $ >2$ bar in the detector, a pressure of $ \sim 8 $ bar on the outlet of the pump is required for driving a constant flow of $ \sim 120 $ SLPM.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm]{figure/deltap.eps}
\caption{Pressure drop as a function of flow rate, measured across the getter (filled markers) and the HE - incoming gas pressure minus chamber pressure (unfilled markers), in three HE configurations a, b and c, as explained in the text.}
\label{fig:pressures}
\end{center}
\end{figure}
{\it Buffer volume}: At flow rates larger than $ 50 $~SLPM, the output pressure of the pump starts fluctuating with a high frequency (30 Hz, half the AC frequency supplied to the pump) and an amplitude that exceeds 1.5 bar at flow rates above 100 SLPM. To prevent damage to the system and provide more stability to the flow we added a buffer volume of $ \sim 1 $~liter, close to the outlet of the pump. The displacement volume of the diaphragm pump is about 50~cm$^3$, so the buffer volume decreases the typical pressure change by a factor $\sim 20$. The buffer volume proved to be efficient in damping the pressure fluctuations, as expected.
{\it Increasing the maximum flow rate}: The factor limiting the flow rate in our configuration is the pressure drop across the MFC, since the flow forced by the recirculation pump is most sensitive to the inlet pressure. This drop (10-15 psi, \cite{MFC_man}) is required for the flow measurement. The control solenoid valve itself has a lower resistance to the flow. In order to be able to reach high flow rates we increased the pressure in the chamber by setting a slightly higher temperature set point on the cold finger. The chamber pressure required for a flow of 120~SLPM with the two HEs setup (configuration c) was 2.56 bar absolute.
In future tests we plan to separate the flow metering and the controlled valve, and place the valve at the inlet of the pump and the measurement device at the outlet. This will allow a faster gas flow without a strong restriction at the outlet. However, as mentioned earlier, the pressure at the outlet of the pump will increase, requiring a different pump or parallel getters.
In future tests, a different type of pump will be tested \cite{Qdrive}, which employs an acoustic compression
technique, for recirculating the Xe gas. This technology has a potentially lower leak rate and a lower $^{222}$Rn emanation rate. A prototype that will allow for tests with the system has been constructed and will be available soon.
{\it Bypassing the getter}: Changing the recirculation flow rate changes the steady state pressure of the detector, due to the different dynamics and heat input. We found that bypassing the getter increases the pressure significantly, with the increase more pronounced at higher flow rates. For instance, at 70~SLPM the bypassed steady state pressure is $\sim 2.7$~barg, while under the same conditions, flowing only through the getter, the steady state pressure is slightly below 1~barg. We believe that the reason for this behavior is that the gas flowing through the getter is cooled by a radiator on the way out of the getter, whereas bypassing the getter takes the gas from the outlet of the pump almost directly back into the HE. At high flow rates the temperature of the gas at the outlet of the pump is very high, up to a few hundred degrees C.
\subsection{Heat Exchange Efficiency}
The cooling power of the PTR is measured by keeping the detector under vacuum and letting the cold head reach the temperature set point, corresponding to the desired LXe operation temperature. The power supplied by the heaters around the cold finger compensates the cooling power, keeping the temperature constant. Allowing the system to reach a steady state by letting it cool for about 12~h, we directly measure the current and voltage applied to the heaters, thus finding the power that exactly cancels the PTR cooling. This value, in general, depends on the set point temperature, and was measured to be 208~W at a temperature of 173~K on the cold finger.
The efficiency $ \varepsilon $ of a HE is defined as the fraction of heat, required for vaporization and temperature change, that is kept outside the system. For the change from LXe at 2 bar absolute, on the phase change line (177.9~K), to room temperature GXe we can write
\begin{equation}
\varepsilon(r)=1-\frac{P(0)-P(r)-R(r)}{10.74\times r},
\end{equation}
where $ r $ is the flow rate [SLPM], P(r) is the heater power [W] at a given circulation rate $ r $, and $ R $ is a term that accounts for extra heat that leaks into the system around the HE, which is at room temperature when there is no flow. When using super insulation $ R $ can usually be neglected, as is done here. The value 10.74 W/SLPM corresponds to an isobaric vaporization of LXe at 2 bar absolute followed by an increase in temperature from 177.9~K to 293.15~K, of which 8.88 W/SLPM is spent on the enthalpy change during evaporation \cite{Lemmon2011:ni}.
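For reference, the efficiency formula above can be transcribed directly as follows (a simple sketch, neglecting the leak term $R$ by default; the heater powers in the example call are hypothetical, not measured values):
\begin{verbatim}
# Direct transcription of the efficiency formula, neglecting R by default.
W_PER_SLPM = 10.74  # W per SLPM: evaporation at 2 bar plus warming to 293 K

def he_efficiency(flow_slpm, p_noflow_W, p_flow_W, leak_W=0.0):
    return 1.0 - (p_noflow_W - p_flow_W - leak_W) / (W_PER_SLPM * flow_slpm)

# Hypothetical heater readings, for illustration only:
print(he_efficiency(100.0, 170.0, 130.0))  # ~0.963
\end{verbatim}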
Proper measurements of the heater power require that the system is sufficiently close to a thermal steady state. After the initial filling and start of circulation, the relaxation time is of the order of 12~h. A change of circulation speed also changes the thermal and pressure balance, and the relaxation time from one steady state to another is typically a few hours. Figures \ref{fig:p_stab_start} to \ref{fig:pow_stab_change} show the typical evolution of the detector pressure and heater power after the initial filling and change of circulation rate.
\begin{figure*}[ht]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{figure/p_stab_start.eps}\caption{Pressure inside the detector as a function of time, following the initial filling and starting of circulation at a rate of 50~SLPM.}
\label{fig:p_stab_start}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{figure/pow_stab_start.eps}\caption{Heater power as a function of time, following the initial filling and starting of circulation at a rate of 50~SLPM.}
\end{minipage}
\end{figure*}
\begin{figure*}[ht]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{figure/p_stab_change.eps}\caption{Pressure inside the detector as a function of time, following a change of circulation rate from 100~SLPM to 60~SLPM.}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{figure/pow_stab_change.eps}\caption{Heater power as a function of time, following a change of circulation rate from 100~SLPM to 60~SLPM.}
\label{fig:pow_stab_change}
\end{minipage}
\end{figure*}
{\it Small HE}: Initially, measurements with the small HE ($\sim 0.5$~l volume) were carried out. At flow rates up to $\sim$20~SLPM, the HE efficiency was measured to be in the range of 90\%-95\%. Since the system was designed for high flow rates, its sensitivity is limited when circulating at low flow rates, hence the large error. This result is compatible with that previously reported in \cite{Giboni11}. At a higher flow rate of 40~SLPM, the efficiency drops to $\sim$86\%.
The maximum rate attainable was 48~SLPM, at which point liquid xenon started spilling out of the HE into the pipes of the gas system (observed as a violent pressure increase at the inlet of the pump). At that point the heater power was still not zero, meaning there was enough cooling power to handle higher circulation speeds at that efficiency. When more LXe was filled into the detector, the maximum flow was reduced to 35~SLPM. This points to the heat-exchange role played by the pipe carrying the liquid through the Xe gas phase, from the detector to the HE: a higher liquid level inside the chamber reduces this internal heat exchange, which takes place when the LXe in the tube exchanges heat with the gas phase in the detector before reaching the HE. From these measurements we conclude that at moderate and high circulation speeds the HE is partially filled with liquid, which boils off, absorbing the latent heat from the incoming gas. The liquid level inside the HE increases as a function of the circulation rate.
{\it Large HE}: Following the above measurements, a larger HE was used, with 60 plates and volume of about 3.8~l. With this setup, circulation speeds of up to 120~SLPM were reached, with a margin of about 10 W cooling power remaining. The cooling power required as a function of circulation rate is shown in Figure \ref{fig:power}. The inferred heat exchange efficiency with this configuration is close to 90\% at flow rates up to 120 SLPM.
{\it HEs in series}: Using the two HEs in series proved to be much more efficient than a single unit. The improvement is significantly larger than that expected from the increase in volume of the combined units. Figure \ref{fig:power} also shows the required cooling power as a function of flow rate in this configuration, up to a rate of 114 SLPM. At that flow rate, the heaters still supply a power of about 130~W, which translates to an efficiency $ \varepsilon \approx 96\% $.
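Equivalently, the efficiency can be read off the slope of the linear fit in Figure~\ref{fig:power}: a marginal cooling cost of $0.39$~W/SLPM out of the full $10.74$~W/SLPM corresponds to
\[\varepsilon = 1-\frac{0.39}{10.74}\approx 96.4\%,\]
consistent with the value quoted above.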
\subsection{About the physics of heat exchange}
The heat exchange involves two main processes: the phase transition and the gas temperature change.
{\it Phase transition:} Since about 80\% of the heat goes into latent heat of the gas-liquid phase transition, it is important to understand the way this heat is transferred between the ingoing and outgoing Xe. The key for efficient heat exchange is a temperature difference $ \Delta T_{ph} $ that drives the heat transfer between the two Xe flows. This is the difference between the condensation temperature of the incoming warm Xe and the boiling temperature of the outgoing cold Xe, $\Delta T_{ph}= T_{gl}(P_i)-T_{gl}(P_o)$, where $ T_{gl}(P) $ is the pressure dependent gas-liquid phase transition temperature. For dynamical gas flow reasons, and as shown in section \ref{subsec:gas_circ}, $ P_i>P_o $, and since $ T_{gl}(P) $ is an increasing function, $\Delta T_{ph}>0$ which leads to an effective heat transfer.
{\it Gas temperature change:} Once the LXe coming out of the detector is evaporated, it goes through the HE, thermally coupled to the incoming Xe gas through the heat-conducting metal plates. The difference between the initial gas temperatures is $\sim$120~K. Let us assume a very simple model for the gas heat exchange: two counterflowing streams of Xe gas coupled through $n$ plates, each of width $W$ and length $L$ in the $x$ direction from $x=0$, with the warm stream (temperature $T_1$) entering at $x=0$ and the cold stream ($T_2$) at $x=L$, so that $T_1(0)=T_2(L)+\Delta T=T_h$. We designate the plates' thickness as $d$ and their heat conductivity as $\kappa$. The density of Xe gas and its heat capacity are $\rho$ and $C_v$, respectively. Neglecting the heat capacity of the plates and any temperature gradient in the gas perpendicular to the plates, we can write for a gas flow rate of $r$ (mass per unit time):
\begin{equation}\label{eq:HE1}
T_1'=\frac{dT_1}{dx}=-\alpha (T_1(x)-T_2(x))
\end{equation}
\begin{equation}\label{eq:HE2}
T_2'=\frac{dT_2}{dx}=-\alpha (T_1(x)-T_2(x))
\end{equation}
where
\begin{equation}
\alpha \propto \frac{\kappa \rho nW}{rC_v d}.\label{eq:alpha}
\end{equation}
Equation \ref{eq:alpha} becomes an equality under the assumption of perfect heat transfer between the gas and the plates, as well as perfect heat transfer inside the gas perpendicular to the flow direction. In practice the proportionality constant turns out to be much smaller than 1, mostly due to inefficiencies in transferring the heat to the bulk of the gas. Equations \ref{eq:HE1} and \ref{eq:HE2} lead to an efficiency (for the gas heat exchange only) of
\begin{equation}
\varepsilon_{gas}=\frac{\alpha L}{1+\alpha L}.
\end{equation}
If $h$, the spacing between plates, is constant, then the figure of merit for heat-exchange efficiency is the volume of the HE (equal to $nhWL$) divided by the flow rate $r$.
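As a quick numerical cross-check of this model (our sketch, with arbitrary test parameters): for equal counterflowing mass flows the difference $T_1-T_2$ is constant in $x$, fixed by the boundary conditions to $\Delta T/(1+\alpha L)$, and integrating Eqs.~\ref{eq:HE1} and \ref{eq:HE2} reproduces $\varepsilon_{gas}=\alpha L/(1+\alpha L)$.
\begin{verbatim}
from scipy.integrate import solve_ivp

alpha, L, T_h, dT = 0.5, 10.0, 290.0, 120.0   # arbitrary test values

# Equal counterflowing flows: T1 - T2 is constant along x; the boundary
# conditions T1(0) = T_h and T2(L) = T_h - dT fix it to D0.
D0 = dT / (1.0 + alpha * L)

def rhs(x, T):
    d = T[0] - T[1]
    return [-alpha * d, -alpha * d]           # Eqs. (HE1) and (HE2)

sol = solve_ivp(rhs, (0.0, L), [T_h, T_h - D0])
T1_L, T2_L = sol.y[:, -1]

eps_numeric = (T_h - T1_L) / dT               # heat actually recovered
eps_formula = alpha * L / (1.0 + alpha * L)
print(abs(T2_L - (T_h - dT)) < 1e-6)          # True: BC at x = L holds
print(eps_numeric, eps_formula)               # both ~0.833 here
\end{verbatim}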
In the presence of liquid--gas heat exchange, however, the description of the gas heat exchange is more complicated. The striking improvement in efficiency when the smaller HE is added in series cannot be explained by its volume, which is only $\sim 14\%$ of the larger HE's volume. The difference lies, in our opinion, in the temperature of the HE itself. When circulating at high rates, part of the bottom HE is filled with LXe, as established by the tests with the small HE. This amount of liquid increases with the rate, and the liquid cools down the metal plates above the liquid level, thus preventing efficient heat exchange of the Xe gas on top of the LXe. The use of a second HE, thermally decoupled from the bottom one, allows the gas to exchange heat efficiently, thus decreasing the total energy loss by a factor of almost 3.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm]{figure/powerplot.eps}
\caption{The cooling power required for circulation at different flow rates and HE configurations. The red solid line at 208 W represents the maximum cooling power available, and the blue dashed line is a linear fit to the two HE configuration points, with a slope of 0.39 W/SLPM (corresponding to $>$96\% efficiency).}
\label{fig:power}
\end{center}
\end{figure}
\section{Conclusions}
We have carried out measurements to study and demonstrate the ability to flow Xe gas at high rates, above 100~SLPM, for applications in future detectors such as XENON1T.
We have found that a high flow rate of Xe requires relatively high absolute pressures and pressure gradients, which can reach 8~bar at flows above 100~SLPM. With the use of a high-capacity pump and $1/2''$ tubing for the gas system, we have shown that the getter is the main restriction on the gas flow. We observed that a buffer volume on the outlet of the circulation pump is necessary to avoid large pressure fluctuations. We also note that bypassing the getter significantly increases the pressure in the LXe detector, probably due to the high temperature of the Xe gas after being compressed by the circulation pump.
We have shown that high-flow-rate circulation with heat exchange requires that there be LXe inside the HE itself. This amount is non-negligible (estimated to be $>1$~kg at 45~SLPM in our system) and influences the efficiency of the heat exchange. A single parallel-plate HE, even as large as 4~l, is still limited to $\sim$90\% efficiency. We find that using two HEs in series increases the efficiency to $\sim$96\%, due to the decoupling of the top HE from the LXe that collects inside the bottom one. This efficiency is consistent with that projected from tests carried out at much lower circulation rates and previously reported by our group.
Acknowledgements:
This work was supported by the NSF with an award to the Columbia Astrophysics Laboratory (PHY-1047794) for the project entitled R\&D of LXe Detector technology for dark matter experiments. We also acknowledge support from the Weizmann Institute of Science with a Research Fellowship to R.~Budnik (WIS CU10-1945).
\section{Introduction}
\label{sec:intro}
\input{./Sections/intro.tex}
\section{Related Work}
\label{sec:related_work}
\input{./Sections/related_work.tex}
\section{Our Approach}
\label{sec:method}
\input{./Sections/method.tex}
\section{Experiments and Analysis}
\label{sec:experiments}
\input{./Sections/experiments.tex}
\section{Conclusions}
\label{sec:conclusions}
\input{./Sections/conclusions.tex}
\section*{Acknowledgments}
This work was supported in part by MIT Lincoln Laboratory.
The Tesla K40 GPU used for this research was donated by the NVIDIA Corporation. The authors would also like to thank Dr. Kevin Brady, Dr. Charlie Dagli, Professor Yun Fu, and Professor Usman Tariq for their insightful comments and suggestions with regards to this work.
{\small
\bibliographystyle{ieee}
\subsection{Performance on Toronto Face Database (TFD)}
First, we analyze the discriminative ability of the CNN by assessing its performance on the TFD dataset.
Table \ref{tab:tfd} shows the recognition accuracy obtained when training a zero-bias CNN from a random initialization with no other regularization, as well as CNNs that use dropout (D), data augmentation (A), or both (AD). We also include recognition accuracies from previous methods. From the results in Table \ref{tab:tfd}, there are two main observations: (i) not surprisingly, regularization significantly boosts performance; (ii) data augmentation improves performance over the regular CNN more than dropout does ($9.4\%$ vs. $2.8\%$). Furthermore, when both dropout and data augmentation are used, our model is able to exceed the previous state-of-the-art performance on TFD by $3.6\%$.
\begin{table}[t!]
\caption{Recognition accuracy on the Toronto Face Dataset (TFD) - 7 classes - A: Data Augmentation, D: Dropout}
\begin{center}
\begin{tabular}{ | l | c |}
\hline
\textbf{Method} & \textbf{Accuracy} \\ \hline
Gabor+PCA \cite{dailey2002empath} & 80.2\% \\ \hline
Deep mPoT \cite{ranzato2011deep} & 82.4\% \\ \hline
CDA \cite{rifai2012disentangling} & 85.0\% \\ \hline
\hline
Zero-bias CNN & 79.0\% $\pm$ 1.1\%\\ \hline
Zero-bias CNN+D & 81.8\% $\pm$ 2.1\% \\ \hline
Zero-bias CNN+A & 88.4\% $\pm$ 1.7\% \\ \hline
\textbf{Zero-bias CNN+AD} & \textbf{88.6\% $\pm$ 1.5\%} \\ \hline
\end{tabular}
\end{center}
\label{tab:tfd}
\end{table}
\subsection{Performance on the Extended Cohn-Kanade Dataset (CK+)}
\begin{table}[t!]
\caption{Recognition accuracy on the Extended Cohn-Kanade (CK+) Dataset - 8 classes - A: Data Augmentation, D: Dropout}
\begin{center}
\begin{tabular}{ | l | c |}
\hline
\textbf{Method} & \textbf{Accuracy} \\ \hline
AURF \cite{liu2013aware} & 92.22\% \\ \hline
AUDN \cite{liu2015inspired} & 93.70\%\\
\hline \hline
Zero-bias CNN & 78.2\% $\pm$ 5.7\%\\ \hline
Zero-bias CNN+D & 82.3\% $\pm$ 4.0\%\\ \hline
Zero-bias CNN+A & 94.6\% $\pm$ 3.3\% \\ \hline
\textbf{Zero-bias CNN+AD} & \textbf{95.1\% $\pm$ 3.1\%}\\
\hline
\end{tabular}
\end{center}
\label{tab:eight_class}
\end{table}
\begin{table}[t!]
\caption{Recognition accuracy on the Extended Cohn-Kanade (CK+) Dataset - 6 classes - A: Data Augmentation, D: Dropout}
\begin{center}
\begin{tabular}{ | l | c |}
\hline
\textbf{Method} & \textbf{Accuracy} \\ \hline
CSPL \cite{zhong2012learning} & 89.89\% \\ \hline
LBPSVM \cite{shan2009facial} & 95.10\% \\ \hline
\textbf{BDBN \cite{liu2014facial}} & \textbf{96.70\%} \\
\hline \hline
Zero-bias CNN+AD & 95.7\% $\pm$ 2.5\%\\
\hline
\end{tabular}
\end{center}
\label{tab:six_class}
\end{table}
We now present our results on the CK+ dataset. The CK+ dataset usually contains eight labels (anger, contempt, disgust, fear, happy, neutral, sad, and surprise). However, many works \cite{zhong2012learning, shan2009facial, liu2014facial} ignore the samples labeled as neutral or contempt and only evaluate on the six basic emotions. Therefore, to ensure a fair comparison, we trained two separate models. We present the eight-class model results in Table \ref{tab:eight_class} and the six-class model results in Table \ref{tab:six_class}. For the eight-class model, we conduct the same study we did on the TFD and we observe rather similar results. Once again, regularization appears to play a significant role in obtaining good performance. Data augmentation gives a significant boost in performance ($16.4\%$) and, when combined with dropout, leads to a $16.9\%$ increase. For the eight-class and six-class models, we achieve state-of-the-art and near state-of-the-art accuracy, respectively, on the CK+ dataset.
\subsection{Visualization of higher-level neurons}
Now, with a strong discriminative model in hand, we will analyze which facial regions the neural network identifies as the most discriminative when performing classification. To do this, we employ the visualization technique presented by Zeiler and Fergus in \cite{zeiler2014visualizing}.
For each dataset, we consider the third convolutional layer and, for each filter, we find the $N$ images in the chosen split's training set that generated the strongest magnitude response. We then keep the strongest activation, set all other activations to zero, and use the deconvolutional network to reconstruct the corresponding region in pixel space. For our experiments, we chose $N=10$ training images.
We further refine our reconstructions by employing a technique called ``guided backpropagation'', proposed by Springenberg et al. in \cite{springenberg2014striving}. Guided backpropagation improves the reconstructed spatial patterns by not relying solely on the masked activations given by the top-level signal during deconvolution, but by also incorporating knowledge of which activations were suppressed during the forward pass. Therefore, each layer's output during the deconvolution stage is masked twice: (i) once by the ReLU of the deconvolutional layer and (ii) again by the mask generated by the ReLU of the layer's matching convolutional layer in the forward pass.
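For concreteness, here is a minimal PyTorch sketch of this double masking, written as a custom autograd function (the framework choice and class name are ours; the original work predates it):
\begin{verbatim}
import torch

class GuidedReLU(torch.autograd.Function):
    """ReLU whose backward pass applies both guided-backprop masks."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Mask (ii): forward-pass ReLU -- zero where the input was negative.
        # Mask (i):  backward ReLU -- zero where the top gradient is negative.
        return grad_out * (x > 0).float() * (grad_out > 0).float()

# Usage sketch: substitute GuidedReLU.apply for the ReLUs during
# visualization, zero all but the strongest conv3 activation of the
# chosen filter, and backpropagate to the input image.
\end{verbatim}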
First, we analyze the patterns discovered in the Toronto Face Dataset (TFD). In Figure \ref{fig:zeiler_plots_tfd}, we select 10 of the 256 filters in the third convolutional layer and, for each filter, we present the spatial patterns of the top-10 images in the training set. From these images, the reader can see that several of the filters appear to be sensitive to regions that align with particular Facial Action Units, such as AU12: Lip Corner Puller (row 1), AU4: Brow Lowerer (row 4), and AU15: Lip Corner Depressor (row 9).
Next, we display the patterns discovered in the CK+ dataset. In Figure \ref{fig:zeiler_plots_ck_plus}, we once again select 10 of the 256 filters in the third convolutional layer and, for each filter, we present the spatial patterns of the top-10 images in the training set. The reader will notice that the CK+ discriminative spatial patterns are very clearly defined and correspond nicely with Facial Action Units such as AU12: Lip Corner Puller (rows 2, 6, and 9), AU9: Nose Wrinkler (row 3), and AU27: Mouth Stretch (row 8).
\begin{table}
\caption{Correspondences between CK+ visualization plots shown in Figure \ref{fig:zeiler_plots_ck_plus} and the FAU whose activation distribution had the highest KL divergence value. The KL divergence values of all the FAUs computed for each filter are shown in Figure \ref{fig:fau_kl_div_bar_charts}.}
\begin{center}
\begin{tabular}{| c | c |}
\hline
\makecell{\bf Filter \\ \bf Number } & \makecell{ \bf FAU with the Largest \\ \bf KL Divergence Value} \\ \hline
1 & AU25: Lips Part \\ \hline
2 & AU12: Lip Corner Puller \\ \hline
3 & AU9: Nose Wrinkler \\ \hline
4 & AU5: Upper Lid Raiser \\ \hline
5 & AU17: Chin Raiser \\ \hline
6 & AU12: Lip Corner Puller \\ \hline
7 & AU24: Lip Pressor \\ \hline
8 & AU27: Mouth Stretch \\ \hline
9 & AU12: Lip Corner Puller \\ \hline
10 & AU1: Inner Brow Raiser \\ \hline
\end{tabular}
\end{center}
\label{tab:ck_plus_AU_correspondence_list}
\end{table}
\begin{figure*}[t!]
\centering
\centerline{\includegraphics[width=17cm, height=17cm]{./Figures/tfd_3layer_conv3_zeiler_plots_recon_all.png}}
\vspace{0.2cm}
\caption{Visualization of spatial patterns that activate 10 selected filters in the conv3 layer of our network trained on the Toronto Face Dataset (TFD). Each row corresponds to one filter in the conv3 layer. We display the top 10 images that elicited the maximum magnitude response. Notice that the spatial patterns appear to correspond with some of the Facial Action Units.}
\label{fig:zeiler_plots_tfd}
\end{figure*}
\begin{figure*}[t!]
\centering
\centerline{\includegraphics[width=17cm, height=17cm]{./Figures/ck_3layer_conv3_zeiler_plots_recon_all.png}}
\vspace{0.2cm}
\caption{Visualization of spatial patterns that activate 10 selected filters in the conv3 layer of our network trained on the Cohn-Kanade (CK+) dataset. Each row corresponds to one filter in the conv3 layer. Once again, we display the top 10 images that elicited the maximum magnitude response. Notice that the spatial patterns appear to have very clear correspondences with some of the Facial Action Units.}
\label{fig:zeiler_plots_ck_plus}
\end{figure*}
\subsection{Finding Correspondences Between Filter Activations and the Ground Truth Facial Action Units (FAUs)}
In addition to categorical labels (anger, disgust, etc.), the CK+ dataset also contains labels that denote which FAUs are present in each image sequence. Using these labels, we now present a preliminary experiment to verify that the filter activations/spatial patterns learned by the CNN indeed match the actual FAUs shown by the subject in the image. Our experiment aims to answer the following question: for a particular filter $i$, which FAU $j$ has samples whose activation values most strongly differ from the activations of samples that do not contain FAU $j$, and does that FAU accurately correspond with the visual spatial patterns that maximally excite filter $i$?
Given a training set of $M$ images ($X$) and their corresponding FAU labels ($Y$), let $F_{\ell i}(x)$ be the maximum activation of sample $x$ at layer $\ell$ for filter $i$. Since we are examining the 3rd convolutional layer in the network, we set $\ell=3$. Then, for each of the 10 filters visualized in Figure \ref{fig:zeiler_plots_ck_plus}, we do the following (a short sketch implementing these steps is given after the list):
\begin{enumerate}
\item[(i)] We consider a particular FAU j and place the samples $X$ that contain j in set S where: \\
$ S = \{ x_m \: | \: j \in y_m\}, \: \forall m \in \{1, ..., M\} $
\item[(ii)] We then build a histogram of the maximum activations of the samples that contained FAU j: $ \\ Q_{ij}(x) = P(F_{3i}(x) \: | \: S), \: \forall (x, y) \in (X, Y)$
\item[(iii)] We then, similarly, build a distribution over maximum activations of the samples that do not contain FAU j: \\$ R_{ij}(x) = P(F_{3i}(x) \: | \: S ^{c}), \: \forall (x, y) \in (X, Y) $
\item[(iv)] We compute the KL divergence between $Q_{ij}(x)$ and $R_{ij}(x)$, $ D_{KL}(Q_{ij} \: \| \: R_{ij})$, and repeat the process for all of the other FAUs.
\end{enumerate}
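A compact sketch of steps (i)--(iv) follows; the function and argument names are hypothetical, and the 20-bin histograms follow the binning used for the bar charts.
\begin{verbatim}
import numpy as np
from scipy.stats import entropy

def fau_kl_scores(acts, fau_labels, n_bins=20):
    """KL divergences D(Q_ij || R_ij) of one filter i over all FAUs j.

    acts:       (M,) maximum conv3 activations F_3i(x) of the M samples
    fau_labels: (M, n_faus) boolean matrix; True if sample m shows FAU j
    """
    bins = np.linspace(acts.min(), acts.max(), n_bins + 1)
    scores = []
    for j in range(fau_labels.shape[1]):
        in_j = fau_labels[:, j]
        q, _ = np.histogram(acts[in_j], bins=bins)    # step (ii)
        r, _ = np.histogram(acts[~in_j], bins=bins)   # step (iii)
        # Small additive smoothing keeps the KL divergence finite.
        q = (q + 1e-6) / (q + 1e-6).sum()
        r = (r + 1e-6) / (r + 1e-6).sum()
        scores.append(entropy(q, r))                  # step (iv)
    return np.array(scores)   # argmax picks the FAU reported per filter
\end{verbatim}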
Figure \ref{fig:fau_kl_div_bar_charts} shows the bar charts of the KL divergences computed for all of the FAUs for each of the 10 filters displayed in Figure \ref{fig:zeiler_plots_ck_plus}. The FAU with the largest KL divergence value is denoted in red and its corresponding name is documented in Table \ref{tab:ck_plus_AU_correspondence_list} for each filter. From these results, we see that in the majority of the cases, the FAUs listed in Table \ref{tab:ck_plus_AU_correspondence_list} match the facial regions visualized in Figure \ref{fig:zeiler_plots_ck_plus}. This means that the samples that appear to strongly influence the activations of these particular filters are indeed those that possess the AU shown in the corresponding filter visualizations. Thus, we show that certain neurons in the neural network implicitly learn to detect specific FAUs in face images when given a relatively ``loose'' supervisory signal (i.e., the emotion type: anger, happy, sad, etc.).
What is most encouraging is that these results appear to confirm our intuitions about how CNNs work as appearance-based classifiers. For instance, filters 2, 6, and 9 appear to be very sensitive to patterns that correspond to AU12. This is not surprising, as AU12 (Lip Corner Puller) is almost always associated with smiles and, from the visualizations in Figure \ref{fig:zeiler_plots_ck_plus}, a subject often shows their teeth when smiling, a highly distinctive appearance cue. Similarly, for filter 8, it is not surprising that FAU 25 (Lips Part) and FAU 27 (Mouth Stretch) had the most different activation distributions, given that the filter's spatial patterns corresponded to the ``O'' shape made by the mouth region in surprised faces, another visually salient cue.
\begin{figure*}[t!]
\centering
\vspace{-0.5cm}
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio] {Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_1.pdf}}
\vspace{-.25cm}
\centerline{Filter 1}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio]{Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_2.pdf}}
\vspace{-.25cm}
\centerline{Filter 2}
\end{minipage}
\vspace{-.25cm}
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio]{Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_3.pdf}}
\vspace{-.25cm}
\centerline{Filter 3}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio]{Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_4.pdf}}
\vspace{-.25cm}
\centerline{Filter 4}
\end{minipage}
\vspace{-.25cm}
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio] {Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_5.pdf}}
\vspace{-.25cm}
\centerline{Filter 5}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio]{Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_6.pdf}}
\vspace{-.25cm}
\centerline{Filter 6}
\end{minipage}
\vspace{-.25cm}
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio]{Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_7.pdf}}
\vspace{-.25cm}
\centerline{Filter 7}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio]{Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_8.pdf}}
\vspace{-.25cm}
\centerline{Filter 8}
\end{minipage}
\vspace{-.25cm}
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio]{Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_9.pdf}}
\vspace{-.25cm}
\centerline{Filter 9}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[trim = 0mm -5mm 20mm 190mm, clip, width=10cm, height=4.1cm, keepaspectratio]{Figures/ck_kl_div_histogram_plots_enhanced/kl_div_bar_chart_bins_20/hist_filter_10.pdf}}
\vspace{-.25cm}
\centerline{Filter 10}
\end{minipage}
\vspace{0.5cm}
\caption{Bar charts showing which FAUs lead to the strongest shifts in the activation distributions of particular filters in the CNN. For each of the 10 filters visualized in Figure \ref{fig:zeiler_plots_ck_plus}, we build histograms over the activations of training samples that contain a specific FAU $j$ and over the activations of samples that do not contain FAU $j$. We then compute the KL divergence between the two distributions and plot it for each FAU above. The FAU with the largest KL divergence is displayed in red and its corresponding name is given in Table \ref{tab:ck_plus_AU_correspondence_list}. (Best viewed in color.)}
\label{fig:fau_kl_div_bar_charts}
\end{figure*}
\subsection{Network Architecture}
For all of the experiments we present in this paper, we use a classic feed-forward convolutional neural network. The networks we use, shown visually in Figure \ref{fig:network_architecture}, consist of three convolutional layers with 64, 128, and 256 filters, respectively, each with filter size $5\times 5$ and followed by ReLU (Rectified Linear Unit) activation functions. Max-pooling layers are placed after the first two convolutional layers, while quadrant pooling \cite{coates2011analysis} is applied after the third. The quadrant-pooling layer is then followed by a fully-connected layer with 300 hidden units and, finally, a softmax layer for classification. The softmax layer contains between six and eight outputs, corresponding to the number of expressions present in the training set.
One modification that we introduce to the classical configuration is that we ignore the biases of the convolutional layers. This idea was first introduced by Memisevic et al. in \cite{memisevic2014zero} for fully-connected networks and later extended by Paine et al. in \cite{paine2014analysis} to convolutional layers. In our experiments, we found that ignoring the biases allowed our network to train very quickly while simultaneously reducing the number of parameters to learn.
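A minimal PyTorch sketch of this architecture follows; the single input channel, the pooling windows, and the use of average (rather than sum) quadrant pooling are our assumptions, while the filter counts, the $5\times5$ kernels, the 300-unit hidden layer, and the zero-bias convolutions follow the description above.
\begin{verbatim}
import torch.nn as nn

def build_zero_bias_cnn(n_classes=8):
    return nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=5, bias=False), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(64, 128, kernel_size=5, bias=False), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(128, 256, kernel_size=5, bias=False), nn.ReLU(),
        nn.AdaptiveAvgPool2d(2),       # quadrant pooling: 2x2 output
        nn.Flatten(),                  # 256 * 2 * 2 = 1024 features
        nn.Linear(1024, 300), nn.ReLU(),
        nn.Dropout(0.5),               # dropout on the hidden layer
        nn.Linear(300, n_classes),     # softmax is applied in the loss
    )
\end{verbatim}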
\subsection{Network Training}
When training our network, we train from scratch using stochastic gradient descent with a batch size of 64, momentum set to 0.9, and a weight decay parameter of $10^{-5}$. We use a constant learning rate of 0.01 and do not use any form of annealing. The parameters of each layer are randomly initialized by drawing from a Gaussian distribution with zero mean and standard deviation $\sigma=\frac{k}{N_{\text{FAN\_IN}}}$, where $N_{\text{FAN\_IN}}$ is the number of input connections to each layer and $k$ is drawn uniformly from the range $\left[0.2, 1.2\right]$.
We also use dropout and various forms of data augmentation to regularize our network and combat overfitting. We apply dropout to the fully-connected layer with a probability of 0.5 (i.e., each neuron's output is set to zero with probability 0.5). For data augmentation, we apply a random transformation to each input image consisting of translations, horizontal flips, rotations, scaling, and pixel-intensity augmentation. All of our models were trained using the anna software library\footnote{\url{https://github.com/ifp-uiuc/anna}}.
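The corresponding training setup, as a sketch (reusing the hypothetical \texttt{build\_zero\_bias\_cnn} from the previous listing; the helper names are ours):
\begin{verbatim}
import torch

def init_weights(model, k_range=(0.2, 1.2)):
    """Init described above: N(0, sigma) with sigma = k / N_FAN_IN."""
    for p in model.parameters():
        if p.dim() > 1:                   # conv / linear weight tensors
            fan_in = p[0].numel()         # input connections per unit
            k = torch.empty(1).uniform_(*k_range).item()
            p.data.normal_(0.0, k / fan_in)

model = build_zero_bias_cnn(n_classes=8)
init_weights(model)
opt = torch.optim.SGD(model.parameters(), lr=0.01,
                      momentum=0.9, weight_decay=1e-5)
# Batches of 64; per-image random translations, horizontal flips,
# rotations, scaling, and intensity jitter are applied as described.
\end{verbatim}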
\section{The method of moving orthonormal frame and the fundamental equations of surface theory}
The history of research in the field of differential geometry in
Estonia is inseparably linked with the history of the University
of Tartu. Therefore we begin by reminding some basic facts from
the history of the University of Tartu. The University of Tartu
was founded by the King Gustav II Adolf of Sweden in 1632. We omit
the turbulent period from the foundation of the university to the
end of 18th century when the university sometimes was in Tartu,
sometimes was forced to move to other towns of Estonia such as
Tallinn and P\"arnu as a result of agreements between belligerent
powers. In 1802 the university was reopened in Tartu as a
provincial Baltic university depending upon the local Knighthoods
- it was titled Kaiserliche Universit\"at zu Dorpat (also
Imperatorskij Derptskij Universitet). A first important landmark
in the history of differential geometric research at the
University of Tartu goes back to this period, when J. Martin
Bartels (1769--1836) became a professor of mathematics there
(1821). Johann Martin Christian Bartels was
born in Braunschweig. He studied at the University of Helmstedt
and then at the G\"ottingen University. He took his doctoral
degree at the University of Jena with a thesis in the field of
variational calculus in 1803. It should be mentioned that Bartels
was the first teacher of Gauss in Braunschweig. Before he was
invited to occupy a professorship of mathematics at the University
of Tartu, Bartels was a professor of mathematics at the University
of Kasan (Russia) for twelve years from 1808 to 1820, where one of
his students was Nicolai Ivanovitch Lobachevsky future professor
of mathematics and rector of the University of Kasan and one of
the founders of non-Euclidean geometry. Bartels contributed to the
theory of space curves by creating the method now known as the
method of moving orthonormal frame. Given a space curve one can
define the local trihedron at a point $P$ of this curve, which
consists of three orthogonal unit vectors ${\bf t}, {\bf n}, {\bf
b}$, where ${\bf t}$ is the unit tangent vector, ${\bf n}$ is the
principal normal vector and ${\bf b}$ is the binormal vector. The
triple $\{{\bf t}, {\bf n}, {\bf b}\}$ bears the name of Frenet
frame at a point of a curve. Bartels studied the rate of change of
the trihedron $\{{\bf t}, {\bf n}, {\bf b}\}$ as the point $P$
moves to a nearby point $Q$ along the curve, and he was the
first to derive the equations expressing the derivatives of the
vectors ${\bf t}, {\bf n}, {\bf b}$ in terms of the vectors
themselves, now known as the Frenet--Serret formulae. The formulae
obtained by Bartels were published by his disciple C.E. Senff in
1831, which means that they appeared 17 years earlier than the
equations published by Frenet and 22 years before Serret published
his equations \cite{Lumiste0}. It should be noted that Bartels
used the components of the vectors ${\bf t}, {\bf n}, {\bf b}$ not
the vectors themselves because the notion of a vector was actually
developed later.
The professorship of applied mathematics was opened in 1843 and
Ferdinand Minding (1806--1885) was invited to occupy this position.
Ferdinand Minding was born in Kalisz (Poland) and shortly after
that his family moved to Hirschberg (now Jelenia Gora in Poland).
He studied classical philology and philosophy at Halle and Berlin
universities from 1824 to 1828. Working as a secondary school
teacher he completed his thesis on approximate calculation of
double integrals and successfully defended it at the University of
Halle in 1829. Ferdinand Minding made a valuable contribution to
the theory of surfaces. He defined the geodesic curvature of a
curve and proved that this curvature is constant along the
shortest curve encircling the given area on a surface. In the
series of papers published from 1838 to 1849 Minding laid the
foundations of the theory of surface bending. In 1864 he was
elected to the St. Petersburg Academy of Sciences as an associate
member, and as an honorary member in 1879.
Minding lectured on the theory of curves and surfaces and among
the mathematics students of the University of Tartu attending his
lectures was Karl Peterson (1828--1881). He was born in Riga and
studied mathematics at the University of Tartu from 1847 to 1852.
In 1853 Peterson completed his thesis ``On the bending of surfaces''
and having defended it obtained the candidate degree. Upon
graduation, he failed to get a position at the University of
Tartu and moved to Moscow, where he served as a mathematics
instructor at the Petropavlov specialized school.
In spite of the appreciation given by Minding to Peterson's thesis
``On the bending of surfaces'', it was not published until 1952,
when the Russian translation of the thesis made by L. Depman
appeared. In Peterson's thesis we find two equations, which can be
written in modern notation as follows
\begin{eqnarray}
\frac{\partial \Delta}{\partial%
v}-\frac{\partial\Delta'}{\partial u}+\Gamma^{2}_{22}\Delta%
-2\;\Gamma^{2}_{12}\Delta'+\Gamma^{2}_{11}\Delta''&=&0,\label{firstI}\\%
\frac{\partial \Delta''}{\partial%
u}-\frac{\partial\Delta'}{\partial v}+\Gamma^{1}_{22}\Delta%
-2\;\Gamma^{1}_{12}\Delta'+\Gamma^{1}_{11}\Delta''&=&0,\label{secondII}%
\end{eqnarray}
where%
$$
\Delta=\frac{L}{\sqrt{EG-F^{2}}},\quad%
\Delta'=\frac{M}{\sqrt{EG-F^{2}}},\quad%
\Delta''=\frac{N}{\sqrt{EG-F^{2}}},
$$
and $E, F, G$ are the coefficients of the first fundamental form
$g$ of a surface, $L, M, N$ are the coefficients of the second
fundamental form $h$ of a surface, and $\Gamma^{i}_{jk}$ are the
Christoffel symbols. %
The equations (\ref{firstI}), (\ref{secondII}) play an essential
role in the theory of surfaces. We recall that the Gaussian curvature
$K$ of a surface can be written in the form
\begin{equation}
K=\frac{LN-M^{2}}{EG-F^{2}}\label{third}.
\end{equation}
Substituting the Gaussian curvature $K$ in the above formula by
its expression in terms of the coefficients of the first
fundamental form $g$ (Gauss's theorema egregium), we obtain a
relation between the coefficients of the first and second
fundamental forms. It turns out that this relation and the
relations (\ref{firstI}), (\ref{secondII}) determine a surface up
to congruence. In other words, it can be proved that
given two quadratic forms $g, h$, where $g$ is positive
definite, with coefficients satisfying the equations
(\ref{firstI}), (\ref{secondII}), (\ref{third}) there exists a
surface whose first fundamental form is $g$, the second is $h$ and
a surface is determined up to congruence. Shortly after
publication of the Russian translation of Peterson's thesis, it
was generally recognized that Peterson had been the first to obtain
the fundamental equations of the theory of
surfaces (\ref{firstI}), (\ref{secondII}) and anticipated the
fundamental theorem of surface theory \cite{Phillips}.%
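As a quick sanity check of (\ref{third}) on a concrete surface, the following sympy sketch (our illustration) computes both fundamental forms of the sphere of radius $r$ and recovers the constant curvature $K=1/r^2$:
\begin{verbatim}
import sympy as sp

u, v, r = sp.symbols('u v r', positive=True)
X = sp.Matrix([r*sp.cos(u)*sp.cos(v),
               r*sp.cos(u)*sp.sin(v),
               r*sp.sin(u)])                  # sphere of radius r

Xu, Xv = X.diff(u), X.diff(v)
E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)  # first fundamental form

n  = X / r                                    # outward unit normal
L_ = X.diff(u, 2).dot(n)                      # second fundamental form
M_ = X.diff(u).diff(v).dot(n)
N_ = X.diff(v, 2).dot(n)

K = sp.simplify((L_*N_ - M_**2) / (E*G - F**2))
print(K)   # r**(-2): the constant Gaussian curvature of the sphere
\end{verbatim}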
At the same time that Minding was studying the theory of surfaces,
the subject was also occupying the attention of another scientist
from the University of Tartu, Thomas Clausen (1801--1885). Thomas
Clausen was born in Northern Jutland and having accepted an
invitation to occupy the position of an astronomer-observer, he
came to Tartu in 1842. Clausen's interest in the theory of
surfaces was inspired by a paper of C.G.J. Jacobi, who tried to
generalize the Gauss theorem about the sum of interior angles of a
geodesic triangle. Casting doubt on the correctness of the results
published by C.G.J. Jacobi, Clausen elaborated a new proof for the
Gauss theorem. Inspired by another Jacobi paper, in which the
integration of the differential equation determining the geodesic
lines of an ellipsoid was reduced to quadratures, Clausen showed
that the same reduction can be made in the case of any
non-developable second-order surface. Clausen also studied the lunes
of Hippocrates and he showed how two new types of squarable lunes
with proportions of circular arcs 5:1 and 5:3 can be found. It
should also be mentioned that Clausen found a new way of
determining the lemniscate and this work is related to the field
of geometrical constructions.
The successors of Minding on the chair of applied mathematics at
the University of Tartu, who did research in the field of
geometry, were Otto Staude (1857--1928) and Adolf Kneser
(1862--1937). Staude was the first to construct second-order
surfaces with the help of tautened threads. He
also studied the geodesic curvature of a line on a surface and
the sign of the torsion of a curve. Adolf Kneser studied
algebraic curves by means of synthetic methods and proved that
if a plane curve has no other singularities except double
tangents and only one double point, then it has only one double
tangent.
Friedrich Schur (1856--1932) took up the post of the professor of
pure mathematics at the University of Tartu in 1888, succeeding
Peter Helmling (1817--1901), who retired that year. Friedrich
Schur took his doctoral degree at the University
of Berlin with the thesis on the geometry of second order line
complexes. Later Friedrich Schur studied the algebraic surfaces of
third and fourth order and he made a valuable contribution to the
development of differential geometry by stating the famous theorem
on the spaces of constant curvature now bearing his name.
Friedrich Schur spent only four years (1888--1892) at the
University of Tartu. In this period he began to study the groups
of continuous transformations, then a rapidly developing new
branch of differential geometry, and due to his achievements in
this field he can be reckoned among the founders of this area of
geometry, alongside such mathematicians as S. Lie, F. Engel,
L. Maurer and W. Killing.
Schur also studied the foundations of geometry. This is another
trend of research which played an essential role in the
development of geometry in Estonia. We mention only a few names of
the geometers of the University of Tartu whose scientific activity
was related with the investigations on the foundations of
geometry.
Leonid Lachtin (1863--1927) was a professor of pure mathematics at
the University of Tartu for three years (1892--1895) succeeding
Friedrich Schur to the post. He studied the Lobachevskian
geometry and during his stay in Tartu published two papers
devoted to this subject, studying in one of them the Poincar{\'e}
interpretation of the Lobachevskian geometry. The successor of
Lachtin to the post of professor of pure mathematics,
V. Alekseyev (1866--1943), studied the theory of line congruences in
connection with surface theory and the theory of rational
invariants of bilinear forms.
The 20th century is very significant in the history of
Estonia because it was marked by Estonian independence. We
touch only briefly on the period from 1919 to 1950 because the
special stress in this period was laid on the investigations of the
foundations of geometry. The first professorship of mathematics at
the University of Tartu in the independent Estonia was occupied by
the Finnish mathematician Kalle V\"ais\"al\"a (1893--1968). He
spent three years in Tartu (1919--1922) and then moved to Turku
(Finland), where he took up the post of the professor of
mathematics at the University of Turku. In the period from 1930 to
1940 the investigations of the foundations of geometry at the
University of Tartu were continued in the papers of Jaan Sarv
(1877--1954), Arnold Humal (1908--1987), J\"uri Nuut (1892--1952) and
it should be mentioned that the approach developed by these
geometers was based on the notion of ``betweenness'', which is a
ternary relation on the set of points of a straight line
expressing the fact that one point lies between two others.
\section{Minimal surfaces and semiparallel submanifolds}
In this section we proceed to the next period of the development
of differential geometry in Estonia. This period covers the span
of time from the 1950s to the beginning of the 1990s, and the
development of differential geometry in this period was to a great
degree influenced by \"Ulo Lumiste. Let us mark the main stages of \"Ulo
Lumiste's scientific activity \cite{Lumiste1}. Lumiste was born in
V\"andra (Estonia) in 1929. He graduated from the University of Tartu
in 1952 and then he was sent to Moscow for post-graduate studies
at the Moscow University. In Moscow under the scientific
supervision of professor A. Vassiliev, Lumiste completed his thesis
devoted to the study of the geometry of submanifolds with fields
of asymptotic multidimensional directions in space forms and,
having successfully defended it, he obtained the candidate degree
(equivalent to PhD). Then he was appointed to a post of assistant
professor at the department of geometry of the University of
Tartu. In 1963--1965 Lumiste held a position of post-doctoral
researcher at the Moscow University. During this time he attended
the seminars on differential geometry led by S.P. Finikov, G.F.
Laptev and A.M. Vassiliev. Under the influence of these seminars
he began to study the theory of connections in fibre bundles and
its applications to geometry of families of homogeneous subspaces.
These investigations formed the basis for his DSc thesis, and he
defended it at the University of Kazan in 1968. In 1969 Lumiste
was appointed to the post of professor at the department of
algebra and geometry of the University of Tartu. He retired in
1995 and at present Lumiste is a professor emeritus.
Lumiste initiated research in the
following areas of differential geometry:%
\begin{enumerate}
\item{the minimal and ruled surfaces, their generalizations;}
\item{canonical fibre bundles, induced connections and general
theory of connections;}
\item{2-parallel and semiparallel submanifolds;}
\item{connections in gauge theories.}
\end{enumerate}
Lumiste contributed considerably to each of the fields mentioned
above, and he also continued the investigations on the
foundations of geometry started by his predecessors.
It is not possible to describe to the full extent the achievements
of \"U. Lumiste within the limits of this paper; therefore we
shall give only a brief description of his contribution to some of
the fields mentioned above, beginning with the theory of
minimal surfaces. Lumiste showed in \cite{Lumiste2} that a minimal
surface of constant Gaussian curvature (other than a plane) exists
only in the case of an elliptic space $S_n(c)$. He found all such
surfaces for $n\leq 5$ and proved that the Gaussian curvature of a
surface of this kind is $\frac{c}{3}$ in the case of $S_4(c)$
(this is the so-called Veronese surface), and that the curvature
of a minimal surface with constant Gaussian curvature equals zero in
the case of the elliptic spaces $S_3(c)$ and $S_5(c)$. It was also
shown in the same paper that every minimal surface of constant
Gaussian curvature is an orbit of a Lie group of isometries of a
corresponding elliptic space. In \cite{Lumiste3} Lumiste proved
the fundamental theorem for minimal surfaces, which states that a
minimal surface is entirely determined by its inner metric, the
principal curvatures and the angles between the principal
directions of all orders. It was also shown in the same paper that
every minimal surface can be bent continuously within its own
class while leaving the values of the first-order principal curvatures
fixed. In \cite{Lumiste4} Lumiste proved that each indicatrix of
normal curvature of order $l$ of a minimal surface is a circle if and only if
the submanifold generated by the osculating planes of order $l-1$
is minimal.
This direction of research turned out to be very fruitful and
Lumiste drew his student L. Tuulmets (b. 1933) into the
investigations of the classes of 3-dimensional ruled surfaces in
the 4-dimensional space ${R}^4$, where ${R}^4$ can be either the
Euclidean space ${E}^4$ or the Minkowski space ${R}^{1,3}$. The
starting point for Tuulmets' investigations was the paper
\cite{Lumiste4a}, where Lumiste elaborated the general theory of
quasi-congruences in the Euclidean space ${E}^4$. Tuulmets also
studied the various classes of surfaces in the 4-dimensional
Euclidean space ${E}^4$, Minkowski space ${R}^{1,3}$, projective
spaces $P^n$ and the spaces of constant curvature, where she used
the method of Cartan exterior forms and the systems of Pfaff
equations \cite{Tuulmets}. The question of consistency of a system
of Pfaff equations is crucial in this kind of investigation,
but it also requires a large volume of purely algebraic
computations, and Tuulmets, assisted by R. Roomeldi, an expert in
algebraic computer methods, successfully applied computer methods
in her investigations of Pfaff systems.
The congruences of null straight lines in the Minkowski space
$R^{1,3}$ were studied by R. Kolde (b. 1938) \cite{Kolde}. He
defined elliptic, hyperbolic and parabolic congruences of null
straight lines with the help of the local properties of
congruences. Using the notion of a central hypersurface, Kolde
constructed the set of symmetric tensors related to the rays in
the second order differential neighbourhoods. He showed that these
tensors could be used to canonize the frames in the case of the
congruences of hyperbolic and elliptic types. In the case of a
normal congruences it appeared that there were two different
canonical frames. In the special case of elliptic congruences
which are called isotropic similar canonical frames are determined
up to a parameter. Kolde found the geometrical meaning of these
canonical frames.
The next field of research initiated by \"U. Lumiste is the theory
of canonical fibre bundles, induced connections and the general
theory of connections \cite{Lumiste15} and \cite{Lumiste16}. A
Grassmannian manifold of $m$-dimensional subspaces in Euclidean
space or symplectic space can be considered as a fiber bundle
whose fibers are $m$-dimensional subspaces. There is a connection
on this fiber bundle which can be defined in a natural way, and
this connection allows one to study the geometry of a Grassmannian
manifold by means of its curvature and torsion. This direction of
research was
developed by Lumiste's post-graduate students A. Parring (b.
1940), E. Abel (b. 1947) and A. Fleischer (b. 1948). Parring
studied a family of $2m$-dimensional symplectic subspaces in a
$2n$-dimensional affine-symplectic space, interpreting this family
as a fiber bundle, whose standard fiber is a symplectic subspace
of this family \cite{Parring}. The group of symplectic motions
acts on each fiber of this fiber bundle. Parring used the
curvature and the torsion of an inner connection to study the
geometry of this fiber bundle and he also derived the structure
equations of this family of symplectic subspaces. Abel considered
an $(m+r)$-dimensional surface $V_{m+r}$ in a non-Euclidean space
\cite{Abel}. It can be shown that a surface $V_{m+r}$ stratifies,
with fibers that are $r$-parametric families of $m$-dimensional
non-Euclidean subspaces. Elaborating the ideas presented in the
papers \cite{Lumiste15}, \cite{Parring} Abel defined three
connections on a surface $V_{m+r}$ and studied the properties of
the torsion and the curvature of these connections. Fleischer
studied homogeneous quotient spaces of the group of motions in
Euclidean space $E^4$ and the Lie triple systems
\cite{Fleischer2}. He also studied relations between reducibility
of the holonomy group $\text{hol}(\nabla)$ and properties of the
nonassociative algebra $m$ with multiplication defined by the
tensor $A$ \cite{Fleischer1}. Fleischer proved that if $M = G/H$
is a Riemannian non-symmetric reductive homogeneous space of a
simple Lie group $G$ and the isotropy representation of $H$ has
only inequivalent irreducible components, then any invariant
metric connection on $M$ has an irreducible holonomy group.
Now we go on to the next field of research initiated by Lumiste
which is the theory of semiparallel submanifolds. Given an
$m$-dimensional smooth manifold $M^m$ immersed isometrically into
an $n$-dimensional Euclidean space $E^n$, one has two quadratic
differential forms associated with this submanifold, where one of
them is the first fundamental form $g=g_{ij}\,dx^{i}dx^{j},\;
i,j=1,2,\ldots,m$, where $x^1, x^2,\ldots, x^m$ are the local
coordinates of $M^m$, determined by the metric $(g_{ij})$, and the
other is the second fundamental form $h=h_{ij}\,dx^{i}dx^{j}$ with
values in the normal vector bundle over a submanifold $M^m$ (the
fiber of this vector bundle at a point $p$ of a submanifold $M^m$
is the orthogonal complement of the tangent space $T_p M^m$ of a
submanifold $M^m$ with respect to the whole Euclidean space
$E^n$). It is well known that the first fundamental form $g$ is
always parallel, i.e., $\nabla g=0$, where $\nabla$ is the
Levi-Civita connection, but the second fundamental form $h$ does
not need to be parallel. We remind that a submanifold $M^m$ in a
Euclidean space $E^n$ is said to be a parallel submanifold if
${\bar\nabla} h=0$, where ${\bar\nabla}$ is the van der
Waerden-Bortolotti connection on a submanifold $M^m$ which is a
pair of the Levi-Civita connection $\nabla$ and the normal
connection $\nabla^{\bot}$, i.e.,
$\bar\nabla=\nabla\oplus\nabla^{\bot}$. If
$\{e_{\alpha}\}$, where $\alpha=m+1,\ldots,n$, is an adapted orthonormal
local frame of the normal vector bundle, then
$h_{ij}=h_{ij}^{\alpha}\,e_{\alpha}$ and the components
$h_{ijk}^{\alpha}$ of ${\bar\nabla} h$ determined by
$h^{\alpha}_{ijk}= {\bar\nabla}_i h^{\alpha}_{kj}$ can be
expressed in terms of the components of the second fundamental
(mixed) tensor $h_{ij}^{\alpha}$ as follows:
\begin{equation}
h_{ijk}^{\alpha}\,\omega^k=%
dh_{ij}^{\alpha}-h_{kj}^{\alpha}\,\omega_i^k-%
h_{ik}^{\alpha}\,\omega_j^k+h_{ij}^{\beta}\,\omega_{\beta}^{\alpha},
\label{second}
\end{equation}
where $\omega^j_i$ are the connection 1-forms of the Levi-Civita
connection $\nabla$ and $\omega^{\alpha}_{\beta}$ the connection
1-forms of the normal connection $\nabla^{\bot}$. In the case of
a parallel submanifold $M^m$ the components of ${\bar\nabla} h$
are all equal to zero, i.e., $h_{ijk}^{\alpha}=0$. Applying the
exterior differential to both sides of (\ref{second}), we
obtain
\begin{equation}
{\bar\nabla} h_{ijk}^{\alpha}\wedge \omega^k=%
h_{ij}^{\beta}\,\Omega^{\alpha}_{\beta}-%
h_{kj}^{\alpha}\,\Omega^{k}_{i}-%
h_{ik}^{\alpha}\,\Omega^{k}_{j},
\label{integrability1}
\end{equation}
where $\Omega^{k}_{i}$ is the curvature 2-form of the Levi-Civita
connection $\nabla$ and $\Omega^{\alpha}_{\beta}$ is the curvature
2-form of the normal connection $\nabla^{\bot}$. The above
relation (\ref{integrability1}) gives us the integrability
condition for the differential system ${\bar\nabla} h=0$ which is
\begin{equation}
h_{ij}^{\beta}\,\Omega^{\alpha}_{\beta}-%
h_{kj}^{\alpha}\,\Omega^{k}_{i}-%
h_{ik}^{\alpha}\,\Omega^{k}_{j}=0.%
\label{integrability2}
\end{equation}
A submanifold $M^m$ is said to be a semiparallel submanifold if
the integrability condition (\ref{integrability2}) is satisfied.
The term {\it semiparallel submanifolds} for submanifolds with
second fundamental form satisfying (\ref{integrability2}) was
introduced by J. Deprez in 1985. At the same time Lumiste together
with his post-graduate student V. Mirzoyan independently began to
study an interesting and important class of submanifolds with
parallel $\bar{\nabla} h$, which form a subclass of the class of
semiparallel submanifolds, as follows from
(\ref{integrability2}). It should be noted that Deprez only
classified and described the semiparallel surfaces $M_2$ and
hypersurfaces $M_{n-1}$ in $E_n$, while in the main the theory of
semiparallel submanifolds was developed by Lumiste. The theorem
asserting that any semiparallel submanifold $M_m$ in a space form
$M_n(c)$ is the second order envelope of an orbit of a Lie group
of isometries was proved by Lumiste in 1990 and this theorem plays
a crucial role in the theory of semiparallel submanifolds since it
suggests that in order to develop the theory of semiparallel
submanifolds one should first of all find all the symmetric
orbits and then to describe the second order envelopes of these
orbits. It can be shown that any symmetric orbit is an orthogonal
product of irreducible ones, where an irreducible orbit is a
minimal submanifold in its sphere (except the case when
irreducible orbit is a plane). Lumiste showed that some
irreducible orbits can be constructed by means of mappings which
are known in the algebraic geometry. It turned out that symmetric
orbits of certain kind which arise in the connection with the
study of semiparallel submanifolds, whose first normal subspace at
any point has the maximal possible dimension $\frac{1}{2}m(m+1)$,
were earlier introduced and studied by R. Mullari (1931--1969)
\cite{Mullari1} and \cite{Mullari2} who was Lumiste's first
post-graduate student. Mullari considered a symmetric orbit which
is a space of constant curvature immersed into $E_n$ in such a way
that all its inner motions are generated by isometries of $E_n$
and $n=\frac{1}{2}m(m+3)$, where $m$ is the dimension of an orbit.
He called this kind of symmetric orbit {\it maximal symmetric}.
Later they were given the name {\it Veronese orbits} because any
of these orbits can be constructed as the image of the
$m$-dimensional sphere $S_m$ with respect to Veronese mapping. The
theorem proved by Lumiste \cite{Lumiste5} states that if $m\geq 2$
and $n=\frac{1}{2}m(m+3)$, then a complete semiparallel submanifold
$M_m$ in $E_n$ with maximal possible dimension $\frac{1}{2}m(m+1)$
of the first normal subspace at any point is a single Veronese
orbit.
The study of three-dimensional semiparallel submanifolds in $E_6$
(see \cite{Lumiste6} and \cite{Lumiste7}) led to a symmetric orbit, which
is generated by a one-parameter family of 2-spheres, whose
orthogonal trajectories are circles. The direct generalization of
this is a \textit{Segre submanifold} $S_{(m_1,m_2)}$, which can be
constructed by means of the Segre map known in algebraic geometry.
The second order envelopes for Segre orbits can be found in
\cite{Lumiste8}. This direction of research was also developed by
K. Riives (b. 1942). Riives proved \cite{Riives1} that a
semiparallel submanifold $M^4$ of a Euclidean space $E^n$, $n>9$,
is a second order envelope of $V^2(r_1) \times S^1(r_2)\times
S^1(r_3)$ (where $V^2 (r_1)$ is a Veronese surface in $E^5)$. In
the same paper some geometrical properties of Veronese and
Clifford leaves are described in terms of a certain function.
Using the Cartan moving frame method and the exterior differential
calculus Riives described in \cite{Riives2} some special classes of
curves on irreducible envelopes of the reducible symmetric
submanifolds $V^2(r_1) \times S^1 (r_2) \times S^1 (r_3)$ with a
Veronese component, which is a Veronese surface in $E^5$.
The third class of symmetric orbits was found by Lumiste in
connection with the study of semiparallel submanifolds; it too can
be constructed with the help of a mapping known in algebraic
geometry. Let $G_{k,l}$ be the Grassmannian of $k$-dimensional
subspaces of the real Euclidean space $E_l$, and let $T^{(0,k)}\subset
(E_l^{*})^{\otimes k}$ be the space of skew-symmetric
$k$-covariant tensors. We can consider the Grassmannian $G_{k,l}$
as a submanifold in $T^{(0,k)}$, where it turns out to be an orbit. It
should be noted that the geometry of $G_{k,l}$ was also studied by
Lumiste's post-graduate student I. Maasikas (b. 1944) in \cite{Maasikas}.
Lumiste proved in \cite{Lumiste9} that this orbit is a symmetric
orbit only in the case of $k=2$ (it is called the Pl\"ucker
orbit). Later he showed that the second order envelope of the Pl\"ucker
orbit $G_{2,l}$ is trivial, which means that it is neither more
nor less than the same orbit. Lumiste introduced the term {\it
umbilic-like orbits} for the symmetric orbits, whose second order
envelopes are trivial. He has shown that the class of umbilic-like
orbits includes not only the previously mentioned Pl\"ucker
orbits, but also the unitary orbits and Veronese-Grassmann orbits.
These results are summarized by Lumiste in his monographs
\cite{Lumiste12} and \cite{Lumiste8}.
Lumiste's post-graduate student M. V\"aljas (b. 1958) studied totally quasiumbilical
submanifolds with
nonflat normal connection, and in a joint paper with \"U.
Lumiste \cite{Valjas1} the existence of totally quasiumbilical
submanifolds in Euclidean spaces with codimension 2 having a
non-flat normal connection. He also studied Dupin-Mannheim
submanifolds and Clifford cones in $E\sb{n+m}$ \cite{Valjas2}.
Another of Lumiste's post-graduate students, T. Virovere (b. 1957),
studied the evolute of order $\lambda$ for a submanifold in a
Euclidean space. A submanifold $N$ in $E^n$ is called the evolute
of order $\lambda$ for a submanifold $M^m$ in $E^n$ if there
exists a submersion $f:N\to M$ such that the osculating plane of
order $(\lambda-1)$ of $N$ at a point $y\in N$ belongs to the
normal space to $M$ at $f(y)$ and, moreover, this osculating plane
intersects $M$ at $f(y)$. Virovere derived several criteria for
the existence of evolutes (see \cite{Virovere}).
\section{Fiber bundles, connections and gauge theories}
The question of the nature of space and time has always stirred
the minds of philosophers, mathematicians and physicists. Geometry
provides us with various mathematical models of space, and this is
the reason why the development of geometry has been so closely
related to the development of classical mechanics and theoretical
physics. The appearance of non-abelian gauge theories in the 1950s
gave a
powerful impetus to this relation. It is well known
that the theory of connections on principal and vector bundles is
an adequate geometric framework for the non-abelian gauge
theories. There were favorable conditions in Tartu in the 1970s to
elaborate the relation between theory of connections and gauge
theories because there was a group of physicists at the Institute
of Physics studying the gauge theories and a group of geometers at
the Department of Geometry and Algebra of the University of Tartu
studying the theory of connections on fiber bundles. The
physicists felt the need of basic knowledge in differential
geometry of fiber bundles such as connection, curvature,
characteristic classes (or Chern classes) and this led to the
joint seminar with geometers, which ran for several years (usually
once a week). This seminar was initiated by Madis K\~oiv on the
part of physicists and by \"Ulo Lumiste on the part of geometers.
This collaboration turned out to be very fruitful and it had not
only educational importance but also led to original research
in areas such as B\"acklund transformations, quantum
Yang-Mills theory, supersymmetry, supermanifolds and supergravity.
In this paper we shall briefly describe the investigations
concerning the geometrical meaning of the Faddeev-Popov ghost fields
and BRST-transformations.
The first non-abelian gauge field theory with the gauge group
$SU(2)$ was constructed by Yang and Mills (see \cite{Yang-Mills}).
From a geometric point of view an adequate space for Yang-Mills
theory is a principal fiber bundle $P(M,G)$, where $M$ is a
smooth four-dimensional manifold and $G=SU(2)$ is the structure
group of $P$ called a gauge group in field theory. A connection
1-form $\omega$ determines the Yang-Mills field potentials. Indeed,
let $\pi: P\to M$ be the projection of a principal fiber bundle $P$
and $\sigma: V\to \pi^{-1}(V)$ be a section of some local
trivialization of $P$ over an open subset $V$ of $M$. Then the
pull-back of a connection form $\sigma^{*}\omega$ is the Lie
algebra $su(2)$-valued 1-form on a subset $V$ and can be expressed
in local coordinates $x^{\mu}$ of $V$ as follows
$\sigma^{*}\omega=A^{(V)}_{\mu}\,dx^{\mu}$. If the coefficients
$A^{(V)}_{\mu}$ satisfy the Yang-Mills equations one can interpret
them as Yang-Mills field potentials. A connection 1-form $\omega$
with coefficients of its pull-back $\sigma^{*}\omega$ satisfying
the Yang-Mills equations is called a {\it Yang-Mills connection}
and its pull-back $\sigma^{*}\omega$ a {\it Yang-Mills} 1-{\it form}. If
$U$ is some other open subset of $M$ such that $U\cap V\neq
\emptyset$ and the section $\xi:U\to \pi^{-1}(U)$ of a local
trivialization over $U$ is related to the section $\sigma$ by the
$G$-valued function $g(x)$, i.e.,
\begin{equation}
\xi (x)=\sigma(x)\,g(x),\qquad x\in U\cap V, \nonumber
\end{equation}
then the pull-back $\xi^{*}\omega$ of a connection on $U$ is
related to the pull-back $\sigma^{*}\omega$ as follows
\begin{equation}
\xi^{*}\omega=Ad(g^{-1}(x))\,\sigma^{*}\omega+
g^{-1}(x)\,dg(x).
\nonumber
\end{equation}
If $\xi^{*}\omega=A^{(U)}_{\mu}\,dx^{\mu}$, then the above relation
leads to the relation between the corresponding Yang-Mills
potentials
\begin{equation}
A^{(U)}_{\mu}=g^{-1}(x)\,A^{(V)}_{\mu}\,g(x)+g^{-1}(x)\,\partial_{\mu}
g(x), \label{nonabelian}
\end{equation}
which is the gauge transformation in the non-abelian case. The
curvature 2-form
\begin{equation}
\Omega=D_{\omega}\,\omega=
d\,\omega+\frac{1}{2}\,\lbrack \omega,\omega \rbrack,
\label{curvature}
\end{equation}
where $D_{\omega}$ is the covariant differential, written in local
coordinates
\begin{equation}
\Omega=F_{\mu\nu}\,dx^{\mu}dx^{\nu}, \label{strength}
\end{equation}
determines the strength $F_{\mu\nu}$ of a Yang-Mills field, which
can be expressed in terms of the potentials as follows
\begin{equation}
F_{\mu\nu}=\partial_{\mu} A_{\nu}-\partial_{\nu} A_{\mu}+
\lbrack A_{\mu},A_{\nu} \rbrack.
\end{equation}
The Yang-Mills action\vskip -.5cm
\begin{equation}
S_{YM}=\frac{1}{4}\,\int_M \langle \Omega,*\Omega\rangle,%
\label{action}
\end{equation}
where $*$ is the (Hodge) star operator and $\langle\, ,\, \rangle$
is a Killing form on the Lie algebra $su(2)$, is invariant with
respect to gauge transformations.
The invariance of the action $S_{YM}$ with respect to gauge
transformations (\ref{nonabelian}) introduces a distinguishing feature
into the Yang-Mills field theory or, in a more general context, into
any non-abelian gauge field theory. Indeed, it shows that the real
physical configuration of a gauge field theory is determined not
by a single set of field potentials $A=\{A_{\mu}(x)\}$, but by the
entire class of gauge equivalent sets of field potentials, where
two sets $A=\{A_{\mu}(x)\}$ and $A'=\{{A'}_{\mu}(x)\}$ are said to be
gauge equivalent if\vskip -.5cm
\begin{equation}
A'_{\mu}=g^{-1}(x)\,A_{\mu}\,g(x)+g^{-1}(x)\,\partial_{\mu} g(x).
\end{equation}
Thus a real physical configuration is determined up to a gauge
transformation. The complications which arise in a quantization
of a gauge theory are caused just by this ambiguity. This problem
can be fixed by a parametrization of the space of real physical
configurations where each real physical configuration will be
uniquely determined by a single set of parameters. This can be
done by imposing an additional condition on Yang-Mills potentials
and this condition should pick out a single representative from
every gauge equivalent class. This condition is called {\it
gauge-fixing condition} or simply gauge in gauge field theories.
The quantum Yang-Mills field theory was constructed by L. Faddeev,
V. Popov and B. DeWitt, who showed that the approach of R.
Feynman based on the functional integral was the most suitable scheme for
the quantization of gauge field theories because the principle of
gauge invariance could be expressed in terms of this approach very
easily: one should integrate not over the space of all field
configurations, but only over the space of gauge-equivalent
classes of field configurations. However, applying this procedure
to the Yang-Mills field theory one encounters a problem of the
non-local functional ${\mbox det} M$, where $M$ is the operator
$$M(\alpha) =\partial^{\mu}\partial_{\mu}
\alpha - \partial^{\mu} \lbrack A_{\mu}, \alpha\rbrack,$$
which appears in the functional integral for the $S$-matrix. This was
the reason why the integrand in the functional integral of the
generating functional was not in the canonical form $\exp(i\times
action)$. Faddeev and Popov solved this problem by
introducing the additional anticommuting fields $c(x)$ and ${\bar
c}(x)$, which allow one to put the determinant $\det M$ into the
form of an integral over the infinite dimensional Grassmann
algebra $\mathcal{R}$\ (here $c(x)$ and ${\bar c}(x)$ are the generators of
this algebra) as follows
\begin{equation}
\det M= \int \exp \Big\{ i \int
{\bar c}^a(x)\, M^{ab}\, c^b(x)\, dx \Big\} \prod_x d{\bar c}\,dc.
\end{equation}
The anticommuting fields $c(x)$ and ${\bar c}(x)$ were later given the
name of {\it Faddeev-Popov ghost fields}. It was shown by C.
Becchi, A. Rouet, R. Stora and independently by I. Tyutin that the
quantum effective action (this is the action one obtains by adding
to the classical Yang-Mills action the terms generated by the
Faddeev-Popov ghost fields) is invariant with respect to
transformations
\begin{eqnarray}
A^a_{\mu}(x)&\to & A^a_{\mu}(x)+\nabla_{\mu}\,c^a(x)\,\epsilon,
\label{I}\\
c^a (x)&\to &
c^a(x)-\frac{1}{2}\,t^{a}_{bd}\,c^b(x)\,c^d(x)\,\epsilon,
\label{II}\\
{\bar c}^a (x)&\to & {\bar c}^a(x)+\lbrack
\partial^{\mu}A^a_{\mu}(x)\rbrack \,\epsilon,\label{III}
\end{eqnarray}
where $t^{a}_{bd}$ are the structure constants of the Lie algebra
$su(2)$, $\nabla_{\mu}$ is the covariant derivative and
$\epsilon$ is a Grassmann parameter ($\epsilon^2=0$) anticommuting
with the Faddeev-Popov fields and commuting with the Yang-Mills
potentials. The transformations (\ref{I})--(\ref{III}) are called
the {\it BRST transformations}. This kind of transformation is
known in modern field theory under the name of {\it
supersymmetry}. The remarkable property of the BRST
transformations is that they are nilpotent. The BRST
transformations induce the BRST operator
\begin{equation}
\delta\,A^a_{\mu}(x)= \nabla_{\mu}\,c^a(x),\;
\delta\,c^a(x)=-\frac{1}{2}\,t^{a}_{bd}\,c^b(x)\,c^d(x),\;
\delta\,{\bar c}^a(x)=\partial^{\mu}A^a_{\mu}(x).
\label{BRST-operator}
\end{equation}
In the modern formulation the last part of this
definition of the BRST operator is usually expressed through an auxiliary
field $b^a(x)$, and the last formula in (\ref{BRST-operator})
takes on the form $\delta\,{\bar c}^a(x)=b^a(x)$. Later the
anti-BRST operator ${\bar\delta}$ was added to BRST operator and
together they form the {\it BRST-algebra}
\begin{equation}
\delta^2={\bar\delta}^2=0,\qquad
\delta\,{\bar\delta}+{\bar\delta}\,\delta=0.
\end{equation}
The appearance of the Faddeev-Popov ghost fields in the quantum
Yang-Mills theory raised an interesting problem: to elaborate a
geometric structure which would allow one to incorporate these
additional fields into the known geometric framework of gauge fields
based on the fiber bundle technique and to derive the
BRST transformations from known equations. The property of
anticommutativity of the ghost fields suggests an idea to
construct their geometric interpretation by means of differential
forms since the wedge product of differential 1-forms is also
skew-symmetric. This idea seems to be very alluring if we look at
the BRST transformation of the ghost fields (\ref{II}) which is
very similar to the Cartan-Maurer equation and this suggests to
construct the BRST operator by means of exterior derivative. The
geometric interpretation of the ghost fields and BRST
transformations based on the above-mentioned idea was proposed and
developed in the papers \cite{Thierry-Mieg1} and \cite{Thierry-Mieg2}.
Though the geometric interpretation of the
ghost fields and BRST operators in terms of differential forms and
exterior derivative is very attractive, it has a problem with that
part of the BRST operator $\delta$ which is determined by the
transformation of the anti-ghost field (\ref{III}). Indeed, the
BRST operator $\delta$ transforms the anti-ghost field ${\bar
c}^a(x)$ into an auxiliary bosonic field $b^a(x)$ and therefore
this part of the BRST operator is not consistent with the
properties of the exterior derivative which raises the degree of a
form by 1. In \cite{Lumiste10} Lumiste improved the interpretation
proposed in \cite{Thierry-Mieg1} and \cite{Thierry-Mieg2} and
extended its framework to include the anti-ghost fields ${\bar
c}(x)$ and the anti-BRST operator ${\bar\delta}$. The geometric
construction he proposed for the interpretation of the ghost field
$c(x)$ was not as rigid as in \cite{Thierry-Mieg1} and
\cite{Thierry-Mieg2}, and this allowed him to keep the dependence of
$c(x)$ on a gauge-fixing condition. He also showed that BRST
transformations for the fields $A_{\mu}(x)$ and $c(x)$ could be derived
from the well known Laptev equations for a connection on a
principal fiber bundle $P(M,G)$. Lumiste elaborated a formalism of
$q$-vector fields considered as $(-q)$-forms for geometric
interpretation of the anti-ghost field ${\bar c}(x)$ and an
analogue of the exterior derivative, which can be used together
with the well known operator $\star^{-1}\,d\,\star$, where $\star$
is the Hodge operator, for geometric interpretation of the
anti-BRST operator $\bar\delta$. Let us briefly describe the
geometric construction elaborated in \cite{Lumiste10}. Let $z$ be
a point of a principal fiber bundle $\pi: P \to M$ with a
structure group $G$, $p=\pi(z)\in M$ be the projection of $z$ and
$S_z$ be a subspace of the tangent space $T_zP$ such that
$T_zP=S_z\oplus V_zP$, where $V_zP$ is the tangent space to the
fiber $\pi^{-1}(p)$ passing through a point $z$. Let ${\mathcal
J}_P(z)$ be the set of all subspaces $S_z$ at a point $z$
satisfying $T_zP=S_z\oplus V_zP$. It can be proved that the set
${\mathcal J}_P=\bigcup_{z\in P} {\mathcal J}_P(z)$ is a smooth
manifold, which can be endowed with the structure of a principal
fiber bundle over $P$ with the projection $\pi': {\mathcal J}_P\to
P$ defined by $\pi'(S_z)=z$. If $R_g(z)=zg$ is the right action of
the group $G$ on a principal bundle $P$, then the right action
$R_g^{*}$ of $G$ on ${\mathcal J}_P$ is defined by
$R_g^{*}(S_z)=S_{zg}=dR_g(S_z)$, where $dR_g: T_zP\to T_{zg}P$ is
the differential of $R_g$.%
Let ${\mathcal V}(P)$ be the Lie algebra of vertical (or
fundamental) vector fields on $P$. It is well known that this Lie
algebra is isomorphic to the Lie algebra ${\underline G}$ of $G$,
i.e., ${\mathcal V}(P)\simeq {\underline G}$. If $Y$ is a
fundamental vector field then let us denote by ${\xi}_Y$ the
corresponding element of the Lie algebra ${\underline G}$ which
induces $Y$. Let $\Sigma: P\to {\mathcal J}_P$ be a smooth section
of the principal bundle ${\mathcal J_P}$. Any section $\Sigma$
generates the ${\underline G}$-valued $1$-form $\theta$ on $P$
which is defined as follows: if $X$ is a vector field on $P$, then
$X$ can be written as the sum of two vector fields $Y$ and $Z$, where
$Z_z\in \Sigma(z)$ and $Y$ is the uniquely determined vertical
vector field, and%
\begin{equation}
\theta(X)=\xi_{Y}.
\end{equation}%
The ${\underline G}$-valued $1$-form $\theta$ induces the
${\underline G}$-valued $1$-form $\tilde\theta$ on the principal
bundle ${\mathcal J}_P$ and the value of this form on a tangent
vector $\tilde v$ to ${\mathcal J}_P$ at a point $S_z$ is
determined by the formula
\begin{equation}
{\tilde\theta}(\tilde v)=\theta_z(d\pi'(\tilde v)).
\end{equation}
In \cite{Lumiste10} Lumiste proposed to consider the ${\underline
G}$-valued $1$-form $\tilde\theta$ as a geometric interpretation
for the Faddeev-Popov ghost field $c$. A smooth section $\Sigma$
used in the construction of $\tilde\theta$ can be interpreted in
terms of a gauge theory as a gauge-fixing condition and this shows
the advantage of the approach proposed by Lumiste, which allows one to
keep the dependence of the ghost field on a gauge.
The geometric interpretation of ghost fields and
BRST-supersymmetries in terms of differential forms suffers from a
shortcoming, which does not allow one to take into account all
properties of the ghost fields. It is important that ghost fields
$c^a(x)$ and ${\bar c}^b(x)$ are the generators of an infinite
dimensional Grassmann algebra $\mathcal{R}$ as it is mentioned
above. This means that ghost fields anticommute not only with
respect to superscripts $a$ and $b$, but also with respect to a point $x$
of a base manifold $M$, i.e.,\vskip -.5cm
\begin{equation}
c^a(x)\,c^b(y)=-c^b(y)\,c^a(x),%
\label{ghostrelations}
\end{equation}
where $x$ and $y$ are points of a base manifold $M$. The commutation
relations (\ref{ghostrelations}) show that if we consider the
ghost fields as generators of an infinite dimensional Grassmann
algebra, then the product of the ghost fields $c^a(x)$ and $c^b(y)$ is
determined even if $x$ and $y$ are different points of a manifold $M$. It
is well known that one can multiply differential forms pointwise
and the product of two differential forms makes no sense if they are
taken at different points of a manifold. In order to incorporate
the anticommutation relations (\ref{ghostrelations}) into a
geometric interpretation Lumiste and the author of this paper
constructed an infinite dimensional supermanifold
$\mathcal{A}_{\mathcal{R}}$ (see \cite{Lumiste11}). There are a few
approaches to the notion of a supermanifold and one of them was
proposed by F. Berezin. Briefly it can be described as follows:
given an ordinary smooth manifold $M$ one constructs a
supermanifold by means of the theory of sheaves, where $M$ is
usually called an underlying or a base manifold. The underlying
manifold $\mathcal{A}$ for an infinite dimensional supermanifold
$\mathcal{A}_{\mathcal{R}}$ proposed in \cite{Lumiste11} was the
infinite dimensional manifold of all smooth connections of a given
principal fiber bundle $P(M,G)$, where the differential structure
was defined with the help of a Banach space structure, and the
sheaves determining the structure of a supermanifold were
constructed with the help of infinite dimensional Grassmann
algebra $\mathcal{R}$. Figuratively speaking, this supermanifold
was infinite dimensional with respect to both sectors, even and
odd. The underlying manifold of all smooth connections
$\mathcal{A}$ has a rich geometric structure. It is well known
that it is an infinite dimensional principal fiber bundle with
respect to the action of the infinite dimensional Lie group of
gauge transformations and a gauge-fixing condition can be
considered as a section of this bundle. Given a smooth function
determined on an ordinary manifold $M$ one can extend it to a
smooth function on the supermanifold constructed over $M$. This
procedure is called a Grassmann analytical continuation in the
theory of supermanifolds. The Yang-Mills action (\ref{action}) can
be considered as a function on the infinite dimensional manifold
of the classes of gauge-equivalent connections and the infinite
dimensional supermanifold $\mathcal{A}_{\mathcal{R}}$ proposed in
\cite{Lumiste11} made it possible to show that the quantum effective action
is a Grassmann analytical continuation of the Yang-Mills action
(\ref{action}) along the fibers of the corresponding principal
fiber bundle (determined by the action of the group of gauge
transformations). The BRST-supersymmetries were interpreted as a
family of vector fields on the infinite dimensional supermanifold
$\mathcal{A}_{\mathcal{R}}$.
\section{Jet bundles and symmetries of differential equations}
Every time when we solve a differential equation, study the
singularities of a mapping or compute the invariants of a Lie
group we (in some way or other) use the structure of a jet space
$J_{n,m}$, which means that we make use of the operator of total
differentiation, Cartan forms and Lie vector fields. This
direction of research in the area of differential geometry is
developed by Maido Rahula, who succeeded \"Ulo Lumiste as
professor of geometry at the University of Tartu in 1990
when Lumiste retired.
Let us list the main periods of Maido Rahula's biography. Rahula was
born in J\"arvamaa (Estonia) in 1936. When he was 13 years old his
family was banished to Siberia during the Stalinist deportations. He
graduated from the University of Tomsk in 1959 and then came back to
Estonia, where he had the luck to enter the post-graduate studies
at the University of Tartu under the supervision of Lumiste though
according to the system existing at that time the members of
deported families were forbidden to enter the universities located
in the European part of the Soviet Union. Rahula defended his PhD
thesis "On higher order differential geometry" at the University
of Tartu in 1964. He spent four years (1967--1971) in Algeria
teaching mathematics at the University of Algeria within the
framework of the program of cooperation of the Soviet Union with
developing countries. The next period (1972--1989) of his life was
bound up with the Institute of Technology in Odessa.
Let us consider the jet space $J_{n,m}$ in the simplest case of
$n=m=1$. Let%
\begin{equation}
t, u, u', u'', \ldots
\end{equation}
be the coordinates in this space. The operator of total
differentiation
\begin{equation}
D=\frac{\partial}{\partial t}+u'\,\frac{\partial}{\partial u}+
u''\,\frac{\partial}{\partial u'}+\ldots,
\end{equation}
can be considered as a linear vector field. The flow (i.e., the
one-parameter group of diffeomorphisms generated by this vector
field) is determined by the exponential law\vskip -.5cm
\begin{equation}
U'=C\,U \;\;{\rm and}\ \ U_t=e^{Ct}\,U, \label{exponentiallaw1}
\end{equation}
\vskip -.1cm
\noindent where $U$ is the column with an infinite number of entries
$u,u',u'',\ldots $ and $C$ is the infinite-dimensional matrix,
whose non-zero elements can be obtained from the elements of the
main diagonal of the infinite-dimensional unit matrix by moving
each of them to the right along the corresponding row to the next
position. It is obvious that the matrix $C^l$, where $l$ is an
integer, can be obtained from the unit matrix by repeating $l$
times the previously described procedure. The exponential of the
matrix $C\,t$, where $t$ is a parameter, multiplied from the right
by the column $U$ yields the column $U_t$ whose first element\vskip -.5cm
\begin{equation}
u_t=\sum_{k=0}^{\infty} u^{(k)}\; \frac{t^k}{k!},
\end{equation}
\vskip -.2cm
\noindent and the next elements are the derivatives of $u_t$, i.e., $u'_t,
u''_t, \ldots $. Thus the formula $U_t=e^{Ct}\,U$ describes the
motion of a point $(t,U_t)$ along the trajectory of the vector
field $D$. In the general case of a jet space $J_{n,m}$ we have the
system of operators $D_i,\,i=1,2,\ldots,n$ and the formula
$U_t=e^{Ct}\,U$ written by means of multi-indices determines the
$n$-dimensional orbits of the additive group ${\bf R}^n$.
Let us denote by $\frac{\partial}{\partial U}$ the matrix whose
single row consists of the elements
\begin{equation}
\frac{\partial}{\partial u},\,\frac{\partial}{\partial u'},\,
\frac{\partial}{\partial u''},\ldots
\end{equation}
Then the operator $D$ can be written in the form
\begin{equation}
D=\frac{\partial}{\partial t}+\frac{\partial}{\partial U}\,U'.
\end{equation}
The entries of the matrix $(\begin{array}{cc}\!\!\frac{\partial}{\partial t} &
\frac{\partial}{\partial U}\!\!\\ \end{array})$ form the basis in the jet space
$J_{1,1}$ and the entries of the matrix $\left(\begin{array}{c} dt \\ dU\\
\end{array}\right)$ form the dual basis. If we replace the first
entry $\frac{\partial}{\partial
t}$ in the first matrix by the operator of total differentiation
$D$, then the first entry in the second matrix containing the
elements of the dual basis should be replaced by the Cartan form
$\omega=dU-U'\,dt$. This follows from the formulae
\begin{equation}
(\begin{array}{cc}\!\! D & \frac{\partial}{\partial U}\!\! \\ \end{array})
=(\begin{array}{cc}\!\!\frac{\partial}{\partial t} &
\frac{\partial}{\partial U}\!\! \\
\end{array})\,\left(%
\begin{array}{cc}
1 & 0 \\
U' & E \\
\end{array}%
\right),
\left(%
\begin{array}{c}
dt \\
\omega \\
\end{array}%
\right)=\left(%
\begin{array}{cc}
1 & 0 \\
-U' & E \\
\end{array}%
\right)\,\left(%
\begin{array}{c}
dt \\
dU \\
\end{array}%
\right). \nonumber
\end{equation}
The basis and its dual basis depend on a point of the jet space
and if this point starts to move along the trajectory of the
vector field $D$ passing through this point, then both bases
change and this change or dependence on a parameter $t$ can be
described by the same exponential law (\ref{exponentiallaw1}).
Indeed, if\vskip -.5cm
\begin{equation*}
\left(\frac{\partial}{\partial U}\right)'=-\frac{\partial}{\partial
U}\,C\;\;{\rm and}\ \
\omega'=C\,\omega,
\nonumber
\end{equation*}
where the stroke stands for the Lie derivative with respect to
$D$, then
\begin{equation*}
\left(\frac{\partial}{\partial
U}\right)_t=\frac{\partial}{\partial
U}\,e^{-Ct}\;\;{\rm and}\ \ \omega_t=e^{Ct}\,\omega.
\end{equation*}
The formula $I=e^{-Ct}\,U$ determines an infinite number of
invariants of the operator $D$. Indeed, we have
$I'=e^{-Ct}\,(U'-CU)=0$ and taking into account
$dI=e^{-Ct}\,\omega$, we conclude that the exponential $e^{-Ct}$ is
an integrating matrix for the system of forms $\omega$. The dual
formula
\begin{equation*}
\frac{\partial}{\partial I}=\frac{\partial}{\partial U}\,e^{Ct}
\end{equation*}
determines an infinite number of invariant operators. These
operators form the basis for Lie vector fields and for
infinitesimal symmetries of the operator $D$.
Any vector field $P$ can be written in the above defined basises
as follows
\begin{equation}
P=\frac{\partial}{\partial t}\,\xi+\frac{\partial}{\partial
U}\,\lambda=D\,\xi+\frac{\partial}{\partial
U}\,\mu=D\,\xi+\frac{\partial}{\partial I}\,\nu,
\end{equation}
where
$\lambda=P\,U,\,\mu=\omega(P),\,\nu=P\,I,\,\mu=\lambda-U'\,\xi,\ {\rm and}\ \nu=e^{-Ct}\,\mu$.
It can be proved that the following conditions are equivalent to
each other:
\begin{enumerate}
\item vector field $P$ is a Lie vector field;
\item $\nu'=0$;
\item $\mu'=C\,\mu$;
\item $\lambda'=C\,\lambda+U'\,\xi'$.
\end{enumerate}
The second condition is the simplest one, and it shows that the
components of $\nu$ are the invariants of the field $D$. The third
condition shows that the entries of the column $\mu$ are
$f,f',f'',\ldots$, where $f$ is a generating function. For
instance the functions $1,t,t^2/2,\ldots$ are the generating
functions for the field $\frac{\partial}{\partial I}$ which is a
vertical vector field since $\xi=0$. The fourth condition is more
complicated since a generating function does not enter it
explicitly, but this condition determines the components of a Lie
vector field in the natural basis. It should be mentioned that the
fourth condition can be found in the classical Lie theory.
The above described general scheme was developed by Rahula and its
more detailed description can be found in the monograph
\cite{Rahula1}. This scheme can be applied in the theory of
differential equations. Let $F(t,U)=0$ be a differential equation,
which we shall write in the form $F=0$. This equation determines
the surface $A_0$ in the jet space $J_{1,1}$. The stratification
of singularities $A_0\supset A_1\supset A_2\supset\ldots$ is
defined on the surface $A_0$ by means of the flow of the vector
field $D$, where $A_n$ is determined by the system of equations
$F^{(k)}=0$, where $k=0,1,2,\ldots,n$. The solutions of $F=0$ can be
constructed by means of those trajectories of the operator $D$
which belong to $A_n$ for each integer $n$. The integral of an
equation $F=0$ is determined by a function which is constant on
the surface $A_0$. The symmetries of $F=0$ are the transformations
of the jet space $J_{1,1}$ such that they leave invariant the
stratification arising on the surface $A_0$. The symmetries of
$F=0$ considered as mappings transform a solution of $F=0$ into
another solution of the same equation and they do the same thing
with the integrals of $F=0$. Thus we can conclude that the flow of
a vector field reproduces the solutions of a differential
equation.
It is useful to consider a Lie vector field as an extended group
operator. Any differential invariant (for example, the curvature of
a curve or the curvature of a surface) is the invariant of a Lie
vector field. Determination of symmetries is in some sense an inverse
problem for the Erlanger Program of F. Klein (1872). Indeed, the
main aim of the Erlanger Program is to find the set of all
properties of a set $S$ that remain invariant when the elements of
this set are subjected to the transformations of some Lie group of
transformations.
An advantage of this approach is that it is based on a jet space
whose structure is universal. Indeed, let us consider the triple
$(D,t,U)$ in the jet space $J_{1,1}$, where $D$ is the operator of
total differentiation, $t$ is the canonical parameter for $D$ (it
can be interpreted as a time) and $U$ is the set of coordinates of
a fiber. Given the triple $(X,s,F)$, where $X$ is a vector field
defined on a manifold $M$, $s$ is the canonical parameter for $X$
and $F$ is the system of functions $X\,f,\;X^2\,f,\;\ldots$
generated by a smooth function $f$ defined on $M$, we can always
relate this triple to the triple $(D,t,U)$ with the help of a
mapping $\varphi: M\to J_{1,1}$ satisfying
\begin{equation}
t\circ \varphi=0,\;\; U\circ \varphi=F\ \ {\rm and}\ \
(D I)\circ\varphi=X\,(I\circ\varphi),
\end{equation}
where $I$ is an arbitrary function defined on the space $J_{1,1}$.
In this way we can carry over any invariant of the operator $D$
from the jet space $J_{1,1}$ onto a manifold $M$, where it will be
the invariant of a vector field $X$. In particular, the invariants
$I\circ \varphi=e^{Cs}\,F$ on a manifold $M$ correspond to the
invariants $I=e^{Ct}\,U$ defined on the jet space. The covariant
tensor fields including the differential forms can be carried over
from the jet space $J_{1,1}$ onto a manifold $M$, where the
differential operators such as the Monge-Amp\`ere operator,
Laplacian, Hessian, curvature operator and so on can be
constructed by means of these differential forms.
Thus we can use the structures defined on a jet space $J_{n,m}$ to
get the necessary information about the operators on a manifold $M$,
their invariants and symmetries. The set of all triples $(X,s,F)$
can be viewed as a category and the triple $(D,t,U)$ is a final
object of this category. A universal problem is to construct an
initial object or a final object of this category, and the
structure of a jet space $J_{n,m}$, which can be used to solve this
problem, is universal just in this sense. The structures briefly
described in this section, their analysis and possible
applications have been in detail described in the monograph
\cite{Rahula2}.
The geometric structures arising in the theory of differential
equations were also studied by H. Kilp (b. 1942)
in \cite{Kilp2} and \cite{Kilp3}.
\section{Noncommutative geometry and generalization of
supersymmetry}%
In this section we describe a direction of research in the field
of differential geometry, which was initiated by R. Kerner
(University Paris VI) at the beginning of 1990s and later
developed in cooperation with the colleagues from Paris, Wroclaw
(Poland) and the author of this paper. This direction is related
to noncommutative geometry. During the last decade a
spectacular development of noncommutative generalizations of
differential geometry and Lie group theory has been achieved. The
respective new chapters of mathematical physics are known under
the names of {\it noncommutative geometry}, {\it quantum groups}
and {\it quantum spaces}. This section is based on the paper
\cite{Abramov1}.
Let us consider an associative algebra $\mathcal G$ over complex
numbers with generators $\theta ^A, A=1,2,..,N$ satisfying the
{\it ternary} relations
\begin{equation}
\theta ^A\theta ^B\theta ^C=j\theta ^B\theta ^C\theta ^A,%
\label{ternary relations}
\end{equation}
where $j$ is a primitive cube root of unity. We suppose that the
$N^2$ products $\theta ^A\theta ^B$ are linearly independent
entities. The algebra $\mathcal G$ with ternary commutation
relations (\ref{ternary relations}) is called a {\it ternary
Grassmann algebra} \cite{Abramov1}, because the commutation
relations (\ref{ternary relations}) are very similar to the
commutation relations of a classical Grassmann algebra. Indeed, if
$\theta^{\alpha}$ with $ \alpha=1,2,\ldots,n$ are the generators of the
Grassmann algebra with $n$ generators, then they are subjected to
the well known relations
$\theta^{\alpha}\theta^{\beta}=(-1)\theta^{\beta}\theta^{\alpha}$
which can be interpreted as follows: each permutation of
generators in the binary product $\theta^{\alpha}\theta^{\beta}$
is accompanied by multiplication by $-1$ and $-1$ is considered as
a primitive square root of unity. Thus one can get a ternary
analogue of the Grassmann algebra replacing a binary product of
generators by a ternary product, a permutation of two objects by a
cyclic permutation of three objects and a primitive square root of
unity by a primitive cube root of unity. It is obvious that the
ternary analogue of the Grassmann algebra is based on a faithful
representation of the cyclic group ${\Bbb Z}_3$ by cube roots of
unity.
Let us briefly describe the structure of the algebra $\mathcal G$.
An immediate corollary is that any product of four or more
generators must vanish. Here is the proof:
\[
(\theta ^A\theta ^B\theta ^C)\theta ^D=j\theta ^B(\theta ^C\theta
^A\theta ^D)=j^2(\theta ^B\theta ^A\theta ^D)\theta ^C=\theta
^A(\theta ^D\theta ^B\theta ^C)=j\theta ^A\theta ^B\theta ^C\theta
^D.
\]
Now, as $(1-j)\not =0$, one must have $\theta ^A\theta ^B\theta
^C\theta^D=0 $. The dimension of the ternary Grassmann algebra
$\mathcal G$ is ${ N(N+1)(N+2)/3}+1$. Any cube of a generator is
equal to zero, i.e., $(\theta^A)^3=0$, and the odd permutation of
factors in a product of three leads to an independent quantity.
Our algebra admits a natural ${\Bbb Z}_3$-grading: under
multiplication, the grades add up modulo 3; the numbers are grade
0, the generators $\theta ^A$ are grade 1; the binary products are
grade 2 and the ternary products grade 0 again. The dimensions of
the subspaces of grade 1, 2 and 0 are, respectively, $N$,
$N^2$ and $(N^3-N)/3+1$.
The lack of symmetry between the grades 1 and 2 (corresponding to
the generator $j$ and its square $j^2$ in the cyclic group ${\Bbb
Z}_3$, which are interchangeable) suggests that one should
introduce another set of $N$ generators of grade 2, whose squares
would be of grade 1, and which should obey the conjugate ternary
relations as follows
\[
\bar \theta ^{\bar A}\bar \theta ^{\bar B}\bar \theta ^{\bar
C}=j^2\bar \theta ^{\bar B}\bar \theta ^{\bar C}\bar \theta ^{\bar
A}.
\]
With respect to the ordinary generators $\theta ^A$, the conjugate
ones should behave like the products of two $\theta $'s, i.e.,
\begin{equation}
\theta ^A(\theta ^B\theta ^C)=j(\theta ^B\theta ^C)\theta
^A\rightarrow \theta ^A\bar \theta ^{\bar B}=j\bar \theta ^{\bar
B}\theta ^A \label{z2grad0}
\end{equation}
and consequently
\begin{equation}
\bar \theta ^{\bar B}\theta ^A=j^2\theta ^A\bar \theta ^{\bar B}.
\label{z2grad0bis}
\end{equation}
One may also note that there is an alternative choice for the
commutation relation between the ordinary and conjugate
generators, that makes the conjugate generators different from the
binary products of ordinary generators
\begin{equation}
\theta ^A\bar \theta ^{\bar B}=-j\bar \theta ^{\bar B}\theta ^A\text{ and }%
\bar \theta ^{\bar B}\theta ^A=-j^2\theta ^A\bar \theta ^{\bar B},
\label{z2grad1}
\end{equation}
which are still compatible with the ternary relations introduced
above.
This could be interpreted in the following way. We have assumed
that the algebra's field is the field of complex numbers, but we
can imagine that it is possible to multiply an element of the
${\Bbb Z}_3$-graded Grassmann algebra by an element of a {\it
binary} Grassmann algebra. We assume that the binary elements
commute with the ternary ones, but anticommute as usual with each
other. The ${\Bbb Z}_3$-graded Grassmann elements of a given grade
still have no binary commutation relation. Then our new algebra
admits two gradings: the ${\Bbb Z}_2$-grading and the ${\Bbb
Z}_3$-grading. The elements of ${\Bbb Z}_2$-grade 0 and ${\Bbb
Z}_3$-grades 1 and 2 obey the
rules (\ref{z2grad0}) and (\ref{z2grad0bis}) whereas the elements of ${\Bbb Z%
}_2$-grade 1 and ${\Bbb Z}_3$-grades 1 and 2 obey the rules
(\ref{z2grad1}). If we think that these objects can help in
modelling of the quark fields,
then a quark variable would be of ${\Bbb Z}_2$-grade 1 and ${\Bbb Z}_3$%
-grade 1, and an antiquark variable of ${\Bbb Z}_2$-grade 1 and ${\Bbb Z}_3$%
-grade 2. Then the products of a quark and an antiquark would
have both grades zero, making it a boson. In the same way, the
products of three quark
or three antiquark fields would be of ${\Bbb Z}_3$-grade 0 and of ${\Bbb Z}%
_2 $-grade 1, that is, they would very much look like a fermionic
field.
Now, the $\bar \theta $'s generate their own Grassmann subalgebra
of the same dimension as the one generated by $\theta $'s;
besides, we shall have all the mixed products containing both
types of generators, but which can be
always ordered e.g., with $\theta ^A$'s in front and $\bar \theta ^{\bar B}$%
's in the rear, by virtue of commutation relations. The products
of $\theta ^A$'s alone or of $\bar \theta ^{\bar A}$'s alone span
two subalgebras of
dimension $N(N+1)(N+2)/3$ each; the mixed products span new sectors of the $%
{\Bbb Z}_3$-graded Grassmann algebra.
In the case of usual ${\Bbb Z}_2$-graded Grassmann algebras the
anti-commutation between the generators of the algebra and the
assumed associativity imply automatically the fact that {\it all}
grade $0$ elements {\it commute} with the rest of the algebra,
while {\it any two} elements of grade $1$ anti-commute.
{In the case of the ${\Bbb Z}_3$-graded generalization such an
extension of ternary and binary relations {\it does not follow
automatically} and must be explicitly imposed. If we decide to
extend the relations (\ref{z2grad0}), (\ref{z2grad0bis}) and
(\ref{z2grad1}) to {\it all} elements of the algebra having a
well-defined grade (i.e., the monomials in $\theta $'s and $\bar
\theta $'s), then many additional expressions must vanish, e.g.,}
\[
{\theta ^A\underbrace{\theta ^B{\bar \theta }^{\bar C}}=\underbrace{\theta ^B{\bar \theta }^{\bar C}}\,\theta ^A=\theta
^B\underbrace{{\bar \theta }^{\bar C}\theta ^A}={\bar \theta
}^{\bar C}\theta ^A\theta ^B=0},
\]
{because on the one side, $\theta ^B{\bar \theta }^{\bar C}$ and
${\bar \theta }^{\bar C}\theta ^A$ are of grade 0 and commute with
all other elements, and on the other side, commuting ${\bar \theta
}^{\bar C}$ with $\theta ^A\theta ^B$ one gets twice the factor $j$,
which leads to the overall factor $j^2{\bar \theta }^{\bar C}\theta
^A\theta ^B$. This produces a contradiction which can be resolved
only by supposing that $\theta ^A\theta ^B{\bar \theta }^{\bar C}=0$.}
{The resulting ${\Bbb Z}_3$-graded algebra contains only the
following combinations of generators:
\vskip -.5cm
\[
A_1=\{\theta ,{\bar \theta }{\bar \theta }\}\text{, }A_2=\{{\bar \theta }%
,\theta \theta \}\ \ {\rm and}\ \ A_0=\{{\bf 1},\theta {\bar \theta
},\theta \theta \theta ,{\bar \theta }{\bar \theta }{\bar \theta
}\}.
\]}
The dimension of the algebra is
\[
D(N)=1{}+2N+3N^2+\frac{2(N^3-N)}3=\frac{3+4N+9N^2+2N^3}3.
\]
The four summands $1$, $2N$, $3N^2$ and $\frac{2(N^3-N)}3$
correspond to the subspaces respectively spanned by the
combinations $\{{\Bbb C}\}$, $\{\theta ,\bar \theta \}$, $\{\theta
\theta ,\theta \bar \theta ,\bar \theta \bar \theta \}$ and
$\{\theta \theta \theta ,\bar \theta \bar \theta \bar \theta \}$.
{Let us note that the set of grade $0$ (which obviously forms a
subalgebra of the ${\Bbb Z}_3$-graded Grassmann algebra) contains
the products which could symbolize the only observable
combinations of {\it quark fields} in quantum chromodynamics based
on the $SU(3)$-symmetry.}
We can introduce the ${\Bbb Z}_3$-graded derivations of the ${\Bbb Z}_3$%
-graded Grassmann algebra by postulating the following set of
rules\label {grassderiv}:
\[
\partial _A({\bf 1})=0\text{,\ \ }\partial _A\theta ^B={\delta }_A^B\ \ {\rm and}\
\ \partial _A\bar \theta ^{\bar B}=0
\]
and similarly\vskip -.5cm
\[
\partial _{\bar A}({\bf 1})=0\text{,\ }\ \partial _{\bar B}\bar \theta
^{\bar C}={\delta }_{\bar B}^{\bar C}\ \ {\rm and}\ \ \partial _{\bar
B}\theta ^A=0.
\]
When acting on various binary and ternary products, the derivation
rules are the following:
\[
\partial _A(\theta ^B\theta ^C)={\delta }_A^B\theta ^C+j{\delta }_A^C\theta
^B\ {\rm and}\ \partial _A(\theta ^B\theta ^C\theta ^D)={\delta
}_A^B\theta ^C\theta ^D+j{\delta }_A^C\theta ^D\theta
^B+j^2{\delta }_A^D\theta ^B\theta ^C\!.
\]
Similarly, for the conjugate entities,
\[
\partial _{\bar A}(\bar \theta ^{\bar B}\bar \theta ^{\bar C})={\delta }%
_{\bar A}^{\bar B}\bar \theta ^{\bar C}+j^2{\delta }_{\bar
A}^{\bar C}\bar \theta ^{\bar B}\ {\rm and}\ \partial _{\bar
A}(\bar \theta ^{\bar B}\bar \theta ^{\bar C}\bar \theta ^{\bar
D})={\delta }_{\bar A}^{\bar B}\bar \theta ^{\bar C}\bar \theta
^{\bar D}+j^2{\delta }_{\bar A}^{\bar C}\bar \theta ^{\bar D}\bar
\theta ^{\bar B}+j{\delta }_{\bar A}^{\bar D}\bar \theta ^{\bar
B}\bar \theta ^{\bar C}\!\!.
\]
We emphasize the ``twisted'' Leibniz rule for the ternary products in the above formulae.
Finally, for mixed binary products like $\theta ^A\bar \theta
^{\bar B}$, the derivation rules are the following:\vskip -.5cm
\[
\partial _A(\theta ^B\bar \theta ^{\bar C})={\delta }_A^B\bar \theta ^{\bar
C}\ \ {\rm and}\ \ {\partial }_{\bar A}(\theta ^B\bar \theta ^{\bar C})=j{\delta }%
_{\bar A}^{\bar C}{\theta }^B.
\]
There is no need for rules of derivation of fourth-order
homogeneous expressions, because these vanish identically.
As the immediate consequence of these rules, we have the following
important identities:
\[
\partial _A\partial _B\partial _C=j\partial _B\partial _C\partial _A\text{
and }\partial _{\bar A}\partial _{\bar B}\partial _{\bar
C}=j^2\partial _{\bar B}\partial _{\bar C}\partial _{\bar A},
\]
while\vskip -.5cm
\[
\partial _A\partial _{\bar C}=j\partial _{\bar C}\partial _A\text{ and }%
\partial _{\bar C}\partial _A=j^2\partial _A\partial _{\bar C}.
\]
Hence we have the important consequence
\begin{equation}
\partial _A\partial _B\partial _C+\partial _B\partial _C\partial _A+\partial
_C\partial _A\partial _B=0. \label{sum3deriv}
\end{equation}
The ${\Bbb Z}_3$-graded generalization of the Grassmannian and the ${\Bbb Z}%
_3 $-graded derivatives defined above can be used in order to produce a $%
{\Bbb Z}_3$-generalization of the supersymmetry generators acting
on the usual ${\Bbb Z}_2$-graded Grassmann algebra generated by
anticommuting fermionic variables ${\theta }^\alpha $ and $\bar
\theta ^{\dot \beta }$ satisfying the relations
\[
\theta ^\alpha \theta ^\beta +\theta ^\beta \theta ^\alpha =0\text{,\ \ }%
\bar \theta ^{\dot \alpha }\theta ^\beta +\theta ^\beta \bar
\theta ^{\dot \alpha }=0\ \ {\rm and}\ \ \bar \theta ^{\dot \alpha
}\bar \theta ^{\dot \beta }+\bar \theta ^{\dot \beta }\bar \theta
^{\dot \alpha }=0.
\]
Using the ``anti-Leibniz'' rule of derivation
\[
\partial _\alpha (\theta ^\beta \theta ^\gamma )=\delta _\alpha ^\beta
\theta ^\gamma -\delta _\alpha ^\gamma \theta ^\beta
\]
and its analogues for any two dotted indices or mixed indices, one
easily verifies that all such derivations do anticommute:
\[
\partial _\alpha \partial _\beta +\partial _\beta \partial _\alpha =0\text{%
,\ \ }\partial _{\dot \alpha }\partial _{\dot \beta }+\partial
_{\dot \beta }\partial _{\dot \alpha }=0\ \ {\rm and}\ \ \partial
_\alpha \partial _{\dot \beta }+\partial _{\dot \beta }\partial
_\alpha =0. \label{sum2deriv}
\]
These rules enable us to construct the generators of the supersymmetric
(or ${\Bbb Z}_2$-graded) ``odd'' translations
\[
{\mathcal
D}_\alpha =\partial _\alpha +{\sigma }_{\alpha \dot\beta }^k\bar \theta ^{\dot \beta
}\partial _k\ \ {\rm and}\ \
{\mathcal D}_{\dot \beta }=\partial _{\dot \beta }+{\sigma }_{\alpha \dot \beta }^m{\theta
}^\alpha \partial _m,
\]
where both dotted and un-dotted indices $\alpha ,\dot \beta $ take
the values 1 and 2, while the space-time indices $k,l$ and $m$ run from 0
to 3. The anti-commutators of these differential operators yield
the ordinary (``even'') space-time translations
\[
{\mathcal D}_\alpha {\mathcal D}_{\dot \beta }+{\mathcal D}_{\dot
\beta }{\mathcal D}_\alpha =2\
{\sigma }_{\alpha \dot \beta
}^k\partial _k,
\]
while\vskip -.6cm
\[
{\mathcal D}_\alpha {\mathcal D}_\beta +{\mathcal D}_\beta {\mathcal D}_\alpha =0\ \ {\rm
and}\ \ %
{\mathcal D}_{\dot \alpha }{\mathcal D}_{\dot \beta }+{\mathcal D}_{\dot \beta }{\mathcal
D}%
_{\dot \alpha }=0.
\]
The ${\Bbb Z}_3$-graded generalization would amount to finding a
``cubic root'' of a linear differential operator, making use of
equation (\ref{sum3deriv}).
We must have six kinds of generalized Grassmann variables $\theta ^A$, $%
\theta ^{\stackrel{\wedge }{A}}$ and $\theta ^{\stackrel{\vee }{A}}$
on the one
hand and $\bar \theta ^{\bar A}$, $\bar \theta ^{\stackrel{\wedge }{\bar A}}$%
and $\bar \theta ^{\stackrel{\vee }{\bar A}}$ on the other hand,
which is formally analogous to the ${\Bbb Z}_2$-graded case.
Instead of the Pauli matrices we should introduce the entities
endowed with three indices (``cubic matrices'') with which the
generators of the ${\Bbb Z}_3$-graded translations of grade 1 and
2 may be constructed as follows:\vskip -.5cm
\[
{\it D}_A=\partial _A+{\rho }_{A\stackrel{\wedge }{B}\stackrel{\vee }{C}%
}^m\theta ^{\stackrel{\wedge }{B}}\theta ^{\stackrel{\vee }{C}}{\nabla }%
_m+\omega _{A\bar A}^m\bar \theta ^{\bar A}{\nabla }_m\text{,\ }{\it D}%
_{\bar A}=\partial _{\bar A}+{\bar \rho }_{\bar A\stackrel{\wedge }{\bar B}%
\stackrel{\vee }{\bar C}}^m\bar \theta ^{\stackrel{\wedge }{\bar
B}}\bar \theta ^{\stackrel{\vee }{\bar C}}{\nabla }_m+\bar \omega
_{\bar AA}^m\theta ^A{\nabla }_m,
\]
\vskip -.5cm
\[
{\it D}_{\stackrel{\wedge }{B}}=\partial _{\stackrel{\wedge }{B}}+{\rho }_{A%
\stackrel{\wedge }{B}\stackrel{\vee }{C}}^m\theta ^A\theta
^{\stackrel{\vee
}{C}}{\nabla }_m+\omega _{\stackrel{\wedge }{B}\stackrel{\wedge }{\bar B}%
}^m\bar \theta ^{\stackrel{\wedge }{\bar B}}{\nabla }_m\text{,\ }{\it D}_{%
\stackrel{\wedge }{\bar B}}=\partial _{\stackrel{\wedge }{\bar
B}}+{\bar \rho }_{\bar A\stackrel{\wedge }{\bar B}\stackrel{\vee
}{\bar C}}^m\bar \theta ^{\bar A}\bar \theta ^{\stackrel{\vee
}{\bar C}}{\nabla }_m+\bar
\omega _{\stackrel{\wedge }{\bar B}\stackrel{\wedge }{B}}^m\theta ^{%
\stackrel{\wedge }{B}}{\nabla }_m
\]\noindent and\vskip -.6cm
\[
{\it D}_{\stackrel{\vee }{C}}=\partial _{\stackrel{\vee }{C}}+{\rho }_{A%
\stackrel{\wedge }{B}\stackrel{\vee }{C}}^m\theta ^A\theta ^{\stackrel{%
\wedge }{B}}{\nabla }_m+\omega _{\stackrel{\vee }{C}\stackrel{\vee }{\bar C}%
}^m\bar \theta ^{\stackrel{\vee }{\bar C}}{\nabla }_m\text{,\ }{\it D}_{%
\stackrel{\vee }{\bar C}}=\partial _{\stackrel{\vee }{\bar C}}+{\bar \rho }%
_{\bar A\stackrel{\wedge }{\bar B}\stackrel{\vee }{\bar C}}^m\bar
\theta
^{\bar A}\bar \theta ^{\stackrel{\wedge }{\bar B}}{\nabla }_m+\bar \omega _{%
\stackrel{\vee }{\bar C}\stackrel{\vee }{C}}^m\theta ^{\stackrel{\vee }{C}}{%
\nabla }_m.
\]
The nature of the indices need not be specified; the only
important thing to be assumed at this stage is that the
differential operators $\nabla _m$ do commute with the ${\Bbb
Z}_3$-graded differentiations $\partial _A$. It is also
interesting to consider the operators one gets when the $\nabla _m
$ are replaced with {\em supersymmetric} derivations (that
anticommute with the ${\Bbb Z}_3$-graded differentiations). But in
the simpler case described here, the following operators acting on
the ${\Bbb Z}_3$-graded generalized Grassmannian
$$D_{ABC}^{III}={\it D}_A{\it D}_B{\it D}_C+{\it D}_B{\it D}_C{\it D}_A+%
{\it D}_C{\it D}_A{\it D}_B+{\it D}_C{\it D}_B{\it D}_A+{\it D}_B{\it D}_A%
{\it D}_C+{\it D}_A{\it D}_C{\it D}_B, $$
$$\bar D_{\bar A\bar B\bar C}^{III} ={\it D}_{\bar A}{\it D}_{\bar B}{\it D}%
_{\bar C}+{\it D}_{\bar B}{\it D}_{\bar C}{\it D}_{\bar A}+{\it D}_{\bar C}%
{\it D}_{\bar A}{\it D}_{\bar B}+{\it D}_{\bar C}{\it D}_{\bar B}{\it D}%
_{\bar A}+{\it D}_{\bar B}{\it D}_{\bar A}{\it D}_{\bar C}+{\it D}_{\bar A}%
{\it D}_{\bar C}{\it D}_{\bar B}$$\noindent and
$$D_{A\bar A}^{II} ={\it D}_A{\it D}_{\bar A}-j^2{\it D}_{\bar
A}{\it D}_A$$
represent {\it homogeneous} operators on the ${\Bbb Z}_3$-graded
Grassmann algebra, i.e., they map polynomials in $\theta $'s of a
given grade into polynomials of the same grade; the result can be
represented by a complex-valued matrix containing various
combinations of the differentiations $\nabla _m$ ; their eventual
symmetry properties will
depend on the assumed symmetry properties of the matrices $\rho _{ABC}$ and $%
\omega _{A\bar B}$.
Let us consider in more detail the case of dimension $3$ (the
simplest possible realization of the ${\Bbb Z}_3$-graded
Grassmannian and the derivations on it is of course the case with
{\it one} generator and its conjugate).
The dimension of the ${\Bbb Z}_3$-graded Grassmann algebra with three grade-$1$
generators $\theta $, $\stackrel{\wedge }{\theta }$ and $\stackrel{\vee }{\theta }$
and three ``conjugate'' grade-$2$ generators $\bar \theta $,
$\stackrel{\wedge }{\bar \theta }$ and $\stackrel{\vee }{\bar \theta }$ is
$51$; any linear operator, including the derivations $\partial _A$
and the multiplication by any combination of the generators, as
well as the operators ${\it D}_A$ and ${\it D}_{\bar A}$
introduced above, can be represented by means of $51\times 51$
complex-valued matrices. Unfortunately, the operators $D^{II}$ and
$D^{III}$ are neither diagonal nor diagonalizable. But if we apply
them to a scalar function $f$, we get\vskip -.5cm
\[
D_{1\bar 1}^{II}f=(\omega _{1\bar 1}^m+\bar \omega _{\bar
11}^m)\nabla _mf
\]
\vskip -.2cm\noindent and\vskip -.5cm
\[
D_{1\stackrel{\wedge }{1}\stackrel{\vee }{1}}^{III}f=-3j^2\rho _{1\stackrel{%
\wedge }{1}\stackrel{\vee }{1}}^m\nabla _mf\text{ as well as }\bar D_{\bar 1%
\stackrel{\wedge }{\bar 1}\stackrel{\vee }{\bar 1}}^{III}f=-3j\bar
\rho _{\bar 1\stackrel{\wedge }{\bar 1}\stackrel{\vee }{\bar
1}}^m\nabla _mf.
\]
The $\omega $ matrices are the only ones that remain in the
$D^{II}$ whereas the $\rho $ cubic matrices emerge from the
ternary combinations $D^{III}$. On the space of scalar functions,
our operators act simultaneously as {\em square} and {\em
cubic} roots of ordinary translations. Using extensions of these
objects, where ${\nabla }_m$ are replaced with the
supersymmetry generators, we have constructed a simple ${\Bbb
Z}_3$-graded noncommutative geometry model featuring three Higgs
fields. The Lagrangian contains the potential term\vskip -.6cm
\[
V=3\left| \Phi _1+\Phi _2+\Phi _3+\Phi _1\Phi _2+\Phi _2\Phi
_3+\Phi _3\Phi _1+\Phi _1\Phi _2\Phi _3\right| ^2
\]
and implies multiple spontaneous symmetry breaking.
The ternary
generalization of Grassmann algebra described in this section can
be used to construct a generalization of exterior calculus with
exterior differential $d$ satisfying $d^N=0$ with $N>2$. This
direction of research has been developed by Abramov's
post-graduate student N. Bazunova (b. 1964) in \cite{bazunova}.
\vskip.4cm%
\noindent%
{\bf Acknowledgments}%
\vskip.3cm%
I am grateful to the organizers of the Estonian-Finnish Seminar on
the Development of Mathematics for an invitation to give a talk on
the development of differential geometry in Estonia. I would like
to express my gratitude to \"U. Lumiste, who took the trouble to
read the manuscript of this paper; his suggestions have been
very valuable for me in the preparation of the final version of this
paper. I am also grateful to my colleagues M. Rahula, A. Parring,
L. Tuulmets, K. Riives, E. Abel, H. Kilp for the valuable
explanations concerning their research and the biographic
materials, which they kindly placed at my disposal. I gratefully
acknowledge the financial support of the Estonian Science
Foundation under the grant No. 4515.
\bibliographystyle{amsplain}
\section{Introduction}\label{S0}
Many function spaces of practical interest are also algebras and/or lattices. An important example is $C(X)$, the space of continuous real-valued functions on a compact Hausdorff space.
Hence, a natural problem in the course of understanding the structure of the space $C(X)$ is to characterize the algebraic and/or lattice isomorphisms on it.
The classical solutions were given by Gelfand and Kolmogorov \cite{GK} and Kaplansky \cite{K} respectively.
An in depth exposition of investigations into the algebraic structure of $C(X)$ and much more can be found in the classic monograph \cite{GJ}.
Subsequent research has tied these two strands together in the form of the disjointness structure of the function space $C(X)$.
Specifically, algebraic or lattice homomorphisms are disjointness preserving; they map disjoint functions to disjoint functions. An algebraic or lattice isomorphism is biseparating; that is, it is a bijection $T$ so that both $T$ and $T^{-1}$ are disjointness preserving. Moreover, generalization to disjointness preserving or biseparating maps allows for extension to function spaces that are neither algebras nor lattices, and even to vector-valued functions.
Copious research has been devoted to the study of disjointness preserving and biseparating maps on various function spaces; see, e.g., \cite{A}-\cite{AJ2}, \cite{BBH}, \cite{GJW}, \cite{HBN}-\cite{J-V W}, \cite{L}.
As far as the authors are aware, the study of biseparating maps thus far has been confined to linear or at least additive maps. Since additive bijective maps are ${\mathbb Q}$-linear, such maps are not far removed from the linear world.
In this paper, we initiate the study of general nonlinear biseparating maps on spaces of vector-valued functions.
The following example shows that the definition of ``biseparating'' needs to be adjusted in order to obtain meaningful results.
\bigskip
\noindent{\bf Example}. Let $A$ be the set of all functions $f\in C[0,1]$ so that the set $\{t\in [0,1]: f(t) \neq 0\}$ is dense in $[0,1]$.
Let $T:C[0,1]\to C[0,1]$ be a map such that $T$ maps $A$ bijectively onto itself and $Tf = f$ if $f\notin A$.
Then $T$ is a bijection so that $f\cdot g = 0 \iff Tf\cdot Tg = 0$.
\bigskip
The example shows that the definition of ``biseparating'' used for linear or additive maps is too weak when applied to general nonlinear maps. In the next section, we propose a revised definition of ``biseparating'' for nonlinear maps. The definition reduces to the usual one for additive maps. Moreover, with the revised definition, a satisfactory theory of nonlinear biseparating maps arises, subject to some mild assumptions. See the paragraph preceding Lemma \ref{l1.21}. The theory of nonlinear biseparating maps is somewhat related to the theory of order isomorphisms developed in \cite{LT}. It also partly generalizes the notion of ``nonlinear superposition operators''. We refer to \cite{AZ} for a comprehensive study of the latter types of operators.
Let us give an overview of the content of the paper. As mentioned, the definition of ``nonlinear biseparating maps'' is given in \S \ref{s1}. Under the mild assumptions of ``basic'' and ``compatible'', the fundamental characterization theorem of nonlinear biseparating maps (Theorem \ref{t5}) is obtained. The theorem shows that a nonlinear bijective operator is biseparating if and only if it is ``locally determined''. For an exposition of some applications of locally determined operators to operator functional equations, particularly on spaces of differentiable functions, refer to \cite{KM}.
The characterization theorem applies in particular to a number of familiar (vector-valued) function spaces such as spaces of continuous, uniformly continuous, Lipschitz and differentiable functions.
We would like to point out a general resemblance of Theorem \ref{t5} with the fundamental characterization theorem for ``nonlinear order isomorphisms'' \cite[Theorem 2.11]{LT}.
Indeed, our study of nonlinear biseparating maps is motivated and informed by the study of nonlinear order isomorphisms at various points. However, the lack of an order makes many of the arguments more difficult in the present case, especially for uniformly continuous and Lipschitz functions.
For further information on nonlinear order isomorphisms on function spaces, we refer to \cite{LT} and the references therein.
\S \ref{s2} studies nonlinear biseparating maps between spaces of vector-valued continuous or bounded continuous functions.
One of the main results is Theorem \ref{t2.8}, which shows that if $X,Y$ are realcompact spaces and $E,F$ are Hausdorff topological vector spaces, and there is a biseparating map $T:C(X,E)\to C(Y,F)$, then $X$ and $Y$ are homeomorphic. With reference to the classical Gelfand-Kolmogorov and Kaplansky theorems, one sees that one needs rather much less than the full algebraic or lattice structure of $C(X)$ to determine the topology of $X$.
From \S\ref{s3} onwards, we focus on metric spaces $X$ and $Y$.
In the course of \S\ref{s3} and \S\ref{s4}, full representations of biseparating maps between spaces of continuous, uniformly continuous and Lipschitz functions defined on metric spaces are obtained. See Propositions \ref{p4.2}, \ref{p4.3} and \ref{p4.4}. \S\ref{s5} revisits spaces of continuous functions, this time defined on metric spaces.
Complete characterizations of nonlinear biseparating maps are obtained; see Theorems \ref{t5.4} and \ref{t5.5}.
We also prove an automatic continuity result Theorem \ref{t5.6}.
\S7 is concerned with nonlinear biseparating maps between spaces of uniformly continuous functions.
Characterization of such maps is carried out in two stages. First it is shown that a biseparating map induces a uniform homeomorphism of the underlying metric spaces. The second part involves solving the ``section problem'': determining the maps $\Xi$ so that $Sf(x) = \Xi(x,f(x))$ is uniformly continuous whenever the input function $f$ is uniformly continuous. Refer to Theorems \ref{t6.7.1} and \ref{t6.7.2}.
From these characterization theorems, one can also obtain an automatic continuity result (Theorem \ref{t6.9}).
A classical result of Atsuji \cite{At} and Hejcman \cite{H}, rediscovered in \cite{O'F}, states that all uniformly continuous functions on a metric space $X$ are bounded if and only if $X$ is Bourbaki bounded (see definition in \S7.2). Theorem \ref{t6.10} generalizes this result. It shows that there is a biseparating map from $U(X,E)$ onto a space $U_*(Y,F)$ (the space of bounded uniformly continuous functions) if and only if $X$ is Bourbaki bounded.
\S \ref{s8} focuses on spaces of Lipschitz functions. First it is shown that we may reduce to considering spaces $\operatorname{Lip}(X,E)$, where $X$ is a bounded metric space (Proposition \ref{p6.2}).
Making use of the Baire Category Theorem and some intricate combinatorial arguments, it is then shown that a biseparating map between vector-valued Lipschitz spaces defined on bounded complete metric spaces induces a Lipschitz homeomorphism between the underlying metric spaces (Theorem \ref{t7.5}).
Next, the section problem for Lipschitz functions is solved (Theorem \ref{t7.7}), which enables the characterization of nonlinear biseparating maps between spaces of Lipschitz functions (Theorem \ref{t7.8}).
Suppose that $\Xi$ is a ``Lipschitz section'', i.e., the function $\Xi(x,f(x))$ is a Lipschitz function of $x$ whenever $f$ is Lipschitz. It is known that even if $x_0$ is an accumulation point, the function $\Xi(x_0,\cdot)$ need not be continuous with respect to the second variable. Nevertheless, exploiting the Baire Category Theorem, we show that $\Xi(x_0,\cdot)$ is continuous on an open dense set if $x_0$ is an accumulation point (Theorem \ref{t7.11}).
The final section \S \ref{s9} determines the biseparating maps that act between a space of uniformly continuous functions on the one hand and a space of Lipschitz functions on the other. The main results (Theorems \ref{t6.6} and \ref{t6.7}) show that there is a certain rigidity, so that the existence of such maps imply very strong conditions on the underlying metric spaces.
To end this introduction, we note that linear or nonlinear biseparating maps acting between spaces of differentiable functions seem to be rather more difficult to deal with. A notable achievement in this regard is the paper by Araujo \cite{A3}. We intend to address some of the problems raised therein in a future paper.
\section{Generalities}\label{s1}
Let $X, Y$ be sets and let $E, F$ be (real or complex) vector spaces.
Suppose that $A(X,E)$ is a vector subspace of $E^X$ and $A(Y,F)$ is a vector subspace of $F^Y$.
If $f\in A(X,E)$, let the {\em carrier} of $f$ be the set
\[C(f) = \{x\in X: f(x) \neq 0\}.\]
Set ${\mathcal C}(X) = \{C(f):f\in A(X,E)\}$.
For functions $f,g,h\in A(X,E)$, say that $f$ and $g$ are {\em disjoint with respect to} $h$, $f\perp_h g$, if $C(f-h) \cap C(g-h) = \emptyset$. We abbreviate $\perp_0$ as $\perp$.
The {\em support} of a function $f\in A(X,E)$ is the set
\[ \widehat{C}(f) = X\backslash \bigcup\{C(g): g\in A(X,E), f\perp g\} = X\backslash \bigcup\{C\in {\mathcal C}(X): C \cap C(f) = \emptyset\}.\]
Obviously $C(f) \subseteq \widehat{C}(f)$. Furthermore, if $f_1,f_2\in A(X,E)$ and $f_1=f_2$ on $C(f)$, then $f_1 = f_2$ on $\widehat{C}(f)$.
Set ${\mathcal D}(X) = \{\widehat{C}(f): f\in A(X,E)\}$. Similar definitions apply to $A(Y,F)$.
A map $T:A(X,E) \to A(Y,F)$ is {\em biseparating} if it is a bijection and for any $f,g,h \in A(X,E)$,
\[ f\perp_hg \iff Tf\perp_{Th}Tg.\]
For the remainder of the section, let $T:A(X,E)\to A(Y,F)$ be a given biseparating map.
The following proposition, although simple to state and easy to prove, turns out to be key to understanding biseparating maps.
\begin{prop}\label{p1}(Araujo's Lemma, cf.\ \cite[Lemma 4.2]{A})
For any $f,g,h\in A(X,E)$,
\[ \widehat{C}(f-h) \subseteq \widehat{C}(g-h) \iff \widehat{C}(Tf-Th) \subseteq \widehat{C}(Tg-Th).\]
\end{prop}
\begin{proof}
Suppose that $\widehat{C}(f-h) \subseteq \widehat{C}(g-h)$.
Assume that there exists $z\in \widehat{C}(Tf-Th) \backslash \widehat{C}(Tg-Th)$.
There exists $v\in A(Y,F)$ so that $v\perp Tg-Th$ and $z\in C(v)$.
Since $z\in \widehat{C}(Tf-Th)$, $v\not\perp Tf-Th$.
Set $u = T^{-1}(v+Th) \in A(X,E)$. Then $v = Tu - Th$. Hence
\[ Tu -Th = v \perp Tg-Th \implies Tu \perp_{Th} Tg \implies u\perp_h g\implies u-h \perp g-h.\]
Therefore,
\[ C(u-h) \subseteq (\widehat{C}(g-h))^c \subseteq (\widehat{C}(f-h))^c \implies u-h \perp f-h \implies u\perp_h f.\]
It follows that
\[ Tu \perp_{Th}Tf \implies v = Tu-Th \perp Tf-Th.\]
This contradicts the fact that $v\not\perp Tf-Th$. This completes the proof of the forward implication ``$\implies$". The reverse implication follows by symmetry.
\end{proof}
\begin{prop}\label{p2}
Let $f\in A(X,E)$ be given. The map \[\theta_f: \widehat{C}(h) \mapsto \widehat{C}(T(f+h) -Tf)\] is a well-defined bijection from ${\mathcal D}(X)$ onto ${\mathcal D}(Y)$ that preserves set inclusion. For any $f,g\in A(X,E)$ and any $U\in{\mathcal D}(X)$, $f= g$ on $U$ if and only if $Tf = Tg$ on $\theta_f(U)$.
\end{prop}
\begin{proof}
By Proposition \ref{p1}, $\widehat{C}(h_1) = \widehat{C}(h_2)$ if and only if
\[ \widehat{C}((h_1+f)-f) = \widehat{C}((h_2+f)-f) \iff \widehat{C}(T(h_1+f) - Tf) = \widehat{C}(T(h_2+f)-Tf).\]
This shows that the map $\theta_f$ is well-defined and injective.
Since any $g\in A(Y,F)$ can be written in the form $T(f+h) - Tf$ with $h = T^{-1}(g+Tf) -f$, $\theta_f$ is surjective. It follows from Proposition \ref{p1} that $\theta_f$ preserves set inclusion.
Finally, suppose that $U = \widehat{C}(h) \in {\mathcal D}(X)$.
Then $f= g$ on $U$ if and only if $g-f \perp h = (f+h) -f$, which in turn is equivalent to the fact that $Tg-Tf \perp T(f+h) - Tf$. The last statement is easily seen to be equivalent to the fact that $Tg-Tf = 0$ on $\theta_f(\widehat{C}(h))$.
\end{proof}
The idea behind Proposition \ref{p2} is that a biseparating map gives rise to a collection of ``set movers'' $\theta_f$.
In order to make the set mover $\theta_f$ independent of the function $f$, we impose two conditions on the function space $A(X,E)$.
Say that $A(X,E)$ is
\begin{enumerate}
\item {\em basic} if whenever $x\in C_1\cap C_2$ for some $C_1, C_2\in {\mathcal C}(X)$, then there exists $C\in {\mathcal C}(X)$ so that $x\in C \subseteq C_1\cap C_2$;
\item {\em compatible} if for any $f\in A(X,E)$, any $D\in {\mathcal D}(X)$ and any point $x\notin D$, there exist $g\in A(X,E)$ and $C\in {\mathcal C}(X)$ so that $x\in C$ and
\[ g = \begin{cases} f &\text{on $C$},\\
0 &\text{on $D$}.\end{cases}\]
\end{enumerate}
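\medskip
\noindent {\bf Remark}. To indicate how these conditions are typically verified, here is a sketch for $A(X,E) = C(X,E)$ with $X$ metric (the general statements are collected in Example \ref{e1.7} below). Carriers are open sets, so every $D\in{\mathcal D}(X)$ is closed. Given $f\in C(X,E)$, nonempty $D\in {\mathcal D}(X)$ and $x\notin D$ (if $D = \emptyset$, simply take $g = f$), put $r = d(x,D)>0$ and $h(z) = (2-\frac{4}{r}d(z,x))^+\wedge 1$. For any nonzero $a\in E$, the ball $C = B(x,\frac{r}{4})$ is the carrier of $z\mapsto (\frac{r}{4}-d(z,x))^+a$, $x\in C$, $h = 1$ on $C$ and $h = 0$ on $D$. Hence $g = h\cdot f$ satisfies $g = f$ on $C$ and $g = 0$ on $D$, verifying compatibility; basicness follows similarly using small balls.
\medskip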
\begin{lem}\label{l1.21}
Suppose that $A(Y,F)$ is basic. If $f, g\in A(Y,F)$ and $V\in {\mathcal D}(Y)$ are such that $f = g$ on $V$ and $V\subseteq \widehat{C}(g)$, then $V \subseteq \widehat{C}(f)$.
\end{lem}
\begin{proof}
Assume otherwise. There is a point $y_1\in V$ so that $y_1\notin \widehat{C}(f)$.
There exists $u\in A(Y,F)$ so that $y_1\in C(u)$ and $u\perp f$. Say $V = \widehat{C}(v)$.
Since $y_1\in \widehat{C}(v)$, $u\not\perp v$. As $A(Y,F)$ is basic, there exists $C\in {\mathcal C}(Y)$ so that $\emptyset\neq C\subseteq C(u)\cap C(v)$.
If $y\in C$, then $y \in C(u)$ and hence $f(y) = 0$. Moreover, $y\in C(v) \subseteq V$ and hence $g(y) = f(y) = 0$. This proves that $C\cap C(g) = \emptyset$. Since $C\in {\mathcal C}(Y)$, it follows that $C\cap \widehat{C}(g) = \emptyset$.
This is impossible since $C$ is a nonempty subset of $C(v)\subseteq \widehat{C}(v) = V\subseteq \widehat{C}(g)$.
\end{proof}
\begin{prop}\label{p3}
Assume that $A(Y,F)$ is basic. Suppose that $f, g\in A(X,E)$ and $U \in {\mathcal D}(X)$. If $f = g$ on $U$, then $\theta_f(U) = \theta_g(U)$.
\end{prop}
\begin{proof}
Let $U = \widehat{C}(h)$. Since $f= g$ on $U\supseteq C(h)$, $C(h)\subseteq C(f+h-g)$.
Hence $U \subseteq \widehat{C}(f+h- g)$.
Thus
\[ \theta_g(U) \subseteq \theta_g(\widehat{C}(f+h- g)) = \widehat{C}(T(f+h) - Tg).\]
By Proposition \ref{p2}, $Tf = Tg$ on $\theta_g(U)$. In other words,
\[ T(f+h) - Tf = T(f+h)-Tg \text{ on } \theta_g(U) \subseteq \widehat{C}(T(f+h) - Tg).\]
Therefore, by Lemma \ref{l1.21}, \[\theta_g(U) \subseteq \widehat{C}(T(f+h) - Tf) = \theta_f(U).\]
The reverse inclusion follows by symmetry.
\end{proof}
\begin{prop}\label{p4}
Assume that $A(Y,F)$ is both basic and compatible. Let $f,g\in A(X,E)$ and let $U$ be a set in ${\mathcal D}(X)$. Then $\theta_g(U) = \theta_f(U)$.
\end{prop}
\begin{proof}
Suppose that $U = \widehat{C}(h)$ and that there exists $y \in \theta_f(U) \backslash \theta_g(U)$.
Then there exists $C\in {\mathcal C}(Y)$ so that $y \in C$ and that $C \cap \theta_g(U) = \emptyset$.
Since $y\in \theta_f(U)$, $C \cap C(T(f+h)-Tf) \neq \emptyset$.
Using the fact that $A(Y,F)$ is basic, there exist $C'\in {\mathcal C}(Y)$ and $z\in Y$ so that
\[z\in C' \subseteq C \cap C(T(f+h)-Tf).\]
In particular, $z\notin \theta_g(U)$ and $\theta_g(U) \in {\mathcal D}(Y)$.
Use the compatibility of $A(Y,F)$ to choose $v\in A(Y,F)$ and $C''\in {\mathcal C}(Y)$ so that $z\in C''$ and that
\[ v = \begin{cases} Tf-Tg &\text{on $C''$,}\\
0 &\text{on $\theta_g(U)$}.\end{cases}\]
Since $A(Y,F)$ is basic, we may also assume that $C'' \subseteq C'$.
Set $k = v+Tg$. We have $k= Tg$ on $\theta_g(U)$. By Proposition \ref{p2}, $T^{-1}k = g$ on $U$.
Say $C'' = C(w)$. Then $k = Tf$ on $\widehat{C}(w)\subseteq \theta_f(U)$. Thus Proposition \ref{p2} implies that
\[ T^{-1}k = f \text{ on } (\theta_f)^{-1}(\widehat{C}(w)) \subseteq U.
\]
It follows that $f = g$ on the set $(\theta_f)^{-1}(\widehat{C}(w))\in {\mathcal D}(X)$.
By Proposition \ref{p3}, $\widehat{C}(w) = \theta_g((\theta_f)^{-1}(\widehat{C}(w))) \subseteq \theta_g(U)$.
Hence $z\in C(w)\subseteq \theta_g(U)$; since also $z\in C$ and $C\cap\theta_g(U) = \emptyset$, this is a contradiction.
This proves that $\theta_f(U) \subseteq \theta_g(U)$. The reverse inclusion follows by symmetry.
\end{proof}
We now obtain the fundamental description of biseparating maps from the foregoing propositions.
\begin{Def}\label{d2.1}
Retain the notation above. A bijection $T: A(X,E)\to A(Y,F)$ is {\em locally determined} if there is a bijection $\theta:{\mathcal D}(X)\to{\mathcal D}(Y)$, preserving set inclusions, so that
for any $f,g\in A(X,E)$ and any $U \in {\mathcal D}(X)$, $f = g$ on $U$ if and only if $Tf = Tg$ on $\theta(U)$.
\end{Def}
\begin{lem}\label{l2.0}
Assume that $A(X,E)$ is basic. Let $T:A(X,E)\to A(Y,F)$ be locally determined, with a map $\theta$ as given in Definition \ref{d2.1}. If $g,h \in A(X,E)$, then $C(Tg-Th) \subseteq \theta(\widehat{C}(g-h))$.
\end{lem}
\begin{proof}
Suppose that $y \notin \theta(\widehat{C}(g-h))$. Choose $v_1\in A(Y,F)$ so that
$\widehat{C}(v_1) = \theta(\widehat{C}(g-h))$.
There exists $v_2\in A(Y,F)$ such that $y\in C(v_2)$ and $v_1\perp v_2$.
Let $u_1 = g-h$ and $u_2\in A(X,E)$ be such that $\widehat{C}(u_2) = \theta^{-1}(\widehat{C}(v_2))$.
We claim that $u_1\perp u_2$.
Otherwise, there exists nonempty $C\in {\mathcal C}(X)$ so that $C\subseteq C(u_1) \cap C(u_2)$.
Hence
\[ \theta(\widehat{C}) \subseteq \theta(\widehat{C}(u_1))\cap \theta(\widehat{C}(u_2)) = \widehat{C}(v_1)\cap \widehat{C}(v_2).\]
Let $\theta(\widehat{C}) = \widehat{C}(v)$ for some nonzero $v\in A(Y,F)$.
Since $v_1\perp v_2$, $C(v_1) \cap \widehat{C}(v_2) = \emptyset$.
Therefore,
\[C(v_1) \cap C(v) \subseteq C(v_1) \cap \widehat{C}(v_2) =\emptyset.
\]
Hence $\widehat{C}(v_1)\cap {C}(v) = \emptyset$. But $\emptyset \neq C(v) \subseteq \widehat{C}(v) = \theta(\widehat{C}) \subseteq \widehat{C}(v_1)
$. Thus we have a contradiction. This proves the claim.
From the claim, $g=h$ on $\widehat{C}(u_2)$. Hence $Tg = Th$ on $\theta(\widehat{C}(u_2)) = \widehat{C}(v_2)$. In particular, $y\notin C(Tg-Th)$. This completes the proof of the lemma.
\end{proof}
\begin{thm}
\label{t5}
Suppose that $A(X,E)$ and $A(Y,F)$ are both basic and compatible. A bijection $T:A(X,E)\to A(Y,F)$ is a biseparating map if and only if it is locally determined.
\end{thm}
\begin{proof}
Assume that $T$ is biseparating. Take any $f\in A(X,E)$ and let $\theta = \theta_f$. By Proposition \ref{p4}, $\theta$ is independent of the choice of $f$.
The properties enunciated for $\theta$ now follow from the same ones for $\theta_f$ by Proposition \ref{p2}. Therefore, $T$ is locally determined.
Conversely, suppose that $T$ is locally determined. Let $f,g,h\in A(X,E)$ be such that $f\perp_h g$.
Then $f =h$ on $\widehat{C}(g-h)$.
Therefore, $Tf = Th$ on $\theta(\widehat{C}(g-h))$.
By Lemma \ref{l2.0}, $Tf = Th$ on $C(Tg-Th)$. Thus $Tf \perp_{Th} Tg$.
Since the same argument applies to $T^{-1}$, we see that $T$ is biseparating.
\end{proof}
Let us give some examples of function spaces that are both basic and compatible. The verifications are simple and will be omitted.
If $G$ is a Banach space, a {\em bump function} on $G$ is a nonzero real-valued function on $G$ with bounded support.
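\medskip
\noindent {\bf Example}. As a simple instance (included only for illustration), every Hilbert space $G$ supports a $C^\infty_*$ bump function: since $x\mapsto\|x\|^2$ is $C^\infty$ on $G$, the function
\[ b(x) = \begin{cases} e^{-1/(1-\|x\|^2)} &\text{if $\|x\| < 1$,}\\
0 &\text{otherwise,}\end{cases}\]
is a nonzero $C^\infty$ function supported in the closed unit ball, and a routine computation shows that all of its derivatives are bounded.
\medskip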
\begin{ex}\label{e1.7}
Let $A(X,E)$ be any of the spaces described below. Then $A(X,E)$ is both basic and compatible.
Furthermore, $\widehat{C}(f) = \overline{C(f)}$ for any $f \in A(X,E)$.
\begin{enumerate}
\item Let $X$ be a Hausdorff completely regular topological space and let $E$ be a nonzero Hausdorff topological vector space.
$A(X,E) = C(X,E)$ or $C_*(X,E)$, the subspace consisting of all bounded functions in $C(X,E)$. (By a bounded function, we mean a function whose image is bounded in $E$, i.e., can be absorbed by any neighborhood of $0$.)
\item Let $X$ be a metric space and let $E$ be a normed space. Take $A(X,E)$ to be one of the following spaces. $U(X,E)$, the space of all $E$-valued uniformly continuous functions on $X$; or $\operatorname{Lip}(X,E)$, the space of all $E$-valued Lipschitz functions on $X$; or $U_*(X,E)$, respectively, $\operatorname{Lip}_*(X,E)$, the bounded functions in $U(X,E)$ and $\operatorname{Lip}(X,E)$ respectively.
\item Let $X$ be an open set in a Banach space $G$, $p\in {\mathbb N}\cup\{\infty\}$, and let $E$ be a normed space. Assume that $G$ supports a $C^p$ bump function and take $A(X,E) = C^p(X,E)$, the space of all $p$-times continuously differentiable $E$-valued functions on $X$. Alternatively, let $A(X,E) = C^p_*(X,E)$, the subspace of $C^p(X,E)$ so that
$D^kf$ is bounded on $X$ for $k\in \{0\}\cup{\mathbb N}$, $k\leq p$. In the latter case, assume that $G$ supports a $C^p_*$ bump function.
\end{enumerate}
\end{ex}
\section{Spaces of continuous functions}\label{s2}
In this section, let $X$ and $Y$ be Hausdorff completely regular topological spaces and $E$ and $F$ be nontrivial Hausdorff topological vector spaces.
Take $A(X,E) = C(X,E)$ or $C_*(X,E)$ and $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.
Let $T:A(X,E)\to A(Y,F)$ be a biseparating map.
Without loss of generality, we may assume that $T0 =0$.
We retain the notation in \S \ref{s1}.
The main aim is to derive topological relationship between $X$ and $Y$ based on the map $T$.
Recall that a Hausdorff completely regular topological space $X$ has a ``largest" compactification, namely the Stone-\v{C}ech compactification $\beta X$.
If $V$ is a set in $Y$, denote its closures in $Y$ and $\beta Y$ by $\overline{V}$ and $\overline{V}^{\beta Y}$ respectively.
By Example \ref{e1.7}, $\widehat{C}(f) = \overline{C(f)}$ for any $f\in A(X,E)$.
\begin{lem}\label{l2.0.1}
Let $U_i, i=1,2$, be open sets in $\beta X$ so that $U_i\cap X\in {\mathcal C}(X)$ and that $\overline{U_1}^{\beta X}\cap \overline{U_2}^{\beta X}= \emptyset$. Then $\overline{\theta(\overline{U_1\cap X})}^{\beta Y} \cap \overline{\theta(\overline{U_2\cap X})}^{\beta Y} = \emptyset$.
\end{lem}
\begin{proof}
Let $v$ be a nonzero vector in $F$. There exists $h\in C_*(X)$ so that $h(x) =1$ for all $x\in U_1\cap X$ and $h(x) = 0$ for all $x\in U_2\cap X$. The function $f = h\cdot T^{-1}(1\otimes v)$ belongs to $A(X,E)$. By Theorem \ref{t5}, $Tf = 1\otimes v$ on $\theta(\overline{U_1\cap X})$ and $Tf = 0$ on $\theta(\overline{U_2\cap X})$.
Since $F$ is a Hausdorff topological vector space, it is completely regular.
So there exists a continuous function $g: F\to {\mathbb R}$ so that $g(v) \neq g(0)$.
Set $k =g\circ (Tf):Y\to {\mathbb R}$. Then $k$ is continuous on $Y$ and hence has a continuous extension $\widetilde{k}:\beta Y\to {\mathbb R}\cup \{\infty\}$.
Now $k(y) = g(v)$ for all $y\in \theta(\overline{U_1\cap X})$ and $k(y) = g(0)$ for all $y\in \theta(\overline{U_2\cap X})$.
By continuity of $\widetilde{k}$, the sets $\overline{\theta(\overline{U_i\cap X})}^{\beta Y}$, $ i=1,2$, must be disjoint.
\end{proof}
For any $x\in \beta X$, let ${\mathcal F}_x$ be the family of all open neighborhoods $U$ of $x$ in $\beta X$ so that $U\cap X \in {\mathcal C}(X)$.
Define ${\mathcal F}_y$ similarly for $y\in \beta Y$.
We will use the following fact which is easily deduced from the Urysohn Lemma.
Let $U$ be an open neighborhood of a point $x$ in $\beta X$. There exists an open neighborhood $V\in {\mathcal F}_x$ so that $V\subseteq U$.
\begin{lem}\label{l2.0.2}
Let $x\in \beta X$, $y\in \beta Y$. Then $y\in\overline{\theta(\overline{U\cap X})}^{\beta Y}$ for all $U \in {\mathcal F}_x$ if and only if
$x\in\overline{\theta^{-1}(\overline{V\cap Y})}^{\beta X}$ for all $V \in {\mathcal F}_y$.
\end{lem}
\begin{proof}
Assume that $y\in\overline{\theta(\overline{U\cap X})}^{\beta Y}$ for all $U \in {\mathcal F}_x$.
Suppose that there exists $V\in {\mathcal F}_y$ so that $x\notin\overline{\theta^{-1}(\overline{V\cap Y})}^{\beta X}$.
Choose $U \in {\mathcal F}_x$ so that $\overline{U}^{\beta X} \cap \overline{\theta^{-1}(\overline{V\cap Y})}^{\beta X} = \emptyset$.
Note that by definition of $\theta^{-1}$, $\theta^{-1}(\overline{V\cap Y}) = \overline{W_0}$ for some $W_0\in {\mathcal C}(X)$.
Express $W_0 = W\cap X$ for some open set $W$ in $\beta X$.
Then $\overline{U}^{\beta X} \cap \overline{W}^{\beta X} = \emptyset$.
By Lemma \ref{l2.0.1}, $\overline{\theta(\overline{U\cap X})}^{\beta Y} \cap \overline{\theta(\overline{W\cap X})}^{\beta Y} = \emptyset$. By choice, $y \in \overline{\theta(\overline{U\cap X})}^{\beta Y}$.
Also, since $V\in {\mathcal F}_y$,
\[ y \in \overline{{V\cap Y}}^{\beta Y}= \overline{\theta(\overline{W\cap X})}^{\beta Y}. \]
This contradicts the disjointness of $\overline{\theta(\overline{U\cap X})}^{\beta Y}$ and $\overline{\theta(\overline{W\cap X})}^{\beta Y}$ and completes the proof of the ``only if'' part. The reverse implication follows by symmetry.
\end{proof}
\begin{lem}\label{l2.0.3}
For any $x\in \beta X$, the set $\bigcap\{\overline{\theta(\overline{U\cap X})}^{\beta Y}: U \in {\mathcal F}_x\}$ has exactly one point in $\beta Y$.
\end{lem}
\begin{proof}
Let $U_i$, $i=1,\dots, n$, be sets in ${\mathcal F}_x$.
Then $\bigcap^n_{i=1}U_i$ is an open neighborhood of $x$ in $\beta X$. By the remark after Lemma \ref{l2.0.1},
there exists $U\in {\mathcal F}_x$ so that $U \subseteq \bigcap^n_{i=1}U_i$.
Then $\overline{\theta(\overline{U\cap X})}^{\beta Y} \subseteq \bigcap^n_{i=1}\overline{\theta(\overline{U_i\cap X})}^{\beta Y}$.
Since the set on the left is nonempty, this shows that the family $\{\overline{\theta(\overline{U\cap X})}^{\beta Y}: U \in {\mathcal F}_x\}$, which consists of closed sets in $\beta Y$, has the finite intersection property. By compactness of $\beta Y$, we conclude that the intersection in the statement of the lemma is nonempty.
Suppose that $y_1,y_2$ are distinct points in
$\bigcap\{\overline{\theta(\overline{U\cap X})}^{\beta Y}: U \in {\mathcal F}_x\}$.
Choose sets $V_i \in {\mathcal F}_{y_i}$, $i=1,2$, so that $\overline{V_1}^{\beta Y} \cap \overline{V_2}^{\beta Y} = \emptyset$.
By Lemma \ref{l2.0.2}, $x\in \overline{\theta^{-1}(\overline{V_i\cap Y})}^{\beta X}$, $i=1,2$.
This contradicts Lemma \ref{l2.0.1} applied to the map $\theta^{-1}$.
\end{proof}
Define the map $\varphi: \beta X\to \beta Y$ by taking $\varphi(x)$ to be the unique point in $\bigcap\{\overline{\theta(\overline{U\cap X})}^{\beta Y}: U \in {\mathcal F}_x\}$.
By symmetry, we also have an analogous map $\psi : \beta Y\to \beta X$.
Now we arrive at the first structural result on biseparating maps on vector-valued $C/C_*$ spaces.
\begin{thm}\label{t2.0.4}
Let $X, Y$ be Hausdorff completely regular topological spaces and let $E,F$ be nonzero Hausdorff topological vector spaces.
Suppose that $T:A(X,E)\to A(Y,F)$ is a biseparating map, where $A(X,E) = C(X,E)$ or $C_*(X,E)$ and $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.
Then there is a homeomorphism $\varphi:\beta X\to \beta Y$ so that for any $f,g \in A(X,E)$ and any open set $U$ in $\beta {X}$, $f = g$ on $U\cap X$ if and only if $Tf = Tg$ on ${\varphi}(U) \cap Y$.
\end{thm}
\begin{proof}
Consider the maps $\varphi:\beta X \to \beta Y$ and $\psi: \beta Y \to \beta X$ given above.
By Lemma \ref{l2.0.2}, $\varphi$ and $\psi$ are mutual inverses.
Let us show that $\varphi$ is continuous.
If $\varphi$ is not continuous at some $x_0 \in \beta X$, then there is a net $(x_\alpha)$ converging to $x_0$ so that, after passing to a subnet (using the compactness of $\beta Y$), $(\varphi(x_\alpha))$ converges to some $y_1 \neq \varphi(x_0)= y_0$.
Choose $V_i \in {\mathcal F}_{y_i}$, $i=0,1$, so that $\overline{V_0}^{\beta Y} \cap \overline{V_1}^{\beta Y} = \emptyset$.
For a cofinal set of $\alpha$, $V_1 \in {\mathcal F}_{\varphi(x_\alpha)}$, hence $x_\alpha = \psi(\varphi(x_\alpha)) \in \overline{\theta^{-1}(\overline{V_1\cap Y})}^{\beta X}$. Therefore, $x_0 \in \overline{\theta^{-1}(\overline{V_1\cap Y})}^{\beta X}$. Also, $V_0\in {\mathcal F}_{y_0}$ implies that $x_0\in \overline{\theta^{-1}(\overline{V_0\cap Y})}^{\beta X}$.
This is impossible since $\overline{\theta^{-1}(\overline{V_i\cap Y})}^{\beta X}$, $i =0,1$, are disjoint by Lemma \ref{l2.0.1}.
This completes the proof of continuity of $\varphi$. It follows that $\varphi$ is a homeomorphism by symmetry.
Let $U$ be an open set in $\beta X$ and suppose that $f= g$ on $U\cap X$ for some $f,g\in A(X,E)$.
Let $y_0\in \varphi(U)\cap Y$ and set $x_0 = \psi(y_0)\in U$. We wish to show that $Tf(y_0) =Tg(y_0)$.
By the remark preceding Lemma \ref{l2.0.2}, we may assume that $U\cap X \in {\mathcal C}(X)$.
Then $U\in {\mathcal F}_{x_0}$. By definition of $\varphi$, $y_0 = \varphi(x_0) \in \overline{\theta(\overline{U\cap X})}^{\beta Y}$.
Since $y_0\in Y$ and $\theta(\overline{U\cap X})$ is closed in $Y$, $y_0 \in \theta(\overline{U\cap X})$.
By Theorem \ref{t5}, $Tf = Tg$ on $\theta(\overline{U\cap X})$.
Hence $Tf(y_0) = Tg(y_0)$.
This proves that $Tf = Tg$ on $\varphi(U) \cap Y$.
The reverse implication follows by symmetry.
\end{proof}
Recall that a Hausdorff completely regular topological space
$X$ is {\em realcompact} if for any $x_0\in \beta X\backslash X$, there is a continuous function $f:\beta X\to [0,1]$ so that $f(x_0) =1 > f(x)$ for all $x\in X$.
For more on realcompact spaces, refer to the classic \cite{GJ}.
In particular, let $\upsilon X$ be the Hewitt realcompactification of $X$ \cite{GJ}. Then every $f\in C(X)$ has a unique continuous extension to a (real valued) function $\stackrel{\smile}{f} \in C(\upsilon X)$.
The map $f\mapsto\stackrel{\smile}{f}$ is an algebraic isomorphism and hence biseparating. It is therefore rather natural to consider realcompact spaces in the present context.
When one or more of the spaces $X$ or $Y$ is realcompact, Theorem \ref{t2.0.4} can be improved.
\begin{lem}\label{l2.4}\cite[Lemma 3.2]{LT}
Let $Y$ be a realcompact space and let $y_0 \in \beta Y \backslash Y$. There exist open sets $U_n$ and $V_n$ in $\beta Y$, $n \in {\mathbb N}$, such that
\begin{enumerate}
\item $\overline{U_n}^{\beta Y} \subseteq V_n$ for all $n$;
\item $y_0 \in \overline{\bigcup^\infty_{n=m}U_n}^{\beta Y}$ for all $m$;
\item $Y \cap \bigcap^\infty_{m=1}\overline{\bigcup^\infty_{n=m}V_n}^{\beta Y} = \emptyset$;
\item $\overline{V_n}^{\beta Y} \cap \overline{V_m}^{\beta Y} = \emptyset$ if $n\neq m$.
\end{enumerate}
\end{lem}
\begin{lem}\label{l2.5.1}
Let $E$ be a Hausdorff topological vector space and let $0 \neq u\in E$.
There is a continuous function $h:E\to {\mathbb R}$ so that $h(nu)= n$ for all $n\in {\mathbb N}$.
\end{lem}
\begin{proof}
Let $U$ be a circled open neighborhood of $0$ in $E$ so that $u \notin U + U +U$.
Then let $V$ be an open neighborhood of $0$ so that $\overline{V}\subseteq U$.
Set $V_n = nu + V$, $n\in{\mathbb N}$. Suppose that $m\neq n$ and $\overline{V_m} \cap \overline{V_n} \neq \emptyset$.
Then there are $v_1,v_2\in \overline{V}$ so that $mu+v_1 = nu + v_2$.
Thus
\[ u = \frac{v_2}{m-n} + \frac{v_1}{n-m} \in U+U \subseteq U+U+U,
\]
contrary to the choice of $U$. This shows that $\overline{V_m}\cap \overline{V_n} = \emptyset$ if $m\neq n$.
Next, we claim that
$\overline{\bigcup V_n} = \bigcup \overline{V_n}$.
Suppose that $x\in \overline{\bigcup V_n}$.
Choose $n_0\in {\mathbb N}$ so that $\frac{x}{n_0} \in U$; since $U$ is circled, $\frac{x}{n}\in U$ for all $n\geq n_0$.
For any $n\geq n_0$, if $V_n \cap (x+U) \neq \emptyset$, then there are $v\in V$ and $w\in U$ so that
$nu+ v = x+w$. Thus
\[ u = \frac{x}{n} + \frac{w}{n} -\frac{v}{n} \in U + U + U,\]
a contradiction. Hence $x\notin \overline{\bigcup_{n\geq n_0}V_n}$.
Therefore, $x\in \overline{\bigcup^{n_0-1}_{n=1}V_n} = \bigcup^{n_0-1}_{n=1}\overline{V_n}$.
This proves the claim.
Since $E$ is completely regular, for each $n\in {\mathbb N}$, there exists a continuous function $h_n:E\to {\mathbb R}$ so that $h_n(nu) = n$ and that $h_n(x) = 0$ if $x \notin V_n$.
Define $h:E\to {\mathbb R}$ by $h(x) = h_n(x)$ if $x\in V_n$ for some $n$ and $h(x) = 0$ otherwise.
From the properties of the sets $V_n$ shown above, for each $x\in E$, there are an open neighborhood $O$
of $x$ and some $n\in {\mathbb N}$ so that $h = h_n$ on $O$ or $h = 0$ on $O$.
It follows easily that $h$ is continuous.
Obviously, $h(nu)= n$ for all $n\in {\mathbb N}$.
\end{proof}
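\medskip
\noindent {\bf Remark}. When $E$ is a normed space, a function $h$ as in Lemma \ref{l2.5.1} can be written down explicitly (a sketch bypassing the general construction): let
\[ h(x) = \sum^\infty_{n=1}n\Bigl(1 - \frac{6}{\|u\|}\|x-nu\|\Bigr)^+.\]
The $n$-th summand is supported in $\overline{B}(nu,\frac{\|u\|}{6})$; these balls are pairwise disjoint, and every $x\in E$ has a neighborhood (e.g., $B(x,\frac{\|u\|}{6})$) meeting at most one of them, so $h$ is continuous. Clearly $h(nu) = n$ for all $n\in{\mathbb N}$.
\medskip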
\begin{lem}\label{l2.6}
If $A(Y,F) = C(Y,F)$ and $Y$ is realcompact, then $\varphi(X)\subseteq Y$.
\end{lem}
\begin{proof}
Retain the notation of Theorem \ref{t2.0.4}. Suppose that there exists $x_0\in X$ so that $y_0 = \varphi(x_0) \in \beta Y \backslash Y$.
Choose open sets $U_n, V_n$ in $\beta Y$ using Lemma \ref{l2.4}.
From property (1) of said lemma, there exists a continuous function $f_n:\beta Y \to [0,1]$ so that
$f_n =1$ on $U_n$ and $f_n = 0$ outside $V_n$.
Fix a nonzero vector $u \in E$ and let $g_n$ be defined on $Y$ by $g_n(y) = f_n(y)T(n\otimes u)(y)$.
Then set $g(y) = g_n(y)$ if $y\in V_n\cap Y$ for some $n$ and $g(y) = 0$ if $y \in Y \backslash (\bigcup^\infty_{n=1}V_n)$.
Fix $y \in Y$. By property (3) of Lemma \ref{l2.4}, there exists $m$ so that $y \notin \overline{\bigcup^\infty_{n=m}V_n}^{\beta Y}$.
By property (4) of Lemma \ref{l2.4}, there exists at most one $n_0$, $1\leq n_0< m$, so that $y\in \overline{V_{n_0}}^{\beta Y}$.
Therefore, there exists an open neighborhood $U$ of $y$ in $Y$ so that $g = g_{n_0}$ or $g =0$ on the set $U$.
Thus $g$ is continuous at $y$. Since $y\in Y$ is arbitrary, $g\in C(Y,F)$.
As $g = T(n\otimes u)$ on $U_n\cap Y$, by Theorem \ref{t2.0.4}, $T^{-1}g = nu$ on ${\varphi}^{-1}(U_n)\cap X$.
By Lemma \ref{l2.5.1}, there is a continuous function $h:E\to {\mathbb R}$ so that $h(nu) = n$ for all $n$.
Set $k = h\circ T^{-1}g \in C(X)$. Let $m\in {\mathbb N}$. From the above, $k \geq m$ on ${\varphi}^{-1}(\bigcup^\infty_{n=m}U_n)\cap X$.
By (2) of Lemma \ref{l2.4}, $y_0 \in \overline{\bigcup^\infty_{n=m}U_n}^{\beta Y}$.
Since each $U_n$ is open in $\beta Y$ and $\varphi(X)$ is dense in $\beta Y$,
\[y_0 \in \overline{(\bigcup^\infty_{n=m}U_n) \cap \varphi(X)}^{\beta Y}.
\] As $\varphi:\beta X\to \beta Y$ is a homeomorphism,
\[x_0 =\varphi^{-1}(y_0) \in \overline{\varphi^{-1}(\bigcup^\infty_{n=m}U_n)\cap X}^{\beta X}.
\]
Recall that $x_0\in X$. By continuity of $k$, $k(x_0) \geq m$. This is a contradiction since $k$ is real-valued and $m$ is arbitrary.
\end{proof}
\begin{thm}\label{t2.8}
Let $X$, $Y$ be realcompact spaces and let $E$ and $F$ be Hausdorff topological vector spaces.
If $T:C(X,E)\to C(Y,F)$ is a (nonlinear) biseparating map, then $X$ and $Y$ are homeomorphic.
\end{thm}
\begin{proof}
By Lemma \ref{l2.6}, $\varphi$ maps $X$ into $Y$. By symmetry, $\varphi$ maps $X$ onto $Y$.
Hence it is a homeomorphism from $X$ onto $Y$.
\end{proof}
Theorem \ref{t2.8} generalizes the same result obtained in \cite{A, BBH} for {\em linear} biseparating maps.
\begin{thm}\label{t2.9}
Suppose that $Y$ is realcompact. Let $T:C_*(X,E)\to C(Y,F)$ be a biseparating map. Then $Y$ is compact.
\end{thm}
\begin{proof}
The proof is the same as the proof of Lemma \ref{l2.6}.
If $y_0 \in \beta Y \backslash Y$, then following the proof of Lemma \ref{l2.6}, one can choose $0\neq u\in E$ and construct a function $g\in C(Y,F)$
and a sequence of nonempty sets $(W_n)$ in $X$ so that $T^{-1}g = nu$ on $W_n$ for each $n\in {\mathbb N}$.
($W_n$ is the set $\varphi^{-1}(U_n)\cap X$ in the proof of Lemma \ref{l2.6}.)
Since the set $\{nu : n\in{\mathbb N}\}$ cannot be a bounded set in $E$, $T^{-1}g\notin C_*(X,E)$, contrary to the assumption. Therefore, $Y = \beta Y$ is compact.
\end{proof}
\noindent
{\bf Remark}. It is well known that $C_*(X)$ is algebraically isomorphic to $C(\beta X)$. Hence one cannot expect $X$ to be compact in Theorem \ref{t2.9} in general.
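For instance, the extension operator from $C_*({\mathbb N}) = \ell_\infty$ onto $C(\beta{\mathbb N})$ is an algebraic isomorphism, hence biseparating, and $\beta{\mathbb N}$ is compact while ${\mathbb N}$ is not.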
\section{Metric cases -- general results}\label{s3}
Throughout this section, let $X$ and $Y$ be metric spaces, $E$ and $F$ be nontrivial normed spaces and $A(X,E)$, $A(Y,F)$ be vector subspaces of $C(X,E)$ and $C(Y,F)$ respectively. Recall that a {\em bump function} on a Banach space $G$ is a nonzero real-valued function on $G$ with bounded support.
When we speak of the spaces $C^p(X,E)$ or $C^p_*(X,E)$, it will be assumed additionally that $X$ is an open set in a Banach space that supports a $C^p$, respectively, $C^p_*$, bump function.
A sequence $(x_n)$ in $X$ is {\em separated} if $\inf_{n\neq m}d(x_n,x_m) > 0$.
The main aim of the section is an analog of the structural result Theorem \ref{t2.0.4} when $X$ and $Y$ are metric spaces. In this instance, we make use of completion instead of compactification.
\begin{prop}\label{p3.0.1}
Let $A(X,E)$ be one of the spaces $C(X,E)$, $C_*(X,E)$, $U(X,E)$, $U_*(X,E)$, $\operatorname{Lip}(X,E)$, $\operatorname{Lip}_*(X,E)$, $C^p(X,E)$ or $C^p_*(X,E)$.
Then $A(X,E)$ has
the following properties.
\begin{enumerate}
\item[(S1)] $A(X,E)$ is compatible.
\item[(S2)] For any $x\in X$ and any $
\varepsilon >0$, there exists $C\in {\mathcal C}(X)$ so that $x\in C$ and $\operatorname{diam} C <\varepsilon$. In particular, $A(X,E)$ is basic and $\widehat{C}(f) = \overline{C(f)}$ for all $f\in A(X,E)$.
\item[(S3)] If $f\in A(X,E)$ and $(x_n)$ is a separated sequence in $X$, then there are a sequence $(C_n)$ in ${\mathcal C}(X)$ and a function $g\in A(X,E)$ so that $x_n \in C_n$ for all $n$, $g = f$ on $C_n$ for infinitely many $n$ and $g = 0$ on $C_n$ for infinitely many $n$.
\item[(S4)] Let $(x_n)$ and $(x'_n)$ be Cauchy sequences so that $\inf_{m,n} d(x_m,x'_n) > 0$.
For any $f\in A(X,E)$, there are sets $U,V\in {\mathcal C}(X)$ and a function $g\in A(X,E)$ so that $x_n\in U$ and $x'_n\in V$ for infinitely many $n$, $g = f$ on $U$ and $g = 0$ on $V$.
\end{enumerate}
\end{prop}
\begin{proof}
Except for property (S3) for the spaces $U(X,E)$ and $\operatorname{Lip}(X,E)$, all other verifications are straightforward and are left to the reader.
To verify (S3) for $A(X,E) = U(X,E)$ or $\operatorname{Lip}(X,E)$,
let $(x_n)$ be a sequence in $X$ so that $\inf_{n\neq m}d(x_n,x_m) = 3r > 0$ and let $f\in A(X,E)$.
In the first instance, assume that $(f(x_n))$ is a bounded sequence in $E$.
Since $f\in U(X,E)$, there exists $0< r' < r$ so that \[\sup\{\|f(x)\|: x\in \bigcup_n B(x_n,r')\} = M < \infty.\]
For each $n\in {\mathbb N}$, let $h_n, k_n:X\to [0,1]$ be defined by
\[ h_n(x) = (2 - \frac{2}{r'}d(x,x_n))^+\wedge 1 \text{ and } k_n(x) = (1 - \frac{d(x,x_n)}{r'})^+.\]
Then $(h_n)$ is a sequence of disjoint functions. Let $h$ be the pointwise sum $\sum h_{2n-1}$. It is easily verified that $h: X\to [0,1]$ is a Lipschitz function with Lipschitz constant $\frac{2}{r'}$.
Take $g = h\cdot f$. For each $n$, let $C_n = B(x_n,\frac{r'}{2})$. Fix a nonzero vector $a\in E$. Then $k_n\otimes a \in \operatorname{Lip}(X,E) \subseteq A(X,E)$. Hence
$C_n = C(k_n\otimes a) \in {\mathcal C}(X)$. Clearly $g= f$ on $C_n$ if $n$ is odd and $g = 0$ on $C_n$ if $n$ is even.
Let us verify that $g\in A(X,E)$.
Indeed, suppose that $s, t\in X$. If $s,t\notin \bigcup_n B(x_n,r')$, then $h(s) = h(t) =0$. Hence $\|g(s) - g(t)\| = 0$. Otherwise, assume without loss of generality that $t\in \bigcup_n B(x_n,r')$. We have
\begin{align*}
\|g(s) -g(t)\| & \leq |h(s)|\, \|f(s) - f(t)\| + |h(s)-h(t)|\,\|f(t)\| \\
& \leq \|f(s)-f(t)\| + \frac{2}{r'}\,d(s,t)\cdot M.
\end{align*}
Since $f\in A(X,E)$, it follows that $g \in A(X,E)$.
In the second case, assume that $(f(x_n))$ is unbounded in $E$.
Let $t_n = \|f(x_n)\|$ for all $n$. By replacing $(x_n)$ by a subsequence if necessary, we may assume that $t_1> 0$ and $6t_n < t_{n+1}$ for all $n$. Define $\gamma: [0,\infty) \to [0,1]$ by
\[ \gamma(t) = \begin{cases}
(2 - \frac{3}{t_n}|t-t_n|)\wedge 1 &\text{if $|t-t_n| < \frac{2t_n}{3}$ for some odd $n$}\\
0 & \text{otherwise}. \end{cases}
\]
Direct verification shows that if $0\leq a\leq b\neq 0$, then $|\gamma(a)-\gamma(b)| \leq \frac{6|a-b|}{b}$.
Let $g:X\to E$ be given by $g(x) = \gamma(\|f(x)\|)f(x)$.
Suppose that $y,z\in X$ with $\|f(y)\| \leq \|f(z)\|$ and $f(z) \neq 0$.
Then
\begin{align*}
\|g(y)- g(z)\| & \leq |\gamma(\|f(y)\|) - \gamma(\|f(z)\|)|\,\|f(y)\| + |\gamma(\|f(z)\|)|\,\|f(y) - f(z)\|\\
& \leq \frac{6(\|f(z)\| - \|f(y)\|)}{\|f(z)\|}\,\|f(y)\| + \|f(y) - f(z)\|\\
& \leq 7\|f(y)-f(z)\|.
\end{align*}
The same inequality obviously holds if $f(y) = f(z) = 0$.
Since $f$ belongs to $A(X,E)$, so does $g$.
As $\frac{t_n}{3} < \|f(x_n)\| < \frac{5t_n}{3}$, there exists $C_n\in {\mathcal C}(X)$ so that
\[ x_n \in C_n \subseteq \{x: \frac{t_n}{3} < \|f(x)\| < \frac{5t_n}{3}\}.\]
Finally, $\gamma(\|f(x)\|) =1$ if $x \in C_{2n-1}$ and $\gamma(\|f(x)\|) =0$ if $x \in C_{2n}$.
Hence $g = f$ on $C_{2n-1}$ and $g=0$ on $C_{2n}$.
This completes the verification of property (S3) for $A(X,E) = U(X,E)$ or $\operatorname{Lip}(X,E)$.
\end{proof}
For the sake of brevity, let us say that $A(X,E)$ is {\em standard} if it satisfies properties (S1) -- (S4).
For the rest of the section, assume that $A(X,E)$ and $A(Y,F)$ are standard spaces and that $T:A(X,E)\to A(Y,F)$ is a biseparating map. Without loss of generality, normalize $T$ by taking $T0=0$.
Let $\theta:{\mathcal D}(X) \to {\mathcal D}(Y)$ be the map obtained from Theorem \ref{t5}.
As in Section \ref{s2}, we will show that $\theta$ induces a point mapping $\varphi$.
Denote by $\widetilde{X}$ and $\widetilde{Y}$ the respective completions of the spaces $X$ and $Y$.
For any subset $U$ of $\widetilde{X}$, denote the closure of $U$ in $\widetilde{X}$ by $\widetilde{U}$. Similarly for sets in $\widetilde {Y}$.
If $x_0\in \widetilde{X}$ and $(U_n)$ is a sequence of nonempty sets in ${\mathcal C}(X)$ so that $\operatorname{diam} U_n \to 0$ and $d(x_0,U_n)\to 0$, we write $(U_n) \sim x_0$. By condition (S2), for any $x_0\in \widetilde{X}$, there is always a sequence $(U_n)$ so that $(U_n)\sim x_0$.
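\medskip
\noindent {\bf Example}. For a concrete instance of this notation (included only for orientation), let $X = (0,1)$ and $A(X,{\mathbb R}) = U(X)$, so that $\widetilde{X} = [0,1]$, and take $x_0 = 0 \in \widetilde{X}\backslash X$. The sets $U_n = (0,\frac{1}{n})$ belong to ${\mathcal C}(X)$, since $U_n = C(f_n)$ for the Lipschitz function $f_n(t) = (\frac{1}{n}-t)^+$; moreover $\operatorname{diam} U_n \to 0$ and $d(x_0,U_n) = 0$ for all $n$. Thus $(U_n)\sim x_0$.
\medskip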
Suppose that $y \in \theta(\overline{U})\cap V$, where $U = C(u) \in {\mathcal C}(X)$ and $V= C(v)\in {\mathcal C}(Y)$.
Since $\theta(\overline{U}) = \overline{C(Tu)}$, $C(Tu) \cap C(v)\neq \emptyset$.
Thus $C(u)\cap C(T^{-1}v)\neq \emptyset$.
Hence $U \cap \theta^{-1}(\overline{V})\neq \emptyset$.
\begin{lem}\label{l3.1}
Let $x_0\in \widetilde{X}$ and assume that $(U_n)\sim x_0$.
Take $y_n \in \theta(\overline{U_n})$ for each $n$.
\begin{enumerate}
\item If $x_0\in X$, then $(y_n)$ is a Cauchy sequence in $Y$.
\item If, in addition, $A(X,E)\subseteq U(X,E)$ and contains a nonzero constant function, then $(y_n)$ is a Cauchy sequence in $Y$ for any $x_0\in \widetilde{X}$.
\end{enumerate}
\end{lem}
\begin{proof}
First we show that every subsequence of $(y_n)$ has a further Cauchy subsequence. Otherwise, there is a subsequence $(y'_n)$ of $(y_n)$ that is separated.
Under assumption (1), $x_0 \in X$. It follows from condition (S2) that there is a function $f\in A(X,E)$ so that $f(x_0) \neq 0$. Under assumption (2), take $f = 1\otimes a\in A(X,E)$, where $a\neq 0$.
Since $A(Y,F)$ has property (S3), there are a subsequence of $(y_n')$, still denoted as $(y_n')$, a sequence $(V_n)$ in ${\mathcal C}(Y)$ and a function $g\in A(Y,F)$ so that $y'_n\in V_n$ for all $n$, $g= Tf$ on $V_n$ for infinitely many $n$ and $g = 0$ on $V_n$ for infinitely many $n$.
Then $T^{-1}g = f$ on $\theta^{-1}(\overline{V_n})$ for infinitely many $n$ and $T^{-1}g = 0$ on $\theta^{-1}(\overline{V_n})$ for infinitely many $n$.
Since $y'_n \in \theta(\overline{U_n}) \cap V_n$, $U_n \cap \theta^{-1}(\overline{V_n}) \neq \emptyset$ by the discussion just before the lemma. Choose a point $x'_n$ from the intersection. Then $(T^{-1}g)(x'_n)=f(x_n')$ for infinitely many $n$ and $(T^{-1}g)(x'_n) = 0$ for infinitely many $n$.
Moreover, $(x'_n)$ converges to $x_0$ in $\widetilde{X}$.
Under assumption (1),
$x_0 \in X$, and we have a contradiction to the continuity of $T^{-1}g$ at $x_0$.
Under assumption (2), $T^{-1}g \in U(X,E)$ and $(x'_n)$ is Cauchy in $X$. Hence $((T^{-1}g)(x'_n))$ is Cauchy in $E$. This is impossible since $(T^{-1}g)(x'_n) = f(x'_n) = a$ and $(T^{-1}g)(x'_n) = 0$ both occur infinitely many times.
If the whole sequence $(y_n)$ is not Cauchy, then in view of the previous paragraph, there are subsequences $(y_{i_n})$ and $(y_{j_n})$ and $\varepsilon >0$ so that both subsequences are Cauchy and that $d(y_{i_m},y_{j_n}) > \varepsilon $ for all $m,n$.
Choose the function $f$ as in the last paragraph. By property (S4), there are $U,V \in {\mathcal C}(Y)$ and $g\in A(Y,F)$ so that $y_{i_n}\in U$, $y_{j_n}\in V$ for infinitely many $n$, $g= Tf$ on $U$ and $g = 0$ on $V$.
Thus $T^{-1}g = f$ on $\theta^{-1}(\overline{U})$ and $T^{-1}g = 0$ on $\theta^{-1}(\overline{V})$.
Then $y_{i_n} \in \theta(\overline{U_{i_n}}) \cap U$ for infinitely many $n$ and hence $U_{i_n} \cap \theta^{-1}(\overline{U}) \neq \emptyset$ for infinitely many $n$.
Let $x_{i_n} \in U_{i_n} \cap \theta^{-1}(\overline{U})$. Then $(x_{i_n})$ converges to $x_0$ in $\widetilde{X}$ and $T^{-1}g(x_{i_n}) = f(x_{i_n})$ for all $n$.
Similar consideration using the sequence $(y_{j_n})$ shows that there is a sequence $(x_{j_n})$ converging to $x_0$ in $\widetilde{X}$ so that $T^{-1}g(x_{j_n}) = 0$ for all $n$.
Under assumption (1),
\[ \lim T^{-1}g(x_{i_n}) = \lim f(x_{i_n}) = f(x_0) \neq 0 = \lim T^{-1}g(x_{j_n}),\]
contradicting the continuity of $T^{-1}g$ at $x_0$.
Under assumption (2), $f(x_{i_n}) =a\neq 0$ by choice of $f$. Thus $T^{-1}g(x_{i_n}) = a$ and $T^{-1}g(x_{j_n}) = 0$ for all $n$, contradicting the uniform continuity of $T^{-1}g$.
\end{proof}
Suppose that $x_0\in \widetilde{X}$. Let $(U_n)\sim x_0, (V_n)\sim x_0$ and $y_n\in \theta(\overline{U_n}), z_n\in \theta(\overline{V_n})$ for all $n$.
Then $(U_1,V_1,U_2,V_2,\dots)\sim x_0$.
By Lemma \ref{l3.1}, if $x_0\in X$, then the sequence $(y_1,z_1,y_2,z_2,\dots)$ is Cauchy.
Define $\varphi: X\to \widetilde{Y}$ by setting $\varphi(x) = \lim y_n$, where $(U_n)\sim x$ and $y_n \in \theta(\overline{U_n})$ for all $n$. From the above, $\varphi(x)$ is independent of the choices of $(U_n)$ and $(y_n)$.
Similarly, if $A(X,E)\subseteq U(X,E)$ and contains a constant function $1\otimes a$ for some $a\in E\backslash \{0\}$, then Lemma \ref{l3.1}(2) shows that there is a well defined map $\widetilde{\varphi}:\widetilde{X}\to \widetilde{Y}$ given by
$\widetilde{\varphi}(x) = \lim y_n$, where $(U_n)\sim x$ and $y_n \in \theta(\overline{U_n})$ for all $n$.
Clearly, in this case, $\widetilde{\varphi}$ extends $\varphi$.
By symmetry, there is also a similar map $\psi:Y\to \widetilde{X}$ and a map $\widetilde{\psi}:\widetilde{Y}\to \widetilde{X}$ under corresponding assumptions on $A(Y,F)$.
\begin{lem}\label{l3.2}
The map $\varphi$ is continuous from $X$ into $\widetilde{Y}$. If, in addition, $A(X,E)\subseteq U(X,E)$ and contains a nonzero constant function, then $\widetilde{\varphi}:\widetilde{X}\to \widetilde{Y}$ is continuous on $\widetilde{X}$.
\end{lem}
\begin{proof}
We will prove the second assertion. The first statement can be shown in the same way.
Under the second assumption, $\widetilde{\varphi}$ is well defined. Let $(x_n)$ be a sequence in $\widetilde{X}$ that converges to a point $x_0\in \widetilde{X}$.
By definition of $\widetilde{\varphi}$, for each $n$, $\widetilde{\varphi}(x_n) = \lim_k y_{nk}$, where $y_{nk} \in \theta(\overline{U_{nk}})$ and $(U_{nk})_k \sim x_n$.
For each $n$, choose $k_n$ so that $d(y_{nk_n}, \widetilde{\varphi}(x_n)), \operatorname{diam} U_{nk_n}, d(U_{nk_n}, x_{n}) < \frac{1}{n}$. Then $y_{nk_n} \in \theta(\overline{U_{nk_n}})$ and $(U_{nk_n})_n\sim x_0$.
Thus $\widetilde{\varphi}(x_0) = \lim y_{nk_n} = \lim \widetilde{\varphi}(x_n)$.
\end{proof}
Suppose that $x\in U \in {\mathcal C}(X)$. We can choose $(U_n)$ so that $(U_n)\sim x$ and $U_n \subseteq U$ for all $n$. By definition of $\varphi$, $\varphi(x) = \lim y_n$ where $y_n\in \theta(\overline{U_n}) \subseteq \theta(\overline{U})$. Hence $\varphi(x) \in \widetilde{\theta(\overline{U})}$.
\begin{lem}\label{l3.2.1}
Assume that $A(X,E)\subseteq U(X,E)$ and contains a nonzero constant function.
Let $f,g\in A(X,E)$ and $U$ be an open set in $\widetilde{X}$. If $f = g$ on $U\cap X$, then $Tf = Tg$ on the set $\widetilde{\varphi}(U) \cap Y$.
\end{lem}
\begin{proof}
Assume that $x_0\in U$ and $y_0 = \widetilde{\varphi}(x_0) \in Y$.
Choose $(U_n)\sim x_0$ and let $x_n \in U_n$.
By the foregoing remark, $\varphi(x_n) \in \widetilde{\theta(\overline{U_n})}$. Pick $y_n \in \theta(\overline{U_n})$.
By definition of $\widetilde{\varphi}$, $y_0 = \widetilde{\varphi}(x_0) = \lim y_n$.
For all sufficiently large $n$, $\overline{U_n} \subseteq U\cap X$.
Hence $f=g$ on $\overline{U_n}$. By Theorem \ref{t5}, $Tf = Tg$ on $\theta(\overline{U_n})$.
In particular, $Tf(y_n) = Tg(y_n)$.
By continuity of $Tf$ and $Tg$ at $y_0$, $Tf(y_0) = Tg(y_0)$.
\end{proof}
The following structure theorem applies to spaces of uniformly continuous functions and spaces of Lipschitz functions.
\begin{thm}\label{t3.5}
Suppose that both $A(X,E)$ and $A(Y,F)$ are standard subspaces of
$U(X,E)$ and $U(Y,F)$ respectively so that both contain nonzero constant functions.
There is a homeomorphism $\widetilde{\varphi}:\widetilde{X}\to \widetilde{Y}$ so that if
$f,g\in A(X,E)$ and $U$ is an open set in $\widetilde{X}$, then $f=g$ on $U\cap X$ if and only if $Tf = Tg$ on $\widetilde{\varphi}(U)\cap Y$.
\end{thm}
\begin{proof}
Under the given assumptions, we have well defined continuous maps $\widetilde{\varphi}:\widetilde{X}\to \widetilde{Y}$ and $\widetilde{\psi}:\widetilde{Y}\to \widetilde{X}$ by Lemma \ref{l3.2}.
In the next paragraph, we will show that $\widetilde{\psi}\circ \widetilde{\varphi}$ is the identity map on $\widetilde{X}$. With symmetry, this allows us to conclude that $\widetilde{\varphi}$ is a homeomorphism.
The final property in the statement of the theorem follows from Lemma \ref{l3.2.1} and symmetry.
Let $x_0\in \widetilde{X}$ and let $(U_n)\sim x_0$.
It follows from (2) of Lemma \ref{l3.1} and the definition of $\widetilde{\varphi}$ that $\operatorname{diam} \theta(\overline{U_n})\to 0$ and $d(\theta(\overline{U_n}),\widetilde{\varphi}(x_0)) \to 0$. By definition of $\theta$, there exists $V_n \in {\mathcal C}(Y)$ so that $\theta(\overline{U_n}) = \overline{V_n}$.
Then $(V_n)\sim \widetilde{\varphi}(x_0)$.
Hence $\widetilde{\psi}( \widetilde{\varphi}(x_0))= \lim x_n$, where $x_n \in \theta^{-1}(\overline{V_n}) = \overline{U_n}$ for all $n$.
Therefore, $\widetilde{\psi}( \widetilde{\varphi}(x_0))= x_0$, as claimed.
\end{proof}
Next, we consider the cases where one or both of $A(X,E)$ and $A(Y,F)$ is either $C, C_*$ or $C^p$.
\begin{lem}\label{l3.6}
Suppose that $A(X,E)$ is standard and $A(Y,F) = C(Y,F)$, $C_*(Y,F)$ or $C^p(Y,F)$. If $T:A(X,E)\to A(Y,F)$ is a biseparating map, then $\varphi(X)\subseteq Y$.
\end{lem}
\begin{proof}
Suppose on the contrary that there exists $x_0\in X$ so that $y_0 = \varphi(x_0) \in \widetilde{Y}\backslash Y$.
Let $(U_n)\sim x_0$. Then $(\theta(\overline{U_n}))$ is a sequence of sets in $Y$, each with nonempty interior, so that $\operatorname{diam}\theta(\overline{U_n}) \to 0$ and $d(\theta(\overline{U_n}),y_0) \to 0$.
Hence one can find a sequence $(g_n)$ in $C_*(Y)$, respectively $C^p(Y)$, and a sequence $(V_n)$ of nonempty sets in ${\mathcal C}(Y)$ so that $g_n =1$ on $V_n$, $\overline{C(g_n)}\subseteq \theta(\overline{U_n})$ for all $n$, and $\overline{C(g_m)}\cap \overline{C(g_n)} = \emptyset$ if $m\neq n$.
As observed in the proof of Theorem \ref{t3.5}, $\operatorname{diam} \theta(\overline{U_n})\to 0$.
So $(\overline{C(g_n)})$ is a pairwise disjoint sequence so that $\operatorname{diam} \overline{C(g_n)}\to 0$ and $d(\overline{C(g_n)},y_0) \to 0$, where $y_0 \notin Y$. Therefore the pointwise sum $g = \sum g_{2n}$ belongs to $C_*(Y)$, respectively, $C^p(Y)$.
By condition (S2), there exists $f\in A(X,E)$ so that $f(x_0) \neq 0$.
Then $h = g\cdot Tf$ lies in $A(Y,F)$.
Since $h = Tf$ on $\overline{V_n}$ if $n$ is even and $h = 0$ on $\overline{V_n}$ if $n$ is odd, and $\overline{V_n} \in {\mathcal D}(Y)$,
by Theorem \ref{t5}, $T^{-1}h = f$ on $\theta^{-1}(\overline{V_n})$ if $n$ is even and
$T^{-1}h = 0$ on $\theta^{-1}(\overline{V_n})$ if $n$ is odd.
Since $\overline{V_n} \subseteq \theta(\overline{U_n})$, $\theta^{-1}(\overline{V_n}) \subseteq \overline{U_n}$.
Choose $x_n \in \theta^{-1}(\overline{V_n})$ for each $n$. Then $(x_n)$ converges to $x_0$.
However, $T^{-1}h(x_n) = f(x_n)$ if $n$ is odd and $T^{-1}h(x_n) = 0$ if $n$ is even.
As $(f(x_n))$ converges to $f(x_0) \neq 0$,
this contradicts the continuity of $T^{-1}h$ at $x_0$. This proves that $\varphi(X) \subseteq Y$.
\end{proof}
The next two results can be obtained utilizing the proof of Theorem \ref{t3.5} and taking into account Lemma \ref{l3.6}.
\begin{thm}\label{t3.7}
Let $A(X,E) = C(X,E),$ $C_*(X,E)$ or $C^p(X,E)$ and let $A(Y,F) = C(Y,F),$ $C_*(Y,F)$ or $C^q(Y,F)$.
There exists a homeomorphism $\varphi: X\to Y$ so that
for any $f,g\in A(X,E)$, and any open set $U$ in $X$, $f =g$ on $U$ $\iff$ $Tf = Tg$ on $\varphi(U)$.
\end{thm}
\begin{thm}\label{t3.8}
Let $A(X,E)$ be a standard vector subspace of $U(X,E)$ that contains a nonzero constant function. Suppose that $A(Y,F) = C(Y,F)$, $C_*(Y,F)$ or $C^p(Y,F)$.
There exists a homeomorphism $\varphi: X\to \varphi(X)$, where $\varphi(X)$ is a dense subset of $Y$, and
for any $f,g\in A(X,E)$ and any open set $U$ in $X$, $f =g$ on $U$ $\iff$ $Tf = Tg$ on $\varphi(U)$.
\end{thm}
\begin{proof}
We will only prove the density of $\varphi(X)$ in $Y$. The other parts follow from the proof of Theorem \ref{t3.5}, using Lemma \ref{l3.6}.
By Lemma \ref{l3.2} and Lemma \ref{l3.6}, $\varphi$ is a continuous map from $X$ into $Y$ with a continuous extension $\widetilde{\varphi}:\widetilde{X}\to \widetilde{Y}$.
Also, we have an analogous continuous map $\psi:Y\to \widetilde{X}$.
From the second paragraph of the proof of Theorem \ref{t3.5}, we see that $\widetilde{\varphi}\circ\psi$ is the identity map on $Y$. Given $y\in Y$, $\psi(y)\in \widetilde{X}$. Hence there is a sequence $(x_n)$ in $X$ that converges to $\psi(y)$.
By continuity of $\widetilde{\varphi}$ at $\psi(y)$, $(\varphi(x_n))=(\widetilde{\varphi}(x_n))$ converges to $\widetilde{\varphi}(\psi(y)) =y$.
This proves that $y\in \overline{\varphi(X)}$.
\end{proof}
We conclude this section with a remark concerning the space $C^p_*(X,E)$, where $X$ is an open set in a Banach space $G$ that supports a $C^p_*$ bump function. In general, it may not be true that all functions in $C^p_*(X,E)$ are uniformly continuous (with respect to the norm on $G$).
On the other hand, an easy application of the mean value inequality shows that if $X$ is open and {\em convex} in $G$, then $C^p_*(X,E) \subseteq U(X,E)$.
In particular, Theorems \ref{t3.5} and \ref{t3.8} apply to $C^p_*$ spaces whose domains are convex open sets.
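\medskip
\noindent {\bf Remark}. For completeness, here is the estimate behind the last assertion (a routine verification). If $X$ is open and convex and $f\in C^p_*(X,E)$, then for any $x,y\in X$, the mean value inequality applied along the segment $[x,y]\subseteq X$ gives
\[ \|f(x) - f(y)\| \leq \sup_{0\leq t\leq 1}\|Df(y+t(x-y))\|\,\|x-y\| \leq \sup_{z\in X}\|Df(z)\|\,\|x-y\|.\]
Hence $f$ is Lipschitz and, in particular, uniformly continuous.
\medskip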
\section{Pointwise representation}\label{s4}
Retain the notation of Section \ref{s3}. That is, let $X$ and $Y$ be metric spaces, $E$ and $F$ be nontrivial normed spaces, and assume that $A(X,E)$ and $A(Y,F)$ are standard vector subspaces of $C(X,E)$ and $C(Y,F)$ respectively. Say that $A(X,E)$ has property (P) if
\begin{enumerate}
\item[(P)] For any accumulation point $x$ of $\widetilde{X}$ and any function $f\in A(X,E)$ so that $\lim_{\stackrel{z\to x}{z\in X}}f(z) =0$, there are open sets $U$ and $V$ in $X$ and a function $g\in A(X,E)$ so that $x\in \widetilde{U} \cap \widetilde{V}$ and that $g =f$ on $U$ and $g = 0$ on $V$.
\end{enumerate}
\medskip
\noindent {\bf Remark}.
If $x$ is an isolated point of $\widetilde{X}$, then $x\in X$. In this case, given $f\in A(X,E)$ so that $f(x) = 0$, take $U = V= \{x\}$ and $g =f$. It is clear that the conditions above are fulfilled.
\begin{prop}\label{p3.10}
Let $A(X,E)$ be one of the spaces $C(X,E)$, $C_*(X,E)$, $U(X,E)$, $U_*(X,E)$, $\operatorname{Lip}(X,E)$ or $\operatorname{Lip}_*(X,E)$. Then $A(X,E)$ has property (P).
\end{prop}
\begin{proof}
Let $x_0$ be an accumulation point of $\widetilde{X}$ and let $f$ be a function in $A(X,E)$ so that $\lim_{\stackrel{x\to x_0}{x\in X}}f(x) =0$.
There is a sequence $(x_n)$ in $X$ converging to $x_0$ so that $0 < d(x_{n+1},x_0) < \frac{d(x_n,x_0)}{3}$ for all $n$. Set $r_n = d(x_n,x_0)$ and let $\gamma_n:[0,\infty)\to {\mathbb R}$ be the function
\[\gamma_n(r) = (2 - \frac{4|r-r_n|}{r_n})^+\wedge 1.\]
$(\gamma_n)$ is a disjoint sequence of functions. Furthermore,
\[ |\gamma_n(a) - \gamma_n(b)| \leq \frac{4}{r_n}|a-b| \wedge 1 \text{ for all $a, b\geq 0$}.\]
We may assume that $\|f(x)\| \leq1$ if $d(x,x_0) < \frac{3r_1}{2}$.
Let
\[ g_n(x) = \gamma_n(d(x,x_0))f(x)\quad \text{and} \quad g = \sum g_{2n} \ \text{(pointwise sum).}\]
Since $\gamma_n(d(\cdot,x_0))$ is bounded Lipschitz and $f$ is bounded on the support of $g_n$, it is easy to check that $g_n\in A(X,E)$.
Note that
\[ \|g_n\|_\infty \leq \sup\{\|f(x)\| :\frac{r_n}{2} < d(x,x_0) < \frac{3r_n}{2}\} \to 0.\]
Therefore, if $A(X,E)$ is any of the spaces except $\operatorname{Lip}(X,E)$ or $\operatorname{Lip}_*(X,E)$, $g$ is the uniform limit of its partial sums and hence belongs to $A(X,E)$.
Now consider the cases $A(X,E) = \operatorname{Lip}(X,E)$ or $\operatorname{Lip}_*(X,E)$. First of all, the function $g$ is bounded.
Let's check that it is Lipschitz. Since $f$ is Lipschitz and $\lim_{\stackrel{x\to x_0}{x\in X}}f(x) =0$, $\|f(x)\| \leq L(f)d(x,x_0)$, where $L(f)$ is the Lipschitz constant of $f$.
For any $n\in {\mathbb N}$, we claim that $g_n$ is Lipschitz with $L(g_n) \leq 7L(f)$. Let $x,z\in X$, $a = d(x,x_0)$, $b = d(z,x_0)$.
If $\gamma_n(a) = \gamma_n(b) =0$, then $g_n(x) - g_n(z) =0$. Otherwise, we may assume that $\gamma_n(a) \neq 0$, so that $\frac{r_n}{2} < a < \frac{3r_n}{2}$.
Then
\begin{align*}
\|g_n(x) - g_n(z)\| & \leq |\gamma_{n}(a)-\gamma_{n}(b)|\,\|f(x)\| + |\gamma_{n}(b)|\,\|f(x) - f(z)\|\\
&\leq \frac{4}{r_{n}}|a-b|\,L(f)a + \|f(x) -f(z)\|\\
&\leq 6L(f)|a-b| + L(f)d(x,z) \leq 7L(f)d(x,z).
\end{align*}
Thus $L(g_n)\leq 7L(f)$, as claimed.
For any $x,z\in X$, either there exists $n$ so that $g(x) = g_{2n}(x)$, $g(z) = g_{2n}(z)$, or there are distinct $m,n$ so that $g(x) = g_{2n}(x) + g_{2m}(x)$, $g(z) = g_{2n}(z) + g_{2m}(z)$. In either case, it follows that $\|g(x)-g(z)\| \leq 14L(f)d(x,z)$. This completes the proof that $g\in \operatorname{Lip}_*(X,E)\subseteq A(X,E)$.
Clearly, $g= f$ on the open set
\[ U = \bigcup_n \{x\in X: \frac{3r_{2n}}{4}< d(x,x_0) < \frac{5r_{2n}}{4}\}\]
and $g = 0$ on the open set
\[V = \bigcup_n \{x\in X: \frac{3r_{2n-1}}{4}< d(x,x_0) < \frac{5r_{2n-1}}{4}\}.\]
Since $x_n\in U$ for all even $n$, and $x_n\in V$ for all odd $n$, $x_0 \in \widetilde{U} \cap \widetilde{V}$.
\end{proof}
With the help of property (P), we can improve Theorems \ref{t3.5}, \ref{t3.7} and \ref{t3.8}.
First we consider the case where $A(X,E)$ and $A(Y,F)$ are standard subspaces of
$U(X,E)$ and $U(Y,F)$ respectively so that both contain nonzero constant functions.
Denote by $\widetilde{E}$ the completion of $E$. Since $A(X,E) \subseteq U(X,E)$, every function $f\in A(X,E)$ has a unique continuous extension $\widetilde{f}:\widetilde{X}\to \widetilde{E}$.
For each $x\in \widetilde{X}$, let
\[ \widetilde{E}_x = \{\widetilde{f}(x): f\in A(X,E)\}.\]
Similarly for $\widetilde{F}_y$ if $y\in \widetilde{Y}$.
Fix a biseparating map $T:A(X,E)\to A(Y,F)$, which we may normalize by taking $T0 = 0$.
Let $\widetilde{\varphi}: \widetilde{X}\to \widetilde{Y}$ be the homeomorphism given by Theorem \ref{t3.5}, with inverse $\widetilde{\psi}$.
\begin{prop}\label{p4.2}
Suppose that $A(X,E)$ and $A(Y,F)$ are standard subspaces of
$U(X,E)$ and $U(Y,F)$ respectively so that both contain nonzero constant functions. Assume that $A(X,E)$ has property (P).
Given any $y\in \widetilde{Y}$, there is a bijective function $\Phi(y,\cdot): \widetilde{E}_{\widetilde{\psi}(y)}\to \widetilde{F}_y$ so that
\[ \widetilde{Tf}(y) = \Phi(y,\widetilde{f}(\widetilde{\psi}(y))) \text{ for all $f\in A(X,E)$}.\]
\end{prop}
\begin{proof}
Let $y_0 \in \widetilde{Y}$ and $\widetilde{\psi}(y_0) =x_0\in \widetilde{X}$. For any $a\in \widetilde{E}_{x_0}$, fix a function $g_a\in A(X,E)$ so that $\widetilde{g_a}(x_0) =a$ and define $\Phi(y_0,\cdot): \widetilde{E}_{x_0} \to \widetilde{F}_{y_0}$ by $\Phi(y_0,a) = \widetilde{Tg_a}(y_0)$.
If $f\in A(X,E)$, let $a = \widetilde{f}(x_0)$. Clearly $\widetilde{f- g_a}(x_0) =0$. By property (P) and the remark following its definition, there are open sets $U,V$ in $X$ and a function $h\in A(X,E)$ so that $x_0 \in \widetilde{U} \cap \widetilde{V}$, $h = f-g_a$ on $U$ and $h =0$ on $V$.
Let $W$ be an open set in $\widetilde{X}$ so that $W\cap X = U$.
Since $\widetilde{\varphi}$ is a homeomorphism and $y_0 = \widetilde{\varphi}(x_0)$,
$y_0 \in \widetilde{\varphi}(\widetilde{U}) \subseteq \widetilde{\widetilde{\varphi}(W)}$.
But $\widetilde{\varphi}(W)$ is open in $\widetilde{Y}$. So $y_0 \in (\widetilde{\varphi}(W)\cap Y)^{\widetilde{\ }}$.
As $f= h+g_a$ on $W\cap X$, $Tf = T(h+g_a)$ on $\widetilde{\varphi}(W)\cap Y$ by Lemma \ref{l3.2.1}.
By continuity, $\widetilde{Tf}(y_0) = [T(h+g_a)]^{\widetilde{ \ }}(y_0)$.
Similarly, looking at the set $V$ instead of $U$, one can show that
\[ [T(h+g_a)]^{\widetilde{ \ }}(y_0) = \widetilde{Tg_a}(y_0) = \Phi(y_0,a). \]
Thus $\widetilde{Tf}(y_0) = \Phi(y_0,a)$, as required.
By symmetry, there is a function $\Psi(x, \cdot): \widetilde{F}_{\widetilde{\varphi}(x)}\to \widetilde{E}_x$ so that $(T^{-1}g)^{\widetilde{\ }}(x) = \Psi(x, \widetilde{g}(\widetilde{\varphi}(x)))$ for all $g\in A(Y,F)$.
The fact that $\Phi(y,\cdot)$ is a bijection follows from expressing the equations $T(T^{-1}g) = g$ and $T^{-1}(Tf) = f$ in terms of the mappings $\Phi(y,\cdot)$ and $\Psi(x,\cdot)$.
\end{proof}
The next two propositions can be obtained in a similar vein. The details are omitted.
\begin{prop}\label{p4.3}
Suppose that $A(X,E) = C(X,E)$ or $C_*(X,E)$ and $A(Y,F)$ $= C(Y,F)$ or $C_*(Y,F)$.
There is a function $\Phi:Y\times E\to F$ so that
\[ Tf(y) = \Phi(y,f(\psi(y))) \text{ for all $f\in A(X,E)$ and all $y\in Y$}.\]
\end{prop}
\begin{prop}\label{p4.4}
Suppose that $A(X,E)$ is a standard subspace of
$U(X,E)$ that contains a nonzero constant function. Assume that $A(X,E)$ has property (P).
Let $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.
\begin{enumerate}
\item For any $y\in Y$, there is a function $\Phi(y,\cdot): \widetilde{E}_{\psi(y)}\to F$ so that
\[Tf(y) = \Phi(y,\widetilde{f}(\psi(y))) \text{ for all $f\in A(X,E)$}.\]
\item There is a function $\Psi: X\times F\to E$ so that
\[T^{-1}g(x) = \Psi(x,g(\varphi(x))) \text{ for all $g\in A(Y,F)$ and all $x \in X$}.\]
\end{enumerate}
\end{prop}
\section{Spaces of continuous functions -- metric case}\label{s5}
In this section, let $X, Y$ be metric spaces and $E,F$ be normed spaces.
Let $A(X,E) = C(X,E)$ or $C_*(X,E)$ and $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.
Fix a biseparating map $T:A(X,E)\to A(Y,F)$.
By Theorem \ref{t3.7}, Proposition \ref{p4.3} and symmetry, there are a homeomorphism $\varphi:X\to Y$ and functions $\Phi:Y\times E\to F$, $\Psi:X\times F\to E$ so that
\[ (Tf)(y)= \Phi(y,f(\varphi^{-1}(y))),\quad (T^{-1}g)(x) = \Psi(x,g(\varphi(x)))\]
for any $f\in A(X,E)$, $g\in A(Y,F)$, $x\in X$ and $y\in Y$.
From the equations
\[ 1\otimes a = T^{-1}(T(1\otimes a)),\quad 1\otimes b = T(T^{-1}(1\otimes b))\]
for all $a\in E$, $b\in F$, we find that $\Phi(y,\cdot)$ and $\Psi(x,\cdot)$ are mutual inverses provided $y = \varphi(x)$.
The aim of the present section is to characterize the functions $\Phi$ that lead to biseparating maps and prove a result on automatic continuity.
Observe that if we define $S:C(Y,F)\to C(X,F)$ by $Sg(x) = g(\varphi(x))$, then $S$ is a biseparating map that also acts as a biseparating map from $C_*(Y,F)$ onto $C_*(X,F)$.
Thus the characterization of biseparating maps reduces to the ``section problem'' addressed in Proposition \ref{p5.1}.
The result is well known, at least in the case of $C(X)$. See, e.g., \cite[Chapter 9]{AZ}.
We omit the easy proof of the next lemma.
\begin{lem}\label{l5.0}
Let $(x_n)$ be a sequence of distinct points in $X$ and let $(a_n)$ be a sequence in $E$.
\begin{enumerate}
\item If $(x_n)$ has no convergent subsequence, then there is a function $f\in C(X,E)$ so that $f(x_n) = a_n$ for all $n$. Moreover, $f$ can be chosen to be bounded if $(a_n)$ is bounded.
\item If $(x_n)$ converges to a point $x_0\in X$, $x_0\neq x_n$ for all $n$, and $(a_n)$ converges to a point $a_0\in E$, then there exists $f\in C_*(X,E)$ so that $f(x_n) = a_n$ for all $n$.
\end{enumerate}
\end{lem}
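\medskip
\noindent {\bf Remark}. For the reader's convenience, we sketch one possible construction for part (2) of Lemma \ref{l5.0}. Since the points $x_n$ are distinct, $x_n\neq x_0$ and $x_n\to x_0$, one may choose $r_n > 0$ so that the balls $B(x_n,r_n)$ are pairwise disjoint and $r_n \leq \frac{d(x_n,x_0)}{2}$. Define
\[ f(x) = a_0 + \sum^\infty_{n=1}\Bigl(1 - \frac{d(x,x_n)}{r_n}\Bigr)^+(a_n - a_0).\]
Every point of $X\backslash\{x_0\}$ has a neighborhood meeting only finitely many of the balls, so $f$ is continuous there; $f$ is continuous at $x_0$ because $x\in B(x_n,r_n)$ and $d(x,x_0)<\eta$ force $d(x_n,x_0) < 2\eta$, whence $\|f(x)-a_0\| \leq \sup\{\|a_n-a_0\| : d(x_n,x_0) < 2\eta\} \to 0$ as $\eta\to 0$. Since $(a_n)$ converges, $f$ is bounded, and clearly $f(x_n) = a_n$ for all $n$.
\medskip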
Denote the set of accumulation points of $X$ by $X'$ and the unit balls of $E$ and $F$ by $B_E$ and $B_F$ respectively.
\begin{prop}\label{p5.1}
Let $X$ be a metric space and let $E$ and $F$ be normed spaces. Consider a function $\Phi:X\times E\to F$.
\begin{enumerate}
\item The function $x\mapsto \Phi(x,f(x))$ belongs to $C(X,F)$ for every $f\in C(X,E)$ if and only if $\Phi$ is continuous at every point in $X'\times E$.
\item The function $x\mapsto \Phi(x,f(x))$ belongs to $C_*(X,F)$ for every $f\in C_*(X,E)$ if and only if both of the following conditions hold.
\begin{enumerate}
\item $\Phi$ is continuous at every point in $X'\times E$.
\item For any bounded set $B$ in $E$, every $(x_n) \in \prod_n X_n(B)$ has a subsequence that converges in $X$, where
\[X_n(B) = \{x\in X: \Phi(x,B) \not\subseteq nB_F\}.\]
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}
Suppose that $x\mapsto \Phi(x,f(x))$ belongs to $C(X,F)$ for every $f\in C_*(X,E)$.
Let $(x_0,a_0)$ be a point in $X'\times E$ and let $((x_n,a_n))$ be a sequence in $X\times E$ that converges to $(x_0,a_0)$.
Since $\Phi(x,a_n)$ is a continuous function of $x$, by making small perturbations if necessary, we may assume that $x_n \neq x_0$ for all $n$.
Changing to a subsequence, we may further assume that $(x_n)$ is a sequence of distinct points. By Lemma \ref{l5.0}(2), there exists $f\in C_*(X,E)$ so that $f(x_n) = a_n$ for all $n$. By continuity of $f$, $f(x_0) = a_0$.
Then
\[ \Phi(x_n,a_n) = \Phi(x_n,f(x_n)) \to \Phi(x_0,f(x_0)) = \Phi(x_0,a_0).\]
This proves the continuity of $\Phi$ at $(x_0,a_0)$.
Hence the ``only if'' parts in statement (1) and statement (2)(a) are verified.
On the other hand,
if $\Phi$ is continuous at every point in $X'\times E$ and $f\in C(X,E)$, then it is clear that $x\mapsto \Phi(x,f(x))$ is continuous on $X$ (continuity at isolated points of $X$ being automatic). This completes the proof of statement (1).
Let us proceed to prove the necessity of condition (b) in statement (2).
Assume that condition (b) in (2) fails.
Then there are a bounded set $B$ in $E$ and an element $(x_n)$ of $\prod X_n(B)$ so that $(x_n)$ has no convergent subsequence in $X$.
Since $X_n(B) \subseteq X_m(B)$ if $m \leq n$, we may replace $(x_n)$ by a subsequence to assume that all $x_n$'s are distinct.
Choose $a_n\in B$ so that $\|\Phi(x_n,a_n)\| > n$ for all $n$.
By Lemma \ref{l5.0}(1), there exists $f\in C_*(X,E)$ so that $f(x_n) = a_n$ for all $n$.
Then $\|\Phi(x_n,f(x_n))\| = \|\Phi(x_n,a_n)\| \to \infty$.
This contradicts the assumption that the function $x\mapsto \Phi(x,f(x))$ is bounded.
Finally, we prove the sufficiency in statement (2). Let $f\in C_*(X,E)$. As observed above, condition (a) of statement (2) ensures that $x\mapsto \Phi(x,f(x))$ is continuous on $X$, since $f\in C(X,E)$.
Let $B = f(X)$. Then $B$ is a bounded set in $E$. If $(\Phi(x,f(x)))_{x\in X}$ is unbounded, there is a sequence of distinct points $(x_n)$ so that $\|\Phi(x_n,f(x_n))\| > n$ for all $n$.
In particular, $(x_n) \in \prod X_n(B)$. By condition 2(b), we may replace it by a subsequence to assume that $(x_n)$ converges to a point $x_0$ in $X$. In particular, $x_0\in X'$. By assumption 2(a),
$\Phi(x_n, f(x_n))\to \Phi(x_0,f(x_0))$, contradicting the unboundedness of the sequence.
\end{proof}
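To illustrate condition 2(b), consider the following simple example (ours, included only for illustration): $X = {\mathbb N}$ with the usual metric, $E = F = {\mathbb R}$, and $\Phi(x,e) = xe$. For the bounded set $B = [-1,1]$ we have $\Phi(m,B) = [-m,m] \not\subseteq nB_F$ exactly when $m > n$, so that $X_n(B) = \{m\in{\mathbb N}: m > n\}$ and the element $(x_n) = (n+1)$ of $\prod_n X_n(B)$ has no convergent subsequence. Thus condition 2(b) fails; and indeed $x\mapsto \Phi(x,f(x)) = xf(x)$ is unbounded already for the constant function $f = 1\otimes 1\in C_*({\mathbb N},{\mathbb R})$.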
The next two results follow immediately from the preceding discussion.
\begin{thm}\label{t5.4}
Let $X$ and $Y$ be metric spaces and let $E$ and $F$ be normed spaces.
A map $T:C(X,E)\to C(Y,F)$ is a biseparating map if and only if there are a homeomorphism $\varphi:X\to Y$ and functions $\Phi:Y\times E\to F$, $\Psi:X\times F\to E$, with $Tf(y) = \Phi(y,f(\varphi^{-1}(y)))$ and $T^{-1}g(x) = \Psi(x,g(\varphi(x)))$ for all $f\in C(X,E)$, $g\in C(Y,F)$, $x\in X$ and $y\in Y$, so that
\begin{enumerate}
\item $\Phi(y,\cdot)$ and $\Psi(x,\cdot)$ are mutual inverses if $\varphi(x) = y$.
\item $\Phi$ is continuous at any point in $Y'\times E$; $\Psi$ is continuous at any point in $X'\times F$.
\end{enumerate}
\end{thm}
\begin{thm}\label{t5.5}
Let $X$ and $Y$ be metric spaces and let $E$ and $F$ be normed spaces.
A map $T:C_*(X,E)\to C_*(Y,F)$ is a biseparating map if and only if there are a homeomorphism $\varphi:X\to Y$ and functions $\Phi:Y\times E\to F$, $\Psi:X\times F\to E$, with $Tf(y) = \Phi(y,f(\varphi^{-1}(y)))$ and $T^{-1}g(x) = \Psi(x,g(\varphi(x)))$ for all $f\in C_*(X,E)$, $g\in C_*(Y,F)$, $x\in X$ and $y\in Y$, so that
\begin{enumerate}
\item $\Phi(y,\cdot)$ and $\Psi(x,\cdot)$ are mutual inverses if $\varphi(x) = y$.
\item $\Phi$ is continuous at any point in $Y'\times E$; $\Psi$ is continuous at any point in $X'\times F$.
\item If $B_1$ and $B_2$ are bounded sets in $E$ and $F$ respectively, and
\[Z_n(B_1,B_2) = \{x\in X: \Phi(\varphi(x),B_1) \not\subseteq nB_F \text{ or } B_2 \not\subseteq \Phi(\varphi(x),nB_E)\},\]
then every $(x_n) \in \prod_n Z_n(B_1,B_2)$ has a subsequence that converges in $X$.
\end{enumerate}
\end{thm}
Observe that by condition (1) of Theorem \ref{t5.5}, $B_2 \not\subseteq \Phi(\varphi(x),nB_E)$ if and only if $\Psi(x,B_2)\not\subseteq nB_E$. Hence condition (3) in Theorem \ref{t5.5} is a combination of condition 2(b) in Proposition \ref{p5.1} for the maps $\Phi$ and $\Psi$.
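As a simple illustration of Theorems \ref{t5.4} and \ref{t5.5} (an example of ours), take $X = Y$, $E = F$, let $\varphi$ be the identity and let $\Phi(y,e) = e + h(y)$, $\Psi(x,e') = e' - h(x)$ for a fixed $h\in C(X,E)$, so that $Tf = f+h$. Conditions (1) and (2) of Theorem \ref{t5.4} clearly hold, and $T$ is biseparating on $C(X,E)$. If $h$ is moreover bounded, then for any bounded sets $B_1, B_2$ the sets $Z_n(B_1,B_2)$ are empty for all sufficiently large $n$, so condition (3) of Theorem \ref{t5.5} holds vacuously and $T$ is biseparating on $C_*(X,E)$ as well.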
\begin{prop}\label{p5.2}
If there is a biseparating map $T:C(X,E)\to C_*(Y,F)$, then $X$ and $Y$ are compact.
\end{prop}
\begin{proof}
Assume otherwise. Since $X$ and $Y$ are homeomorphic, we may suppose that $X$ is not compact. Then
there is a sequence of distinct points $(x_n)$ in $X$ that has no convergent subsequence. Fix a nonzero element $b\in F$ and let $a_n = (T^{-1}(1\otimes nb))(x_n)$ for each $n$.
By Lemma \ref{l5.0}(1), there is a function $f\in C(X,E)$ so that $f(x_n) =a_n$ for all $n$.
Note that
\[nb = (T(T^{-1}(1\otimes nb)))(\varphi(x_n)) = \Phi(\varphi(x_n),a_n) \text{ for each $n$.}\]
Then
\[(Tf)(\varphi(x_n)) = \Phi(\varphi(x_n),a_n) = nb \text{ for all $n$},\]
contradicting the boundedness of $Tf$.
\end{proof}
Note that if $T:C(X,E)\to C_*(Y,F)$ is biseparating, then $X$ is compact by Proposition \ref{p5.2}. Hence $C(X,E) = C_*(X,E)$. Therefore, the characterization Theorem \ref{t5.5} applies.
We conclude this section with an automatic continuity result. If $K$ is a compact subset of $X$, let
\[\|f\|_K = \sup\{\|f(x)\|:x\in K\} \text{ for any $f\in C(X,E)$}.\]
\begin{thm}\label{t5.6}
Let $X$ and $Y$ be metric spaces and let $E$ and $F$ be normed spaces.
Suppose that $A(X,E) = C(X,E)$ or $C_*(X,E)$, $A(Y,F) = C(Y,F)$ or $C_*(Y,F)$.
Let $T:A(X,E)\to A(Y,F)$ be a biseparating map. For any compact subset $K$ of $Y'$, any $f\in A(X,E)$, and any $\varepsilon > 0$, there exists $\delta >0$ so that
\[ g\in A(X,E),\ \|g-f\|_{\varphi^{-1}(K)} <\delta \implies \|Tg-Tf\|_K < \varepsilon.\]
\end{thm}
\begin{proof}
Suppose that $(g_n) \subseteq A(X,E)$ and that $\|g_n-f\|_{\varphi^{-1}(K)} \to 0$. It suffices to show that a subsequence of $(\|Tg_n-Tf\|_K)$ converges to $0$.
Pick $(y_n) \subseteq K$ so that $\|Tg_n-Tf\|_K = \|(Tg_n)(y_n) - (Tf)(y_n)\|$ for all $n$.
By using a subsequence if necessary, we may assume that $(y_n)$ converges to some $y_0\in K$.
Let $x_n = \varphi^{-1}(y_n)$,
$a_n = g_n(x_n)$, $a'_n = f(x_n)$, and $a_0 = f(x_0)$, where $x_0 = \varphi^{-1}(y_0)$. Then $(x_n)$ converges to $x_0$.
By continuity of $f$, $(a_n')$ converges to $a_0$.
Since $\|g_n-f\|_{\varphi^{-1}(K)} \to 0$ and $x_n\in \varphi^{-1}(K)$ for all $n$, $(a_n)$ converges to $a_0$ as well.
Since $y_0\in K\subseteq Y'$, it follows from Proposition \ref{p5.1}(1) that $\Phi$ is continuous at $(y_0,a_0)$.
Therefore,
\[
\|(Tg_n)(y_n) - (Tf)(y_n)\| = \|\Phi(y_n,a_n)- \Phi(y_n,a'_n)\|\to 0.\]
Thus $\|Tg_n-Tf\|_K\to 0$.
\end{proof}
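In particular, taking $K$ to be a singleton $\{y_0\}$ with $y_0\in Y'$, Theorem \ref{t5.6} shows that $(Tg)(y_0)$ is close to $(Tf)(y_0)$ whenever $g(\varphi^{-1}(y_0))$ is close to $f(\varphi^{-1}(y_0))$.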
\section{Spaces of uniformly continuous functions}\label{s6}
In this section, let $X$, $Y$ be complete metric spaces and let $E$, $F$ be Banach spaces.
The aim of this section is to characterize biseparating maps from $U(X,E)$ or $U_*(X,E)$ onto $U(Y,F)$ or $U_*(Y,F)$. By Propositions \ref{p3.10} and \ref{p4.2}, a biseparating map $T: U(X,E)/U_*(X,E) \to U(Y,F)/U_*(Y,F)$
can be represented in the form
\[ Tf(y) = \Phi(y,f(\psi(y))) \text{ for all $f\in U(X,E)/U_*(X,E)$ and all $y\in Y$},\]
where $\psi:Y\to X$ is a homeomorphism with inverse $\varphi$ and $\Phi:Y\times E\to F$ is a function so that $\Phi(y,\cdot)$ is a bijection from $E$ onto $F$ for all $y\in Y$.
In fact, characterizations can be obtained without completeness assumptions on $X,Y,E,F$. However, the case of complete spaces contains all the pertinent ideas without the distraction of niggling details.
Characterizations of lattice isomorphisms and of {\em linear} biseparating maps on spaces of uniformly continuous functions were obtained in \cite{GaJ} and \cite{A2} respectively.
\begin{prop}\label{p6.3.0}
Let $A(X,E)= U(X,E)$ or $U_*(X,E)$, and let $A(Y,F)= U(Y,F)$, $U_*(Y,F)$ or $\operatorname{Lip}_*(Y,F)$.
Let $T:A(X,E)\to A(Y,F)$ be a biseparating map and let ${\varphi}: {X}\to {Y}$ be the homeomorphism associated with $T$ according to Theorem \ref{t3.5}.
Then ${\varphi}$ is uniformly continuous.
\end{prop}
\begin{proof}
Suppose that $\varphi$ is not uniformly continuous. There are sequences $(x_n)$, $(x_n')$ in $X$ and $\varepsilon >0$ so that
$d(x_n,x'_n)\to 0$ and that $d(\varphi(x_n),\varphi(x_n')) > \varepsilon$ for all $n$.
Set $y_n = \varphi(x_n)$ and $y_n' = \varphi(x_n')$.
In view of the continuity of ${\varphi}$, neither $(x_n)$ nor $(x'_n)$ can have a convergent subsequence in ${X}$.
Hence, after passing to subsequences, we may assume that $(x_n)$ is a separated sequence.
Since ${\varphi}^{-1}$ is continuous, neither $(y_n)$ nor $(y'_n)$ can have a convergent subsequence in ${Y}$.
As we also have $d(y_n,y_n') > \varepsilon$ for all $n$, by using subsequences if necessary, we may assume that $(y_n) \cup (y_n')$ is a separated set.
Without loss of generality, take $T0 = 0$. We will use repeatedly the following formulation of Proposition \ref{p4.2}. If $x\in {X}$ and $f, g\in A(X,E)$, then ${f}(x) = {g}(x)$ if and only if ${Tf}(\varphi(x)) = {Tg}(\varphi(x)).$
\medskip
\noindent\underline{Case 1}. $A(Y,F) = U_*(Y,F)$ or $\operatorname{Lip}_*(Y,F)$.
Fix a nonzero vector $a\in E$ and let $b_n = {(T(1\otimes a))}(y_n)$. Then $(b_n)$ is a bounded sequence.
Since $(y_n)\cup (y_n')$ is separated, one can easily construct a function $g\in A(Y,F)$ so that ${g}(y_n) = b_n$ and ${g}(y'_n) = 0$ for all $n$. By Proposition \ref{p4.2}, we see that $(T^{-1}g)(x_n) = a$ and $(T^{-1}g)(x'_n) = 0$. Since $T^{-1}g$ is uniformly continuous and $d(x_n,x'_n)\to 0$, we have a contradiction.
\medskip
In the remaining cases, take $A(Y,F) = U(Y,F)$.
\medskip
\noindent\underline{Case 2}. There exist $r>0$ and an infinite subset $N$ of ${\mathbb N}$ so that $B(y_n,r) = \{y_n\}$ for all $n\in N$.
Fix a nonzero vector $a \in E$. Let $b_n = (T(1\otimes a))(y_n)$ for each $n\in N$. Define $g:Y \to F$ by $g(y) = b_n$ if $y = y_n, n\in N$, and $g(y) = 0$ otherwise.
Clearly $g\in U(Y,F)$.
But by Proposition \ref{p4.2}, for all $n\in N$,
\[ (T^{-1}g)(x_n) = a \text{ and } (T^{-1}g)(x_n') = 0.\]
Hence $T^{-1}g$ is not uniformly continuous, contrary to the fact that $T^{-1}g \in A(X,E) \subseteq U(X,E)$.
\medskip
\noindent\underline{Case 3}. For all $r>0$, $B(y_n,r) = \{y_n\}$ occurs for only finitely many $n$.
In this case, by using a subsequence if necessary, we may assume that there is a sequence $(y_n'')$ in $Y$ so that $0 < d(y_n,y_n'') \to 0$.
Set $x_n'' = {\varphi}^{-1}(y_n'')$ for all $n$.
Take a nonzero element $b\in F$ and let
$a_n =(T^{-1}(1\otimes b))(x_n)$.
Since $(y_n)\cup(y_n')$ is separated, we can find $g\in U(Y,F)$ so that
${g}(y_n) = b$ and ${g}(y_n') = 0$ for all $n$.
By Proposition \ref{p4.2},
\[ (T^{-1}g)(x_n) = a_n \text{ and } (T^{-1}g)(x'_n) = 0.\]
Since $T^{-1}g$ is uniformly continuous and $d(x_n,x_n') \to 0$,
\[ a_n = (T^{-1}g)(x_n) - (T^{-1}g)(x_n') \to 0.
\]
As $(x_n)$ is separated, $x_n'' \neq x_n$ and $(a_n)$ is a null sequence, we may, after replacing $(x_n)$ and $(x_n'')$ with subsequences, construct a function $f\in U_*(X,E)\subseteq A(X,E)$ so that $f(x_n) = a_n$ and $f(x_n'') = 0$ for all $n$.
Then ${Tf}(y_n) = {g}(y_n) = b$ and $Tf(y_n'') = 0$.
This is impossible since ${Tf}$ is uniformly continuous and $d(y_n,y_n'')\to 0$.
\end{proof}
By Proposition \ref{p6.3.0}, if $T: U(X,E)/U_*(X,E) \to U(Y,F)/U_*(Y,F)$ is a biseparating map, then $\varphi:X\to Y$ is a uniform homeomorphism.
In this case, the map $\widehat{T}$ given by
\[\widehat{T}f(x) = Tf(\varphi(x)) = \Phi(\varphi(x),f(x)) = \widehat{\Phi}(x,f(x))\]
maps $U(X,E)/U_*(X,E)$ onto $U(X,F)/U_*(X,F)$, with $\widehat{\Phi}:X\times E\to F$ being a function such that $\widehat{\Phi}(x,\cdot):E\to F$ is a bijection for each $x\in X$.
To complete the characterization of $T$, it suffices to determine the functions $\widehat{\Phi}: X\times E\to F$ so that $x\mapsto \widehat{\Phi}(x,f(x))$ belongs to $U(X,F)/U_*(X,F)$ for each $f\in U(X,E)/U_*(X,E)$.
We will refer to this as the ``section problem'' for uniformly continuous functions.
For any $\varepsilon > 0$, define $d_\varepsilon:X\times X\to [0,\infty]$ by
\begin{equation}\label{eq6.0} d_\varepsilon(a,b) = \inf\{\sum^n_{i=1}d(x_{i-1},x_i): n\in {\mathbb N}, x_0 = a, x_n = b, d(x_{i-1},x_i) \leq \varepsilon \text{ for all $i$}\},\end{equation}
where we take $\inf \emptyset = \infty$.
The connection of the $d_\varepsilon$ ``metrics'' with uniformly continuous functions is well known; see, e.g. \cite{A,H, O'F}.
In particular, the first part of the next proposition formalizes the well known principle that uniformly continuous functions are ``Lipschitz for large distances''.
\begin{prop}\label{p6.3}
Let $X$ be a complete metric space and let $E$ be a Banach space.
\begin{enumerate}
\item If $f\in U(X,E)$, then there exist $\varepsilon > 0$ and $C<\infty$ such that
\[ \|{f}(x_1)-{f}(x_2)\| \leq Cd_\varepsilon(x_1,x_2) \text{ whenever $x_1,x_2\in {X}$, $d(x_1,x_2) > \varepsilon$}.\]
\item If $f:X\to E$ and there exist $\varepsilon >0$ and $C<\infty$ so that
\[ \|f(x_1) - f(x_2)\| \leq Cd_\varepsilon(x_1,x_2) \text{ for all $x_1,x_2\in X$},\]
then $f\in U(X,E)$.
\end{enumerate}
\end{prop}
\begin{proof}
Statement (2) is trivial since $d_\varepsilon(x_1,x_2) = d(x_1,x_2)$ if $d(x_1,x_2) \leq \varepsilon$. Let us prove statement (1).
Assume that $f\in U(X,E)$. There exists $\varepsilon >0$ so that $\|f(a)-f(b)\| \leq 1$ if $d(a,b) \leq \varepsilon$.
Let $x_1,x_2\in X$ be points so that $\varepsilon < d(x_1,x_2)$ and $d_\varepsilon(x_1,x_2) <\infty$.
There are $n\in {\mathbb N}$, $(a_i)^n_{i=0} \subseteq X$ so that $a_0 = x_1$, $a_n = x_2$, $d(a_{i-1}, a_i) \leq\varepsilon$, and
\[ \sum^n_{i=1}d(a_{i-1}, a_i) \leq 2d_\varepsilon(x_1,x_2).\]
Note that since $d(x_1,x_2) > \varepsilon$, $n \geq 2$.
It is clear that we may assume that $d(a_{i-1},a_i) + d(a_i,a_{i+1}) >\varepsilon$ for $0< i< n$.
Thus
\[ 2d_\varepsilon(x_1,x_2) \geq \sum^n_{i=1}d(a_{i-1}, a_i) \geq \frac{n-1}{2}\,\varepsilon \geq \frac{n\varepsilon}{4}.\]
By choice of $\varepsilon$, $\|f(a_{i-1}) - f(a_i)\| \leq 1$ for all $i$.
Hence
\[ \|f(x_1)-f(x_2)\| \leq n \leq \frac{8}{\varepsilon}\,d_\varepsilon(x_1,x_2).\]
\end{proof}
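Two simple illustrations: on $X = {\mathbb R}$ with the usual metric, any two points can be joined by a chain of steps of length at most $\varepsilon$ along the segment between them, so $d_\varepsilon = d$ for every $\varepsilon > 0$ and statement (1) expresses the familiar fact that a uniformly continuous function on ${\mathbb R}$ is Lipschitz for large distances. On $X = {\mathbb N}$, on the other hand, $d_\varepsilon(m,n) = \infty$ for all $m\neq n$ when $\varepsilon < 1$, so the hypothesis of statement (2) holds vacuously for every $f:{\mathbb N}\to E$, in accordance with the fact that every function on a uniformly discrete space is uniformly continuous.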
For the rest of the section, let $\Xi:X\times E\to F$ be a given function and associate with it a mapping $Sf(x) = \Xi(x,f(x))$ for any function $f:X\to E$.
Denote the set of accumulation points in $X$ by $X'$.
\begin{prop}\label{p6.4}
If $Sf \in U(X,F)$ for any $f\in U_*(X,E)$, then $\Xi$ is continuous at any $(x_0,e_0)\in X'\times E$.
\end{prop}
\begin{proof}
Assume to the contrary that $\Xi$ is discontinuous at some $(x_0,e_0)\in X'\times E$.
There are a sequence $((x_n,e_n))^\infty_{n=1}$ in $X\times E$ converging to $(x_0,e_0)$ and $\varepsilon>0$
so that $\|\Xi(x_n,e_n) - \Xi(x_0,e_0)\| > \varepsilon$ for all $n$.
Replacing $(x_n)$ by a subsequence if necessary, we may assume that either $(x_n)$ is a sequence of distinct points in $X\backslash \{x_0\}$ or $x_n = x_0$ for all $n$.
In the former case, there is a function $f\in U_*(X,E)$ so that $f(x_n) = e_n$ for all $n$ and $f(x_0) = e_0$.
Since $Sf$ is continuous at $x_0$,
\[ \Xi(x_n,e_n) = Sf(x_n) \to Sf(x_0) = \Xi(x_0,e_0),\]
contrary to the choice of $((x_n,e_n))^\infty_{n=1}$.
Finally, suppose that $x_n = x_0$ for all $n$.
For each $n$, let $f_n$ be the constant function with value $e_n$. Then $Sf_n$ is continuous at $x_0$. Since $x_0$ is an accumulation point, there exists $x'_n$ with $0< d(x'_n,x_0) < \frac{1}{n}$ so that
\[ \|\Xi(x'_n,e_n) - \Xi(x_0,e_n)\| = \|Sf_n(x'_n) - Sf_n(x_0)\| < \frac{1}{n}.\]
But by the previous case, $\Xi(x'_n,e_n) \to \Xi(x_0,e_0)$. Thus
$\Xi(x_0,e_n) \to \Xi(x_0,e_0)$, contrary to the choice of the sequence $((x_n,e_n))^\infty_{n=1}$.
\end{proof}
Call a sequence $((x_n,e_n))^\infty_{n=1}$ in $X\times E$ a $u$-sequence if $(x_n)$ is a separated sequence and there are $\varepsilon > 0$, $C<\infty$ so that
\[ \|e_n-e_m\| \leq Cd_\varepsilon(x_n,x_m) \text{ for all $m,n\in{\mathbb N}$}.\]
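For example, if $a$ is a fixed vector in $E$, then $((n,na))^\infty_{n=1}$ is a $u$-sequence in ${\mathbb R}\times E$: the sequence $(n)$ is separated and $\|na - ma\| = \|a\|\,|n-m| = \|a\|\,d_1(n,m)$ for all $m,n$; note that the uniformly continuous function $f(x) = xa$ interpolates it.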
The importance of $u$-sequences is captured in the next lemma.
\begin{lem}\label{l6.4}
Let $((x_n,e_n))^\infty_{n=1}$ be a $u$-sequence in $X\times E$. Then there is an infinite subset $N$ of ${\mathbb N}$ and a uniformly continuous function $f:X\to E$ so that $f(x_n) =e_n$ for all $n \in N$.
\end{lem}
\begin{proof}
Let $\varepsilon$ and $C$ be as in the definition above.
If there is an infinite set $N$ in ${\mathbb N}$ so that $d_\varepsilon(x_n,x_m) = \infty$ for all distinct $m,n\in N$, then clearly the function defined by $f(x) = e_n$ if $d_{\varepsilon}(x,x_n) <\infty$ for some $n\in N$ and $f(x) = 0$ otherwise is uniformly continuous. Obviously $f(x_n) =e_n$ for all $n\in N$.
Thus, without loss of generality, we may assume that $d_\varepsilon(x_m,x_n)<\infty$ for all $m,n$.
If $(d_\varepsilon(x_n,x_m))_{m,n}$ is bounded, then $(e_n)$ is a bounded sequence. Since $(x_n)$ is separated, there exists $f\in U(X,E)$ so that $f(x_n) = e_n$ for all $n$.
Finally, assume that $(d_\varepsilon(x_n,x_m))_{m,n}$ is an unbounded subset of ${\mathbb R}_+$.
By taking a subsequence, we may assume that $4r_n < r_{n+1}$ for all $n$, where $r_n = d_\varepsilon(x_n,x_1)$.
Define $f:X\to E$ by
\[ f(x) = \begin{cases}
e_1 + (1- \frac{2}{r_n}d_\varepsilon(x,x_n))^+(e_n-e_1) &\text{if $d_\varepsilon(x,x_n)< \frac{r_n}{2}$, $n > 1$}\\
e_1 &\text{otherwise}.
\end{cases}\]
Using the fact that $\|e_n-e_1\|\leq Cr_n$ for all $n$, one can check that
\[ \|f(a) - f(b)\| \leq 16Cd_\varepsilon(a,b)
\]
for all $a,b\in X$. By Proposition \ref{p6.3}(2), $f\in U(X,E)$. Clearly, $f(x_n) = e_n$ for all $n$, as required.
\end{proof}
\begin{prop}\label{p6.5}
Suppose that $Sf\in U(X,F)$ for all $f\in U(X,E)$.
Let $((x_n,e_n))^\infty_{n=1}$ be a $u$-sequence.
If $((x_n',e_n'))^\infty_{n=1}$ is a sequence in $X\times E$ with $x_n\neq x_n'$ for all $n$ and $\lim (d(x_n,x_n') + \|e_n-e_n'\|) =0$, then
\[ \lim\|\Xi(x_n,e_n) - \Xi(x_n',e'_n)\| = 0.\]
\end{prop}
\begin{proof}
It suffices to show that $\liminf\|\Xi(x_n,e_n) - \Xi(x_n',e'_n)\| = 0$. By Lemma \ref{l6.4}, there exist $f\in U(X,E)$ and an infinite set $N$ in ${\mathbb N}$ so that $f(x_n) = e_n$ for all $n\in N$.
Since $d(x_n,x_n') \to 0$ and $f$ is uniformly continuous, $\lim_{n\in N}\|e_n -f(x_n')\| = 0$ and hence $\lim_{n\in N}\|e_n'-f(x_n')\| = 0$.
As $(x_n)$ is a separated sequence and $0 < d(x_n,x_n') \to 0$, we can construct a uniformly continuous function $g:X\to E$
such that $g(x_n) = 0$ and $g(x_n') = e_n'-f(x_n')$ for all sufficiently large $n\in N$.
Then $f+g\in U(X,E)$,
\[ \Xi(x_n,e_n) = S(f+g)(x_n) \text{ and } \Xi(x_n',e_n') = S(f+g)(x_n')\] for all sufficiently large $n\in N$.
As $S(f+g)\in U(X,F)$ and $d(x_n,x_n') \to 0$, we see that $\lim_{n\in N}\|\Xi(x_n,e_n) - \Xi(x_n',e_n')\| =0$.
\end{proof}
We will say that $\Xi$ is {\em $u$-continuous} if it satisfies the conclusion of Proposition \ref{p6.5}.
We can now solve the section problem for uniformly continuous functions.
\begin{thm}\label{t6.5}
Let $X$ be a complete metric space, $E$ and $F$ be Banach spaces. Given a function $\Xi:X\times E\to F$, associate with it a mapping $S$ by $Sf(x) = \Xi(x,f(x))$.
Then $S$ maps $U(X,E)$ into $U(X,F)$ if and only if
$\Xi$ is continuous at all $(x_0,e_0) \in X'\times E$ and $\Xi$ is
$u$-continuous.
\end{thm}
\begin{proof}
The necessity of the two conditions on $\Xi$ follows from Propositions \ref{p6.4} and \ref{p6.5}.
Conversely, suppose that $\Xi$ is continuous at any $(x_0,e_0)\in X'\times E$ and also $u$-continuous.
Let $f\in U(X,E)$.
If $Sf\notin U(X,F)$, there are sequences $(x_n)$, $(x_n')$ in $X$ with $d(x_n,x'_n)\to 0$, and $\eta >0$ so that
\begin{equation}\label{eq6.1} \|\Xi(x_n,f(x_n)) - \Xi(x_n',f(x_n'))\| = \|Sf(x_n) - Sf(x_n')\| > \eta \text{ for all $n$}.\end{equation}
Suppose that $(x_n)$ has a subsequence that converges to some $x_0\in X$.
We may assume that the whole sequence converges to $x_0$. In particular, $(f(x_n))$ and $(f(x_n'))$ converge to $f(x_0)$.
Clearly, $x_n\neq x'_n$ for all $n$. Hence $x_0\in X'$.
In this case, (\ref{eq6.1}) violates the continuity of $\Xi$ at $(x_0,f(x_0))$.
Finally, assume that $(x_n)$ is a separated sequence. Choose $\varepsilon' > 0$ so that $d(x_m,x_n) > \varepsilon'$ if $m\neq n$. Then let $\varepsilon$ and $C$ be as given in condition (1) of Proposition \ref{p6.3} for the function $f$.
Obviously, $d_\varepsilon \leq d_{\varepsilon \wedge \varepsilon'}$. So we may assume without loss of generality that $\varepsilon \leq\varepsilon'$.
Hence $((x_n,f(x_n)))^\infty_{n=1}$ is a $u$-sequence by Proposition \ref{p6.3}(1).
Again, (\ref{eq6.1}) implies that $x_n\neq x_n'$ for all $n$.
Furthermore, $d(x_n,x'_n)\to 0$ and $\|f(x_n)-f(x_n')\| \to 0$, the latter as a result of the uniform continuity of $f$. Therefore,
\[ \lim\|\Xi(x_n,f(x_n)) - \Xi(x_n',f(x'_n))\| =0\]
by $u$-continuity of $\Xi$, contradicting (\ref{eq6.1}).
\end{proof}
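To see that $u$-continuity is a genuine restriction, consider the simple example (ours) $X = E = F = {\mathbb R}$ and $\Xi(x,e) = xe$, which is continuous everywhere. Take the $u$-sequence $(x_n,e_n) = (n,1)$ and $(x_n',e_n') = (n+\frac{1}{n}, 1 - \frac{1}{\sqrt{n}})$. Then $x_n\neq x_n'$ and $d(x_n,x_n') + |e_n - e_n'| \to 0$, while
\[ \Xi(x_n,e_n) - \Xi(x_n',e_n') = n - \Bigl(n+\frac{1}{n}\Bigr)\Bigl(1-\frac{1}{\sqrt{n}}\Bigr) = \sqrt{n} - \frac{1}{n} + \frac{1}{n^{3/2}} \to \infty.\]
Hence $\Xi$ is not $u$-continuous, and by Theorem \ref{t6.5} the map $Sf(x) = xf(x)$ does not send $U({\mathbb R})$ into $U({\mathbb R})$.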
Characterization of biseparating maps from $U(X,E)$ onto $U(Y,F)$ can be obtained by using Theorem \ref{t6.5} together with the ``switch'' from $Y$ to $X$ described prior to Proposition \ref{p6.3}.
\begin{thm}\label{t6.7.1}
Let $X, Y$ be complete metric spaces and let $E,F$ be Banach spaces.
Suppose that $T:U(X,E)\to U(Y,F)$ is a biseparating map.
Then there are a uniform homeomorphism ${\varphi}:{X}\to {Y}$ and a function $\Phi:Y\times E\to F$ so that
\begin{enumerate}
\item For each $y\in Y$, $\Phi(y,\cdot):E\to F$ is a bijection with inverse $\Psi(x,\cdot):F\to E$, where $\varphi(x) = y$.
\item $Tf(y) = \Phi(y,f(\varphi^{-1}(y)))$ and $T^{-1}g(x) = \Psi(x,g(\varphi(x)))$ for all $f\in U(X,E), g\in U(Y,F)$ and $x\in X$, $y\in Y$.
\item $\Phi$ is continuous on $Y'\times E$ and $\Psi$ is continuous on $X'\times F$.
\item $(x,e)\mapsto \Phi(\varphi(x),e)$ and $(y,e')\mapsto \Psi(\varphi^{-1}(y),e')$ are both $u$-continuous.
\end{enumerate}
Conversely, assume that $\varphi,\Phi$ satisfy conditions (1), (3) and (4). Define $Tf(y)$ as in (2) for any $f\in U(X,E)$ and $y\in Y$. Then $T$ is a biseparating map from $U(X,E)$ onto $U(Y,F)$.
\end{thm}
\begin{lem}\label{l6.8}
Let $\Xi:X\times E\to F$ be a given function and associate with it a mapping $Sf(x) = \Xi(x,f(x))$ for any function $f:X\to E$. If $Sf\in U_*(X,F)$ for any $f \in U_*(X,E)$, then for any separated sequence $(x_n)$ in $X$ and any bounded set $B$ in $E$, there exists $k\in{\mathbb N}$ so that
$\bigcup_{n=k}^\infty\Xi(x_n,B)$ is bounded in $F$.
\end{lem}
\begin{proof}
Suppose that $Sf \in U_*(X,F)$ for any $f\in U_*(X,E)$.
Let $(x_n)$ be a separated sequence in $X$ and let $B$ be a bounded set in $E$.
Assume that for any $k\in {\mathbb N}$, $\bigcup_{n=k}^\infty\Xi(x_n,B)$
is unbounded.
Then there exists $(e_n)$ in $B$ so that $(\Xi(x_n,e_n))^\infty_{n=1}$ is unbounded.
Since $(x_n)$ is separated and $(e_n)$ is bounded, there exists $f\in U_*(X,E)$ so that $f(x_n) = e_n$ for all $n$.
By assumption $Sf$ is bounded. Hence $(\Xi(x_n,e_n))^\infty_{n=1} = (Sf(x_n))^\infty_{n=1}$ is bounded, a contradiction.
\end{proof}
We now obtain the analog of Theorem \ref{t6.7.1} for biseparating maps between spaces of bounded uniformly continuous functions. The details are similar to Theorem \ref{t6.7.1}, with the extra ingredient Lemma \ref{l6.8} for ``boundedness''.
\begin{thm}\label{t6.7.2}
Let $X, Y$ be complete metric spaces and let $E,F$ be Banach spaces.
Suppose that $T:U_*(X,E)\to U_*(Y,F)$ is a biseparating map.
Then there are a uniform homeomorphism ${\varphi}:{X}\to {Y}$ and a function $\Phi:Y\times E\to F$ so that
\begin{enumerate}
\item For each $y\in Y$, $\Phi(y,\cdot):E\to F$ is a bijection with inverse $\Psi(x,\cdot):F\to E$, where $\varphi(x) = y$.
\item $Tf(y) = \Phi(y,f(\varphi^{-1}(y)))$ and $T^{-1}g(x) = \Psi(x,g(\varphi(x)))$ for all $f\in U_*(X,E), g\in U_*(Y,F)$ and $x\in X$, $y\in Y$.
\item $\Phi$ is continuous on $Y'\times E$ and $\Psi$ is continuous on $X'\times F$.
\item $(x,e)\mapsto \Phi(\varphi(x),e)$ and $(y,e')\mapsto \Psi(\varphi^{-1}(y),e')$ are both $u$-continuous.
\item Let $(x_n)$ be a separated sequence in $X$ and $y_n = \varphi(x_n)$ for all $n$. If $B$ and $B'$ are bounded sets in $E$ and $F$ respectively, then there exists $k\in {\mathbb N}$ so that
\[ \bigcup_{n=k}^\infty\Phi(y_n,B) \text{ and } \bigcup_{n=k}^\infty\Psi(x_n,B')\]
are bounded sets in $F$ and $E$ respectively.
\end{enumerate}
Conversely, assume that $\varphi,\Phi$ satisfy conditions (1), (3), (4) and (5). Define $Tf(y)$ as in (2) for any $f\in U_*(X,E)$ and $y\in Y$. Then $T$ is a biseparating map from $U_*(X,E)$ onto $U_*(Y,F)$.
\end{thm}
\subsection{Automatic continuity}
Automatic continuity results for biseparating maps acting between spaces of uniformly continuous functions can be deduced easily from the characterization theorems \ref{t6.7.1} and \ref{t6.7.2}.
If $S$ is a subset of $X$, respectively, $Y$, and $f:X\to E$, respectively, $f:Y\to F$, let
\[ \|f\|_S = \sup_{s\in S}\|f(s)\|.\]
\begin{thm}\label{t6.9}
Let $X,Y$ be complete metric spaces and $E,F$ be Banach spaces.
Suppose that $T$ is a biseparating map from $U(X,E)$ onto $U(Y,F)$, respectively, from $U_*(X,E)$ onto $U_*(Y,F)$. Let $T$ be represented as in theorems \ref{t6.7.1} or \ref{t6.7.2}.
Assume that $f\in U(X,E)/U_*(X,E)$ and $S\subseteq X'$, the set of accumulation points of $X$.
For any $\varepsilon > 0$, there exists $\delta >0$ so that if $g\in U(X,E)/U_*(X,E)$ and $\|g-f\|_S < \delta$, then
$\|Tg- Tf\|_{\varphi(S)} < \varepsilon$.
\end{thm}
\begin{proof}
Suppose that the theorem fails. There exist $S\subseteq X'$, $\varepsilon >0$ and functions $(g_n)$ in $U(X,E)/U_*(X,E)$ so that
\[ \|g_n-f\|_S\to 0 \text{ and } \|Tg_n-Tf\|_{\varphi(S)} > \varepsilon \text{ for all $n$.}\]
Choose $(x_n) \subseteq S$ so that $\|Tg_n(\varphi(x_n)) - Tf(\varphi(x_n))\| >\varepsilon$ for all $n$.
Thus
\begin{equation}\label{e6.5} \|\Phi(\varphi(x_n), v_n) - \Phi(\varphi(x_n),u_n)\| >\varepsilon \text{ for all $n$},\end{equation}
where $v_n = g_n(x_n)$ and $u_n = f(x_n)$.
If $(x_n)$ has a subsequence $(x_{n_k})$ that converges to some $x_0$, then $x_0\in X'$.
Note that $(u_{n_k}) = (f(x_{n_k}))$ converges to $u_0 = f(x_0)$ by continuity of $f$, and $\|v_n - u_n\| \leq \|g_n-f\|_S\to 0$ as well. Thus $(v_{n_k})$ converges to $u_0$.
This shows that both sequences $((\varphi(x_{n_k}),v_{n_k}))$ and $((\varphi(x_{n_k}),u_{n_k}))$ converge to $(\varphi(x_0),u_0)$.
By condition (3) of Theorem \ref{t6.7.1} or \ref{t6.7.2}, $\Phi$ is continuous at $(\varphi(x_0),u_0)$, contradicting (\ref{e6.5}).
If $(x_n)$ does not have a convergent subsequence, then it has a separated subsequence $(x_{n_k})$. Again, let $u_{n_k} = f(x_{n_k})$
and $v_{n_k} = g_{n_k}(x_{n_k})$.
Since $g_{n_k}$ and $Tg_{n_k}$ are both continuous, one can choose $x_{n_k}'\neq x_{n_k}$
so that
\[ d(x_{n_k}',x_{n_k}),\ \|g_{n_k}(x'_{n_k}) - v_{n_k}\|,\ \|Tg_{n_k}(\varphi(x'_{n_k})) - Tg_{n_k}(\varphi(x_{n_k}))\| \to 0.\]
Note that the last limit can be stated as
\begin{equation}\label{e6.6} \Phi(\varphi(x'_{n_k}),g_{n_k}(x'_{n_k})) - \Phi(\varphi(x_{n_k}), v_{n_k}) \to 0.\end{equation}
By Proposition \ref{p6.3}(1), $((x_{n_k}, u_{n_k}))$ is a $u$-sequence.
By (4) of Theorem \ref{t6.7.1} or \ref{t6.7.2}, $(x,e) \mapsto \Phi(\varphi(x),e)$ is $u$-continuous.
Since $x_{n_k}' \neq x_{n_k}$ and
\[ d(x'_{n_k},x_{n_k}) + \|g_{n_k}(x'_{n_k}) - u_{n_k}\| \leq
d(x'_{n_k},x_{n_k}) + \|g_{n_k}(x'_{n_k}) - v_{n_k}\| + \|g_{n_k} - f\|_S
\to 0,\]
$u$-continuity gives
\begin{equation}\label{e6.7} \Phi(\varphi(x'_{n_k}),g_{n_k}(x'_{n_k})) - \Phi(\varphi(x_{n_k}), u_{n_k}) \to 0.\end{equation}
The limits (\ref{e6.6}) and (\ref{e6.7}) yield
\[ \Phi(\varphi(x_{n_k}), v_{n_k}) - \Phi(\varphi(x_{n_k}), u_{n_k}) \to 0,\]
contrary to (\ref{e6.5}).
\end{proof}
\subsection{Bourbaki boundedness}
Let $X$ be a metric space. For any $\varepsilon>0$, recall the ``metric'' $d_\varepsilon$ defined by (\ref{eq6.0}). $d_\varepsilon$ induces an equivalence relation $\sim_\varepsilon$ on $X$ by $a\sim_\varepsilon b$ if and only if $d_\varepsilon(a,b) < \infty$.
The equivalence classes will be called {\em $\varepsilon$-sets}. $d_\varepsilon$ is a proper metric (i.e., finite valued) on each $\varepsilon$-set.
$X$ is said to be {\em Bourbaki bounded} if for any $\varepsilon>0$, there are only finitely many $\varepsilon$-sets, each of which is bounded in the $d_\varepsilon$ metric. See \cite{BG, B, GM}.
A classical result of Atsuji \cite{At} and Hejcman \cite{H}, rediscovered in \cite{O'F}, states that $U(X) = U_*(X)$ if and only if $X$ is Bourbaki bounded.
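For example, ${\mathbb R}$ with the usual metric is not Bourbaki bounded: for every $\varepsilon > 0$ there is a single $\varepsilon$-set, namely ${\mathbb R}$ itself, and it is unbounded in the metric $d_\varepsilon = d$; correspondingly, the identity function belongs to $U({\mathbb R})\setminus U_*({\mathbb R})$. On the other hand, every totally bounded metric space is Bourbaki bounded.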
The final theorem in this section generalizes this result.
\begin{thm}\label{t6.10}
Let $X, Y$ be complete metric spaces and let $E, F$ be Banach spaces.
If there is a biseparating map from $U(X,E)$ onto $U_*(Y,F)$, then $X$ is Bourbaki bounded.
\end{thm}
Before proceeding to the proof of the theorem, observe that if $X$ is Bourbaki bounded, then $U(X,E) = U_*(X,E)$. This follows easily from Proposition \ref{p6.3}(1).
Let $T:U(X,E)\to U_*(Y,F)$ be a biseparating map. By Propositions \ref{p4.2} and \ref{p6.3.0}, $T$ has a representation
\[ Tf(y) = \Phi(y,f(\varphi^{-1}(y))) \text{ for all $y\in Y$ and all $f\in U(X,E)$},\]
where $\varphi$ is a uniform homeomorphism and $\Phi(y,\cdot):E\to F$ is a bijection for all $y\in Y$.
We may and do assume that $T0 = 0$, so that $\Phi(y,0) = 0$ for all $y$.
\begin{lem}\label{l6.11}
Let $X, Y$ be complete metric spaces and let $E, F$ be Banach spaces.
If there is a biseparating map from $U(X,E)$ onto $U_*(Y,F)$, then for any $\varepsilon>0$, $X$ has finitely many $\varepsilon$-sets.
\end{lem}
\begin{proof}
Suppose that there exists some $\varepsilon >0$ so that $X$ contains an infinite sequence $(X_n)^\infty_{n=1}$ of $\varepsilon$-sets.
Choose $x_n \in X_n$ for each $n$ and let $y_n =\varphi(x_n)$.
Since $\Phi(y_n,\cdot):E\to F$ is a bijection, there exists $e_n\in E$ so that $\|\Phi(y_n,e_n)\| > n$.
Define $f:X\to E$ by $f(x) = e_n$ if $x\in X_n$, $n\in {\mathbb N}$ and $0$ otherwise.
Then $f$ is uniformly continuous but $\|Tf(y_n)\|= \|\Phi(y_n,e_n)\| > n$ for all $n$.
This contradicts the assumption that $Tf \in U_*(Y,F)$.
\end{proof}
\begin{lem}\label{l6.12}
Let $X, Y$ be complete metric spaces and let $E, F$ be Banach spaces.
Suppose that there is a biseparating map from $U(X,E)$ onto $U_*(Y,F)$. For any $\varepsilon>0$, any $\varepsilon$-set of $X$ is $d_\varepsilon$-bounded.
\end{lem}
\begin{proof}
Define $\Xi:X\times E\to F$ by $\Xi(x,e) = \Phi(\varphi(x),e)$. The formula $Sf(x) = \Xi(x,f(x))$ defines a biseparating map from $U(X,E)$ onto $U_*(X,F)$ so that $S0 = 0$. For each $x$, $\Xi(x,\cdot):E\to F$ is a bijection. Denote its inverse by $\Theta(x,\cdot)$.
Suppose that there exist $\varepsilon>0$ and an $\varepsilon$-set $X_0$ that is not $d_\varepsilon$ bounded.
Fix $x_0\in X_0$ and a sequence $(x_n)$ in $X_0$ so that $d_\varepsilon(x_{n+1},x_0) > 3d_\varepsilon(x_n,x_0)$ for all $n$.
Let $a$ be a nonzero vector in $E$.
By Proposition \ref{p6.3}(2), the function $f:X\to E$ given by $f(x) = d_\varepsilon(x,x_0)a$ for $x\in X_0$ and $f(x) = 0$ otherwise belongs to $U(X,E)$.
Hence $Sf\in U_*(X,F)$.
In particular, the sequence $(b_n) = (Sf(x_n))$ is bounded in $F$.
\medskip
\noindent{Claim}. There exists $m\in {\mathbb N}$ so that
\[ \limsup_n\|\Theta(x_n,sb_n) - \Theta(x_n,tb_n)\| \leq 1 \text{ if $s,t\in [0,1]$, $|s-t| \leq \frac{1}{m}$}.\]
First suppose that the claim holds.
Then
\begin{align*}
\limsup_n\|\Theta(x_n,b_n)\| & = \limsup_n\|\Theta(x_n,0)-\Theta(x_n,b_n)\|
\\&\leq \sum^m_{k=1}\limsup\|\Theta(x_n,\frac{k-1}{m}\,b_n) - \Theta(x_n,\frac{k}{m}\,b_n)\| \leq m.
\end{align*}
However, $\Xi(x_n,d_\varepsilon(x_n,x_0)a) = Sf(x_n) = b_n$ for all $n$.
Hence $\Theta(x_n,b_n) = d_\varepsilon(x_n,x_0)a$ for all $n$.
In particular, $(\Theta(x_n,b_n))$ cannot be bounded, contradicting the preceding inequality.
To complete the proof of the lemma, let us verify the claim.
If the claim fails, for each $m\in {\mathbb N}$, one can find $s_m,t_m\in [0,1]$, $|s_m-t_m|\leq \frac{1}{m}$, so that
\[ \limsup_n\|\Theta(x_n,s_mb_n) - \Theta(x_n,t_mb_n)\| > 1.\]
We may assume that $(s_m), (t_m)$ both converge to some $t_0\in [0,1]$.
Without loss of generality,
\[ \limsup_n\|\Theta(x_n,s_mb_n) - \Theta(x_n,t_0b_n)\| > \frac{1}{2} \text{ for all $m$.}\]
Choose $n_1 < n_2 < \cdots$ so that
\begin{equation}\label{e6.4}\|\Theta(x_{n_m},s_mb_{n_m}) - \Theta(x_{n_m},t_0b_{n_m})\| > \frac{1}{2} \text{ for all $m$.}\end{equation}
Clearly, $(x_n)$, and hence $(x_{n_m})$, is a separated sequence by choice. Since $(t_0b_{n_m})$ is bounded, there exists $g_1\in U_*(X,F)$ so that
$g_1(x_{n_m}) = t_0b_{n_m}$ for all $m$.
If there exists $\delta > 0$ and an infinite set $M$ so that $B(x_{n_m},\delta) = \{x_{n_m}\}$ for all $m\in M$, then each $\{x_{n_m}\}$ is a $\delta$-set in $X$, contradicting Lemma \ref{l6.11}.
Therefore, there is a sequence $(x'_m)$ in $X$ so that $0 < d(x_{n_m},x'_m)\to 0$.
Note that
$\|s_mb_{n_m} - t_0b_{n_m}\| \to 0$.
Hence there exists $h \in U_*(X,F)$ so that $h(x_{n_m}) = s_mb_{n_m} - t_0b_{n_m}$ and $h(x'_m) = 0$ for all sufficiently large $m$.
Set $g_2 = g_1 +h$.
Since $g_1(x'_m) = g_2(x'_m)$, $S^{-1}g_1(x'_m) = S^{-1}g_2(x'_m)$ for all sufficiently large $m$. Thus
\begin{align*}
\|\Theta(x_{n_m},&\, t_0b_{n_m}) - \Theta(x_{n_m},s_mb_{n_m})\| = \|S^{-1}g_1(x_{n_m}) - S^{-1}g_2(x_{n_m})\| \\
&\leq \|S^{-1}g_1(x_{n_m}) - S^{-1}g_1(x'_m)\| + \|S^{-1}g_2(x'_{m}) - S^{-1}g_2(x_{n_m})\|.
\end{align*}
As $S^{-1}g_1, S^{-1}g_2$ are uniformly continuous functions and $d(x_{n_m},x'_m) \to 0$, both terms on the right of the inequality tend to $0$. So we have reached a contradiction with (\ref{e6.4}).
This completes the proof of the claim and hence of the lemma.
\end{proof}
Lemmas \ref{l6.11} and \ref{l6.12} prove Theorem \ref{t6.10}.
If $T:U(X,E)\to U_*(Y,F)$ is a biseparating map, then $X$ is Bourbaki bounded by Theorem \ref{t6.10}.
Hence $U(X,E) = U_*(X,E)$ \cite{O'F}.
Thus the characterization Theorem \ref{t6.7.2} applies.
\section{Spaces of Lipschitz functions}\label{s8}
We focus on biseparating maps on Lipschitz spaces in this section. Again, we restrict consideration to complete metric spaces $X$, $Y$ and Banach spaces $E, F$.
In contrast to previous cases, we will see that there is no difference between spaces of bounded and unbounded Lipschitz functions. In fact, it is even sufficient to consider bounded metric spaces $X$ and $Y$.
Indeed, if $(X,d)$ is a metric space, let $X_1$ be the set $X$ with the bounded metric $d\wedge 1$.
Then clearly $\operatorname{Lip}_*(X,E) = \operatorname{Lip}(X_1,E)$.
To see that $\operatorname{Lip}(X,E)$ is equivalent to some $\operatorname{Lip}(Z,E)$ for a bounded metric space $Z$ via a linear biseparating map, we employ essentially the same argument from \cite[Proposition 5.2]{LT}, which has its roots in \cite{W}.
Fix a distinguished point $e$ in $X$ and define a function $\xi:X\to {\mathbb R}$ by $\xi(x) = d(x,e)\vee 1$.
Denote the Lipschitz constant of a function $f$ by $L(f)$. Let $d':X\times X\to {\mathbb R}$ be given by
\[ d'(p,q) = \sup_{\stackrel{f\in \operatorname{Lip}(X)}{L(f),|f(e)| \leq 1}}\bigl|\frac{f(p)}{\xi(p)} - \frac{f(q)}{\xi(q)}\bigr|,\]
where $\operatorname{Lip}(X)$ is the space of all real-valued Lipschitz functions on $X$.
\begin{lem}\cite[Proposition 5.1]{LT}\label{p6.1}
\begin{enumerate}
\item $d'$ is a metric on $X$ that is bounded above by $4$.
\item
\[ \frac{d(p,q)}{\xi(p)\vee \xi(q)}\leq d'(p,q) \leq \frac{3d(p,q)}{\xi(p)\vee \xi(q)}\]
for all $p,q\in X$.
\item If $\xi(p) \leq \xi(q)$, then
\[ d'(p,q) \leq d'(p,q)\xi(p) \leq 3d(p,q).\]
\item If $X$ is complete with respect to the metric $d$, then it is complete with respect to the metric $d'$.
\end{enumerate}
\end{lem}
Let $Z$ be the set $X$ with the metric $d'$.
\begin{prop}\label{p6.2}
$f\in \operatorname{Lip}(X,E)$ if and only if $\frac{f}{\xi} \in \operatorname{Lip}(Z,E)$.
In particular, $T:\operatorname{Lip}(X,E)\to \operatorname{Lip}(Z,E)$, $Tf = \frac{f}{\xi}$, is a linear biseparating map.
\end{prop}
\begin{proof}
The second assertion follows easily from the first.
Suppose that $f\in \operatorname{Lip}(X,E)$. Set $c = L(f)\vee \|f(e)\| \vee 1$.
For any $x^* \in E^*$, $\|x^*\| \leq 1$, $g = (x^*\circ f)/c\in \operatorname{Lip}(X)$ and $L(g), |g(e)| \leq 1$.
By definition of $d'$,
\[ cd'(p,q) \geq c\bigl|\frac{g(p)}{\xi(p)} - \frac{g(q)}{\xi(q)}\bigr| = \bigl|x^*\bigl(\frac{f}{\xi}(p) - \frac{f}{\xi}(q)\bigr)\bigr|.
\]
Taking supremum over $\|x^*\|\leq 1$ shows that $\frac{f}{\xi}\in \operatorname{Lip}(Z,E)$ with Lipschitz constant at most $c$.
Conversely, suppose that $g= \frac{f}{\xi}\in \operatorname{Lip}(Z,E)$. Let $p,q$ be distinct points in $X$ so that $\xi(p) \leq \xi(q)$. Denote the Lipschitz constant of $g$ with respect to the metric $d'$ by $L'(g)$.
Then
\[ \|g(q)\| \leq \|g(q) - g(e)\| + \|g(e)\| \leq L'(g)d'(q,e) + \|g(e)\| \leq 4L'(g) + \|g(e)\|\]
since $d' \leq 4$.
Hence
\begin{align*}
\|f(p) - f(q)\| & \leq \|g(p) - g(q)\|\xi(p) + \|g(q)\|(\xi(q) - \xi(p))\\
&\leq L'(g)d'(p,q)\xi(p) + (4L'(g)+ \|g(e)\|)d(p,q)\\
&\leq (7L'(g)+\|g(e)\|)d(p,q) \text{ by Lemma \ref{p6.1}(3)}.
\end{align*}
Thus $f\in \operatorname{Lip}(X,E)$.
\end{proof}
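For a concrete illustration, take $X = {\mathbb R}$ with distinguished point $e = 0$, so that $\xi(x) = |x|\vee 1$. The unbounded Lipschitz function $f(x) = x$ corresponds to the bounded function $Tf(x) = x/(|x|\vee 1)$ on $Z$. Note that every member of $\operatorname{Lip}(Z,E)$ is automatically bounded, since the metric $d'$ is bounded above by $4$.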
\subsection{$\varphi$ is a Lipschitz homeomorphism}
In view of the above, throughout the rest of this section, $X$ and $Y$ will be assumed to be bounded complete metric spaces. Let $T:\operatorname{Lip}(X,E)\to \operatorname{Lip}(Y,F)$ be a biseparating map so that $T0 =0$.
Once again, we have a representation (Proposition \ref{p4.2})
\begin{equation}\label{e7.1}Tf(y) = \Phi(y,f(\varphi^{-1}(y))) \text{ for all $y\in Y$ and all $f\in \operatorname{Lip}(X,E)$},\end{equation}
where $\varphi:X\to Y$ is a homeomorphism and $\Phi:Y\times E\to F$
is a function such that $\Phi(y,\cdot):E\to F$ is a bijection for all $y\in Y$.
Denote the inverse of $\Phi(y,\cdot)$ by $\Psi(x,\cdot)$, where $\varphi(x) = y$.
\begin{prop}\label{p7.3}
Suppose that $(x_n), (x'_n)$ are sequences in $X$. Let $y_n = \varphi(x_n), y_n' = \varphi(x_n')$ for all $n$.
If $(y_n)$ is a separated sequence, then there exists $C<\infty$ so that $d(y_n,y_n') \leq Cd(x_n,x_n')$ for all $n$.
\end{prop}
\begin{proof}
Assume that the proposition fails. There are sequences as in the statement of the proposition so that $d(y_n,y_n')/d(x_n,x_n') \to \infty$. Since $Y$ is bounded, $d(x_n,x_n') \to 0$.
If $(y_n')$ has a convergent subsequence, then $(x_n')$, and hence $(x_n)$ has a convergent subsequence, which in turn implies that $(y_n)$ has a convergent subsequence, contrary to the choice of $(y_n)$.
Thus, by taking a subsequence if necessary, we may assume that both $(y_n)$ and $(y_n')$ are separated sequences. Fix a nonzero vector $a\in E$ and let $g = T(1\otimes a)$.
\medskip
\noindent\underline{Case 1}. $d(y_n,y_n') \not\to 0$.
In this case, by taking a subsequence, we may assume that $(y_n)\cup (y_n')$ is a separated set.
Since $g\in \operatorname{Lip}(Y,F)$, $(g(y_n))$ is a bounded sequence in $F$.
Hence there exists $h\in \operatorname{Lip}(Y,F)$ so that $h(y_n) = g(y_n)$ and $h(y_n') = 0$ for all large $n$.
Then $T^{-1}h(x_n) = a$ and $T^{-1}h(x_n') = 0$ for all large $n$.
Since $T^{-1}h$ is Lipschitz and $d(x_n,x_n')\to 0$, we have a contradiction.
\medskip
\noindent\underline{Case 2}. $d(y_n,y_n')\to 0$.
If $(\frac{\|g(y_n)\|}{d(y_n,y_n')})$ is bounded, then there is a function $h\in \operatorname{Lip}(Y,F)$ so that
$h(y_n) = g(y_n)$ and $h(y_n') = 0$ for all large $n$.
Thus $T^{-1}h(x_n) = a$ and $T^{-1}h(x_n') = 0$. This is impossible since $T^{-1}h$ is Lipschitz and $d(x_n,x_n') \to 0$.
From the unboundedness of $(\frac{\|g(y_n)\|}{d(y_n,y_n')})$, we may assume without loss of generality that for each $n$, there exists $s_n$ so that $d(y_n,y'_n)\leq s_n< 2d(y_n,y_n')$ and that $k_n = \frac{\|g(y_n)\|}{s_n}\in {\mathbb N}$.
Since $\Phi(y_n,a) = g(y_n)$, $\Psi(x_n,g(y_n)) = a$. Thus
\[ \sum^{k_n}_{j=1}[\Psi(x_n,\frac{jg(y_n)}{k_n}) - \Psi(x_n,\frac{(j-1)g(y_n)}{k_n})] = \Psi(x_n,g(y_n)) = a.
\]
Hence there exists $j_0\in \{1,\dots, k_n\}$ so that
\[ \|\Psi(x_n,\frac{j_0g(y_n)}{k_n}) - \Psi(x_n,\frac{(j_0-1)g(y_n)}{k_n})\| \geq \frac{\|a\|}{k_n}.\]
Therefore, there exists $i_0\in \{j_0-1,j_0\}$ so that
\begin{equation}\label{e7.2.0} \|\Psi(x_n, \frac{i_0g(y_n)}{k_n}) - \Psi(x'_n, \frac{j_0g(y_n')}{k_n})\| \geq \frac{\|a\|}{2k_n}.\end{equation}
Now
\begin{align}\label{e7.2}
\|\frac{i_0g(y_n)}{k_n} &- \frac{j_0g(y_n')}{k_n}\| \leq \frac{\|g(y_n)\|}{k_n} + \frac{j_0}{k_n}\|g(y_n)-g(y_n')\| \\ \notag
&\leq s_n + \frac{j_0L(g)}{k_n}d(y_n,y_n') \leq (2+L(g))d(y_n,y_n'),
\end{align}
where $L(g)$ is the Lipschitz constant of $g$.
Since $(y_n)$ is separated and $(\frac{i_0g(y_n)}{k_n})$ is a bounded sequence in $F$, there exists $h_1\in \operatorname{Lip}(Y,F)$ so that $h_1(y_n) = \frac{i_0g(y_n)}{k_n}$ for all $n$.
Let $L(h_1)$ be the Lipschitz constant of $h_1$. By (\ref{e7.2}),
\begin{align*} \|\frac{j_0g(y_n')}{k_n} - h_1(y_n')\|& \leq \|\frac{j_0g(y_n')}{k_n} - h_1(y_n)\| + L(h_1)d(y_n,y_n')
\\&\leq (2+L(g)+L(h_1))d(y_n,y_n').
\end{align*}
Therefore,
one can construct a function $h_2\in \operatorname{Lip}(Y,F)$ so that
\[ h_2(y_n) = 0 \text{ and } h_2(y_n') = \frac{j_0g(y_n')}{k_n}- h_1(y_n') \text{ for all large $n$}.\]
Let $f= T^{-1}(h_1+h_2)$.
Then $Tf(y_n) = h_1(y_n)$ and hence $f(x_n) =\Psi(x_n, \frac{i_0g(y_n)}{k_n})$ for all large $n$.
Similarly, $f(x_n') = \Psi(x_n', \frac{j_0g(y_n')}{k_n})$ for all large $n$.
Note that $g$ is a bounded function. Set $\|g\|_\infty = \sup_{y\in Y}\|g(y)\|$. By (\ref{e7.2.0}),
\[ \|f(x_n) -f(x_n')\| \geq \frac{\|a\|}{2k_n} \geq \frac{\|a\|s_n}{2\|g\|_\infty} \geq \frac{\|a\|}{2\|g\|_\infty}d(y_n,y_n')\] for all large $n$.
Since $f$ is Lipschitz, it follows that $d(y_n,y_n')/d(x_n,x'_n)\not\to \infty$.
This contradiction completes the proof of the proposition.
\end{proof}
If $(x_0,e_0)\in X\times E$, $C<\infty$ and $r >0$, let
\begin{equation}\label{e7.4} \Delta(x_0,e_0,C,r) = \{(x,e)\in X\times E: d(x,x_0) \leq r, \|e-e_0\| \leq Cd(x,x_0)\}.\end{equation}
Then set $\Delta(x_0,e_0,C) = \bigcup_{r > 0} \Delta(x_0,e_0,C,r)$.
It is not surprising that understanding the map $T$ depends on analyzing the sets $\Delta(x_0,e_0,C,r)$ and $\Delta(x_0,e_0,C)$. For a very special instance, see \cite[Section 7.2]{AZ}.
Define sets in $Y\times F$ in a similar manner.
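Observe that if $f\in \operatorname{Lip}(X,E)$ has Lipschitz constant at most $C$ and $f(x_0) = e_0$, then $(x,f(x))\in \Delta(x_0,e_0,C,r)$ whenever $d(x,x_0)\leq r$. Thus $\Delta(x_0,e_0,C)$ contains the graph of every such function.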
Let $M:X\times E\to Y\times F$ be the function
\[M(x,u) = (\varphi(x),\Phi(\varphi(x),u)).\]
Then $M$ is a bijection. Moreover, if $f \in \operatorname{Lip}(X,E)$, then $M(x_0, f(x_0)) = (\varphi(x_0), Tf(\varphi(x_0)))$.
Suppose that $\varphi:X\to Y$ is not Lipschitz. There are sequences $(x_n), (x'_n)$ in $X$, $x_n\neq x_n'$ for all $n$, so that taking $y_n = \varphi(x_n), y_n' = \varphi(x_n')$, we have $d(y_n,y_n')/d(x_n,x_n') \to \infty$.
Since $Y$ is bounded, $d(x_n,x_n') \to 0$. By Proposition \ref{p7.3}, $(y_n)$ cannot be a separated sequence.
Taking a subsequence if necessary, we may assume that $(y_n)$ converges to some $y_0$.
Then $(x_n)$ converges to $\varphi^{-1}(y_0) = x_0$. The same must hold for $(x'_n)$. Therefore, $(y_n')$ also converges to $y_0$.
With further subsequences and relabeling the primed and unprimed terms if necessary, we may assume that $d(y_n,y_0) \leq d(y_n',y_0)$ for all $n$.
With this assumption, $y_n'\neq y_0$ for all $n$. For otherwise $y_n = y_0 = y_n'$, which implies that $x_n = x'_n$, contrary to their choice. Hence we may further assume that
\[ 2d(y'_{n+1},y_0) <d(y_n',y_0) \text{ and } 2d(y'_{n+1},y_0) < d(y_n,y_0) \text{ if $y_n\neq y_0$}.\]
\begin{prop}\label{p7.5}
Let $v_0\in F$ and let $u_n\in E$ be determined by $M(x_n,u_n) = (y_n,v_0)$. There exists $m=m(v_0)\in {\mathbb N}$ so that for any $n\geq m$, if $(y_n',v_n') \in \Delta(y_n,v_0,1)$, then
\[ (x_n',u_n') = M^{-1}(y_n',v_n')\in \Delta(x_n,u_n,m).\]
\end{prop}
\begin{proof}
Otherwise, there are $n_m \geq m$ and $(y_{n_m}',v_{n_m}')\in \Delta(y_{n_m},v_0,1)$ so that
$(x_{n_m}',u_{n_m}') = M^{-1}(y_{n_m}',v_{n_m}')\notin \Delta(x_{n_m},u_{n_m},m)$ for all $m$.
We may assume $n_m\uparrow \infty$.
For simplicity, relabel $n_m$ as $m$.
Thus
\[\varphi(x_{m}) = y_{m}, \varphi(x_{m}') = y_{m}', \Phi(y_{m}, u_{m}) = v_0, \Phi(y_{m}',u_m') = v_m'\]
and
\[ \|v_m'-v_0\| \leq d(y_m,y_m'), \|u_{m}'-u_{m}\| > m d(x_{m}',x_{m}) \text{ for all $m$.}\]
Let $d_m = d(y_m',y_m)$.
Define a function $g:Y\to F$ by
\[ g(y) = \begin{cases}
v_0 + (1- \frac{4d(y,y_m')}{d_m})(v_m'-v_0) &\text{if $m\in {\mathbb N}$, $d(y,y_m') < \frac{d_m}{4}$}\\
v_0 &\text{otherwise}.
\end{cases}\]
From the disjointness of the balls $B(y_m',\frac{d_m}{4})$ and the inequality $\|v_m'-v_0\| \leq d_m$ for all $m$,
we see that $g\in \operatorname{Lip}(Y,F)$.
Next, we claim that $y_n \notin B(y_m',\frac{d_m}{4})$ for any $m,n\in {\mathbb N}$.
Indeed, this is obvious if $m =n$.
Note that \[ d_m \leq d( y_m',y_0) + d(y_0,y_m) \leq 2d(y_m',y_0).\]
If $m < n$, then
\[ d(y_n,y_0) \leq d(y_n',y_0) < \frac{d(y_m',y_0)}{2}.
\]
Hence
\[ d(y_n,y_m') \geq d(y_m',y_0) - d(y_n,y_0)> \frac{d(y_m',y_0)}{2} \geq \frac{d_m}{4}.
\]
On the other hand, if $m > n$ and $y_n\neq y_0$, then
\[2d(y_m',y_0) \leq 2d(y_{n+1}',y_0) < d(y_n,y_0).
\]
Hence
\[ d(y_n,y_m') \geq d(y_n,y_0) - d(y_m',y_0) \geq d(y_m',y_0) \geq \frac{d_m}{2}.
\]
Finally, if $y_n = y_0$, then $d(y_n,y_m') = d(y_0,y_m') \geq \frac{d_m}{2}$.
Thus $y_n \notin B(y_m',\frac{d_m}{4})$ in all cases.
Obviously, $g(y_m') = v_m'= \Phi(y_m',u_m')$.
From the fact that $y_n \notin B(y_m',\frac{d_m}{4})$ for all $m,n$,
$g(y_m) = v_0 = \Phi(y_m,u_m)$ for all $m$.
Therefore, $T^{-1}g(x_m) = u_m$ and $T^{-1}g(x'_m) = u_m'$ for all $m$.
But $\|u_{m}'-u_{m}\| > m d(x_{m}',x_{m})$, which contradicts the fact that $T^{-1}g$ is Lipschitz.
\end{proof}
We are now ready to prove the main result regarding the homeomorphism $\varphi$.
\begin{thm}\label{t7.5}
Let $T:\operatorname{Lip}(X,E)\to \operatorname{Lip}(Y,F)$ be a biseparating map, where $X, Y$ are complete bounded metric spaces and $E, F$ are Banach spaces.
In the notation of (\ref{e7.1}), $\varphi:X\to Y$ is a Lipschitz homeomorphism.
\end{thm}
\begin{proof}
If $\varphi$ is not a Lipschitz function, then we obtain sequences $(x_n), (x_n')$, $(y_n)$ and $(y_n')$ as in the discussion before Proposition \ref{p7.5}.
For each $v\in F$, determine $m = m(v) \in {\mathbb N}$ by Proposition \ref{p7.5}.
Set $F_k = \{v\in F: m(v) \leq k\}$ for each $k\in {\mathbb N}$.
Then $F = \bigcup_{k=1}^\infty\overline{F_k}$.
By the Baire Category Theorem, there are an open ball $O$ in $F$ and $k_0\in {\mathbb N}$ so that $O\subseteq \overline{F_{k_0}}$.
Pick distinct points $a,b\in O\cap F_{k_0}$. Since $d_n = d(y_n,y_n') \to 0$, we may assume without loss of generality that $\|a-b\| > d_n$ for all $n$. For each $n$, choose $k_n\in {\mathbb N}$ so that
\[ \frac{k_nd_n}{2} \leq \|a-b\| < {k_nd_n}.\]
Note that $a + \frac{j}{k_n}(b-a) \in O$, $0\leq j\leq k_n$. By making small perturbations, one can find $w_{nj}\in O\cap F_{k_0}$, $0\leq j\leq k_n$, so that $w_{n0} = a$, $w_{nk_n} = b$ and $\|w_{nj} - w_{n,j-1}\|$ is sufficiently close to $\frac{\|a-b\|}{k_n}$ so as to make it $< d_n$.
Now
\begin{align*}
\|\sum^{k_n}_{j=1}[\Psi(x_n&,w_{nj}) - \Psi(x_n,w_{n,j-1})]\| = \|\Psi(x_n,b) -\Psi(x_n,a)\|
\\ & = \|T^{-1}(1\otimes b)(x_n) - T^{-1}(1\otimes a)(x_n)\|\\
& \to \|T^{-1}(1\otimes b)(x_0) - T^{-1}(1\otimes a)(x_0)\|\\
& = \|\Psi(x_0,b) -\Psi(x_0,a)\| = c.
\end{align*}
Since $\Psi(x_0,\cdot)$ is a bijection, $c > 0$.
For all sufficiently large $n$, there exists $1\leq j_n\leq k_n$ so that
\[ \|\Psi(x_n,w_{n,j_n}) - \Psi(x_n,w_{n,j_n-1})\| > \frac{c}{2k_n}.\]
Now choose $i_n\in \{j_n-1,j_n\}$ so that, setting
\[ u_n = \Psi(x_n,w_{n,j_n}) \text{ and } u_n' = \Psi(x_n',w_{n,i_n}),\]
we have $\|u_n-u_n'\| > \frac{c}{4k_n}$.
Note that
\[ M^{-1}(y_n,w_{n,j_n}) = (\varphi^{-1}(y_n),\Psi(\varphi^{-1}(y_n), w_{n,j_n}))
=
(x_n,u_n).
\]
Similarly, $M^{-1}(y_n',w_{n,i_n}) =(x_n',u_n')$.
By choice,
\[ \|w_{n,i_n} - w_{n,j_n}\| \leq \|w_{n,j_n-1} - w_{n,j_n}\| < d_n = d(y_n',y_n).\]
Hence $(y_n', w_{n,i_n}) \in \Delta(y_n,w_{n,j_n},1)$.
Since $w_{n,j_n} \in F_{k_0}$, $m=m(w_{n,j_n}) \leq k_0$.
By definition of $m(w_{n,j_n})$, this implies that for all $n \geq k_0$,
\[ (x_n',u'_n) \in \Delta(x_n,u_n,m)\subseteq \Delta(x_n,u_n,k_0),
\]
which in turns yields that
$\|u_n'-u_n\| \leq k_0d(x_n',x_n).$
Therefore,
\[ \frac{c}{8\|a-b\|}\cdot d_n \leq \frac{c}{4k_n} < \|u_n - u_n'\| \leq k_0d(x_n',x_n).\]
Since this holds for all sufficiently large $n$, and $d_n = d(y_n,y_n')$, it contradicts the assumption that $d(y_n,y_n')/d(x_n,x'_n) \to \infty$.
This completes the proof that $\varphi$ is a Lipschitz function. By symmetry, so is $\varphi^{-1}$. Hence $\varphi$ is a Lipschitz homeomorphism.
\end{proof}
\subsection{Section problem for Lipschitz functions}
Let $X$ be a complete bounded metric space and let $E,F$ be Banach spaces. Consider a given function $\Xi:X\times E\to F$. Define $M:X\times E \to X\times F$ by $M(x,e) = (x,\Xi(x,e))$. Recall the sets $\Delta(x,e,C,r)$ and $\Delta(x,e,C)$ in $X\times E$ as given by (\ref{e7.4}). Similar definitions apply in $X\times F$. Theorem \ref{t7.7} solves the section problem for spaces of Lipschitz functions. For a very special case, refer to \cite[Theorem 7.1]{AZ}.
\begin{lem}\label{l7.6}
Suppose that $x_0\in X$, $u_0\in E$ and $C<\infty$.
Let $(x^n_1), (x^n_2)$ in $X$, $(u^n_1), (u^n_2)$ in $E$ be sequences so that $x^n_1 \neq x^n_2$ for all $n$,
\[ \|u^n_1-u^n_2\|\leq Cd(x^n_1,x^n_2),\
\|u^n_i-u_0\| \leq Cd(x^n_i,x_0) \text{ $i =1,2$, $n\in {\mathbb N}$, and}\]
$\lim_n d(x^n_i,x_0)=0$, $i =1,2$.
Then there exists $f\in \operatorname{Lip}(X,E)$ so that $f(x^n_i) = u^n_i$, $i =1,2$, for infinitely many $n$.
\end{lem}
\begin{proof}
Set $r^n_i = d(x^n_i,x_0)$. There is no loss of generality in assuming that $r^n_2 \geq r^n_1$ for all $n$. After taking subsequences, we may divide the proof into the following cases.
\medskip
\noindent \underline{Case 1}. $x^n_1 = x_0$, i.e., $r^n_1 = 0$ for all $n$.
\medskip
Note that in this case $u^n_1 = u_0$ for all $n$. Since $x^n_2 \neq x^n_1$ and $r^n_2=d(x^n_2,x_0) \to 0$, we may further assume that $r^{n+1}_2 < \frac{1}{3}r^n_2$ for all $n$, which implies that the balls $B(x^n_2,\frac{r^n_2}{2})$ are pairwise disjoint.
Define $f:X\to E$ by
\[ f(x) = \begin{cases}
u_0 + (1 - \frac{2d(x,x^n_2)}{r^n_2})(u^n_2-u_0) &\text{if $d(x,x^n_2) < \frac{r^n_2}{2}$, $n\in {\mathbb N}$}\\
u_0 &\text{otherwise}.
\end{cases}\]
Then it can be checked that $f\in \operatorname{Lip}(X,E)$, $f(x^n_2) = u^n_2$, $f(x^n_1) = f(x_0) = u_0=u^n_1$ for all $n$.
\medskip
\noindent \underline{Case 2}. $r^n_1 >0$ and there exists $c > 0$ so that $d(x^n_1,x^n_2) \geq cr^n_2$ for all $n$.
\medskip
We may of course assume that $0 < c < 1$. The assumptions imply that the balls $B(x^n_1, \frac{cr^n_1}{2})$ and $B(x^n_2, \frac{cr^n_2}{2})$ are disjoint.
Since $r^n_2\to 0$, we may further assume that $B(x^{n+1}_1, \frac{cr^{n+1}_1}{2})\cup B(x^{n+1}_2, \frac{cr^{n+1}_2}{2})$ is disjoint from $\bigcup^n_{k=1}[B(x^{k}_1, \frac{cr^{k}_1}{2})\cup B(x^{k}_2, \frac{cr^{k}_2}{2})]$ for any $n$.
As a result, the sets $B(x^n_1, \frac{cr^n_1}{2})$, $B(x^n_2, \frac{cr^n_2}{2})$, $n\in {\mathbb N}$, are all mutually disjoint.
Define $f:X\to E$ by
\[ f(x) = \begin{cases}
u_0 + (1 - \frac{2d(x,x^n_i)}{cr^n_i})(u^n_i-u_0) &\text{if $d(x,x^n_i) < \frac{cr^n_i}{2}$, $n \in {\mathbb N}$, $i=1,2$,}\\
u_0 &\text{otherwise}.
\end{cases}\]
Then it can be checked that $f\in \operatorname{Lip}(X,E)$, $f(x^n_i) = u^n_i$, $i=1, 2$, $n\in {\mathbb N}$.
\medskip
\noindent \underline{Case 3}. $r^n_1 >0$ for all $n$ and $d(x^n_1,x^n_2)/r^n_2 \to 0$.
\medskip
As in Case 2, we may assume that the sets $B(x^n_2, \frac{r^n_2}{2})$, $n\in {\mathbb N}$, are disjoint.
In this instance, we may further assume that $B(x^n_1,d(x^n_1,x^n_2)) \subseteq B(x^n_2, \frac{r^n_2}{2})$ for all $n$.
Define $g:X\to E$ by
\[ g(x) = \begin{cases}
(1 - \frac{2d(x,x^n_2)}{r^n_2})(u^n_2-u_0) &\text{if $d(x,x^n_2) < \frac{r^n_2}{2}$}\\
0 &\text{otherwise}.
\end{cases}\]
Since $\|u^n_2-u_0\| \leq Cd(x^n_2,x_0)= Cr^n_2$ for all $n$, $g\in \operatorname{Lip}(X,E)$ and has Lipschitz constant at most $2C$.
Clearly, $g(x^n_2) = u^n_2-u_0$ for all $n$.
Now let
$h:X\to E$ be given by
\[ h(x) = \begin{cases}
(1 - \frac{d(x^n_1,x)}{d(x^n_1,x^n_2)})(u^n_1-u_0-g(x^n_1)) &\text{if $d(x^n_1,x) < d(x^n_1,x^n_2)$}\\
0 &\text{otherwise}.
\end{cases}\]
Note that
\[\|u^n_1-u_0-g(x^n_1)\| \leq \|u^n_1-u^n_2\| + \|g(x^n_2)-g(x^n_1)\| \leq 3Cd(x^n_1,x^n_2).
\]
Taking into account the disjointness of the sets $B(x^n_1, d(x^n_1,x^n_2))$, it follows that $h\in \operatorname{Lip}(X,E)$.
Furthermore,
\[
(g+h)(x^n_1) = u^n_1-u_0 \text{ and } (g+h)(x^n_2) = g(x^n_2) = u^n_2-u_0.\]
Finally, the function $f(x) = g(x) +h(x) +u_0$ is the one we seek.
\end{proof}
\begin{thm}\label{t7.7}
Let $\Xi:X\times E\to F$ be a given function. Define $Sf(x) = \Xi(x,f(x))$ for any function $f:X\to E$.
Suppose that $Sf$ belongs to $\operatorname{Lip}(X,F)$ for all $f\in \operatorname{Lip}(X,E)$. Then
\begin{enumerate}
\item If $(x_n)$ is a separated sequence in $X$, and $B$ is a bounded set in $E$, then there is a finite set $N \subseteq {\mathbb N}$ so that $\bigcup_{n\notin N}\Xi(x_n,B)$ is bounded.
\item Suppose that $x_0\in X$, $u_0\in E$ and $C<\infty$. There exist $r >0$ and $D<\infty$ so that
\[ \|\Xi(x_1,u_1) - \Xi(x_2,u_2)\| \leq Dd(x_1,x_2)
\]
whenever $\|u_1-u_2\| \leq Cd(x_1,x_2)$, $\|u_i-u_0\| \leq Cd(x_i,x_0)$ and $d(x_i,x_0) \leq r$, $i=1,2$.
\item Let $(x_n)$ be a separated sequence in $X$ and $(u_n)$ be a bounded sequence in $E$.
For any $C<\infty$, there exist $r>0$ and $D<\infty$ so that
\[ \|\Xi(x_n',u'_n) - \Xi(x_n,u_n)\| \leq Dd(x_n',x_n) \text{ for all $n$}\]
whenever $\|u_n'-u_n\| \leq Cd(x_n',x_n)$ and $d(x_n',x_n) \leq r$ for all $n$.
\end{enumerate}
Conversely, suppose that conditions (1), (2) and (3) hold. Then $Sf\in \operatorname{Lip}(X,F)$ for any $f\in \operatorname{Lip}(X,E)$.
\end{thm}
\begin{proof}
Suppose that $Sf\in \operatorname{Lip}(X,F)$ for any $f\in \operatorname{Lip}(X,E)$.
Let $(x_n)$ be a separated sequence in $X$ and $B$ be a bounded set in $E$. If $\bigcup_{n\notin N}\Xi(x_n,B)$ is unbounded for any finite set $N\subseteq {\mathbb N}$, there exists a sequence $(u_n)\subseteq B$ so that $(\Xi(x_n,u_n))$ is unbounded. Since $(x_n)$ is separated and $(u_n)$ is bounded, there is a Lipschitz function $f:X\to E$ so that $f(x_n) = u_n$ for all $n$.
Then $Sf\in \operatorname{Lip}(X,F)$; since $X$ is bounded, $(\Xi(x_n,u_n)) = (Sf(x_n))$ is bounded in $F$, a contradiction. This proves condition (1).
Suppose that condition (2) fails. Then there are $(x^n_1), (x^n_2)$ in $X$, $(u^n_1), (u^n_2)$ in $E$ so that
$\|u^n_1-u^n_2\| \leq Cd(x^n_1,x^n_2)$,
$\|u^n_i-u_0\| \leq Cd(x^n_i,x_0)$ and $d(x^n_i,x_0) \leq \frac{1}{n}$, $i =1,2$, $n\in {\mathbb N}$, but $\|\Xi(x^n_1,u^n_1) - \Xi(x^n_2,u^n_2)\| > nd(x^n_1,x^n_2)$.
In particular, the last inequality implies that $x^n_1\neq x^n_2$ for all $n$.
Apply Lemma \ref{l7.6} to find a function $f\in \operatorname{Lip}(X,E)$ so that $f(x^n_i) = u^n_i$ for infinitely many $n$.
Let $L$ be the Lipschitz constant of $Sf$.
Then
\[\|\Xi(x^n_1,u^n_1) - \Xi(x^n_2,u^n_2)\| = \|Sf(x^n_1) - Sf(x^n_2)\|
\leq Ld(x^n_1,x^n_2)\]
for infinitely many $n$, contrary to the choice of the sequences.
Let $(x_n)$ be a separated sequence in $X$ and let $(u_n)$ be a bounded sequence in $E$. Assume that condition (3) fails for a constant $C$.
For each $k\in {\mathbb N}$, there exist $n_k\in {\mathbb N}$, $x_k'\in X$ and $u_k'\in E$ so that $\|u_k'- u_{n_k}\| \leq Cd(x_k',x_{n_k})$ and $d(x_k',x_{n_k}) < \frac{1}{k}$ but
\begin{equation}\label{e7.4.1}\|\Xi(x_k',u_k') - \Xi(x_{n_k},u_{n_k})\| > kd(x_k',x_{n_k}).\end{equation}
If $(n_k)$ has a constant subsequence, then, say, $x_{n_k} = x_0$ and $u_{n_k} = u_0$ for infinitely many $k$.
In this case, we have a contradiction to condition (2), which has been shown above.
Otherwise, we may assume that $n_k \uparrow \infty$.
Since $(x_{n_k})$ is separated and $(u_{n_k})$ is bounded, there exists $g\in \operatorname{Lip}(X,E)$ so that $g(x_{n_k}) = u_{n_k}$ for all $k$.
Let $L$ be the Lipschitz constant of $g$. We have
\[ \|u_k'- g(x_k')\| \leq \|u_k'-u_{n_k}\| + \| u_{n_k} - g(x_k')\| \leq (C+L)d(x_{n_k},x_k').\]
As $(x_{n_k})$ is separated and $d(x_k',x_{n_k})\to 0$, we can find $h\in \operatorname{Lip}(X,E)$ so that $h(x_{n_k}) = 0$ and $h(x_k') = u_k'-g(x_k')$ for all large $k$. Let $f = g+h\in \operatorname{Lip}(X,E)$.
Then $Sf\in \operatorname{Lip}(X,F)$ and
\[ Sf(x_{n_k}) = \Xi(x_{n_k},u_{n_k}),\ Sf(x_k') = \Xi(x_k',u_k').\]
Thus (\ref{e7.4.1}) leads to a contradiction.
Conversely, suppose that conditions (1) - (3) hold.
Let $f\in \operatorname{Lip}(X,E)$ with Lipschitz constant $C$.
First, let us show that $Sf$ is a bounded function.
If not, there is a sequence $(z_n)$ in $X$ so that $\|Sf(z_n)\| \to \infty$.
By condition (1), $(z_n)$ cannot have a separated subsequence.
Hence we may assume that $(z_n)$ converges to some $z_0\in X$.
Then $\|f(z_n) - f(z_0)\| \leq Cd(z_n,z_0)$ and $d(z_n,z_0) \to 0$.
Applying condition (2) with $(x_0,u_0) = (x_1,u_1) = (z_0,f(z_0))$ and $(x_2,u_2) = (z_n,f(z_n))$, we obtain $D<\infty$ so that
\[ \|\Xi(z_0,f(z_0)) - \Xi(z_n,f(z_n))\| \leq Dd(z_0,z_n) \text{ for all sufficiently large $n$}.\]
Hence $(Sf(z_n)) = (\Xi(z_n,f(z_n)))$ is bounded, contrary to its choice.
Now suppose that $Sf\notin \operatorname{Lip}(X,F)$. There are sequences $(x_n)$, $(x_n')$ in $X$ so that
\begin{equation}\label{e7.5} \|\Xi(x_n,u_n) - \Xi(x_n',u_n')\| = \|Sf(x_n) - Sf(x_n')\|> nd(x_n,x_n') \text{ for all $n$},
\end{equation}
where $u_n = f(x_n)$ and $u_n' = f(x_n')$.
Since $Sf$ is a bounded function, we must have $d(x_n,x_n') \to 0$.
By using subsequences, we may assume that either $(x_n)$ converges to some $x_0$ or that $(x_n)$ is a separated sequence.
In the former case, since $d(x_n,x_0), d(x_n',x_0)\to 0$, $\|f(x_n) - f(x_n')\|\leq Cd(x_n,x_n')$,
$\|f(x_n)-f(x_0)\| \leq Cd(x_n,x_0)$ and $\|f(x_n')-f(x_0)\| \leq Cd(x'_n,x_0)$, it follows from condition (2) that there exists $D<\infty$ so that for all sufficiently large $n$,
\[ \|\Xi(x_n,u_n) - \Xi(x_n',u_n')\| = \|\Xi(x_n,f(x_n)) - \Xi(x_n',f(x_n'))\| \leq Dd(x_n,x_n'),\]
contrary to (\ref{e7.5}).
The proof is similar in case $(x_n)$ is a separated sequence, using condition (3) instead.
\end{proof}
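As a simple illustration of the sufficiency (an example of ours), let $X$ be a bounded metric space, $E = F = {\mathbb R}$ and $\Xi(x,e) = e^2$. Condition (1) is clear. For condition (2), take $r = 1$: if $|u_i - u_0| \leq Cd(x_i,x_0) \leq C$ and $|u_1 - u_2| \leq Cd(x_1,x_2)$, then
\[ |u_1^2 - u_2^2| = |u_1+u_2|\,|u_1-u_2| \leq (2|u_0| + 2C)\,Cd(x_1,x_2),\]
so $D = (2|u_0|+2C)C$ works; condition (3) is verified in the same way. Hence $f\mapsto f^2$ maps $\operatorname{Lip}(X)$ into $\operatorname{Lip}(X)$, as is also evident directly, since Lipschitz functions on the bounded space $X$ are bounded.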
The next theorem is easily deduced from Theorem \ref{t7.7}, keeping in mind that $\varphi:X\to Y$ is a Lipschitz homeomorphism.
\begin{thm}\label{t7.8}
Let $X,Y$ be complete bounded metric spaces and let $E, F$ be Banach spaces.
Suppose that $T:\operatorname{Lip}(X,E) \to \operatorname{Lip}(Y,F)$ is a biseparating map.
Then there are a Lipschitz homeomorphism $\varphi:X\to Y$ and a function $\Phi:Y\times E\to F$ so that
\begin{enumerate}
\item For each $y\in Y$, $\Phi(y,\cdot):E\to F$ is a bijection with inverse $\Psi(x,\cdot):F\to E$, where $\varphi(x) =y$.
\item $Tf(y) = \Phi(y,f(\varphi^{-1}(y)))$ and $T^{-1}g(x) = \Psi(x,g(\varphi(x)))$ for all $f\in \operatorname{Lip}(X,E)$, $g\in \operatorname{Lip}(Y,F)$ and $x\in X$, $y\in Y$.
\item Let $(x_n)$ be a separated sequence in $X$. For any bounded sets $B$ in $E$ and $B'$ in $F$, there is a finite set $N \subseteq {\mathbb N}$ so that $\bigcup_{n\notin N}\Phi(\varphi(x_n),B)$ and $\bigcup_{n\notin N}\Psi(x_n,B')$ are bounded.
\item Suppose that $x_0\in X$, $u_0\in E$, $v_0\in F$ and $C<\infty$. There exist $r >0$ and $D<\infty$ so that
\[
\|\Phi(\varphi(x_1),u_1) - \Phi(\varphi(x_2),u_2)\|, \|\Psi(x_1,v_1) - \Psi(x_2,v_2)\|\leq Dd(x_1,x_2)
\]
whenever $\|u_1-u_2\|, \|v_1-v_2\| \leq Cd(x_1,x_2)$, $\|u_i-u_0\|, \|v_i-v_0\| \leq Cd(x_i,x_0)$ and $d(x_i,x_0) \leq r$, $i=1,2$.
\item Let $(x_n)$ be a separated sequence in $X$ and $(u_n), (v_n)$ be bounded sequences in $E$ and $F$ respectively.
For any $C<\infty$, there exist $r>0$ and $D<\infty$ so that
\[ \|\Phi(\varphi(x_n'),u'_n) - \Phi(\varphi(x_n),u_n)\|,\
\|\Psi(x_n',v'_n) - \Psi(x_n,v_n)\| \leq Dd(x_n',x_n)\]
for all $n$,
whenever $\|u_n'-u_n\|, \|v_n'-v_n\| \leq Cd(x_n',x_n)$ and $d(x_n',x_n) \leq r$
for all $n$.
\end{enumerate}
Conversely, if $\varphi$, $\Phi$ satisfy conditions (1)-(5) and $T$ is defined by (2), then $T$ is a biseparating map from $\operatorname{Lip}(X,E)$ onto $\operatorname{Lip}(Y,F)$.
\end{thm}
\subsection{A property of Lipschitz sections}
Let $X$ be a bounded metric space and let $E$ and $F$ be Banach spaces.
Theorem \ref{t7.7} characterizes the ``section maps'' $\Xi:X\times E\to F$ so that $Sf(x) = \Xi(x,f(x))$ is Lipschitz whenever $f\in \operatorname{Lip}(X,E)$.
An example in \cite[p.~190]{AZ}, where $X= [0,1]$ with the H\"{o}lder metric $d(x,y)= |x-y|^\alpha$, $0<\alpha < 1$, and $E = F = {\mathbb R}$, shows that for a given $x\in X$, the function $\Xi(x,\cdot):E\to F$ need not be continuous.
Nevertheless, in this subsection, we will show that if $x$ is an accumulation point of $X$, then there is a dense open set $O$ in $E$ so that $\Xi(x,\cdot)$ is continuous on $O$.
Let $\Xi:X\times E\to F$ be a ``Lipschitz section''. Taking $(x_1,u_1) = (x,u)$ and $(x_2,u_2) = (x_0,u_0)$ in Theorem \ref{t7.7}(2) yields the next lemma.
\begin{lem}\label{l7.10}
Let $(x_0,u_0) \in X\times E$ and let $v_0 = \Xi(x_0,u_0)$.
For any $C<\infty$, there exists $n = n(x_0,u_0,C) \in{\mathbb N}$ so that if $(x,u)\in X\times E$,
$\|u-u_0\| \leq Cd(x,x_0)$ and $d(x,x_0) < \frac{1}{n}$, then
\[ \|v-v_0\|\leq nd(x,x_0), \text{ where $v = \Xi(x,u)$}.\]
\end{lem}
\begin{thm}\label{t7.11}
Let $x_0$ be an accumulation point of $X$.
There is a dense open set $O$ in $E$ so that $\Xi(x_0,\cdot)$ is continuous on $O$.
\end{thm}
\begin{proof}
In the notation of Lemma \ref{l7.10}, for each $n\in {\mathbb N}$, let
\[ A_n = \{u_0\in E: n(x_0,u_0,1) \leq n\}.\]
By the lemma, $E= \bigcup_n \overline{A_n}$.
Since $E$ is a complete metric space, $O = \bigcup_n \operatorname{int}\overline{A_n}$ is a dense open set in $E$.
To complete the proof of the theorem, let us show that $\Xi(x_0,\cdot)$ is continuous on $O$.
Clearly, it suffices to show that $\Xi(x_0,\cdot)$ is continuous on each $\operatorname{int}\overline{A_n}$.
Fix $N\in {\mathbb N}$. Suppose that $(u_n)$ is a sequence in $\operatorname{int}\overline{A_N}$ converging to $u_0 \in \operatorname{int}\overline{A_N}$.
\medskip
\noindent\underline{Claim}. There is a sequence $(u_n')$ in $A_N$ so that
\[ \|u_n'-u_n\| , \|\Xi(x_0,u_n') - \Xi(x_0,u_n)\| \to 0.\]
\medskip
Consider a given $n\in {\mathbb N}$. Since $\Xi(x,u_n)$ is a Lipschitz function of $x$ and $x_0$ is an accumulation point, there exists $x\in X$ so that $0< Nd(x,x_0) < \frac{1}{n}$ and that $\|\Xi(x,u_n) -\Xi(x_0,u_n)\| < \frac{1}{n}$.
As $u_n\in \overline{A_N}$, there exists $u_n'\in A_N$ so that $\|u_n'-u_n\| \leq d(x,x_0) < \frac{1}{nN}$.
Note that $n(x_0,u_n',1) \leq N$. Hence the condition $\|u_n-u_n'\| \leq d(x,x_0)< \frac{1}{N}$ implies
\[ \|\Xi(x,u_n) - \Xi(x_0,u_n')\| \leq Nd(x,x_0)< \frac{1}{n}.\]
Therefore,
\begin{align*}
\|\Xi(x_0,u_n)- &\Xi(x_0,u_n')\| \\&\leq \|\Xi(x_0,u_n)- \Xi(x,u_n)\| + \|\Xi(x,u_n)- \Xi(x_0,u_n')\|\\
& < \frac{2}{n}.
\end{align*}
This completes the proof of the claim.
\medskip
In view of the claim, in order to prove the continuity of $\Xi(x_0,\cdot)$ at $u_0$, it suffices to show that $
\Xi(x_0,u_n') \to \Xi(x_0,u_0)$.
Let $\varepsilon > 0$ be given. As before, one can choose $x'$ so that $0< d(x',x_0) < \frac{1\wedge \varepsilon}{N}$ and that
$\|\Xi(x',u_0) - \Xi(x_0,u_0)\| < \varepsilon$.
For all sufficiently large $n$, $\|u_n'-u_0\| < d(x',x_0)$.
Once again, $\|u_0-u_n'\| \leq d(x',x_0) < \frac{1}{N}$ implies
\[ \|\Xi(x',u_0) - \Xi(x_0,u_n')\| \leq Nd(x',x_0) < \varepsilon.\]
Therefore,
\begin{align*}
\|\Xi(x_0,u_n')- &\Xi(x_0,u_0)\| \\&\leq \|\Xi(x_0,u_n')- \Xi(x',u_0)\| + \|\Xi(x',u_0)- \Xi(x_0,u_0)\|
< 2\varepsilon
\end{align*}
for all sufficiently large $n$.
\end{proof}
\section{Comparisons}\label{s9}
We close with some results comparing different types of spaces under nonlinear biseparating maps.
Throughout this section, $X,Y$ will be complete metric spaces and $E$, $F$ will be Banach spaces.
\begin{prop}\label{p8.1}
Let $T:A(X,E)\to \operatorname{Lip}(Y,F)$ be a biseparating map, where $Y$ is bounded. If $A(X,E) = U(X,E)$, then $X$ is separated. If $A(X,E) = U_*(X,E)$, then both $X$ and $Y$ are separated.
\end{prop}
\begin{proof}
Normalize $T$ by taking $T0 = 0$. Suppose that $A(X,E)$ is either $U(X,E)$ or $U_*(X,E)$. First assume, if possible, that there is a convergent sequence $(x_n)$ in $X$ consisting of distinct points.
Let $x_0$ be its limit, which we may assume to be distinct from all $x_n$'s.
Set $y_n = {\varphi}(x_n)$, $n\in {\mathbb N}\cup\{0\}$, and $r_n = d(y_n,y_0)$, $n\in {\mathbb N}$.
Since $r_n \to 0$, without loss of generality, we may further assume that $r_{n+1} < \frac{r_n}{3}$ for all $n\in {\mathbb N}$.
Fix a nonzero vector $b\in F$. For each $m\in{\mathbb N}$, define $g_m: Y\to F$ by
\[ g_m(y) = \begin{cases}
\bigl(1- \frac{2d(y,y_n)}{r_n}\bigr)mr_nb &\text{if $d(y,y_n) < \frac{r_n}{2}$, $n\in {\mathbb N}$,}\\
0 &\text{otherwise}.
\end{cases}\]
Then $g_m\in \operatorname{Lip}(Y,F)$ (each summand has Lipschitz constant $2m\|b\|$ and, as $r_{n+1}< \frac{r_n}{3}$, the balls $\{y: d(y,y_n)<\frac{r_n}{2}\}$ are pairwise disjoint), ${g_m}(y_n) = mr_nb$ for all $n\in {\mathbb N}$ and ${g_m}(y_0) =0$.
By Proposition \ref{p4.2}, ${T^{-1}g_m}(x_0) =0$.
By continuity of each $T^{-1}g_m$ at $x_0$, there is an increasing sequence $(n_m)$ so that $T^{-1}g_m(x_{n_m}) \to 0$.
Thus, there is a function $f\in U_*(X,E)\subseteq A(X,E)$ so that $f(x_{n_m}) = T^{-1}g_m(x_{n_m})$ for all $m\in {\mathbb N}$ and $f(x_0) =0$.
By Proposition \ref{p4.2},
\[ {Tf}(y_{n_m}) = {g_m}(y_{n_m}) = mr_{n_m}b\text{ and } {Tf}(y_0) = 0.\]
However, $Tf$ is Lipschitz on $Y$.
We have reached a contradiction since
\[ \|{Tf}(y_{n_m}) - {Tf}(y_0)\| = mr_{n_m}\|b\| = md(y_{n_m},y_0)\|b\|.\]
This shows that $X$ does not contain any nontrivial convergent sequence.
If $X$ is not separated, there are points $x_n,x_n'\in X$ so that $0<d(x_n,x'_n)\to 0$.
Let $y_n = {\varphi}(x_n)$ and $y_n' = {\varphi}(x_n')$.
Since ${\varphi}$ is uniformly continuous by Proposition \ref{p6.3.0}, $d(y_n,y_n')\to 0$.
If $(y_n)$ has a subsequence that converges in ${Y}$, then $(x_n)$ has a convergent subsequence. By the previous paragraph, $(x_n)$ has a constant subsequence, which in turn implies that $(x'_n)$ has a nontrivial convergent subsequence, contrary to the last paragraph. Thus, we may replace $(y_n)$ by a subsequence if necessary to assume that it is separated.
As $(y_n)$ is separated and $d(y_n,y_n') \to 0$,
it is possible to choose ${g_m}\in \operatorname{Lip}(Y,F)$ so that ${g_m}(y_n) = md(y_n,y_n')b$ and $g_m(y_n') =0$ for all $n$.
As before, we can find an increasing sequence $(n_m)$ so that $(T^{-1}g_m)(x_{n_m})\to 0$.
Then we can construct $f\in U_*(X,E)$ so that $f(x_{n_m}) = (T^{-1}g_m)(x_{n_m})$ and $f(x_{n_m}') = 0$ for all $m$.
By Proposition \ref{p4.2},
\[{Tf}(y_{n_m}) = {g_m}(y_{n_m}) = md(y_{n_m},y'_{n_m})b \text{ and } {Tf}(y_{n_m}') = 0.
\]
Once again, this contradicts the fact that $Tf$ is Lipschitz.
Now if $A(X,E) = U_*(X,E)$, we show that $Y$ is also separated.
By Theorem \ref{t3.5}, there is a homeomorphism ${\varphi}:X\to {Y}$.
In particular, ${Y}$ must be discrete.
If $Y$ is not separated, there are sequences $(y_n), (y_n')$ in $Y$ so that $0 < d(y_n,y_n')\to 0$.
Since $(y_n)$ cannot have a convergent subsequence in ${Y}$, we may assume that it is a separated sequence.
By taking a further subsequence, we may assume that $(y_n)\cup (y_n')$ consists of distinct points.
Let $x_n = {\varphi}^{-1}(y_n)$ and $x'_n = {\varphi}^{-1}(y_n')$. Then $(x_n)\cup (x_n')$ consists of distinct points.
Fix a nonzero vector $b\in F$ and let $a_n = T^{-1}(1\otimes b)(x_n)$. Then $(a_n)$ is a bounded sequence in $E$.
Since $X$ is separated, there is a function $f\in U_*(X,E)$ so that $f(x_n) = a_n$ and $f(x_n') = 0$ for all $n$.
By Proposition \ref{p4.2}, $Tf(y_n) = b$ and $Tf(y_n') = 0$ for all $n$.
This is impossible since $Tf$ is uniformly continuous.
\end{proof}
\begin{thm}\label{t6.6}
Assume that $Y$ is bounded.
A map $T:U(X,E)\to \operatorname{Lip}(Y,F)$ is biseparating if and only if $X$ and $Y$ are finite sets of the same cardinality and there are a bijection $\psi:Y\to X$ and bijections $\Phi(y,\cdot):E\to F$ for each $y\in Y$ so that $Tf(y) = \Phi(y,f(\psi(y)))$ for all $f\in U(X,E)$ and all $y\in Y$.
\end{thm}
\begin{proof}
Assume that $T$ is biseparating. Represent $T$ as in Proposition \ref{p4.2}. Similarly, $T^{-1}$ has a representation ${T^{-1}g}(x) = \Psi(x,{g}({\varphi}(x)))$.
By Proposition \ref{p8.1}, $X$ is separated.
Suppose that $y\in Y$. Let $\psi(y) = x\in {X}$.
If $b\in F$, then $\Psi(x,b) = T^{-1}(1\otimes b)(x) \in E$ and
\[ b= T(T^{-1}(1\otimes b))(x) = \Phi(y,T^{-1}(1\otimes b)(x)) = \Phi(y,\Psi(x,b)).\]
Take an arbitrary function $g:Y\to F$.
Define $f:X\to E$ by $f(x) = \Psi(x,g(\varphi(x)))$.
Since $X$ is separated, $f\in U(X,E)$.
Therefore, $Tf\in \operatorname{Lip}(Y,F)$.
By the above, for any $y\in Y$, $Tf(y) = g(y)$.
This shows that any function $g:Y\to F$ belongs to $\operatorname{Lip}(Y,F)$. Clearly, this implies that $Y$ must be a finite set.
The remaining statements of the theorem follow easily from the representations of $T$ and $T^{-1}$.
The converse is clear.
\end{proof}
\begin{thm}\label{t6.7}
Assume that $Y$ is bounded.
A map $T:U_*(X,E)\to \operatorname{Lip}(Y,F)$ is biseparating if and only if
\begin{enumerate}
\item $X$ and $Y$ are separated metric spaces.
\item There are a bijection $\psi:Y\to X$ and bijections $\Phi(y,\cdot):E\to F$, $y\in Y$, so that
\begin{enumerate}
\item $Tf(y) = \Phi(y,f(\psi(y)))$ for all $f\in U_*(X,E)$ and all $y\in Y$.
\item For any bounded sets $B_1$ in $E$ and $B_2$ in $F$, there are a finite set $Y_0$ in $Y$ and a bounded set $B_3$ in $E$ so that
\[ \bigcup_{y\notin Y_0}\Phi(y,B_1) \text{ is bounded in $F$ and } B_2 \subseteq \bigcap_{y\notin Y_0}\Phi(y,B_3).\]
\end{enumerate}
\end{enumerate}
\end{thm}
\begin{proof}
Assume that $T:U_*(X,E)\to \operatorname{Lip}(Y,F)$ is biseparating. By Proposition \ref{p8.1}, $X$ and $Y$ are both separated.
Therefore, for either direction of the theorem, $X$ and $Y$ are separated.
In this case, since $Y$ is assumed to be bounded, $U_*(X,E) = C_*(X,E)$ and $\operatorname{Lip}(Y,F) = C_*(Y,F)$.
Thus the problem reduces to proving that if $X$ and $Y$ are separated, then $T:C_*(X,E)\to C_*(Y,F)$ is biseparating if and only if condition (2) of the theorem holds.
Biseparating maps $T:C_*(X,E)\to C_*(Y,F)$ have been characterized in Theorem \ref{t5.5}.
Note that presently, as $X$ and $Y$ are separated, a map $\varphi:X\to Y$ is a homeomorphism if and only if it is a bijection. Also, condition (2) of Theorem \ref{t5.5} is vacuous.
Thus, it remains to show that when $X$ and $Y$ are separated, condition (3) of Theorem \ref{t5.5} is equivalent to
condition 2(b) above.
Let $B_1$ and $B_2$ be bounded sets in $E$ and $F$ respectively. Since $X$ is separated, condition (3) of Theorem \ref{t5.5} is equivalent to the fact that every $(x_n) \in \prod_nZ_n(B_1,B_2)$ has a constant subsequence.
Note that $Z_m(B_1,B_2) \supseteq Z_n(B_1,B_2)$ if $m \leq n$.
Therefore, this condition is satisfied if and only if there exists $n_0$ such that $Z_{n_0}(B_1,B_2)$ is finite.
If $Z_{n_0}(B_1,B_2)$ is finite, let $Y_0 = \varphi(Z_{n_0}(B_1,B_2))$.
Then $Y_0$ is a finite set and $y = \varphi(x) \notin Y_0$ implies $x\notin Z_{n_0}(B_1,B_2)$. Now
\[
x\notin Z_{n_0}(B_1,B_2) \iff \Phi(y,B_1) \subseteq n_0B_F \text{ and } B_2\subseteq \Phi(y, n_0B_E).\]
Hence condition 2(b) is satisfied with $B_3 = n_0B_E$.
Conversely, suppose that condition 2(b) holds. Let $n_0$ be such that
\[ \bigcup_{y\notin Y_0}\Phi(y,B_1) \subseteq n_0B_F \text{ and } B_2 \subseteq \bigcap_{y\notin Y_0}\Phi(y,n_0B_E).\]
Then $\varphi(x) \not\in Y_0$ implies $x\notin Z_{n_0}(B_1,B_2)$. Therefore, $Z_{n_0}(B_1,B_2)$ is finite.
\end{proof}
|
2,869,038,155,914 | arxiv | \section{Introduction}
\label{intro}
To solve high-level computer vision problems, like object recognition and action understanding, researchers and practitioners typically use very large datasets of manually labeled data to train a machine learning algorithm. These algorithms are typically used to discriminate between different categories, e.g., cars versus bikes, or running versus walking \cite{chen2019hybrid,yeung2018every}. Ideally, one would want to be able to design systems that can perform high-level tasks like these without the need for any manually annotated data.
One way to derive such unsupervised computer vision algorithms is to incorporate knowledge into the system. Here, we derive one such approach and use it to recognize intentional and non-intentional actions. We use Aristotle's definition of intent as something deliberate, chosen before the start of the action \cite{aristotle1926art}. This definition is also included in Cartesian dualism, where Descartes differentiated conscious, intentional actions from reflexes caused by external stimuli \cite{descartes1960meditations}.
To successfully classify a perceived action as intentional or unintentional, we need to carefully evaluate each segment of the video sequence displaying it. To clarify, consider the following example. A person is walking down a hall and after a few seconds slips and falls to the ground (maybe the floor is wet). Here, we would say that the person was intentionally walking down the hall, but that he unintentionally slipped and fell. Afterwards, he intentionally stood up and continued walking. Compare this to the case where the person does not slip but is instead pushed to the ground by someone else. In this case, we say that all segments in the scene are performed intentionally (since the fall is the result of the intentional push). Our goal is to derive an algorithm that can correctly and fully automatically annotate each segment of a video sequence as showing an intentional or a non-intentional action.
As mentioned above, to solve this problem, one could manually annotate a large number of video segments showing intentional and non-intentional actions and then use a machine learning algorithm to learn to discriminate between the two. Unfortunately, the collection and annotation of a sufficiently large dataset has a considerable cost. A major research direction in computer vision is to derive algorithms that can solve problems like ours in a completely unsupervised way, \textit{i}.\textit{e}., without the use of any labelled training data.
We solve this problem by adding knowledge to our system. Specifically, we use the basic knowledge of self-propelled motion, Newtonian motion and their relationship to reason about the intentionality of an action. We derive a simple unsupervised computer vision algorithm for the recognition of intent based on these concepts. This demonstrates how simple, common concepts can be used to design systems that can perform complex, high-level tasks even when large amounts of labelled training data are not available.
\section{Related works}
\textbf{Visual recognition of intent in human.}
The mechanism of visual recognition of intent has been of interest to cognitive and social science since the 1960s, although its underlying behavioral and neural mechanisms are still an open question. The seminal work of Heider and Simmel \cite{H&S} shows that human subjects assign personal attributes (like intentionality) to abstract geometric shapes when the objects move in a human-like manner. \cite{sartori2011cues} shows that body movement plays an important role in human intent recognition. \cite{luo2005can} studied the capability of infants to attribute goals to human and non-human agents when the agent moves in a self-propelled manner, supporting the hypothesis that part of this recognition capability is rooted in a specialized reasoning system activated by the kinematic features of the object's action. \cite{chambon2017neural,chambon2011they} showed that intent recognition involves an interplay between the kinematic information of the agent and prior expectations of the agent's movement.
\textbf{Visual recognition of intention in computer vision.} Although significant progress has been made in vision tasks like face/object recognition, there are very few studies focusing on visual recognition of the intent of an agent. \cite{wei2018and} proposed a hierarchical graph that jointly models attention and intention from an RGB-D video of an agent, but that study focused on the intention behind eye gaze (the definition of attention in the study). \cite{vondrick2016predicting} proposed an algorithm to infer the motivation of an agent from an image with a common knowledge factor graph extracted from text. \cite{ravichandar2017human} introduced an algorithm to estimate agent intention from the 3D skeleton of the upper body of the agent. In that study, intention is represented by a latent state space defining the location of the agent's arms, whose dynamics are defined by a neural network. This latent variable is then estimated with the Expectation-Maximization (EM) algorithm. \cite{ullman2009help} developed an algorithm to infer a binary goal (help or hinder) in a multi-agent setup with inverse planning in a Markov Decision Process (MDP). Most of these works are based on data-driven supervised models, which require a large amount of labeled training data.
Another area of research that is also related to the visual recognition of intent is human action/motion forecasting \cite{rudenko2019human}. Motion prediction aims at predicting the future actions of one or multiple agents based on the observed actions in the past, where intention recognition plays an important role (albeit very differently from the proposed study). \cite{fang2019intention} uses 2D human pose to estimate pedestrians' intention of crossing and cyclists' intention of turning and stopping. \cite{varytimidis2018action}, which also addresses the problem of pedestrian crossing/non-crossing recognition, shows that among different combinations of handcrafted/deep features with data-driven learning models, CNN deep features with an SVM show the best performance on the Joint Attention for Autonomous Driving (JAAD) dataset.
Although the aforementioned works share the name ``intent recognition'' with our study, the task is however very different. First, the purpose of this study is to recognize intentionality, i.e., to recognize whether an observed action is performed intentionally or not, rather than to predict future human behavior based on a confined set of actions. In other words, our study focuses on understanding the past, rather than predicting the future. Second, the previous works focus on recognizing different intentions, with the assumption that all actions from an agent are intentional. However, this assumption might not hold for an arbitrary action (the action might be non-intentional), which can be tested by our algorithm. To our knowledge, there is no published computer vision system for the recognition of intentional/non-intentional actions.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{dataset-samples-revise.png}
\caption{Recognizing intentional versus non-intentional actions. The six samples are from the three datasets introduced in Section \ref{ss:datasets}. The colored horizontal bar underneath each image sequence denotes the intentionality function $I(t)$ of the action. The yellow crosshair illustrates the 2D projection of the location of the agent's center of mass. (a) intent-maya dataset. The intentional action shows a ball agent jumping down from a platform and climbing up a conveyor belt. In the non-intentional action, the ball moves according to Newtonian physics. The transparent tail of the ball shows the location of the agent in the last second. (b) intent-mocap dataset. In the intentional action the agent jumps down from an (invisible) platform. In the non-intentional action the agent trips while walking. These snapshots of the animations are directly extracted from www.mixamo.com. (c) intent-youtube dataset. In the intentional action, the agent successfully completes a board slide. In the non-intentional action, the agent falls at the end of an ollie.}
\label{fig:dataset-samples}
\end{figure*}
\textbf{Common knowledge in computer vision.}
Incorporating common sense knowledge in computer vision systems is also a largely unexplored territory in the community. \cite{aditya2015visual,del2013common} proposed rule-based commonsense reasoning systems for visual scene understanding and action recognition, but with a focus only on hand-related actions. \cite{zellers2018recognition} introduced the task of so-called ``visual commonsense reasoning'' with a corresponding dataset, where the machine is asked not only to answer questions about the actions and interactions between agents, but also the rationale behind such actions. The rationale of an action is not directly observable in the given image, and thus must be inferred through commonsense reasoning.
\section{Visual Recognition of Intent}
\label{section:method-modeling}
\subsection{Problem Formulation}
\label{section:problem-formulation}
Our goal is to design an unsupervised computer vision system that can classify observed actions of an agent or object as intentional or not. Given the trajectory of the agent's (object's) center of mass, we would like to parse the trajectory into segments that either exhibit intentional movement or unintentional movement.
Let the 3D location of the agent (object) as a function of $t$ be denoted by $\mathbf{p}(t)=(x(t),y(t),z(t))^T$, with $y(t)$ indicating the vertical axis pointing up (\textit{i}.\textit{e}., up defines the positive quadrant).
We now define the intentionality of the action of the agent as $I(t)\in\{1,-1\}$, with 1 indicating the action is intentional and $-1$ non-intentional; note $I(\cdot)$ is also a function of time, since some parts of the observed action may correspond to intentional actions (\textit{e}.\textit{g}., walking), while others to non-intentional ones (\textit{e}.\textit{g}., losing one's footing).
Hence, our goal is to construct a model $f(\cdot)$ such that $I(t)=f(\mathbf{p}(t))$. Since we wish to do so without any training or the need for labeled data (\textit{i}.\textit{e}., an unsupervised approach), herein, we derive a model of $f(\cdot)$ which incorporates common knowledge about intentional and non-intentional behavior of an agent.
Figure \ref{fig:dataset-samples} provides six examples of this task, ranging from animations of abstract geometric objects (intent-maya, Figure \ref{fig:dataset-samples}(a)), to animations of humanoid characters (intent-mocap, Figure \ref{fig:dataset-samples}(b)), then to real-world video of human actions (intent-youtube, Figure \ref{fig:dataset-samples}(c)). The colored horizontal bar in Figure \ref{fig:dataset-samples} denotes an $I(t)$ of an action. Our task is to construct a model that maps the 3D trajectory of the agent's center of mass (shown as yellow crosshairs in Figure \ref{fig:dataset-samples}) to the intentionality of the agent's action $I(t)$ (blue/red horizontal bar in Figure \ref{fig:dataset-samples}).
\begin{figure*}
\includegraphics[width=1.0\textwidth]{figure-1-revised.png}
\caption{Overview of the proposed algorithm. Here we illustrate the concepts we derive to model intentionality. (a) shows a logic diagram of the four concepts introduced in Section \ref{ss:concept-intro}, and their relationship with intentionality. (b-e) shows a pair of samples from our dataset described in Section \ref{sss:intent-maya-description}. The intentional example (in the blue box) shows a ball stepping down a ladder and jumping down an inclined platform to go to the isle at the far end of the scene. In the non-intentional example, the ball rolls and bounces according to Newtonian physics, with a trajectory that closely mimics that of the intentional action; yet the human eye is not tricked by this, and people clearly classify the first action as intentional and the second as non-intentional. (b) Result of our algorithm when only Concept 1 is considered; (c) result with Concepts 1 and 2; (d) result with Concepts 1, 2 and 3; (e) results with all four concepts included in our algorithm; (f) model overview. The proposed algorithm first extracts the change in total mechanical energy $\Delta E(t)$ and the vertical acceleration $a_y(t)$ from the input trajectory of the agent, $\mathbf{p}(t)$. Concept 1 recognizes intentional actions from $\Delta E(t)$. Concept 2 takes $a_y(t)$ and the output of Concept 1 to form an understanding of non-intentional actions, which is used in Concept 3 to update the decision. Finally, Concept 4 handles all the unknown states that were previously unrecognizable (see the derivations in the main text of the paper for details).}
\label{fig:2}
\end{figure*}
\subsection{Common knowledge concepts}
\label{ss:concept-intro}
Imagine a human agent jumping over a hurdle, which is clearly an intentional action. When she prepares to jump, she converts the (non-observable) chemical energy stored in her body to the mechanical energy of her muscle. The muscle contracts and pushes her body upward in the air. While in the air, gravity is the main external force acting on her which forces her to fall back to the ground. If the initial muscle contraction is strong enough, she successfully jumps over the hurdle.
If we examine the total mechanical energy of the system in the above example, which includes the scene and the agent, we see stable energy before the jump, a sharp increase at the time of the jump, and a stable trend after the jump (during the free fall back down). For us humans, the association between the perception of intentionality and the behavior of the total mechanical energy is among the many common knowledge concepts that we gradually learn in the early stages of the development of our brain \cite{luo2005can}. Our goal is to incorporate this knowledge into a computer vision system, thus avoiding the need to train a supervised machine learning algorithm to model intentionality from labelled data.
This study models the following common knowledge concepts,
\begin{itemize}
\item \textbf{Concept 1 (C1):} A standalone\footnote{Standalone means this concept only focuses on the movement at a specific time point rather than the relationship between actions.} self-propelled motion (SPM) is an intentional action, where self-propelled motion (SPM) is any movement that adds observable mechanical energy into the system.
\item \textbf{Concept 2 (C2):} A standalone external-force motion (EFM) is a non-intentional action, where the external-force motion (EFM) is any movement induced only by external forces (e.g., gravity).
\item \textbf{Concept 3 (C3):} An EFM caused by an SPM is part of an intentional action (\textit{e}.\textit{g}., falling down after an upward jump).
\item \textbf{Concept 4 (C4):} An agent has inertia of intentionality (II), meaning the intentionality of an agent does not change unless C1-C3 applies.
\end{itemize}
The four concepts and their relationship with the intentionality of an action can be visualized by Figure \ref{fig:2}(a) in the form of a logic diagram.
Similar to Newton's Three Laws of Motion, none of these concepts alone fully defines intentional/non-intentional actions across time. Only when combined do they form a common knowledge system that can be used to recognize the intentionality of an agent across time.
\subsection{Mathematical derivations}
\label{s-section:math-formulation}
Recall that we want to formulate the common knowledge as a functional mapping $f(\cdot)$, such that $I(t) = f(\mathbf{p}(t))$, where $I(t)$ takes values in $\{ -1, 1\}$ at each time $t$. In the definitions of Concepts 1 and 2, we will also use 0 to denote an ``unknown'' state, an intermediate state that will be categorized in Concept 4.
\subsubsection{Concept 1}
\label{ss-section:C1}
Concept 1 (C1) states that a standalone SPM is an intentional action, since an SPM adds total mechanical energy to the observable system. C1 derives from the common knowledge that humans utilize internally stored energy to execute movements that fulfill their intentions, adding observable energy into the system. Thus, the model of C1 can be derived as follows,
\begin{equation}
\label{eq:model-C1}
I_{C1}(t) = f_{C1}(\mathbf{p}(t)) =\left\{\begin{array}{l}1\;\;\;\mathrm{if}\;\Delta E(t)>0\\
0\;\;\;\mathrm{otherwise,}\end{array}\right.
\end{equation}
where $\Delta E(t)={dE(t)}/{dt}$ is the change in the total observable mechanical energy $E(t)$ with respect to time,
\begin{equation}
\label{eq:mechanical-energy}
E(t)=K(t)+V(t),
\end{equation}
with $K(t)$ the kinetic energy given by,
\begin{equation}
K(t)= \frac{1}{2}\left[\left(\frac{dx(t)}{dt}\right)^2+\left(\frac{dy(t)}{dt}\right)^2+\left(\frac{dz(t)}{dt}\right)^2\right],
\end{equation}
$V(t)$ the potential energy defined as,
\begin{equation}
V(t) =G\left(y(t)-y(t_0)\right),
\end{equation}
$G$ is the gravitational constant, and $y(t_0)$ is the initial y-axis location of the agent. In this formulation, we model agents as points with unit masses, neglecting the rotational kinetic energy or elastic potential energy.
$I_{C1}(t)$ equals 1 at any instant in which the trajectory adds energy into the observable system and 0 for any other movement not specified in this concept.
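To make the computation concrete, the following minimal Python sketch (our illustration, not the authors' released code) implements equations (\ref{eq:model-C1})-(\ref{eq:mechanical-energy}) for a uniformly sampled trajectory; the threshold \texttt{eps} stands in for the numerical tolerance discussed in the implementation details below.
\begin{verbatim}
import numpy as np

def intent_c1(p, dt=1.0/60, g=9.81, eps=1e-3):
    # Concept 1: frames where the observable mechanical energy
    # increases are intentional (1); everything else is unknown (0).
    # p: (T, 3) center-of-mass positions; the y-axis points up.
    v = np.gradient(p, dt, axis=0)     # finite-difference velocity
    K = 0.5 * np.sum(v**2, axis=1)     # kinetic energy (unit mass)
    V = g * (p[:, 1] - p[0, 1])        # potential energy vs. start
    E = K + V                          # total mechanical energy E(t)
    dE = np.gradient(E, dt)            # energy change dE/dt
    return np.where(dE > eps, 1, 0)
\end{verbatim}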
\subsubsection{Concept 2}
\label{ss-section:C2}
Concept 2 (C2) states that a standalone EFM, a motion introduced only by external forces, is non-intentional. This is due to the fact that the exertion of the external force does not change depending on the agent's desires or beliefs. For example, if an agent is falling, it is generally not the intention of the agent to be falling; rather, the agent has no control over the effect of gravity, making this downward motion inevitable and, thus, non-intentional \cite{sep-action}. However, one should also notice that an agent may take advantage of an EFM, intentionally positioning themselves in the EFM to achieve their purpose. This special condition will be considered in Concept 3.
In practice, the number and types of external forces vary depending on the scene. But on earth, gravity is the primary external force we are bound by and, hence, this is what we are focusing on in the present work.
There are two characteristics of gravity: 1. it is approximately constant regardless of the location of the agent, thus introducing a constant downward acceleration ($g$); 2. the effect of gravity on an agent (or object) does not increase the observable total mechanical energy of the system. The former is modeled by $f_{C2g}(\mathbf{p}(t))$, while the latter is already modeled by Concept 1 and represented in $I_{C1}(t)$.
With this knowledge, we can derive the model of C2, $f_{C2}$, as,
\begin{equation}
\label{eq:model-C2}
\begin{aligned}
I_{C2}(t) & = f_{C2}(\mathbf{p}(t), I_{C1}(t)) \\
& = f_{C2g}(\mathbf{p}(t)) + I_{C1}(t),
\end{aligned}
\end{equation}
where $f_{C2g}(\mathbf{p}(t))$ is defined as,
\begin{equation}
\label{eq:model-C2g}
f_{C2g}(\mathbf{p}(t)) = \left
\{
\begin{array}{ll}
-1 & \; \mathrm{if} \; I_{C1}(t)=0 \; \wedge a_y(t)=c\cdot g, \\
\; & \; \exists \; c > 0\\
0 & \; \mathrm{otherwise,}
\end{array}\right.
\end{equation}
$g$ is the negative acceleration due to gravity, $\wedge$ is the Boolean AND operation, and $a_y(t)$ is the vertical acceleration of the agent, which is defined as,
\begin{equation}
a_y(t) = \frac{d^2y(t)}{dt^2}.
\end{equation}
Note that we defined the $y$-axis to be pointing vertically upward, opposite to the direction of gravity.
The condition for $-1$ (non-intentional) in equation (\ref{eq:model-C2g}), $a_y(t)=c\cdot g, \; \exists \; c > 0$, represents a downward, constant acceleration. The other condition, $I_{C1}(t) = 0$, ensures that the movement does not add anything to the total mechanical energy of the observable system. The two conditions are combined with an AND operator to ensure that both are simultaneously satisfied.
One may wonder why $c>0$, rather than $c=1$, which would mean the vertical acceleration of the agent is equal to the gravitational acceleration on earth. The reason is that by having $c>0$ we can model the motion due to gravity when the agent is on an inclined surface. The case where the object moves at a uniform speed ($c=0$) is assigned to the unknown state under this concept, since the motion is not due to mere gravity.
Equation (\ref{eq:model-C2}) adds $f_{C2g}$ and $I_{C1}$ together, which gives 1 for intentional, $-1$ for non-intentional, and 0 for all the unknown movements that are not described by either C1 or C2. These unknown movements will be handled by Concept 4.
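A matching sketch of equations (\ref{eq:model-C2}) and (\ref{eq:model-C2g}) could look as follows; testing $a_y(t)=c\cdot g$ frame-by-frame with a tolerance band is our simplification of the constancy check, not a prescription from the paper.
\begin{verbatim}
def intent_c2(p, I_c1, dt=1.0/60, g=9.81, c_min=0.05):
    # Concept 2: no energy gain plus a downward acceleration
    # a_y = c*g, c > 0, marks the frame as non-intentional (-1).
    a_y = np.gradient(np.gradient(p[:, 1], dt), dt)
    c = -a_y / g                       # y points up, so a_y = -c*g
    efm = (I_c1 == 0) & (c > c_min)    # per-frame surrogate; a full
                                       # test would also check that c
                                       # is constant over the interval
    return I_c1 + np.where(efm, -1, 0) # values in {1, -1, 0}
\end{verbatim}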
\subsubsection{Concept 3}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{example-C3.pdf}
\caption{An example where Concept 3 is necessary to achieve a correct classification of intentionality. In this trajectory, an agent jumps twice. $I_{C1}$, $I_{C2}$ and $I_{C3}$ are the outputs of Concepts 1, 2 and 3, respectively. At times $t_{a1}$ and $t_{a2}$, the agent adds positive energy into the system to initialize the jumps. Thus, the movement at these two time points is detected as intentional, as shown in $I_{C1}$. $s_1$ and $s_2$ are the two time intervals when the agent's movement is induced only by gravity, i.e., free fall. Hence, the action in these two intervals is detected as non-intentional by $I_{C2}$. However, since the free fall is part of the jump, the correct classification should be intentional. By taking into account the causal relationships between actions, Concept 3 can correctly classify these two movements as intentional, as shown in $I_{C3}$.}
\label{fig:example-C3}
\end{figure}
Concept 3 (C3), as foreshadowed in Section \ref{ss-section:C2}, describes the condition under which an EFM might not be non-intentional: when the agent actively moves herself into the EFM. For example, when the human agent mentioned earlier in this section jumps over the hurdle, she is subjected to gravity after she pushes herself into the air. Although the free fall motion is induced by mere gravity, the motion is nevertheless the result of her initial jump -- an intentional action that adds total mechanical energy into the system. C3 models exactly this condition: when an EFM is caused by an SPM, this EFM should be classified as intentional movement.
However, modeling the causal relationship between actions is a challenging problem by itself. In this study, we simplify causality to an immediate temporal relationship, i.e., the causal action is immediately before the consequential action, a surrogate we found works well. Temporal precedence is one of the criteria necessary for establishing causality. The reason we only focus on short-term causality is that a long-term causal relationship between actions can be decomposed into a chain of short-term causal relationships between actions.
To model this knowledge, let us first define the set of time intervals of all EFMs as $S_{\mathrm{EFM}} = \{s_1,s_2,...,s_i,...\}$ whose elements $s_i$ are the time intervals of the $i^{th}$ EFM, as shown in Figure \ref{fig:example-C3}. The main idea of the algorithm is, for each EFM, to identify whether it is caused by an SPM. If so, the EFM is recognized as an intentional action. More formally, the model $f_{C3}$ is formulated as shown in Algorithm \ref{alg:C3}.
\begin{algorithm}
\caption{Algorithm for C3}
\label{alg:C3}
\SetAlgoLined
Input: $I_{C1}, S_{\mathrm{EFM}}$ \;
Output: $I_{C3}$ \;
Initialize $I_{C3} \gets I_{C1}$ \;
\For{ $s_i \in S_{\mathrm{EFM}}$}{
$t_{ai} \gets \mathrm{inf}(s_i)$ \;
\If{$I_{C1}(t_{ai}-1)==1$}{
$I_{C3}(t)\gets1$ for $\forall t \in s_i$\;
}
}
\end{algorithm}
In Algorithm \ref{alg:C3}, $t_{ai} \gets \mathrm{inf}(s_i)$ extracts the starting time point of the $i^{th}$ EFM. The operation $I_{C1}(t_{ai}-1)==1$ examines whether the movement immediately preceding the EFM is an SPM. In that case, the SPM is treated as the cause of the EFM, which means the EFM is also intentional. The assignment of intentionality is implemented as $I_{C3}(t)\gets 1$, for $\forall t \in s_i$, in the algorithm.
Figure \ref{fig:example-C3} illustrates an example of a case where C3 is needed for the correct recognition of intentionality. There are two EFMs in the figure, whose time intervals are denoted by $s_1$ and $s_2$. There are two instances of SPMs, which can both be abstracted as force impulses generated by the agent that initialize a ``jump''. Due to the instantaneous nature of the impulses, our C1 model can only detect the SPMs at two time points, $t_{a1}$ and $t_{a2}$, shown in the $I_{C1}$ row in the figure. The two EFMs, which are free falls in this case, are a direct and expected result of the initial SPMs, and thus should be treated as intentional.
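A direct Python transcription of Algorithm \ref{alg:C3} (ours; the interval extraction is simplified to a scan over frames) is:
\begin{verbatim}
def intent_c3(I_c1, I_c2):
    # Concept 3: an EFM interval whose immediately preceding frame
    # is an SPM (intentional) inherits the intentional label.
    I = I_c2.copy()
    t = 0
    while t < len(I):
        if I[t] == -1:                 # start of an EFM interval s_i
            s = t
            while t < len(I) and I[t] == -1:
                t += 1
            if s > 0 and I_c1[s - 1] == 1:
                I[s:t] = 1             # EFM caused by an SPM
        else:
            t += 1
    return I
\end{verbatim}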
\subsection{Concept 4}
Concept 4 (C4) is introduced to handle movements that are not modeled by C1, C2 and C3. Using the concept of inertia from physics, which describes the resistance of an object to changes in its velocity, we describe C4 as an {\em intentionality inertia}, a property of the agent that resists changes in its intentionality status -- the intentionality of an agent does not change unless one or more of Concepts 1 through 3 apply.
The rationale behind this concept can also be understood from the causal relationships between actions. When a movement causes another movement, the intentionality carries over. However, if an event happens that breaks the causal relationship, in our case the events defined by C1 to C3, the intentionality changes accordingly. Let us imagine a case in which a human agent falls from a cliff, hits the ground and lies on the ground from then on. The unfortunate fall is a non-intentional movement, according to C2. The movement (or lack of movement) of lying on the ground is also non-intentional. It was not the agent's intention to fall in the first place, so it is also not the intention of the agent to be lying on the ground, since lying on the ground is an effect of falling and hitting the ground. Thus, although ``lying on the ground'' is not one of the actions defined in C1 to C3 (it does not add total mechanical energy; it does not have a constant downward acceleration), it is still non-intentional due to its relationship to its causing action. If the agent stands up after lying on the floor, the ``standing up'' will be recognized as intentional according to C1.
To model this concept, we first define the set of time intervals of all the ``unknown'' actions, $U_{\mathrm{null}} = \{u_1,u_2,...,u_i,...\}$, whose element $u_i$ is the time interval of the $i$-th unknown movement, i.e., one that does not belong to C1 to C3. For each unknown movement, we check the intentionality of the previous action and assign the previous intentionality state to the current unknown action. More formally, the concept is formulated in Algorithm \ref{alg:C4}.
\begin{algorithm}[h!]
\caption{Algorithm for C4}
\label{alg:C4}
\SetAlgoLined
Input: $I_{C3}, U_{\mathrm{null}}$ \;
Output: $I_{C4}$ \;
Initialize $I_{C4} \gets I_{C3}$ \;
\For{ $u_i \in U_{\mathrm{null}}$}{
$t_{ai} \gets \mathrm{inf}(u_i)$ \;
$I_{\mathrm{cause}} = I_{C3}(t_{ai}-1)$ \;
$I_{C4}(t)\gets I_{\mathrm{cause}}$ for $\forall t \in u_i$\;
}
\end{algorithm}
One may wonder what happens if the unknown movement occurs at the beginning of the video, where there is no C1-C3 motion defined as its cause. In those cases, prior knowledge about the nature of the agent is needed, i.e., an assumption about the default intentionality of the agent. For a human agent, one might want to assume the default state is intentional, since the actions of a normal, conscious adult are generally intentional by default (otherwise there is no reason for that person to move). In the case that no prior knowledge is available, the algorithm outputs the unknown state.
Now that we have derived the implementations of Concepts 1-4, the final $I(t)$ is directly equal to $I_{C4}(t)$. Note that although $I(t)=I_{C4}(t)$, $I(t)$ is a combination of all four concepts, since $I_{C4}(t)$ is a function of $I_{C3}(t)$, which itself is a function of both $I_{C1}(t)$ and $I_{C2}(t)$ (shown in Algorithm \ref{alg:C3} and Algorithm \ref{alg:C4}).
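Putting the four concepts together, a minimal sketch of the full mapping $I(t)=f(\mathbf{p}(t))$ (with the intentional-by-default prior for human agents as an explicit, adjustable assumption) is:
\begin{verbatim}
def intent_c4(I_c3, default=1):
    # Concept 4: unknown frames (0) inherit the intentionality of
    # the immediately preceding movement; `default` is the prior
    # used when the video starts in an unknown state.
    I = I_c3.copy()
    prev = default
    for t in range(len(I)):
        if I[t] == 0:
            I[t] = prev                # inertia of intentionality
        else:
            prev = I[t]
    return I

def recognize_intent(p, dt=1.0/60):
    I1 = intent_c1(p, dt)
    I2 = intent_c2(p, I1, dt)
    return intent_c4(intent_c3(I1, I2))  # I(t) = I_C4(t)
\end{verbatim}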
\subsection{Implementation Detail}
To apply our algorithm to trajectories with discrete time (frames), we use first-order finite differences to approximate the derivatives. We apply a 30-frame median filter to the total mechanical energy estimated from equation (\ref{eq:mechanical-energy}) to remove outliers. The comparisons to zero in equations (\ref{eq:model-C1}) and (\ref{eq:model-C2g}) are implemented with a positive threshold close to zero to account for possible numerical error in the estimation.
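For instance, the outlier removal could be implemented as follows (a sketch; \texttt{medfilt} requires an odd kernel length, so the 30-frame window becomes 31):
\begin{verbatim}
from scipy.signal import medfilt

def smooth_energy(E, k=31):
    # Median-filter the estimated total mechanical energy E(t)
    # to remove outliers before differentiating it.
    return medfilt(E, kernel_size=k)
\end{verbatim}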
\section{Assumptions of the system}
In this section, we give a thorough analysis of the assumptions of the proposed algorithm. We delineate the assumptions made on the Computational Level (Section \ref{ss:concept-intro}) and the Algorithmic Level (Section \ref{s-section:math-formulation})\footnote{Here we are using the Computational, Algorithmic, and Implementational levels of David Marr \cite{vision1982marr}. The implementational level is not discussed since our work does not contribute to that specific level.}.
We argue that the proposed common knowledge concepts are relatively general on the computational level, meaning that the logic described by the four concepts is generally sufficient to determine intentionality across layers of abstraction (the same logic can be applied to Heider-Simmel-like animations as well as real-world videos). However, the algorithmic implementation we used is based on a set of assumptions which limits its generality. To explain this, let us consider the example of a cue ball hitting a pool ball. In this example, when the agent of interest is the pool ball, the movement is an external-force motion, induced by the impulse from the cue ball. Because this EFM is not the result of a self-propelled motion, the algorithm, on the conceptual level, should correctly classify the movement as non-intentional. But on the algorithmic level, our specific implementation tells a different story. Because the ball gains mechanical energy (through the impulse from the cue ball), the algorithm determines that the pool ball is performing a self-propelled motion, and thus annotates the movement as intentional.
The reason for this mismatch is the following assumptions our algorithm operates under:
\begin{enumerate}
\item There is only one agent involved in the action.
\item The total mechanical energy of the agent can be calculated from the kinematics of its center of mass.
\item The external force is gravity and its decomposition.
\item The causal relationship between SPM and EFM can be described with immediate temporal relationship.
\end{enumerate}
The four assumptions listed here might give the impression that the proposed system is very constrained. This might be true compared to generic intention recognition, which is extremely complex and which even humans fail at in some cases. However, for the actions of a single agent in a static environment on earth, this set of assumptions is generally applicable, or can be approximated well enough for the algorithm to perform well, as we show in the experimental results in Section 5. The clear presentation of assumptions, we argue, should be considered an advantage rather than a weakness, since it allows practitioners to be aware of the conditions under which our algorithm is not applicable, and provides researchers with clear future directions for improvement.
\section{Experiments}
\label{Experiments}
To our knowledge, there is no existing dataset of intentional/non-intentional actions. Thus, we created three datasets for our experiments: intent-maya, intent-mocap and intent-youtube. The intent-maya dataset contains abstract, minimalistic 3D animations of intentional/non-intentional actions, providing ground truth 3D trajectories for sphere-like agents. The intent-mocap dataset contains motion capture data collected from human agents, providing accurate 3D locations of the human body but leaving the center of mass trajectory subject to estimation. The intent-youtube dataset provides in-the-wild RGB video samples where the 3D location of the human body and the center of mass are both estimated. Although we provide manual labels of intentionality on all three datasets, those labels are not used as part of our proposed algorithm, since our algorithm is not data-driven and thus has no need for manual labels. The labels are only used to evaluate the performance of the proposed algorithm and to train the supervised baselines for comparison. Testing on these three datasets shows the capability of our algorithm to recognize intentionality in both abstract, idealistic datasets and realistic, noisy datasets, showing the general applicability of the proposed concepts.
\subsection{Datasets}
\label{ss:datasets}
\subsubsection{Intent-maya dataset}
\label{sss:intent-maya-description}
The intent-maya dataset contains 60 3D animations of agents acting intentionally or non-intentionally, half for each class. The animations are designed similarly to the stimuli in the classical Heider and Simmel experiment \cite{H&S}, in which they showed that humans attribute intentionality even to abstract geometric objects. In our videos, one or multiple balls move in a manually designed 3D scene. The movement is human-like in the intentional videos and Newtonian in the non-intentional videos. We used Autodesk Maya 2015 to generate the videos. Keyframe animation is used for the intentional videos and the Bullet Physics Engine is used for the non-intentional videos. Each video has 480 frames at 60 fps.
To ensure the videos are perceived as intentional or non-intentional, we asked 30 Amazon Mechanical Turkers to evaluate each animation, judging whether the action is intentional or not. All the videos in the dataset have at least 90\% agreement among Turkers, indicating a consistent perception of intentionality across human subjects.
Since all animations are manually coded, we can extract the ground truth 3D trajectory of the center of mass of the agent directly from the Maya animation.
\subsubsection{Intent-mocap dataset}
The manually designed 3D animations we created in Maya provide abstract yet compelling intentional/non-intentional percepts on balls. However, one may view the animations in the intent-maya dataset as too abstract and simple to generalize to practical conditions. The intent-mocap dataset was created to mitigate this concern. Motion capture data provides us with actions performed by human agents, with the advantage of direct measurement of the 3D locations of body markers, yielding accurate 3D trajectories of the joints of the agent.
We collect mocap sequences from the Adobe Mixamo dataset\footnote{https://www.mixamo.com/}, which provides a wide range of intentional and non-intentional mocap sequences that have been cleaned by keyframe animators. We manually selected the intentional and non-intentional samples based on the action descriptions provided in the dataset. The descriptions of intentional actions include jumping, walking, running, climbing, etc. The descriptions of non-intentional actions include falling, tripping, slipping, etc. With these descriptions, we collected a total of 208 samples, half for each class. A sequence of 21-joint skeletons is extracted from the mocap samples using MATLAB. The length of the sequences varies from 32 to 1219 frames. The sampling rate of the sequences is 60 Hz.
We directly use the 3D human joint locations provided by the mocap data.
\subsubsection{Intent-youtube dataset}
The mocap dataset provides precise human action sequences. However, the nature of mocap generally requires the actors to perform pre-scripted actions. Thus, even if we collect non-intentional samples, one could argue that the actor is ``pretending'' to be non-intentional\footnote{However, one should also notice that acting to be non-intentional does not mean the action and kinematics of the agent lack the characteristics of genuine non-intentional movement}. We introduce the intent-youtube dataset to address this concern.
The intent-youtube dataset contains 1000 in-the-wild human action videos, among which 500 are intentional actions and 500 are non-intentional actions. The videos were collected by keyword searching on YouTube. For intentional videos, the keywords are derived from ``action'' and ``activity'' in WordNet \cite{miller1998wordnet} and ConceptNet \cite{speer2017conceptnet}. Non-intentional keywords consist of two parts: an adjective and an action (e.g., ``accidental drop''). Besides the keywords extracted from WordNet and ConceptNet, we also used keywords that are empirically effective, like ``fail'' for non-intentional actions. Only videos with more than 100 views are used in our dataset. Camera shot detection is applied to each video to ensure each video clip only contains one camera shot. Video clips with significant camera motion were also removed from the dataset. All video samples have at least one full-body agent. The final videos vary between 51 and 299 frames in length.
To verify that these videos properly exhibit either an intentional or a non-intentional action, each video was classified into the intent or non-intent category by an Amazon Mechanical Turker and then verified by an experienced annotator. All videos with inconsistent judgments from the annotators were removed from the dataset.
We extract 3D human pose by applying the 3D human pose estimation algorithm proposed in \cite{martinez2017simple} to the 2D human pose extracted by OpenPose \cite{cao2018openpose}. Given an estimated 3D human pose, we solve a perspective-n-point (PnP) problem with non-linear least squares (with the steepest descent algorithm) to estimate the 3D translation of the agent.
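A sketch of this translation estimate is given below; the focal length and principal point are placeholder values (the true intrinsics are unknown for in-the-wild videos), and we use SciPy's default least-squares solver as a stand-in for the steepest descent iteration.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def estimate_translation(X3d, x2d, f=1000.0, c=(960.0, 540.0)):
    # X3d: (J, 3) root-centered 3D pose in camera coordinates;
    # x2d: (J, 2) detected 2D keypoints. Solve for the translation
    # t minimizing the perspective reprojection error (PnP).
    def residual(t):
        P = X3d + t
        u = f * P[:, 0] / P[:, 2] + c[0]
        v = f * P[:, 1] / P[:, 2] + c[1]
        return np.concatenate([u - x2d[:, 0], v - x2d[:, 1]])
    t0 = np.array([0.0, 0.0, 5.0])   # initialize in front of camera
    return least_squares(residual, t0).x
\end{verbatim}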
\subsection{Estimating center of mass for human agent}
To estimate the center of mass of the human agent, we first assign each joint to the legs (from hip to toe), the torso (lower back, spine, lower spine and head) or the arms (shoulder, elbow, wrist and hand). The center of mass of each body component is then computed by averaging all the points assigned to that body part. The center of mass of the agent is finally calculated as a weighted average of the body part centers, with the weights defined by the standard human weight distribution \cite{HumanBodyDynamics}, see Figure \ref{fig:weight-distribution}.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{weight-distribution.png}
\caption{Illustration of the weight distribution used in calculating the center of mass in the mocap and youtube datasets. The joints with solid color are used in both the mocap and youtube skeleton templates. The joints with a diagonal pattern are only used in the mocap skeleton template.}
\label{fig:weight-distribution}
\end{figure}
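This computation can be sketched as follows; the joint-to-part assignment and the mass fractions below are illustrative placeholders, not the exact values of \cite{HumanBodyDynamics}.
\begin{verbatim}
import numpy as np

def center_of_mass(joints, parts, weights):
    # joints: (J, 3) 3D joint locations; parts: dict mapping a body
    # part name to a list of joint indices; weights: dict of mass
    # fractions per part (should sum to 1).
    com = np.zeros(3)
    for name, idx in parts.items():
        com += weights[name] * joints[idx].mean(axis=0)
    return com

# Illustrative numbers only:
# parts = {"torso": [0, 1, 2, 3], "legs": [4, 5, 6, 7],
#          "arms": [8, 9, 10, 11]}
# weights = {"torso": 0.5, "legs": 0.35, "arms": 0.15}
\end{verbatim}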
\subsection{Recognition of intent in videos}
\label{ss:recognition-method}
The algorithm introduced in Section \ref{section:method-modeling} recognizes intentionality in each segment for a single agent; these segment labels have to be aggregated into the final intentionality label for the entire video.
For the samples in the intent-maya dataset, we know that each video contains either purely intentional or purely non-intentional actions. Thus, if the number of detected intentional segments is greater than the number of non-intentional segments, the video is intentional, and \textit{vice versa}. More formally, the final decision for the video, $I_v$, is defined as follows,
\begin{equation}
I_v = \left\{\begin{array}{ll}\mathrm{intentional} \; & \mathrm{if}\;\sum_{j = 1}^N \sum_{t=1}^{T_j} I_{C4,j}(t)>0\\
\mathrm{nonintentional} \; & \mathrm{otherwise}
\end{array}\right.
\end{equation}
where $I_{C4,j}(t)$ denotes the result of C4 for the $j$-th agent in the video. $\sum_{t=1}^{T_j} I_{C4,j}(t)$ denotes the difference between the number of intentional and non-intentional segments for the $j$-th agent, which is calculated by summation since intentional actions are labeled as 1 and non-intentional as $-1$.
Unlike for the sphere-like agents in the intent-maya dataset, the non-intentional actions of human agents usually happen in the middle of intentional actions (e.g., a human slips in the middle of a walk, with the walking intentional but the slipping non-intentional). Thus, we recognize the action of the agent as non-intentional if the number of non-intentional segments is above a threshold (we set the threshold equal to 40 frames in our experiments). Otherwise, the action of the agent in the video is intentional.
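The two aggregation rules can be sketched as follows (single-agent version; for multi-agent maya videos the per-agent sums are added before taking the sign):
\begin{verbatim}
def video_label(I, agent="human", thresh=40):
    # I: per-frame labels in {1, -1} for one agent.
    if agent == "ball":              # intent-maya rule: sign of sum
        return "intentional" if I.sum() > 0 else "non-intentional"
    # human agents: non-intentional if enough frames are -1
    n_neg = (I == -1).sum()
    return "non-intentional" if n_neg > thresh else "intentional"
\end{verbatim}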
\subsection{Comparison between our algorithm and baseline methods}
We first compare our algorithm against six baseline methods: Linear Discriminant Analysis (LDA), Nearest Neighbor (NN), Kernel Subclass Discriminant Analysis (KSDA) \cite{you2011kernel}, a deep residual network (ResNet) \cite{he2016deep}, a recurrent neural network with Long Short-Term Memory modules (LSTM), and a recurrent neural network with an attention mechanism (LSTM+attention). For the latter two baselines (LSTM and LSTM+attention) we also test performance with RGB video as input rather than 3D trajectories, together with an additional baseline, R(2+1)D \cite{tran2018closer}. This collection of baselines represents a wide spectrum of methods, ranging from simple linear methods to modern deep learning based methods. The performance of these methods shows the level of difficulty of the problem of intent recognition.
\subsubsection{Implementation detail for LDA, NN and KSDA}
\label{sss:detail-baseline-classic}
For LDA, NN and KSDA, we first apply a 30-frame sliding window with a step size of 15 to the trajectory of each agent. We then use the Discrete Cosine Transform (DCT) to map each x, y, z component of the trajectory segment to a 10-dimensional space of DCT coefficients, which defines a 30-dimensional feature space. Samples extracted from different agents are pooled together to form the training set. During testing, classification is first conducted at the segment level and then the same thresholding method is used as in Section \ref{ss:recognition-method}. 10-fold cross-validation is used to partition the datasets into training and testing sets.
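The feature extraction for these baselines can be sketched as:
\begin{verbatim}
import numpy as np
from scipy.fftpack import dct

def dct_features(p, win=30, step=15, n_coef=10):
    # Slide a 30-frame window (step 15) over a (T, 3) trajectory
    # and keep the first 10 DCT coefficients of each axis,
    # yielding a 30-d feature per segment.
    feats = []
    for s in range(0, len(p) - win + 1, step):
        coef = dct(p[s:s + win], axis=0, norm='ortho')[:n_coef]
        feats.append(coef.T.ravel())   # (3, 10) -> 30-d vector
    return np.array(feats)
\end{verbatim}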
\subsubsection{Implementation Detail for ResNet}
\label{sss:detail-baseline-resnet}
For ResNet, instead of handcrafting the feature space, we directly input the 3D trajectory segment to the network and have the network learn the feature representation for classification. The same sliding window and cross-validation method is used as for LDA, NN and KSDA. We use ResNet-18 with modifications to the first convolutional layer and max-pooling layer to accommodate the input dimensionality of the 3D trajectory segment. The kernel size of the first convolutional layer is $7\times3$ with padding $=2$. For the first max-pooling layer, the kernel size $=3$, stride $=2\times1$ and padding $=1$. We use the Adam optimizer with learning rate $=.001$, $\beta_1=.9$ and $\beta_2=.999$. We train the network for 100 epochs with batch size $=128$. Similar to the testing procedure in Section \ref{sss:detail-baseline-classic}, for a given testing trajectory, the network is applied to the segments of the sample, giving a binary classification result for each segment. The final decision for the video is given by the majority vote over all the segment results of the testing trajectory.
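In PyTorch, the two modified layers could be set up as below (a sketch under the stated hyperparameters; the conv stride is our assumption, as it is not specified above):
\begin{verbatim}
import torch.nn as nn
from torchvision.models import resnet18

def trajectory_resnet():
    # ResNet-18 adapted to 1-channel (30 x 3) trajectory segments.
    net = resnet18(num_classes=2)
    net.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 3),
                          stride=1, padding=2)
    net.maxpool = nn.MaxPool2d(kernel_size=3, stride=(2, 1),
                               padding=1)
    return net
\end{verbatim}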
\subsubsection{Implementation Detail for LSTM and LSTM+attention}
For LSTM \cite{hochreiter1997long}, we input the entire 3D trajectory of the agent to the network instead of the 30-frame segments as in the previous baselines. This allows the LSTM baseline to learn to recognize video-wise intentionality from an entire trajectory, rather than using the simple hand-crafted rules for aggregating segment-wise inferences described in Section \ref{ss:recognition-method}. We use 10-dimensional hidden and cell states, initialized with zero vectors at the beginning of each sequence. At the last frame, the hidden state is fed to a 10-by-2 fully connected network with softmax. The network is optimized using Stochastic Gradient Descent (SGD) with learning rate $=.001$ and momentum $=.9$ to minimize the cross-entropy loss.
For LSTM+attention, we use the attention mechanism proposed in \cite{bahdanau2014neural}, which models temporal attention with a bi-directional LSTM with 10-dimensional hidden and cell states. We use the same LSTM model described above to model the trajectory dynamics, jointly optimized with the attention module using the cross-entropy loss. The network is also optimized using SGD with learning rate $=.001$ and momentum $=.9$.
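A minimal sketch of the plain LSTM baseline (without attention) is:
\begin{verbatim}
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    # LSTM over the (T, 3) center-of-mass trajectory; the last
    # hidden state feeds a 10-by-2 classifier (the softmax is
    # applied by the cross-entropy loss).
    def __init__(self, hidden=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden,
                            batch_first=True)
        self.fc = nn.Linear(hidden, 2)

    def forward(self, x):              # x: (batch, T, 3)
        _, (h, _) = self.lstm(x)       # h: (1, batch, hidden)
        return self.fc(h[-1])
\end{verbatim}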
\subsubsection{Implementation Detail for video based classification}
We also provide three additional baselines, LSTM+ResNet, LSTM+ResNet+attention and R(2+1)D \cite{tran2018closer}, with image sequences as input rather than the 3D trajectories of agents. These baselines provide insight into the effectiveness of the 3D trajectory as an input feature. The image sequence is sampled every 10 frames to reduce its total length. For each frame in this sequence, the RGB image within the bounding box of an agent is extracted and then resized to 224$\times$224. For both the LSTM+ResNet and LSTM+ResNet+attention baselines, a 512-dimensional feature is extracted after the average pooling layer, which is used as the input feature to the LSTM module. The hidden and cell states of the LSTM used in this experiment are both 512-d to accommodate the increase in dimensionality of the input features. A 512-by-2 fully-connected network is used to recognize whether the action of a given agent is intentional or not. Both ResNet18 and the LSTM (with the attention module) are jointly trained using SGD with learning rate $=.001$ and momentum $=.9$. R(2+1)D is trained using the Adam optimizer with learning rate $=.001$, $\beta_1=.9$ and $\beta_2=.999$.
\subsubsection{Quantitative Results}
\begin{table*}[h!]
\centering
\caption{Quantitative comparison between our algorithm and the baselines, measured by mean classification accuracy and standard error of the mean (in parentheses). 3D COM: 3D trajectory of the agent's center of mass}
\label{tab:baseline-compare}
\begin{tabular}{l l l l l}
\hline\noalign{\smallskip}
Methods & input & maya & mocap & youtube\\
\noalign{\smallskip}\hline\noalign{\smallskip}
LDA & 3D COM & 0.533 (0.060) & 0.755 (0.014) & 0.653 (0.017)\\
NN & 3D COM & 0.683 (0.052) & 0.805 (0.023) & 0.654 (0.014)\\
KSDA & 3D COM & 0.633 (0.048) & 0.795 (0.022) & 0.577 (0.013)\\
ResNet & 3D COM & 0.783 (0.058) & 0.760 (0.025) & 0.580 (0.019)\\
LSTM & 3D COM & 0.581 (0.232) & 0.835 (0.082) & 0.615 (0.057)\\
LSTM+attention & 3D COM & 0.504 (0.209) & 0.671 (0.155) & 0.505 (0.059)\\
\hline
LSTM+ResNet & RGB video & 0.550 (0.200) & - & 0.770 (0.036)\\
LSTM+ResNet+attention & RGB video & 0.517 (0.089) & - & 0.704 (0.079)\\
R(2+1)D & RGB video & 0.500 (0.091) & - & 0.609 (0.017)\\
\hline
\textbf{Ours} & 3D & \textbf{0.950} & \textbf{0.827} & \textbf{0.785}\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
Table \ref{tab:baseline-compare} shows the mean classification accuracy and its standard error for the baseline methods on the maya, mocap and youtube datasets. We use leave-one-pair-out cross-validation for the baseline experiments on the maya dataset, and 10-fold cross-validation for the mocap and youtube datasets. We did not compute a cross-validated mean accuracy (and hence no standard error) for our method since it does not need training; thus the entire dataset is used as the testing set without cross-validation. As shown in the table, the accuracy of our proposed algorithm is significantly higher than the accuracy of the baseline methods on the intent-maya dataset. On the intent-mocap and intent-youtube datasets our algorithm produces results comparable to the most accurate baseline methods. It is worth noting that our algorithm achieves these results without any supervision or training, in contrast to the baselines, which are all learning-based methods.
As demonstrated by the above experiments, the algorithm described in this paper is general and can be applied to any video of an action. To demonstrate this further, we applied our approach to a new dataset that appeared long after we had submitted the first version of this paper.\footnote{This experiment was added during the revision phase of this paper.} Thus, this serves as an independent test on data we had no access to during the design of our algorithm. The database in question is the Oops! database \cite{epstein2020oops}, which shows a number of unintentional actions collected from YouTube. We used our algorithm to identify these unintentional actions in the dataset. In this challenging task, our algorithm achieved 66.51\% accuracy.
One may wonder why our result is significantly better than all the 3D baselines on the youtube dataset but only comparable to the best baseline on the mocap dataset, since both are essentially the same type of data (3D human pose sequences). One possible explanation is related to highly noisy samples. In the intent-youtube dataset, the 3D trajectory of an agent is estimated from the 2D video rather than directly measured by sensors as in the mocap dataset. This estimation process introduces significantly higher noise into the youtube dataset due to the limitations of the off-the-shelf 3D human pose estimation algorithm. When a powerful non-linear data-driven learning algorithm (like ResNet, KSDA or LSTM) is used to learn the underlying pattern in this dataset, it is more likely that the algorithm will overfit to the noise, ending up with a higher testing error \cite{friedman2001elements}. Our algorithm, on the other hand, directly examines the kinematic features without training, thus avoiding this possible issue.
\begin{table}[h!]
\centering
\caption{Comparing leave-one-pair-out (LOPO) cross-validation versus 10-fold cross-validation on the intent-maya dataset using LSTM and LSTM+attention.}
\label{tab:cv-compare}
\begin{tabular}{l l l}
\hline\noalign{\smallskip}
cross-validation & LOPO & 10-fold\\
\noalign{\smallskip}\hline\noalign{\smallskip}
LSTM (3D) & 0.581 (0.232) & 0.482 (0.103)\\
LSTM+attention (3D) & 0.504 (0.209) & 0.365 (0.142)\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Table \ref{tab:cv-compare} also shows the disadvantages of supervised methods on a biased dataset, which is almost always the case. This is particularly clear when examining the results of LSTM (3D) and LSTM+attention (3D) on the intent-maya dataset using 10-fold cross-validation, where the classification accuracy is even below the chance level of 50\%. The reason for the low performance on the maya dataset is its careful design. The samples in the maya dataset are designed in intentional/non-intentional pairs. Within a pair, the background, objects, illumination and agents are all identical. The only difference is the kinematics of the agent, which is also designed to be as similar as possible while preserving the significant difference in the perception of intentionality. When we randomly partition the dataset for 10-fold cross-validation, some intentional (non-intentional) samples in the validation set might have their non-intentional (intentional) counterparts in the training set. Due to the data-driven nature of the supervised models, the similarity between the training and testing samples tends to bias the network towards the wrong decision. This is particularly true for models like LSTM and LSTM+attention, which are given a higher flexibility to learn not only the features, but also the rules for video-wise recognition of intent. Our algorithm, on the other hand, does not have this disadvantage due to its common-knowledge-based inference.
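Leave-one-pair-out cross-validation avoids exactly this leakage by holding out each pair as a whole. A minimal sketch, assuming the data comes as a list of pairs:
\begin{verbatim}
def leave_one_pair_out(pairs):
    # pairs: list of (intentional, nonintentional) sample tuples
    for i, test_pair in enumerate(pairs):
        train = [s for j, p in enumerate(pairs) if j != i for s in p]
        yield train, list(test_pair)
\end{verbatim}
Since a held-out sample's near-identical counterpart can never appear in the training set, the pairing bias described above is ruled out by construction.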
\subsection{Qualitative Results}
The results in the last section show that our algorithm achieves video-level classification accuracy higher than or comparable to the baseline methods. However, it is unknown whether our algorithm can return reasonable segment-level classifications. Imagine an agent trips while walking, with walking occupying a significant portion of the video. An algorithm can give the correct annotation (non-intentional) of the sequence even if the walking is labeled as non-intentional and the tripping as intentional. To examine this possible issue, we provide qualitative results of segment/frame-level intent recognition by our algorithm on the three datasets (see Figure \ref{fig:maya-result} for the intent-maya dataset, Figures \ref{fig:mocap-intent} and \ref{fig:mocap-nonintent} for intentional and non-intentional actions in the intent-mocap dataset, and Figure \ref{fig:youtube-result} for the intent-youtube dataset). These results show that our algorithm can correctly recognize the intent of the agent at both the video level and the segment level. For example, in the ``Tripping'' sequence, our algorithm correctly annotates that the action (walking) is intentional before the 70th frame and unintentional thereafter, accurately reflecting the moment that the agent trips.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{maya-result.png}
\caption{Qualitative results of our algorithm on the intent-maya dataset. The full model with all concepts is used. Blue (red) indicates that our algorithm recognizes the movement of the agent as intentional (non-intentional) at that specific time. The ground truth annotation is shown on the left of the figure.}
\label{fig:maya-result}
\end{figure*}
\subsection{Effect of keypoint occlusion}
Since our algorithm depends on the estimation of the agent's center of mass, it is necessary to study the robustness of our algorithm against keypoint occlusion in the pose estimation.
We design three experiments to simulate a variety of cases of occlusion of skeleton keypoints: 1. A random joint is always occluded across all samples, similar to the case where one of the sensors (or mocap markers) is defective; 2. A random joint is occluded per agent, which simulates the case where a keypoint is occluded consistently for an agent; 3. A random joint is occluded per frame, which simulates a highly noisy center-of-mass estimate. Typically, the occlusion of a specific keypoint occurs consecutively across several frames, during which the estimate of the agent's center of mass is biased. For our algorithm, a biased but smooth estimate of the center of mass is less problematic than a highly noisy estimate, which might produce an artificial increase of mechanical energy (due to the jittering movement). Thus, we occlude random joints per frame to simulate this highly noisy estimation of the center of mass, which tests our algorithm in a highly unfavorable setup. The keypoint occlusion experiment is only conducted on the intent-mocap and intent-youtube datasets, since the intent-maya dataset does not have keypoints defined for its ball agent.
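The following sketch illustrates the three protocols; the pose layout (frames, joints, 3) and the NaN-masked center-of-mass estimate are assumptions made for illustration.
\begin{verbatim}
import numpy as np

def occlude(pose, mode, rng, joint=None):
    # pose: (frames, joints, 3); occluded entries are set to NaN.
    pose = pose.copy()
    n_frames, n_joints, _ = pose.shape
    if mode in ("all_samples", "per_agent"):
        # "all_samples": pass the same joint index for every sample;
        # "per_agent": draw the joint once per agent.
        j = rng.integers(n_joints) if joint is None else joint
        pose[:, j] = np.nan
    elif mode == "per_frame":  # fresh joint each frame: noisy COM
        for t in range(n_frames):
            pose[t, rng.integers(n_joints)] = np.nan
    return pose

def center_of_mass(pose):
    return np.nanmean(pose, axis=1)  # mean over visible joints

rng = np.random.default_rng(0)
noisy = occlude(rng.random((100, 17, 3)), "per_frame", rng)
com = center_of_mass(noisy)          # (100, 3) COM trajectory
\end{verbatim}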
Table \ref{tab:keypoint-occlusion} shows the results of our algorithm on the intent-mocap and intent-youtube datasets with the three types of keypoint occlusion mentioned above. To provide a measure of uncertainty, we repeat the three experiments with randomly selected keypoints for occlusion and report the mean accuracy and its standard error. As we can see, when only one joint is consistently occluded, either across all samples or per agent, there is no significant negative impact on our algorithm. In the worst-case occlusion we designed for our algorithm, there is a drop in the classification accuracy due to the inaccurate estimation of the change in total mechanical energy induced by the noisy estimate of the center of mass, which is consistent with what we described earlier. Notice that we did not perform preprocessing to smooth the trajectory of the center of mass or infer the missing joints, which could be done to improve the performance.
\begin{table}[h!]
\centering
\caption{Quantitative results of our algorithm with occluded keypoints, measured by mean classification accuracy and standard error of the mean (in parentheses).}
\label{tab:keypoint-occlusion}
\begin{tabular}{l l l}
\hline\noalign{\smallskip}
Occlusion & intent-mocap & intent-youtube\\
\noalign{\smallskip}\hline\noalign{\smallskip}
None & 0.827 & 0.785 \\
1 joint all sample & 0.833 (0.008) & 0.769 (0.011)\\
1 joint per agent & 0.808 (0.006) & 0.767 (0.003)\\
1 joint per frame & 0.712 (0.021) & 0.624 (0.006)\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Ablation study}
The results in the previous sections show that our algorithm is effective at recognizing intentionality from the trajectory of the agents. However, it is still unknown whether all the common sense concepts we introduced in Section \ref{section:method-modeling} are necessary. We conduct an ablation study on the proposed algorithm to address this question. In this experiment, we start with a model including only Concept 1, and gradually add each concept until reaching the full model with all four concepts. When classifying with an ablated model, we directly apply the method described in Section \ref{ss:recognition-method} to $I_*$, the output of the model with the ablated concepts. For example, when only C1 is used, $I_*=I_{C1}$, with $I_{C1}$ defined in equation (\ref{eq:model-C1}). For the ablated model with C1+2+4, we calculate the output of the model using Algorithm \ref{alg:C4} but with $I_{C2}$ as input instead of $I_{C3}$.
\begin{table}[h!]
\caption{Quantitative results of our algorithm under ablation, measured by mean classification accuracy}
\label{tab:ablation}
\begin{tabular}{l l l l l l}
\hline\noalign{\smallskip}
dataset & 1 & 1+2 & 1+2+3 & 1+2+4 & 1+2+3+4 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
maya & 0.500 & 0.667 & 0.667 & 0.783 & 0.950\\
mocap & 0.500 & 0.571 & 0.534 & 0.737 & 0.827\\
youtube & 0.500 & 0.519 & 0.501 & 0.735 & 0.785\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Analysis of the ablation results}
The result of the ablation study is shown in Table \ref{tab:ablation}. When C1 is the only concept used in the model, the classification accuracy is no greater than random chance on all three datasets. With more common sense concepts included, the classification accuracy increases, indicating that the proposed common sense concepts are all necessary to achieve an accurate recognition. One may notice that the accuracy of C1+2+3 is no higher than that of C1+2 and may argue that C3 is not necessary for this reason. However, this argument is challenged by the fact that C1+2+4 is less accurate than C1+2+3+4, indicating that C3 is necessary when combined with C4 (rather than C2) to further improve the accuracy. As mentioned in Section \ref{ss:concept-intro}, the four concepts combine to form a common knowledge system for intent recognition.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{mocap-intent-revise.png}
\caption{Qualitative results of our algorithm on the intent-mocap dataset. All samples shown here contain intentional actions. The full model with all concepts is used. The colorbar indicates the intentionality judgement by our algorithm at each frame, blue for intentional and red for non-intentional. The number above the agent is the corresponding frame index in the sequence. The action name, shown in the top-left corner of each sequence, corresponds to the animation name in the mixamo dataset. We applied a median filter with window size 30 on $I_{C4}$ to increase the smoothness of the result.}
\label{fig:mocap-intent}
\end{figure*}
\begin{figure*}
\includegraphics[width=1.0\textwidth]{mocap-nonintent-revise.png}
\caption{Qualitative results of our algorithm on the intent-mocap dataset. All samples shown here contain non-intentional actions. The full model with all concepts is used. The same method used in Figure \ref{fig:mocap-intent} is applied to generate these images.}
\label{fig:mocap-nonintent}
\end{figure*}
\begin{figure*}
\includegraphics[width=1.0\textwidth]{youtube-result-revise-2.png}
\caption{Qualitative results of our algorithm on the intent-youtube dataset. Each sequence contains 10 frames uniformly sampled across time. The colorbar depicts the intentionality judgement by our algorithm at each frame. A median filter with window size 30 frames is also applied for visual presentation.}
\label{fig:youtube-result}
\end{figure*}
\section{Discussion and conclusion}
The results in the previous sections show that our algorithm achieves accuracy significantly superior or comparable to a range of learning-based methods on three datasets. The results also show the necessity of each of the common knowledge concepts in the proposed algorithm. By modeling the concepts that define intentionality, our algorithm needs no labeled data, nor is it a learning-based algorithm; it thus avoids the potential sampling bias of a training set. Since the algorithm does not need training and comprises only rules derived from human common sense, the method is less computationally demanding and more interpretable than most deep-learning-based algorithms. The performance on the three datasets also shows the general applicability of the proposed common-concept algorithm to different types of agents.
The results also show that the classification accuracy of our algorithm decreases from the maya to the mocap to the youtube dataset. One possible explanation is the lack of accuracy in the estimated center of mass of the agent in the mocap and youtube datasets. The intent-maya dataset contains only sphere-like agents, whose center-of-mass trajectories are readily available with high precision. However, in the mocap and youtube datasets, the center of mass has to be estimated from the skeleton of the agent, and this estimate is more accurate in the mocap dataset than in the youtube dataset (but both are worse than in the maya dataset), potentially explaining the drop in classification accuracy on the youtube dataset compared to the mocap dataset. One should also notice that the proposed algorithm does not claim nor implement any novelty in 3D human pose estimation, which by itself is a challenging and open problem.
\begin{table*}[h!]
\centering
\caption{Cases for intentionality of the interaction between A and B.}
\label{table:multi-agent-example}
\begin{tabular}{|c|c|c|c|p{7cm}|}
\hline
Case & A action & B action & A $\rightarrow$ B interaction & Example \\
\hline\hline
1 & intentional & intentional & intentional & A punches B in a boxing game. \\
\hline
2 & intentional & intentional & non-intentional & A is walking backward while B is walking normally; A bumps into B without noticing B. \\
\hline
3 & non-intentional & intentional & non-intentional & A is on a bike out of control and crashes into B. \\
\hline
4 & non-intentional & non-intentional & non-intentional & A crashes into B while both of them are ice-skating and cannot control their movements. \\
\hline
5 & intentional & non-intentional & intentional & A catches B as B trips over an obstacle. \\
\hline
\end{tabular}
\end{table*}
Another significant future direction of this study is to infer intentionality when the action involves multiple agents. When social interactions are involved, the inference of intentionality can become much more complex. Table \ref{table:multi-agent-example} provides a rudimentary anecdotal analysis of the potential cases of intentionality in a two-agent system without considering the environmental context. As shown in the table, the relationship is complex, but not lacking in rules. For example, if the action conducted by agent A is non-intentional, it is likely that the interaction from agent A to agent B is also non-intentional. However, the \textit{converse} may not hold. Thus, we argue that before tackling this complicated multi-agent problem, it is necessary to address that of a single agent first, which is what we do in the present work.
In conclusion, we proposed a common-knowledge-based unsupervised computer vision system for recognizing the intent of an agent, specifically whether the action of the agent is intentional or not. The problem is significant due to the essential role played by intent recognition in human social life. Any machine intended to work and live with humans might benefit from intent recognition to achieve smooth human-machine interaction. Recognition of intentionality (intentional vs. unintentional) is a first step towards this goal. To our knowledge, ours is the first common-knowledge-based computer vision algorithm for the recognition of intentionality. Compared to modern computer vision and pattern recognition systems, the majority of which are data-driven learning methods requiring a large amount of training data, our system achieves this high-level vision task without the need for training data, yet attains higher or comparable results to the baselines on multiple datasets. The effectiveness of our algorithm not only provides a potential way to address the problem of automatic visual recognition of intent, but also demonstrates that high-level reasoning can be performed without training data by leveraging human commonsense concepts.
\begin{acknowledgements}
This research was supported by the National Institutes of Health (NIH), grants R01-DC-014498 and R01-EY-020834, the Human Frontier Science Program (HFSP), grant RGP0036/2016, and a grant from Ohio State's Center for Cognitive and Brain Sciences.
\end{acknowledgements}
\bibliographystyle{spmpsci}
\section*{Introduction}\blfootnote{The author has benefited from a guest position at the Max Planck Institute for Mathematics in Bonn. Her travel was also partially supported by her advisor V. Turchin's Simon Foundation grant, award ID: 519474.}
Let $Emb(S^n,S^{n+q})$ be the space of smooth embeddings $S^n\hookrightarrow S^{n+q}$ and $Imm(S^n,S^{n+q})$ be the space of smooth immersions $S^n\looparrowright S^{n+q}$. We define the space of \textit{spherical embeddings modulo immersions} $\overline{Emb}(S^n,S^{n+q})$ as the homotopy fiber of $Emb(S^n,S^{n+q})\hookrightarrow Imm(S^n,S^{n+q})$ over the trivial inclusion $id:S^n\subset S^{n+q}$. An element in this space is represented by a pair $(f,\alpha)$, where $f:S^n\hookrightarrow S^{n+q}$ is a smooth embedding and $\alpha:[0,1]\rightarrow Imm(S^n,S^{n+q})$ is a regular homotopy between $f$ and the trivial inclusion $S^n\subset S^{n+q}$. Moreover, $Emb_{\partial}(D^n, D^{n+q})$ denotes the space of disk embeddings $D^n\hookrightarrow D^{n+q}$ with fixed behavior near the boundary, and we similarly define $\overline{Emb}_{\partial}(D^n,D^{n+q})$ as the space of disk embeddings modulo immersions. For framed spherical/disk embeddings, we consider the spaces
\sloppy
$Emb^{fr}(S^n,S^{n+q}),\overline{Emb}^{fr}(S^n,S^{n+q}),Emb_{\partial}^{fr}(D^n,D^{n+q})$ and $\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})$ in the same manner. Throughout the paper, for spherical embeddings we assume that the framing respects the natural orientation: if one takes the orientation of $S^n$ and completes it with the orientation of the normal bundle induced by the framing, one obtains the standard orientation of the ambient sphere $S^{n+q}$. For disk embeddings the framing is standard near the boundary. The spaces of embeddings modulo immersions have recently attracted a lot of attention \cite{vt,vt1,bw,vt2,sak,vt3}. The main objective of this paper is to revisit Haefliger's work \cite{hae} and apply it to compute $\pi_0$ of these spaces.\\
In \cite[Theorems 3.4 and 5.7]{hae}, Haefliger has shown that for $q\geq 3$, the group of isotopy classes of (framed) spherical embeddings of $S^n$ in $S^{n+q}$ can be represented in terms of the homotopy group of a triad i.e., \sloppy $C_n^q:=\pi_{0}Emb(S^n,S^{n+q})=\pi_{n+1}(SG;SO,SG_q)$ and $FC_n^q:=\pi_{0}Emb^{fr}(S^n,S^{n+q})=\tilde{\pi}_{n+1}(SG;SO,SG_q)$. We recall these homotopy groups and isomorphisms later in \S \ref{sec1}.\\
\noindent
\textbf{Main results.} Let $\overline{FC}{}_n^q$ denote the group of isotopy classes of \enquote{framed disked embeddings}, which we discuss in more detail in \S \ref{sec2}.
\begin{theo}\thlabel{mth1}
\textit{For} $q\geq 3$,\\
$\pi_{0}\overline{Emb}(S^n,S^{n+q})=\pi_{0}\overline{Emb}_{\partial}(D^n,D^{n+q})=\pi_{0}\overline{Emb}(S^n,\mathbb{R}^{n+q})
=\pi_{0}\overline{Emb}^{fr}(S^n,S^{n+q})=\pi_{0}\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})=\pi_{0}\overline{Emb}^{fr}(S^n,\mathbb{R}^{n+q})
=\overline{FC}{}_n^q=\pi_{n+1}(SG,SG_q).$
\end{theo}
\noindent
The result is an immediate corollary of Theorems \ref{mt} and \ref{cor}.\\
Alternatively, this result can be obtained using smoothing theory, as a consequence of \cite[\S 6]{hae1} and \cite[Theorem~1.1]{sak}. There is even a stronger result:
\begin{equation*}\pi_i\overline{Emb}_{\partial}(D^n,D^{n+q})= \pi_{n+i+1}(SG_{n+q},SG_q) \text{ for } i\leq2q-5,
\end{equation*} which follows from the work of Lashof \cite{las1}, Millett \cite[Theorem~2.3]{mill} and Sakai \cite[Theorem 1.1 and Remark 2.3]{sak}. Moreover, for $ i\leq q-3$, $ \pi_{n+i+1}(SG_{n+q},SG_q)=\pi_{n+i+1}(SG,SG_q)$, see Lemma~\ref{le}. However, our goal is to review Haefliger's construction and give a geometric meaning to $ \pi_{n+1}(SG,SG_q)$ i.e., $ \pi_{n+1}(SG,SG_q)=\overline{FC}{}_n^q$. \\
Another main result, which does not immediately follow from smoothing theory, is a geometric interpretation of the long exact sequences associated with the triad $(SG;SO,SG_q)$ considered by Haefliger \cite[\S 4.4 and \S 5.9]{hae}. Let $Im_n^q$ and $FIm_n^q$ denote the groups of regular homotopy classes of immersions $S^n\looparrowright S^{n+q}$ and framed immersions $S^n\looparrowright S^{n+q}$, respectively. It is natural to ask which (framed) spherical immersions can be realized as (framed) embeddings, or when two (framed) spherical embeddings are equivalent as (framed) immersions. Answers to these questions are encoded by the lower exact sequences in (\ref{*}) and (\ref{**}) of Theorem~\ref{mth2}, in which $\overline{FC}{}_n^q$ naturally fits.
\begin{theo}\thlabel{mth2}
\textit{For $q\geq 3$, the two long exact sequences of the triad $(SG;SO,SG_q)$ are isomorphic to the corresponding geometric long exact sequences:}
\small
\begin{equation}\label{*}
\xymatrix@C=0.8em{
\ar[r] & \pi_{n+1}(SG,SG_q) \ar[r] \ar@{=}[d] & \pi_{n+1}(SG;SO,SG_q) \ar[r] \ar@{=}[d] & \pi_n(SO,SO_q) \ar[r] \ar@{=}[d] & \pi_n(SG,SG_q) \ar[r] \ar@{=}[d]&\\
\ar[r] & \overline{FC}{}_n^q \ar[r]& C_n^q \ar[r]& Im_n^q \ar[r] & \overline{FC}{}_{n-1}^q \ar[r] &
}
\end{equation}
\begin{equation}\label{**}
\xymatrix@C=1em{
\ar[r] & \pi_{n+1}(SG,SG_q) \ar[r] \ar@{=}[d] & \tilde{\pi}_{n+1}(SG;SO,SG_q) \ar[r] \ar@{=}[d] & \pi_n(SO) \ar[r] \ar@{=}[d] & \pi_n(SG,SG_q) \ar[r] \ar@{=}[d]&\\
\ar[r] & \overline{FC}{}_n^q \ar[r]& FC_n^q \ar[r]& FIm_n^q \ar[r] & \overline{FC}{}_{n-1}^q \ar[r] &
}
\end{equation}
\normalsize
\end{theo}
Note that the upper sequences in (\ref{*}) and (\ref{**}) are the long exact sequences of the homotopy groups of pairs $(SG/SG_q,SO/SO_q)$ and $(SG/SG_q, SO)$, respectively, see Remarks \ref{remar} and \ref{remark}.\\
The paper is organized as follows: we give a quick review of Haefliger's result \cite{hae} for (framed) spherical embeddings $S^n\hookrightarrow S^{n+q}$ in \S \ref{sec1}. In \S \ref{sec2}, we define the group $\overline{FC}_n^q$ and show that $\overline{FC}{}_n^q= \pi_{n+1}(SG,SG_q)$. We prove Theorems~\ref{mth1}~and~\ref{mth2} in \S \ref{sec3} and \S \ref{sec4}, respectively. We recall some computations and prove a few applications of Theorem~\ref{mth1} in \S \ref{sec5}. Throughout the paper we work in the smooth category and assume $q\geq 3$. \\
\noindent
\textbf{Terminology:}
Let $D^n$ be the standard unit disk in $\mathbb{R}^n$, and $\{e_1,..,e_n\}$ denote the natural basis of $\mathbb{R}^n$. Let $S^n=\partial D^{n+1}$ be the unit sphere such that $S^n=D_{-}^n\cup D_{+}^n$ with $D_{-}^n=\{x\in S^n | x_1\leq 0\}$ and $D_{+}^n=\{x\in S^n | x_1\geq 0\}$. According to Haefliger \cite{hae}, the \textit{suspension} of a map $f:D^n\rightarrow D^n$ is the map $S(f):D^{n+1}\rightarrow D^{n+1}$ sending the arc of circle going from $e_{n+1}$ through $x\in D^n$ to $-e_{n+1}$ onto the arc of circle from $e_{n+1}$ through $f(x)$ to $-e_{n+1}$. The suspension $S^{n+1}\rightarrow S^{n+1}$ of a map $S^n\rightarrow S^n$ is defined in the same way.\\
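For instance, in the sphere case this can be written explicitly: away from the poles $\pm e_{n+2}$, every point of $S^{n+1}$ is uniquely of the form $\cos\theta\, y+\sin\theta\, e_{n+2}$ with $y\in S^n$ and $\theta\in (-\frac{\pi}{2},\frac{\pi}{2})$, and the suspension of $g:S^n\rightarrow S^n$ is given by
$$S(g)(\cos\theta\, y+\sin\theta\, e_{n+2})=\cos\theta\, g(y)+\sin\theta\, e_{n+2},$$
so the great-circle arc through $y$ is sent to the one through $g(y)$.\\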
Abusing terminology, the suspension of an embedding $S^n\xhookrightarrow{f} S^{n+q}$ is the composition $S^n\xhookrightarrow{f} S^{n+q}\subset S^{n+q+1}$. For the suspension of a framed embedding $S^n\hookrightarrow S^{n+q}$, the framing is completed by adding the standard vector $e_{n+q+2}$ as the last vector. We often say suspension for an iterated suspension defined inductively. For example, when we say $S^n\hookrightarrow S^{n+N}$ is the suspension of a framed embedding $S^n\xhookrightarrow{f} S^{n+q}$ for $N>q$, we mean that it is defined as the composition $S^n\xhookrightarrow{f} S^{n+q}\subset S^{n+N}$ and the framing is obtained by adding vectors $\{e_{n+q+2}, \ldots, e_{n+N+1}\}$ to the initial framing. We define the suspension of a (framed) disk embedding similarly. \\
\noindent
\textbf{Acknowledgment:}
The author would like to thank her advisor Victor Turchin for his time and helpful feedback on innumerable drafts of this paper.
\section{Embeddings of $S^n$ in $S^{n+q}$}\label{sec1}
Haefliger \cite{hae} proved that the group of concordance classes of embeddings of $S^n$ in $S^{n+q}$ is isomorphic to $\pi_{n+1}(SG;SO,SG_q)$ for $q\geq 3$.
\subsection{The group $C_n^q$}
\begin{center}
$C_n^q:=\{$concordance classes of smooth embeddings $S^n\hookrightarrow S^{n+q}\}.$
\end{center}
\begin{theorem} \cite[Theorem 1.2]{hae}.
\textit{Two concordant embeddings of $S^n$ in $S^{n+q}$ are isotopic when $q\geq3$, i.e. $C_n^q=\pi_{0}Emb(S^n,S^{n+q})$, the set of connected components of the space of embeddings of $S^n$ in $S^{n+q}$.}
\end{theorem}
Furthermore, the equality $\pi_{0}Emb(S^n,S^{n+q})= \pi_{0}Emb_{\partial}(D^n, D^{n+q})$ endows $C_n^q$ with an additive multiplication, and the existence of inverses is guaranteed, as we consider concordance classes. Hence, $C_n^q$ is an abelian group.
\begin{lemma}\cite[\S 1]{hae}.\thlabel{slice}
\textit{An embedding $S^n\hookrightarrow S^{n+q}$ is concordant to the trivial one if and only if it is slice, in other words, if it can be extended to an embedding $D^{n+1}\hookrightarrow D^{n+q+1}$.}
\end{lemma}
\subsection{The group $\pi_{n+1}(SG;SO,SG_q)$}\label{triad}
Let $SG_q$ be the space of degree one maps $S^{q-1}\rightarrow S^{q-1}$, $SG=\cup SG_q$ under suspension, and $SO=\cup SO_q$, where $SO_q$ is the special orthogonal group.\\
An element in $\pi_{n+1}(SG;SO,SG_q)$ is represented by a continuous based map $\phi:D^{n+1}\rightarrow SG$ i.e., for $x\in D^{n+1}$, $\phi(x):S^{N-1}\rightarrow S^{N-1}$, for some large $N$, such that $\phi(D_{-}^n)\subset SO_N$ and $\phi(D_{+}^n)\subset SG_q$. Note that the equator $S^{n-1}=\partial D_{-}^n=\partial D_{+}^n$ goes to $SO\cap SG_q=SO_q$, and $\phi(*)=id$ for the base-point $*=e_2\in S^{n-1}$.\footnote{Haefliger in \cite{hae} does not consider the base-point condition, but it is immediate that adding it yields the same homotopy group, since $SO_q=SO\cap SG_q$ is connected.} Abusing notation, we also view $\phi$ as a map $\phi:D^{n+1}\times S^{N-1}\rightarrow S^{N-1}$, and sometimes for $\phi(x)$ we write $\phi_x=\phi(x,-):S^{N-1}\rightarrow S^{N-1}$.\\
Two such maps $\phi:D^{n+1}\times S^{N-1}\rightarrow S^{N-1}$ and $\phi':D^{n+1}\times S^{N'-1}\rightarrow S^{N'-1}$ represent the same element in $\pi_{n+1}(SG;SO,SG_q)$ if there is a homotopy $\phi_t:D^{n+1}\times S^{M-1}\rightarrow S^{M-1}$ for some $M\geq N, N'$ and $t\in [0,1]$, satisfying the above conditions and such that for any $x\in D^{n+1}$, the maps $\phi_0(x,-),\phi_1(x,-):S^{M-1}\rightarrow S^{M-1}$ are suspensions of $\phi(x,-)$ and $\phi'(x,-)$, respectively. The product operation of any two elements in $\pi_{n+1}(SG;SO,SG_q)$ is defined point-wise.
\setbox\tempbox=\hbox{\begin{tikzcd}[scale cd=0.8]
BSO_q \arrow[r] \arrow[d]& BSG_q \arrow[d] \\
BSO \arrow[r] & BSG \text{ .}\end{tikzcd}}
\begin{remark}\label{remar} Recall that the upper long exact sequence in (\ref{*}) is the long exact sequence of the pair $(SG/SG_q,SO/SO_q)$. Indeed, Milgram \cite[\S 1]{mil} interpreted the group $\pi_{n+1}(SG;SO,SG_q)$ as $\pi_n$\Big(hofib$(SO/SO_q\rightarrow SG/SG_q)\simeq$ hofib$(SG_q/SO_q\rightarrow SG/SO)$\Big).\footnote{The spaces are equivalent because they describe the total homotopy fiber of the square \[\box\tempbox\]} One way to see this interpretation is that the group $\pi_{n+1}(SG;SO,SG_q)$ is obviously isomorphic to $\pi_{n}(SO\times_{SG}^{h} SG_q,SO_q)$, where $SO\times_{SG}^{h} SG_q$ is the homotopy pullback of $SO\rightarrow SG\leftarrow SG_q$.
\end{remark}
\subsection{The isomorphism $\psi:C_n^q\rightarrow \pi_{n+1}(SG;SO,SG_q)$}\label{sub}
Although these two groups look completely different, there is a natural map between them. To see the relation, Haefliger considers representatives in $C_n^q$ to be framed embeddings of $D^{n+1}$ with different boundary conditions on $D^n_{-}\subset S^n=\partial D^{n+1}$ and $D^n_{+}\subset S^n=\partial D^{n+1}$. Framing will be crucial to relate such embeddings to $\pi_{n+1}(SG;SO,SG_q)$ by means of a Pontryagin-Thom type construction \cite[\S 3]{hae}.\\
Given an embedding $f:S^n\hookrightarrow S^{n+q}$, we say $f$ is a \textbf{\textit{special embedding}} if $f|_{D_{-}^{n}}=id$ and $f$(int $D_{+}^n)\subset$ int $D_{+}^{n+q}$.
We can always extend $f:S^n\hookrightarrow S^{n+q}$ to a disk embedding $\bar{f}:D^{n+1}\hookrightarrow D^{n+N+1}$, for some $N$ large enough (in fact $N> n+2$). We refer to the obtained pair $(f,\bar{f}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ as a \textbf{\textit{disked embedding}}. Any element in $C_n^q$ can be represented by a special disked embedding $(f,\bar{f})$ together with some framing on $\bar{f}$ defined as follows:
\begin{itemize}
\item Fix the base-point $*=e_2\in S^{n-1}=D_{-}^n\cap D_{+}^n$ and endow it with the framing $\{e_{n+2},\ldots,e_{n+q+1}\}$.
\item Extend the framing from $*=e_2$ to $D_{+}^n$ inside $D_{+}^{n+q}$. Since $*\hookrightarrow D^n_{+}$ is a homotopy equivalence, this extension is unique up to homotopy. Take the suspension of this framing in $D^{n+N+1}$ by adding $\{e_{n+q+2},\ldots,e_{n+N+1}\}$ as last vectors.
\item Extend the obtained framing from $D_{+}^n$ to the entire disk $D^{n+1}$ inside~$D^{n+N+1}$. Again this framing is defined uniquely up to homotopy.\\
\end{itemize}
Note that even though the knot $f$ is trivial on $D_{-}^n$, the extended framing can be non-trivial. Moreover, the framing on $\bar{f}|_{D_{-}^n}$ inside $D^{n+N+1}$ might not be a suspension, while the framing on $\bar{f}|_{D^n_{+}}$ inside $D^{n+N+1}$ is the suspension of a framing inside $D_{+}^{n+q}$. We refer to this boundary condition on the framing defined on $\bar{f}$ as \textbf{\textit{Type~I}} (in sections \ref{subsec1} and \ref{sec2}, we will also consider framings with Type~II and Type~III boundary conditions). Hence, any embedding $f:S^n\hookrightarrow S^{n+q}$ representing an element in $C^q_n$ can be considered as a \textbf{\textit{special disked embedding $(f,\bar{f}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ with Type~I framing}}, i.e., $\bar{f}|_{S^n=\partial D^{n+1}}=f$ is a special knot, and the framing on $\bar{f}$ has the boundary condition defined as above.\\
Any two special disked embeddings $(f_0,\bar{f}_{0}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N_{0}+1})$ and $(f_1,\bar{f}_{1}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N_{1}+1})$ with Type~I framing are \textit{concordant} if there exists an embedding $F:D^{n+1}\times [0,1]\hookrightarrow D^{n+N+1}\times [0,1]$ for $N\geq max\{N_{0},N_{1}\}$, such that $F|_{D^{n+1}\times i}=\bar{f}_i$ for $i=0,1$, $F|_{D_{-}^n\times [0,1]}=id$ and $F|_{D_{+}^n\times [0,1]}\subset D_{+}^{n+q}\times [0,1]$. Furthermore, the framing on $F|_{D_{+}^n\times [0,1]}$ and $F|_{D^{n+1}\times i}$, $i=0,1$, is given by suspension of a framing inside $D_{+}^{n+q}\times [0,1]$ and $D^{n+N_{i}+1}\times i$, respectively. Similarly, we define the isotopy relation to be a level-preserving concordance. All the following groups are isomorphic for $q\geq 3$:
\begin{center}
$C^q_n=$\{concordance/isotopy classes of embeddings $S^n\hookrightarrow S^{n+q}$\}\\
$\updownarrow$\\
\{concordance/isotopy classes of special embeddings $S^n\hookrightarrow S^{n+q}$\}\\
$\updownarrow$\\
\{concordance/isotopy classes of special disked embeddings $(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$\}\\
$\updownarrow$\\
\{concordance/isotopy classes of special disked embeddings $(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ with Type~I framing\}\\
\end{center}
\vspace{0.2cm}
Furthermore, as a consequence of the tubular neighborhood theorem, one can choose a representative $f$ in $C_n^q$ such that $f(S^n)$ is contained in a subspace of $S^{n+q}$ which can be identified with $S^n\times D^q$. Thus, we can consider a special knot to be $f:S^n\hookrightarrow S^n\times D^q$ such that $f|_{D_{-}^n}$ is the natural inclusion $D_{-}^n\hookrightarrow D_{-}^n\times 0$ and $f($int $D_{+}^n)\subset$ int$(D_{+}^n\times D^q)$, together with a disk extension $\bar{f}:D^{n+1}\hookrightarrow D^{n+1}\times D^N$ with a similarly defined framing of Type~I. The homomorphism $\psi: C_n^q\rightarrow \pi_{n+1}(SG;SO,SG_q)$ is then defined as follows.\\
\begin{theorem}\thlabel{th}
\textit{Given an element $\alpha\in C_n^q$ represented by a special disked embedding $(f,\bar{f}):(S^n, D^{n+1})\hookrightarrow (S^n\times D^q, D^{n+1}\times D^N)$ with Type~I framing, a map $\phi:D^{n+1}\times S^{N-1}\rightarrow S^{N-1}$ represents $\psi(\alpha)\in \pi_{n+1}(SG;SO,SG_q)$ if there exists an extension $\bar{\phi}:D^{n+1}\times D^N\rightarrow D^N$ i.e., $\bar{\phi}|_{D^{n+1}\times S^{N-1}}=\phi$ such that:}
\begin{enumerate}[(i)]
\item \textit{$\bar{\phi}$ is regular on $0\in D^N$ and $\bar{\phi}^{-1}(0)=\bar{f}(D^{n+1})$ as framed submanifolds,}
\item \textit{$\bar{\phi}_x\in SO_N $ for $x\in D_{-}^n$,}
\item \textit{$\bar{\phi}_x$ is the suspension of a map $D^q\rightarrow D^q$ for $x\in D_{+}^n$.}
\end{enumerate}
\textit{The homomorphism $\psi:C_n^q\rightarrow \pi_{n+1}(SG;SO,SG_q)$ is well defined \cite[Theorem~2.3]{hae} and is an isomorphism for $q\geq 3$ \cite[Theorem~3.4]{hae}.} \\
\end{theorem}
In the proof of well-definedness of $\psi$ \cite[Theorem 2.3]{hae}, Haefliger shows the existence of such a map $\bar{\phi}$ as follows. Define $\bar{\phi}_{-}:D^n_{-}\times D^N\rightarrow D^N$ uniquely as a linear map such that $(\bar{\phi}_{-})_x\in SO_N$ for $x\in D^n_{-}$ and $\bar{\phi}^{-1}_{-}(0)=f(D^n_{-})$, as framed submanifolds. Using obstruction theory \cite[Lemma~2.4]{hae}, the restriction $\bar{\phi}_{-}|_{S^{n-1}\times D^q}$ can be extended to $\bar{\phi}_{-}|_{D^n_{+}\times D^q}$ with the given framing on $f(D^n_{+})$. Define $\bar{\phi}_{+}:D^n_{+}\times D^N\rightarrow D^N$ to be the $(N-q)$-fold suspension of $\bar{\phi}_{-}:{D^n_{+}\times D^q}\rightarrow D^q$. By using \cite[Lemma~2.4]{hae} again, we extend $\bar{\phi}_{-}\cup \bar{\phi}_{+}:S^n\times D^N\rightarrow D^N$ to a map $\bar{\phi}:D^{n+1}\times D^N\rightarrow D^N$ verifying (i)-(iii) above. To show that $\psi$ is well defined, he uses the same argument invoking \cite[Lemma~2.4]{hae} twice to construct a homotopy between two maps $\phi_0,\phi_1:D^{n+1}\times S^{N-1}\rightarrow S^{N-1}$ corresponding to two concordant embeddings $(f_0,\bar{f_0}),(f_1,\bar{f_1}):(S^n, D^{n+1})\hookrightarrow (S^n\times D^q, D^{n+1}\times D^N)$.\\
To prove the isomorphism \cite[Theorem 3.4]{hae}, Haefliger interprets the group $\pi_{n+1}(SG;SO,SG_q)$ in terms of cobordisms (we refer to this as a Pontryagin-Thom type construction). An element of $\pi_{n+1}(SG;SO,SG_q)$ represented by a map $\phi: D^{n+1}\times S^{N-1}\rightarrow S^{N-1}$ as in subsection \ref{triad}, which is regular on $e_1$, corresponds to a framed $(n+1)$-submanifold $V=\phi^{-1}(e_1)\subset D^{n+1}\times S^{N-1}$ with two parts of boundary:
\begin{itemize}
\item $V\cap (D^n_{-}\times S^{N-1})$ is the graph of some map $g:D^n_{-}\rightarrow S^{N-1}$ with the framing at points $(x,g(x))$ lying inside $x\times S^{N-1}$ and orthonormal. Indeed, for $x\in D^n_{-}$, the map $\phi_x:S^{N-1}\rightarrow S^{N-1}$ is linear and therefore the preimage of $e_1$ is just a point.
\item $V\cap (D^n_{+}\times S^{N-1})$ is the suspension of a framed submanifold in $D^n_{+}\times S^{q-1}$, since for any $x\in D^n_{+}$, the map $\phi_x:S^{N-1}\rightarrow S^{N-1}$ is the suspension of a map $S^{q-1}\rightarrow S^{q-1}$.
\end{itemize}
Thus, $\pi_{n+1}(SG;SO,SG_q)$ can be described as the group of cobordisms of framed $(n+1)$-manifolds with such boundary conditions.\\
He then considers $\bar{\phi}$ which exists by \cite[Theorem~2.3]{hae}. Note that $\bar{\phi}:D^{n+1}\times D^N\rightarrow D^N$ can always be slightly changed so that $\bar{\phi}^{-1}(\partial D^N)\subset D^{n+1}\times \partial D^N$. The preimage $\bar{\phi}^{-1}(I)\subset D^{n+1}\times D^N$ of the segment $I$ joining $0$ and $e_1$ in $D^N$ is a framed $(n+2)$-manifold $W$ with corners ($\bar{\phi}$ is chosen to be transversal to $I$). In particular, $\partial W$ has the following strata:
\begin{itemize}
\item a free face given by the framed disk $\bar{f}(D^{n+1})=\bar{\phi}^{-1}(0)$
\item $\partial W\cap (D^{n+1}\times S^{N-1})=\bar{\phi}^{-1}(e_1)=V$
\item $\partial W\cap (D^n_{-}\times D^N)$ is the radial extension of $V\cap (D^n_{-}\times S^{N-1})$
\item $\partial W\cap (D^n_{+}\times D^N)$ is the $(N-q)$-fold suspension of a framed submanifold in $D^n_{+}\times D^q$.
\end{itemize}
As a result, he restates the homomorphism defined in Theorem~\ref{th} as follows. Given an element $\alpha\in C_n^q$ represented by a special disked embedding $(f,\bar{f}):(S^n, D^{n+1})\hookrightarrow (S^n\times D^q, D^{n+1}\times D^N)$ with Type~I framing, a framed submanifold $V\subset D^{n+1}\times S^{N-1}$ as defined above represents $\psi(\alpha)\in \pi_{n+1}(SG;SO,SG_q)$ if there exists a framed submanifold $W\subset D^{n+1}\times D^N$ with the boundary strata as given above.
According to \cite[Argument 3.5]{hae}, to show surjectivity he applies surgery to construct $W$ satisfying $\partial W\cap (D^{n+1}\times S^{N-1})=V$ for a given $V$. For injectivity, he shows that if $[(f,\bar{f})]$ maps to the trivial element $[V]$ of $\pi_{n+1}(SG;SO,SG_q)$, then the corresponding $W$ can be modified using surgery so that it is embedded in $D^{n+1}\times D^q$. In particular, the free face $\bar{f}(D^{n+1})$ of $W$ is inside $D^{n+1}\times D^q\cong D^{n+q+1}$, and therefore the corresponding $f=\bar{f}|_{\partial D^{n+1}}$ is slice, i.e., concordant to the trivial embedding of $S^n$ in $S^{n+q}$, by Lemma~\ref{slice}.
\subsection{Framed embeddings of $S^n$ in $S^{n+q}$}\label{subsec1}
Let us recall that we always consider framed embeddings with a framing preserving the natural orientation. For $q\geq 3$, Haefliger expressed the group $FC_n^q$ of concordance classes of framed embeddings of $S^n$ in $S^{n+q}$ as $\tilde{\pi}_{n+1}(SG;SO,SG_q)$. An element in $\tilde{\pi}_{n+1}(SG;SO,SG_q)$ is represented by a continuous map $\phi:D^{n+1}\rightarrow SG$ i.e., for $x\in D^{n+1}$, $\phi(x):S^{N-1}\rightarrow S^{N-1}$, for some large $N$, such that $\phi(D_{-}^n)\subset SO$, $\phi(D_{+}^n)\subset SG_q$ and $\phi(\partial D_{-}^n=\partial D_{+}^n)=~id$. Again, abusing notation we also view $\phi$ as a map $\phi:D^{n+1}\times S^{N-1}\rightarrow S^{N-1}$ and sometimes write $\phi_x$ for $\phi(x)$.
\begin{remark}\label{remark}It is easy to see that the group $\tilde{\pi}_{n+1}(SG;SO,SG_q)$ is isomorphic to $\pi_{n}\Big((SO\times_{SG}^{h} SG_q)\simeq$ hofib$(SO\rightarrow SG/SG_q)\Big)$. Moreover, the upper long exact sequence in (\ref{**}) is the long exact sequence of the pair $(SG/SG_q,SO)$.
\end{remark}
\begin{remark}\cite[\S 5.1]{hae}.
Two concordant framed embeddings of $S^n$ in $S^{n+q}$ are isotopic when $q\geq3$ and therefore, $FC_n^q=\pi_{0}Emb^{fr}(S^n,S^{n+q})=\pi_{0}Emb_{\partial}^{fr}(D^n, D^{n+q})$.
\end{remark}
\begin{lemma}\cite[\S 5]{hae}.\thlabel{slice2}
\textit{A framed embedding $S^n\hookrightarrow S^{n+q}$ is concordant to the trivial one if and only if it is slice i.e., if it can be extended to an embedding $D^{n+1}\hookrightarrow D^{n+q+1}$ along with the framing.}
\end{lemma}
\subsection*{The isomorphism $\tilde{\psi}:FC_n^q\rightarrow \tilde{\pi}_{n+1}(SG;SO,SG_q)$}
The natural map $\tilde{\psi}$ between the two groups is defined as in the \enquote{non-framed} case. Firstly, an element in $FC_n^q$ can be represented by a special framed knot $f:S^n\hookrightarrow S^{n+q}$ which is the natural inclusion on $D_{-}^n$ with trivial framing $\{e_{n+2},\ldots,e_{n+q+1}\}$, and $f($int $D_{+}^n)\subset$ int$(D_{+}^{n+q})$ with some non-trivial framing. Such a framed knot is assigned a special disked embedding $(f,\bar{f}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ along with a framing as follows. We extend $f:S^n\hookrightarrow S^{n+q}$ to a disk embedding $\bar{f}:D^{n+1}\hookrightarrow D^{n+N+1}$ for $N$ large enough. For the framing on $\bar{f}(D^{n+1})$, which is defined uniquely up to homotopy, we first suspend the framing on $D_{+}^n$ inside $D_{+}^{n+q}$ to a framing inside $D^{n+N+1}$ by adding vectors $\{e_{n+q+2},\ldots,e_{n+N+1}\}$. Then we extend the obtained framing to the entire disk $D^{n+1}$ inside $D^{n+N+1}$. Note that the framing on $\bar{f}|_{D_{-}^n}$ may now be non-trivial (and does not have to be a suspension), while the framing on $\bar{f}|_{D_{+}^n}$ is the suspension of the framing on $D^n_{+}$ inside~$D^{n+q}_{+}$. But we still obtain a trivial framing on the equator $S^{n-1}=D_{-}^n\cap D_{+}^n$. Such a boundary condition on the framing defined on $\bar{f}$ is referred to as \textbf{\textit{Type~II}}. Therefore, a representative in $FC_n^q$ can be considered to be a \textbf{\textit{special disked embedding $(f,\bar{f})$ with Type~II framing}}. For $q\geq 3$, the following groups are isomorphic:
\begin{center}
$FC^q_n=$\{concordance/isotopy classes of framed embeddings $S^n\hookrightarrow S^{n+q}$\}\\
$\updownarrow$\\
\{concordance/isotopy classes of special framed embeddings $S^n\hookrightarrow S^{n+q}$\}\\
$\updownarrow$\\
\{concordance/isotopy classes of special disked embeddings $(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ with Type~II framing\}\\
\end{center}
\vspace{0.2cm}
Using the tubular neighborhood theorem, we can transform any special framed knot $f:S^n\hookrightarrow S^{n+q}$ into $f:S^n\hookrightarrow S^n\times D^q$, with a framed disk extension $\bar{f}:D^{n+1}\hookrightarrow D^{n+1}\times D^N$. Thus, an element in $FC^q_n$ can be represented by a pair $(f,\bar{f}):(S^n,D^{n+1})\hookrightarrow (S^n\times D^q, D^{n+1}\times D^N)$ with a framing of Type~II. We define the homomorphism $\tilde{\psi}: FC_n^q\rightarrow \tilde{\pi}_{n+1}(SG;SO,SG_q)$ exactly as in Theorem~\ref{th} by adding to condition ii) that $\bar{\phi}_x=id$ when $x\in S^{n-1}$. For $q\geq 3$, $\tilde{\psi}$ is an isomorphism \cite[Theorem~5.7]{hae}. This result is stated without proof because the argument follows the same lines as in the \enquote{non-framed} case. Note that Lemma~\ref{slice2} is used in the proof of injectivity of $\tilde{\psi}$ in the same way as Lemma~\ref{slice} is necessary for injectivity of $\psi$.
\section{Framed disked embeddings}\label{sec2}
We now define a new group of concordance classes of special disked embeddings with a \textbf{\textit{Type~III}} framing. Namely, this time we require the framing to be trivial along $D_{-}^n$.
To be precise, we consider special disked embeddings $(f,\bar{f}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ where the framing on $\bar{f}$ comes with the following boundary condition: $\bar{f}|_{D_{-}^n}$ has trivial framing, while the framing on $\bar{f}|_{D_{+}^n}$ inside $D^{n+N+1}$ is obtained as the suspension of a framing inside~$D_{+}^{n+q}$. \\
\begin{multline*}
\overline{FC}{}_n^q:=\{ \text{concordance classes of special disked embeddings} \\(f,\bar{f}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1}) \text{ with Type~III framing}\}.
\end{multline*}
Note that since the codimension condition $q\geq 3$ is satisfied, concordance and isotopy relations coincide for special disked embeddings with all three boundary restrictions on framing.
\subsection{The group $\pi_{n+1}(SG,SG_q)$}
An element in $\pi_{n+1}(SG,SG_q)$ is represented by a continuous map $\phi:D^{n+1}\rightarrow SG$ such that $\phi|_{D_{-}^n}= id$ and $\phi(D_{+}^n)\subset SG_q$.\\
This representation is equivalent to the usual definition of a relative homotopy group i.e., $\pi_{n+1}(SG;*,SG_q)=\pi_{n+1}(SG,SG_q)$, since $D_{-}^n$ can be collapsed to get the base-point in the relative group.
\subsection{The isomorphism $\xi:\overline{FC}{}_n^q\rightarrow \pi_{n+1}(SG,SG_q)$}\label{sub3}
Following the same argument as in subsection \ref{sub}, when an element in $\overline{FC}{}_n^q$ is represented by a special disked embedding $(f,\bar{f}):(S^n,D^{n+1})\hookrightarrow (S^n\times D^q,D^{n+1}\times D^N)$ with Type~III framing, there is a natural homomorphism $\xi: \overline{FC}{}_n^q\rightarrow \pi_{n+1}(SG,SG_q)$ defined as in Theorem~\ref{th} by replacing condition ii) with $\bar{\phi}_x=id$ for $x\in D_{-}^n$. By Haefliger's surgery construction \cite[Argument~3.5]{hae} that proves \cite[Theorem~3.4]{hae}, we conclude:
\begin{theorem}\thlabel{mt}
\textit{The homomorphism $\xi:\overline{FC}{}_n^q\rightarrow \pi_{n+1}(SG,SG_q)$ is an isomorphism for $q\geq3$.}
\end{theorem}
The sliceness Lemma~\ref{slice3} is used to prove injectivity of $\xi$, similarly to the cases of $\psi$ and $\tilde{\psi}$. Note that Theorem~\ref{mt} can be deduced from the proof of Theorem~\ref{mth2} given in section \ref{sec4}. In particular, with $\psi$ and $\eta$ as isomorphisms in (\ref{five}), $\xi$ is also an isomorphism by the five lemma.\\
As a review, the following tables point out the main differences among the groups we discussed in the three cases. In terms of special disked embeddings with different boundary conditions on the framing of $\bar{f}$:\\
\begin{center}
\begin{tabular}{| l | r |}
\hline
$C_n^q$ & trivial framing at the base-point $*$ (Type I)\\
\hline
$FC_n^q$ & trivial framing at the equator $S^{n-1}$ (Type II) \\
\hline
$\overline{FC}{}_n^q$ & trivial framing at $D_{-}^n$ (Type III)\\ \hline
\end{tabular}
\end{center}
\vspace{0.2cm}
The corresponding homotopy groups differ as follows: \\
\begin{center}
\begin{tabular}{| l | r |}
\hline
$\pi_{n+1}(SG;SO,SG_q)$ & $\phi(*)=id$\\
\hline
$\tilde{\pi}_{n+1}(SG;SO,SG_q)$ & $\phi(S^{n-1})=id$\\
\hline
$\pi_{n+1}(SG,SG_q)$ & $\phi(D_{-}^n)=id$\\
\hline
\end{tabular}
\end{center}
\vspace{0.2cm}
\begin{remarkdf}\label{rkdf}
Consider a disked embedding $(f,\bar{f}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ which is not necessarily special, i.e., without a fixed behavior at $D_{-}^n$. Assume both $f$ and $\bar{f}$ are framed embeddings such that the framing on $\bar{f}(D^{n+1})$ inside $D^{n+N+1}$ is defined by extending the suspension of the framing of $f(S^n)\subset S^{n+q}$. We call such a pair $(f,\bar{f})$ a \textbf{\textit{framed disked embedding}}. The concordance classes of such embeddings are the same as those of special ones with Type~III framing representing elements in $\overline{FC}{}_{n}^q$. This is because, given any framed disked embedding, we can always isotope it so that near the base-point $*\in \partial D_{-}^n=S^{n-1}$ it is the identity inclusion with the trivial framing. Then we can reparametrize the sphere so that the small neighborhood of $*$ is $D_{-}^n$ and the rest is $D_{+}^n$. As a result, we get a special disked embedding $(f,\bar{f})$ with Type~III framing. Therefore, we can describe $\overline{FC}{}_n^q$ as the group of concordance classes of framed disked embeddings $(f,\bar{f})$.
\end{remarkdf}
Thus, all the groups $C_n^q$, $FC_n^q$ and $\overline{FC}{}_n^q$ can be described as groups of concordance classes of \enquote{non-special} embeddings:\\
\begin{center}
\begin{tabular}{| l | r |}
\hline
$C_n^q$ & embeddings $S^n\hookrightarrow S^{n+q}$\\
\hline
$FC_n^q$ & framed embeddings $S^n\hookrightarrow S^{n+q}$ \\
\hline
\raisebox{-0.8ex}{$\overline{FC}{}_n^q$} & \raisebox{-0.5ex}{framed disked embeddings} \\ &\raisebox{0.2ex} {$(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$}\\ \hline
\end{tabular}
\end{center}
Note that special disked embeddings with framing of Type~I or Type~II are not framed disked embeddings, because for the latter we require the framing on $\bar{f}$ to be the suspension on the entire boundary $\partial D^{n+1}=S^n$, see the definition above.
\subsection{Sliceness}
In this subsection, we study an interesting property of sliceness for framed disked embeddings representing elements in the group $\overline{FC}{}_n^q$.
\begin{definition}
A framed disked embedding $(f,\alpha): (S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ is \textbf{slice} if there exists a framed embedding $H:D^{n+2}\hookrightarrow D^{n+N'+2}$ where $N'\geq N$, such that $ H|_{(\partial_{-}D^{n+2}=D_{-}^{n+1})}=\alpha$ and $H|_{\partial_{+}D^{n+2}}$ is the suspension of a framed embedding inside $D^{n+q+1}$ i.e., $H(\partial_{+}D^{n+2})\subset D^{n+q+1}\subset \partial_{+}D^{n+N'+2}= D^{n+N+1}.$
\end{definition}
The trivial element in $\overline{FC}{}_n^q$ is given by the equivalence class of the trivial framed disked embedding $(id,id):(S^n,D^{n+1})\subset (S^{n+q},D^{n+N+1})$ i.e., the trivial pair with the trivial framing. \\
\begin{lemma}\thlabel{slice3}
\textit{A framed disked embedding $(f,\alpha): (S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ representing an element in $\overline{FC}{}_n^q$ is concordant to the trivial element $(id,id):(S^n,D^{n+1})\subset (S^{n+q},D^{n+N+1})$, if and only if $(f,\alpha)$ is slice.}
\end{lemma}
\begin{proof} Let $F:D^{n+1}\times [0,1]\hookrightarrow D^{n+N'+1}\times [0,1]$, where $N'\geq N$, be a concordance between $(f,\alpha)$ and $(id,id)$. Since at $t=1$ we have a trivial framing, we attach a half disk $\frac{1}{2}D^{n+N'+2}$ along the trivial embedding such that it extends $D^{n+1}$ to the disk $D^{n+2}$, see Figure \ref{fig1}. Since $F$ takes the boundary inside $S^{n+q}\times [0,1]$, attaching this half disk gives the sliceness of the framed knot $S^n\hookrightarrow S^{n+q}$ i.e., a framed extension $D^{n+1}\hookrightarrow D^{n+q+1}$. As a result, we get a framed embedding $H:D^{n+2}\hookrightarrow D^{n+N'+2}$, which on one part of $\partial D^{n+2}$ gives $\alpha$ and on the other, an embedding into $D^{n+q+1}$.
Therefore, $(f,\alpha)$ is slice. \\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{1}
\caption{Attaching a half disk $D^{n+N'+2}$ to $D^{n+N'+1}$ at $t=1$.}
\label{fig1}
\end{figure}
\noindent
\begin{figure}[htbp]
\centering
\includegraphics[width=0.15\textwidth]{3}
\caption{Removing a small half disk $D^{n+N'+2}$ going inside.}
\label{fig2}
\end{figure}
The converse is easy to prove by reversing the above argument. We remove a small half disk $\frac{1}{2}D^{n+N'+2}$ around a point in $D^{n+q+1}\subset D^{n+N'+2}$, see Figure~\ref{fig2}, such that the resulting space acts as a concordance between $(f,\alpha)$ and $(id,id)$.
\end{proof}
\section{Embeddings modulo immersions as special disked embeddings}\label{sec3}
Let $D^{n+\infty}:= \cup_{N} D^{n+N}$. By a smooth embedding $D^n\hookrightarrow D^{n+\infty}$, we mean an embedding $D^n\hookrightarrow D^{n+N}$ for some $N$ large enough.
Set
\sloppy
$$Emb_{\partial}^{fr}(D^n,D^{n+\infty}):=\bigcup_{N}Emb_{\partial}^{fr}(D^n,D^{n+N}),$$
$$Imm_{\partial}^{fr}(D^n,D^{n+\infty}):=\bigcup_{N}Imm_{\partial}^{fr}(D^n,D^{n+N}).$$
Similarly, we define $SDE_n^q$ to be the space of special disked embeddings $(f,\alpha):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+1+\infty})$ with Type~III framing.
By construction, $\pi_0SDE_n^q=\overline{FC}{}_n^q$.\\
We claim that the space $SDE_n^q$ has the same set of connected components as the space of embeddings modulo immersions i.e., $\overline{FC}{}_n^q=\pi_{0}\overline{Emb}_{\partial}(D^n,D^{n+q})$. First we prove the following lemma which gives different geometric interpretations of the group $\pi_0\overline{Emb}_{\partial}(D^n,D^{n+q})$.\\
\begin{lemma} \thlabel{lmm} \textit{For} $q\geq 3$,
\begin{multline}\label{eq1}
\pi_{0}\overline{Emb}(S^n,S^{n+q})=\pi_{0}\overline{Emb}_{\partial}(D^n,D^{n+q})=\pi_{0}\overline{Emb}(S^n,\mathbb{R}^{n+q}) \\ =\pi_{0}\overline{Emb}^{fr}(S^n,S^{n+q})=\pi_{0}\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})=\pi_{0}\overline{Emb}^{fr}(S^n,\mathbb{R}^{n+q}).
\end{multline}
\end{lemma}
\begin{proof} Let us first prove the \enquote{non-framed} case $$\pi_0\overline{Emb}(S^n,S^{n+q})= \pi_0\overline{Emb}(S^n,\mathbb{R}^{n+q}).$$
Consider the following diagram, where the horizontal lines are fiber sequences:\\
\begin{center}
\begin{tikzcd}
\overline{Emb}(S^n,\mathbb{R}^{n+q}) \arrow[r] \arrow[d] & Emb(S^n,\mathbb{R}^{n+q}) \arrow[r] \arrow[d] & Imm(S^n,\mathbb{R}^{n+q}) \arrow[d] \\
\overline{Emb}(S^n,S^{n+q})\arrow[r] & Emb(S^n,S^{n+q}) \arrow[r] & Imm(S^n,S^{n+q})
\end{tikzcd}
\end{center}
\vspace{0.2cm}
We view $\mathbb{R}^{n+q}=S^{n+q}-\infty$, where the \enquote{infinity} point is of codimension $n+q$. Given an embedding (resp. immersion) from $S^n$ to $S^{n+q}$, we can perturb it slightly so that it misses the \enquote{infinity} point, obtaining an embedding (resp. immersion) from $S^n$ to $\mathbb{R}^{n+q}$. Therefore, the second (resp. third) vertical map is surjective on the level of $\pi_0$. For injectivity, note that an isotopy (resp. regular homotopy) of an embedding (resp. immersion) of $S^n$ in $S^{n+q}$ is $(n+1)$-dimensional, while the \enquote{infinity} point has codimension $n+q$, so it can still miss the point given $q\geq 3$, and therefore the second (resp. third) vertical map is bijective on $\pi_0$. The same argument holds for $\pi_1$ because $q\geq 3$. Therefore, the second and third vertical maps induce isomorphisms on $\pi_0$ and $\pi_1$ when $q\geq 3$. By the five lemma, we get
$$\pi_0\overline{Emb}(S^n,S^{n+q})= \pi_0\overline{Emb}(S^n,\mathbb{R}^{n+q}).$$
It is proved in \cite[Theorem 1.1]{vt3} that $$\pi_0\overline{Emb}(S^n,\mathbb{R}^{n+q})=\pi_0\overline{Emb}_{\partial}(D^n,D^{n+q}).$$
Using the argument from \cite[Proposition 1.2]{song}, we have that the following natural projections are weak equivalences:
$$\overline{Emb}_\partial^{fr}(D^n,D^{n+q})\rightarrow \overline{Emb}_\partial(D^n,D^{n+q}),$$ $$\overline{Emb}^{fr}(S^n,\mathbb{R}^{n+q})\rightarrow \overline{Emb}(S^n,\mathbb{R}^{n+q}).$$
Similarly, one can show that $\overline{Emb}^{fr}(S^n,S^{n+q})\rightarrow \overline{Emb}(S^n,S^{n+q})$ is a weak equivalence. Thus, we get the different representations of $\pi_0\overline{Emb}_{\partial}(D^n,D^{n+q})$ listed in (\ref{eq1}).
\end{proof}
By Smale-Hirsch theory \cite{hir, sm}, we have $Imm_{\partial}^{fr}(D^n,D^{n+q})\simeq \Omega^n SO(n+q)$, and since we let the ambient dimension tend to infinity, we get $Imm_{\partial}^{fr}(D^n,D^{n+\infty})\simeq \Omega^n SO$. Note that $Emb_{\partial}^{fr}(D^n,D^{n+N})$ is an open and dense subset of $Imm_{\partial}^{fr}(D^n,D^{n+N})$ whose complement has codimension $N-n$. As $N$ gets large, the inclusion $Emb_{\partial}^{fr}(D^n,D^{n+N})\hookrightarrow Imm_{\partial}^{fr}(D^n,D^{n+N})$ becomes highly connected, and we get $Emb_{\partial}^{fr}(D^n,D^{n+\infty})\simeq Imm_{\partial}^{fr}(D^n,D^{n+\infty})\simeq \Omega^n SO$.\\
\begin{lemma}\thlabel{lm} \textit{For} $q\geq 3$,
$$\pi_{0}\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})=\pi_{0}\text{hofib}(Emb_{\partial}^{fr}(D^n,D^{n+q})\rightarrow Emb_{\partial}^{fr}(D^n,D^{n+\infty})).$$
\end{lemma}
\begin{proof}
By definition, $\pi_0\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})$ is equal to $\pi_0$hofib$(Emb_{\partial}^{fr}(D^n,D^{n+q})\rightarrow Imm_{\partial}^{fr}(D^n,D^{n+q})\simeq \Omega^n SO(n+q))$, which is isomorphic to $\pi_0$hofib$(Emb_{\partial}^{fr}(D^n,D^{n+q})\rightarrow \Omega^n SO)$ by the stability of the homotopy groups of $SO$: $$\pi_{i}SO(n+q)=\pi_{i}SO, \text{ \hspace{0.2cm} if } i\leq n+q-2.$$ Therefore, for $q\geq 3$, we have that $\pi_{0}\Omega^n SO(n+q)=\pi_{n} SO(n+q)=\pi_{n}SO=\pi_{0} \Omega^n SO$ and similarly $\pi_{1}\Omega^n SO(n+q)=\pi_{1} \Omega^n SO$. Since $ \Omega^n SO \simeq Emb_{\partial}^{fr}(D^n,D^{n+\infty})$, we get the result as a consequence of the five lemma.
\end{proof}
Thus, to any element $[(f,\alpha)]$ in $\pi_{0}\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})$ there corresponds an equivalence class of a pair $(\tilde{f},\tilde{\alpha})$, where $\tilde{f}:D^n\hookrightarrow D^{n+q}$ and $\tilde{\alpha}:[0,1]\rightarrow Emb_{\partial}^{fr}(D^n,D^{n+\infty})$, i.e., $\tilde{\alpha}: D^n\times [0,1]\hookrightarrow D^{n+N}\times [0,1]$ such that $\tilde{\alpha}|_{D^n\times 0}=id$ and $\tilde{\alpha}|_{D^n\times 1}=\tilde{f}$, together with a framing.\\
We consider $D^{n+1}\cong D^n\times [0,1]$ obtained by identifying $D_{+}^n$ with $D^n\times \{1\}$ and $D_{-}^n$ with $D^n\times \{0\}\cup S^{n-1}\times [0,1]$, and then smoothing the corners. We similarly identify $D^{n+N+1}\cong D^{n+N}\times [0,1]$, for some large $N$. Therefore, each pair $(\tilde{f},\tilde{\alpha})$ can be thought of as a special disked embedding with Type~III framing, i.e., a pair $(id \cup\tilde{f},\tilde{\alpha}):(S^n,D^{n+1})\hookrightarrow (S^{n+q},D^{n+N+1})$ such that $\tilde{\alpha}|_{D_{-}^n}$ is the trivial inclusion $id:D_{-}^n\hookrightarrow D_{-}^{n+q}$ with trivial framing, and $\tilde{\alpha}|_{D_{+}^n}$ is the framed embedding $\tilde{f}:D_{+}^n\hookrightarrow D_{+}^{n+q}$. In other words, one has a natural map
\begin{equation}\label{mu}
\mu:\text{ hofib}(Emb_{\partial}^{fr}(D^n,D^{n+q})\rightarrow Emb_{\partial}^{fr}(D^n,D^{n+\infty})) \longrightarrow SDE_n^q.\\
\end{equation}
On the level of $\pi_0$, we obtain $$\mu_*: \pi_0\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})\rightarrow \overline{FC}{}_n^q,$$ $$[(f,\alpha)]\mapsto [(id\cup\tilde{f},\tilde{\alpha})].$$
\begin{theorem}\thlabel{cor} \textit{For $q\geq 3$, $\mu_*$ is an isomorphism, and therefore}
$$ \pi_0\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})= \overline{FC}{}_n^q.$$
\end{theorem}
\begin{proof}
To show that $\mu_*$ is bijective, it suffices to show that hofib$(Emb_{\partial}^{fr}(D^n,D^{n+q})\rightarrow Emb_{\partial}^{fr}(D^n,D^{n+\infty}))$ and $SDE_n^q$ are weakly homotopy equivalent, after which Lemma~\ref{lm} yields the result.\\
Consider the following diagram, where the vertical lines are fiber sequences and the map $\mu$ is the one defined in (\ref{mu}) above.
\begin{center}
\begin{tikzcd}[column sep=-0.5em]
\Omega Emb_{\partial}^{fr}(D^n,D^{n+\infty}) \arrow[r] \arrow[d] & Emb_{\partial}^{fr}(D^{n+1},D^{n+1+\infty}) \arrow[d] \\
\text{hofib}(Emb_{\partial}^{fr}(D^n,D^{n+q})\rightarrow Emb_{\partial}^{fr}(D^n,D^{n+\infty})) \arrow[r, "\mu"] \arrow[d] & SDE_n^q \arrow[d] \\
Emb_{\partial}^{fr}(D^n,D^{n+q}) \arrow[r, equal] & Emb_{\partial}^{fr}(D^n,D^{n+q}) \\
\end{tikzcd}
\end{center}
Note that the top map is just the restriction to the fibers. Moreover, it is a homotopy equivalence since $Emb_{\partial}^{fr}(D^{n+1},D^{n+1+\infty})\simeq Imm_{\partial}^{fr}(D^{n+1},D^{n+1+\infty})\simeq \Omega^{n+1}SO\simeq \Omega \Omega^{n}SO\simeq \Omega Emb_{\partial}^{fr}(D^n,D^{n+\infty})$. Thus, the middle map $\mu$ is also a weak homotopy equivalence. By Lemma~\ref{lm}, we get $ \pi_0\overline{Emb}_{\partial}^{fr}(D^n,D^{n+q})= \overline{FC}{}_n^q$.
\end{proof}
Theorem \ref{mth1} now follows immediately by combining Lemma \ref{lmm} with Theorems~\ref{mt} and~\ref{cor}.
\section{Geometric interpretation of long exact sequences associated with the triad $(SG;SO,SG_q)$}\label{sec4}
In this section, we prove Theorem \ref{mth2} for the \enquote{non-framed} case, i.e., we show the isomorphism between the sequences in (\ref{*}). The proof for the framed case is similar.\\
Recall that $Im_n^q$ is the group of concordance (or equivalently regular homotopy) classes of immersions of $S^n$ in $S^{n+q}$. According to Haefliger \cite[\S 4]{hae1}, any representative in $Im_n^q$ is regularly homotopic to a special immersion, i.e., an immersion $f:S^n\looparrowright S^{n+q}$ such that $f|_{D_{-}^n}$ is the natural inclusion in $D_{-}^{n+q}$ and $f|_{D_{+}^n}$ is an immersion in $D_{+}^{n+q}$. We can extend this immersion to a disk immersion $\bar{f}:D^{n+1}\looparrowright D^{n+N+1}$ for $N$ large enough. Furthermore, we add a framing on $\bar{f}$ by first extending the framing from the base-point $*=e_2$ to $D_{+}^n$ inside $D_{+}^{n+q}$, and then extending this framing to $D^{n+1}$ inside $D^{n+N+1}$ after taking the suspension. In other words, we add a disk structure and Type~I framing in the same way as we did for special embeddings representing elements in $C_n^q$. Thus, any element in $Im_n^q$ can be represented by a special disked immersion $(f,\bar{f}):(S^n,D^{n+1})\looparrowright (S^{n+q},D^{n+N+1})$ with Type~I framing. \\
Haefliger \cite[\S 4.2]{hae} has shown that $Im_n^q$ is isomorphic to the homotopy group $\pi_n(SO,SO_q)$, where his map $\eta:Im_n^q\rightarrow \pi_n(SO,SO_q)$ is defined as follows: given a special disked immersion $(f,\bar{f}):(S^n,D^{n+1})\looparrowright (S^{n+q},D^{n+N+1})$ with Type I framing, one considers the trivialization of the normal bundle induced by the framing of $\bar{f}$. To each $x\in S^n$, one associates the $(N-q)$-frame $e_{n+q+2},..., e_{N+n+1}$ with respect to this trivialization. This $(N-q)$-frame defines a map $h_{f}:S^n\rightarrow V_{N,N-q}$ that represents a homotopy class $[h_f]$ in $\pi_n(V_{N,N-q})=\pi_n(SO,SO_q)$, where $V_{N,N-q}=SO_N/SO_{N-q}$ is the Stiefel manifold.
\begin{remark}\label{rmk}
Note that $h_{f}|_{D_{+}^n}$ is constantly equal to the identity inclusion $\mathbb{R}^{N-q}\subset \mathbb{R}^N$ (viewed as the base-point of $V_{N,N-q}$) because the framing on $D_{+}^n$ is given by suspension and the last $(N-q)$ vectors are $e_{n+q+2},..., e_{N+n+1}$. Hence, the class $[h_{f}]$ depends only on the framing at~$D_{-}^n$.
\end{remark}
Let us now describe the map $\theta$ appearing in the geometric long exact sequence:
\begin{equation}
\longrightarrow \overline{FC}{}_n^q\longrightarrow C_n^q\longrightarrow Im_n^q{\overset{\theta}{\longrightarrow}} \overline{FC}{}_{n-1}^q \longrightarrow
\end{equation}
\vspace{0.01cm}
Note that $Im_n^q=\pi_0Imm_{\partial}(D^n, D^{n+q})=\pi_n V_{n+q,n}=\pi_1Imm_{\partial}(D^{n-1}, D^{n+q-1})$. The natural map $\Omega Imm_{\partial}(D^{n-1},D^{n+q-1})\rightarrow \overline{Emb}_{\partial}(D^{n-1},D^{n+q-1})$ induces a map $Im_n^q=\pi_1Imm_{\partial}(D^{n-1},D^{n+q-1}) \rightarrow \pi_0\overline{Emb}_{\partial}(D^{n-1},D^{n+q-1})=\overline{FC}_{n-1}^q$. \\
We can also interpret $\theta:Im_n^q\rightarrow \overline{FC}{}_{n-1}^q$ in terms of disked embeddings/immersions as follows: given a special disked immersion $(f,\bar{f}):(S^n,D^{n+1})\looparrowright (S^{n+q}, D^{n+N+1})$ with Type~I framing representing an element in $Im_n^q$, we consider the restriction $f|_{S^{n-1}=D_{-}^n\cap D_{+}^n }=g=id: S^{n-1}\hookrightarrow S^{n+q-1}$, which is the natural inclusion. Moreover, we get the disk immersion $f|_{D_{+}^n}:D_{+}^{n}\looparrowright D_{+}^{n+q}$, which can be immersed inside a bigger disk $D^{n+N}_{+}$ by allowing more dimensions. As a result, we obtain a disk immersion $\bar{g}:=id\circ f|_{D^n_{+}}:D^n_{+}\looparrowright D^{n+N}_{+}$ with the framing restricted from $\bar{f}|_{D^n_{+}}$. Since $N$ is large enough, we can change the framed immersion $\bar{g}$ into a framed embedding $\bar{g}':D^{n}\hookrightarrow D^{n+N}$. The obtained pair $(g,\bar{g}'):(S^{n-1}, D^n)\hookrightarrow (S^{n+q-1},D^{n+N})$ is a disked embedding where the framing on $\bar{g}'|_{S^{n-1}}$ is given by the suspension of a framing inside $S^{n+q-1}$, i.e., $(g,\bar{g}')$ is a framed disked embedding. Therefore, given a special disked immersion $(f,\bar{f})$ with Type~I framing, we can assign a framed disked embedding $(g,\bar{g}')$ to it. Thus, we get a well-defined map from $Im_n^q$ to $\overline{FC}{}_{n-1}^q$. \\
The commutativity of the following diagram follows by an argument similar to the one in the proof of Theorem~\ref{cor}.
\begin{center}
\begin{tikzcd}
Im_n^q \arrow[r,"\theta"] & \overline{FC}{}_{n-1}^q \\
\pi_1Imm(D^{n-1},D^{n+q-1})\arrow[r] \arrow[u, "\simeq"] & \pi_0\overline{Emb}_{\partial}(D^{n-1},D^{n+q-1}) \arrow[u, "\simeq"]
\end{tikzcd}
\end{center}
\begin{remark}\label{rem}
Note that while defining $\theta$, instead of $(g,\bar{g})=(id,id\circ f|_{D^n_{+}}):(S^{n-1}, D^n)\looparrowright (S^{n+q-1},D^{n+N})$, we can consider the pair $(id,id \circ f|_{D^n_{-}})$, which is the same as $(id, id)$ since $f|_{D^n_{-}}=id$, with the framing restricted from $\bar{f}$ (such a framing need not be a suspension). In $\overline{FC}{}_{n-1}^q$, the representative $(g,\bar{g}')$ corresponding to $(g,\bar{g})=(id, id\circ f|_{D^n_{+}})$ is equivalent to the framed trivial disked embedding $(id,id)=(id,id\circ f|_{D^n_{-}})$ with the framing as on $\bar{f}|_{D^n_{-}}$, since $\bar{f}$ acts as a concordance between $id\circ f|_{D^n_{+}}$ and $id\circ f|_{D^n_{-}}$. To be precise, we take a perturbation $\bar{f}'$ of $\bar{f}$ which acts as a concordance. We will use this in the following proof to show the commutativity of the third square in (\ref{five}).\\
\end{remark}
\begin{proof}[\textbf{Proof of Theorem \ref{mth2}}] To prove the result, we need to show that each square in the following diagram commutes:
\small
\begin{equation}\label{five}
\begin{tikzcd}[column sep=tiny]
\pi_{n+1}(SG,SG_q) \arrow[r] & \pi_{n+1}(SG;SO,SG_q) \arrow[r] & \pi_n(SO,SO_q) \arrow[r] & \pi_n(SG,SG_q) \\
\overline{FC}{}_n^q \arrow[r] \arrow[u,"\xi", "\simeq" '] & C_n^q \arrow[r] \arrow[u, "\psi", "\simeq" '] & Im_n^q \arrow[r, "\theta"] \arrow[u, "\eta", "\simeq" '] & \overline{FC}{}_{n-1}^q \arrow[u, "\xi", "\simeq" ']
\end{tikzcd}
\end{equation}
\normalsize
For the first square, the map $\overline{FC}{}_n^q \rightarrow C_n^q$ is an inclusion on the level of representatives, i.e., a framed disked embedding representing an element in $\overline{FC}{}_n^q$ clearly represents an element in $C_n^q$. Therefore, the commutativity of this square is straightforward from the construction. \\
The commutativity of the second square is given by Haefliger \cite[\S 4.4]{hae} and is easy to see. The map $C_n^q\rightarrow Im_n^q$ is obvious since an embedding is also an immersion. We have seen that the vertical map $\eta$ on a given representative in $Im_n^q$ depends only on the behavior of the representative on $D_{-}^n$, see Remark~\ref{rmk}. Similarly, the top horizontal map $\pi_{n+1}(SG;SO,SG_q) \rightarrow \pi_n(SO,SO_q)$ is defined by restricting the representatives in $\pi_{n+1}(SG;SO,SG_q)$ to the half-disk $D_{-}^n$. \\
We now check the commutativity of the third square. Given an element $\alpha\in Im_n^q$ represented by a special disked immersion $(f,\bar{f}):(S^n,D^{n+1})\looparrowright (S^{n+q}, D^{n+N+1})$ with Type~I framing, by Remark~\ref{rem}, the corresponding element $\theta(\alpha)$ in $\overline{FC}{}_{n-1}^q$ can be represented by the framed trivial disked embedding $(id,id)=(id,id\circ f|_{D^n_{-}}):(S^{n-1},D^n)\hookrightarrow (S^{n-1}\times D^q, D^{n}\times D^N)$, with the framing obtained as a restriction $\bar{f}|_{D^n_{-}}$. Recall that on $S^{n-1}=D_{-}^n\cap D_{+}^n$, the framing is given by suspension of a framing inside $S^{n+q-1}$. We can homotope the obtained framing on $S^{n-1}$ so that it becomes trivial on $D^{n-1}_{-}$. Now, under the vertical map $\xi$, the image of $\theta(\alpha)$ is represented by a map $\phi:D^n\times S^{N-1}\rightarrow S^{N-1}$ with an extension $\bar{\phi}:D^n\times D^N\rightarrow D^N$ defined linearly by $(x,y)\mapsto r(x)(y)$, for some rotation $r$ given by the framing on $\bar{f}|_{D^n_{-}}$. More precisely, $r:D^n\rightarrow SO(N)$ is such that $r|_{\partial D^n=S^{n-1}}$ is a suspension of rotation in $SO(q)$ with $r=id$ on $D^{n-1}_{-}$, by construction. The map $\bar{\phi}$ satisfies the definition of $\xi$ (see subsection \ref{sub3}), since $\bar{\phi}^{-1}(0)=\bar{f}(D^n_{-})=D^n\times 0$, with $\bar{\phi}|_{S^{n-1}}\in SO(q)$ such that $\bar{\phi}_x=id$ for $x\in D^{n-1}_{-}$ and $\bar{\phi}_x$ is the suspension of a map $D^q\rightarrow D^q$ for any $x\in D^{n-1}_{+}$. Moreover, $\bar{\phi}$ also represents an element in $\pi_n(SO,SO_q)$ and is precisely the representative that we get for $\eta(\alpha)$, as $\eta$ also depends only on the non-trivial framing on $D_{-}^n$ (see Remark~\ref{rmk}). Therefore, the square commutes.
\end{proof}
\section{Applications}\label{sec5}
\subsection{Known computations}
For $C_n^q$, the well-known computations were done by Haefliger \cite{hae3, hae2, hae} in the 1960s and later by Milgram \cite{mil} in the early 1970s. To the best of our knowledge, no further computations have been done since. The Manifold Atlas webpage \cite{max} describes all the known groups $C_n^q$. Haefliger \cite{hae3} has shown that $C_n^q=0$ for $n<2q-3$. Furthermore, he proved that for $q\geq 3$ (see \cite[Corollary~8.14]{hae}),
\begin{align*}
C_{2q-3}^{q}= \left\{ \begin{array}{cc}
\mathbb{Z} & \hspace{5mm} \text{ \textit{q odd}} , \\
\mathbb{Z}_2 & \hspace{5mm} \text{ \textit{q even}}. \\
\end{array} \right.
\end{align*}
For odd $q$, the generator is given by the Haefliger trefoil knot \cite{hae2}. It is an interesting question whether the Haefliger trefoil is a generator for the even case.\\
There are only a few computations for $FC_n^q$ in the literature. For example, Haefliger \cite[Theorem~5.17]{hae} has shown that $FC_3^3=\mathbb{Z}\oplus \mathbb{Z}$. Moreover, it is easy to see that for $n<2q-3$, we get $FC_n^q=\pi_n(SO_q)$, see Proposition~\ref{or}. \\
The groups $\overline{FC}{}_n^q = \pi_{n+1}(SG,SG_q)$ are related to the homotopy groups of spheres. Some of these groups are known, in particular, $\overline{FC}{}_{2}^{3}=\pi_3(SG,SG_3)=\mathbb{Z}_2$ and $\overline{FC}{}_3^3=\pi_{4}(SG,SG_3)=\mathbb{Z}$, found in \cite[\S 5.16]{hae} and \cite[Proof of Lemma~3.1]{skop}.\\
The rational computations of $\overline {FC}{}_n^q$ are known \cite[Corollary 20]{vt1}, \cite[\S 5.7]{vt2}; they can also be carried out directly as follows:
\begin{prop}
\textit{For} $q\geq 3$, \begin{align*}
\overline {FC}{}_n^q\otimes \mathbb{Q} = \left\{ \begin{array}{cc}
\mathbb{Q} & \hspace{3mm} n=q-1, \hspace{3mm} q \hspace{2mm} even , \\
\mathbb{Q} & \hspace{3mm} n=2q-3, \hspace{2mm} q \hspace{2mm}odd, \\
0 & \hspace{2mm} otherwise. \\
\end{array} \right.
\end{align*}
\end{prop}
\begin{proof}
Since $\overline {FC}{}_n^q=\pi_{n+1}(SG,SG_q)$, we consider the long exact sequence of the pair $(SG,SG_q)$:
\begin{equation}\label{1}
..\longrightarrow \pi_{n+1}(SG)\longrightarrow \pi_{n+1}(SG,SG_q)\longrightarrow \pi_n(SG_q)\longrightarrow \pi_n(SG)\longrightarrow..
\end{equation}
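For the reader's convenience, we recall Serre's classical computation of the rational homotopy groups of spheres, which we use below:
\begin{align*}
\pi_n^{\mathbb{Q}}(S^m) = \left\{ \begin{array}{cc}
\mathbb{Q} & \hspace{3mm} n=m, \\
\mathbb{Q} & \hspace{3mm} n=2m-1, \hspace{2mm} m \hspace{2mm} even, \\
0 & \hspace{2mm} otherwise.
\end{array} \right.
\end{align*}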
The rational homotopy groups $\pi_n^{\mathbb{Q}}(SG_q)$ can be easily computed by considering the long exact sequence associated with the fibration $\Omega_*^{q-1}S^{q-1}\rightarrow SG_q\rightarrow S^{q-1}$, where $\Omega_*^{q-1}S^{q-1}$ is the component of loops of degree one. The connecting homomorphism $\pi_{n+1}(S^{q-1})\rightarrow \pi_n(\Omega_*^{q-1}S^{q-1})=\pi_{n+q-1}(S^{q-1})$ is given by the Whitehead bracket $[id_{S^{q-1}},-]$. Note that all the groups $\pi_n(SG)$ are torsion, being the stable homotopy groups of spheres; hence $\pi_n^{\mathbb{Q}}(SG)=0$. Using the rational homotopy groups of spheres recalled above, we get \begin{align*}
\pi_n^{\mathbb{Q}}(SG_q) = \left\{ \begin{array}{cc}
\mathbb{Q} & \hspace{3mm} n=q-1, \hspace{3mm} q \hspace{2mm} even , \\
\mathbb{Q} & \hspace{3mm} n=2q-3, \hspace{2mm} q \hspace{2mm}odd, \\
0 & \hspace{2mm} otherwise.
\end{array} \right.
\end{align*}
Thus, from the long exact sequence (\ref{1}) we get $\pi_{n+1}^{\mathbb{Q}}(SG,SG_q)=\pi_n^{\mathbb{Q}}(SG_q)$, which concludes the proof.
\end{proof}
\subsection{Metastable range}
From \cite[\S 4.4]{hae}, one can deduce the following stability result for the groups $\overline {FC}{}_{n}^q$ and $FC_n^q$:
\begin{prop}\thlabel{or}
\textit{For $n<2q-3$, $\overline{FC}{}_n^q=\pi_{n+1}(SO,SO_q)$ and $FC_n^q=\pi_nSO_q$.}
\end{prop}
\begin{proof}
Consider the long exact sequence associated with the triad $(SG;SO,SG_q)$ given in \cite[\S 4.4]{hae}:
\small
\begin{equation}\label{2}
\rightarrow \pi_{n+1}(SG,SG_q)\rightarrow \pi_{n+1}(SG;SO,SG_q) \rightarrow \pi_n(SO,SO_q) \rightarrow \pi_n(SG,SG_q)\rightarrow
\end{equation}
\normalsize
By \cite[Corollary 6.6]{hae}, the groups $\pi_{n+1}(SG;SO,SG_q)=C_n^q=0$ for $n<2q-3$. Therefore, from the above sequence, we get $\pi_{n+1}(SO,SO_q)=\pi_{n+1}(SG,SG_q)=\overline {FC}{}_n^q$ for $n<2q-4$. Moreover, any element of $C_n^q$ is trivial as an immersion for $n<2q-1$, see \cite[Corollary~6.10]{hae}. Thus, the homomorphism $\pi_{2q-2}(SG;SO,SG_q)\rightarrow \pi_{2q-3}(SO,SO_q)$ in (\ref{2}) is trivial, and we get $\pi_{2q-3}(SO,SO_q)=\pi_{2q-3}(SG,SG_q)=\overline{FC}{}_{2q-4}^q$.\\
For the second equality, we consider the geometric long exact sequence given by Haefliger \cite[\S 5.9]{hae}:
\begin{equation}\label{3}
\longrightarrow\pi_nSO_q\longrightarrow FC_n^q\longrightarrow C_n^q\longrightarrow \pi_{n-1}SO_q\longrightarrow
\end{equation}
The result easily follows for $n<2q-4$ since $C_n^q=0$ for $n<2q-3$. When $n=2q-4$, the homomorphism $C_{2q-3}^q\rightarrow \pi_{2q-4}SO_q $ in (\ref{3}) is the composition $C_{2q-3}^q=\pi_{2q-2}(SG;SO,SG_q){\overset{0}\rightarrow} \pi_{2q-3}(SO,SO_q)\rightarrow \pi_{2q-4}(SO_q)$, and therefore is also trivial.
\end{proof}
\begin{lemma}\thlabel{le}
\textit{For $i\leq q-2$, $\pi_i(SG_q)=\pi_{i}(SG)$.}
\end{lemma}
\begin{proof}
When $i\leq q-1$, we have $\pi_i(SG,SG_q)=\pi_{i}(SO,SO_q)=0$, where we get the first equality from Proposition \ref{or}, and the second one using the fact that $\pi_{i}(SO,SO_q)=0$ for $i< q$. Therefore, from the long exact sequence (\ref{1}), we conclude that for $i\leq q-2$, $\pi_i(SG_q)=\pi_{i}(SG)$.
\end{proof}
\section{Introduction}
Heavy fermions are a class of complex materials in which the effective mass of conduction electrons greatly exceeds the bare electron mass due to the strong hybridization between the conduction and localized $f$-electrons \cite{ColemanReview1,ColemanReview2}. The properties of heavy-fermion materials are of interest to researchers from the perspective of both technology and fundamental science. While the technology-driven research on heavy fermions is mostly focused on problems related to energy conservation and storage, fundamental physics research takes two interconnected directions: the investigation of the microscopic origins of unconventional superconductivity (with CeCu$_2$Si$_2$, UBe$_{13}$ and UPt$_3$ as the first examples of this phenomenon in solid-state systems \cite{hfsc1,hfsc2,hfsc3}), and the discovery and analysis of novel states of quantum matter.
Despite many years of experimental and theoretical research, the microscopic mechanisms responsible for the emergence of various quantum states in these materials remain unclear. The hidden-order phase in URu$_2$Si$_2$ \cite{HOexp1,HOexp2,HOexp3, HOth1,HOth2,HOth3,HOth4,HOth5}, magnetic field-induced non-Fermi-liquid behavior in YbRh$_2$Si$_2$ \cite{YRS}, low-temperature metallic conductivity in the heavy-fermion semiconductor SmB$_6$ and in Ce-based compounds \cite{SmB6,KIReviews1,KIReviews2,KIReviews3}, superconducting response to Yb-doping in CeCoIn$_5$ \cite{Maple2011,Fisk2011,Cigdem2010}, La- and Y-doping in CeCu$_2$Si$_2$ \cite{Steglich1983,Ott1987} and recently discovered quantum criticality in $\beta$-YbAlB$_4$ \cite{Satoru} are just a few examples. All these states emerge via a physical process unique to heavy-fermion materials in which strong interactions between conduction and localized $f$-electrons operate in an environment of very strong spin-orbit coupling.
SmB$_6$ is a prototypical example of a heavy-fermion semiconductor \cite{SmB6,KIReviews1,KIReviews2,KIReviews3}. Interaction
between Sm $p$- and $d$-orbital conduction electrons and localized $f$-electron states leads to an opening of the
hybridization gap at $T^*\simeq 100$K. The average electron $f$-level occupancy is well below one, $n_f<1$, demonstrating the strongly mixed-valent nature of this material \cite{SmB6mv1,SmB6mv2}. Transport measurements in SmB$_6$ show an increase
in resistivity below $T\simeq 50$K, which then saturates at very low (below 5K) temperatures \cite{SmB6mv2,SmB6resist1,SmB6resist2,SmB6resist3,SmB6resist4,SmB6resist5,Kim}.
The value of the residual resistivity grows when the quality of the sample increases and is of the order of $\rho_{sat}\sim 30 ~\Omega\cdot$cm.
This value is incompatible with metallic conduction in the presence of disorder-induced scattering. The origin of the
low-temperature conductivity still remains poorly understood \cite{KIReviews3,SmB6resist1}; however, it was recently proposed \cite{TKI,TKILong} that SmB$_6$ is a topological insulator, so that the finite low-temperature conductivity can be due to topologically protected metallic surface states at the sample boundaries.
There are two major theoretical challenges in understanding the anomalous transport properties of SmB$_6$. The first challenge is due
to the strong Hubbard interaction between the conduction and $f$-electrons. The second is that the symmetry of the lowest-lying crystalline-field multiplets, which hybridize with the conduction electrons and determine the symmetry of the hybridization gap, is not known. These challenges make the formulation of a full microscopic transport theory for SmB$_6$ quite difficult. Nevertheless, some general features of heavy-fermion semiconductors with regard to the topological features of their band structure can be described on a more general (i.e. less material-dependent) level. One important question, for example, is the stability of the weak and strong topological insulating phases depending on the degeneracy of the local $f$-level.
The basic model which is thought to capture the main aspects of the physics of the heavy-fermion semiconductors is the Anderson lattice model. The Hamiltonian can be written as a sum of the following three terms:
\begin{equation}\label{H}
H=H_c+H_f+H_h.
\end{equation}
Here the first term $H_c$ describes the conduction electrons
\begin{equation}\label{HKL}
\begin{split}
H_c=\sum_{\mathbf k , \sigma }\xi_{\mathbf k }c^{\dagger}_{\mathbf k \sigma }c_{\mathbf k \sigma }, \quad \xi_\mathbf k=-\frac{t}{6}\sum\limits_{i=x,y,z}\cos k_i-\mu_c
\end{split}
\end{equation}
where $\sigma$ denotes the electron's spin projection, $t$ is the hopping amplitude (equal to the bandwidth) and $\mu_c$ is the chemical potential.
The $f$-electrons are described by
\begin{equation}\label{Hf}
H_f=\sum\limits_{j}\sum\limits_{\alpha=\pm 1}\varepsilon_{f}^{(0)} f_{j\alpha}^{\dagger } f_{j\alpha}+{U}\sum\limits_{i\alpha}
f_{i\alpha}^{\dagger } f_{i\alpha}f_{i\overline{\alpha}}^{\dagger } f_{i\overline{\alpha}}+\sum\limits_{\langle ij\rangle}\sum\limits_{\alpha=\pm 1}t_{ij}^{(h)} f_{i\alpha}^{\dagger } f_{j\alpha}
\end{equation}
where $f_{j\alpha}^{\dagger }$ creates an $f$-electron on site $j$ in a state
$\alpha$ of the lowest-lying $N_\Gamma$-degenerate multiplet denoted by $\Gamma$ (below we consider a Kramers doublet only, so $N_\Gamma=2$), $\varepsilon_{f}^{(0)}$ is the $f$-electron energy and $U>0$ is the Hubbard interaction between the $f$-electrons. The last term in (\ref{Hf}) yields a very weak hole-like dispersion for the $f$-electrons to enforce a fully gapped insulating state. We emphasize that the index $\alpha$ is not a spin index due to the presence of the strong spin-orbit coupling.
Lastly, the third term in Eq. (\ref{H}) accounts for the interaction between the conduction and the $f$-electrons
\begin{equation}
\begin{split}
&H_h=\sum\limits_{i\sigma,j\alpha}\left[V_{i\sigma,j\alpha} {c}_{i\sigma}^{\dagger} {f}_{j\alpha}+ {\rm h.c.}\right], \\
&V _{ i\sigma,j\alpha}=V\sum_{\mathbf k}[\Phi_{\Gamma\mathbf k}]_{\alpha \sigma }
e^{i \mathbf k \cdot({\bf R}_i-{\bf R}_j)}
\end{split}
\end{equation}
In Refs. [\onlinecite{TKI,TKILong}] the phenomenological analysis of the Anderson lattice model based on the low-energy expansion of the $f$-electron self-energy \cite{Phenom1,Phenom2} has been used to analyze the topological structure of the resulting heavy-fermion semiconducting state. In this paper, I will resort to a microscopic approach based on the large-$N$ slave-boson theory and analyze the topological structure of the insulating state.
I employ the symplectic SP($N$) ($N=2k, k=1,2,...$) representation for the electronic operators to properly describe the time-reversal symmetry of the electronic states. In agreement with the previous results \cite{TKI,TKILong}, I find that for $N=2$ and $N=4$ there appear two (weak and strong) topologically non-trivial states, depending on the relative position between the renormalized $f$-level and the chemical potential of the conduction band. Moreover, I find that for larger values, $N>4$, there is only the strong topological insulating state. In addition, I will discuss tunneling into topologically non-trivial heavy-fermion semiconductors.
In the next Section I present the large-$N$ mean-field theory of topological heavy-fermion semiconductors and the calculation of the tunneling conductance of the surface states in a weak topological insulator.
The last Section is devoted to the discussion of the results and conclusions.
\section{Symplectic slave-boson theory}
Large-$N$ slave-boson mean-field theories ($N$ is the degeneracy of the $f$-electron level) utilize the naturally small
parameter $1/N$ to determine the thermodynamic properties of heavy-fermion materials by expanding near the exactly solvable limit
\cite{Barnes,SlaveBoson1,SlaveBoson2,kondolattice,SlaveBoson3,MillisLee,Hewson,Barzykin2006}
of $N\to\infty$. Recently, novel large-$N$ expansion methods have been developed to account for the specific symmetries of the problem (see [\onlinecite{FlintNature}] and references therein).
In what follows, we generalize our model from the SU($2$) symmetry group to SP($N$) with $N=2k$, so that the spin summation runs over $k$ spin indices, $\alpha,\sigma\in[\pm 1,\pm k]$. The importance of using the SP($N$) subset of the SU($N$) group lies in the requirement of a
proper description of the states related by time reversal \cite{FlintNature}. In the SP($N$) version of the theory,
the form-factor matrix acquires a block-diagonal form of identical $2\times 2$ blocks. For the subsequent saddle-point analysis
we find it more convenient to use the path integral formulation.
The slave-boson approximation consists of (i) taking the limit $U\to\infty$, which amounts to projecting out the doubly occupied states, and (ii) introducing the constraint
which guarantees the local moment at the $f$-site, i.e. $n_f=1$:
\begin{equation}\label{constraint}
\begin{split}
& U\to\infty: \quad f_{i\alpha}\to f_{i\alpha}b_i^{\dagger }, \quad f_{i\alpha}^{\dagger }\to f_{i\alpha}^{\dagger } b_i,\\
& \sum\limits_{\alpha}f_{i\alpha}^{\dagger } f_{i\alpha}+
b_i^{\dagger } b_i=1.
\end{split}
\end{equation}
The partition function corresponding to the model Hamiltonian (\ref{H}) with the constraint condition (\ref{constraint}) reads:
\begin{equation}
Z=\int\limits_{-\pi/\beta}^{\pi/\beta}\frac{\beta d\lambda}{\pi}\int{\cal D}(b,b^{\dagger },f,f^{\dagger },c,c^{\dagger })
\exp\left(-\int\limits_0^\beta L(\tau)d\tau\right),
\end{equation}
where Lagrangian $L(\tau)$ is
\begin{equation}
\begin{split}
L=&\sum\limits_{i}b_i^{\dagger }\frac{d}{d\tau}b_i+\sum\limits_{ij}\sum\limits_{\alpha=1}^{N}f_{i\alpha}^{\dagger }\left[\delta_{ij}\left(\frac{d}{d\tau}+\varepsilon_f^{(0)}\right)+b_i t_{ij}^{(h)}b_j^{\dagger }\right]f_{j\alpha}\\&+\sum\limits_{\mathbf k}\sum\limits_{\alpha=1}^{N}c_{\mathbf k\alpha}^{\dagger }\left(\frac{d}{d\tau}+\xi_\mathbf k\right)c_{\mathbf k\alpha}\\&
+\frac{V}{\sqrt{N}}\sum\limits_{i,\mathbf k}\sum\limits_{\alpha,\beta=1}^N\left([\Phi_{\Gamma\mathbf k}]_{\alpha\beta}
e^{i \mathbf k \cdot{\bf R}_i}f_{i\alpha}^{\dagger } b_{i}c_{\mathbf k\beta}+\textrm{h.c.}\right)\\&+\sum\limits_{j}i\lambda_j\left(\sum\limits_{\alpha=1}^Nf_{j\alpha}^{\dagger } f_{j\alpha}+b_j^{\dagger } b_j-1\right)
\end{split}
\end{equation}
Here we use the same notation for the form-factor matrix, although now it has a block-diagonal form of $k(=N/2)$ blocks, each of dimension $2\times 2$. We have also introduced the field $\lambda_j$ to enforce the constraint.
Finally, we have rescaled the hybridization amplitude $V\to V/\sqrt{N}$ for bookkeeping purposes (see below).
Now we can integrate out the conduction electrons by making the following transformation
\begin{equation}
\begin{split}
&c_{\mathbf k\alpha}\to c_{\mathbf k\alpha}-\frac{V}{\sqrt{N}}\sum\limits_{i}\sum\limits_{\beta=1}^N[\Phi_{\Gamma\mathbf k}^*]_{\alpha\beta}
e^{-i \mathbf k \cdot{\bf R}_i}(\partial_\tau+\xi_\mathbf k)^{-1}f_{i\beta} b_{i}^{\dagger }, \\
&c_{\mathbf k\alpha}^{\dagger }\to c_{\mathbf k\alpha}^{\dagger }-\frac{V}{\sqrt{N}}\sum\limits_{i}\sum\limits_{\beta=1}^N[\Phi_{\Gamma\mathbf k}]_{\beta\alpha}
e^{i \mathbf k \cdot{\bf R}_i}f_{i\beta}^{\dagger } b_{i}(\partial_\tau+\xi_\mathbf k)^{-1}
\end{split}
\end{equation}
In what follows, it is convenient to write the $2\times 2$ form-factor matrix as follows
\begin{equation}
\Phi_{\Gamma \mathbf k}=\phi_\mathbf k({\vec n}_\mathbf k\cdot{\vec \tau}), \quad \phi_\mathbf k^2={\frac{1}{2}\textrm{Tr}[\Phi_{\Gamma\mathbf k}^{\dagger }\Phi_{\Gamma\mathbf k}]}
\end{equation}
where ${\vec \tau}$ are Pauli matrices and ${\vec n}_\mathbf k$ is a unit vector.
We obtain
\begin{equation}
\begin{split}
&L=\sum\limits_{i}b_i^{\dagger }\frac{d}{d\tau}b_i+\sum\limits_{ij}\sum\limits_{\alpha=1}^{N}f_{i\alpha}^{\dagger }\left[\delta_{ij}\left(\frac{d}{d\tau}+\varepsilon_f^{(0)}\right)+b_i t_{ij}^{(h)}b_j^{\dagger }\right]f_{j\alpha}\\+&
\sum\limits_{\mathbf k}\sum\limits_{\alpha=1}^{N}c_{\mathbf k\alpha}^{\dagger }\left(\frac{d}{d\tau}+\xi_\mathbf k\right)c_{\mathbf k\alpha}
+\sum\limits_{j}i\lambda_j\left(\sum\limits_{\alpha=1}^Nf_{j\alpha}^{\dagger } f_{j\alpha}+b_j^{\dagger } b_j-1\right)
\\&-\frac{|V|^2}{N}\sum\limits_{ij,\mathbf k}\sum\limits_{\alpha,\beta=1}^N\Delta_{\alpha\beta}(\mathbf k)
e^{i \mathbf k \cdot({\bf R}_i-{\bf R}_j)}f_{i\alpha}^{\dagger } b_{i}(\partial_\tau+\xi_\mathbf k)^{-1}f_{j\beta} b_{j}^{\dagger },
\end{split}
\end{equation}
where we have introduced
\[
\Delta_{\alpha\beta}(\mathbf k)=\sum\limits_{\gamma=1}^N[\Phi_{\Gamma\mathbf k}^*]_{\alpha\gamma}
[\Phi_{\Gamma\mathbf k}]_{\gamma\beta}=\phi_\mathbf k^2\delta_{\alpha\beta}
\]
The action with the Lagrangian above is quadratic in fermionic operators, which can be integrated out to give an effective action
in terms of the slave fields only. Since the resulting expression is quite cumbersome we will not give it here.
Instead, we proceed with the saddle-point analysis.
\begin{figure}[h]
\includegraphics[width=2.4in,angle=0]{Fig1LargeNTKI.pdf}
\caption{Phase diagram found from the solution of the slave-boson mean-field equations (\ref{MFEqs}). The Kondo liquid state corresponds to
the situation when the slave-boson amplitude $a=0$. I have used the following
values for the input parameters: $t_f^{(h)}=0.1t$, $\varepsilon_f^{(0)}=-1.05t$. When the number of fermionic flavors exceeds four,
$N>4$, only the strong topological insulating phase exists and the weak topological insulating state disappears.}
\label{Fig1TKI}
\end{figure}
\subsection{Mean-field solution}
The mean-field (saddle-point) approximation corresponds to the following values of the bosonic fields:
\begin{equation}
b_\mathbf q(\tau)=\sqrt{N}a\delta_{q,0}, \quad i\lambda_\mathbf q(\tau)=(\varepsilon_f-\varepsilon_{f}^{(0)})\delta_{q,0},
\end{equation}
where both $a$ and $\varepsilon_f$ are $\tau$-independent. Now we can use the Matsubara frequency representation and
integrate out the $f$-fields, since the action is quadratic in these fields. This yields:
\begin{equation}
\begin{split}
Z=&\int\limits_{-\pi/\beta}^{\pi/\beta}\frac{\beta d\lambda}{\pi}\int{\cal D}b{\cal D}b^{\dagger } e^{-S_{eff}}, \\
S_{eff}=&N\left(\varepsilon_f-\varepsilon_f^{(0)}\right)(a^2-q_N)\\&-2NT\sum\limits_{i\omega}\sum\limits_{\mathbf k}
\log[(i\omega-\omega_{1\mathbf k})(i\omega-\omega_{2\mathbf k})],
\end{split}
\end{equation}
where we have introduced the parameter $q_N=\frac{1}{N}$. The functions $\omega_{1,2\mathbf k}$ describe the newly formed energy
bands
\begin{equation}\label{hfbands}
\begin{split}
\omega_{1,2\mathbf k}&=\frac{1}{2}\left[\xi_\mathbf k+E_{f\mathbf k}\pm\sqrt{(\xi_\mathbf k-E_{f\mathbf k})^2+4(Va\phi_\mathbf k)^2}\right], \\
E_{f\mathbf k}&=\varepsilon_f+Na^2h_\mathbf k, \quad h_\mathbf k=\frac{1}{6}t_f^{(h)}\sum\limits_{i=x,y,z}\cos k_i.
\end{split}
\end{equation}
We note that the newly formed band spectrum corresponds to the effective Hamiltonian
\begin{equation}\label{Heff}
\mathcal{H}_{eff}(\mathbf k)=
\left(
\begin{matrix}
\xi_\mathbf k\underline{1} & {Va}{\Phi}_{\Gamma\mathbf k}^{\dagger } \\
{Va}{\Phi}_{\Gamma\mathbf k} & E_{f\mathbf k}\underline{1}
\end{matrix}
\right),
\end{equation}
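Indeed, since $(Va\Phi_{\Gamma\mathbf k})^{\dagger }(Va\Phi_{\Gamma\mathbf k})=V^2a^2\phi_\mathbf k^2\,\underline{1}$, the secular equation for (\ref{Heff}) factorizes,
\begin{equation*}
\textrm{det}\left[\mathcal{H}_{eff}(\mathbf k)-\omega\right]=\left[(\omega-\xi_\mathbf k)(\omega-E_{f\mathbf k})-V^2a^2\phi_\mathbf k^2\right]^2=0,
\end{equation*}
so that each of the two bands $\omega_{1,2\mathbf k}$ in (\ref{hfbands}) is recovered with a twofold degeneracy, as required by the combined time-reversal and inversion symmetry.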
The reason we invoke the effective Hamiltonian is that it will allow us to analyze the topological structure of an insulating state.
In Eq. (\ref{Heff}) $\underline{1}$ denotes the unit $2\times2$ matrix. To determine the parameters $a$ and $\varepsilon_f$ we, of course, have to minimize the effective action. In addition, we have to keep in mind that
the total number of electrons must be conserved. Specifically, for an insulator, we have to require one conduction electron per $f$-electron, so that
\begin{equation}\label{insulator}
n_c+n_f=N.
\end{equation}
Minimization of the effective action together with the condition for an insulator (\ref{insulator}) yields the following system of the mean-field
equations:
\begin{equation}\label{MFEqs}
\begin{split}
&(\varepsilon_f-\varepsilon_f^{(0)})a+T\sum\limits_{i\omega,\mathbf k}\left[Na h_\mathbf k A_{ff}(\mathbf k,i\omega)+V\phi_\mathbf k A_{fc}(\mathbf k,i\omega)\right]=0, \\
&(a^2-q_N)+T\sum\limits_{i\omega}\sum\limits_\mathbf k A_{ff}(\mathbf k,i\omega)=0,\\
&(q_N-a^2)+T\sum\limits_{i\omega}\sum\limits_\mathbf k A_{cc}(\mathbf k,i\omega)=1,
\end{split}
\end{equation}
where the functions $A_{ab}(\mathbf k,i\omega)$ are defined by
\begin{equation}
\begin{split}
&A_{ff}(\mathbf k,i\omega)=\frac{i\omega-\xi_\mathbf k}{(i\omega-\xi_\mathbf k)(i\omega-E_{f\mathbf k})-V^2a^2\phi_\mathbf k^2}, \\
&A_{fc}(\mathbf k,i\omega)=\frac{Va\phi_\mathbf k}{(i\omega-\xi_\mathbf k)(i\omega-E_{f\mathbf k})-V^2a^2\phi_\mathbf k^2}, \\
&A_{cc}(\mathbf k,i\omega)=\frac{i\omega-E_{f\mathbf k}}{(i\omega-\xi_\mathbf k)(i\omega-E_{f\mathbf k})-V^2a^2\phi_\mathbf k^2}. \\
\end{split}
\end{equation}
To solve (\ref{MFEqs}) we still need to specify the momentum dependence of the hybridization gap, $\phi_\mathbf k$.
In what follows, we adopt the choice of form-factors from Refs. [\onlinecite{TKI,TKILong}] and consider a function
$\phi_\mathbf k$ which at small momenta $\mathbf k$
is $\phi_{\hat\mathbf k}=\frac{1}{12}\sqrt{\frac{3}{\pi}}[12\cos(2\theta)+5(3+\cos(4\theta))]^{1/2}$,
where $\theta$ defines the direction of the unit vector
$\hat{\bf k}$ associated with the point on the Fermi surface.
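As an illustration, the following minimal Python sketch shows how (\ref{MFEqs}) can be handled in practice: the Matsubara sums are performed by residues over the two poles $\omega_{1,2\mathbf k}$ of the functions $A_{ab}(\mathbf k,i\omega)$, and the resulting three equations are solved for $a$, $\varepsilon_f$ and $\mu_c$ by standard root finding. The parameter values, the treatment of $\mu_c$ as the third unknown, and the toy odd-parity form factor $\phi_\mathbf k^2=\sin^2k_x+\sin^2k_y+\sin^2k_z$ (in the spirit of Eq.~(\ref{Phi}) below) are illustrative assumptions, not the choices behind Fig.~1.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (t = 1 sets the energy unit); these are
# assumptions of this sketch, not the values used for the figures.
t, tfh, ef0, V, Nf, T = 1.0, 0.1, -1.05, 0.5, 2, 1e-3
qN = 1.0 / Nf

# Simple-cubic Brillouin-zone grid
nk = 16
k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
cs = (np.cos(KX) + np.cos(KY) + np.cos(KZ)) / 6.0
phi2 = np.sin(KX)**2 + np.sin(KY)**2 + np.sin(KZ)**2  # toy |phi_k|^2

def nF(E):  # Fermi function with overflow guard
    return 1.0 / (np.exp(np.clip(E / T, -60.0, 60.0)) + 1.0)

def mf_equations(p):
    a, ef, mu = p
    xi = -t * cs - mu                       # conduction band
    Ef = ef + Nf * a**2 * tfh * cs          # renormalized f band
    s = np.sqrt((xi - Ef)**2 + 4*V**2*a**2*phi2) + 1e-12
    w1, w2 = 0.5*(xi + Ef - s), 0.5*(xi + Ef + s)
    f1, f2 = nF(w1), nF(w2)
    # T*sum_{iw} A_ab(k,iw) evaluated by residues at the poles w_{1,2}
    Aff = ((w1 - xi)*f1 - (w2 - xi)*f2) / (w1 - w2)
    Acc = ((w1 - Ef)*f1 - (w2 - Ef)*f2) / (w1 - w2)
    VphiAfc = V**2 * a * phi2 * (f1 - f2) / (w1 - w2)  # = V*phi_k*A_fc
    eq1 = (ef - ef0)*a + np.mean(Nf*a*tfh*cs*Aff + VphiAfc)
    eq2 = (a**2 - qN) + np.mean(Aff)         # f-occupancy constraint
    eq3 = (qN - a**2) + np.mean(Acc) - 1.0   # insulator: n_c + n_f = N
    return eq1, eq2, eq3

a, ef, mu = fsolve(mf_equations, x0=(0.3, -0.5, -0.5))
print(f"a = {a:.3f}, eps_f = {ef:.3f}, mu_c = {mu:.3f}")
\end{verbatim}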
To analyze the topology of the bands governed by the effective Hamiltonian (\ref{Heff}) we use the fact that
topology is invariant under any adiabatic deformation
of the Hamiltonian. We begin our study with a
tight-binding model for a Kondo insulator (KI) on a simple cubic lattice. Our choice of hybridization ensures that
the mean-field Hamiltonian (Eq. \ref{Heff}) is
a periodic function satisfying $\mathcal{H}_{eff}({\bf
k})=\mathcal{H}_{eff}({\bf k}+{\bf G})$.
The technical analysis is readily generalized to
more complicated cases as discussed below. The most important element
of the analysis is the odd-parity form factor of the
$f$ electrons, ${\Phi}_{\Gamma}(\mathbf k)=-{\Phi}_{\Gamma}(-\mathbf k)$.
This parity property, together with the absence of nodes in the hybridization gap across the Brillouin zone (BZ),
is the only essential input as far as the topological
structure is concerned.
To evaluate the topological indices we use the results of
Fu and Kane [\onlinecite{FuKane2007}] who have demonstrated that in an insulator
{\em with time-reversal and space-inversion symmetry}, the topological
structure is determined by parity properties at the eight
high-symmetry points, $\mathbf k^*_m$, in the 3D BZ, which are invariant
under time-reversal, up to a reciprocal lattice vector:
$\mathbf k^*_m=-\mathbf k^*_m+{\bf G}$. In our
case, these symmetries require that $\mathcal{H}_{eff}({\bf k})={P}
\mathcal{H}_{eff}(-{\bf k}){P}^{-1}$ and $\mathcal{H}_{eff}({\bf
k})^{T}={\cal T} \mathcal{H}_{eff}(-{\bf k}){\cal T}^{-1}$, where
the parity matrix $P$
and the unitary part of the time-reversal
operator ${\cal T}$ are given by
\begin{equation}\label{}
P = \begin{pmatrix} \underline{1}& \cr & -\underline{1}\end{pmatrix},
\qquad
{\cal T} = \begin{pmatrix} i \sigma_{2}&\cr & i \sigma_{2} \end{pmatrix},
\end{equation}
where $\sigma_{2}$ is the second Pauli matrix.
For any space-inversion-odd form factor, it follows immediately that
$\hat{\Phi}_{\Gamma}(\mathbf k)=0$ at a high-symmetry point. Hence, the
Hamiltonian at this high symmetry point is simply
$\mathcal{H}_{eff}({\mathbf k^*_m})=(\xi_{\mathbf k_m^*}+E_{f\mathbf k_m^*})
I/2+(\xi_{\mathbf k^*_m}-E_{f\mathbf k_m^*}){P}/2$, where $I$ is
the four-dimensional identity matrix.
The parity at a high symmetry point is thus determined by $\delta_m=\textrm{sgn}(\xi_{\mathbf k^*_m}-E_{f\mathbf k_m^*})$.
Four independent $Z_2$ topological indices $(\nu_0;\nu_1,\nu_2,\nu_3)$ ~\cite{Kitaev}, one strong ($\nu_0$) and three weak indices
($\nu_{1,2,3}$) can be constructed from $\delta_m$: (i)~The strong topological index is the product of all eight
$\delta_m$'s: $I_{\rm STI} = (-1)^{\nu_0}=\prod\limits_{m=1}^{8} \delta_m = \pm 1$;
(ii)~by setting $k_j=0$ (where $j= x,y, \mbox{and } z$),
three high-symmetry planes, $P_j = \left\{ {\bf k}: k_j=0\right\}$, are formed that contain four high-symmetry points each. The product of the parities at these four points defines the
corresponding weak-topological index, $I_{\rm WTI}^{j}=(-1)^{\nu_j}= \prod\limits_{{\bf k}_m \in P_j} \delta_m = \pm 1$ ($j=1,2,3$)
with integers corresponding to the axes $x,y$ and $z$. The existence of the three weak topological indices in 3D is related to a $Z_2$ topological index for 2D systems (a weak 3D TI is similar to a stack of 2D $Z_2$ topological insulators).
Because there are three independent ways to stack 2D layers to form a 3D system,
the number of independent weak topological indices is also three.
A conventional band insulator has all of the four indices $I_{\rm STI}
= I_{\rm WTI}^x=I_{\rm WTI}^y=I_{\rm WTI}^z = +1$ or equivalently (0;0,0,0). An index
$I=(-1)$ ($\nu_\alpha=1$) indicates a $Z_2$ topological state with the odd number of
surface Dirac modes. In a KI the symmetry index $\delta_{m}$ of a particular
high-symmetry point $m$ is negative provided the conduction-band energy
$\xi_{\mathbf k^*_{m}}$ lies below the $f$-energy $E_{f\mathbf k_m^*}$.
Thus if
$\xi_{{\mathbf k_m^*}=0}<E_{f\mathbf k_m^*}$ at the $\Gamma$ point, while
$\xi_{\bf{{\mathbf k^*_m\ne 0}}}>E_{f\mathbf k_m^*}$ for all other symmetry
points, then $I_{\rm STI}
= -1$, and hence {the Kondo insulating state is a
strong-topological insulator, robust against disorder} \cite{TKI,TKILong}. Weak-topological insulators and topologically trivial
insulators can in principle be found for different band structures and
different values of $E_{f\mathbf k_m^*}$. A particularly
interesting possibility is to tune topological phase transitions
between different types of insulators (e.g., by applying pressure).
Although we have been specifically considering a tight-binding
model with a primitive unit cell, all our conclusions apply directly
to systems adiabatically connected to this model.
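The parity count described above is straightforward to automate. The short sketch below evaluates $\delta_m$ at the eight high-symmetry points of the simple cubic lattice and assembles the four $Z_2$ indices; the input values of $a$, $\varepsilon_f$ and $\mu_c$ are placeholders to be replaced by the solution of the mean-field equations (\ref{MFEqs}).
\begin{verbatim}
import numpy as np
from itertools import product

# Placeholder mean-field output (to be taken from the solver)
t, tfh, Nf = 1.0, 0.1, 2
a, ef, mu = 0.3, -0.5, -0.5

def parities():
    """delta_m = sgn(xi - E_f) at the eight momenta with k_i in {0, pi}."""
    delta = {}
    for kstar in product([0.0, np.pi], repeat=3):
        cs = sum(np.cos(ki) for ki in kstar) / 6.0
        xi = -t * cs - mu
        Ef = ef + Nf * a**2 * tfh * cs
        delta[kstar] = 1 if xi > Ef else -1
    return delta

delta = parities()
I_STI = np.prod(list(delta.values()))               # strong index
I_WTI = [np.prod([d for ks, d in delta.items() if ks[j] == 0.0])
         for j in range(3)]                         # weak indices, planes k_j = 0
print("strong:", I_STI, " weak:", I_WTI)
# An index equal to -1 signals a Z2-nontrivial phase; all +1 is trivial.
\end{verbatim}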
I solve the mean-field equations (\ref{MFEqs}) numerically. For a given value of hybridization and temperature I analyze the topological structure of the effective Hamiltonian (\ref{Heff}) using the prescription outlined above. The results are shown in Fig. 1.
First I note that when $N=2$ the weak topological insulator (WTI) state precedes the strong topological insulating state, in agreement
with the earlier studies \cite{TKISU2}. For $N=4$ $(k=2)$ the region where the WTI exists shrinks, and it is fully absent for $N=6$ $(k=3)$. This is a quite surprising observation, for it is a special case in which the slave-boson mean-field theory results crucially depend on the number of fermionic flavors. In other words, here we find an example in which there is no adiabatic connection between the phases for the realistic case of $N=2$ and $N\to\infty$, i.e. the limit in which the mean-field theory is exact. The reason for the disappearance of the WTI phase can easily be traced to the condition for the WTI: half of the $\delta_m$'s must be negative, while the other half must be positive. However, as follows directly from the solution of the mean-field equations, this condition can never be satisfied when $k>2$.
Lastly, the results in Fig. 1 are consistent with the ones obtained for the low-energy version of the Anderson model \cite{TKI,TKILong}. There it was found that the WTI is stabilized for an $f$-level energy close to the chemical potential of the conduction electrons: this situation corresponds to the $f$-level occupation $n_f\simeq 1$. With an increase of hybridization, $n_f$ is reduced as the system shows more mixed-valent behavior, so that $n_f<1$. For $N=2$, as soon as the insulator becomes a strong topological insulator, $n_f\simeq 0.8$.
Generally, fluctuations around the large-$N$ mean-field solution introduce interactions between the newly formed heavy fermions.
Strictly speaking, one needs to prove that fluctuations of the amplitude and phase of the slave bosons do not destroy the newly formed state.
Due to the presence of the bulk gap, however, we do not expect that fluctuations will lead to a substantial modification of the
ground state. A separate issue, of course, is the effect of fluctuations on the metallic surface states, specifically, whether
the interactions may lead to the opening of a gap at the surface. The detailed investigation of this problem goes beyond the scope
of this paper and we leave it for future studies.
\subsection{Tunneling into topological heavy-fermion semiconductors}
Scanning tunneling microscopy (STM) of heavy-fermion metals has become an active topic of experimental
and theoretical research in recent years \cite{STMexp1,STMexp2,STMexp3,STMtheory1, STMtheory2,STMtheory3}.
In this regard, an intriguing question is whether the STM can directly probe the metallic surface states
in a topological heavy-fermion semiconductor. In this Section I address this question by evaluating the
differential tunneling conductance into a weak topological heavy-fermion semiconductor. In what follows I will use
the mean-field theory discussed above for the case of the SU(2) group, i.e. two flavors of fermions.
To evaluate the tunneling conductance, we will model a bulk system by a stack of $L$ planes along the
$z$-direction and diagonalize the Hamiltonian. The resulting Hamiltonian matrix $H_{nm}$ has blocks along the
diagonal given by (\ref{Heff}), which describe the hopping and hybridization within each plane and the off-diagonal parts describing
the hopping and hybridization between the planes. Since the in-plane momentum is a good quantum number,
the dispersion of the conduction electrons is given by $\varepsilon(k_x,k_y)=-(t/4) (\cos k_x+\cos k_y)$, while the
dispersion of the $f$ electrons is described by $\epsilon_f(k_x,k_y)=(t_f/4)(\cos k_x+\cos k_y)$ with $t_f=0.1t$.
In addition, we have taken the form factor matrix in the form:
\begin{equation}\label{Phi}
\underline{\Phi } = \left\{
\begin{matrix}
V (\sin k_{x}\tau_{x}+ \sin k_{y}\tau_{y}), \textrm{ within the planes}, \\
iV_z\tau_z, ~\textrm{between the planes (upwards)}, \\
-iV_z\tau_z, ~\textrm{between the planes (downwards)}.
\end{matrix}
\right.
\end{equation}
Within each plane the conduction and $f$-electrons are described by the four-component spinor
\begin{equation}\label{Spinor}
\Psi_{l\mathbf k_\perp}^{\dagger }=(c_{l\mathbf k_\perp,1}^{\dagger } ~c_{l\mathbf k_\perp,2}^{\dagger } ~f_{l\mathbf k_\perp,1}^{\dagger } ~f_{l\mathbf k_\perp,2}^{\dagger }),
\end{equation}
where $l$ labels the layer and $\mathbf k_\perp=(k_x,k_y)$. Lastly, the Hamiltonian describing
tunneling between an electrode and a sample is
\begin{equation}\label{Htun}
\begin{split}
H_{tun}&=T_c\sum\limits_{\mathbf k_\perp,\sigma} (p_\sigma^{\dagger } c_{1\mathbf k_\perp,\sigma}+\textrm{h.c.})\\&+T_f\sum\limits_{\sigma\alpha,\mathbf k_\perp}\left(
[\Phi_{\Gamma}(\mathbf k_\perp)]_{\sigma\alpha}p_\sigma^{\dagger } f_{1\mathbf k_\perp,\alpha}+\textrm{h.c.}\right)
\end{split}
\end{equation}
Here we have assumed that the tunneling involves conduction and $f$-electron states on the surface layer only ($l=1$). As we will see below, the presence of the form-factor in the second term is crucial for the cotunneling events, which ultimately give rise to the Fano lineshape for the differential tunneling conductance.
\begin{figure}[h]
\includegraphics[width=2.5in,angle=0]{Fig2DOS.pdf}
\caption{Plots of the differential tunneling conductance $g(\omega)$ for the stack of $L$ layers of heavy-fermion semiconductors.
Panel (a) shows $g(\omega)$ for the stack of $L=2$ layers: in this case there are no states in the gap. The band structure as a function of the momentum in the 2D BZ is shown in the inset. Panel (b) shows $g(\omega)$ for the stack of $L=8$ layers corresponding to the weak topological insulator (even number of Dirac nodes in the gap). The asymmetry in the tunneling conductance is due to the cotunneling processes into conduction and $f$-electron states.}
\label{Fig2DOS}
\end{figure}
If we now assume that the tunneling electrode is in an equilibrium state with the surface, the tunneling current as a function of the voltage, $I(V)$,
is
\begin{equation}
\begin{split}
&I(V)=\frac{2e}{\hbar}\int\limits_{-\infty}^\infty d\omega \rho_{tip}(\omega-eV)[n_F(\omega-eV)-n_F(\omega)]\\ &
\times\textrm{Im}\left[|T_c|^2G_{cc}(\omega)+|T_f|^2G_{ff}(\omega)+2|T_c||T_f|G_{cf}(\omega)\right],
\end{split}
\end{equation}
where $\rho_{tip}$ is the STM tip density of states (DOS), $n_F(\omega)$ is the Fermi distribution function and $G_{ab}(\omega)$
are the advanced local single particle Green functions:
\begin{equation}
G_{ab}(\omega)=\sum\limits_{\lambda,\mathbf k_\perp}\frac{\phi_{a,\lambda}^*(\mathbf k_\perp)\phi_{b,\lambda}(\mathbf k_\perp)}{\omega-\epsilon_\lambda(\mathbf k_\perp)-i\delta}.
\end{equation}
In the expression above, $\epsilon_\lambda(\mathbf k_\perp)$ and $\phi_{a,\lambda}(\mathbf k_\perp)$ denote the set of eigenvalues $\lambda$ and the corresponding eigenfunctions obtained by diagonalizing the Hamiltonian $H_{nm}$, while the subscripts $a,b$ refer to the components of the spinor (\ref{Spinor}) at the surface ($l=1$). In real materials, however, electronic correlations as well as disorder lead to a broadening of the $f$-electron level. One way to account for these effects in the differential tunneling conductance $g(V)=dI/dV$ is to consider complex-valued quasiparticle energies:
\begin{equation}
\epsilon_\lambda(\mathbf k_\perp)\to E_{\lambda\mathbf k_\perp}-i\Gamma_{\lambda\mathbf k_\perp},
\end{equation}
with the quasiparticle width given by \cite{STMtheory3}
\begin{equation}\label{Gamma}
\Gamma_{\lambda\mathbf k_\perp}=
\left\{
\begin{matrix}
E_{\lambda\mathbf k_\perp}^2/T_K, \quad E_{\lambda\mathbf k_\perp}<T^*, \\
|E_{\lambda\mathbf k_\perp}|/[1+\log(|E_{\lambda\mathbf k_\perp}|/T_K)]^2, \quad E_{\lambda\mathbf k_\perp}>T^*.
\end{matrix}
\right.
\end{equation}
Here $T^*$ is the temperature corresponding to the opening of the hybridization gap, or the temperature at which the first non-trivial solution of the mean-field equations appears. In SmB$_6$, for example, $T^*\simeq 100$K.
With these provisions, we evaluate the differential tunneling conductance for a set of parameters corresponding to the weak topological
insulator in a 3D translationally invariant system. I show the results in Fig. 2 for a fully gapped state (top panel) and a metallic state (bottom panel) corresponding to the weak topological insulator. Both curves have a characteristic asymmetry due to the co-tunneling processes into conduction and $f$-states. The finite value of $g(V)$ at zero bias is due to the finite width of the quasiparticle states, Eq. (\ref{Gamma}). From our results we see that $g_{WTI}(V\sim 0)$ in the case of the weak topological insulator is significantly enhanced in comparison with
$g_{BI}(V\sim 0)$ (band insulator), which is not surprising. In addition, $g_{WTI}(V)$ shows a higher asymmetry than $g_{BI}(V)$. Nevertheless,
it seems to be quite a challenging task to argue in favor of topologically protected surface states solely on the basis of STM data.
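For completeness, a minimal numerical sketch of the computation behind Fig.~2 is given below: for each in-plane momentum the block-tridiagonal slab Hamiltonian is diagonalized, the eigenstates are broadened according to Eq.~(\ref{Gamma}), and the zero-temperature $g(V)$ is assembled from the surface weights, with the $f$-channel entering through the form factor as in Eq.~(\ref{Htun}). The inter-layer hopping amplitudes and the ratio $T_f/T_c$ are not specified above and are treated here as illustrative assumptions.
\begin{verbatim}
import numpy as np

# Illustrative parameters; t_z, t_fz and T_f/T_c are assumptions
t, tf, V, Vz = 1.0, 0.1, 0.5, 0.5
tz, tfz = 0.25, -0.025
ef, mu = -0.5, 0.0
Tc, Tf = 1.0, 0.2
Tstar, TK = 0.1, 0.05
L, nk = 8, 24

tx = np.array([[0, 1], [1, 0]], complex)
ty = np.array([[0, -1j], [1j, 0]], complex)
tzp = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)

def gamma(E):  # quasiparticle width, Eq. (Gamma)
    E = np.abs(E) + 1e-9
    return np.where(E < Tstar, E**2 / TK, E / (1 + np.log(E / TK))**2)

def slab(kx, ky):
    ec = -(t / 4) * (np.cos(kx) + np.cos(ky)) - mu
    efk = (tf / 4) * (np.cos(kx) + np.cos(ky)) + ef
    Phi = V * (np.sin(kx) * tx + np.sin(ky) * ty)   # in-plane form factor
    A = np.block([[ec * I2, Phi.conj().T], [Phi, efk * I2]])
    B = np.block([[-tz * I2, 1j * Vz * tzp],        # layer l -> l+1
                  [1j * Vz * tzp, tfz * I2]])       # (assumed structure)
    H = np.zeros((4 * L, 4 * L), complex)
    for l in range(L):
        H[4*l:4*l+4, 4*l:4*l+4] = A
        if l < L - 1:
            H[4*l:4*l+4, 4*l+4:4*l+8] = B
            H[4*l+4:4*l+8, 4*l:4*l+4] = B.conj().T
    return H, Phi

def conductance(Vbias):
    g = np.zeros_like(Vbias)
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    for kx in ks:
        for ky in ks:
            H, Phi = slab(kx, ky)
            E, U = np.linalg.eigh(H)
            G = gamma(E)
            for s in range(2):  # tip spin: w = Tc*c_1 + Tf*(Phi f_1)
                w = np.zeros(4 * L, complex)
                w[s] = Tc
                w[2:4] = Tf * Phi[s, :]
                amp2 = np.abs(w.conj() @ U)**2      # surface weights
                g += (amp2 * G / np.pi /
                      ((Vbias[:, None] - E)**2 + G**2)).sum(axis=1)
    return g / nk**2

Vb = np.linspace(-1.0, 1.0, 201)
gV = conductance(Vb)  # dI/dV at T -> 0, up to a constant prefactor
\end{verbatim}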
\section{Conclusions and outlook}
In this paper I have analyzed the low-temperature properties of generic heavy-fermion semiconductors using the large-$N$ slave-boson theory. Specifically, I have provided evidence for the formation of topologically non-trivial electronic states at the surface of these materials. The results reported in this paper are in agreement with those obtained using a different approach based on the low-energy analysis of the Anderson lattice model. The phase diagram obtained within the mean-field analysis implies that the strong topological insulating phase is likely to be observed in materials with high, i.e. cubic,
point-group symmetry. In this case, the analysis of the crystalline-field-split $f$-ion multiplets for the valence configurations corresponding to the total angular momentum $J=5/2$ or $J=7/2$ shows that only $N=4$ degenerate multiplets can give rise to a nodeless hybridization gap. Such a scenario can be realized in the heavy-fermion semiconductor SmB$_6$. Indeed, the finite
metallic conductivity below $T\simeq 5$K may serve as a signature of topologically protected metallic surface states. The fact that the conductivity grows with the improvement of the quality of the samples qualitatively supports this idea. Indeed, recent theoretical works \cite{STIdis1,STIdis2} have explicitly demonstrated that the presence of relatively strong disorder on the surface of a strong topological insulator will substantially disrupt these states. The physical reason for such behavior is that
the impurity-induced states penetrate well below the surface, leading to diffusive behavior of the surface transport. In that regard,
when the strength of the disorder potential is comparable to the bulk gap, the 2D Dirac theory description of 3D topological
insulators is not valid \cite{STIdis2}.
An important issue for subsequent study is the role of fluctuations around the mean-field solution. For the band insulator, one may argue
that the fluctuation effects (generally of the order of $1/N$) do not lead to any significant changes, providing only small corrections to the gap itself. The situation becomes drastically different when the metallic surface states are present. In particular, fluctuations in the slave-boson fields
lead to effective interactions between the conduction and $f$-electrons, which in principle may lead to the opening of a gap at the
surface as well. The detailed investigation of these effects goes beyond the scope of this paper and I leave it for the future.
\section{Acknowledgments}
I thank Piers Coleman and Pedro Schlottmann for useful comments.
The author acknowledges financial support from the Ohio Board of Regents Research Incentive Program grant OBR-RIP-220573.
This work was supported in part by the National Science Foundation under grant No. 1066293 and the hospitality of the Aspen Center for Physics (M.D.)
\section{Introduction}
The relation between stimuli and sensation is one of the main research topics
in Psychophysics \cite{chescheider2013}. Stimuli of different sources and
intensities can cause different responses in the sensory system
\cite{Stevens1975}. In the early 19th century, Weber and Fechner proposed that
the stimulus-response relation corresponds to a logarithmic function
\cite{kinouchi06,Levina2007}. In the 1950s, Stevens proposed that the
stimulus-response relation is given by a power law \cite{stevens08}. Due to
physiological and anatomical limitations, the relation between stimulus and
response has upper and lower limits. The difference between the stimuli
producing the smallest and the largest sensation defines the dynamic range (DR)
associated with each sense \cite{murray93}. In the context of biological
systems, e.g. neuronal networks, the DR corresponds to the ability to
differentiate the intensity of an external stimulus
\cite{gollo09}.
The DR is proportional to the logarithm of the ratio between the largest value
of the externally applied stimulus at which the response is close to the
saturation of the firing rate and the smallest value of the externally applied
stimulus that is able to modify the firing rate. The human sense of sight
can perceive changes over about ten decades of luminosity, and hearing covers
twelve decades in a range of sound-pressure intensities
\cite{stevens08,chialvo06}. The DR of human vision plays an important role
in the design of display devices \cite{reinhard10}, whereas in the
case of hearing it is relevant to cochlear implants \cite{spahr07}.
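Quantitatively, if $S_{0.1}$ and $S_{0.9}$ denote the stimulus intensities that produce, respectively, $10\%$ and $90\%$ of the saturation response, the dynamic range is usually defined in decibels as \cite{kinouchi06}
\begin{equation}
\Delta = 10\log_{10}\left(\frac{S_{0.9}}{S_{0.1}}\right).
\end{equation}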
The DR of a neuronal network increases with the network size until it reaches a
saturation value \cite{batista14}. The increase of the DR value is also
associated with an increase of the number of excitatory chemical synapses
\cite{viana14,iarosz2012}. Borges et al. \cite{borges} reported the
complementary effect of chemical and electrical synapses on the enhancement of the
DR. Protachevicz et al. showed that chemical synapses can enhance the DR
of a neural network submitted to external stimuli \cite{Protachevicz2018b}.
The mammalian brain is composed of excitatory and inhibitory neurons
\cite{adini1997}. The balance between excitation and inhibition plays a crucial
role in the transmission of information, signal propagation, and regular firing
of neurons in many brain areas \cite{kandel00,buzsaki06}. Neuronal networks
with excitatory and inhibitory neurons have been considered to describe the
dynamics of primary visual cortex \cite{adini1997,kurant06}, cortical firing
patterns \cite{Borges2017,Prota2018,Borges2020,protachevicz19}, and synaptic
plasticity mechanisms \cite{Borges2017b,Borges2017c,Lameu2018,Lameu2018b}.
Kinouchi and Copelli \cite{kinouchi06} proposed a model of an excitable network
based on Erd\"os-R\'enyi (ER) random graphs \cite{erdos59}. They
demonstrated that the DR is maximised at the critical point of a
nonequilibrium phase transition. A theoretical approach to study the effects of
network topology on the DR was presented by Larremore et al.
\cite{Larremore2011,Larremore2011b}, in which only excitatory nodes were
considered. Pei et al. \cite{pei2012} investigated the collective dynamics of
excitatory-inhibitory excitable networks in response to external stimuli.
They found that the DR is maximised at the critical point of the phase
transition, which depends only on the excitatory connections.
The spiking dynamics of a network of excitable excitatory nodes resulting from
an initial stimulus ceases after a typically short time at a critical point
\cite{kinouchi06}. However, when inhibitory nodes are considered, the
collective dynamics can become self-sustaining, as shown by Larre\-more et al.
\cite{Larremore2014}. They showed this behaviour considering an additive
probabilistic model, where excitatory nodes increase the probability of
activation of their nei\-ghbours, and inhibitory nodes decrease it.
In addition, at the critical point the collective dynamics can become
self-sustained (ceaseless dynamics) if the fraction of inhibitory nodes is
greater than a threshold. However, their model did not consider a
refractory period and, for this reason, the neuronal firing rate obtained is
higher than that observed experimentally. When refractoriness is included in the
model, it is possible to obtain the critical point leading to realistic firing
patterns \cite{copeli2019,mauricio2019}.
In this work, we investigate the criticality and dynamic range of a cellular
automaton modelling a neuronal network in which the neurons are connected by
means of excitatory and inhibitory chemical synapses \cite{viana14,borges}. In
order to understand the relationship between maximisation of the DR and the
critical self-sustainable activity, we consider a refractory period in the model
like the one proposed by Larremore et al. \cite{Larremore2014}. With the
refractory period, the model exhibits more realistic firing rates and critical
self-sustained activity. In our simulations, we observe a transition from
ceaseless dynamics to ceasing activity when the mean connection degree of the
network is increased. We observe that the external stimulus masks the effects
of self-sustained activity in the region where the DR is calculated, and the
firing rate is the same for ceaseless dynamics and ceasing activity.
Furthermore, we obtain an analytical expression for the DR as a function of the
mean excitatory and inhibitory synaptic intensities. In a network with a large
number of connections, we show that the maximal DR value occurs at the critical
points, where excitatory and inhibitory inputs are approximately equal. In this
situation, the neuronal network is in a balanced state. Shew et al. \cite{shew}
showed experimentally that the DR is maximised when the excitatory and
inhibitory synaptic inputs are balanced. Our work thus provides theoretical
explanations for this experimental result.
The paper is organised as follows. In Section $2$, we introduce the model.
Section $3$ presents our analytical results about the dynamic range. In the
last Section, we draw our conclusions.
\section{Model}
We consider an $n$-state cellular automaton model composed of $N$ excitable
elements. The state of each neuron $i$ is described by the variable $s_i$
($i=1,\ldots,N$). In this representation, each neuronal state is
associated with the neuronal activity \cite{kinouchi06,copelli02}. The resting
state is given by $s_i=0$, the excited state by $s_i=1$, and $s_i = 2,\ldots,n-1$
are the refractory states. The elements cannot be excited while in the
refractory states. In the model, we consider excitatory and inhibitory neurons
\cite{Larremore2014}. Excitatory and inhibitory inputs come from the
excitatory and inhibitory neurons, respectively. To model the interaction of
the synaptic inputs, we consider a probability function. The activation
probability of a node in the resting state is given by the function
$G(x_i)$ \cite{Larremore2014}
\begin{equation}
G(x_i) = G \left(\sum_{j=1}^{N} A_{ij} \;\delta (s_j (t),1)\right),
\end{equation}
where $G(x_i)=0$ for $x_i \leq 0$, $G(x_i)=x_i$ for $0 < x_i < 1$, and
$G(x_i)=1$ for $x_i \geq 1$. $G(x_i)$ is a piecewise linear function, known as
transfer function, with three pieces. The weighted matrix $\mathbf{A}$ has
elements $A_{ij}>0$ for excitatory connections and $A_{ij}<0$ for inhibitory
connections. The Kronecker delta $\delta(a,b)$ is equal to $1$ when $a=b$ and
zero otherwise. The dynamics of both the excited and the refractory states are
deterministic. If $s_i=1$, in the next time step the state is updated to
$s_i=2$, and so forth, until $s_i=n-1$, returning to the resting state $s_i=0$
in the next time step. The fractions of excitatory and inhibitory nodes
correspond to $f_{\rm ex}$ and $f_{\rm in}$, respectively, and the condition
$f_{\rm ex}+f_{\rm in}=1$ is always satisfied. In order to simplify the analysis,
we arrange the $i$ indexes as $1\leq i\leq f_{\rm ex}N$ for excitatory nodes,
whereas $f_{\rm ex}N+1\leq i\leq N$ for inhibitory ones. Fig. \ref{Fig1} displays
a schematic illustration of (a) the neuronal dynamics for $n=3$ states, (b) a
neuron receiving chemical synaptic inputs and (c) the function $G(x_i)$ as a
function of the sum of all synaptic inputs.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.5]{figure1.eps}
\caption{Representation of the neuronal activity by a cellular automaton with
$n=3$ states. (a) Illustrative membrane potential for each neuron $i$, where
$s_i$ represents the resting ($s_i=0$), the active ($s_i=1$), and the refractory
($s_i=2$) states. (b) Chemical synaptic inputs arriving at neuron $i$. Red
triangles and blue squares represent the excitatory and inhibitory inputs,
respectively. (c) The neuronal activation probability $G(x_i)$ is given by
a function of all chemical inputs arriving at neuron $i$ at time $t$.}
\label{Fig1}
\end{figure}
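For concreteness, a single synchronous update of this automaton can be sketched
in a few lines of Python. This is an illustrative sketch only; the network
size, mean degree, and synaptic weights below are arbitrary demonstration
values rather than the values used in our simulations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K, n = 1000, 50, 3                       # illustrative sizes
f_ex, S_ex, S_in = 0.8, 0.03, 0.02          # illustrative weights

# Erdos-Renyi weighted matrix with connection probability K/(N-1);
# columns 0..f_ex*N-1 are excitatory senders, the rest inhibitory.
A = (rng.random((N, N)) < K / (N - 1)).astype(float)
np.fill_diagonal(A, 0.0)                    # no self-connections
n_ex = int(f_ex * N)
A[:, :n_ex] *= S_ex
A[:, n_ex:] *= -S_in

def step(s, A, n, eta=0.0):
    """One synchronous update; s[i] takes values in {0,...,n-1}."""
    x = A @ (s == 1)                   # summed synaptic input x_i
    G = np.clip(x, 0.0, 1.0)           # piecewise-linear transfer function
    p_fire = eta + G - eta * G         # stimulus OR synaptic activation
    nxt = np.where(s > 0, (s + 1) % n, 0)   # deterministic refractory chain
    fire = (s == 0) & (rng.random(len(s)) < p_fire)
    return np.where(fire, 1, nxt)

s = (rng.random(N) < 0.004).astype(int)     # about 0.4% initially active
for _ in range(100):
    s = step(s, A, n)
print("p(t) =", np.mean(s == 1))
\end{verbatim}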
The neuronal response at a given time $t$ can be qu\-antified using the density
of spiking neurons
\begin{equation}
p(t)=\frac{1}{N}\sum_{i=1}^{N}\delta(s_i(t),1),
\end{equation}
which is interpreted as the probability for a random neuron to be in the
excited state at time $t$. With the time series of $p(t)$, we calculate the
average firing rate
\begin{equation}
F=\overline {p(t)}=\frac{1}{T}\sum_{t=1}^{T}p(t),
\end{equation}
where $T$ is the time window chosen to calculate the average.
In this work, we consider random networks and for this case the update
equations are the same for both excitatory and inhibitory nodes \cite{pei2012}.
Our networks are built as Erd\"os-R\'enyi random graphs with connection
probability equal to $K/(N-1)$, where $K$ is the average degree of connections
of the network. Assuming that the events of the neighbours of an excited node
are statistically independent for large $t$, we obtain the following mean field
map for the density of spiking neurons
\begin{equation}\label{mapp}
p(t+1)= [1-(n-1)p(t)] (\eta+G(x)-\eta G(x)),
\end{equation}
where the external stimulus $\eta=1-\exp{(-r\Delta t)}$ is the activation
probability due to a Poisson process with mean perturbation rate $r$ in the
time interval $\Delta t$ \cite{kinouchi06}. In our simulations, we use $\Delta t=1$.
Setting the weights $A_{ij}=S_{\rm ex}$ for the excitatory connections and
$A_{ij}=-S_{\rm in}$ for the inhibitory ones, when the network reaches a
stationary state, the mean value of $x_i$ is given by
\begin{equation}
\langle x\rangle=f_{\rm ex}KS_{\rm ex}p(t)-f_{\rm in}KS_{\rm in}p(t).
\end{equation}
Defining $\sigma_{\rm ex}=KS_{\rm ex}$ and $\sigma_{\rm in}=KS_{\rm in}$, we obtain
\begin{equation}
\langle x\rangle=(f_{\rm ex}\; \sigma_{\rm ex}-f_{\rm in} \; \sigma_{\rm in}) \;p (t).
\end{equation}
In the stationary state we have $p(t+1)=p(t)=p^*$ and $F \approx p^*$.
Substituting in Eq. (\ref{mapp}), and considering the case of no external
perturbation ($\eta = 0$), we get
\begin{equation}
F_0=(1-(n-1)F_0) \; G(x).
\end{equation}
In the regime $0<(f_{\rm ex} \; \sigma_{\rm ex}-f_{\rm in} \; \sigma_{\rm in})F<1$,
the model implies $G(x) = x$, and therefore
\begin{equation}
F_0=(1-(n-1)F_0) \; (f_{\rm ex}\; \sigma_{\rm ex}-f_{\rm in} \; \sigma_{\rm in})F_0.
\end{equation}
Solving for $F_0$ we get
\begin{equation}\label{F0}
F_0=\frac{1-(f_{\rm ex}\sigma_{\rm ex}-f_{\rm in}\sigma_{\rm in})^{-1}}{n-1}.
\end{equation}
There is a phase transition from ceasing activity ($F_0 = 0$) to ceaseless
activity ($F_0 > 0$). In the critical point of this phase transition
($F_0\rightarrow 0$), we obtain
\begin{equation}\label{sigmac}
\sigma_{\rm in}=\frac{f_{\rm ex}\sigma_{\rm ex}-1}{f_{\rm in}}.
\end{equation}
This relation shows that a critical point exists in the model only if
$f_{\rm ex}\sigma_{\rm ex}\geq 1$, implying the necessity of a minimum fraction
of excitatory neurons. In addition, we observe that
$f_{\rm ex}\sigma_{\rm ex}\approx f_{\rm in}\sigma_{\rm in}$ for
$\sigma_{\rm ex} \gg 1$. Then, for a highly connected network
($\sigma\propto K$), we obtain approximately the same amount of excitatory and
inhibitory mean inputs from probabilistic synapses. In this situation, our model
exhibits a state which is critical and balanced.
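The fixed-point analysis can be checked numerically. The following minimal
Python sketch, with illustrative parameter values, iterates the mean-field map
of Eq. \ref{mapp} with $\eta=0$ and compares the stationary density with
Eq. \ref{F0} and the critical point of Eq. \ref{sigmac}.
\begin{verbatim}
import numpy as np

n, f_ex, f_in, sigma_ex = 3, 0.8, 0.2, 2.5
sigma_in_c = (f_ex * sigma_ex - 1) / f_in      # critical point relation

def F0(sigma_in):
    # stationary density from the closed-form solution
    lam = f_ex * sigma_ex - f_in * sigma_in
    return (1 - 1 / lam) / (n - 1) if lam > 1 else 0.0

def iterate_map(sigma_in, T=5000, p=0.004):
    lam = f_ex * sigma_ex - f_in * sigma_in
    for _ in range(T):
        G = min(max(lam * p, 0.0), 1.0)        # transfer function at <x>
        p = (1 - (n - 1) * p) * G              # mean-field map, eta = 0
    return p

for s_in in (4.0, sigma_in_c, 6.0):            # super-, critical, subcritical
    print(s_in, F0(s_in), iterate_map(s_in))
\end{verbatim}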
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.28]{figure2.eps}
\caption{Time series of the density of spiking neurons for subcritical (black
line), critical (red line), and supercritical (blue line) values of
$\sigma _{\rm in}$ for (a) $\sigma_{\rm ex}=1.5$ and (b) $\sigma_{\rm ex}=2.5$. In
(c), we plot the average firing rate as a function of $\sigma_{\rm in}$ for
$\sigma_{\rm ex}=1.5$ (black circles), $\sigma_{\rm ex}=2.0$ (red circles) and
$\sigma_{\rm ex}=2.5$ (blue circles). The points are obtained from numerical
simulations while the curves are given by Eq. \ref{F0}. The parameters are
$N=10^5$, $K=10^4$, $r=0$, $n=3$, and $f_{\rm ex}=0.8$.}
\label{Fig2}
\end{figure}
In this work, we distinguish three theoretical firing regi\-mes that depend on
Eq. \ref{F0} and on the parameters $f_{\rm ex}$, $\sigma_{\rm ex}$, $f_{\rm in}$,
and $\sigma_{\rm in}$:
(i) if $F_0 < 0$ we have a subcritical regime; (ii) if $F_0 = 0$ we have a
critical regime; and (iii) if $F_0 > 0$ we have a supercritical regime.
In Figs. \ref{Fig2}(a) and \ref{Fig2}(b), we show the density of spiking
neurons without external perturbation as a function of the time for values of
$\sigma_{\rm in}$ in the subcritical, critical, and supercritical regime for
different values of $\sigma_{\rm ex}$. We choose randomly $0.4 \%$ of neurons
to be active ($s_i=1$) at $t=0$. In Fig. \ref{Fig2}(c), we show the
relation between $F_0$ and $\sigma_{\rm in}$ for some values of $\sigma_{\rm ex}$.
We verify that the theoretical results given by Eq. \ref{F0} are in agreement
with our numerical simulations.
When a great number of neurons present $x_i<0$, $F$ can remain positive for a
long time span in the numerical simulations, even at the critical point. In Fig. \ref{Fig2}, we see that
the spiking activity ceases rapidly when $\sigma_{\rm ex}=1.5$, whereas it is
persistent at the critical point when $\sigma_{\rm ex}=2.5$. We verify that the
activity is not persistent if we increase the average degree of connections.
Fig. \ref{Fig3}(a) exhibits the density of spiking neurons considering
$K=2\times 10^4$, for subcritical, critical (three different initial conditions)
and supercritical values of $\sigma_{\rm in}$. In Fig. \ref{Fig3}(b), we plot the
distribution of $x_i$ for $1000$ time steps,
$N=10^5$, $K=10^4$ (blue), and $K=2\times 10^4$ (red). In both cases, we find
the same $\langle x\rangle$ and $F\approx 0.0031$. In the first case, approximately
$2.18 \%$ of $x_i$ present negative values. In the second case, about $5.00\%$
of $x_i$ are less than zero. We observe that greater values of
$S_{\rm ex}=\sigma_{\rm ex}/K$ and $S_{\rm in}=\sigma_{\rm in}/K$ contribute to the
persistent activity at the critical point.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.37]{figure3.eps}
\caption{(a) Time series of the density of spiking neurons for the subcritical
(black line), critical (red line), and supercritical (blue line) values of
$\sigma _{\rm in}$ with $\sigma_{\rm ex}=2.5$. (b) Distribution of $x_i$ values for
the average degree of connections $K=2\times 10^4$ (red) and $K=1\times 10^4$
(blue). Parameters are $N=10^5$, $f_{\rm ex}=0.8$, and $\sigma_{\rm ex}=2.5$.}
\label{Fig3}
\end{figure}
\section{Dynamic Range (DR)}
The behaviour of the average firing rate ($F$) as a function of the external
stimulus ($r$) shows a minimum and a maximum saturation ($F_0$ and $F_{\rm max}$,
respectively) for a range of $r$ values, as shown in Fig. \ref{Fig4}. The
DR is defined as
\begin{equation} \label{DR}
\Delta=10\log_{10}\frac{r_{\rm high}}{r_{\rm low}},
\end{equation}
where $\Delta$ is the stimulus interval (measured in dB) in which changes in
$r$ can be perceived as changes in $F$, disregarding stimuli that cause
responses too small to be distinguished from $F_0$ or too close to the
saturation $F_{\rm max}$ \cite{kinouchi06}. The interval
[$r_{\rm low}$,$r_{\rm high}$] is found
from its correspondent in $F$, [$F_{\rm low}$,$F_{\rm high}$], where
$F_{\rm high}=F_0+0.95(F_{\rm max}-F_0)$ and $F_{\rm low}=F_0+0.05(F_{\rm max}-F_0)$.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.29]{figure4.eps}
\caption{Mean firing rate as a function of the stimulus intensity.}
\label{Fig4}
\end{figure}
For $0<(f_{\rm ex}\; \sigma_{\rm ex}-f_{\rm in} \; \sigma_{\rm in})F<1$ (in the
stationary state) we approximate Eq. \ref{mapp} as
\begin{eqnarray}
F&=&[1-(n-1)F] [\eta + (f_{\rm ex}\; \sigma_{\rm ex}-f_{\rm in} \; \sigma_{\rm in})F \nonumber \\
&-& (f_{\rm ex}\; \sigma_{\rm ex}-f_{\rm in} \; \sigma_{\rm in})\eta F].
\label{eq13}
\end{eqnarray}
Rearranging the terms, we obtain
\begin{eqnarray}
& &\left[(n-1)(f_{\rm ex}\; \sigma_{\rm ex}-f_{\rm in} \; \sigma_{\rm in}) (1-\eta)\right]F^2 \nonumber \\
&+&\left[1+(n-1)\eta-(f_{\rm ex}\; \sigma_{\rm ex}-f_{\rm in} \; \sigma_{\rm in})(1-\eta)\right]F \nonumber \\
&-& \eta= 0. \label{eq14}
\end{eqnarray}
As $\eta$ depends on $r$, by solving Eq. \ref{eq14}, we are able to determine
the dependence of the average firing rate on the mean perturbation rate
$r$, as well as its dependence on all the parameters of the network.
In Fig. \ref{Fig5}(a), we plot $F$ as a function of $r$ for subcritical,
critical and supercritical values of $\sigma_{\rm in}$. The lines represent the
theoretical values from the solution of expression \ref{eq14} and the symbols
are obtained through numerical simulations. In the inset of Fig. \ref{Fig5}(a), we
show a magnification to demonstrate that there are differences between the
theoretical and the numerical values of $F$ for $r$ values outside the region
where the DR is calculated (green).
For a cellular automaton with $n$ states, the maximum average firing rate is
given by $F_{\rm max}=1/n$. Using $F_0$ from Eq. \ref{F0} and $F_{\rm max}$,
$F_{\rm low}$ and $F_{\rm high}$ can be obtained. Then, $\eta _{\rm low}$ and $\eta _{\rm high}$ can be
calculated directly by
\begin{equation} \label{eta_lh}
\eta _{\rm low,high}=\frac{\lambda F_{\rm low,high}}{1-\lambda F_{\rm low,high}}
\left[\frac{1}{\lambda-(n-1)\lambda F_{\rm low,high}}-1\right],
\end{equation}
where we substitute $\lambda=f_{\rm ex}\sigma_{\rm ex}-f_{\rm in}\sigma_{\rm in}$
for convenien\-ce. Now we calculate $r_{\rm low}$ and $r_{\rm high}$ according to
\begin{equation} \label{r_lh}
r_{\rm low,high}=-\ln |1-\eta_{\rm low,high}|.
\end{equation}
Using Eqs. \ref{F0}, \ref{eta_lh}, and \ref{r_lh}, together with the
expressions for $F_{\rm max}$, $F_{\rm low}$, and $F_{\rm high}$, we calculate the
dynamic ran\-ge.
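As a consistency check, the analytical DR can be evaluated by chaining
Eqs. \ref{F0}, \ref{eta_lh}, and \ref{r_lh}. The sketch below is illustrative
Python; it uses the critical pair $\sigma_{\rm ex}=1.5$ and $\sigma_{\rm in}=1$
for $f_{\rm ex}=0.8$.
\begin{verbatim}
import numpy as np

n, f_ex, f_in = 3, 0.8, 0.2
sigma_ex, sigma_in = 1.5, 1.0              # critical pair for f_ex = 0.8
lam = f_ex * sigma_ex - f_in * sigma_in    # lambda = 1 at criticality

F0 = (1 - 1 / lam) / (n - 1) if lam > 1 else 0.0
Fmax = 1.0 / n
Flow = F0 + 0.05 * (Fmax - F0)
Fhigh = F0 + 0.95 * (Fmax - F0)

def eta_of(F):
    # analytic eta(F) from the stationary mean-field relation
    return lam * F / (1 - lam * F) * (1 / (lam - (n - 1) * lam * F) - 1)

r_low = -np.log(abs(1 - eta_of(Flow)))     # r = -ln|1 - eta|
r_high = -np.log(abs(1 - eta_of(Fhigh)))
print("Delta =", 10 * np.log10(r_high / r_low), "dB")
\end{verbatim}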
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.19]{figure5.eps}
\caption{(a) Average firing rate ($F$) as a function of the mean perturbation
rate ($r$). The black, red, and blue symbols correspond to subcritical,
critical and supercritical values of $\sigma _{\rm in}$, respectively. (b)
Dynamic range as a function of $\sigma_{\rm in}$ for $\sigma_{\rm ex}=1.5$. The
coloured circles are obtained by means of simulations and the black lines
represent the theoretical results from the analytical expression. The dynamic
range shown in (b) is indicated on the $\sigma_{\rm ex} \times \sigma_{\rm in}$
parameter space by a dashed line in (c). We consider $N=10^5$, $K=10^4$, and
$f_{\rm ex}=0.8$.}
\label{Fig5}
\end{figure}
In Fig. \ref{Fig5}(b), we compare our numerical and the theoretical results.
We verify that the maximum DR occurs for $\sigma _{\rm in}=1$, which is the
critical point for the considered parameters ($\sigma_{\rm ex}=1.5$). In Fig.
\ref{Fig5}(c), the color scale represents the value of DR for each pair
$(\sigma _{\rm ex},\sigma _{\rm in})$. The dashed line indicates the range taken
in (b). From the figure, we see that the maximum value of DR follows the line
given by the critical point expression $\sigma_{\rm in}=4\sigma_{\rm ex}-5$
(Eq. \ref{sigmac}). Since
the ratio of excitatory to inhibitory neurons is $4$, the relative difference
between the excitatory and inhibitory mean inputs approaches zero as
the $\sigma_{\rm ex}$ value increases. Therefore, the model shows both the
critical and the balanced state in a network with a great number of connections,
where the weights are not small. For instance, for $K=2 \times 10^4$ and
$S_{\rm ex}=0.5$, we obtain $\sigma_{\rm ex}=KS_{\rm ex}=10^4$ and the critical
$\sigma_{\rm in}=4\times 10^4-5$. The ratio between excitatory and inhibitory
input is $\frac{4\,\sigma_{\rm ex}}{\sigma_{\rm in}}\approx 1.0001$. In this
situation, the DR is maximal and the network is close to a balanced state.
\section{Conclusions}
The firing dynamics of a network of excitable excitatory nodes resulting from an
initial stimulus ceases after a typically short time at a critical point.
However, when inhibitory nodes are considered, the collective dynamics can become
self-sustained. In this work, we build a cellular automaton model with
excitatory and inhibito\-ry connections. In our network, we consider that the
connections have different weights. We find an expression that relates the mean
of excitatory and inhibitory weights at the critical point. We also calculate
an expression for the dynamic range and show that at the critical point it
reaches its maximal value.
Depending on the mean connection degree and coupl\-ing weights, the critical
points can exhibit ceasing or ceaseless dynamics (self-sustained activity).
However, the dynamic range is equal in both cases. We observe that the external
stimulus masks some effects of self-sustained activity in the region where the DR is
calculated. In these regions, the firing rate is the same for ceaseless
dynamics and ceasing activity. Furthermore, we show that at the critical point
the amount of excitatory and inhibitory inputs can be approximately equal in a
densely connected network. This excitatory-inhibitory balance was
experimentally reported by Shew et al. \cite{shew}.
In future works, we plan to consider other network topologies, such as
small-world and scale-free, to study the influence of inhibitory synapses on
the criticality of excitable neuronal networks.
\section*{Acknowledgment}
This study was made possible by partial financial support fr\-om the following
Brazilian government agencies: Fun\-da\c c\~ao Arauc\'aria, National Council
for Scientific and Technological Development, Coordination for the Improvement
of Higher Education Personnel, and S\~ao Pa\-ulo Research Foundation
(2015/07311-7, 2017/18977-1, 2018/03211-6, 2020/04624-2).
\section{Introduction}
Machine learning techniques are increasing our capacity to discover complex nonlinear patterns in high-dimensional data; see \cite{bishop2006pattern} and \cite{hastie2009elements}. The impressive predictive powers of machine learning have found useful applications in many fields. It is natural to reflect on whether and how these techniques can be applied to advance the field of discrete choice analysis.
Traditional machine learning techniques are built for prediction problems. Prediction, an \textit{associational} (or correlational) concept, can be addressed through sophisticated data fitting techniques (\citealp{pearl2000causality}).
Discrete choice models, on the other hand, are typically deployed in policy analysis settings (\citealp{manski2013public}). Policy analysis demands answers to questions that can only be resolved by establishing a sense of \textit{causation}. To draw conclusions requires that data be combined with sufficient domain knowledge assumptions.
Algorithms of systematic data-driven model selection and ideas of cross-validation and regularization are prominent in machine learning methodologies. The appeal of such notions and methods over the sometimes arbitrary specification decisions in traditional econometric models remains; see \cite{athey2018impact}.
The goal of this paper is two-fold. The first is to clearly lay out the main capabilities required of (discrete choice) models developed for policy analysis and to demonstrate some of the inadequacies of direct applications of off-the-shelf machine learning techniques to such settings. The second goal is to describe a framework where machine learning capabilities can be used to enhance the predictive powers of traditional discrete choice models without compromising their interpretability or suitability for policy analysis. We present two applications of this approach,
namely in automating the specification of the random component of the utility equations in nested logit (\citealp{aboutalebmsthesis}) and mixed logit (\citealp{aboutalebsparse}) models.
\paragraph{Organization of this paper}
\begin{itemize}
\item \textbf{Section 2} introduces three levels of questions of interest in a typical policy analysis setting. A primer on supervised machine learning is presented along with a reflection on the core methodological differences between theory-driven econometric models such as discrete choice models and the data-fitting methodologies of machine learning.
\item \textbf{Section 3} presents, in detail, typical capabilities required of models used for policy analysis and demonstrates the inadequacy of off-the-shelf supervised machine learning.
\item \textbf{Section 4} reviews recent attempts in the literature to apply machine learning techniques to discrete choice analysis.
\item \textbf{Section 5} identifies appealing capabilities of machine learning and presents the incorporation of such capabilities to the nested logit and logit mixture models.
\item \textbf{Section 6} summarizes the main conclusions and take-aways of this paper.
\end{itemize}
\section{Background}
\paragraph{The inference problem} Consider a population of interest whose members are characterized by features (covariates) $\textbf{x}$ in an input space $\mathcal{X}$ and outcome (response) $y$ in an output space $\mathcal{Y}$ with some joint probability distribution $\mathbb{P}(\textbf{x}, y)$ which is assumed to exist but is not necessarily known a priori.
The classical inference problem of interest is to infer outcome $y$ as a function of features $\textbf{x}$. This generally entails learning (some function of) the conditional probability distribution $\mathbb{P}(y|\mathbf{x})$.
The conditional distribution provides the researcher with a model of the population under study. Three questions could be asked of this model:
\begin{itemize}
\item[Q1] What is the distribution of $y$ conditional on some \textit{observed} value of $\textbf{x}_{obs}$?
\item[Q2] What is the distribution of $y$ conditional on an \textit{extrapolated} value $\textbf{x}_{ext}$ off the support of $\mathbb{P}(\textbf{x})$?
\item[Q3] What is the distribution of $y$ given an \textit{intervention} that sets the value of $\textbf{x}$ to $\textbf{x}_{int}$?
\end{itemize}
It will become clear throughout this paper that off-the-shelf supervised machine learning, unguided by theory, can only reliably address the first question, which is a prediction question, while policy analysis applications are typically also concerned with the second and third questions.
\paragraph{Supervised Machine Learning}
The paradigm of supervised machine learning is that of \textit{learning to predict by example}. Given an i.i.d sample of input/output pairs $\mathcal{D}=\{(\textbf{x}_i,y_i)\}_{i=1}^n$, called training data, the problem of supervised learning is that of finding a well-fitting function $\hat{f}:\mathcal{X} \rightarrow \mathcal{Y}$. The fitted function $\hat{f}$ is said to generalize well if $\hat{f}(\textbf{x})$ is a good estimate of $y$ on data pairs $(\textbf{x},y)$ drawn according to $\mathbb{P}(\textbf{x}, y)$ and not limited to the specific pairs in the training sample $\mathcal{D}$.
This paradigm requires specifying a loss function $\ell: \mathcal{Y}\times\mathcal{Y} \rightarrow [0,\infty)$ for measuring the quality of predictions. This provides an objective measure for choosing $f$. The risk of a function $f$ is defined as the expected loss over the distribution of values the data pairs can take:
\begin{equation}
{R}(f)=\int_{\mathcal{X}\times \mathcal{Y}}\ell(y,f(\textbf{x}))\mathbb{P}(d\textbf{x},dy)
\end{equation}
The squared loss $\ell(y,f(\textbf{x}))=(y-f(\textbf{x}))^2$ is typical for prediction tasks, where $\mathcal{Y}=\mathbb{R}$, and the logistic loss $\ell(y,f(\textbf{x}))=\log(1+\exp(-yf(\textbf{x})))$ is typical for classification tasks, where $\mathcal{Y}= \{-1,1\}$.
The problem of supervised learning is then to solve:
\begin{equation}
\min_{f} R(f)
\end{equation}
given only $\mathcal{D}$. An exact solution for general functions $f$ and losses $\ell$ is clearly not possible. Another complication is that the joint distribution over which the expectation is taken is not known a priori. A path for tractability then is to restrict functions $f$ to some hypothesis space $\mathcal{H}$, for example the linear functions $f(\textbf{x})=\beta^T\textbf{x}$, and to replace the expected risk by the \textit{empirical} risk calculated from the data:
\begin{equation}
\hat{R}(f)=\frac{1}{n}\sum_{i=1}^n\ell(y_i,f(\textbf{x}_i))
\end{equation}
The learning problem is then approximated by minimizing the empirical risk over a restricted hypothesis space $\mathcal{H}$:
\begin{equation}
\min_{f\subset \mathcal{H}} \hat{R}(f)
\end{equation}
Since the sampled data $\mathcal{D}$ are random and in practice the measurement pairs are noisy, if the hypothesis space is large relative to the sample size, it can happen that the empirical risk is not a good approximation to the expected risk. A typical behavior is that $f$ fits to noise in the observed sample and
\begin{equation}
\min_{f\subset \mathcal{H}} \hat{R}(f) \ll \min_{f} R(f)
\end{equation}
This phenomenon is known as over-fitting. A way of mitigating this is to consider a regularizer $G:\mathcal{H}\rightarrow [0,\infty)$ that penalizes complexity in $f$. The objective function in (4) is replaced by
\begin{equation}
\min_{f\subset \mathcal{H}} \hat{R}(f) +\lambda G(f)
\end{equation}
for $\lambda >0$ \cite{MIT9520}. The parameter $\lambda$ is determined through a procedure known as \textit{cross-validation}. The main idea is that since the empirical risk evaluated on the training data (training loss) is not a good approximation to the true risk, a random sample is held out from $\mathcal{D}$ and used to approximate the true risk at solutions to (6) for various values of $\lambda$. The evaluation of the empirical risk approximation (3) on this independent hold-out sample is called the validation loss. The optimal amount of regularization $\lambda$ is chosen so that the validation loss is minimized. Such a $\lambda$ balances the complexity and `generalizability' of the function $f$ for the given learning task. There is a trade-off: \textit{overly} complex functions $f$ tend to fit to noise and generalize poorly. The \textit{right} amount of penalty applied to the complexity gives a best-fitting generalizable model \cite{hasti2001elements}. Cross-validation is a method to determine this right amount of penalty. There are other approaches to prevent overfitting; these include model averaging (`boosting') and estimating separate models on subsamples of the data (`bagging') \cite{athey2018impact}.
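To make the procedure concrete, the following minimal sketch uses synthetic data; ridge regression stands in for the regularized problem (6), and a single hold-out split stands in for cross-validation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.zeros(10); beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + rng.normal(scale=1.0, size=200)

X_tr, y_tr = X[:150], y[:150]          # training sample
X_va, y_va = X[150:], y[150:]          # held-out validation sample

def ridge_fit(X, y, lam):
    # argmin_b (1/n)||y - X b||^2 + lam ||b||^2
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

losses = [(np.mean((y_va - X_va @ ridge_fit(X_tr, y_tr, lam))**2), lam)
          for lam in 10.0 ** np.arange(-4, 2)]
print("best (validation loss, lambda):", min(losses))
\end{verbatim}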
The model does not need to learn the entire conditional distribution to make a prediction. The conditional mean or quantile is usually sufficient depending on the choice of the loss function $\ell$. Indeed, the solution to (2) for the squared error loss can be shown to be simply the conditional expectation \cite{hasti2001elements}:
\begin{equation}
f(\textbf{x})= \mathbb{E}(y|\mathbf{x})
\end{equation}
This is also known as the regression function.
Depending on the use-case and the sample size, the hypothesis space $\mathcal{H}$ can be adapted to accommodate general functions with severe non-linearities. Two of the common possibilities include:
\begin{enumerate}
\item $f(\textbf{x})=\beta^T\phi(\textbf{x})$
\item $f(\textbf{x})=\phi(\beta^T\textbf{x})$
\end{enumerate}
Here $\phi(\cdot)$ is a non-linear function. Note that the latter choice can be iterated, $f(\textbf{x})=\phi(\beta_L^T\phi(\beta_{L-1}^T\ldots \phi(\beta_1^T\textbf{x})))$, to arrive at a basic multi-layer neural net \cite{MIT9520}.
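As an illustration of the second, iterated choice, a forward pass of such a network takes only a few lines; the weights below are random placeholders rather than fitted values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
phi = np.tanh                         # one possible nonlinearity

def f(x, layers):
    # f(x) = phi(B_L phi(B_{L-1} ... phi(B_1 x)))
    for B in layers:
        x = phi(B @ x)
    return x

layers = [rng.normal(size=(8, 5)),    # B_1
          rng.normal(size=(4, 8)),    # B_2
          rng.normal(size=(1, 4))]    # B_3
print(f(rng.normal(size=5), layers))
\end{verbatim}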
In addition to the choice of hypothesis space $\mathcal{H}$, there are two main modeling assumptions:
\begin{enumerate}
\item The data are drawn independently.
\item The data are identically distributed- there exists a \textit{fixed} underlying distribution.
\end{enumerate}
The appeal of supervised machine learning is in its ability to perform well on prediction tasks by fitting complicated and generalizable functional forms to discover sophisticated patterns in the data with little specification or input from the user. The success of supervised machine learning, however, hinges on some form of biased estimation. The bias is a direct result of regularization, which trades off parameter unbiasedness for lower prediction variance \cite{breiman2001statistical}.
The \textit{i.i.d} assumption holds so long as prediction is limited to features drawn according to the same fixed joint distribution that generated the data used in the training procedure. Supervised machine learning models are therefore excellent candidates for answering questions of the type:
\begin{quote}
Q1 What is the distribution of $y$ conditional on some observed value of $\textbf{x}_{obs}$?
\end{quote}
absent any interpretability considerations.
\paragraph{Discrete choice models and the econometric approach}
Discrete choice models deal with inference problems where the output space is discrete or categorical $\mathcal{Y}=\{1,2,3,...,T\}$ for $T\in\mathbb{N}$. The researcher observes choices made by a population of decision makers. Under the widely adopted random utility maximization framework \cite{mcfadden-1981}, each decision maker ranks the alternatives in the choice set in order of preference as represented by a utility function. Each alternative is characterized by a utility and is chosen if and only if its utility exceeds the utility of all other alternatives \cite{ben1985discrete}.
Each utility equation includes a random error term, because it is not possible to model every aspect of an alternative or the decision maker in the utility equation.
In contrast to the supervised machine learning approach, the traditional econometric approach to the inference problem is a more theory-driven process. This involves building a structural model for $\mathbb{P}(y|\mathbf{x})$--combining data with subject-matter assumptions and knowledge of the sampling process through which the data was obtained. These assumptions guide the specification of the \textit{systematic} component of the utility equations and the handling of potential selection bias or endogeneity. Under this paradigm, transparent models with a strong theoretical base are the ideal.
It is understood that there might well be present countless influences, non-linearities, missing attributes and heterogeneities that are unaccounted for in the systematic part. A stochastic or random component will also need to be incorporated to account for such aspects. A few alternative model specifications are estimated on the full dataset and statistical theory is used to determine goodness of fit. A main consideration of model specification and estimation is the recovery of unbiased, or at least consistent, estimates of the policy parameters of interest.
The parameters of the estimated models carry clear subject-matter interpretations. These are subjected to sanity checks (the signs and relative magnitudes for example) and a determination is made as to whether the systematic or random specifications need to be modified. Often, a number of revisions are required before the model is deemed fit-for-use from a policy analysis perspective.
In essence, econometric model building is an effort to create \textit{causal} models \cite{angrist2008mostly} \cite{greene2003econometric}, with the understanding that identifying causality from observational data is at best somewhat tentative and must be combined with assumptions founded on subject-matter knowledge and on knowledge of the sampling process \cite{manski2009identification}. From this standpoint, empirical fit is not the only consideration for model choice.
\section{Models for Analysis}
Discrete choice policy analysis aims to predict behaviour in counterfactual settings where the attributes of alternatives or the characteristics of current decision makers change, new alternatives become available, existing ones become unavailable, or new decision makers arise \cite{manski2013public}. Policy analysis settings present hypothetical what-if questions such as ``\textit{what will happen if we raise transit fares?}". The answers to such questions requires models that can infer consequences of actions, i.e. models that capture a sense of causation. Different causal mechanisms have different implications for policy.
We motivate this discussion by quoting an example from \cite{manski2009identification}:
\begin{quote}
\textit{Suppose that you observe the almost simultaneous movements of a person and of his image in a mirror. Does the mirror image induce the person's movements, does the image reflect the person's movements, or do the person and image move together in response to a common external stimulus? Data alone can not answer this question. Even if you were able to observe innumerable instances in which persons and their mirror images move together, you would not be able to logically deduce the process at work. To reach a conclusion requires that you understand something of optics and of human behavior.}
\end{quote}
A model that only captures correlations or associations will rightly predict that an image will always appear whenever a person moves in front of a mirror. However, it cannot correctly infer the effect on the person's movements of an intervention--say, the shattering of that mirror. ``No image, therefore no motion!'' falsely concludes the correlational model with high confidence. Such is the kind of hypothetical what-if question posed in policy analysis (although typically of a more constructive type).
Supervised learning models are only optimized directly to capture correlations: a function $f$ is chosen so that the risk (2) is minimized over the joint distribution.
As the reflection problem shows, addressing what-if extrapolative or interventional questions based on observational data requires that these observations be combined with assumptions on the underlying generating mechanisms \cite{manski1993identification}:
\begin{quote}
\textbf{Data + Assumptions $\rightarrow{}$ Conclusions}
\end{quote}
The only other resolution being the initiation of a new sampling processes to collect experimental data, which is typically impractical in many policy settings \cite{manski2009identification}.
The next sections discuss features that are essential to models deployed in policy analysis settings. We argue that these models must provide meaningful extrapolations (Section 3.1), must provide answers to interventions (Section 3.2), and must be interpretable (Section 3.3).
\subsection{Extrapolation: Theory as a Substitute for Data}
Consider the demand $y$ of a commodity or service modelled as function of its price $x$ shown in Figure \ref{fig:extrapolation}. The goal is to determine how the demand will respond to changes in price perhaps due to a proposed tax. It is very typical that the range of values over which prices were observed is limited-- prices just do not change enough. The goal is to build a model relating demand to price, a function of $\mathbb{P}(y|x)$, and use this model to extrapolate values of $y$ for values of $x$ outside the range of historically observed prices.
The supervised machine learning paradigm is one of maximizing fit. A model will be chosen to capture the non-linear trend in the observed data-- perhaps the second-order polynomial shown in red in Figure \ref{fig:extrapolation}. This model, chosen to maximize empirical fit, is perfectly suitable for studying how the demand changes for different price points within the locality of historically observed prices. Extrapolations outside that range, without sufficient assumptions, are hard to justify, as we will make precise shortly.
An econometric approach to this problem will start with a theory-- that demand for a product responds negatively to increases in its price. The negative estimated slope of the simple linear model used, the blue line in Figure \ref{fig:extrapolation}, confirms the researcher's a priori expectations. Extrapolations based off this model are based on a theory which is most needed when making predictions outside the range of observed values. To quote Hal Varian \cite{varian1993use}:
\begin{quote}
\textit{Naive empiricism can only predict what has happened in the past. It is the theory---the
underlying model---that allows us to extrapolate.}
\end{quote}
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{EmphasizingFit.png}
\caption{The shape of an empirically fitted model is only governed by the cloud of training data points. Without meaningful restrictions, extrapolations off the training range are hard to justify.}
\label{fig:extrapolation}
\end{figure}
Model specifications that maximize fit as the only consideration are not enough to provide meaningful extrapolations. To make this argument more precise, consider the general inference setting described in Section 2, and suppose we seek to answer the second question identified:
\begin{quote}
Q2 What is the distribution of $y$ conditional on an extrapolated value $\textbf{x}_{ext}$ off the support of $\mathbb{P}(\textbf{x})$?
\end{quote}
The only way to infer $\mathbb{P}(y|\textbf{x}=\textbf{x}_{ext})$ at $\textbf{x}_{ext}$ outside the support of $\mathbb{P}(\textbf{x})$ is to impose assumptions enabling one to deduce $\mathbb{P}(y|\textbf{x}=\textbf{x}_{ext})$ from $\mathbb{P}(y|\textbf{x})$. For concreteness, consider the conditional mean $\mathbb{E}[y|\textbf{x}]$ and look at the two possible ways of its estimation: nonparametric and parametric.
Smoothness regularity assumptions such as continuity or differentiability that enable the \textit{nonparametric} estimation of $\mathbb{E}[y|\textbf{x}]$ from finite samples imply that $\mathbb{E}[y|\textbf{x}=\textbf{x}_1]$ is near $\mathbb{E}[y|\textbf{x}=\textbf{x}_2]$ when $\textbf{x}_1$ is near $\textbf{x}_2$. This assumption restricts the behaviour of $\mathbb{E}[y|\textbf{x}]$ locally. Let $\textbf{x}_0$ be the point on the support of $\mathbb{P}(\textbf{x})$ nearest to $\textbf{x}_{ext}$. It is not clear whether the distance between $\textbf{x}_0$ and $\textbf{x}_{ext}$ should be interpreted as small enough to be governed by these local restrictions. Extrapolation therefore requires an assumption that restricts the behaviour of $\mathbb{E}[y|\textbf{x}]$ globally. This enables the deduction of $\mathbb{E}[y|\textbf{x}=\textbf{x}_{ext}]$ from knowledge of $\mathbb{E}[y|\textbf{x}]$ at values of $\textbf{x}$ that are not necessarily near $\textbf{x}_{ext}$ \cite{manski2009identification}.
Recall from Section 2 that a \textit{parametric} estimation of $\mathbb{E}[y|\textbf{x}]$ is obtained by minimizing the squared-loss empirical risk over a restricted class of functions $f\subset \mathcal{H}$.
Values of $\textbf{x}$ outside the support of $\mathbb{P}(\textbf{x})$ have no bearing on the value of the empirical risk and therefore have no bearing on the shape of the fitted function outside the support of $\mathbb{P}(\textbf{x})$. In other words, without sufficient restrictions on $\mathcal{H}$, extrapolations off the support are arbitrary.
Global restrictions make assumptions about how the conditional distribution varies with $\textbf{x}$. These restrictions are chosen by the researcher in line with a priori subject-matter expectations on that relationship. Consider again the conditional mean $\mathbb{E}[y|\textbf{x}]$. The common linear regression assumption is to restrict $\mathbb{E}[y|\textbf{x}]$ to be linear. Other possible assumptions include restricting $\mathbb{E}[y|\textbf{x}]$ to be convex or monotone increasing (in all or some of the covariates $\textbf{x}$).
These and other restrictions enable \textit{meaningful} extrapolations off the support of $\mathbb{P}(\textbf{x})$.
From this perspective, the primary function of theory is to justify the imposition of global assumptions that enable extrapolation.
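The point is easily illustrated numerically. In the synthetic sketch below, a linear fit and a quadratic fit are nearly indistinguishable on the observed price range, yet nothing in the data disciplines the quadratic fit at an extrapolated price.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, size=50)               # observed prices
y = 10 - 3 * x + rng.normal(scale=0.3, size=50)  # demand, linear in truth

lin = np.polyfit(x, y, 1)    # linear global restriction
quad = np.polyfit(x, y, 2)   # flexible fit; curvature driven by noise

for price in (1.5, 4.0):     # on-support point vs extrapolation
    print(price, np.polyval(lin, price), np.polyval(quad, price))
\end{verbatim}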
\subsection{Intervention: Structural Assumptions Specify Invariant Aspects}
Suppose variables $x$ and $y$ are observed to be strongly positively correlated as in Figure \ref{fig:identification}. Does $x$ cause $y$? Is it the other way around? Or is there, perhaps, a confounding variable $u$ that causes both $x$ and $y$? Observational data alone can \textit{never} answer this question, even if the researcher had access to innumerable observations of pairs $(x,y)$. Yet the underlying data generating process needs to be uncovered before the researcher is able to answer interventional questions. Ignoring this step will lead to misleading conclusions.
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{IdentificationProblem.png}
\caption{Any number of data generating mechanisms may be consistent with available empirical evidence. The three alternative models on the right produce the same joint distribution of $x$ and $y$. Each model, however, has different implications on how the value of one variable will change in response to an \textit{interventional} policy changing the value of the other variable. This presents an identification problem. Observational data must be combined with structural assumptions, motivated by subject-matter knowledge of $x$ and $y$, for a resolution.}
\label{fig:identification}
\end{figure}
An excellent example is provided in \cite{athey2018impact}. Suppose the researcher has access to observational data of hotel room prices and their occupancy rates. Since hotels tend to raise their prices during peak season, occupancy rates are observed to be positively correlated with room prices. Without making any structural assumptions, these data can only answer prediction questions of the first type, for example, an agency seeking to estimate hotel occupancies based on published room rates. What if instead we ask of the model the impact of a proposed luxury tax on occupancy rates? The model will suggest that raising room prices will lead to higher occupancy rates! This is an instance of the logical fallacy: \textit{cum hoc ergo propter hoc} (with this, therefore because of this).
What went wrong? Evaluating the effect of interventional policies breaks the assumption of a fixed data generating process that underpins supervised machine learning. Structural assumptions that encode a sense of causality are therefore needed \cite{brockman2019possible}:
\begin{quote}
\textit{With regard to causal reasoning, we find that you can do very little with any form
of model-blind curve fitting, or any statistical inference, no matter how sophisticated the
fitting process is.}
\end{quote}
Supervised machine learning models, which only learn to capture correlations, cannot answer interventional questions, which require, in addition to data, strong structural assumptions. Prediction tasks are well managed by these models only under conditions similar to those of the training data $\mathcal{D}$. Recall that one of the assumptions of supervised machine learning models is that the data, $\mathcal{D}$, are identically distributed according to some \textit{fixed} joint distribution. The problem of answering interventional questions is that of making predictions under situations that are \textit{changing}-- the assumption that the joint distribution is fixed is not necessarily valid in the ``mutilated'' world.
Answering questions of the third type: \begin{quote}
Q3 What is the distribution of $y$ given an intervention that sets the value of $\textbf{x}$ to $\textbf{x}_{int}$?
\end{quote}requires combining data with sufficiently strong assumptions on the nature of the modeled world. Nothing in the specification of a joint distribution function $\mathbb{P}(x,y)$ identifies \textit{how} it is to change in response to interventions. This information must be provided by causal assumptions which identify relationships that remain invariant when external conditions change \cite{pearl2000causality}.
\subsection{Interpretability: Amenability to Scrutiny is a Prerequisite to Credibility}
The ultimate goal of analysis is to uncover insights on the behavior of a population under study--connecting observed data to reality, and to use those insights in forecasting and planning.
Any model is only a simplification of reality. It will include the salient features of a relationship of interest and will often require a number of sufficient maintained assumptions to meet the demands of policy analysis as discussed in earlier sections. The requirement that the model be used in answering ambitious introspective policy questions sets the bar high. For a model's recommendations to have credibility it must withstand scrutiny. This includes justifications for any assumptions made and an understanding of why the model's output is what it is.
Trust that the model's results are sensible must first be established before the model is applied to policy analysis. A model's interpretability is its gateway to establishing trust. Interpretable models are amenable to scrutiny-- a prerequisite to credibility.
Transparent models are the gold standard in interpretability. Transparency entails a full understanding of the model's mechanisms and assumptions. Each of the model's parameters admits intuitive subject-matter explanations. A wrong parameter sign, such as a positive coefficient for cost in a demand model, could be a strong cue that the model may be mis-specified. The researcher knows what is wrong and what to fix. Such an understanding confers a ``certificate of credibility'' to the model--a guarantee, in essence, that while the model's predictions may be imprecise, the results are `in the right direction'. With such credibility, trust is established and the model is suitable for policy analysis.
Black box models are much harder, if not impossible, to fully scrutinize. The parameters of such models are not identifiable and do not carry subject-matter interpretations. It is sometimes still possible to query these models and extract information in a post hoc analysis \cite{lipton2016mythos}. A major problem remains. When the output does not conform to a priori expectations and the results are counter intuitive, the parameters provide no clues as to what went wrong and what should be fixed. It is not clear whether the problem is in training, in method or because things have changed in the environment \cite{pearl2019limitations}.
\section{Direct Machine learning applications to discrete choice}
This section surveys efforts in the literature of applying machine learning paradigms and techniques to models of discrete choice.
\paragraph{Direct comparisons of fit} Several studies in the literature compare the predictive accuracy of machine learning models such as neural nets and support vector machines to classical discrete choice models (such as flat and nested logit models) in various applications including travel mode choice \cite{zhang2008travel} \cite{omrani2015predicting} \cite{hagenauer2017comparative}, airline itinerary choice \cite{lheritier2019airline}, and car ownership \cite{paredes2017machine}. The unanimous conclusion that machine learning models provide a better fit is hardly a surprise. The usability of these models for policy analysis is suspect as we have demonstrated in the previous section.
\paragraph{Post hoc analysis of black box models} A few studies consider the application of non-transparent models to discrete choice settings and rely on post hoc analysis of output for insight. \cite{van2019using} used a neural network to estimate the value of time distribution using stated choice experiments with a faster/more expensive alternative and a slower/cheaper alternative. The authors claim that this method can uncover the distribution of value of time and its moments without making strong assumptions on the shape of the distribution or the error terms, while incorporating covariates and accounting for panel effects.
\cite{wang2018using} proposes an empirical method to extract valuable econometric information from neural networks, such as choice probabilities, elasticities, and marginal rates of substitution. Their results show that when economic information is aggregated over the population or ensembled over models, the analysis can reveal roughly S-shaped choice probability curves, and result in a reasonable median value-of-time.
The authors admit, however, that at the disaggregate level, some of the results are counter-intuitive (such as positive cost and travel time effects on the choice probabilities, and infinite value of time).
\paragraph{Algorithms for big data} A number of researchers studied the use of specific optimization algorithms that are traditionally used to train machine learning models to facilitate the estimation of discrete choice models over large datasets.
\cite{lederreystochastic} introduced an algorithm called Window Moving Average - Adaptive Batch Size, inspired by Stochastic Gradient Descent, and used it to estimate multinomial and nested logit models. The improvement in likelihood is evaluated at each step, and the batch size is increased using smoothing techniques when the improvement is too low.
In the context of logit mixture models, \cite{braun2010variational} proposed a variational inference method for estimating models with random coefficients. Variational procedures were developed for empirical Bayes and fully Bayesian inference, by solving a sequence of unconstrained convex optimization problems. After comparing their estimators to those obtained from the standard MCMC - Hierarchical Bayes method \cite{allenby1997introduction} \cite{allenby1998marketing} \cite{train2009discrete} on real and synthetic data, the authors concluded that variational methods achieve accuracy competitive with MCMC at a small fraction of the computational cost. The same conclusions are found by several studies including \cite{bansal2019bayesian},\cite{depraetere2017comparison}, and \cite{tan2017stochastic}. \cite{krueger2019variational} extended this estimator to account for inter- and intra-consumer heterogeneity, however, they noted that the results were noticeably less accurate than those obtained from MCMC, mainly because of the restrictive mean-field assumption of variational Bayes.
\paragraph{Hybrid machine learning and discrete choice models}
\cite{sifringer2018let} introduced the Learning Multinomial Logit model, where the utility specification of a traditional multinomial logit is augmented with a non-linear representation arising from a neural net. The rationale behind this method was to divide the systematic part of the utility specification into an interpretable part (where the variables are chosen by the modeler), and a black-box part that aims at discovering a good utility specification from available data. This method relies on the fact that multinomial logit can be represented as a convolutional neural network with a single layer and linear activation functions.
\paragraph{Machine learning to inform model specification}\cite{bentz2000neural} showed that a feedforward neural network with softmax output units and shared weights can be viewed as a generalization of the multinomial logit model (MNL). The authors also indicated that the main difference between the two methods lies in the ability of neural nets to model non-linear preferences, without a priori assumptions on the utility function. The authors concluded that if the fitted function is not too complex, it is possible to detect and identify some low-order non-linear effects from the neural nets by projecting the function on subsets of the input space, and to use the results to obtain a better specification for the MNL.
\cite{van2019artificial} developed a neural net based approach to investigate decision rule heterogeneity among travelers. The neural nets were trained to recognize the choice patterns of four distinct decision rules: Random Utility Maximization, Random Regret Minimization, Lexicographic, and Random. This method was applied to a Stated Choice experiment on commuters’ value of time, and cross-validation was used to compare the results against those obtained from traditional discrete choice analysis methods. The authors concluded that neural nets can successfully recover decision rule heterogeneity.
\section{Discrete Choice Models with Machine Learning Capabilities}
How can machine learning paradigms be leveraged to advance the field of discrete choice? Our motivation for applications of machine learning to discrete choice is directed both by its limitations-- that without incorporating strong structural assumptions and addressing issues of interpretability, machine learning cannot be used for answering the extrapolative and interventional questions of policy analysis-- and by its strengths: machine learning provides \textit{flexibility in model specification} and \textit{systematic methods for model selection}.
So far, we have established that:
\begin{enumerate}
\item Fully data-driven methodologies need to be tempered with structural assumptions with respect to policy variables of interest.
\item Imposing meaningful subject-matter global restrictions on the hypothesis space $\mathcal{H}$ allows for meaningful extrapolations.
\item Structural assumptions are needed to establish causality from observational data.
\item Scrutiny, at least with respect to the policy variables, is required to assess the model's fitness for use.
\end{enumerate}
Domain knowledge typically informs such assumptions and restrictions and guides assessments of model suitability. Such knowledge is most applicable in specifying the systematic component of random utility discrete choice models and least applicable in determining the specification of the random component.
This identifies an area where machine learning paradigms can be leveraged, namely in specifying and systematically selecting the best random utility specification.
The systematic component is specified with a priori expectations on the signs and relative magnitudes of the parameters \cite{ben1985discrete}. For example, additional travel cost and time represent added disutility in travel demand, so the parameters corresponding to cost and time are expected to be negative in a linear specification of the model. The value of travel time, calculated as the relative magnitude of these parameters, is commonly used to assess model specification.
\paragraph{Where domain knowledge does not help}
While subject-matter knowledge informs the specification of the systematic utility equations, specifying random aspects of the model can be more challenging. For concreteness, we consider two examples: nested logit and logit mixture models.
First consider the problem of specifying the nesting structure in nested logit models. Researchers often use their understanding of the choice situation under study to group `similar' alternatives into nests. Alternatives grouped in the same nest share a common error term accounting for shared similarities not directly included in the systematic component. However, a priori expectations about the optimal nesting structure are sometimes misguided. The correlations in the error components depend largely on the variables entering the systematic part of the utility. If the systematic utility equations account for most of the correlation between two similar alternatives, then grouping these alternatives under the same nest does not necessarily improve over flat logit. The researcher typically tests and reports two or three alternative nesting structure specifications for robustness. A comprehensive test of all possible structures is impractical for many modeling situations.
In logit mixture models, the parameters in the systematic utility equations are treated as random variables-- usually normally distributed with mean and covariance to be estimated from the data. Off-diagonal elements in the covariance matrix indicate that a decision maker's preferences for one attribute are related to their preferences for another attribute \cite{hess2017correlation}. The researcher has some leeway in specifying which of these off-diagonal elements to estimate and which to constrain to zero. In practice, these models are typically estimated under two extreme assumptions: either a full or a diagonal covariance matrix \cite{james2018estimation}. A full covariance matrix implies correlations between all the distributed parameters, while a diagonal matrix implies that these parameters are uncorrelated.
Ignoring correlations between parameters can distort the estimated distribution of ratios of coefficients, which represent willingness-to-pay (WTP) values and marginal rates of substitution \cite{hess2017estimation}. In practice, it is usually difficult for researchers to hypothesize which subsets of variables are correlated.
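To make the distortion concrete, the following minimal simulation (with made-up means and covariances) contrasts the WTP ratio distribution implied by a full versus a diagonal covariance specification:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([-1.0, -0.5])  # hypothetical (time, cost) coefficient means

cov_full = np.array([[0.20, 0.12],   # correlated preferences
                     [0.12, 0.10]])
cov_diag = np.diag(np.diag(cov_full))  # correlations forced to zero

draws_full = rng.multivariate_normal(mean, cov_full, 100000)
draws_diag = rng.multivariate_normal(mean, cov_diag, 100000)

# WTP (here, value of time) is the ratio of the two random coefficients.
wtp_full = draws_full[:, 0] / draws_full[:, 1]
wtp_diag = draws_diag[:, 0] / draws_diag[:, 1]

for name, w in [("full", wtp_full), ("diagonal", wtp_diag)]:
    q25, q50, q75 = np.percentile(w, [25, 50, 75])
    print(name, "median:", round(q50, 2), "IQR:", round(q75 - q25, 2))
\end{verbatim}
Even with identical marginal variances, the two specifications imply visibly different ratio distributions.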
The following sections present machine learning data driven methodologies for algorithmically selecting the random specification of the utility components of nested logit (Section 5.1) and logit mixture models (Section 5.2) subject to interpretability considerations.
The optimal random specification is determined using optimization techniques, regularization, and out-of-sample validation. The econometric tradition of specifying the systematic component of the utility remains. The models remain transparent, and the parameters can be used to estimate trade-offs, willingness-to-pay values, and elasticities as before.
\subsection{Learning Structure in Nested Logit Models}
Nested logit is a popular modeling tool in econometrics and transportation science when one wants to model the choice that an individual makes from a set of mutually exclusive alternatives \cite{mcfadden-1981} \cite{ben1985discrete}. The nested logit model provides a flexible modeling structure by allowing for correlations between the random components of the alternatives in the choice set.
In specifying a nested logit model, the researcher hypothesizes a nesting structure over the choice set and proceeds to estimate the model parameters (the coefficients in the utility equations that determine the relative attractiveness of choices to the decision maker). Each nest is associated with a scale parameter (which is also estimated), and quantifies the degree of intra-nest correlation \cite{ben1985discrete}. The nesting structure determines \textit{how} the alternatives are correlated, and the scales determine by \textit{what amount} they are correlated.
The large feasible set of possible nesting structures presents a significant modeling challenge in deciding which nesting structure best reflects the underlying choice behavior of the population. The current \textit{modus operandi} is to use domain knowledge to substantially reduce the feasible set to a small set of candidate structures. This is done at the risk of potentially excluding some seemingly non-intuitive structures which might actually provide a better description of the choice behavior of the population under study \cite{koppelman-2006}. This is the core motivation of \cite{aboutalebmsthesis} for taking a more \textit{holistic} view of nested logit model estimation, i.e., one that optimizes over structure as well as parameters.
\cite{aboutalebmsthesis} formulates and solves the nested logit structure learning problem as a mixed-integer nonlinear programming (MINLP) problem, which entails optimizing not only over the parameters of the model but also over all valid nest structures. In other words, \textit{{rather than assuming a nesting structure a priori, the goal is to reveal this structure from the data}}. To ensure that the learned tree is consistent with utility maximization, the MINLP is constrained so that the scales increase with increasing nesting level. The authors penalize complexity in two ways: the number of nests and the nesting level. The optimal model complexity is chosen through a cross-validation procedure.
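The MINLP itself is beyond the scope of a short snippet, but the outer logic can be sketched as follows; \texttt{estimate\_nested\_logit} and \texttt{validation\_loglik} are hypothetical placeholders, and the actual approach in \cite{aboutalebmsthesis} solves a single optimization over structures rather than enumerating them:
\begin{verbatim}
# Schematic sketch only: selection over candidate nesting trees,
# penalizing complexity by the number of nests and the nesting depth.
def select_structure(candidate_trees, train, valid, penalty=0.1):
    best_tree, best_score = None, float("-inf")
    for tree in candidate_trees:
        params = estimate_nested_logit(tree, train)  # hypothetical estimator
        score = (validation_loglik(tree, params, valid)
                 - penalty * (tree.n_nests + tree.depth))
        if score > best_score:
            best_tree, best_score = tree, score
    return best_tree
\end{verbatim}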
In advocating for a data-driven approach for specifying a nested logit structure, we are in no way diminishing the role of the modeler or the importance of domain-specific knowledge in specifying and designing good discrete choice models. Recall that the utility of an alternative to the decision makers under study is given by a sum of a systematic component and a random component. It is the modeler's purview to correctly specify the systematic part of the utility equation. Specifying the random part, however, is a tricky business and the optimal structure may be counter-intuitive. In fact, the optimal error structure is not independent of the specification of the systematic part. If all aspects of the choice behavior that account for correlation between choices can be fully captured in the systematic part, no nesting is needed.
\subsection{Sparse Covariance Estimation in Logit Mixture Models}
Logit mixtures permit the modeling of taste heterogeneity by allowing the model parameters to be randomly distributed across the population under study \cite{train2009discrete}. The modeler's task is to specify the systematic part of the utility equations, as well as the mixing distributions of the distributed parameters and any assumptions on the structure of the covariance matrix.
Researchers typically specify either a full or diagonal covariance matrix. \cite{keane2013comparing} compared different specifications with full, diagonal, and restricted covariance matrices and concluded that a full covariance matrix might not be needed in some cases. They concluded that different specifications fit best on different datasets, which means that researchers cannot know, without testing, which restrictions to impose.
As the number of combinations of all possible covariance matrix specifications grows super-exponentially with the number of distributed parameters, it is not practically feasible for the modeler to comprehensively compare all possible specifications of the covariance matrix in order to determine an optimal specification to use.
Sparse specifications of the covariance matrix are desirable since the number of covariance elements grows quadratically with the number of distributed parameters. Consequently, sparser models provide efficiency gains in the estimation process compared to estimating a full covariance matrix.
\cite{aboutalebsparse} presents the Mixed-Integer Sparse Covariance (MISC) algorithm, which uses a mixed-integer program to find an optimal block-diagonal covariance matrix structure for any desired sparsity level using Markov Chain Monte Carlo (MCMC) posterior draws from the full covariance matrix. The optimal sparsity level of the covariance matrix is determined using out-of-sample validation. Furthermore, unlike Bayesian Lasso-based penalties in the statistics literature, the method in \cite{aboutalebsparse} does not penalize the non-zero covariances. This is a desirable feature, since penalizing the non-zero covariances may lead to underestimating the heterogeneity in the population under study (the covariance estimates would be biased towards zero).
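As a rough sketch of the ingredients (not the actual MISC mixed-integer formulation), one can score a candidate block-diagonal pattern by how much posterior covariance mass it discards, then compare patterns at each sparsity level by out-of-sample fit:
\begin{verbatim}
import numpy as np

def score_block_pattern(posterior_draws, blocks):
    # posterior_draws: list of (p, p) covariance matrices (MCMC draws).
    # blocks: candidate pattern as a list of index lists.
    p = posterior_draws[0].shape[0]
    keep = np.zeros((p, p), dtype=bool)
    for b in blocks:
        keep[np.ix_(b, b)] = True
    # Frobenius norm of the entries the pattern would zero out.
    return float(np.mean([np.linalg.norm(S[~keep]) for S in posterior_draws]))
\end{verbatim}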
\section{Concluding Remarks}
Supervised machine learning methods emphasize empirical fit as the objective, with predictive success as the only criterion, as opposed to interpretation or establishing causality. This imposes an intrinsic limitation on the application of such models to policy analysis. Prediction is indeed important from several perspectives. From a policy analysis standpoint, however, the success of a model is best judged by its ability to predict in \textit{new} contexts. We have established the following:
\label{sec:others}
\begin{enumerate}
\item Machine learning and other empirical models that only maximize fit are excellent candidates for prediction problems where interpretability is not a primary consideration and the prediction is localized to situations directly similar to the training environment.
\item Discrete choice models seek to answer ``what-if'' extrapolative and interventional questions that cannot be fully resolved from observational data alone. Instead, data must be combined with domain knowledge assumptions.
\item Efforts to combine aspects of machine learning with discrete choice methods must not come at the cost of interpretability.
\item Machine learning concepts such as regularization and cross validation have merit in providing a systematic and principled model selection mechanism.
\item We presented an implementation of such algorithmic model selection techniques applied to two of the most common discrete choice models: the nested logit and the logit mixture model.
\end{enumerate}
We reviewed recent machine learning inspired methodologies for algorithmically selecting the random specifications in nested logit and logit mixtures that maximize fit subject to interpretability considerations.
The econometric tradition of specifying the systematic component of the utility remains. The models remain transparent, and the parameters can be used to estimate trade-offs, willingness-to-pay values, and elasticities as before.
We have simply automated what the modeler would ideally like to have done: compare all possible nesting tree (or covariance structure) specifications that ``make sense'' and choose the best one based on a likelihood-ratio or some other statistical test.
|
2,869,038,155,919 | arxiv | \section{Introduction}
\label{sec:intro}
Most successful deep learning architectures for image classification consist of a certain building block that is applied sequentially several times: one block succeeds another until a linear operation finally outputs the model prediction. In deep convolutional neural networks (CNNs), the block consists of a sequence of convolutional operations \cite{Conv1_lecun,Conv2_lecun}, batch normalizations \cite{BatchNorm} and rectified linear units (ReLU) \cite{Relu} activations. Notably, adding a skip connection to every block improves the performance and facilitates very deep architectures called Residual Networks (ResNets) \cite{ResNets}.
Another approach relies on applying the convolutional layers in parallel, which results in a multi-branch design. For instance, inception models \cite{InceptionV1,InceptionV3_labelSmoothing} have blocks with multiple branches, each applying some transformation on the block's input. The input of the next block is then obtained by concatenating the outputs of all branches. The multi-branch design framework can also accommodate skip connections, as initially done in ResNeXt networks \cite{ResNeXt}, and later refined using squeeze-excitation in \cite{SqueezeExcitation}, or a split-attention mechanism in \cite{ResNest_versionOfResNeXt}.
The starting question of this work is: what is the purpose of multi-branch architectures?
Initially, in AlexNet \cite{AlexNet}, branches are used to allow ``grouped'' convolutions that could be distributed across multiple GPUs, which at the time had limited memory. Nowadays, multi-branch architectures are generally used to distribute the parameters of a block into smaller groups such that each group applies a separate transformation to the input. This has proved beneficial compared to keeping all parameters together in a single unique branch per block \cite{ResNeXt}. Nevertheless, rare are the cases where each branch of a multi-branch architecture is shown to contribute in a different way to the network performance. In most cases, the value of multi-branch architectures is justified merely by showing an increase in the accuracy of the whole network. An example of the former is SKNet \cite{SelectiveKernel_versionOfResNeXt}, where, by zooming in and out of the input images, it was demonstrated that an attention mechanism \cite{attentionInNLP} pays more attention to the branch with the appropriate receptive field size. Another interesting idea is related to capsules \cite{Capsules_first,Capsules_EM_routing}, which group neurons into smaller units specialized in recognizing specific visual entities.
In this work, as a means to enhance interpretability, we investigate how to ensure that, in a multi-branch architecture, each branch provably contributes in a different way. In contrast to previous works \cite{SelectiveKernel_versionOfResNeXt,Capsules_first,Capsules_EM_routing}, the role of branches is neither associated to some visual entity, nor to the size of the receptive field. We propose a novel way to organize in a class-wise manner the transformations carried out by the branches. Leveraging concepts from coding theory, we design how to assign each branch to a specific set of classes before training. Specifically, for each block in the network, a binary ``codeword'' of length equal to the number of branches of the block is assigned to each class. The codeword of each class then indicates which of the branches in the block will work for that class. That way, by keeping only the branches of the network assigned to that class, it is possible to form a path unique for that class that traverses the network and through which the information related to that class flows.
To showcase the advantages of our idea, we use the state-of-the-art multi-branch architecture ResNeXt \cite{ResNeXt}, to which we add an architectural tweak.
Our main contributions can be summarized as follows:
\begin{itemize}[noitemsep,nolistsep,leftmargin=*]
\item We develop an algorithm that provably controls the path through which the information flows, thus allowing us to design before training one path per class and force the information related to that class to pass through the assigned path.
\item Without any additional training, these paths are used to extract for each class a binary classifier that has at least $60\%$ fewer parameters than the complete network.
\item We provide a design for the paths leveraging concepts from coding theory, which enables the utilization of the intermediate layers' output to make early predictions.
\item Our algorithm is applied to a slightly modified ResNeXt architecture and we show that the aforementioned desirable properties are achieved while maintaining or even improving classification accuracy.
\end{itemize}
\section{Related Work}
\label{sec:RelatedWork}
Numerous attempts have been made to understand how complex, dense neural networks actually work. For instance, activation maximization is used to find the input that increases the activation of a neuron \cite{activationMax_1,activationMax_2,activationMax_3,activationMax_4}. Saliency maps \cite{saliencyMaps_1, saliencyMaps_2,saliencyMaps_3,saliencyMaps_4, saliencyMaps_5,saliencyMaps_6} try to find the pixels that influence the model's prediction the most.
The pitfall of prior approaches is that they cannot really explain the reasoning process behind neural networks' decisions and they mainly serve as post hoc visualization methods.
Building inherently interpretable models, beyond post hoc approaches, is our key challenge here \cite{StopExplainingBlackBox}. There have been several recent efforts \cite{ NeuralDecisionTrees,InterpretableCNN, Prototype1,Prototype2,ConceptBottleneckModels}, but most of them concentrate on enhancing interpretability only in the last layers of the neural network.
In \cite{NeuralDecisionTrees}, the final linear layer is replaced with a differentiable decision tree, and in \cite{InterpretableCNN} a loss is used to make each filter of the very high-level convolutional layer represent a specific object part. In \cite{Prototype1}, the model output is compared with learnt prototypes, whereas in \cite{ConceptBottleneckModels} it
represents concepts on which humans can intervene.
Differently from previous works, we investigate neural network architectures for classification in a more general way. Specifically, our aim is to control the paths through which information flows throughout the neural network. For that, we employ multi-branch architectures, in which each branch is assigned to specific paths. Roughly speaking, this resembles the idea of disentanglement \cite{bengioDisentanglement,DisentanglementDefinition,DisentaglementSoatto}. In \cite{FundamentalPrinciples10Challenges} it is stated that one of the ten challenges in interpretability is ``disentanglement, which refers to the way information travels through the network''. Preferably
``information about a specific concept traverses through one part of the network, while information about another concept traverses through a separate part''.
In this work, instead of concepts such as objects, parts, scenes, materials, textures, etc. \cite{network_dissection}, information is specifically related to the classes. Prior to training, our algorithm allows to assign for each class a unique path in the network throughout which the information related to that class will flow. The work
\cite{Wang_2018} bears resemblance to ours in that they also identify such information paths; a key difference is that they use a post hoc method, so that the paths are identified rather than designed.
\section{Coded ResNeXt}
\label{sec:Coded_ResNeXt}
\subsection{The block}
\label{subsec:One_block}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{Figures/MainIdeaCodedMultiBranch.pdf}
\caption{Building block of ResNeXt and our proposed variation. \textbf{(a)}: ResNeXt block. A layer is shown as (\# in channels, filter size, \# out channels). The layers are composed of a convolutional operation, batch normalization, and ReLU. The last layer's batch normalization is between the two summations and the ReLU comes after the last summation of the skip connection. The same applies to our Coded-ResNeXt block. \textbf{(b)}: Coded-ResNeXt block. With light violet color we depict the architectural addition, and with beige the algorithmic ones. The energy normalization keeps the total sum of the subNNs' output energies constant. Depending on the class of the sample, the loss $\mathcal{L}_{code,l}$ pushes the total energy to be allocated to specific subNNs in an order fixed prior to training. Each subNN's output can be zeroed by the dropSubNNs operation with probability $p_{drop}$. \textbf{(c)}: The prior-to-training order in which $\mathcal{L}_{code,l}$ allocates the total energy. We call this table the coding scheme of the Coded-ResNeXt block. In the figure it refers to CIFAR-10, which has $K=10$ classes, each mapped to a binary ``codeword'' of length equal to the number of subNNs $N$ representing their desired energies. The ratio $r_l=3/10$ means that 3 subNNs out of $N=10$ will be working for each class. $\mathcal{L}_{code,l}$ tries to match the energy of the subNNs to their corresponding digit, depending on the codeword of the class. }
\label{fig:arc}
\end{figure*}
The typical ResNeXt block \cite{ResNeXt} is depicted in \cref{fig:arc}(a). It takes input $x\in\mathbb{R}^{C\times H\times W}$ ($C$ is the number of input channels and $H,W$ are the height and width of the input planes) and outputs $y$ of the same dimensions. It consists of $N$ paths/branches
($N$ is called cardinality in \cite{ResNeXt}).
Each branch, which here is called \emph{sub-neural network} (subNN),
performs transformations $\mathcal{T}_n, \; n\in\{1,\cdots N\}$ that are aggregated together with the input $x$, giving the block's output $y$:
\begin{equation}
y=x+\sum_{n=1}^{N}\mathcal{T}_n(x).
\end{equation}
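In PyTorch-style pseudocode, a block with this aggregation can be sketched as follows (a minimal illustration of ours; the actual branches are the convolutional sequences described above):
\begin{verbatim}
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    # Minimal sketch of a ResNeXt-style block: N parallel subNNs
    # whose outputs are aggregated by summation with the skip connection.
    def __init__(self, branches):  # branches: list of N nn.Module's
        super().__init__()
        self.branches = nn.ModuleList(branches)

    def forward(self, x):  # y = x + sum_n T_n(x)
        return x + sum(branch(x) for branch in self.branches)
\end{verbatim}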
\subsubsection{Energy Normalization}
The \emph{Energy Normalization} is the sole architectural change we introduce, and is applied just before aggregating the transformed inputs $t_n = \mathcal{T}_n(x)\in\mathbb{R}^{C\times H\times W}$. If $(t_n)_{c,h,w}\in \mathbb{R}$ is the element of $t_n$ in position $(c,h,w)$, then we define function $\mathcal{E}$ as:
\begin{equation}
\mathcal{E}(t_n) = \frac{1}{CHW}\sum_{c=1}^{C}\sum_{h=1}^H\sum_{w=1}^W\big( (t_n)_{c,h,w}\big)^2, \label{eq:energy_funtion}
\end{equation}
which gives the mean energy of the output signal of the $n$-th subNN. The Energy Normalization step simply divides the outputs of all branches by a scalar value equal to the square root of the total mean energy:
\begin{equation}
\bar{t}_n = \frac{t_n}{\sqrt{\frac{1}{N}\sum_{i=1}^N \mathcal{E}(t_i)}}, \forall n\in \{1,\cdots,N\}
\label{eq:Energy_Normalization}.
\end{equation}
Given that $\mathcal{E}(ax)=a^2\mathcal{E}(x)$ for scalar $a\in \mathbb{R}_{\geq 0}$, it is easy to see that this step is actually standardizing the total energy of the outputs of the subNNs, since afterwards the sum of all the subNNs' mean energies is $\sum_{n=1}^N \mathcal{E}(\bar{t_n})= N$.
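A minimal PyTorch sketch of this step, assuming the normalization is applied per sample and that the $N$ subNN outputs are stacked along a leading dimension (an implementation choice of ours, not prescribed by the equations):
\begin{verbatim}
import torch

def energy_normalize(t):
    # t: (N, B, C, H, W) tensor stacking the N subNN outputs.
    energy = t.pow(2).mean(dim=(2, 3, 4))   # E(t_n): shape (N, B)
    total = energy.mean(dim=0)              # (1/N) * sum_i E(t_i): shape (B,)
    # After division the subNN mean energies sum to N for every sample.
    return t / total.sqrt().view(1, -1, 1, 1, 1)
\end{verbatim}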
\subsubsection{Coding Loss}
We present here our first algorithmic addition. After the Energy Normalization we compute a novel loss function coined \textit{coding loss} $\mathcal{L}_{code}$. Assume we have an image classification problem of $K$ classes and that $l$ is the index indicating the position of a ResNeXt block within the whole network. As seen in \cref{fig:arc}(c), for that block, we assign to each class a binary codeword $w_{l,k}, k\in\{1,\cdots, K\}$ of length $N$, indicating which subNNs we want to operate/activate for that class. If the $n$-th subNN operates for class $k$, then the $n$-th digit of $w_{l,k}$ is $(w_{l,k})_n=1$, and $(w_{l,k})_n=0$ otherwise. To ensure that each class receives the same number $N_{act,l}$ of operating subNNs, all $K$ codewords are designed with exactly $N_{act,l}$ ones. We define $r_l$ as the ratio
\begin{equation}
r_l = \frac{N_{act,l}}{N},
\end{equation}
which measures how much each class uses the block's total computational resources. Given an input image of class $k$, the objective of the coding loss is to push the mean energies of the subNNs inactive for class $k$ to zero, and those of the active subNNs to take positive values.
The coding loss for the $l$-th block is given by
\begin{equation}
\mathcal{L}_{code,l} = \frac{1}{N}\sum_{n=1}^N(r_l\mathcal{E}(\bar{t_n})-(w_{l,k})_n)^4.\label{eq:Loss_code}
\end{equation}
Note that after the energy normalization, the total subNNs' mean energy is $\sum_{n=1}^N \mathcal{E}(\bar{t_n}){=}N$ but the codeword has $N_{act,l}{=}r_lN$ ones, hence we multiply $\mathcal{E}(\bar{t_n})$ by $r_l$. The choice of the loss exponent 4 is justified in the discussion in \cref{subsec:ablation}.
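Continuing the sketch above, the coding loss of one block can be computed as follows (the codeword lookup and batch averaging reflect our reading of the text, not a published implementation):
\begin{verbatim}
def coding_loss(t_bar, codewords, labels, r):
    # t_bar: (N, B, C, H, W) energy-normalized subNN outputs.
    # codewords: (K, N) binary coding scheme; labels: (B,) class indices.
    energy = t_bar.pow(2).mean(dim=(2, 3, 4))   # (N, B)
    w = codewords[labels].t().float()           # target digits: (N, B)
    # (1/N) sum_n (r * E - w_n)^4, averaged over the batch.
    return ((r * energy - w) ** 4).mean()
\end{verbatim}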
\subsubsection{DropSubNNs}
The second algorithmic addition is a type of dropout \cite{dropout} similar to techniques such as SpatialDropout \cite{spatialDropout}, StochasticDepth \cite{stochastic_depth}, and DropPath \cite{Fractalnet_DropPath}. Seeing each subNN as a more complicated neuron, we apply dropout to it such that its output is zeroed with a fixed probability $p_{drop}$. We coin this method \textit{DropSubNNs}. Our aim is to reduce the ``co-adaptation'' effect \cite{dropout} at the subNN level, whereby subNNs collaborate in groups instead of trying to independently produce useful features.
The term \textit{coding scheme} of a block refers to the mapping of the classes to codewords as in \cref{fig:arc}(c). In our implementation, if the subsequent blocks are designed with the same coding scheme, then no new drop mask is chosen for each of these blocks; the first block randomly generates one mask which is then reused in the subsequent blocks having the same coding scheme.
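A sketch of the operation (whether one mask is drawn per batch or per sample is an implementation choice; here one mask per batch, returned so it can be reused across consecutive blocks sharing a coding scheme):
\begin{verbatim}
import torch

def drop_subnns(t_bar, p_drop, training, mask=None):
    # t_bar: (N, B, C, H, W). Zero each subNN's output with prob. p_drop.
    if not training or p_drop == 0.0:
        return t_bar, mask
    if mask is None:  # draw a fresh mask for this coding scheme
        keep = torch.rand(t_bar.shape[0], device=t_bar.device) > p_drop
        mask = keep.float().view(-1, 1, 1, 1, 1)
    return t_bar * mask, mask
\end{verbatim}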
\subsection{The network}
\label{subsec:The_network}
The complete network is constructed as a sequence of blocks. The Energy Normalization, $\mathcal{L}_{code,l}$, and dropSubNNs are applied only to blocks whose subNNs we want to specialize in some subsets of classes. So for blocks with $r_l=N/N=1$, we use the conventional ResNeXt block as in \cref{fig:arc}(a). In that sense, the ResNeXt model can be seen as a Coded ResNeXt model where all blocks have ratios $r_l=N/N$.
\subsubsection{Coding Scheme Construction}\label{sec:How_to_code}
We construct one coding scheme per ratio $r_l$ so that a coding scheme is uniquely characterized by the ratio $r_l$ and any two blocks $l, l'$ with $r_l=r_{l'}$ have exactly the same coding scheme. Our approach is based on the following intuitive rule: the deeper in the network a block is (i.e., the larger $l$ is), the smaller the $r_l$ assigned. The first blocks have $r_l=N/N$ so that their subNNs produce low-level features potentially useful for recognizing any of the classes. Deeper blocks have smaller $r_l$ so that their subNNs specialize on a subset of classes. In fact, the last linear layer of the conventional ResNeXt architecture can be seen as $K$ (number of classes) subNNs, each performing a simple linear combination, and where the coding scheme has the lowest possible ratio $r_l=1/K$ with the codewords being one-hot vectors.
A natural approach could be to use coding schemes so that earlier blocks are tasked with differentiating between superclasses and later blocks are used to select classes within those superclasses. This approach would require the coding scheme to depend on the semantic similarity between the classes. In this work, our \textit{primary goal is to demonstrate that it is possible to specialize subNNs to (defined before training) subsets of classes, even in the case that the classes within those subsets may not be semantically related}. Therefore, for a given $r_l$ we construct the coding scheme in an agnostic way with respect to the nature of the classes. The following three rules are used:
\begin{enumerate}[label=\Alph*.]
\item The number of ``1''s must be equal to $N_{act,l}=r_l N$ with $N$ being the codeword length.
\item The Hamming distance\footnote{The Hamming distance of two binary words is equal to the number of different digits that they have.} of any pair of codewords should be as high as possible.
\item Seeing the coding scheme as a binary table, as in \cref{fig:arc}(c), we require the sum of each column to be approximately the same.
\end{enumerate}
The rationale behind the second rule is to assign each class a set of subNNs as different as possible from those of the other classes, while the third rule aims to avoid overloading or underutilizing any subNN. We remark that finding such a coding scheme is very challenging; there is no known way to compute even the function $A_2(N,\mathsf{d})$ which gives the maximum number of binary codewords of length $N$ with minimum Hamming distance $\mathsf{d}$. Moreover, computing $\mathsf{D}(N,K)$, giving the minimum possible Hamming distance of a coding scheme of $K$ codewords, is even harder. Using $\mathsf{D}(\cdot)$ one can evaluate $A_2(\cdot)$ through a binary search over $K$. In our case, the additional constraint of having $N_{act,l}$ ``1''s increases the difficulty. Lastly, in addition to the existence of such a coding scheme, we are interested in how to realize it. For that, we have to resort to heuristics when developing coding schemes that satisfy the aforementioned three rules to the largest extent, as explained in \cref{sec:CodingSchemeAlg}.
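The heuristic in \cref{sec:CodingSchemeAlg} is not reproduced here, but the following simple random search (an illustrative stand-in of ours, not the actual algorithm) targets the same three rules:
\begin{verbatim}
import itertools, random
import numpy as np

def random_coding_scheme(K, N, n_act, trials=10000, seed=0):
    # Rule A: each codeword has exactly n_act ones.
    # Rule B: maximize the minimum pairwise Hamming distance.
    # Rule C: tie-break by balancing the column sums.
    rng = random.Random(seed)
    best, best_key = None, (-1, float("-inf"))
    for _ in range(trials):
        W = np.zeros((K, N), dtype=int)
        for k in range(K):
            W[k, rng.sample(range(N), n_act)] = 1
        d_min = min(int((W[i] != W[j]).sum())
                    for i, j in itertools.combinations(range(K), 2))
        key = (d_min, -float(W.sum(axis=0).std()))
        if key > best_key:
            best, best_key = W, key
    return best
\end{verbatim}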
\subsubsection{Architecture and Total Loss}
We compactly describe a Coded-ResNeXt block as $[C_{out}, d, r_l]$, with $C_{out}$ being the number of channels the block outputs and $d$ the bottleneck width as in ResNeXt \cite{ResNeXt}. A conventional ResNeXt block is expressed as $[C_{out}, d, N/N]$. Following \cite{ResNeXt}, given the number of subNNs $N$, the bottleneck width $d$ is determined so that the blocks have about the same number of parameters and FLOPs as the corresponding blocks of the original ResNet bottleneck architecture \cite{ResNets}.
\Cref{tab:architectures} presents the networks trained for the CIFAR-10 (C10), CIFAR-100 (C100) \cite{CIFAR}, and ImageNet 2012 \cite{Imagenet} classification datasets. In CIFAR-10/100 we tried to keep $N$ low, but sufficiently high to enable reducing $r_l$ to less than $0.25$ while still obtaining a strong coding scheme with minimum Hamming distance greater than or equal to 4. For ImageNet we used the default values of ResNeXt-50. Remarkably, even though the number of classes increases exponentially across datasets ($K\in\{10,100,1000\}$), the proposed coding methodology allows the subNNs to be efficiently shared between classes, so that both (a) random pairs of classes are assigned to very different subsets of subNNs; and (b) only a linear increase of the number of subNNs ($N\in\{10,20,32\}$) is needed.
Let $\mathcal{L}_{class}$ be the conventional cross-entropy loss and $B_{code}$ the set of indices pointing to the blocks with ratio $r_l<1$. The total loss used to train the network is
\begin{equation}
\mathcal{L}_{tot} = \mathcal{L}_{class} + \mu \sum_{l\in B_{code}} \mathcal{L}_{code,l}\label{eq:total_loss}
\end{equation}
with $\mu$ being a constant balancing the two losses. For convenience of exposition and with some abuse of notation, in \cref{eq:total_loss} both losses are actually the expected values over the distribution of the samples. As commonly done in practice, the gradients are computed on the \textit{average} of the losses over the samples of the batch.
\begin{table}
\centering
\footnotesize
\begin{tabular}{p{0.15cm}|c|c|c}
\hline
\parbox[t]{1.2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\small{stage}}}}
&\thead{\textbf{Coded} \\\textbf{ResNeXt-29} \\\textbf{(10$\times$11d)}\\ for CIFAR-10}
& \thead{ \textbf{Coded} \\\textbf{ResNeXt-29}\\ \textbf{(20$\times$6d)}\\ for CIFAR-100}
&\thead{ \textbf{Coded} \\\textbf{ResNeXt-50}\\ \textbf{(32$\times$4d)}\\ for ImageNet}\\
\hline\hline
c1
&\thead{conv $3{\times}3, 64$}
& \thead{conv $3{\times}3, 64$ }
& \thead{conv $7{\times}7, 64$, str. 2 \\ $3{\times}3$ max pool, str. 2}\\
\hline
c2
&\thead{$\begin{bmatrix} \;\; 256,\; 11,\;\; \\ 10/10 \end{bmatrix}\hspace{-2pt}{\times }3$ }
&\thead{ $\begin{bmatrix}\;\; 256,\; 6, \;\; \\ 20/20 \end{bmatrix}\hspace{-2pt}{\times} 3$}
& \thead{$\begin{bmatrix}\;\; 256,\;\; 4, \;\;\; \\ 32/32 \end{bmatrix}\hspace{-2pt}{\times} 3$} \\
\hline
c3
&\thead{$\begin{bmatrix} \;\; 512,\; 22,\;\; \\ \mathbf{5/10} \end{bmatrix}\hspace{-2pt}{\times} 3$ }
&\thead{ $\begin{bmatrix}\;\; 512,\; 12, \;\; \\ \mathbf{8/20} \end{bmatrix}\hspace{-2pt}{\times} 3$}
& \thead{$\begin{bmatrix}\;\; 512,\;\; 8, \;\;\; \\ 32/32 \end{bmatrix}\hspace{-2pt}{\times }4$} \\
\hline
c4
&\thead{$\begin{bmatrix} 1024,\; 44,\;\; \\ \mathbf{3/10} \end{bmatrix}\hspace{-2pt}{\times }3$}
&\thead{ $\begin{bmatrix} 1024,\; 24,\;\; \\ \mathbf{4/20} \end{bmatrix}\hspace{-2pt}{\times} 3$}
&\thead{$\begin{bmatrix} 1024,\; 16, \;\;\\ \mathbf{16/32} \end{bmatrix}\hspace{-2pt}{\times }6$} \\
\hline
c5
&\thead{global avg. pool \\ 10-d fc, softmax}
& \thead{global avg. pool \\ 100-d fc, softmax}
& \thead{$\begin{bmatrix} 2048,\; 32,\;\; \\ \mathbf{8/32} \end{bmatrix}\hspace{-2pt}{\times }3$ }\\
\cline{1-1}\cline{4-4}
& & & \thead{global avg. pool \\ 1000-d fc, softmax}\\
\hline
\end{tabular}
\caption{Architecture for each dataset. A block is described by $[C_{out}, d, N_{act}/N]$, with $C_{out}$ being the number of channels it outputs, $d$ the bottleneck width, $N$ the number of paths/subNNs, and $N_{act}$ the number of active/operating subNNs per class. In the beginning of stages c3 and c4 in all datasets, and additionally for c5 in ImageNet, the feature map size is halved as in \cite{ResNets,ResNeXt}. For the CIFAR architectures, stages c2, c3, c4 have approximately $0.2$, $0.9$, $3.5$ million parameters, respectively, and the total architecture has approximately $4.7$ million parameters. For ImageNet, stages c2, c3, c4 and c5 have $0.2$, $1.2$, $7.0$ and $14.5$ million parameters, respectively, and the total number of parameters is $25.0$ millions.}
\label{tab:architectures}
\end{table}
\section{Experiments}
In this section, we present various experimental results to assess the performance of the proposed Coded-ResNeXt. First, we show that our algorithm achieves subNN specialization. We demonstrate this by showing that when subNNs specialized on the class of interest are removed, the performance degrades, whereas it remains the same or even improves when the subNNs removed are not specialized for that class. To further prove the specialization of the subNNs, we test the performance of the following binary classifier: given a certain class, we keep only the subNNs assigned to that class, thus retrieving a lighter single-purpose binary classifier (whose decision is whether the input belongs or not to the class). Next, we show that it is possible to get good predictions from intermediate blocks without evaluating the whole network. Finally, we conduct an ablation study on the two hyperparameters introduced, namely $\mu$ that balances the two losses in \cref{eq:total_loss} and the probability $p_{drop}$ of dropping subNNs.
\subsection{Setup and Validation Accuracy}\label{sec:Experiments_setup}
For data augmentation in ImageNet\cite{Imagenet}, we follow the guidelines in \cite{Revisiting_ResNets_guidelines} for ResNet-RS-50 in order to train our {(Coded-)}ResNeXt-50.
The input of the model is $160\times 160$ randomly resized and cropped images, for which we use standard values of scale and ratio \cite{InceptionV1}, followed by horizontal flips and RandAugment \cite{Randaugment} of layers $N_{aug}=2$ and magnitude $M_{aug}=10$.
For CIFAR-10/100 and (Coded-)ResNeXt-29, after the standard pad-and-crop and horizontal flips, RandAugment is used again with $(N_{aug},M_{aug})=(3,4)$ for CIFAR-10 and $(1,2)$ for CIFAR-100\footnote{Those values are chosen in \cite{Randaugment} for Wide-ResNet-28-2 \cite{WideResNets} which, out of all models presented in that work, seems most similar to ResNeXt-29.}.
All our experiments are run on Google's Colab TPUv2 ($N_w = 8$ cores with bfloat16 precision).
Batch size is picked relatively high to harness TPU speed; for CIFAR datasets, the batch size per core is set to $B_w=64$ (i.e., effective 512), while for ImageNet, $B_w=128$ (i.e., effective 1024).
We use PyTorch's implementation of stochastic gradient descent with Nesterov momentum \cite{sgd_nesterov,sgd_adaptation_sutskever} equal to 0.9.
The weight decay \cite{WeightDecay1992} is equal to $10^{-4}$ for ImageNet and $5\cdot 10^{-4}$ for CIFAR. Training on ImageNet is performed for 150 epochs using cosine scheduler \cite{cosine_scheduler} with initial learning rate defined by the rule $0.1\frac{N_w\cdot B_w}{256}=0.4$ \cite{lr_rules_of_thumb}, warmed up for 5 epochs,
and decayed until $10^{-5}$. Training on CIFAR is performed for 300 epochs with the same scheduler but without warming up, with initial learning rate 0.1, and with decay until $10^{-4}$. The implementation of Coded-ResNeXt for ImageNet is based on the timm library \cite{timm_tpu}.
Compared to the corresponding ResNeXt, the throughput, the number of parameters, and the FLOPs are almost the same. Additional details are provided in \cref{sec:ImplementationDetails}.
\Cref{tab:hyperparameters_accuracies} presents the default values used (unless otherwise stated) for the introduced hyperparameters $(\mu,p_{drop})$ and the achieved validation accuracy.
The baseline is the corresponding ResNeXt.
We observe a clear improvement in the CIFAR datasets, and in ImageNet our Coded-ResNeXt-50 achieves slightly higher accuracy. We remark that there exist several powerful techniques (label smoothing \cite{InceptionV3_labelSmoothing}, squeeze excitation \cite{SqueezeExcitation}, exponential moving average, deeper networks, mixup \cite{mixup}, etc.), which are empirically known to improve ImageNet accuracy \cite{BagOfTricks,Revisiting_ResNets_guidelines} and which could have been used here to further boost our accuracy. However, we choose to keep our setup as simple as possible in order to clearly test the effect of our architectural and hyperparameter choices.
\begin{table}
\centering
\begin{tabular}{p{1.8cm}|c|c||c}
& ($\mu,p_{drop}$) & \thead{ Coded ResNeXt \\ top-1} & \thead{ ResNeXt \\ top-1}\\
\toprule
CIFAR-10 & (6, 0.1) & \textbf{94.41} & 93.66\\
\hline
CIFAR-100 & (6, 0.1) & \textbf{78.28} & 76.86\\
\hline
ImageNet & (1, 0.1) & \textbf{78.12} & 78.05\\
\hline
\end{tabular}
\caption{Default hyperparameters and validation accuracy for each dataset.}
\label{tab:hyperparameters_accuracies}
\end{table}
\subsection{Specialization}
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.33\linewidth}
\includegraphics[width=1.0\columnwidth]{Figures/RemovingSubNNs.png}
\caption{Removing subNNs of a specific block.\\ $ $}
\label{fig:RemovingSubNNs_GivenBlock}
\end{subfigure}
\hfill
\begin{subfigure}{0.33\linewidth}
\includegraphics[width=1.0\columnwidth]{Figures/CIFAR10_PrecisionRecall.png}
\caption{Precision-Recall on CIFAR-10.\\ $ $}
\label{fig:PrecisionRecall_CIFAR10}
\end{subfigure}
\hfill
\begin{subfigure}{0.33\linewidth}
\includegraphics[width=1.0\columnwidth]{Figures/Imagenet_PrecisionRecall.pdf}
\caption{Precision-Recall on ImageNet. The larger the marker, the more points fall into that area.}
\label{fig:PrecisionRecall_ImageNet}
\end{subfigure}
\caption{Demonstrating the specialization of subNNs to their assigned set of classes. \textbf{(a)} Performance when removing active versus inactive subNNs from a specific block. \textbf{(b)} Precision-Recall from all extracted binary classifiers trained on CIFAR-10. Out-of-distribution negatives are the validation set of CIFAR-100. \textbf{(c)} Precision-Recall from all extracted binary classifiers trained on ImageNet.}
\label{fig:short}
\end{figure*}
The core idea of this work is to specialize each subNN to a specific subset of classes; hence the first experiment is designed to test whether our proposed architecture actually succeeds in achieving specialization. Assuming a subNN is assigned to actively work for some class, then if this indeed helps the classification of images belonging to that class, removing this active subNN should have a negative impact on the performance. On the other hand, if that subNN is not assigned to that class, then it should remain inactive during the process, so removing it should cause no degradation in performance.
For the first experiment we pick a block $l$ from which we randomly remove subNNs\footnote{Removing a subNN from a block in this architecture is equivalent to zeroing all of its parameters or to zeroing its output before the Energy Normalization. The latter neither changes the energy normalization of the other subNNs (since its contribution to the denominator of \cref{eq:Energy_Normalization} is zero), nor does it affect the subsequent summation of the subNNs' outputs.} in two ways. Given the class of the input image sampled from the validation set, the first way randomly removes $k{\leq} N_{act,l}$ subNNs from the set of subNNs active for that class. The second way randomly removes $k{\leq} N-N_{act,l}$ subNNs from the (complementary) set of subNNs inactive for that class. The block we choose in \cref{fig:RemovingSubNNs_GivenBlock} is the last one of stage c3 (see \cref{tab:architectures}) for the CIFAR datasets and the second of stage c5 for ImageNet.
In \cref{fig:RemovingSubNNs_GivenBlock}, we observe the same behavior across all datasets, which confirms that the more active subNNs are removed, the more the performance degrades. Interestingly, when removing inactive ones, the accuracy tends to increase. Our interpretation is that even though the inactive subNNs are trained to output a zero signal, this is never perfectly achieved in practice and their output always interferes with that of the active subNNs. Thus, taking out the interferers could improve the accuracy. Note that this higher accuracy of the neural network is not actually achievable, since to remove a subNN we need to know the class of the input so as to know the set of (in)active subNNs for that class. Finally, we remark that even if all active subNNs are removed from one block the performance does not necessarily plummet, because information can still pass from the previous block to the next one through the skip connection.
\subsection{Binary Classifier}
\begin{figure}[ht]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[trim=20mm 2mm 20mm 10mm, clip,width=1.0\columnwidth]{Figures/cifar10_binaryclassifier.png}
\caption{Binary classifier for airplanes, CIFAR-10.}
\label{fig:BinaryClassifier_cifar10}
\end{subfigure}
\hfill
\begin{subfigure}{\linewidth}
\includegraphics[trim=20mm 2mm 20mm 10mm, clip,width=1.0\columnwidth]{Figures/cifar100_binaryclassifier.png}
\caption{Binary classifier for apples, CIFAR-100.}
\label{fig:BinaryClassifier_cifar100}
\end{subfigure}
\hfill
\begin{subfigure}{\linewidth}
\includegraphics[trim=20mm 0mm 20mm 10mm, clip,width=1.0\columnwidth]{Figures/Imagenet_binaryclassifier1_.png}
\caption{Binary classifier for tench, ImageNet.}
\label{fig:BinaryClassifier_ImageNet}
\end{subfigure}
\caption{Output distribution of binary classifier of the first class of each dataset.}
\label{fig:BinaryClassifiers}
\end{figure}
Having confirmed that the subNNs specialize on their assigned subset of classes, we proceed with testing this property to the extreme. For that, we do not only randomly remove a few subNNs of one given block; instead, given a class $k\in\{1,\cdots K\}$, we remove \emph{all} subNNs assigned to classes \emph{other} than $k$ from all blocks. The rationale is to check whether one can keep solely the subNNs specialized on one class and obtain a binary classifier capable of recognizing that class among the others. \textit{This binary classifier is considerably lighter than the initial network, since it has only $38\%$, $27\%$, and $35\%$ of the initial parameters for the CIFAR-10, CIFAR-100, and ImageNet architectures, respectively.}
In \cref{fig:BinaryClassifiers} we pick the first class of CIFAR-10/100 and ImageNet (``airplane'', ``apple'', and ``tench'' respectively) and remove all inactive subNNs for that class. Additionally, we remove the softmax operation at the end of the network so that its outputs are logits $y\in \mathbb{R}^K$. We look only at the first element of the logits $(y)_1\in \mathbb{R}$, which is equivalent to removing all except the first row of the parameters of the linear layer. That way we retrieve a sub-model whose output is one dimensional. \Cref{fig:BinaryClassifiers} depicts with blue the output distribution when inputting samples of the validation set belonging to the first class of the dataset (i.e., in-distribution positives), and with red when the samples belong to some other class (i.e., in-distribution negatives). Evidently, the extracted sub-models indeed operate as binary classifiers giving high output when fed with samples of the class they are specialized in. To further showcase the specialization, for the sub-models trained on CIFAR-10 (resp. CIFAR-100) we input samples that belong to the validation set of CIFAR-100 (resp. CIFAR-10). Those are considered out-of-distribution predictions, since the sub-model has never been trained on such samples. Nevertheless, as \cref{fig:BinaryClassifiers} shows, the extracted model continues to perform well.
We show that with our algorithm it is possible to train a large multi-purpose neural network and extract from it a part serving as a ``lighter'', single-purpose model. This lighter model can be turned into a binary classifier by providing a threshold that gives a positive (resp. negative) prediction if the output of the model is greater (resp. lower) than the threshold value. We set that threshold to the value that maximizes the F1-score of the binary classifier when fed with samples from the training dataset. We can compactly depict its performance (on the validation set) as a point on a plot with the $x$ and $y$ axes being respectively the precision and recall of the binary classifier. In \cref{fig:PrecisionRecall_CIFAR10} we show the performance of all $K=10$ binary classifiers extracted from the Coded ResNeXt-29 trained on CIFAR-10. Notably, the worst performance is obtained with the ``cat'' classifier when fed with CIFAR-100's out-of-distribution samples. This seems reasonable, since we ask the classifier to distinguish cats from classes like leopard, lion, and tiger without having ``seen'' even one sample of them during training.
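The threshold sweep itself is standard; a minimal sketch operating on precomputed training-set outputs could be:
\begin{verbatim}
import numpy as np

def f1_threshold(scores, is_positive):
    # scores: sub-model outputs on training samples; is_positive: bool mask.
    best_t, best_f1 = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & is_positive)
        if tp == 0:
            continue
        precision = tp / pred.sum()
        recall = tp / is_positive.sum()
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t
\end{verbatim}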
\Cref{fig:PrecisionRecall_ImageNet} shows the performance of the $K=1000$ binary classifiers extracted from the Coded ResNeXt-50 trained on ImageNet. The validation set of ImageNet has 50 positives and $999 \times 50=49950$ negatives per class. We compute the precision and recall by considering only $9 \times 50=450$ randomly selected negatives.
We do that (i) to keep the same ratio of positives to negatives as in CIFAR-10 so that the scores are comparable, and (ii) because the dataset is very skewed:
even if the threshold is set very conservatively, allowing a negative to be misclassified only $1\%$ of the time, this translates into $500$ false positives.
Since there are only 50 positives, this means that the precision will be less than 10\%. Additional details and plots are provided in \cref{sec:BinaryClassifiersAdditionalPlots}. In the next subsection, we provide some deeper insights on why the subNNs achieve specialization and we highlight why ResNeXt serves as the appropriate base architecture upon which our idea is built.
\subsubsection{Why ResNeXt?}\label{sec:whyResNeXt}
The core idea of our work is to construct a block of many parallel branches/subNNs, each being activated only when a certain rule, here the coding scheme, allows for it, thereby controlling the paths through which the information related to each class flows. In order to achieve the desired behavior, we need (i) an operation (energy normalization) that limits how many subNNs on average can be activated; (ii) a loss function forcing the subNNs to comply with the rule determining which ones should be activated. Intuitively, one could think that for the $l$-th block with ratio $r_l=N_{act,l}/N$, the energy normalization would allow the information to flow through only $N_{act,l}$ out of $N$ paths and the coding loss $\mathcal{L}_{code,l}$ would determine the exact paths.
However, this is not particularly accurate. Let us assume that was the case, i.e., that the paths to be activated are solely determined by the energy normalization followed by the coding loss. Then, keeping the energy normalization and coding loss unaltered and changing only the way the outputs of the subNNs are passed to the subsequent blocks should still allow the extraction of good binary classifiers. It turns out that this is not true. If instead of aggregating by \textit{summation}, the subNNs' outputs are \textit{concatenated}, the performance of the extracted binary classifiers becomes poor. Therefore, \textit{it seems that when concatenating the outputs, the ``information'' does not pass only through the paths designated by $\mathcal{L}_{code,l}$}, since the performance degrades when all inactive subNNs (which in theory should not participate in those designated paths) are removed. Let us see why.
The reason is that when concatenating the outputs of the $l$-th block, the subsequent $(l+1)$-th block is allowed to take decisions based not only on the signal coming from the active subNNs of the $l$-th block, but also on the zero signal of the rest. Hence, since the $(l+1)$-th block also depends on which subNNs of the $l$-th block output a zero signal, we cannot remove them to construct a binary classifier. On the contrary, using summation to aggregate the outputs prevents the $(l+1)$-th block from depending on the zero signal of the inactive subNNs of the $l$-th block. The information that some subNNs provide zero output is lost when adding them to the final output of the block. On the other hand, if the output of the subNNs is concatenated, the information related to which subNNs provide zero output is preserved. For that reason, the ResNeXt architecture (which aggregates the outputs by summation) is very well suited for developing our idea.\footnote{
In our initial experiments (before considering ResNeXt), we used an architecture like Coded ResNeXt, but without skip connections, and instead of summing the outputs of the subNNs (after the energy normalization step) we concatenated them. Algorithmically, we forced the information paths not with a (coding) loss function but by applying a mask in the backward propagation to \textit{block the gradients} of the subNNs that according to the coding scheme should remain inactive. That way, the subNNs were updated only by gradients coming from samples of the subset of classes that the coding scheme had assigned to them. This approach resulted in both good binary classifiers and good total multi-class accuracy for that architecture. Unfortunately, once skip connections were added the accuracy dropped, which is why we replaced the gradient-blocking approach with the coding loss.}
\subsection{Early Decoding}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{Figures/EarlyDecoders.png}
\caption{Accuracy of the early decoders of Coded ResNeXt-29 when trained on CIFAR-10 and CIFAR-100 and Coded ResNeXt-50 when trained on ImageNet.}
\label{fig:EarlyDecoders}
\end{figure}
In this subsection, we keep all subNNs intact and investigate an additional advantage provided by our training algorithm. Given block $l$ with $r_l<1$, i.e. $l\in B_{code}$, the coding scheme maps each class in a \textit{one-to-one} fashion to a codeword and then the training tries to match the energies of the block's subNNs to that codeword. A natural question is whether it is feasible, using only the energies of the subNNs, to retrieve the codeword that can be used to correctly predict the class of the sample.
In that experiment, we forward the samples of the validation set and compute the energies of the subNNs of the blocks with $r_l<1$. For each $l\in B_{code}$ we obtain a vector $v_l\in \mathbb{R}_{\geq 0}^N$ containing those energies and we measure the Euclidean distance from $v_l$ to all the codewords $w_{l,k}, k\in\{1,\cdots,K\}$. If $k^\star$ is the class whose codeword $w_{l,k^\star}$ has the smallest distance, then $k^\star$ is the prediction of the $l$-th block. As a result, each block $l\in B_{code}$ becomes an early decoder predicting label:
\begin{equation}
\underset{k}{\arg \min} \; ||v_l-w_{l,k}||_2, \; k\in\{1,\cdots,K\}
\end{equation}
with $||\cdot||_2$ being the L2 norm. This early decoding is enabled and reinforced by the second rule used to create the coding schemes (see \cref{sec:How_to_code}). That rule forces the codewords to be as far apart from each other as possible, and the farther apart the codewords are, the more subNNs would have to be erroneously active or inactive for the early decoder to predict a wrong label. In \cref{fig:EarlyDecoders} we depict the accuracy of every block $l\in B_{code}$ when functioning as an early decoder. For ImageNet we can see that increasing the effect of the coding loss by changing the initial hyperparameter value $\mu{=}1$ to $\mu{=}4$ can greatly improve the early decoding, but at the expense of the final total accuracy being reduced to $77.6\%$. Interestingly, as a sample passes from one block to the next one, the probability of being correctly decoded increases. This bears a strong resemblance to the decoding procedure in communication systems, in which a received signal distorted by noise needs to be matched to the original transmitted message. This is usually achieved through iterative algorithms, which improve the prediction at each iteration until converging to a prediction of the original message with high certainty.
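In code, the early decoder reduces to a nearest-codeword search (a sketch; whether $v_l$ holds the raw or the $r_l$-scaled energies is an implementation detail we leave to the caller):
\begin{verbatim}
import torch

def early_decode(v, codewords):
    # v: (B, N) per-sample energy vectors of a coded block.
    # codewords: (K, N) binary coding scheme of that block.
    dists = torch.cdist(v.float(), codewords.float())  # (B, K)
    return dists.argmin(dim=1)  # nearest codeword = predicted class
\end{verbatim}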
\subsection{Ablation Study on Coding Loss and dropSubNN}\label{subsec:ablation}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{Figures/AblationCIFAR100_tradeOff.png}
\caption{Impact of coefficient $\mu$ of the coding loss on CIFAR-100.}
\label{fig:ablation_tradeOff_losses}
\end{figure}
In this subsection, we investigate the choice of the fourth power in \cref{eq:Loss_code} for the coding loss and study the effect of the two hyperparameters introduced in the paper, namely the coefficient $\mu$ balancing the losses in \cref{eq:total_loss} and the probability $p_{drop}$ of dropping subNNs.
The role of a subNN in a block using coding is dual. On the one hand, it has to provide useful features to the subsequent block for the set of classes it is assigned to; on the other hand, it should not interfere with the active subNNs for the rest of the classes. However, the computational capacity of each subNN is limited, hence it is impossible to excel at both. The main task is certainly the first one, i.e., to forward useful features to the next block whenever dictated by the coding scheme, since otherwise the whole network may fail as a classifier. Consequently, we do not want the subNNs to overemphasize the secondary task of not interfering, since this could degrade the performance on the main task. This is the main reason behind the choice of the fourth power in the coding loss $\mathcal{L}_{code}$ in \cref{eq:Loss_code}. An exponent of 4 in the coding loss results in smaller values, closer to zero, than if an exponent of 2 or the absolute value were used. That way, the penalization in the coding loss function is more lenient when the subNNs' energies are close to, but not exactly equal to, the codeword.
Our experiments performed using CIFAR-10 (with $(\mu,p_{drop})=(6,0.1)$) confirm the benefit of setting the exponent to 4, since the accuracy drops from $94.4\%$ to $93.1\%$ when the exponent is 2, and further drops to $87.1\%$ when using absolute value. In \cref{fig:ablation_tradeOff_losses} we show that as $\mu$ increases, ``organizing'' the energies according to the coding scheme is beneficial to the overall performance until a certain point. Past this point, forcing the subNNs to output a signal of a specific energy value provides only small diminishing gains on the early decoders. Furthermore, it disturbs the entire classification process, thus reducing the final accuracy of the whole network's predictions.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{Figures/AblationCIFAR10_dp_PrecisionRecall.png}
\caption{Impact of the probability $p_{drop}$ on the performance of the binary classifiers. The accuracy of the baseline ResNeXt-29 is $93.66\%$; as we increase $p_{drop}$ the gain in accuracy of our Coded-ResNeXt shrinks but the performance of the binary classifiers is improved.}
\label{fig:ablation_dropSubNN}
\end{figure}
The last experiment concerns the dropSubNN. Dropping randomly some subNNs during training inhibits their ``co-adaptation'' \cite{dropout}, as they learn not to depend on the others and to perform well even in the absence of some of them. \Cref{fig:ablation_dropSubNN} shows that the dropSubNN is essential for the binary classifiers. A small value of $p_{drop}=0.1$ can greatly boost their performance and even slightly improve the overall accuracy. Further increasing $p_{drop}$ to $0.2$ degrades the accuracy without improving the binary classifiers much.
\section{Conclusion}
In this paper, we proposed a modification of ResNeXt, which exhibits several attractive properties.
Our algorithm forces the branches of the ResNeXt blocks to specialize on specific subsets of classes.
For any class $k$, we can exploit this specialization property by keeping only the branches assigned to $k$, thus extracting a binary model for identifying class $k$ that is at least $60\%$ lighter than the original network.
Moreover, by leveraging coding theory for the assignment of the branches to classes, we enable the use of intermediate layers for making early predictions without having to evaluate the full network.
Experiments show that those desirable properties can be achieved without compromising the accuracy, which remains similar or even improves compared to the conventional ResNeXt. We believe that this framework could lead to the development of novel architectures that provide both better interpretability and higher accuracy.
{\small
\bibliographystyle{ieee_fullname}
|
2,869,038,155,920 | arxiv | \section{Introduction}
It is well known that an algebraic curve of genus zero is isomorphic
to the projective line. The
search for an analogous statement in the case of algebraic surfaces
led Max Noether to
conjecture that a smooth regular (i.e.,
$q(S) = 0$) algebraic surface with vanishing geometric genus ($p_g(S)
= 0$) should be a rational
surface. The first counterexample to this conjecture was provided by
Federigo Enriques in 1896 (\cite{enr96}, and also \cite{enrMS},
I, page 294), who introduced the so called Enriques surfaces by considering the
normalization of sextic
surfaces in 3-space double along the edges of a tetrahedron.
Enriques surfaces are of special type;
nowadays a large number of surfaces of general type with $p_g = q =
0$ is known, but the first
ones were constructed in the thirties by Luigi Campedelli and Lucien
Godeaux (cf. \cite{Cam},
\cite{god}: in their honour minimal surfaces of general type with
$K^2 = 1$ are called
numerical Godeaux surfaces, and those with
$K^2 = 2$ are called numerical Campedelli surfaces).
In the seventies, after rediscoveries of these old examples, many new
ones were found through
the efforts of several authors (cf. \cite{bpv}, pages 234-237 and
references therein). In
particular, in the spirit of Godeaux' method to produce interesting
surfaces as quotients $S =
Z/G$ of simpler surfaces by the free action of a finite group $G$,
Arnaud Beauville proposed a very
simple construction by taking as $Z$ the product $Z=C_1 \times
C_2$ of two curves of respective genera
$g_1 := g(C_1), g_2 : = g(C_2) \geq 2$,
together with an action of a group $G$ of order
$(g_1 - 1 )(g_2 -1 )$ (this method
produces surfaces with
$K^2 = 8$).
He also gave an explicit example as quotient of two Fermat curves
(in \cite{BaCa} it was shown that his example leads to exactly two non
isomorphic surfaces).
Generalising Beauville's construction we study here surfaces
$S$ isogenous to a
product of two curves, i.e., surfaces which have a finite unramified cover
which is biholomorphic to a product of curves. One says that
the surface $S$ is isogenous to a higher product if both curves have genus greater than or equal
to $2$ (this condition is equivalent to $S$ being of general type).
It turns out that any surface with $p_g = q =
0$ and isogenous to a product is either $\ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$ or it is isogenous to a higher product
(this happens since $\chi (S) : = \chi(\mathcal{O}_S) =1 \Longrightarrow \chi (C_1 \times C_2) = (g_1 -1) (g_2 -1) >
0$, whence either both $g_i$'s are $\geq 2$, or both $g_i$'s are $ = 0$).
By results of \cite{cat00} any
surface $S$ isogenous to a higher product has a
unique minimal realisation $S\cong (C_1\times C_2) /G$ where
$G$ is a finite group acting freely on $C_1\times C_2$ and with $g_1 := g(C_1), g_2 : = g(C_2) \geq 2$
chosen as small as possible.
The action of $G$ can be seen to respect the product structure of $C_1\times C_2$. This
means that there are the following
two possibilities. Either there are actions of $G$ on $C_1$ and $C_2$ such
that the action of $G$ on $C_1\times C_2$ is the diagonal action, and if this happens we
speak of the {\em unmixed} case.
Or there are elements in $G$ which interchange
$C_1$ and $C_2$, and if this happens we
speak of the {\em mixed} case. Obviously, in the mixed case
$C_1$ and $C_2$ have to be
biholomorphic to each other.
In this paper we carry out the classification of all
smooth projective surfaces $S$ isogenous to a product
with $p_g(S)=q (S) = 0$. Note that if $S$ is of general type $p_g = 0$ implies $q = 0$, since for a
surface of general type $\chi (S) : = \chi(\mathcal{O}_S) = 1 + p_g -q \geq 1$.
We can henceforth assume without loss of generality that $ S \ncong \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$
and therefore that $S$ is of general type.
First invariants of such surfaces are the group $G$ of the minimal
realisation $S\cong (C_1\times C_2) /G$ and the genera of $C_1$ and $C_2$.
It turns out that the surfaces $S$ which can be obtained
for a fixed finite group $G$ and with fixed genera $g_1 := g(C_1), g_2 : = g(C_2)$
fill out a finite number $N$ of irreducible connected components in
the moduli space $\mathfrak{M}_{(1,8)}$ of
minimal smooth complex
projective surfaces with $\chi (S) = 1$ and $K_S^ 2 = 8$.
These turn out a posteriori to have the same dimension $D$.
Our main result is:
\begin{theo}\label{theomai}
If $S$ is a smooth projective surface isogenous to a product
with $p_g(S)=q(S)=0$
and
with minimal realisation $S\cong (C_1\times C_2) /G$ then
either $G$ is trivial and $ S \cong \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$ or $G$ is one of the
groups in the following table and the genera of the curves $C_1,\, C_2$ are as listed
in the table. The number of components $N$ in $\mathfrak{M}_{(1,8)}$ and
their common dimension $D$ are given in the remaining two columns.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$G$ & $|G|$ & {\rm Type} & $g(C_1)$ & $g(C_2)$ & $N$ & $D$ \\
\hline
${\mathfrak A}_5$ & $60$ & {\rm unmixed} & $21$ & $4$ & $1$ & $1$ \\
${\mathfrak A}_5$ & $60$ & {\rm unmixed} & $6$ & $13$ & $1$ & $1$ \\
${\mathfrak A}_5$ & $60$ & {\rm unmixed} & $16$ & $5$ & $1$ & $1$ \\
${\mathfrak S}_4\times {\rm Z}_2$ & $48$ & {\rm unmixed} & $25$ & $3$ & $1$ & $3$
\\
${\rm G}(32)$ & $32$ & {\rm unmixed} & $5$ & $9$ & $1$ & $2$ \\
${\rm Z}_5^2$ & $25$ & {\rm unmixed} & $6$ & $6$ & $2$ & $0$ \\
${\mathfrak S}_4$ & $24$ & {\rm unmixed} & $13$ & $3$ & $1$ & $3$ \\
${\rm G}(16)$ & $16$ & {\rm unmixed} & $5$ & $5$ & $1$ & $2$ \\
${\rm D}_4\times{\rm Z}_2$ & $16$ & {\rm unmixed} & $9$ & $3$ & $1$ & $4$ \\
${\rm Z}_2^4$ & $16$ & {\rm unmixed} & $5$ & $5$ & $1$ & $4$ \\
${\rm Z}_3^2$ & $9$ & {\rm unmixed} & $4$ & $4$ & $1$ & $2$ \\
${\rm Z}_2^3$ & $8$ & {\rm unmixed} & $5$ & $3$ & $1$ & $5$ \\
${\rm G}(256,1)$ & $256$ & {\rm mixed} & $17$ & $17$ & $3$ & $0$ \\
${\rm G}(256,2)$ & $256$ & {\rm mixed} & $17$ & $17$ & $1$ & $0$ \\
\hline
\end{tabular}
\end{center}
Here ${\mathfrak A}_5$ is the alternating group on $5$ letters, $\mathfrak S_4$ is the
symmetric group on $4$ letters, ${\rm D}_4$ is the dihedral group of order $8$,
${\rm Z}_n$ is the cyclic group of order $n$, ${\rm G}(32)$ and ${\rm G}(16)$ are
two groups of respective orders $32$, $16$ described in Sections \ref{suse44} and \ref{concrete}
and ${\rm G}(256,1)$,
${\rm G}(256,2)$ are two groups of order $256$ described in Sections
\ref{suse256} and \ref{concrete}.
\end{theo}
\medskip
We see our main result as the solution in a very special case
of the open problem that David
Mumford set forth at the
Montreal Conference in 1980: ``Can a computer classify all surfaces
of general type with $p_g=0$?''
Our purpose is to show how computationally complex this question is, and that
computers are probably needed even
if one asks a more restricted question.
All known surfaces of general type with $p_g = 0,\
K^2 = 8$ are quotients
$\ensuremath{\mathbb{H}} \times \ensuremath{\mathbb{H}} / \Gamma$ of the product of two upper half planes by
a discrete cocompact subgroup of ${\hbox{\rm PSL}}(2,\ensuremath{\mathbb{R}})\times {\hbox{\rm PSL}}(2,\ensuremath{\mathbb{R}})$.
There are also quotients which are not
related to products of curves; constructions of such surfaces using
quaternion algebras have long been known, see for example
\cite{kug}, \cite{shav}. Also in this case a complete classification is
possible. We shall elaborate on this in a forthcoming paper.
It is an interesting question whether there do exist surfaces of general type with $p_g = 0,
K^2 = 8$ which are not quotients
$\ensuremath{\mathbb{H}} \times \ensuremath{\mathbb{H}} / \Gamma$ as above (observe that for $p_g = 0,
K^2 = 9$ the universal cover is the unit disk in ${\Bbb C}^2$ by Yau's theorem
\cite{yau}). In particular, the question of the
existence of such surfaces of general type which are simply connected is attributed to Hirzebruch (they
would be homeomorphic
to $\ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$ but not diffeomorphic to it, see \cite{free} and
\cite{don}).
Surfaces with
$p_g = q = 0$ were also investigated from other points of view. We
would like to mention
several articles by M. Mendes Lopes and R. Pardini (\cite{PardDP},
\cite{MLP1}, \cite{MLP2}), where the authors encountered them
in the course of studying the
failure of birationality of the bicanonical map.
Concrete examples of rigid surfaces isogenous to a product
have been given in
\cite{BCG}.
Previously, in \cite{BaCa} the first two authors classified all
smooth algebraic surfaces isogenous to a product and of unmixed type
with $p_g = q = 0$, and with $G$ a finite abelian group.
They also gave a complete description of the connected components
of the moduli space that
arise from these surfaces.
In this article we complete this classification admitting arbitrary
groups and treating also the mixed type.
While describing the organisation of the paper we shall now explain the steps
of our classification procedure in more detail.
A family of surfaces isogenous to a product with associated group $G$
and with $ q=0$ is determined in the
unmixed case by a set of data which we
call a {\em ramification structure}. It consists of a pair of spherical systems of
generators
$[g_{(1,1)}, \ldots, g_{(1,r)}]$, $[g_{(2,1)}, \ldots, g_{(2,s)}]$
for the group $G$ (i.e., systems of generators whose product equals the
identity), which are `disjoint' in the sense that the union of the conjugates of
the cyclic subgroups generated by $g_{(1,1)}, \ldots, g_{(1,r)}$ and the corresponding
union for $g_{(2,1)}, \ldots, g_{(2,s)}$ intersect trivially.
We exploit also the fact that
the geometric conditions impose very strong restrictions on the possible orders
of the elements $g_{(i,j)}$.
We are able to classify these by combinatorial
methods of finite group theory.
Riemann's existence theorem guarantees in fact that
for any ramification structure there
is an irreducible family of surfaces isogenous to a product
with the given ramification structure.
In the mixed case we follow a similar approach.
In Section \ref{sec1} we fix the algebraic set up and classify all the possible
types (i.e., tuples of orders) of the spherical systems of generators. In fact,
the conditions on the possible tuples of orders are strong enough to leave
only finitely many possibilities also for the orders of the finite groups
which have to be considered.
In Section \ref{groups}
we introduce the action of the product of the braid group with
${\rm Aut}(G)$ on the set of 'disjoint'
spherical systems of generators, which reflects the
deformation equivalence of the associated surfaces.
In Section \ref{basi} we collect some basic
results on surfaces isogenous to a
product and show how they correspond to the algebraic
data introduced previously.
In Sections \ref{classiumi}, \ref{classimi} we carry out
the complete classification of
all finite groups occurring as groups $G$ associated to a surface isogenous to
a product with $p_g = 0$.
The procedure is simple: using the libraries of the MAGMA computer algebra
system \cite{MA}, which include all groups of
order less than $2000$ (with the exception
of $1024$), we try to inspect all groups whose orders appear in the list
obtained in Section
\ref{sec1} asking for the existence of suitable systems of generators.
This turned
out not to be an easy task, for two reasons: first, the number of groups to
be checked is much too high for a direct computer calculation to be feasible;
second, the orders of the groups in question may be too high to be contained in the
standard group libraries. In order to prove our main result we have then to use
direct arguments (exploiting e.g. the solvability of groups whose order admits
only two prime factors), which allow us to reduce the
cardinality of the finite groups under consideration until we reach a region which
is covered by
the MAGMA-library of small groups.
We have not tried to minimise the amount of computer calculations needed.
But we have tried to keep the complexity and time requirements for each single
calculation as small as possible. Sections
\ref{classiumi}, \ref{classimi} are to be understood
as a Leitfaden through a maze of
little facts about finite groups.
After finishing the calculations we realized that (with some effort) all
computer calculations could be eliminated to give a, in fact much longer,
``hand made'' proof of our main results.
We believe that the interest of our paper is twofold:
first of all the list of surfaces in Theorem \ref{theomai} contains many new
and interesting examples. Finding them is difficult, but establishing their
existence is easy. In particular, we devote Section
\ref{concrete} to a simple description of the groups and
ramification structures occurring.
We hope that this description may be useful for working explicitly
with our surfaces. Secondly, it seems
interesting to us that it is at all possible to carry out a subcase of the Mumford
classification program. The fact that this is only a subcase
was the reason for us not to analyse our results further and free them
from computer calculations.
Finally, in section \ref{modu} we calculate the number of orbits of the direct
product of the braid group with ${\rm Aut}(G)$ acting
on the set of disjoint pairs
of spherical systems of generators. By this procedure we determine the exact
structure of the subset of the moduli space
corresponding to surfaces isogenous to a
product with
$p_g = 0$: in particular we determine the number of irreducible
connected components and their respective dimensions.
\section{Combinatorial preliminaries}\label{sec1}
This section contains simple combinatorial results
which are important as a first step in the solution of
the algebraic problem to which our classification can be reduced.
We also fix certain terminologies to be used later.
The reader who finds these preliminaries too dry to swallow
might first want to read the subsequent section \ref{basi},
explaining how we pass from geometry to algebra.
\subsection{Group theoretic terminology}\label{groups}
Let $G$ be a group and $r\in\ensuremath{\mathbb{N}}$ with $r\ge 2$. An $r$-tuple
$T=[g_1,\ldots,g_r]$ of elements of $G$ is called a
{\it spherical system of generators of $G$} if
$g_1,\ldots,g_r$ is a system of generators of $G$
(i.e., $G=\langle\, g_1,\ldots,g_r\, \rangle$)
and we additionally have $g_1\cdot\ldots\cdot g_r=1$.
We call $r=:\ell(T)$ the length of $T$.
If $T=[g_1,\ldots,g_r]$ is an $r$-tuple of
elements of $G$ and $g \in G$ we
define $ gTg^{-1}:=[gg_1g^{-1},\ldots,gg_rg^{-1}]$.
If
$A=[m_1,\ldots,m_r]\in\ensuremath{\mathbb{N}}^r$ is an $r$-tuple of natural numbers with
$2\le m_1\le\ldots \le m_r$ then the spherical system of generators
$T=[g_1,\ldots,g_r]$ is said to have {\it type}
$A=[m_1,\ldots,m_r]$ if there is a permutation $\tau\in {\mathfrak S}_r$ such
that
$${\rm ord}(g_1)=m_{\tau(1)},\ldots, {\rm ord}(g_r)=m_{\tau(r)}$$
holds. Here ${\rm ord}(g)$ is the order of the element $g\in G$.
The spherical system of generators $T = [g_1,\ldots,g_r]$ is said to be {\em
ordered} if $2 \leq {\rm ord}(g_1) \leq \ldots \leq {\rm ord}(g_r)$.
Given a spherical system of generators $T=[g_1,\ldots,g_r]$ of $G$ we define
\begin{equation}
\Sigma(T)=\Sigma([g_1,\ldots,g_r]):=\bigcup_{g\in G}\, \bigcup_{j=0}^\infty\,
\bigcup_{i=1}^r \ \{\, g\cdot g_i^j\cdot g^{-1}\}
\end{equation}
to be the union of all conjugates of the
cyclic subgroups generated by the elements
$g_1,\ldots, g_r$.
A pair of spherical systems of generators ($T_1,T_2$) of $G$ is called
{\it disjoint} if
$$\Sigma(T_1)\cap \Sigma(T_2)=\{\, 1\,\}.$$
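For a concrete finite group these notions are directly computable. The following
naive MAGMA-style sketch (tuples are represented as sequences of group elements;
it is meant to be illustrative, not efficient) computes $\Sigma(T)$ and tests
the disjointness of a pair:
\begin{verbatim}
// Sigma(T): union of all conjugates of the cyclic
// subgroups generated by the entries of T.
SigmaSet := function(G, T)
  S := { Id(G) };
  for g in T do
    for x in G do
      h := g^x;                 // the conjugate x^-1 * g * x
      for j in [1..Order(h)-1] do
        Include(~S, h^j);
      end for;
    end for;
  end for;
  return S;
end function;

// (T1, T2) is disjoint iff the Sigma-sets meet only in 1.
IsDisjointPair := function(G, T1, T2)
  return (SigmaSet(G, T1) meet SigmaSet(G, T2)) eq { Id(G) };
end function;
\end{verbatim}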
\begin{definition}
Consider an $r$-tuple $A_1=[m_{(1,1)},\ldots,m_{(1,r)}]$ and
an $s$-tuple $A_2=[m_{(2,1)},\ldots, m_{(2,s)}]$ of natural numbers with
$2\le m_{(1,1)}\le\ldots \le m_{(1,r)}$ and $2\le m_{(2,1)}\le\ldots \le m_{(2,s)}$.
An {\em unmixed ramification structure of type $(A_1,A_2)$ for $G$}
is a
disjoint pair ($T_1,T_2$) of spherical systems of generators of $G$,
such that $T_1$ has type $A_1$ and $T_2$ has type $A_2$.
We define ${\cal B}(G;A_1,A_2)$ to be the set of unmixed ramification
structures
of type $(A_1,A_2)$ for $G$.
\end{definition}
\begin{definition}\label{defimi}
Let $A=[m_1,\ldots,m_r]$ be an $r$-tuple of natural numbers with
$2\le m_1\le\ldots \le m_r$.
A {\em mixed ramification structure of type $A$ for $G$} is a pair
$(H,T)$ where $H$ is a subgroup of index $2$ in $G$ and $T=[g_1,\ldots,g_r]$
is an $r$-tuple of elements
of $G$ such that the following hold:
\begin{itemize}
\item $T$ is a spherical system of generators of $H$ of type $A$,
\item for every $g\in G\setminus H$, the $r$-tuples
$T$ and $gTg^{-1}=[gg_1g^{-1},\ldots,gg_rg^{-1}]$ are disjoint,
\item for every $g\in G\setminus H$ we have $g^2\notin\Sigma(T)$.
\end{itemize}
We define ${\cal B}(G;A)$ to be the set of mixed ramification structures
of type $A$ for $G$.
\end{definition}
We shall now establish certain equivalence relations on the sets
${\cal B}(G;A_1,A_2)$ and ${\cal B}(G;A)$ of ramification
structures of a finite group $G$, which reflect the deformation
equivalence of the surfaces admitting such ramification structures.
This equivalence relation will be
used in section \ref{modu}.
Let $r$ be a natural number and consider the braid group
\begin{equation}
{\bf B}_r:=\left\langle\, \sigma_1,\ldots,\sigma_{r-1}\, \quad \vrule \quad
\begin{matrix}
\sigma_i\sigma_j=\sigma_j\sigma_i\ {\rm if}\ |i-j|>1,\cr
\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_{i}\sigma_{i+1}
\end{matrix}
\right\rangle .
\end{equation}
We shall define now an action of ${\bf B}_r$ on the set of
$r$-tuples of elements of $G$. This action corresponds to the standard
embedding of ${\bf B}_r$ into the automorphism group of a free group on $r$
generators.
Let $T=[g_1,\ldots,g_r]$ be an $r$-tuple of elements of $G$ and $1\le i\le r-1$.
Define $\sigma_i(T)$ by
\begin{equation}
\sigma_i(T):=[g_1,\ldots,g_{i-1},g_i\cdot g_{i+1}\cdot
g_i^{-1},g_i,g_{i+2},\ldots ,g_{r}]
\end{equation}
It is well known and also easy to see that
i) the braid relations are satisfied,
ii) the group ${\bf B}_r$ maps spherical systems of generators to
spherical systems of generators, preserving the type.
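For computations it is convenient to have this action available explicitly; a
minimal MAGMA-style sketch, with tuples again represented as sequences, reads:
\begin{verbatim}
// Action of the braid generator sigma_i on T: replaces the
// pair (g_i, g_{i+1}) by (g_i g_{i+1} g_i^-1, g_i).
BraidMove := function(T, i)
  S := T;
  S[i]   := T[i] * T[i+1] * T[i]^-1;
  S[i+1] := T[i];
  return S;
end function;
\end{verbatim}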
Also the automorphism
group ${\rm Aut}(G)$ of $G$ acts on the set of spherical systems of generators
of a fixed type by simultaneous application of an automorphism to the
coordinates of a tuple.
Given $(\gamma_1,\gamma_2,\varphi)\in
{\bf B}_r\times {\bf B}_s\times {\rm Aut}(G)$ and $(T_1,T_2)\in
{\cal B}(G;A_1,A_2)$, where $T_1$ has length $r$ and $T_2$ has length $s$,
we set
\begin{equation}\label{act}
(\gamma_1,\gamma_2,\varphi)\cdot (T_1,T_2):=(\varphi(\gamma_1(T_1)),
\varphi(\gamma_2(T_2))).
\end{equation}
A moment's consideration shows that (\ref{act}) leads to an action
of ${\bf B}_r\times {\bf B}_s\times {\rm Aut}(G)$ on
${\cal B}(G;A_1,A_2)$.
Given $(\gamma,\varphi)\in
{\bf B}_r\times {\rm Aut}(G)$ and $(H,T)\in
{\cal B}(G;A)$, where $T$ has length $r$, we set
\begin{equation}\label{act2}
(\gamma,\varphi)\cdot (H,T):=(\varphi(H),\varphi(\gamma(T))).
\end{equation}
Formula (\ref{act2}) leads to an action
of ${\bf B}_r\times {\rm Aut}(G)$ on
${\cal B}(G;A)$.
In Section \ref{basi} we will associate to a surface $S$
isogenous to a product of unmixed (resp. mixed)
type with $q=0$ an equivalence class of an unmixed (resp. mixed) ramification
structure for
$G=G(S)$. In Section \ref{basi} we shall also conversely see that an
unmixed (resp. mixed) ramification structure
for a finite group $G$ gives a surface $S$
isogenous to a product of unmixed (resp. mixed)
type with $q=0$.
The equivalence classes (the orbits of the respective actions) determine also exactly
the irreducible components in the corresponding moduli space. This will be
applied in Section \ref{modu}.
\medskip
In the following sections polygonal groups will play an important role.
We give their definition right away.
Let $A:=[m_1,\ldots,m_r]$ be an $r$-tuple of natural numbers $\geq 2$. The
polygonal group
${\hbox{\ensuremath{\mathbb{T}}}}(m_1,\ldots,m_r)$ is defined by generators and relations as
\begin{equation}
{\hbox{\ensuremath{\mathbb{T}}}}(m_1,\ldots,m_r):=\langle\, t_1,\ldots,t_r\quad \vrule \quad t_1t_2\cdots
t_r=1=t_1^{m_1}=\cdots =t_r^{m_r}\, \rangle .
\end{equation}
These groups are important for us since every finite group which has a
spherical system of generators of type $A$ is (in the obvious way) a quotient
group of ${\hbox{\ensuremath{\mathbb{T}}}}(m_1,\ldots,m_r)$.
\subsection{Tuples}\label{tup}
In this section we classify $r$-tuples of natural numbers satisfying certain
arithmetic conditions. The lists of these tuples will be of importance in our
later classification program of surfaces.
For an $r$-tuple ($r\in\ensuremath{\mathbb{N}}$) $[m_1,\ldots,m_r]\in \ensuremath{\mathbb{N}}^r$ define
the orbifold canonical degree as
\begin{equation}
\Theta([m_1,\ldots,m_r]):=-2+\sum_{i=1}^r\left(1-\frac{1}{m_i}\right)
\end{equation}
In the following we define properties of tuples of natural numbers which are
satisfied by the tuples of orders of the spherical systems of generators
occurring in the unmixed case.
\begin{definition}
I) Given $A=[m_1,\ldots,m_r]\in {\cal N}_r$ define
\begin{equation}
\alpha([m_1,\ldots,m_r]):=\frac{2}{\Theta([m_1,\ldots,m_r])}=
\frac{2}{-2+\sum_{i=1}^r\left(1-\frac{1}{m_i}\right)}
\end{equation}
and use the notation $[m_1,\ldots,m_r]_{\alpha(A)}$.
II) For $r\in{\hbox{\ensuremath{\mathbb{N}}}}$ with $r\ge 3$ let ${\cal N}_r$ be the set of $r$-tuples
$[m_1,\ldots,m_r]\in {\hbox{\ensuremath{\mathbb{N}}}}^r$ which satisfy:
\begin{itemize}
\item[{\rm (i):}] $2\le m_1\le\ldots \le m_r$,
\item[{\rm (ii):}] $\Theta([m_1,\ldots,m_r]) > 0$,
\item[{\rm (iii):}]
$\alpha([m_1,\ldots,m_r]):=\frac{2}{\Theta([m_1,\ldots,m_r])}\in {\hbox{\ensuremath{\mathbb{N}}}}$,
\item[{\rm (iv):}] $m_r\le
\alpha([m_1,\ldots,m_r])=\frac{2}{\Theta([m_1,\ldots,m_r])}$.
\end{itemize}
We set ${\cal N}:=\cup_{i=3}^\infty\, {\cal N}_i$.
\end{definition}
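For example, $\Theta([2,5,5])=-2+\frac{1}{2}+\frac{4}{5}+\frac{4}{5}=\frac{1}{10}$,
whence $\alpha([2,5,5])=20$: conditions (i)--(iv) are satisfied, and the tuple is
recorded as $[2,5,5]_{20}$ in the list below.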
We shall now give a simple classification result for the tuples in ${\cal N}$.
\begin{prop}\label{proputup}
We have ${\cal N}_r=\emptyset$ for $r\ge 7$. The sets ${\cal N}_3$,
${\cal N}_4$, ${\cal N}_5$, ${\cal N}_6$ are finite and
\begin{equation}
{\cal N}_3=\left\{
\begin{matrix}
[ 2 , 3 , 7 ]_{ 84 }, & [ 2 , 3 , 8 ]_{ 48 }, & [ 2 , 4 , 5 ]_{ 40 }, &
[ 2 , 3 , 9 ]_{ 36 }, & [ 2 , 3 , 10 ]_{ 30 }, \cr
[ 2 , 3 , 12 ]_{ 24 }, & [ 2 , 4 , 6 ]_{ 24 }, & [ 3 , 3 , 4 ]_{ 24 }, &
[ 2 , 3 , 14 ]_{ 21 }, & [ 2 , 3 , 15 ]_{ 20 }, \cr
[ 2 , 5 , 5 ]_{ 20 }, & [ 2 , 3 , 18 ]_{ 18 }, & [ 2 , 4 , 8 ]_{ 16 }, &
[ 2 , 5 , 6 ]_{ 15 }, & [ 3 , 3 , 5 ]_{ 15 }, \cr
[ 2 , 4 , 12 ]_{ 12 }, & [ 2 , 6 , 6 ]_{ 12 }, & [ 3 , 3 , 6 ]_{ 12 }, &
[ 3 , 4 , 4 ]_{ 12 }, & [ 2 , 5 , 10 ]_{ 10 }, \cr
[ 2 , 6 , 9 ]_{ 9 }, & [ 3 , 3 , 9 ]_{ 9 }, & [ 2 , 8 , 8 ]_{ 8 }, &
[ 3 , 4 , 6 ]_{ 8 }, & [ 4 , 4 , 4 ]_{ 8 },\cr
[ 3 , 6 , 6 ]_{ 6 }, & [ 4 , 4 , 6 ]_{ 6 }, & [ 5 , 5 , 5 ]_{ 5 }
\end{matrix}
\right\},
\end{equation}
\begin{equation}
{\cal N}_4=\left\{
\begin{matrix}
[ 2,2,2 , 3 ]_{ 12 }, & [ 2 , 2 , 2 , 4 ]_{ 8 }, &
[ 2 , 2 , 2 , 6 ]_{ 6 }, & [ 2 , 2 , 3 , 3 ]_{ 6 }, \cr
[ 2 , 2 , 4 , 4 ]_{ 4 }, & [ 2 , 3 , 3 , 3 ]_{ 4 }, &
[ 3 , 3 , 3 , 3 ]_{ 3 } &
\end{matrix}
\right\},
\end{equation}
\begin{equation}
{\cal N}_5=\left\{
[ 2 ,2 , 2 , 2 , 2 ]_{4},\
[ 2 ,2 , 2 , 2 , 3 ]_{3}
\right\},\qquad
{\cal N}_6=\left\{
[ 2 ,2 , 2,2 , 2,2]_{2}
\right\}.
\end{equation}
\end{prop}
{\it Proof. }
Suppose that $[m_1,\ldots,m_r]$ is a tuple of natural numbers in ${\cal N}_r$.
From condition (iv) we get
\begin{equation}\label{we1}
\sum_{i=1}^r\, (1- \frac{1}{m_i}) \le 2 + \frac{2}{m_r} \leq 3.
\end{equation}
Using $2\le m_i$ for $i=1,\ldots, r$ we obtain $r\le 6$ and
$ r=6 \Leftrightarrow m_i = 2 \ \forall i$. In particular
${\cal N}_r$ is empty for $r\ge 7$.
Let us treat the case $r=3$ next. In this case we have $m_2\ge 3$ since
otherwise $\Theta([m_1,m_2,m_3])$ is negative which contradicts condition
(ii). An application of (\ref{we1}) using $m_1\ge 2$ gives $m_3\le 18$.
By a quick computer search through the remaining tuples (or just by hand)
we obtain
the finite set ${\cal N}_3$.
In the cases $r=4,\, 5,\, 6$ we infer from (\ref{we1}) that
$\sum_{i=1}^{r-2}\, (1- \frac{1}{m_i}) \le \frac{1}{m_{r-1}} + \frac{3}{m_r}$,
whence $\frac{r-2}{2}\le \frac{4}{m_{r-1}}$ and $\frac{r-3}{2}\le \frac{3}{m_{r}}$.
These inequalities imply $ m_{r-1} \leq \frac{8}{r-2} \leq 4$
and
$m_r \leq \frac{6}{r-3} \leq 6$. The remaining tuples can again
be quickly searched
by computer or by hand to obtain the above lists for
${\cal N}_4$, ${\cal N}_5$, ${\cal N}_6$.
\hspace*{\fill}$Q.E.D.$
\medskip
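The computer search mentioned in the proof is elementary; for instance, the set
${\cal N}_3$ is produced by the following brute force loop (MAGMA-style
pseudo-code; the bound $m_3\le 18$ is the one derived above, valid since $m_2\ge 3$):
\begin{verbatim}
N3 := [];
for m1 in [2..18] do
  for m2 in [m1..18] do
    for m3 in [m2..18] do
      Theta := -2 + (1-1/m1) + (1-1/m2) + (1-1/m3);
      if Theta gt 0 then
        alpha := 2/Theta;
        if Denominator(alpha) eq 1 and m3 le alpha then
          Append(~N3, <m1, m2, m3, Integers()!alpha>);
        end if;
      end if;
    end for;
  end for;
end for;
N3;  // reproduces the list for N_3 above
\end{verbatim}
\medskip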
In the following we define properties of a tuple of natural numbers which are
satisfied by the tuple of orders of the spherical system of generators
occurring in the mixed case.
\begin{definition}
I) Given $A=[m_1,\ldots,m_r]\in {\cal M}_r$ define
\begin{equation}
\beta([m_1,\ldots,m_r]):=\frac{4}{\Theta([m_1,\ldots,m_r])}=
\frac{4}{-2+\sum_{i=1}^r\left(1-\frac{1}{m_i}\right)}
\end{equation}
and use the notation $[m_1,\ldots,m_r]_{\beta(A)}$.
II) For $r\in{\hbox{\ensuremath{\mathbb{N}}}}$ with $r\ge 3$ let ${\cal M}_r$ be the set of $r$-tuples
$[m_1,\ldots,m_r]\in {\hbox{\ensuremath{\mathbb{N}}}}^r$ which satisfy:
\begin{itemize}
\item[{\rm (i):}] $2\le m_1\le\ldots \le m_r$,
\item[{\rm (ii):}] $\Theta([m_1,\ldots,m_r]) > 0$,
\item[{\rm (iii):}] $m_r\le
\beta([m_1,\ldots,m_r])=\frac{4}{\Theta([m_1,\ldots,m_r])}$,
\item[{\rm (iv):}] $\beta([m_1,\ldots,m_r])=\frac{4}{\Theta([m_1,\ldots,m_r])}\in
{\hbox{\ensuremath{\mathbb{N}}}}$,
\item[{\rm (v):}] $\beta([m_1,\ldots,m_r])$ is even and
$\beta([m_1,\ldots,m_r])^2/2$ is divisible by $m_i$ for $i=1,\ldots,r$.
\end{itemize}
Define further ${\cal M}:=\cup_{i=3}^\infty\, {\cal M}_i$.
\end{definition}
We shall now give a simple classification result for the tuples in ${\cal M}$.
\begin{prop}\label{propmtup}
We have ${\cal M}_r=\emptyset$ for $r\ge 9$ or $r=7$.
The sets ${\cal M}_3$,
${\cal M}_4$, ${\cal M}_5$, ${\cal M}_6$, ${\cal M}_8$
are finite and
\begin{equation}
{\cal M}_3=\left\{
\begin{matrix}
[2,3,7]_{168}, & [2,3,8]_{96}, & [2,4,5]_{80}, & [2,3,9]_{72}, &
[2,3,10]_{60}, \cr
[ 2 , 3 , 12 ]_{48}, & [ 2 , 4 , 6]_{ 48 }, & [ 3 , 3 , 4]_{ 48 }, &
[ 2 , 3 , 14]_{ 42 }, & [ 2 , 5 , 5]_{40},\cr
[ 2 , 3 , 18]_{ 36 }, & [ 2 , 4 , 8]_{ 32 }, & [ 2 , 3 , 30]_{30 }, &
[ 2 , 5 , 6 ]_{ 30 }, & [ 3 , 3 , 5 ]_{ 30 },\cr
[ 2 , 4 , 12]_{ 24 }, & [ 2 , 6 , 6]_{ 24 }, & [ 3 , 3 , 6]_{ 24 }, &
[ 3 , 4 , 4 ]_{ 24 }, & [ 2 , 4 , 20]_{ 20 },\cr
[ 2 , 5 , 10]_{ 20 }, & [ 2 , 6 , 9]_{ 18 }, & [ 3 , 3 , 9]_{ 18 }, &
[ 2 , 8 , 8 ]_{ 16 }, & [ 4 , 4 , 4]_{ 16 }, \cr
[ 2 , 7 , 14]_{ 14 }, & [ 2 , 12 , 12]_{ 12 }, & [ 3 ,4 ,12]_{ 12 }, &
[ 3 , 6 , 6]_{ 12 }, & [ 4 , 4 , 6 ]_{ 12 }, \cr
[ 5 , 5 , 5]_{ 10 }, & [ 4 , 8 , 8]_{ 8 } & & &
\end{matrix}
\right\},
\end{equation}
\begin{equation}
{\cal M}_4=\left\{
\begin{matrix}
[ 2 , 2 , 2 , 3]_{24}, & [ 2 , 2 , 2 , 4 ]_{ 16 }, &
[ 2 , 2 , 2 , 6 ]_{ 12 }, & [ 2 , 2 , 3 , 3 ]_{ 12 }, \cr
[ 2 , 2 , 2 , 10 ]_{ 10 }, & [ 2 , 2 , 4 , 4 ]_{ 8 }, &
[ 2 , 2 , 6 , 6 ]_{ 6 }, & [ 3 , 3 , 3 , 3 ]_{ 6 },\cr
[ 2 , 3 , 3 , 6 ]_{ 6 }, & [ 4 , 4 , 4 , 4 ]_{ 4 } & &
\end{matrix}
\right\},
\end{equation}
\begin{equation}
{\cal M}_5=\left\{
\begin{matrix}
[ 2 , 2 , 2 , 2 , 2 ]_{ 8 }, &
[ 2 , 2 , 2 , 2 , 3 ]_{ 6 }, &
[ 2 , 2 , 2 , 4 , 4 ]_{ 4 }
\end{matrix}
\right\},
\end{equation}
\begin{equation}
{\cal M}_6=\left\{
[2, 2, 2, 2, 2, 2]_{4}
\right\},
\quad {\cal M}_8=\left\{[2,2,2,2,2,2,2,2]_2\right\}
\end{equation}
\end{prop}
We skip the proof since it is similar to that of Proposition \ref{proputup}.
\section{Basics on surfaces isogenous to a product}\label{basi}
Throughout this section we assume that $S$ is a surface of general type.
We recall first the notion of surfaces isogenous to a product of
curves. By Proposition 3.11 of
\cite{cat00} the following two properties for a surface of general type are
equivalent.
\begin{definition}\label{defbasi}
A surface $S$ of general type is said to be {\em isogenous to a
product} if and only if one of the following two equivalent
conditions is satisfied.
\begin{itemize}
\item $S$ admits a finite unramified covering which is isomorphic to a
product of curves (of genera at least two),
\item $S$ is a quotient $S := (C_1 \times C_2) / G$,
where the $C_i$'s are
curves of genus at least two, and $G$ is a finite group acting freely on
$C_1 \times C_2$.
\end{itemize}
\end{definition}
It is shown in \cite{cat00} that every such
surface isogenous to a product has
a unique minimal realization $S := (C_1 \times C_2) / G$
(i.e., the genera $g(C_1),\, g(C_2)$
of the two curves $C_1 , C_2$ are minimal).
It can further be shown (see \cite{cat00}) that the action
of $G$ on $C_1 \times C_2$ in the second condition of the above
definition respects the product decomposition, i.e., the elements of $G$
either interchange the factors or act independently on both factors.
\begin{definition}
Let $S$ be a surface isogenous to a
product with minimal realisation $S=(C_1\times C_2)/G$.
We say that $S$ is of {\em mixed type} if the action of $G$
exchanges the two factors (and
then $C_1 , C_2$ are isomorphic), and of {\em unmixed type} if
$G$ acts via a diagonal action.
\end{definition}
We shall associate to a surface $S$ now certain algebraic data. This approach
is taken from \cite{cat00} where a much more detailed discussion can be found.
We first take a minimal realisation of $S$ as $S=(C_1\times C_2)/G$
and define
\begin{equation}
G(S):=G.
\end{equation}
Suppose we are in the unmixed case. Then $q(S)=0$ implies that
$C_1/G(S)=C_2/G(S)=\ensuremath{\mathbb{P}}^1$, i.e., we have two ramified Galois coverings
\begin{equation}
C_1\to \ensuremath{\mathbb{P}}^1, \qquad C_2\to \ensuremath{\mathbb{P}}^1
\end{equation}
with Galois group $G$ (see \cite{miranda}, Section 4 for explanations).
Let $\{ P_1,\ldots, P_r\}\subset \ensuremath{\mathbb{P}}^1$ be the set
of branch points of the first covering. Choose a base point $P$
in $\ensuremath{\mathbb{P}}^1$ distinct from them.
Choose a
geometric basis $\gamma_1, \ldots,
\gamma_r$ of $\pi_1( \ensuremath{\mathbb{P}}^1 - \{P_1, \dots, P_r \}) $ ($\gamma_i$ is a
simple counterclockwise loop
around
$P_i$, and they follow each other in counterclockwise order around
the base point).
Notice that $\gamma_1\cdot\ldots \cdot\gamma_r=1$.
Choose a monodromy representation, i.e., a surjective homomorphism
$$\psi: \pi_1( \ensuremath{\mathbb{P}}^1 - \{P_1, \dots, P_r \}) \to G.$$
Notice that only the kernel of $\psi$ is uniquely determined
by the covering.
Then the elements $\psi(\gamma_1),\ldots,\psi(\gamma_r)$ form a
spherical system of generators of $G$.
Now, the mapping class group of the sphere $\pi_0( Diff ( \ensuremath{\mathbb{P}}^1 - \{P_1,
\dots, P_r \}))$, which is a quotient of the braid group ${\bf B}_r$, operates
on such homomorphisms, and their orbits are called Hurwitz
equivalence classes of spherical
systems of generators. This action is the one which was already
described in the previous section. We use this action in
order to assume without loss of generality that $T_1(S):=
[\psi(\gamma_1),\ldots,\psi(\gamma_r)]$ is an ordered spherical system of
generators.
We apply the same principle to the second covering and obtain another ordered
spherical system of generators $T_2(S)$ of $G$.
Since the action of $G$ on $C_1 \times C_2$ is free we have
$\Sigma(T_1(S)) \cap\Sigma(T_2(S)) = \{\, 1\,\},$ i.e., the two systems are
disjoint.
\medskip
Let $S$ be a surface isogenous to a product, of unmixed type and with $q(S)=0$.
Then we have attached to $S$ its finite group $G=G(S)$ (up to isomorphism) and
a pair ${\cal T}(S)=(T_1(S),T_2(S))\in {\cal B}(G;A_1(S),A_2(S))$ of
a uniquely defined ordered
type $(A_1(S),A_2(S))$.
We show now that the tuples $T_1(S)$, $T_2(S)$ attached to a surface $S$
isogenous to a product, of unmixed type and with $p_g(S)=0$ satisfy the
properties of section \ref{tup}, i.e., that
they are contained in $\mathcal{N}$.
\begin{prop}\label{unmitup}
Let $S$ be a surface isogenous to a product, of
unmixed type and with $p_g(S)=0$. Let $A_1(S)=[m_1,\ldots,m_r]$,
$A_2(S)=[n_1,\ldots,n_s]$ be the two ordered types attached to $S$ as above.
We have
\begin{itemize}
\item $\Theta(A_1(S)),\, \Theta(A_2(S)) > 0$,
\item $m_r\le \frac{2}{\Theta(A_1(S))}$, $n_s\le \frac{2}{\Theta(A_2(S))}$,
\item $\frac{2}{\Theta(A_1(S))},\, \frac{2}{\Theta(A_2(S))} \in {\hbox{\ensuremath{\mathbb{N}}}}$.
\end{itemize}
\end{prop}
{\it Proof.} Since $S$ is isogenous to a product we can represent $S$ as
$$S=(C_1\times C_2)/G(S)$$
where $C_1,\, C_2$ are two smooth projective curves with genera
$g(C_1),\, g(C_2)\ge
2$ where the finite group $G(S)$ acts without fixed points
and via an action preserving the product on $C_1\times C_2$.
Since $q(S)=0$ we have $C_1/G(S)\cong C_2/G(S)\cong \mathbb{P}^1$ and
the Hurwitz formula implies
\begin{equation}\label{e1}
|G|\left(-2 + \sum_{j=1}^r \left(1 - \frac{1}{m_j}\right)\right)=2(g(C_1) - 1),
\end{equation}
\begin{equation}\label{e2}
|G|\left(-2 + \sum_{j=1}^s \left(1 - \frac{1}{n_j}\right)\right) =2(g(C_2) - 1).
\end{equation}
This establishes $\Theta(A_1(S)),\, \Theta(A_2(S)) > 0$, because
$g(C_1),\, g(C_2)\ge 2$.
We have
$$ K_S^2 = \frac{K^2_{C_1\times C_2}}{|G|} = \frac{8
(g(C_1) - 1)(g(C_2) -1)}{|G|} = 8 \chi (\mathcal{O}_S)= 8,
$$
where the last equality holds since $p_g = 0$.
Therefore
$$|G(S)|=(g(C_1)-1)(g(C_2)-1)$$ and
using formulas (\ref{e1}), (\ref{e2}) we get
$$
\frac{2}{\Theta(A_1)}=g(C_2)-1,\ \frac{2}{\Theta(A_2)}=g(C_1)-1
\in \mathbb{N}.
$$
This establishes the third property of $A_1$, $A_2$.
To prove the second property assume that
$$m_r> \frac{2}{\Theta(A_1(S))}=g(C_2)-1.$$
If $T_1(S)=[g_1,\ldots,g_r]$ then $g_r$ has order $m_r$ and
we know that it acts with a fixed
point on $C_1$. Hence the cyclic group $\langle g_r\rangle$
should have no fixed points
on $C_2$.
Let $C := C_2 / \langle g_r\rangle $ be the quotient.
By Hurwitz' formula we get:
$$
2g(C) - 2 = \frac{2g(C_2) - 2}{m_r} < 2.
$$
Therefore $g(C) \in \{ 0, 1 \}$, which contradicts
the freeness of the action of $\langle g_r\rangle$ on $C_2$ (recall that ${\hbox{\ensuremath{\mathbb{P}}}}^1$
has no unramified coverings and an unramified covering of an elliptic curve is again
an elliptic curve, while $C_2$ has genus $\geq 2$).
\hspace*{\fill}$Q.E.D.$
\medskip
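As an illustration, consider Beauville's example from the introduction:
$G(S)={\rm Z}_5^2$ and $A_1(S)=A_2(S)=[5,5,5]$. Here
$\Theta([5,5,5])=-2+3\cdot\frac{4}{5}=\frac{2}{5}$, formulas (\ref{e1}),
(\ref{e2}) give $g(C_1)=g(C_2)=1+\frac{1}{2}\cdot 25\cdot\frac{2}{5}=6$ (the
genus of the Fermat quintic), and indeed $|G(S)|=25=(g(C_1)-1)(g(C_2)-1)$.
\medskip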
Let $S$ be a surface isogenous to a product, of mixed type and with $q(S)=0$.
Then we can attach to $S$ its finite group $G=G(S)$ and
a pair ${\cal T}(S)=(H(S),T(S))\in {\cal B}(G;A)$ of
a uniquely defined ordered
type $A(S)$.
Here we get the following
\begin{prop}
Let $S$ be a surface isogenous to a product, of mixed type and with
$p_g(S)=0$. Let $A(S)=[m_1,\ldots,m_r]$ be the ordered type attached to $S$.
We have
\begin{itemize}
\item $\Theta(A(S)) > 0$,
\item $m_r\le \frac{4}{\Theta(A(S))}$,
\item $\beta(A(S)):= \frac{4}{\Theta(A(S))} \in {\hbox{\ensuremath{\mathbb{N}}}}$,
\item $\beta(A(S))$ is even and
$\beta(A(S))^2/2$ is divisible by $m_i$ for $i=1,\ldots,m_r$.
\end{itemize}
\end{prop}
{\it Proof. }
Noting that $H(S)$ has an unmixed ramification structure of type
$(A,A)$ yielding a surface with invariants $K_S^2= 16 = 8 \chi$, the first
three properties are proven in the same way as in proposition
\ref{unmitup}. For the last property observe that $G$ has order
$\beta(A(S))^2$ and has a subgroup of index $2$. Moreover, $|H(S)| =
\beta(A(S))^2/2$ and $H(S)$ has a spherical system of generators of type $A(S) =
[m_1,\ldots,m_r]$.
\hspace*{\fill}$Q.E.D.$
\medskip
So far we have discussed the ramification structure associated to a surface
isogenous to a product. There is also a way back from ramification structures
to surfaces. This construction relies on the Riemann existence theorem (see
\cite{cat00} for details). More precisely we have
\begin{prop}
Let $G$ be a finite group.
Let $A_1=[m_{11},\ldots,m_{1r}]$ be an $r$-tuple and
$A_2=[m_{21},\ldots,m_{2s}]$ an $s$-tuple of natural numbers with
$2\le m_{11}\le\ldots \le m_{1r}$ and $2\le m_{21}\le\ldots \le m_{2s}$.
Then for any ramification structure ${\cal T}\in {\cal B}(G;A_1,A_2)$
there is a
surface isogenous to a product with $G(S)=G$ and ${\cal T}(S)={\cal T}$.
\end{prop}
An analogous existence result holds in the mixed case also.
\section{The unmixed case, classification of the groups}\label{classiumi}
This section is devoted to the classification of all finite groups $G$
admitting
an unmixed ramification structure of type $(A,B)$ with
$A,\, B\in {\cal N}$. The result is summarized in the following
\begin{prop}
The only finite groups $G$ admitting
an unmixed ramification structure of type $(A,B)$ with
$A,\, B\in {\cal N}$ are those in the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$G$ & $|G|$ & $A$ & $B$\\
\hline
${\mathfrak A}_5$ & $60$ & $[ 2, 5, 5]_{20}$ & $[ 3, 3, 3, 3]_{3}$\\
${\mathfrak A}_5$ & $60$ & $[ 5, 5, 5]_{ 5 }$ & $[ 2, 2, 2, 3]_{12}$ \\
${\mathfrak A}_5$ & $60$ & $[ 3, 3, 5]_{15}$ & $[ 2, 2, 2, 2]_{4}$ \\
${\mathfrak S}_4\times {\rm Z}_2$ & $48$ & $[ 2, 4, 6]_{24}$ & $[ 2, 2, 2, 2, 2, 2]_2$ \\
G($32$) & $32$ & $[ 2, 2, 4, 4]_4$ & $[ 2, 2, 2, 4]_8$ \\
${\rm Z}_5^2$ & $25$ & $[5,5,5]_5$ & $[5,5,5]_5$ \\
${\mathfrak S}_4$ & $24$ & $[ 3, 4, 4]_{12}$ & $[ 2, 2, 2, 2, 2, 2]_2$ \\
G($16$) & $16$ & $[ 2, 2, 4, 4]_4$ & $[ 2, 2, 4, 4]_4$ \\
${\rm D}_4\times{\rm Z}_2$ & $16$ & $[ 2, 2, 2, 4]_8$ & $[ 2, 2, 2, 2, 2, 2]_2$\\
${\rm Z}_2^4$ & $16$ & $[ 2, 2, 2, 2, 2]_4$ & $[ 2, 2, 2, 2, 2]_4$ \\
${\rm Z}_3^2$ & $9$ & $[ 3, 3, 3, 3]_3$ & $[ 3,3, 3, 3]_3$ \\
${\rm Z}_2^3$ & $8$ & $[ 2, 2, 2, 2, 2]_4$ & $[ 2, 2, 2, 2, 2, 2]_2$ \\
\hline
\end{tabular}
\end{center}
\end{prop}
\medskip
The proof relies heavily on the use of the MAGMA-library containing either
permutational representations or polycyclic presentations of all groups of order less than
$2000$ with the exception of the order $1024$. We proceed as follows.
We consider each type $(A,B)$ separately going through
the finite list of Proposition \ref{proputup}. Assume that there is a group
$G$ admitting
an unmixed ramification structure of type $(A,B)$:
then $|G|=\alpha(A)\alpha(B)$.
If this order is less than $2000$ we just go through the MAGMA-library
and search for groups which have a disjoint pair of spherical systems of generators
of type $(A,B)$. There is a huge number of groups to check, but there are methods
to speed up the computation.
These will be described in the next two subsections
where we also exhibit the arguments for $|G|=\alpha(A)\alpha(B)>2000$.
Sometimes we shall have to talk about individual groups in the
MAGMA-library. Here we use the terminology of MAGMA, i.e.,
${\rm SmallGroup}(a,b)$ denotes the group of order $a$ having number $b$ in the list.
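In its naive form the basic search asks, for each group of a given order,
whether a spherical system of generators of a prescribed type exists. For types
of length $3$ this may be sketched as follows (MAGMA-style pseudo-code, not the
optimised version we actually ran):
\begin{verbatim}
HasSystemOfType := function(G, m1, m2, m3)
  for a in G do
    if Order(a) ne m1 then continue; end if;
    for b in G do
      if Order(b) ne m2 then continue; end if;
      c := (a*b)^-1;          // forced by a*b*c = 1
      if Order(c) eq m3 and sub< G | a, b > eq G then
        return true;
      end if;
    end for;
  end for;
  return false;
end function;

// e.g. scanning the groups of order 60 for type [2,5,5]:
for k in [1..NumberOfSmallGroups(60)] do
  if HasSystemOfType(SmallGroup(60,k), 2, 5, 5) then
    print k;   // only k = 5, the alternating group A_5, survives
  end if;
end for;
\end{verbatim}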
A simple but useful observation is
\begin{lemma}
Let $(T_1, T_2)$ be a
disjoint pair of spherical systems of generators of a finite group $G$.
Then, for every $g\in G$, $gT_1g^{-1}$, $T_2$ and
$T_1$, $gT_2g^{-1}$ are also
disjoint pairs of spherical systems of generators of $G$.
\end{lemma}
{\it Proof. } $\Sigma(T_1)$ and $\Sigma(T_2)$ are unions of conjugacy classes.
\hspace*{\fill}$Q.E.D.$
\begin{rem}
We will often use without explicit mention the
$p^{\alpha}q^{\beta}$-Theorem (of Burnside) saying that every group of order
$p^{\alpha}q^{\beta}$ ($p,\, q$ primes) is solvable (cf. \cite{burnart} and
\cite{burnlibro}, p.323).
\end{rem}
\subsection{The case: $A=[2,3,7]_{84},\ B\in{\cal N}$}
In this section we consider the case $A=[2,3,7]_{84}$ and prove
\begin{prop}\label{prop84}
There is no finite group $G$ having an unmixed ramification structure
of type $(A,B)$ with $A=[2,3,7]_{84}$ and $B\in{\cal N}$ arbitrary.
\end{prop}
Since the finite group $G$ satisfying the conditions of Proposition
\ref{prop84} has a system of generators of type $[2,3,7]$ it has to be a
non-trivial perfect group.
Recall that a group $G$ is called
perfect if $G=G'$ where $G'$ is the commutator
subgroup of $G$. Notice that every quotient group of a perfect group is again
perfect. Running through the relevant MAGMA library it can quickly be checked
whether there is a perfect group of some cardinality. We shall often exploit
\begin{rem}\label{simplequot}
Let $G$ be a non trivial finite perfect group.
Then $G$ has a non abelian simple group
$Q$ as quotient.
\end{rem}
We shall also make use of
\begin{comp}\label{ord}
The only non abelian simple groups whose order divides $2016$,
$3024$, $4032$, $7056$ are ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ and ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$.
\end{comp}
This can be seen by applying the command
``SimpleGroupsWithOrderDividing'' of MAGMA to the above numbers.
Now we shall run through all $B\in{\cal N}$ and
explain the computations that are performed.
\subsubsection{$A=[2,3,7]_{84},\ B\in{\cal N},\ \alpha(B)\le 21$}
\label{sususe1}
The necessary computations can be speeded up enormously by first establishing
the following
\begin{comp}
The only perfect groups of order $84k$, where
$k\le 21$ is one of the numbers $\alpha(B)$ for $B\in{\cal N}$,
are
\noindent
1. ${\rm SmallGroup}(168,42)$ for $k=2$,
\noindent
2. ${\rm SmallGroup}(336,114)$ for $k=4$,
\noindent
3. ${\rm SmallGroup}(504,156)$ for $k=6$,
\noindent
4. ${\rm SmallGroup}(1344,814)$ for $k=16$,
\noindent
5. ${\rm SmallGroup}(1344,1186)$ for $k=16$.
\noindent
The first four of them have only one conjugacy class of elements of order $2$.
\end{comp}
This can be seen quickly by running through the relevant MAGMA-libraries
checking the IsPerfect-predicate.
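Such a scan can be set up, for example, as follows (MAGMA-style pseudo-code; the
values of $k$ are the numbers $\alpha(B)\le 21$ for $B\in{\cal N}$):
\begin{verbatim}
for k in [2,3,4,5,6,8,9,10,12,15,16,18,20,21] do
  n := 84*k;
  for j in [1..NumberOfSmallGroups(n)] do
    if IsPerfect(SmallGroup(n,j)) then print n, j; end if;
  end for;
end for;
\end{verbatim}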
We then exclude the first four cases: we verify that for each $B \in
\mathcal{N}$ with $\alpha(B) = 2,4,6,16$ one of the $m_i$'s is even, so that both
spherical systems of generators contain an element of order $2$; since these
groups have only one conjugacy class of elements of order $2$, such a pair can
never be disjoint.
We are left with the case $B=[2,4,8]_{16}$
(since $\alpha(B) = 16$ iff $B=[2,4,8]_{16}$) and $G={\rm SmallGroup}(1344,1186)$.
By Computational Fact \ref{ord} this
group is an extension of ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ by
a kernel of order $8$ and it can be excluded noting that
${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ has no spherical system of generators of types
$[2,4,8]$, $[2,4,4]$, $[2,2,4]$ or $[2,2,2]$.
\subsubsection{$A=[2,3,7]_{84},\ B\in{\cal N}_3,\ \alpha(B)=24$}
Here the order of $G$ is $2016$. By Remark \ref{simplequot} there is a simple non
abelian quotient $Q$ of $G$,
which by Computational fact \ref{ord} is isomorphic to ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ or to
${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$.
Suppose that $Q$ is ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$:
then the kernel $K$ of the quotient
homomorphism has order $12$. Each group of order $12$ has either a normal
Sylow-$2$-subgroup or a normal Sylow-$3$-subgroup. This Sylow-subgroup, which
we denote by $S$, is characteristic in $K$, hence normal in $G$.
Suppose that $|S|=3$. Then $G/S$ is perfect and has order
$2016/3=672$, but there are no perfect
groups of this order (checked by MAGMA). If $|S|=4$, then
$G/S$ has order $2016/4=504$. The only perfect group of order $504$ is
${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$ which is simple. A contradiction (since $K/S$ is a normal
subgroup of $G/S$).
Suppose that $Q$ is ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$: then the kernel $K$ of the quotient
homomorphism has order $4$. Since ${\rm Aut}(K)$ is solvable and $G$ is perfect,
$G$ acts trivially (by conjugation)
on $K$. Let $K_1$ be a subgroup of order $2$ in $K$. Clearly $K_1$ is normal
in $G$ and $G/K_1$ is perfect of order $1008$. There are no such groups.
This shows that there is no group $G$ of order $2016$ with a spherical
system of
generators of type $[2,3,7]$.
\subsubsection{$A=[2,3,7]_{84},\ B\in{\cal N}_3,\ \alpha(B)=30$}
Here the order of $G$ is $2520$ and $B=[2,3,10]_{30}$. We find
\begin{comp}
The only non abelian simple groups of order dividing $2520$ are
${\mathfrak A}_5$, ${\mathfrak A}_6$, ${\mathfrak A}_7$,
${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ and ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$.
\end{comp}
Let $Q$ be a simple quotient of $G$.
The cases $Q={\mathfrak A}_5$ and $Q={\mathfrak A}_6$ cannot occur since these
groups have no element of order $7$.
$Q={\mathfrak A}_7=G$ cannot occur since
${\mathfrak A}_7$ has only one conjugacy class of elements of order $2$ and hence
cannot have a disjoint pair of spherical generators of type $(A,B)$.
Suppose now that $Q={\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ or $Q={\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$.
These groups do not have an element of order
$5$, hence $Q$ would have to be a quotient of ${\hbox{\ensuremath{\mathbb{T}}}}(2,3,2)$ which is a dihedral
group. A contradiction.
\subsubsection{$A=[2,3,7]_{84},\ B\in{\cal N}_3,\ \alpha(B)=36$}
Here the order of $G$ is $3024$. By Computational fact \ref{ord} a
non abelian simple quotient $Q$ of $G$ can only be ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ or
${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$.
Suppose that $Q={\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$, then the kernel $K$ of
the quotient homomorphism
from $G$ to $Q$ has order $18$. The Sylow-$3$-subgroup $S$ of $K$ is normal,
hence characteristic in $K$. It follows that $S$ is normal in $G$.
We have
\begin{comp}
There is only one perfect group of order $336$, namely ${\rm SL}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$.
\end{comp}
Hence $G/S$ is isomorphic to ${\rm SL}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$. This is a contradiction
because $G/S$ would be a quotient of ${\hbox{\ensuremath{\mathbb{T}}}}(2,3,7)$ and
${\rm SL}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ has only one element of order $2$ which lies in its center.
Suppose that $Q={\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$. The kernel $K$ of the quotient homomorphism
from $G$ to $Q$ has order $6$. Its Sylow-$3$-subgroup $S$ is normal in $G$.
The quotient group $G/S$ has order $1008$ and is perfect. There is no
such group.
This shows that there is no group $G$ of order $3024$ with a spherical system
of
generators of type $[2,3,7]$.
\subsubsection{$A=[2,3,7]_{84},\ B\in{\cal N}_3,\ \alpha(B)=40$}
Here the order of $G$ is $3360$ and $B=[2,4,5]_{40}$. We find
\begin{comp}
The only non abelian simple groups of order dividing $3360$ are
${\mathfrak A}_5$, and ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$.
\end{comp}
Any minimal non abelian simple quotient $Q$ of $G$ has to be one of these
two groups. Using the fact that
${\mathfrak A}_5$ has no element of order $7$ and ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ has no
element of order $5$ we see that there is no group $G$ in this section.
\subsubsection{$A=[2,3,7]_{84},\ B\in{\cal N}_3,\ \alpha(B)=48$}\label{ss1}
Here the order of $G$ is $4032$ and $B=[2,3,8]_{48}$.
By Computational fact \ref{ord}
any minimal non abelian simple quotient $Q$ of $G$ has to be one of
${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ or ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$.
But these two groups do not have a
spherical system of generators of type $[2,3,8]$, $[2,3,4]$ or $[2,3,2]$.
\subsubsection{$A=[2,3,7]_{84},\ B\in{\cal N}_3,\ \alpha(B)=84$}
Here the order of $G$ is $7056$.
By Computational fact \ref{ord}
any minimal non abelian simple quotient $Q$ of $G$ has to be
${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ or ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$.
Let $K$ be the kernel of the
quotient homomorphism. This group has order $42$ or $14$ and
it has a normal, hence characteristic
Sylow-$7$-subgroup $S$, which has to be normal in $G$. Then $G/S$ has
order $1008$ and is perfect. There are no such groups.
\subsection{The case: $A=[2,3,8]_{48},\ B\in{\cal N}$}
In this section we treat the case $A=[2,3,8]_{48}$ and prove
\begin{prop}\label{prop48}
There is no finite group $G$ having an unmixed ramification structure
of type $(A,B)$ with $A=[2,3,8]_{48}$ and $B\in{\cal N}$ arbitrary.
\end{prop}
Let us first consider the case $B\in{\cal N}_3$. All cases except
$B=[2,3,7]_{84}$, $[2,3,8]_{48}$ or $[2,4,8]_{16}$ can be analysed quickly by
MAGMA. In fact, there is no group $G$
having simultaneously a spherical system of generators
of type $A$ and one of type $B$ in these cases. The computer calculation is
speeded up enormously by noting that $G$ either has to be perfect or has to have
abelianisation equal to ${\rm Z}_2$.
In the cases $B=[2,3,7]_{84}$, $[2,3,8]_{48}$ the order of $G$ is bigger than
$2000$ and there are no MAGMA-libraries of groups of such order. The
first case was already treated in Section \ref{ss1}, the second is treated below. In case
$B=[2,4,8]_{16}$ the number of groups ($1090235$) to be considered makes the
computation time consuming, so we treat it by a direct argument below.
\subsubsection{$A=[2,3,8]_{48}$, $B\in {\cal N}_3,\ \alpha(B)=16$}
Here we have $B=[2,4,8]_{16}$. The order of the group $G$ is $768 = 3 \cdot 256$ and
$G$ is solvable.
The group $G$ is a quotient of ${\hbox{\ensuremath{\mathbb{T}}}}(2,3,8)$ and of ${\hbox{\ensuremath{\mathbb{T}}}}(2,4,8)$.
We have
\begin{equation}\label{tria1}
{\hbox{\ensuremath{\mathbb{T}}}}(2,3,8)^{\rm ab}={\rm Z}_2,\qquad {\hbox{\ensuremath{\mathbb{T}}}}(2,3,8)' \cong {\hbox{\ensuremath{\mathbb{T}}}}(3,3,4).
\end{equation}
The abelianisation of ${\hbox{\ensuremath{\mathbb{T}}}}(3,3,4)$ is ${\rm Z}_3$. These facts imply that $G^{\rm
ab}$ is ${\rm Z}_2$ and $(G')^{\rm ab}$ is ${\rm Z}_3$, since neither $G$ nor $G'$ is perfect.
The triangle group ${\hbox{\ensuremath{\mathbb{T}}}}(2,4,8)$ has exactly $3$ subgroups of index $2$. They are
isomorphic to ${\hbox{\ensuremath{\mathbb{T}}}}(2,8,8)$, ${\hbox{\ensuremath{\mathbb{T}}}}(4,4,4)$ and ${\hbox{\ensuremath{\mathbb{T}}}}(2,2,2,4)$ and each of them
has a $2$-group as abelianisation. This implies that the
abelianisation of $G'$ has to be a $2$-group which is a contradiction.
The subgroups of finite index in finitely presented groups (like ${\hbox{\ensuremath{\mathbb{T}}}}(2,4,8)$)
can quickly be analysed by the ``generators and relations'' programs of MAGMA
to obtain results like those just used (alternatively one can use geometric
branching arguments, cf. lemma $(2.3)$ of \cite{cat03}).
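For instance, the assertions about the index $2$ subgroups of
${\hbox{\ensuremath{\mathbb{T}}}}(2,4,8)$ used above can be checked along the following
lines (MAGMA-style pseudo-code):
\begin{verbatim}
F<t1,t2,t3> := FreeGroup(3);
T248 := quo< F | t1^2, t2^4, t3^8, t1*t2*t3 >;
// subgroups of index at most 2 ...
L := LowIndexSubgroups(T248, 2);
// ... and their abelian invariants: every index 2
// subgroup has a 2-group as abelianisation
[ AbelianQuotientInvariants(H) : H in L ];
\end{verbatim}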
\subsubsection{$A=[2,3,8]_{48}$, $B\in {\cal N}_3,\ \alpha(B)=48$}
In this case we have $B=[2,3,8]_{48}$ and $|G|=2304$.
Every group of this order is solvable.
We again use (\ref{tria1}) to find that $G$
has to have a subgroup of index $2$ with
abelianisation ${\rm Z}_3$. The
SmallGroups-library of MAGMA contains all groups of order $1152$. It can
be quickly checked that there is no group of order $1152$ with
abelianisation equal to ${\rm Z}_3$.
\subsubsection{$A=[2,3,8]_{48}$, $B\in {\cal N}_4,\ {\cal N}_5,\ {\cal N}_6$}
All these pairs of types $(A,B)$ can be quickly searched by computer to
find
\begin{comp}
There is no finite group $G$ having a pair of systems of generators
of types $(A,B)$ with $A=[2,3,8]_{48}$, $B\in {\cal N}_4$,
${\cal N}_5$, ${\cal N}_6$, with the exception of
${\rm SmallGroup}(96,64)$, which has pairs of spherical systems of
generators of
type $(A,B)$ with $B=[2,2,2,2,2,2]_2$, but no disjoint ones.
\end{comp}
\subsection{The case: $A,\, B\in{\cal N}_3,\ \alpha(A),\, \alpha(B)\le 40$}
All these pairs of types $(A,B)$ can be quickly searched by computer to
find
\begin{comp}\label{prop40}
There is no finite group $G$ having a disjoint pair of spherical
systems of generators
of types $(A,B)$ with $A,\, B\in {\cal N}_3$ and
$\alpha(A),\, \alpha(B)\le 40$, except for $G={\rm Z}_5^2$ which admits such a
pair with $A = B=[5,5,5]_5$.
\end{comp}
\subsection{The case: $A\in {\cal N}_3,\, \alpha(A)\ne 48,\, 84,\
B\in{\cal N}_4,\ {\cal N}_5,\ {\cal N}_6$}
All these pairs of types $(A,B)$ can be quickly searched by computer to
find as only groups $G$ with an unmixed ramification structure the following cases
\begin{itemize}
\item $A=[ 2, 5, 5]_{20},\, B=[ 3, 3, 3, 3]_{ 3}$, $G={\rm SmallGroup}(60,5)$,
\item $A=[ 5, 5, 5]_{5},\, B=[ 2, 2, 2, 3]_{ 12}$, $G={\rm SmallGroup}(60,5)$,
\item $A=[ 3, 3, 5]_{15},\, B=[ 2, 2, 2, 2,2]_{ 4}$, $G={\rm SmallGroup}(60,5)$,
\item $A=[ 2, 4, 6]_{24},\, B=[ 2, 2, 2, 2,2,2]_{ 2}$, $G={\rm SmallGroup}(48,48)$,
\item $A=[ 3, 4, 4]_{12},\, B=[ 2, 2, 2, 2,2,2]_{ 2}$, $G={\rm SmallGroup}(24,12)$.
\end{itemize}
From the description of the groups given by MAGMA it is easy to see that
${\rm SmallGroup}(60,5)$ is the alternating group ${\mathfrak A}_5$, ${\rm SmallGroup}(48,48)$ is
${\mathfrak S}_4\times {\rm Z}_2$ and ${\rm SmallGroup}(24,12)$ is ${\mathfrak S}_4$.
\subsection{The case: $A,\, B\in {\cal N}_4,\ {\cal N}_5,\ {\cal N}_6$}
\label{suse44}
All these pairs of types $(A,B)$ can be quickly searched by computer to
find as only groups $G$ with an unmixed ramification structure the following cases
\begin{itemize}
\item $A=[ 2, 2, 2,4]_{8},\, B=[ 2, 2, 4, 4]_{ 4}$, $G={\rm SmallGroup}(32,27)$,
\item $A=[ 2, 2, 4,4]_{4},\, B=[ 2, 2, 4, 4]_{ 4}$, $G={\rm SmallGroup}(16,3)$,
\item $A=[ 2, 2, 2,4]_{8},\, B=[ 2, 2, 2, 2,2,2]_{ 2}$, $G={\rm SmallGroup}(16,11)$,
\item $A=[ 2, 2, 2,2,2]_{4},\, B=[ 2, 2, 2, 2,2]_{ 4}$, $G={\rm SmallGroup}(16,14)$,
\item $A=[ 3, 3, 3,3]_{3},\, B=[ 3, 3, 3, 3]_{ 3}$, $G={\rm SmallGroup}(9,2)$,
\item $A=[ 2, 2, 2,2,2]_{4},\, B=[2,2,2,2,2,2]_{2}$, $G={\rm SmallGroup}(8,5)$.
\end{itemize}
The group ${\rm SmallGroup}(8,5)$ is ${\rm Z}_2^3$, ${\rm SmallGroup}(9,2)$ is ${\rm Z}_3^2$,
${\rm SmallGroup}(16,14)$ is ${\rm Z}_2^4$ and ${\rm SmallGroup}(16,11)$ is ${\rm D}_4\times {\rm Z}_2$ where
${\rm D}_4$ stands for the dihedral group of order $8$.
A finite presentation of ${\rm SmallGroup}(16,3)$ is
\begin{equation}\label{gr16}
G(16):={\rm SmallGroup}(16,3)=\langle\, g_1,\, g_2,\, g_3,\, g_4\quad \vrule \quad g_1^2=g_4,\,
g_2^{g_1}=g_2g_3\,\rangle.
\end{equation}
\begin{rem}
The convention here is that the squares of all generators $g_1,\ldots,g_4$
which are not mentioned in the presentation are equal to $1$. If $h_1,\, h_2$
are elements of the group $G$ then $h_1^{h_2}:=h_2^{-1}h_1h_2$. All conjugates
$g_i^{g_j}$ amongst the generators which are not mentioned are equal to $g_i$,
i.e., $g_i$ and $g_j$ commute in this case.
\end{rem}
A finite presentation of $G(32):={\rm SmallGroup}(32,27)$ is
\begin{equation}\label{gr32}
G(32)=\langle\, g_1,\, g_2,\, g_3,\, g_4,\, g_5\quad \vrule \quad
g_2^{g_1}=g_2g_4, \, g_3^{g_1}=g_3g_5 \,\rangle.
\end{equation}
\section{The mixed case, classification of the groups}\label{classimi}
This section contains the classification of all finite groups $G$ which admit
a mixed ramification structure of type $A\in {\cal M}$.
In fact, there are only two such
groups which are described in detail
in Section \ref{resum}. We show
\begin{prop}
There are two finite groups which admit
a mixed ramification structure of type $A\in {\cal M}$. They both
have order $256$ and admit a structure of type $[4,4,4]_{16}\in{\cal M}_3$.
\end{prop}
The proof relies again heavily on the use of the MAGMA-library containing
all groups of low order.
In order to avoid an excess of computations
we first consider each type $A=[m_1,\ldots,m_r]\in {\cal M}_r$
($r\in{\hbox{\ensuremath{\mathbb{N}}}}$) separately, going through
the finite list of Proposition \ref{propmtup},
trying in a first round to exclude as
many cases as possible by criteria which are computationally
cheap to verify.
If these are satisfied, and if we have access to the groups of order
$\beta(A)^2/2$ through a MAGMA-library, we check for each of these groups $H$
\begin{itemize}
\item does $H$ admit a spherical system of generators of type $A$?
\item does $H$ admit a disjoint pair of spherical systems of
generators of type $(A,A)$?
\end{itemize}
In fact, only very few groups $H$ survive the first test and
for them the second
criterion, though computationally expensive, can be carried out. We are left
with a small list of groups $H$ admitting a disjoint pair of spherical systems
of generators of type $(A,A)$, i.e., an unmixed ramification structure.
Fortunately such groups $H$
only appear when the order $\beta(A)^2$ is small enough to have access to
all groups $G$ of this order. We then go through all these groups $G$ and
list their subgroups of index $2$ isomorphic to one of the groups $H$. We
then check whether the compatibility conditions for a mixed
ramification structure of type $A$
(see Section \ref{groups}) could be satisfied.
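Concretely, given a candidate pair $(H,T)$ inside $G$, the last two conditions
of Definition \ref{defimi} can be tested with the routine {\tt SigmaSet}
sketched in Section \ref{groups} (MAGMA-style pseudo-code; that $T$ generates
$H$ and has the right type is checked separately, and conjugates in $\Sigma$
are taken inside $H$):
\begin{verbatim}
IsMixedCompatible := function(G, H, T)
  S := SigmaSet(H, T);
  for g in G do
    if g in H then continue; end if;
    Tg := [ t^g : t in T ];
    if (SigmaSet(H, Tg) meet S) ne { Id(H) } then
      return false;
    end if;
    if g^2 in S then return false; end if;
  end for;
  return true;
end function;
\end{verbatim}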
If the order $\beta(A)^2/2$ is too big to use a
MAGMA-library we analyse the
subgroups of low index in the polygonal group
${\hbox{\ensuremath{\mathbb{T}}}}(m_1,\ldots,m_r)$
(which often happen to be isomorphic to
polygonal groups). We can always show that $H$ would then have
a subgroup of low index which is a quotient of another polygonal
group. Using this descent procedure, sometimes repeatedly,
we were always brought into a
region of orders accessible to MAGMA-libraries.
\begin{rem}
Proposition $(4.4)$ of \cite{BCG} contains a misprint (the order of $G$
was mistakenly confused with the order of $H$). The correct
statement is: no group of order $< 256$ admits a mixed Beauville structure, i.e., a
mixed ramification structure of length $3$.
\end{rem}
\subsection{$A\in{\cal M}_3$, $\beta(A)\ne 16$}
In this section we treat the cases $A\in{\cal M}_3$, $\beta(A)\ne 16$. In each
case we indicate a sequence of MAGMA-computations showing that there is no
finite group $G$ admitting
a mixed ramification structure of such type.
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=8,\, 10,\, 12,\, 14,\, 18,
\, 20,\, 30,\, 36,\, 42,\, 60$}
In these cases the order of $H$ is either small or divisible
only by a power of $2$ with exponent $\leq 3$ and
all relevant groups $H$ can be quickly inspected by MAGMA.
We find
\begin{comp}
Let $k$ be one of the numbers $8,\, 10,$ $12,\, 14,$
$18,\, 20,$ $30,\, 36,$ $42,\, 60$
and $H$ a group of order $k^2/2$. Then
$H$ does not have a disjoint pair of spherical systems of
generators of type $(A,A)$
with $\beta(A)=k$.
\end{comp}
For $k=10,\, 14,\, 60$ there is even no group of order $k^2/2$ having a
system of generators of type $A$
with $\beta(A)=k$.
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=24$}
The group $G$ has order $|G|=576= 2^6 \cdot 3^2$, hence is
solvable. Since the order of the subgroup $H$ is still low we may quickly infer
from MAGMA
\begin{comp}
No group $H$ of order $288$
has a disjoint pair of spherical systems of generators of type $(A,A)$
with $A\in{\cal M}_3$ and $\beta(A)=24$.
\end{comp}
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=32$}
Here $A=[ 2 , 4 , 8]_{ 32 }$.
The group $G$ has order $|G|=1024$, it is a $2$-group.
Its subgroup $H$ has order $512$ and
has a spherical system of generators of type $[ 2 , 4 , 8]$. A computation (of about $6$
hours)
using the MAGMA-library of groups of order $512$ reveals
\begin{comp}
There are $10494213$ groups of order $512$. Eight of them
(${\rm SmallGroup}(512,v)$ for $v=409$, $1818$, $1822$, $1832$, $1838$, $1854$, $1862$,
$2023$)
have a system of generators of type $[ 2 , 4 , 8]$.
\end{comp}
\medskip
The analysis of so many groups is made feasible by first selecting those
groups of order $512$ whose abelianisation is a quotient of the
abelianisation of ${\hbox{\ensuremath{\mathbb{T}}}}(2,4,8)$, which is isomorphic to ${\rm Z}_2\times{\rm Z}_4$.
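Since this kind of abelianisation pre-filter recurs several times below, we
record a sketch of it (our code; \texttt{small\_groups} is a hypothetical
placeholder for the MAGMA/GAP database iterator, and a sympy-like
\texttt{abelian\_invariants()} returning the primary invariants is assumed):
\begin{verbatim}
# `small_groups(n)` is a placeholder for the database iterator.
ALLOWED = {(), (2,), (4,), (2, 2), (2, 4)}   # quotients of Z_2 x Z_4

def candidates(order):
    for H in small_groups(order):
        if tuple(sorted(H.abelian_invariants())) in ALLOWED:
            yield H        # only these need the expensive test
\end{verbatim}
For the case $\beta(A)=96$ below the same filter is used with the allowed set
extended by $(4,4)$, corresponding to quotients of ${\rm Z}_4\times{\rm Z}_4$.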
Now a quick computation shows
\begin{comp}
None of the groups of order $512$ admitting
a spherical system of generators of type $[ 2 , 4 , 8]$ has a
disjoint pair of spherical systems of generators of type
$([ 2 , 4 , 8],\,[ 2 , 4 , 8])$.
\end{comp}
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=40$}
Here $A=[ 2 , 5 , 5]_{40}$.
The group $G$ has order $|G|=1600$, hence it is
solvable.
Its subgroup $H$ has order $800$ and has a spherical system of generators
of type $[2,5,5]$.
We have
\begin{equation}
{\hbox{\ensuremath{\mathbb{T}}}}(2,5,5)^{\rm ab}={\rm Z}_5.
\end{equation}
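For the convenience of the reader we recall the one-line computation behind
this identity: abelianising $\langle\, a,b,c \mid a^2=b^5=c^5=abc=1 \,\rangle$
and eliminating $c=(ab)^{-1}$ gives
$$1=(ab)^5=a^5b^5=a,$$
so the abelianisation is generated by the image of $b$ and is cyclic of
order $5$.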
Therefore the abelianisation of $H$ is a quotient of ${\rm Z}_5$; since $H$ is solvable, its abelianisation is non-trivial, hence equal to ${\rm Z}_5$.
\begin{comp}
There are $1211$ groups of order $800$.
None of them has abelianisation ${\rm Z}_5$.
\end{comp}
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=48$}
Here we have
$A=[2,3,12]_{48},\, [ 2 , 4 , 6]_{ 48},\, [ 3 , 3 , 4]_{ 48}$.
Going through the MAGMA-library of groups of order $1152=24\cdot 48$
we find
\begin{comp}
There are $157877$ groups of order $1152$.
\begin{itemize}
\item None of them has a system of generators of type $[ 3,3 ,4]$,
\item one of them
(${\rm SmallGroup}(1152,155454)$)
has a system of generators of type $[ 2 , 3 ,12]$,
\item one of them (${\rm SmallGroup}(1152,157849)$)
has a system of generators of type $[ 2 , 4 ,6]$.
\end{itemize}
\end{comp}
Treating the two remaining groups ${\rm SmallGroup}(1152,155454)$ and
${\rm SmallGroup}(1152,157849)$ is computationally cheap. We have
\begin{comp}
1) If $[a,b,c]$, $[a',b',c']$
are two systems of generators of the group $H:={\rm SmallGroup}(1152,155454)$ of type
$[ 2 , 3 , 12]$, then $a,\, a'$ are conjugate in $H$.
\noindent
2) If $[a,b,c]$, $[a',b',c']$
are systems of generators of $H:={\rm SmallGroup}(1152,157849)$ of type
$[ 2 , 4 ,6]$, then $a,\, a'$ are conjugate in $H$.
\end{comp}
This shows that there are no groups $H$ of order $1152$ with a
disjoint pair of spherical systems
of generators of one of the types above.
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=72$}
Here $A=[2,3,9]_{72}$. The group $G$ has order $|G|=5184$, hence it is
solvable. Its index $2$
subgroup $H$ has order $2592$ and a system of generators of type $[2,3,9]$.
The order of $H$ is too big to apply the computational arguments used
before. To exclude this case we argue as follows.
We have
\begin{equation}\label{we5}
{\hbox{\ensuremath{\mathbb{T}}}}(2,3,9)^{\rm ab}={\rm Z}_3,\qquad {\hbox{\ensuremath{\mathbb{T}}}}(2,3,9)' \cong {\hbox{\ensuremath{\mathbb{T}}}}(2,2,2,3).
\end{equation}
The commutator subgroup $H'$ of $H$ has order $864$ and we conclude from
(\ref{we5})
that $H'$ is a quotient of ${\hbox{\ensuremath{\mathbb{T}}}}(2,2,2,3)$. Looking through the
relevant MAGMA-library we find
\begin{comp}\label{compufa72}
Of the $4725$ groups of order $864$ only the two groups
$H_1:={\rm SmallGroup}(864,2225)$ and
$H_2:={\rm SmallGroup}(864,4175)$ are quotients of ${\hbox{\ensuremath{\mathbb{T}}}}(2,2,2,3)$.
\end{comp}
We are left with the question whether $H_1$ or $H_2$ is the commutator
subgroup of a group $H$ of order $2592$ which has a disjoint pair of spherical
systems of generators of type $(A,A)$. We need
\begin{lemma}\label{lemgru72}
Let $\tilde H$ be a finite group with $\tilde H^{\rm ab}={\rm Z}_3$
which has a disjoint pair of spherical
systems of generators
$([h_{(1,1)},h_{(1,2)},h_{(1,3)}],\,[h_{(2,1)},h_{(2,2)},h_{(2,3)}])$ with $[2,3,9]=$
$[{\rm ord}(h_{(1,1)}),{\rm ord}(h_{(1,2)}), {\rm ord}(h_{(1,3)})]$
$=[{\rm ord}(h_{(2,1)}),{\rm ord}(h_{(2,2)}), {\rm ord}(h_{(2,3)})]$. Then
\begin{itemize}
\item[{\rm (i)}]
$[h_{(i,1)},h_{(i,2)}h_{(i,1)}h_{(i,2)}^{-1},h_{(i,2)}^2h_{(i,1)}h_{(i,2)}^{-2}]$
($i=1,\, 2$) are generating tuples of elements of order $2$ for $\tilde H'$ such that
$z_i:=h_{(i,1)}\cdot h_{(i,2)}h_{(i,1)}h_{(i,2)}^{-1}\cdot
h_{(i,2)}^2h_{(i,1)}h_{(i,2)}^{-2}$ has order $3$.
\item[{\rm (ii)}] $\tilde \Sigma_1\cap\tilde \Sigma_2=\emptyset$ where for
$i=1,\, 2$
$$\tilde\Sigma_i:=\bigcup_{h\in \tilde H'} h\{\,h_{(i,1)},\,
z_i,\, z_i^2\,\}h^{-1}$$
\end{itemize}
\end{lemma}
We skip the straightforward proof.
We finish this case by
\begin{comp} The groups $H_1$, $H_2$ in Computational Fact \ref{compufa72}
do not have generating $3$-tuples with
properties ${\rm (i)},\, {\rm (ii)}$ of Lemma \ref{lemgru72}.
\end{comp}
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=80$}
Here $A=[2,4,5]_{80}$.
The group $G$ has order $|G|=6400$, hence is
solvable. Its index $2$ subgroup $H$ has a system of
generators of type $[2,4,5]$.
We have
\begin{equation}
{\hbox{\ensuremath{\mathbb{T}}}}(2,4,5)^{\rm ab}={\rm Z}_2,\qquad {\hbox{\ensuremath{\mathbb{T}}}}(2,4,5)' \cong {\hbox{\ensuremath{\mathbb{T}}}}(2,5,5).
\end{equation}
This implies that there must be a subgroup $H_1$ of index
$2$ in $H$ which is a quotient of ${\hbox{\ensuremath{\mathbb{T}}}}(2,5,5)$.
$H_1$ has order $1600$ and is solvable, hence its abelianisation is a
non-trivial quotient of ${\rm Z}_5$, i.e., equal to ${\rm Z}_5$. An inspection of the relevant MAGMA-library shows
\begin{comp}
There are $10281$ groups of order $1600$.
None of them has abelianisation ${\rm Z}_5$.
\end{comp}
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=96$}
Here $A=[2,3,8]_{96}$ and
$G$ has order $|G|=96^2=9216$, hence is
solvable. Its subgroup $H$ has a system of generators of type $[2,3,8]$.
We have
\begin{equation}
{\hbox{\ensuremath{\mathbb{T}}}}(2,3,8)^{\rm ab}={\rm Z}_2,\qquad {\hbox{\ensuremath{\mathbb{T}}}}(2,3,8)' \cong {\hbox{\ensuremath{\mathbb{T}}}}(3,3,4).
\end{equation}
This implies that there must be a subgroup $H_1$ of index
$2$ in $H$ which is a quotient of ${\hbox{\ensuremath{\mathbb{T}}}}(3,3,4)$. We further have
\begin{equation}
{\hbox{\ensuremath{\mathbb{T}}}}(3,3,4)^{\rm ab}={\rm Z}_3, \qquad {\hbox{\ensuremath{\mathbb{T}}}}(3,3,4)'\cong {\hbox{\ensuremath{\mathbb{T}}}}(4,4,4).
\end{equation}
This in turn implies ($H$ is solvable) that there is a subgroup $H_2$ of index
$3$ in $H_1$ which is a quotient of ${\hbox{\ensuremath{\mathbb{T}}}}(4,4,4)$. Note that $|H_2|=768$.
\begin{comp}
There are $1090235$ groups of order $768$.
None of them is a quotient of ${\hbox{\ensuremath{\mathbb{T}}}}(4,4,4)$.
\end{comp}
This fact can be derived by inspection of the relevant MAGMA-library.
The analysis of such a huge number of groups is made possible by first selecting
those groups of order $768$ whose abelianisation is a quotient of the
abelianisation of ${\hbox{\ensuremath{\mathbb{T}}}}(4,4,4)$, which is ${\rm Z}_4\times{\rm Z}_4$. There are $1651$ such
groups. For these it is quickly checked whether they are a quotient of
${\hbox{\ensuremath{\mathbb{T}}}}(4,4,4)$.
\subsubsection{\bf $A\in{\cal M}_3$,\ $\beta(A)=168$}
Here $A=[2,3,7]_{168}$.
The group $G$ has order $|G|=28224$, its subgroup $H$ has order
$14112$ and has a system of generators of type $[2,3,7]$.
The group $H$ is perfect, hence has a non abelian simple group as quotient.
Note that the only non abelian simple groups with order dividing $14112$
are ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_7)$ and ${\hbox{\rm PSL}}(2,{\hbox{\ensuremath{\mathbb{F}}}}_8)$.
Let $K$ be the kernel of the
quotient homomorphism. This group has order $28$ or $84$ and
it has a normal, hence characteristic
Sylow-$7$-subgroup $S$.
Therefore $S$ has to be normal in $H$, and $H/S$ has
order $14112/7=2016$ and is perfect. There are no such groups.
\subsection{$A\in{\cal M}_3$, $\beta(A)=16$}\label{resum}
In Section \ref{tup} we have shown that the possible tuples are
$[ 2 , 6 , 12]_{ 16 }$,
$[ 3 , 3 , 12]_{ 16 }$, $[ 3 , 4 , 6]_{ 16 }$, $[ 2 , 8 , 8 ]_{ 16 }$,
$[ 4 , 4 , 4]_{ 16 }$. The first three are not possible since a group of
order $128$ cannot contain elements of order $3$. This leaves
$[ 2 , 8 , 8 ]_{ 16 }$ and $[ 4 , 4 , 4]_{ 16 }$.
\subsubsection{\bf $A=[ 2 , 8 , 8 ]_{ 16 }$}
Going through the list of groups of order $128$ we find
\begin{comp}
There are $2328$ groups of order $128$.
Only $7$ of them have a spherical system of
generators of type $[ 2 , 8 , 8 ]$. These are
${\rm SmallGroup}(128,v)$ for $v=2,\, 48,\, 50,\, 77,\, 135,\, 137,\, 142$.
\end{comp}
Analysing the $7$ remaining groups we find
\begin{comp}
None of the groups ${\rm SmallGroup}(128,v)$ ($v=2$,\, $48,$\, $50,$\, $77,$\,
$135,$\, $137,$\, $142$)
has a disjoint pair of spherical systems of generators of type $(A,A)$.
\end{comp}
\subsubsection{\bf $A=[ 4 , 4 , 4 ]_{ 16 }$}\label{suse256}
Going through the list of groups of order $128$ we find
\begin{comp}
There are $2328$ groups of order $128$. Only $4$ of them have a system of
generators of type $[ 4 , 4 , 4 ]$. These are
${\rm SmallGroup}(128,v)$ for $v=36,\, 125,\, 141,\, 144$. Only
${\rm SmallGroup}(128,36)$ has a disjoint pair of systems of generators of type $(A,A)$.
\end{comp}
This leaves us with the possibility that $H\cong {\rm SmallGroup}(128,36)$. Going through
the groups of order $256$ we find
\begin{comp}
Of the $56092$ groups of order $256$ only $29$ contain a subgroup of index $2$
isomorphic to ${\rm SmallGroup}(128,36)$. They are the groups ${\rm SmallGroup}(256,v)$ for $v=$
$ 382 $, $ 414 $, $ 1087 $, $ 1088 $, $ 1089 $, $ 1090 $, $ 1734 $, $ 1735 $,
$ 1736 $, $ 1737 $, $ 1738 $, $ 2483 $, $ 2484 $, $ 2485 $, $ 2486 $,
$ 2487 $, $ 2488 $, $ 2489 $, $ 2490 $, $ 3324 $, $ 3325 $, $ 3326 $, $ 3327 $,
$ 3378 $, $ 3379 $, $ 3380 $, $ 3381 $, $ 3678 $, $ 3679 $.
\end{comp}
Analysing the $29$ remaining groups we easily find
\begin{prop}
Of the groups of order $256$ exactly
\begin{equation}
{\rm G}(256,1):={\rm SmallGroup}(256,3678),\quad {\rm G}(256,2):={\rm SmallGroup}(256,3679)
\end{equation}
admit a mixed ramification structure of type $[ 4 , 4 , 4 ]_{ 16 }$.
\end{prop}
Presentations for the groups
${\rm G}(256,1)$ and ${\rm G}(256,2)$ are
\begin{equation}\label{gr2561}
{\rm G}(256,1)=\left\langle\, g_1,\ldots,g_8\quad \vrule \quad
\begin{matrix}
g_1^2 = g_4g_5g_6, & g_2^2 = g_4 g_5, & g_3^2 = g_4, \cr
g_2^{g_1} = g_2 g_4, & g_3^{g_1} = g_3 g_5, & g_3^{g_2} = g_3 g_6,\cr
g_4^{g_1} = g_4 g_7, & g_4^{g_2} = g_4 g_8, & g_5^{g_1} = g_5g_7 g_8,\cr
g_5^{g_2} = g_5 g_8, & g_5^{g_3} = g_5 g_7, & g_6^{g_1} = g_6 g_8,\cr
g_6^{g_2} = g_6 g_7, & g_6^{g_3} = g_6 g_8 &
\end{matrix}
\right\rangle,
\end{equation}
\begin{equation}\label{gr2562}
{\rm G}(256,2)=\left\langle\, g_1,\ldots,g_8 \quad \vrule \quad
\begin{matrix}
g_1^2 = g_4g_5g_6g_7, & g_2^2 = g_4 g_5, & g_3^2 = g_4, \cr
g_2^{g_1} = g_2 g_4, & g_3^{g_1} = g_3 g_5, & g_3^{g_2} = g_3 g_6,\cr
g_4^{g_1} = g_4 g_7, & g_4^{g_2} = g_4 g_8, & g_5^{g_1} = g_5g_7 g_8,\cr
g_5^{g_2} = g_5 g_8, & g_5^{g_3} = g_5 g_7, & g_6^{g_1} = g_6 g_8,\cr
g_6^{g_2} = g_6 g_7, & g_6^{g_3} = g_6 g_8 &
\end{matrix}
\right\rangle.
\end{equation}
The conventions for these so-called PC-presentations are explained in Section
\ref{suse44}.
\subsection{$A\in{\cal M}_4,\ {\cal M}_5,\ {\cal M}_6,\ {\cal M}_8$}
In this section we treat the cases $A\in{\cal M}_4,\,
{\cal M}_5,\, {\cal M}_6,\, {\cal M}_8$. We show
\begin{prop} There is no finite group $G$ admitting a mixed ramification
structure of type $A\in{\cal M}_4,\,
{\cal M}_5,\, {\cal M}_6,\, {\cal M}_8$.
\end{prop}
\subsubsection{$A\in{\cal M}_4$}
The order of the group $H$ is at most $288$ and all relevant groups can be
checked for generating systems.
\begin{center}
\begin{tabular}{|c|c|}
\hline
$A$ & Result \\
\hline
$[ 2 , 2 , 2 , 3]_{24}$ & No disjoint generating systems\\
$[ 2 , 2 , 2 , 4 ]_{16 }$ & No disjoint generating systems\\
$[ 2 , 2 , 2 , 6 ]_{12 }$ & No disjoint generating systems\\
$[ 2 , 2 , 3 , 3 ]_{ 12 }$ & No disjoint generating systems\\
$[ 2 , 2 , 2 , 10 ]_{10}$ & No generating systems\\
$[ 2 , 2 , 4 , 4 ]_{8}$ & ${\rm SmallGroup}(32,22)$ admits a disjoint pair of generating systems\\
$[ 2 , 2 , 6 , 6 ]_{6 }$ & No disjoint generating systems\\
$[ 2 , 3 , 3 , 6 ]_{6 }$ & No disjoint generating systems\\
$[ 3 , 3 , 3 , 6 ]_{6 }$ & No generating systems\\
$[ 4 , 4 , 4 , 4 ]_{4 }$ & No disjoint generating systems\\
\hline
\end{tabular}
\end{center}
\begin{comp}
Of the $267$ groups of order $64$ only $32$ contain a subgroup of index $2$
which is isomorphic to ${\rm SmallGroup}(32,22)$. None of them has a mixed ramification
structure of type $[ 2 , 2 , 4 , 4 ]_{8}$.
\end{comp}
\subsubsection{$A\in {\cal M}_5,\ {\cal M}_6,\ {\cal M}_8$}
In these cases the relevant group orders are so small that all groups $G$ can
easily be inspected. There is none with a mixed ramification structure.
\section{Moduli spaces}\label{modu}
In this section we will describe completely the moduli spaces of the surfaces
isogenous to a product with $p_g = q = 0$. More precisely, let
$\mathfrak{M}_{(1,8)}$ be the moduli space of minimal smooth complex
projective surfaces with $\chi (S) = 1$ and $K_S^2 = 8$. As usual $K_S$
denotes the canonical divisor of $S$ and $\chi(S) = 1 + p_g(S) -q(S)$ is the
holomorphic Euler--Poincar\'e characteristic of $S$. It is nowadays well known
(cf. \cite{gieseker}) that $\mathfrak{M}_{(a,b)}$ is quasiprojective for all
$a,\, b\in{\hbox{\ensuremath{\mathbb{N}}}}$.
Obviously, our surfaces are contained in the
moduli space $\mathfrak{M}_{(1,8)}$
and we will
describe their locus there.
Let $G$ be a finite group and fix an unmixed ramification type
$(A,B)\in {\hbox{\ensuremath{\mathbb{N}}}}^r\times {\hbox{\ensuremath{\mathbb{N}}}}^s$. We
denote by $\mathfrak{M}_{(G;A,B)}$ the subset of $\mathfrak{M}_{(1,8)}$ defined
by isomorphism classes of surfaces isogenous to a product admitting
a ramification type
$(A,B)$ (or $(B,A)$).
We observe
\begin{rem} 1) The set
$\mathfrak{M}_{(G;A,B)}\subset \mathfrak{M}_{(1,8)}$
consists of a finite number of connected components
of the same dimension,
which are irreducible in the Zariski topology.
2) It is clear from Section \ref{basi} that the dimension
$d(G;A,B)$ of any component in
$\mathfrak{M}_{(G;A,B)}$ is precisely $\ell(A)-3+\ell(B)-3$
since we take $\ell(A)$ points in $\ensuremath{\mathbb{P}}^1$ modulo projective equivalence,
and likewise $\ell(B)$ points in $\ensuremath{\mathbb{P}}^1$ modulo projective equivalence.
\end{rem}
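As an illustration, for the type $([2,4,6]_{24},[2,2,2,2,2,2]_2)$ occurring in
the table below the remark gives $d=(3-3)+(6-3)=3$, in accordance with the
last column.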
In order to calculate the number of components $n(G;A,B)$ of
$\mathfrak{M}_{(G;A,B)}$ we use the following
\begin{prop}\label{thu1}
Let $S,\, S'$ be surfaces isogenous to a product,
of unmixed type and with $q(S)=q(S')=0$. Then $S$, $S'$
are in the same irreducible component if and only if $G(S)\cong G(S')$,
$(A_1(S),A_2(S))=(A_1(S'),A_2(S'))$ and
${\cal T}(S)$ and ${\cal T}(S')$ are in the same orbit of
${\bf B}_r\times {\bf B}_s\times {\rm Aut}(G)$ where $r=\ell(T_1)$,
$s=\ell(T_2)$.
\end{prop}
For a proof we refer to \cite{BaCa}.
By computer calculation we obtain the following table
of the possible unmixed ramification structures on finite groups
of type $(A,B)$ with $A,\, B\in {\cal N}$ leading to surfaces with $K^2=8$.
\begin{theo}
If $S \neq \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$ is a smooth projective surface isogenous to a
product of unmixed type with $p_g(S)=q(S)=0$
and
with minimal realisation $S\cong (C_1\times C_2) /G$ then
$G$ is one of the
groups in the following table and the genera of the curves $C_1,\, C_2$ are as listed
in the table. The numbers of components $N$ in $\mathfrak{M}_{(1,8)}$ and
their dimensions are given in the remaining two columns.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$G$ & $|G|$ & $A$ & $B$ & $n(G;A,B)$ & $d(G;A,B)$ \\
\hline
${\mathfrak A}_5$ & $60$ & $[2,5,5]_{20}$ & $[3, 3, 3, 3]_{3}$ & 1 & 1\\
${\mathfrak A}_5$ & $60$ & $[5,5,5]_{ 5 }$ & $[ 2, 2, 2, 3]_{12}$ & 1& 1\\
${\mathfrak A}_5$ & $60$ & $[3,3,5]_{15}$ & $[2,2,2,2]_{4}$ & 1& 1\\
${\mathfrak S}_4\times {\rm Z}_2$ & $48$ & $[2,4,6]_{24}$ & $[2,2,2,2,2,2]_2$ & 1 &
3\\ G($32$) & $32$ & $[ 2,2,4,4]_4$ & $[2,2,2,4]_8$ & 1 & 2\\
${\rm Z}_5^2$ & $25$ & $[5,5,5]_5$ & $[5,5,5]_5$ & 2 & 0 \\
${\mathfrak S}_4$ & $24$ & $[ 3, 4, 4]_{12}$ & $[2,2,2,2,2,2]_2$ & 1& 3\\
G($16$) & $16$ & $[ 2, 2, 4, 4]_4$ & $[2,2,4,4]_4$ & 1& 2\\
${\rm D}_4\times{\rm Z}_2$ & $16$ & $[2,2,2,4]_8$ & $[2,2,2,2,2,2]_2$ & 1& 4\\
${\rm Z}_2^4$ & $16$ & $[2,2,2,2,2]_4$ & $[2,2,2,2,2]_4$ & 1 & 4 \\
${\rm Z}_3^2$ & $9$ & $[ 3, 3, 3, 3]_3$ & $[3,3,3,3]_3$ & 1 & 2 \\
${\rm Z}_2^3$ & $8$ & $[2,2,2,2,2]_4$ & $[2,2,2,2,2,2]_2$ & 1 & 5 \\
\hline
\end{tabular}
\end{center}
\end{theo}
\medskip
The case of abelian $G$ was treated in \cite{BaCa}. In \cite{PardDP} four of the non
abelian cases are constructed and for three of these the irreducibility of the
corresponding family is proven.
We now turn to the mixed case.
Let $G$ be a finite group and fix a mixed ramification type $A$. We
denote by $\mathfrak{M}_{(G;A)}$ the subset of $\mathfrak{M}_{(1,8)}$
given
by the isomorphism classes of surfaces isogenous to a product
admitting a mixed
ramification type $A$.
Also here we have
\begin{rem} The set
$\mathfrak{M}_{(G;A)}\subset \mathfrak{M}_{(1,8)}$ consists of a
finite number of connected components
of the same dimension
$d(G;A) = \ell(A) - 3$, which are irreducible in
the Zariski topology.
\end{rem}
\begin{prop}
Let $S,\, S'$ be surfaces isogenous to a product,
of mixed type and with $q(S)=q(S')=0$. Then $S$, $S'$
are in the same irreducible component if and only if $G(S)\cong G(S')$ and
${\cal T}(S)$ and ${\cal T}(S')$ are in the same orbit of
${\bf B}_r\times {\rm Aut}(G)$ where $r=\ell(T)$.
\end{prop}
Hence the number of components $n(G;A)$ of $\mathfrak{M}_{(G;A)}$ is
precisely the number of orbits of ${\bf B}_{\ell(A)} \times {\rm Aut}(G)$
on the set $\mathcal{B}(G;A)$.
We already know from Section \ref{classimi} that ${\rm G}(256,1)$ and
${\rm G}(256,2)$ are the only groups admitting a
mixed ramification structure
of type $A\in {\cal M}$. In fact, they both have such a structure
of type $A=[4,4,4]_{16}$.
We shall now determine the numbers of orbits of
${\bf B}_3\times {\rm Aut}({\rm G}(256,i))$ ($i=1,\, 2$) on the set of
ramification structures.
Let us begin with $G={\rm G}(256,1)$. We have
\begin{prop}\label{prop2561}
\begin{itemize}
\item[{\rm (i)}] The automorphism group
of ${\rm G}(256,1)$ has $12288$ elements, it acts
with $3$ orbits on the set of subgroups of index $2$ in ${\rm G}(256,1)$.
\item[{\rm (ii)}] Representatives for the $3$ orbits are
$H_1:=\langle \, g_1,\, g_3 \,\rangle$ (which is a fixed point for
${\rm Aut}({\rm G}(256,1))$),
$H_2:=\langle \, g_1,\, g_2 \,\rangle$ (which has an orbit of cardinality $3$),
$H_3:=\langle \, g_2,\, g_1g_3 \,\rangle$
(which has an orbit of cardinality $3$).
\item[{\rm (iii)}] The action of ${\bf B}_3\times {\rm Aut}({\rm G}(256,1))$ on
$\mathcal{B}({\rm G}(256,1);[4,4,4]_{16})$ has $3$ orbits (corresponding to
the $3$ orbits of ${\rm Aut}({\rm G}(256,1))$ on
the set of subgroups of index $2$ in ${\rm G}(256,1)$).
\end{itemize}
\end{prop}
\medskip
For the group $G={\rm G}(256,2)$ the picture is different; we find
\begin{prop}
\begin{itemize}
\item[{\rm (i)}] The automorphism group
of ${\rm G}(256,2)$ has $86016$ elements, it acts
transitively on the set of subgroups of index $2$ in ${\rm G}(256,2)$.
\item[{\rm (ii)}] The action of ${\bf B}_3\times {\rm Aut}({\rm G}(256,2))$ on
$\mathcal{B}({\rm G}(256,2);[4,4,4]_{16})$ is transitive.
\end{itemize}
\end{prop}
\medskip
The proof of the above two propositions is done by standard MAGMA routines.
Combining these results we find the following table
of the possible mixed ramification structures on finite groups
of type $A$ with $A\in {\cal M}$.
\begin{theo}
If $S$ is a smooth projective surface isogenous to a
product of mixed type with $p_g(S)=q(S)=0$
and
with minimal realisation $S\cong (C_1\times C_2) /G$, then
$G$ is one of the
groups in the following table and the genera of the curves $C_1,\, C_2$ are as listed
in the table. The numbers $N := n(G;A)$ of components of $\mathfrak{M}_{(G;A)}$
in $\mathfrak{M}_{(1,8)}$ and their dimensions are given in the remaining two columns.
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$G$ & $|G|$ & $A$ & $n(G;A)$ & $d(G;A)$ \\
\hline
$G={\rm G}(256,1)$ & $256$ & $[4,4,4]_{16}$ & 3 & 0\\
$G={\rm G}(256,2)$ & $256$ & $[4,4,4]_{16}$ & 1 & 0\\
\hline
\end{tabular}
\end{center}
\end{theo}
\section{Concrete models}\label{concrete}
In this section we want to give explicit descriptions of the groups
and spherical systems of generators occurring in the nonabelian case
(the abelian case is fully classified and described in \cite{BaCa}).
Some of these nonabelian examples were already described in \cite{BaCa},
but we thought it would be worthwhile to give a complete list.
\subsection{ $G = \mathfrak A_5$ }
The unmixed ramification structure of type
$$ ([ 3,3,3,3 ] , [2,5,5]) $$
is given by the following elements of $ \mathfrak A_5$:
$$ ([(1,2,3),(3,4,5), (4,3,2), (2,1,5)],[(2,4)(3,5), (2,1,3,4,5), (1,2,3,4,5)]).$$
The unmixed ramification structure of type
$$ ([ 5,5,5 ] , [2,2,2,3]) $$
is given by the following elements of $ \mathfrak A_5$:
$$ ([(1,2,5,3,4), (1,2,4,5,3),(1,2,3,4,5) ], [(1,2)(3,4), (2,4)(3,5), (1,4)(3,5),
(2,3,4)]).$$
The unmixed ramification structure of type
$$ ([ 2,2,2,2,2 ] , [3,3,5]) $$
is given by the following elements of $ \mathfrak A_5$:
$$ ([(1,2)(3,4), (1,3)(2,4), (1,4) (2,3), (1,4)(2,5), (1,4)(2,5)], [(1,2,3), (3,4,5) ,
(5,4,3,2,1)]).$$
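Such structures are easily verified by machine. The following sketch (our
code, in Python with sympy rather than MAGMA) checks the type-$[3,3,3,3]$
system above; the cycles are rewritten $0$-based, and the spherical product
relation is tested in both of the standard composition conventions:
\begin{verbatim}
from sympy.combinatorics import Permutation, PermutationGroup

g = [Permutation([[0, 1, 2]], size=5),   # (1,2,3), written 0-based
     Permutation([[2, 3, 4]], size=5),   # (3,4,5)
     Permutation([[3, 2, 1]], size=5),   # (4,3,2)
     Permutation([[1, 0, 4]], size=5)]   # (2,1,5)

assert all(x.order() == 3 for x in g)            # type [3,3,3,3]
assert PermutationGroup(g).order() == 60         # generates A_5
assert (g[0]*g[1]*g[2]*g[3]).is_Identity \
    or (g[3]*g[2]*g[1]*g[0]).is_Identity         # spherical relation
\end{verbatim}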
\subsection{ $G = \mathfrak D_4 \times {\rm Z}_2$ }
As is customary, we write $\mathfrak D_4$ as the group generated by elements $x, y$
satisfying the relations $ x^4 = y^2 = 1, yxy = x^{-1}.$
Then there is exactly one class of unmixed ramification structures, of type
$$ ([ 2,2,2,2,2 ,2] , [2,2,2,4]) $$
given by the following elements of $ \mathfrak D_4 \times {\rm Z}_2$
$$ ([(y,0), (yx,1), (y x^2, 0), (yx,1), (x^2,1), (x^2,1)], [(1,1), (y,1), (xy,0), (x,0)]). $$
\subsection{ $G = \mathfrak S_4 $ }
There is exactly one class of unmixed ramification structures, of type
$$ ([ 2,2,2,2,2 ,2] , [3,4,4]) $$
given by the following elements of $ \mathfrak S_4 $:
$$([(1,2), (1,2) , (2,3), (2,3), (3,4), (3,4)], [(1,2,3), (1,2,3,4), (1,2,4,3)]). $$
Note that this generating system is contained in the arXiv version of \cite{BaCa};
for technical reasons it was not possible to correct the
printed version in time.
\subsection{ $G = \mathfrak S_4 \times {\rm Z}_2$ }
There is exactly one class of unmixed ramification structures, of type
$$ ([ 2,4,6] , [2,2,2,2,2,2]) $$
given by the following elements of $ \mathfrak S_4 \times {\rm Z}_2$:
$$([[(1,2),0], [ (1,2,3,4),1], [ (4,3,2),1]] ,$$
$$ [[(1,2) (3,4), 1], [ (1,2),1], [(3,4),1],
[(2,3) (1,4), 1], [ (2,3),1], [(1,4),1]]). $$
\subsection{ $G = G(16)$.}
We use here the following realization of $G : = {\bf G(16)}$
as a semidirect product
$$ ({\rm Z}_4 \times {\rm Z}_2)\mathbin {\hbox {\msbm \char'157}}_{\Phi} {\rm Z}_2$$
generated by $x,y,z$, with centre $C \cong {\rm Z}_2 \times {\rm Z}_2$
generated by $x^2, y$, and such that
$$ zxz = xy. $$
There is exactly one class of unmixed ramification structures, of type
$$ ([ 2,2,4,4] , [2,2,4,4]) $$
given by the following elements of ${\bf G(16)}$:
$$ ([z , z, x, x^{-1}], [z x^2 y,z x^2 y, xyz , (xyz)^{-1}]) .$$
\subsection{ $G = G(32), G(256,1), G(256,2).$ }
We now construct concrete models for the finite groups like
$G(256,1)$ which make computations by hand simple. We start off by giving a
general construction principle for metabelian groups. A group is called
metabelian if it contains an abelian normal subgroup with abelian quotient.
Let now $N,\, Q$ be two abelian groups written additively. Let
\begin{equation}
\Phi: Q\to {\rm Aut}(N), \qquad \Phi: q\mapsto \Phi_q \ \ (q\in Q)
\end{equation}
be a homomorphism from $Q$ to the automorphism group of $N$. Further let
\begin{equation}
\Theta : Q\times Q\to N
\end{equation}
be a bilinear map. We define a multiplication on the set $N\times Q$ by
setting
\begin{equation}
(n_1,q_1)\cdot (n_2,q_2):=
\left(n_1+\Phi_{q_1}(n_2)+\Theta(q_1,q_2),q_1+q_2\right)
\end{equation}
for $n_1,\, n_2\in N$ and $q_1,\, q_2\in Q$. We obtain a group structure iff
$$\Phi_{q_1}(\Theta(q_2,q_3))=\Theta(q_2,q_3)$$
holds for all $q_1,\, q_2,\, q_3\in Q$. The resulting group is denoted by
\begin{equation}
N\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta} Q.
\end{equation}
There is the obvious exact sequence
$$\langle 0\rangle\to N\to N\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta} Q\to Q\to \langle 0\rangle$$
hence $ N\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta} Q$ is metabelian. Conversely, every metabelian
group arises in this way. If $\Theta: Q\times Q\to N$ is the zero map then
$N\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta} Q$ is a semidirect product of $N$ and $Q$ which we
denote by $N\mathbin {\hbox {\msbm \char'157}}_{\Phi} Q$.
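To make this construction concrete, here is a minimal numerical model over
${\hbox{\ensuremath{\mathbb{F}}}}_2$ (our code, not part of the classification
itself) for $N={\rm Z}_2^n$, $Q={\rm Z}_2^m$, with $\Phi$ given by the
matrices $\Phi_{e_i}$ and $\Theta$ by its values on basis pairs, exactly as in
the data listed below; the $N$-parts are numpy vectors, and for a semidirect
product one takes $\Theta=0$:
\begin{verbatim}
import numpy as np

def phi(Phi, q):
    # Phi_q as the product of the generator images selected by q
    # (these commute, Phi being a homomorphism from the abelian Q)
    M = np.eye(Phi[0].shape[0], dtype=int)
    for i, qi in enumerate(q):
        if qi % 2:
            M = (M @ Phi[i]) % 2
    return M

def theta(Theta, n, q1, q2):
    # bilinear extension of Theta from its values on basis pairs
    v = np.zeros(n, dtype=int)
    for (i, j), w in Theta.items():
        v = (v + q1[i] * q2[j] * np.asarray(w)) % 2
    return v

def mul(Phi, Theta, x, y):
    # the multiplication law (n1,q1).(n2,q2) defined above
    (n1, q1), (n2, q2) = x, y
    n = (n1 + phi(Phi, q1) @ n2 + theta(Theta, len(n1), q1, q2)) % 2
    return n, tuple((a + b) % 2 for a, b in zip(q1, q2))
\end{verbatim}
With the matrix $\Phi_1$ given below for ${\rm G}(32)$ and $\Theta=0$ one
checks, for instance, that the first entry of the tuple $T_1$ squares to the
identity, as its type requires.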
We shall now describe the models for the remaining finite groups from Sections
\ref{classiumi} and \ref{classimi}.
\bigskip
\centerline{\bf G(32):}
\medskip
This group has nilpotency class $2$ and is a semidirect product of ${\rm Z}_2^4$ by
${\rm Z}_2$. The homomorphism
$\Phi :{\rm Z}_2\to {\rm Aut}({\rm Z}_2^4)={\rm GL}(4,{\hbox{\ensuremath{\mathbb{F}}}}_2)$ can be
given by the image of the generator, a single unipotent matrix of order $2$. We set
$$
\Phi_1:=\left(
\begin{matrix}
1 & 0 & 0 & 0\cr
0 & 1 & 0 & 0\cr
1 & 0 & 1 & 0\cr
0 & 1 & 0 & 1
\end{matrix}\right)
$$
From the presentation (\ref{gr32}) it can be seen that the
resulting group ${\rm Z}_2^4\mathbin {\hbox {\msbm \char'157}}_{\Phi} {\rm Z}_2$ is isomorphic to
${\rm G}(32)$.
An unmixed ramification structure ${\cal T}=(T_1,T_2)$ of type
$([2,2,2,4]_8,[2,2,4,4]_4)$
on
${\rm Z}_2^4\mathbin {\hbox {\msbm \char'157}}_{\Phi} {\rm Z}_2$
is given by
$$T_1=[((0,0,1,1)^t,1),\ ((1,1,1,1)^t,0),\
((1,0,1,1)^t,0),\ ((0,1,1,1)^t,1)],$$
$$T_2=[((1,1,1,0)^t,0),\ ((1,0,0,0)^t,0),\
((1,1,1,0)^t,1),\ ((1,0,1,0)^t,1)].$$
\bigskip
\centerline{\bf G(256,1):}
\medskip
This group has nilpotency class $3$. But fortunately for us every group with
this property is metabelian. The group ${\rm G}(256,1)$ is
of the form ${\rm Z}_2^5\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta}{\rm Z}_2^3$. We shall first describe
the maps $\Phi$ and $\Theta$.
Let $e_1,\, e_2,\, e_3$ be the standard basis of ${\rm Z}_2^3$. The homomorphism
$\Phi :{\rm Z}_2^3\to {\rm Aut}({\rm Z}_2^5)={\rm GL}(5,{\hbox{\ensuremath{\mathbb{F}}}}_2)$ can be
given by its values on $e_1,\, e_2,\, e_3$. We set
$$
\Phi_{e_1}:=\left(
\begin{matrix}
1 & 0 & 0 & 0 & 0\cr
0 & 1 & 0 & 0 & 0\cr
0 & 0 & 1 & 0 & 0\cr
1 & 1 & 0 & 1 & 0\cr
0 & 1 & 1 & 0 & 1
\end{matrix}\right),
\quad
\Phi_{e_2}:=\left(
\begin{matrix}
1 & 0 & 0 & 0 & 0\cr
0 & 1 & 0 & 0 & 0\cr
0 & 0 & 1 & 0 & 0\cr
0 & 0 & 1 & 1 & 0\cr
1 & 1 & 0 & 0 & 1
\end{matrix}\right),
\quad
\Phi_{e_3}:=\left(
\begin{matrix}
1 & 0 & 0 & 0 & 0\cr
0 & 1 & 0 & 0 & 0\cr
0 & 0 & 1 & 0 & 0\cr
0 & 1 & 0 & 1 & 0\cr
0 & 0 & 1 & 0 & 1
\end{matrix}\right).
$$
To give the bilinear map $\Theta$ we set
$$\Theta(e_1,e_1):=(1,1,1,0,0)^t,\ \Theta(e_2,e_2):=(1,1,0,0,0)^t,\
\Theta(e_3,e_3):=(1,0,0,0,0)^t,$$
$$\Theta(e_2,e_1):=(1,0,0,1,1)^t,\
\Theta(e_3,e_1):=(0,1,0,0,1)^t,\
\Theta(e_3,e_2):=(0,0,1,1,1)^t$$
with the convention that the $\Theta(e_i,e_j)$ which are not mentioned are
equal to $0$.
From the presentation (\ref{gr2561}) it can be seen that the
resulting group ${\rm Z}_2^5\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta} {\rm Z}_2^3$ is isomorphic to
${\rm G}(256,1)$.
Here are three mixed ramification structures of type
$[4,4,4]_{16}$
on
${\rm Z}_2^5\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta} {\rm Z}_2^3$:
$$T_1:=[((0,0,1,0,1)^t,e_3),((1,1,0,0,0)^t,e_1),((1,0,0,1,0)^t,e_1+e_3)],$$
$$T_2:=[((0,0,0,0,1)^t,e_2),((1,0,0,1,0)^t,e_1+e_2),((0,0,1,0,1)^t,e_1)],$$
$$T_3:=[((0,0,0,1,0)^t,e_2),((1,1,0,1,0)^t,e_1+e_2+e_3),
((1,0,1,1,0)^t,e_1+e_3)].$$
This is to say: the entries of each $T_i$ generate a
subgroup $H$ of index $2$ in $G$ and the compatibility conditions of
Definition \ref{defimi} are satisfied. Moreover $T_1,\, T_2,\, T_3$ represent
the three orbits appearing in Proposition \ref{prop2561}.
\bigskip
\centerline{\bf G(256,2):}
\medskip
This group has nilpotency class $3$ and is
of the form ${\rm Z}_2^5\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta}{\rm Z}_2^3$. We shall describe
the maps $\Phi$ and $\Theta$.
Let $e_1,\, e_2,\, e_3$ be the standard basis of ${\rm Z}_2^3$. The homomorphism
$\Phi :{\rm Z}_2^3\to {\rm Aut}({\rm Z}_2^5)={\rm GL}(5,{\hbox{\ensuremath{\mathbb{F}}}}_2)$ can be
given by its values on $e_1,\, e_2,\, e_3$. We set
$$
\Phi_{e_1}:=\left(
\begin{matrix}
1 & 0 & 0 & 0 & 0\cr
0 & 1 & 0 & 0 & 0\cr
0 & 0 & 1 & 0 & 0\cr
1 & 1 & 0 & 1 & 0\cr
0 & 1 & 1 & 0 & 1
\end{matrix}\right),
\quad
\Phi_{e_2}:=\left(
\begin{matrix}
1 & 0 & 0 & 0 & 0\cr
0 & 1 & 0 & 0 & 0\cr
0 & 0 & 1 & 0 & 0\cr
0 & 0 & 1 & 1 & 0\cr
1 & 1 & 0 & 0 & 1
\end{matrix}\right),
\quad
\Phi_{e_3}:=\left(
\begin{matrix}
1 & 0 & 0 & 0 & 0\cr
0 & 1 & 0 & 0 & 0\cr
0 & 0 & 1 & 0 & 0\cr
0 & 1 & 0 & 1 & 0\cr
0 & 0 & 1 & 0 & 1
\end{matrix}\right).
$$
To give the bilinear map $\Theta$ we set
$$\Theta(e_1,e_1):=(1,1,1,1,0)^t,\ \Theta(e_2,e_2):=(1,1,0,0,0)^t,\
\Theta(e_3,e_3):=(1,0,0,0,0)^t,$$
$$\Theta(e_2,e_1):=(1,0,0,1,1)^t,\
\Theta(e_3,e_1):=(0,1,0,0,1)^t,\
\Theta(e_3,e_2):=(0,0,1,1,1)^t$$
with the convention that the $\Theta(e_i,e_j)$ which are not mentioned are
equal to $0$.
From the presentation (\ref{gr2562}) it can be seen that the
resulting group ${\rm Z}_2^5\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta} {\rm Z}_2^3$ is isomorphic to
${\rm G}(256,2)$.
A mixed ramification structure of type
$[4,4,4]_{16}$
on
${\rm Z}_2^5\mathbin {\hbox {\msbm \char'157}}_{\Phi,\Theta} {\rm Z}_2^3$
is given by
$$[((0,1,0,1,0)^t,e_3),((0,0,1,0,1)^t,e_2+e_3),((0,0,0,0,0)^t,e_2)].$$
The conventions are the same as in the example G(256,1).
|
2,869,038,155,921 | arxiv |
\bibliographystyle{IEEEtran}
\section{Introduction}
Autonomous robot navigation is a fundamental aspect of robotics, especially for service robots, logistic robots, and autonomous vehicles \cite{fragapane2020increasing}. While navigation in a known static environment is a well-studied problem, navigation in a dynamically changing environment remains a challenging task for many mobile robot applications. It requires the robot to efficiently generate safe actions in proximity to unpredictably moving obstacles in order to avoid collisions. Traditional model-based motion planning approaches employ hand-engineered safety rules to avoid dynamic obstacles. However, hand-designing the navigation behavior in dense environments is difficult since the future motion of the obstacles is unpredictable \cite{qian2010socially}. Deep Reinforcement Learning (DRL) has emerged as a learning-based method that can learn policies by interacting with the environment on a trial-and-error basis. Due to its ability to learn nonlinear patterns, DRL is employed to enable the robot to learn complex behavior rules and environment semantics and to add common-sense reasoning to navigation policies. However, DRL tends to be extremely sample-inefficient and highly specialized to the system it was trained on \cite{recht2019tour}. Moreover, due to the myopic nature of DRL, a variety of literature incorporates DRL into robot navigation systems only as a local planner, where the sample space is smaller and the planning horizon is restricted locally \cite{dugas2020navrep}.
To tackle these limitations of DRL and leverage its advantages for navigation in long-range dynamic environments, long-term global guidance such as the provision of a subgoal is required to relieve its myopic nature. However, real-time generation of a globally consistent path in long-range navigation tends to be computationally expensive. Moreover, most global planning approaches are based on static environment assumptions \cite{demyen2006efficient} \cite{harabor2019regarding} \cite{moon2014kinodynamic}. In a dynamic environment, unlike in a static one, the global path planned at an arbitrary point in time can become invalid a moment later due to moving obstacles.
On that account, we direct our attention to the challenges associated with navigation in long-range, dynamic environments and the idea of effective and efficient exploration of the state-time space of these environments, using traditional model-based methods to provide global guidance to a DRL-based local planner in real-time.
The main contributions are as follows:
\begin{itemize}
\item Proposal of a global planner based on Hybrid A-Star, which generates landmark waypoints as a sparse representation of the global path.
\item Proposal of an efficient trajectory planning method for the mid planner, which integrates a front-end search algorithm based on a timed Delaunay triangle graph for planning a near-optimal initial path and a back-end ESDF-free gradient-based trajectory optimization algorithm. The trajectory takes moving obstacles into consideration and provides mid-term global guidance (a subgoal) in a highly dynamic environment.
\item Evaluation of the navigation system against two baseline approaches in highly dynamic environments in terms of safety, robustness, and efficiency.
\end{itemize}
The paper is structured as follows: Sec. II begins with related works, followed by the methodology in Sec. III. Subsequently, the results and evaluations are presented in Sec IV. Finally, Sec. V provides a conclusion and outlook.
We made our code open-source at https://github.com/ignc-research/arena-fsm-ego-planner.
\section{Related Works}
Among the most common traditional obstacle avoidance (OA) approaches are reactive methods such as velocity obstacles \cite{van2008reciprocal}, \cite{van2011reciprocal}, artificial potential fields \cite{park2001obstacle}, \cite{sun2017collision} or vector field histograms \cite{borenstein1989real}. However, these methods are computationally expensive and cannot cope well with fast-moving obstacles. Other traditional obstacle avoidance approaches include model predictive control (MPC) \cite{rosmann2019time}, timed elastic bands (TEB) \cite{rosmann2015timed} or the dynamic window approach (DWA) \cite{fox1997dynamic}, which repeatedly solve optimization control problems.
DRL-based navigation approaches have proved to be a promising alternative that has been successfully applied in various robotic applications with remarkable results. Various works demonstrated the superiority of DRL-based OA approaches due to more flexibility in the handling of obstacles, generalization to new problem instances, and the ability to learn more complex tasks without manually designing the functionality \cite{faust2018prm}, \cite{everett2018motion}, \cite{chen2019crowd}.
However, since the reward that a DRL agent can obtain in long-range navigation over large-scale maps is usually sparse, agents are only suitable for short-range navigation due to local minima. Thus, a variety of research works combine DRL-based local planning with traditional methods through the use of waypoints to provide the DRL approach with short-range goals on a global path generated by a traditional global planner such as RRT \cite{lavalle1998rapidly} or A-Star \cite{hart1968formal}.
G\"uldenring et al. \cite{guldenringlearning} first integrated a DRL-based local planner with a conventional global planner from the ubiquitously used robot operating system (ROS) and demonstrated promising results. The researchers employ a subsampling of the global path to create waypoints for the DRL local planner.
Similarly, Regier et al. \cite{regier2020deep} propose a hand-designed sub-sampling to deploy a DRL-based local planner with conventional navigation stacks.
A limitation of these works is that the simple sub-sampling of the global path is inflexible and could lead to hindrances in complex situations, e.g. when multiple humans are blocking the way. Other works employed more intelligent ways to generate waypoints.
Brito et al. \cite{brito2021go} proposed a DRL-based waypoint generation approach where the agent is trained to learn a cost-to-go model and directly generate subgoals, which an MPC planner follows. The improved cost-to-go estimate enables the MPC to solve for a long-term optimal trajectory. Similarly, Bansal et al. \cite{bansal2020combining} proposed a method called LB-WayPtNav, in which a supervised learning-based perception module processes RGB image data and outputs a waypoint. Given the waypoint and the robot's current state, a spline-based smooth trajectory is generated and tracked by a traditional model-based, linear feedback controller to navigate to the waypoint. However, training DRL-based agents is complex and not always intuitive, requiring many prior assumptions and restrictions, while supervised training requires a tedious data acquisition phase to provide annotated training data.
In this work, we follow a similar hierarchical approach but focus on employing a model-based waypoint generator combined with a DRL-based local planner. This way, we leverage the superior performance of DRL-based approaches for the obstacle avoidance problem.
Inspired by the recent work of Zhou et al. \cite{zhou2020ego}, which efficiently searches the state-time space for an optimal trajectory considering fast-moving obstacles, our waypoint generator is able to provide meaningful waypoints without prior training while taking fast-moving obstacles into account.
\section{Conclusion}
In this paper, we proposed a hierarchical navigation approach combining model-based optimization for the mid planner with a DRL-based local planner for long-range navigation in highly dynamic environments. Our approach efficiently searches the state-time space, utilizing a timed Delaunay triangle graph to encode dynamic obstacles, and generates a collision-free trajectory using a modified timed A-Star approach. The trajectory is further refined by the EGO optimizer, and an optimized subgoal is generated. The resulting waypoint from the mid planner incorporates both global information and the dynamic obstacles within the sensor range.
Subsequently, we evaluated our approach against two baseline approaches and found an enhanced performance in terms of navigational safety and robustness. Future work includes the incorporation of semantic information into the navigation system, the combination with other local planners, as well as DRL-based training of the joint system. Furthermore, we aspire to evaluate the approach in real environments and on robotic hardware.
\section{Results and Evaluation}
To evaluate our space-timed waypoint generator, we compared it against two baseline approaches. The first one is a simple subsampling of the global plan, which we denote as SUB-WP. The second one is an approach of our previous work \cite{kastner2021towards}, which spawns a waypoint depending on the robot's position. It is denoted as SH-WP. The approach presented in this paper is denoted as ST-WP. As a local planner, CADRL \cite{everett2018motion}, a DRL-based obstacle avoidance approach is utilized to follow subgoals computed by the waypoint generators.
The experiments were conducted on an office map with static obstacles and an increasing number of dynamic obstacles. We assumed the static obstacles to be known to the robot through the given global map. The dynamic obstacles were unknown to the robot and could only be sensed locally within the sensor range, which is 3.5 $m$ in our robot setup. The obstacles move back and forth between two positions with predefined path lengths. The obstacles are triggered only when the robot is nearby. Thus, the robot may encounter obstacles that suddenly appear in front of it, which adds difficulty to the obstacle avoidance task. If the distance between the robot and any static or dynamic obstacle is less than 0.35 $m$, a collision is published and counted. Each test trial continues until the robot reaches the goal position or a timeout of 3 min is exceeded. When the robot has reached the goal, the scenario is reset by the task generator. We define a test trial as successful if the robot reaches the goal within the time limit and with less than 2 collisions.
\subsection{Qualitative Results}
Figure \ref{quali} illustrates the robot's trajectories and collision zones for 20 obstacles over different obstacle velocities. As observed, the higher the obstacle speed, the higher the collision rates for all approaches. However, SH-WP and SUB-WP result in a significantly higher number of collisions compared to ST-WP. Furthermore, it is observed that ST-WP maintains a robust and efficient path to the goal, whereas the other two approaches contain roundabout paths, especially starting from the position [x: 11, y: 11]. Even with increasing obstacle velocity, the trajectories of our proposed ST-WP approach maintain their robustness. This is due to the fact that the approach considers the obstacles' positions and velocities at every time step and can thus provide safer waypoints. The following quantitative evaluation further outlines these findings.
\subsection{Quantitative Results}
The quantitative results of the robot performance with the different subgoal modes on the office map over different obstacle velocities and a fixed number of obstacles ($N_{obs}=20$) are shown in Fig. \ref{quali}.
For the scenario with 20 obstacles and a velocity of 0.3 m/s, our ST-WP approach accomplishes a 100 percent success rate, compared to over 60 percent for SH-WP and over 35 percent for SUB-WP. For a velocity of 0.5 m/s, the success rates of SH-WP and SUB-WP drop to over 50 and over 10 percent, respectively, while our ST-WP maintains a 100 percent success rate. However, for an obstacle velocity of 1 m/s, the success rate drops rapidly for all approaches to under 15 percent. Our proposed ST-WP achieves the lowest collision rates with only 11 collisions for 0.3 m/s and under 10 for 0.5 m/s, while the other two approaches reach over 25 collisions. These results demonstrate the superiority of our proposed waypoint generator in terms of safety and robustness compared to the two baseline approaches.
In terms of efficiency, all approaches perform similarly with a slightly more efficient performance by our ST-WP approach.
The subsampling method proves to be the worst in terms of efficiency because the robot has to traverse all subgoals along the initial global trajectory before reaching the final goal. If subsampled waypoints are blocked by dynamic obstacles, the robot needs to take extra effort to reach that waypoint before approaching the next one.
\subsection{Discussion}
The results demonstrate the superiority of our ST-WP approach. It causes fewer collisions while being slightly more efficient than the baseline approaches SUB-WP and SH-WP. This is due to the fact that our approach considers obstacle positions and velocities to calculate an efficient path, based on which a subgoal is selected that guides the robot within a highly dynamic environment. Due to the use of efficient planning methods, the calculations can be done quickly. The paths are smoother due to the refinement step incorporating the EGO trajectory optimizer.
Another important aspect is that the safety and efficiency performance of SH-WP and SUB-WP is highly dependent on the quality of the global path because it is the only source for calculating the waypoints. Since the global path is not updated until specific conditions are met, the quality of the upcoming part of the path is not ensured, which may lead to worse results in scenarios where the path crosses an obstacle-dense area. Moreover, in a large-scale environment, frequent global replanning consumes much more time. By using the landmark method, the cost of global replanning is reduced.
\section{Methodology}
In this section, we will present the methodology of our proposed framework. Our main objective is to explore and exploit a feasible and sub-optimal trajectory in state-time space effectively and efficiently to provide meaningful waypoints for a DRL-based local planner.
\subsection{System Design}
An overview of the complete system structure is shown in Fig. \ref{system}. The navigation system contains different layers and modules. A hierarchical motion planning framework is adopted to handle complex long-range navigation problems. Three modules are designed for different planning horizons. The global planner is designed to provide a global path to guide the robot and avoid local minima. To this end, a hybrid A-Star search is implemented to generate a kinodynamically feasible global path. The mid planner is designed to generate a collision-free trajectory in the mid horizon that incorporates the dynamic obstacles within the sensor range. The trajectory yields a subgoal/waypoint, which is given as input to the local planner.
Finally, the local planner is designed to plan the local horizon with the given waypoint from the mid planner. In the following section, each module is described in more detail.
\subsection{Environment Representation}
A reasonable representation of the environment is as important as the motion planning algorithms themselves. An appropriate environment representation can improve the effectiveness and efficiency of these processes and thus speed up the motion planning process. In this work, we make use of three different representations: the occupancy grid map, the Euclidean signed distance field (ESDF) map, and the Delaunay triangle graph, which is modified to cope with dynamic obstacles.
\subsubsection{Occupancy Grid Map}
Typically, planning approaches use occupancy grid maps that indicate whether a pixel is occupied, not occupied, or unknown. Although this representation is efficient and simple, it does not contain enough information about obstacles that could be of relevance. Thus, we additionally rely on other map representations that provide more information about the obstacles.
\subsubsection{ESDF Maps}
The ESDF map gives the minimum distance of each cell to the obstacles around and thus provides relevant information for collision checking approaches.
It is built based on the Fast Incremental Euclidean Distance Fields approach for Online Motion Planning of Aerial Robots (FIESTA) \cite{han2019fiesta}, which incrementally builds an ESDF map from the occupancy grid directly.
In this work, we utilize the ESDF map for global path optimization and update it once at the initialization of the system.
\subsubsection{Delaunay Triangulation}
The Delaunay triangle graph is a sparse topological map and is used to represent topological relations between dynamic obstacles, which is the key to speeding up the exploration process in the dynamic state-time space.
With the observation that dynamic obstacles such as other robot agents and pedestrians are usually spread sparsely in the environment, with each of them occupying only a small area in the space, we can model the dynamic obstacles with a rather sparse map and find a collision-free trajectory through the gaps between the obstacles. Delaunay triangulation is an intuitive representation of the dynamic environment, in which the positions of the dynamic obstacles can be seen as the vertices of the triangles.
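As a minimal illustration (our sketch, not the implementation used in this
work; the obstacle positions are placeholders), the triangulation itself is
readily obtained with SciPy:
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

obstacles = np.random.rand(8, 2) * 10.0            # placeholder positions
points = np.vstack([obstacles,
                    [[0.0, 0.0], [10.0, 10.0]]])   # plus start and goal
tri = Delaunay(points)
print(tri.simplices)   # triangles as index triples into `points`
\end{verbatim}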
\subsection{Global Planner}
Our proposed global planner module consists of two phases: first, a collision-free path is calculated using a hybrid A-Star search. Second, a landmark generation module computes suitable points along the global path.
\subsubsection{Hybrid A-Star search}
The global planner incorporates a hybrid A-Star search to generate a kinodynamically feasible global path given an occupancy and an ESDF map. Hybrid A-Star extends the classic A-Star search by searching directly in the state space for a collision-free and kinodynamically feasible trajectory while minimizing the time duration and control cost. The key difference from standard A-Star is that the edges connecting two nodes are not straight-line segments but motion primitives, which are continuous local trajectories integrated from samples in the action space. To limit the growth of the search graph in the exploration space, a discretized grid is used, while nodes can reach any continuous point on the grid. In this paper, a simplified Hybrid A-Star is used: in order to accelerate the global planning process, we simplify the original Hybrid A-Star in two aspects: the trajectory dimension is reduced to 2D (x-axis and y-axis) and the kinematic model is reduced to a double integrator.
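The following sketch shows such a motion primitive for the double-integrator
model (our code; the sampling of the control input $u$ and the duration $\tau$
is an assumption, not taken from the text above):
\begin{verbatim}
import numpy as np

def primitive(p, v, u, tau, steps=10):
    # integrate a constant acceleration u over [0, tau] from state (p, v)
    ts = np.linspace(0.0, tau, steps)
    ps = p + np.outer(ts, v) + 0.5 * np.outer(ts ** 2, u)
    return ps, v + tau * u   # sampled positions, end velocity
\end{verbatim}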
\subsubsection{Landmark Generation}
Since not all points on the global path are equally important, only a few critical points are used. As a result, the search space can be limited, and only reasonable waypoints are proposed to the mid planner. These points are denoted as 'landmark waypoints', which are defined as critical waypoints that the robot must pass in order to reach the goal, such as turning points at corners. Exemplary landmarks on a calculated global path are illustrated in Fig. \ref{landmarks}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{ieeeconf/content/img/lms.png}
\caption{Exemplary landmark waypoints on a calculated global path}
\label{landmarks}
\end{figure}
These landmark waypoints are a sparse representation of the global path, which is advantageous in dynamic environments. By traversing each landmark waypoint sequentially, the robot is able to reach the goal position without being trapped in local minima. A landmark waypoint is defined as a point at a position $[p_x,p_y]$ where the required steering angle $\delta$ to traverse this point is larger than a threshold value $\delta_{th}$:
\begin{equation}
\delta > \delta_{th}
\label{equ:steering_angle}
\end{equation}
For a differential drive robot, the heading angle $\theta$ of the robot is controlled directly by the difference in wheel velocities. Therefore, the change of the heading angle $\Delta \theta$ within an amount of time is used to reflect the steering angle $\delta$ of the robot. Thus, the criterion for selecting landmark waypoints is transformed into finding the points where the change of the heading angle within a time interval is larger than a threshold value, $\Delta \theta > \Delta \theta_{th}$. Since the global time-parameterized trajectory is obtained by performing the Hybrid A-Star search, the angular velocity trajectory $\omega(t)$ of the heading angle can be computed by the following kinematic relation:
\begin{align}
\mathbf{a}_{\text {norm}}(t)&=\dfrac{\mathbf{v}(t) \times \mathbf{a}(t)}{\|\mathbf{v}(t)\|} \\
\omega(t)&=\dfrac{\left\|\mathbf{a}_{\text {norm }}(t)\right\|}{\|\mathbf{v}(t)\|}
\end{align}
where $\mathbf{v}(t)=\dot{\mathbf{p}}(t)$ is the velocity and $\mathbf{a}(t)=\ddot{\mathbf{p}}(t)$ is the acceleration in the 2D configuration space.
By integrating the angular velocity over the time interval $[t_0, t_i]$, the change of the heading angle $\Delta\theta$ can be calculated using Equ. \ref{equ:delta_theata}
\begin{equation}
\Delta\theta = \int_{t_0}^{t_i} \omega(\tau) d \tau
\label{equ:delta_theata}
\end{equation}
The final algorithm of the landmark generation is formalized in Alg. \ref{alg:landmark} and the resulting global path and landmark waypoints are shown in Fig. \ref{landmarks}.
\begin{algorithm}[]
\caption{Landmark generation($\mathbf{x}_{start},\mathbf{x}_{goal}$)}
\label{alg:landmark}
$L \leftarrow \emptyset$\;
$\boldsymbol{\xi}_{global}(t)$ $\leftarrow$
\textbf{Hybrid A-Star}($\mathbf{x}_{start},\mathbf{x}_{goal}$)\;
$\Delta\theta \leftarrow 0$\;
$l_{prev} \leftarrow \mathbf{p}_{start}$\;
\For{$t \leftarrow t_s$ to $t_e$}{
$\omega(t) \leftarrow \mathbf{ComputeAngularVelocity}(\mathbf{v}(t),\mathbf{a}(t))$\;
$\Delta\theta \leftarrow \Delta\theta + \omega(t)\Delta{t}$\;
\If{$\Delta\theta > \Delta\theta_{th} \wedge \mathbf{ComputeDist}({l_{prev}}, \mathbf{p}(t))> D_{th}$ }{
${L}.\mathbf{add}\left(\mathbf{p}(t)\right)$\;
$l_{prev} \leftarrow \mathbf{p}(t)$\;
$\Delta\theta \leftarrow 0$\;}}
${L}.\mathbf{add}(\mathbf{p}_{goal})$
\end{algorithm}
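A runnable version of Alg. \ref{alg:landmark} for a time-discretized 2D
trajectory might look as follows (our sketch; the threshold values are
illustrative assumptions):
\begin{verbatim}
import numpy as np

def landmarks(p, v, a, dt, dtheta_th=np.pi / 6, d_th=0.5):
    # p, v, a: (T, 2) arrays sampled along the global trajectory
    L, acc, prev = [], 0.0, p[0]
    for t in range(len(p)):
        speed = np.linalg.norm(v[t]) + 1e-9            # guard v = 0
        cross = v[t, 0] * a[t, 1] - v[t, 1] * a[t, 0]  # ||v x a|| in 2D
        acc += abs(cross) / speed ** 2 * dt            # omega(t) * dt
        if acc > dtheta_th and np.linalg.norm(p[t] - prev) > d_th:
            L.append(p[t]); prev, acc = p[t], 0.0
    L.append(p[-1])                                    # goal position
    return np.array(L)
\end{verbatim}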
\subsection{Mid Planner}
The mid planner consists of two modules, which we denote as front-end and back-end. The front-end takes as input the computed landmarks from the previously presented global planner and computes a trajectory considering dynamic obstacles based on Delaunay triangulation.
The back-end incorporates an ESDF-free gradient-based local trajectory optimizer (EGO) from \cite{zhou2020ego} et al. to optimize the trajectory to a collision-free dynamic feasible trajectory in real-time. The subgoal is selected on the final optimized trajectory.
In the following, each part is described in more detail.
\subsubsection{Frontend - State-Timed A-Star-search}
For the front-end, a state-time A-Star search based on a timed Delaunay triangle graph is designed to find an initial trajectory considering the goal and obstacle information.
It provides an initial trajectory to the back-end to be further optimized by the EGO trajectory optimizer. The state-time A-Star search is an extension of standard A-Star based on a sparse representation of state-time space called the timed Delaunay triangle graph. The resultant trajectory takes both dynamic obstacles in the local sensor range and the input landmark waypoint as a mid-term goal into consideration, thus it can provide better global guidance to local DRL planners.
\begin{figure}[!h]
\centering
\includegraphics[width=0.28\textwidth]{ieeeconf/content/img/frontend.jpg}
\caption{Exemplary process of the timed A-Star search}
\label{fig:state_time_graph}
\end{figure}
The approach utilizes a Delaunay triangle graph $G[t]$, which encodes dynamic obstacles as vertices of the triangles. Due to the sparsity of the topological representation, the exploration phase can be done efficiently. The uncertainty of future motions of dynamic obstacles is relieved by rapid iterative replanning. Thus, the error caused by constant velocity assumptions in predicting obstacle movements can be reduced. An example of the search process of state-time A-Star is shown in Fig. \ref{fig:state_time_graph}.
At the beginning of each planning cycle, a discretized timed Delaunay triangle graph containing a set of triangle graphs is constructed according to the start state $\mathbf{x}_{start}$, the goal state $\mathbf{x}_{goal}$ and the currently perceived states of the dynamic obstacles $O[t_0]$ within the sensor range. The timed graph G contains a set of Delaunay triangle graphs from time $t_0$ to $t_0+T_h$, where $T_h$ is the planning time horizon. The state-time space exploration is done by generating a simplified trajectory through the edge connecting two states. Then, the samples in the sample set $ST$ are pruned so that only the samples that stay safe from collision within the planning time horizon are considered as neighbors. Subsequently, not only are the \textit{time-to-collision} conditions of each sample checked, but a re-computation of a new safe sample based on the original sample is performed if the original sample is expected to collide with obstacles during the action duration. This sampling mechanism is inspired by RRT and ensures that the algorithm has enough effective samples in the state-time space. After valid neighbor nodes are found, the sample set $ST$ is updated according to the $g$-cost and heuristics. The heuristic calculation is designed to take the time to avoid dynamic obstacles into consideration. Since the sampling algorithm is based on the triangle graph, which contains the goal state (here the landmark waypoint) as a vertex of a triangle, the proposed state-time A-Star method is able to incorporate global information as well as the dynamic local environment within the sensor range simultaneously.
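The constant-velocity \textit{time-to-collision} test used for pruning can be
sketched as follows (our formulation of the standard computation; it is not
spelled out above): for a sample at $p_r$ with velocity $v_r$ and an obstacle
at $p_o$ with velocity $v_o$, the earliest time at which the pair comes within
a safety radius $r$ is the smaller root of a quadratic:
\begin{verbatim}
import numpy as np

def time_to_collision(p_r, v_r, p_o, v_o, r):
    dp, dv = p_o - p_r, v_o - v_r
    a, b, c = dv @ dv, 2.0 * (dp @ dv), dp @ dp - r * r
    if c <= 0:
        return 0.0                 # already within the radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return np.inf              # never gets within the radius
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else np.inf
\end{verbatim}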
\subsubsection{Backend- EGO Optimization}
For the last module in our pipeline, we incorporate an ESDF-free gradient-based trajectory optimizer (EGO) by Zhou et al. \cite{zhou2020ego} as the back-end of the mid planner to optimize the initial trajectory computed by the state-time A-Star search.
\begin{figure}[!h]
\centering
\begin{subfigure}{0.28\textwidth}(a)
\centering
\includegraphics[width=\linewidth]{ieeeconf/content/img/state_time_compare1-c.jpg}
\label{fig:1}
\end{subfigure}\hfil
\begin{subfigure}{0.28\textwidth }(b)
\centering
\includegraphics[width=\linewidth]{ieeeconf/content/img/state_time_compare2-c.jpg}
\label{fig:2}
\end{subfigure}
\caption{Comparison between the initial trajectory (a) and the optimized trajectory (b). The dark grey circle is an unexpected obstacle not considered by the initial state-time A-Star search, the green dashed line is the initial trajectory, the green line is the optimized trajectory.}
\label{fig:optimization_compare}
\end{figure}
Although the dynamics of the robot and obstacles are considered in the state-time A-Star search, the simple constant-velocity assumption for dynamic-obstacle motion means that the generated trajectory is not smooth and cannot ensure the real-time safety of the robot. However, this trajectory is a reasonable initial guess, which already accounts for long-term information from the global landmark waypoint and local dynamic obstacles. The incorporation of the EGO trajectory optimizer should further enhance navigational safety.
The key advantage of the EGO optimizer is that it provides fast real-time optimization. Moreover, unlike other gradient-based trajectory optimization methods, the EGO optimizer can also take an initial trajectory that is in collision and optimize it into a collision-free, dynamically feasible trajectory, as shown in Fig. \ref{fig:optimization_compare}.
The trajectory optimization in the EGO optimizer comprises three steps: parameterization of the initial trajectory as a B-spline, generation of an artificial distance field for collision optimization, and numerical optimization of the trajectory with respect to the given objective functions. For a detailed explanation, we refer to \cite{zhou2020ego}.
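To illustrate the structure of such a gradient-based back-end, the following deliberately simplified Python sketch optimizes trajectory control points under a weighted sum of a smoothness term and a collision penalty. It is a toy stand-in, not EGO itself: EGO uses analytic gradients, an anchor-based (ESDF-free) collision term and L-BFGS, whereas this sketch uses spherical obstacles with a hinge penalty and finite-difference gradient descent; all parameter values are illustrative.
\begin{verbatim}
import numpy as np

def total_cost(Q, obstacles, clearance=0.5, w_s=1.0, w_c=10.0):
    # Q: (N, 2) control points; obstacles: list of (center, radius).
    acc = Q[2:] - 2.0 * Q[1:-1] + Q[:-2]         # discrete acceleration
    J_s = np.sum(acc ** 2)                       # smoothness term
    J_c = 0.0
    for c, r in obstacles:
        d = np.linalg.norm(Q - np.asarray(c), axis=1) - r
        pen = np.clip(clearance - d, 0.0, None)  # active only inside clearance
        J_c += np.sum(pen ** 3)                  # smooth cubic hinge penalty
    return w_s * J_s + w_c * J_c

def optimize(Q0, obstacles, iters=200, lr=5e-3, eps=1e-6):
    Q = Q0.astype(float).copy()
    for _ in range(iters):
        base = total_cost(Q, obstacles)
        grad = np.zeros_like(Q)
        for idx in np.ndindex(*Q.shape):         # finite-difference gradient
            Qp = Q.copy(); Qp[idx] += eps
            grad[idx] = (total_cost(Qp, obstacles) - base) / eps
        grad[0] = grad[-1] = 0.0                 # keep start and goal fixed
        Q -= lr * grad
    return Q
\end{verbatim}
Even a colliding initial guess is pushed out of the obstacle's clearance region by the penalty term, mirroring the behaviour shown in Fig. \ref{fig:optimization_compare}.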
\section{Introduction} \label{sec:Introduction}
Photon-noise-limited detection of millimetre-wave radiation has been demonstrated with a number of cryogenic detectors, such as semiconductor bolometers, transition edge sensors and kinetic inductance detectors\cite{Morozov11,Doyle08}. A bolometer consists of a thermally isolated absorber that converts absorbed radiation into thermal energy, which is detected by means of a sensitive thermometer. The concept of using the weak coupling between electrons and phonons at low temperatures, combined with a normal metal-insulator-superconductor (NIS) tunnel junction thermometer, to make a fast and sensitive hot electron bolometer was first proposed by Nahum, Richards and Mears\cite{Nahum93, Nahum94}. Dual normal metal-insulator-superconductor (SINIS) junctions, coupled to an absorbing metallic island, can be used simultaneously as a microrefrigerator, extracting heat from the electrons, and as a bolometric detector. The wavelengths that the island absorbs can be defined by patterning the superconducting leads into an antenna.
\begin{figure}[ht]
\includegraphics[width = 0.5\columnwidth]{NIS_energyLevels_Bias}
\caption{Energy bands for a biased normal metal-insulator-superconductor NIS structure. In order for electrons to tunnel from the normal metal (left) into the superconductor (right), we require that $eV > \Delta - k_{B}T_{e}$ , where $V$ is the voltage across the structure due to the bias, $T_{e}$ is the electron temperature and $\Delta $ is half the superconducting gap.}
\label{fig:NISenergy}
\end{figure}
Schmidt \textit{et al.}\cite{Schmidt2005} describe how the use of a combined microwave and DC biasing signal, along with frequency domain multiplexing techniques, can be used to realise large imaging arrays (up to $10^{5}$ pixels) of cold electron bolometers.
Detailed calculations of the characteristics of these Cold Electron Bolometers (CEBs) indicate that they should exhibit a combination of fast response speeds ($<1~\mathrm{\upmu s}$) and high sensitivity. Achieving high sensitivities with metal-based Cold Electron Bolometers requires fabrication of submicron metal islands.
Replacing the normal metal with degenerately doped silicon offers reduced electron-phonon coupling compared to standard metals and thus gives higher sensitivity for a given island volume\cite{Leoni99}. It has been proposed\cite{Muhonen2011} that using a strained silicon absorber enables fabrication of detectors with photon-noise-limited sensitivity using standard photolithographic techniques. Initial reports of optical noise equivalent power for metal-based Cold Electron Bolometers have been published in recent years\cite{Otto2013, Tarasov2011}. Most of these measurements have been based on radiation absorbed from a cold blackbody source, which does not allow the spectral response of the detector to be studied. They have also all reported optical noise equivalent powers limited by the readout electronics. Here we present optical measurements of a Strained Silicon Cold Electron Bolometer designed to absorb millimetre-wave radiation; these measurements were taken with the detector looking out of a window in the cryostat, which allowed a number of sources, including a Fourier transform spectrometer, to be observed.
\section{Theory} \label{sec:Theory}
The electrothermal properties of both the normal metal-insulator-superconductor and the symmetric (SINIS) structure have been well studied \cite{Pekola05, Nahum93, Nahum94, Leivo96, Savin01, Pekola04}. FIG.~\ref{fig:NISenergy} shows a typical normal metal-insulator-superconductor structure (shown in the presence of an external bias such that $eV = \Delta$). These devices have been shown\cite{Pekola04} to be able to reduce electron temperature from $300~\mathrm{mK}$ to below $100~\mathrm{mK}$. For a sensitive bolometric detector we would like the absorber (the normal metal in this case) to have as small a volume as possible.
A similar structure, superconductor-semiconductor-superconductor (SSmS), exists where the normal metal is replaced by a doped semiconductor and the insulator replaced by a Schottky contact formed between the semiconductor and the superconductor\cite{Savin01}. These devices have the advantage of decreased electron-phonon coupling compared to the normal metal based type of device\cite{Muhonen2011} and reduced electron density. The current, $I$, flowing through each of the symmetric junctions is given by:
\begin{align}
I &=\frac{1}{eR_{N}}\int_{\Delta}^{\infty} \frac{E}{\sqrt{E^{2}-\Delta^{2}}} \times \left[f\left(E-\nicefrac{eV}{2},T_{e}\right) - \right. \nonumber \\
&\qquad \qquad \qquad \qquad \qquad \qquad \; \left. f\left(E+\nicefrac{eV}{2},T_{e}\right)\right] \mathrm{d}E\, , \label{eq:IV}
\end{align}
where $R_{N}$ is the normal-state resistance due to tunnelling through the insulating barrier, $\Delta$ is half the superconducting bandgap, $V$ is the voltage across the structure and $f\left(E,T\right)$ is the Fermi distribution at temperature $T$. Associated with this current is a flow of heat from the central island which dissipates a power, $P$, within the device of:
\begin{align}
P &= IV + \frac{2}{e^{2}R_{N}}\int_{\Delta}^{\infty} \frac{E^{2}}{\sqrt{E^{2}-\Delta^{2}}} \times \left[2f\left(E,T_{s}\right) - \right. \nonumber \\
&\qquad \qquad \left. f\left(E-\nicefrac{eV}{2},T_{e}\right)- f\left(E+\nicefrac{eV}{2}, T_{e}\right)\right] \mathrm{d}E\, \label{eqn:Pc}.
\end{align}
This power is bias dependent and is negative (cooling) for bias voltages $eV \lesssim 3\Delta$.
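To illustrate EQNs.~\ref{eq:IV} and \ref{eqn:Pc}, the following Python sketch evaluates the tunnelling current and the island power by direct numerical integration. It is illustrative only: the gap is taken from the BCS relation for aluminium with an assumed $T_{c} \approx 1.2~\mathrm{K}$, and the bias and temperatures are example values rather than fitted device parameters.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

e, kB = 1.602e-19, 1.381e-23          # C, J/K
Delta = 1.76 * kB * 1.2               # assumed half-gap of Al (BCS, Tc ~ 1.2 K)
RN = 290.0                            # normal-state resistance, ohms

def fermi(E, T):
    return expit(-E / (kB * T))       # numerically stable Fermi function

def sinis_current(V, Te):             # EQN. (1)
    def integrand(E):
        dos = E / np.sqrt(E**2 - Delta**2)
        return dos * (fermi(E - e*V/2, Te) - fermi(E + e*V/2, Te))
    I, _ = quad(integrand, Delta * (1 + 1e-9), 30 * Delta, limit=200)
    return I / (e * RN)

def island_power(V, Te, Ts):          # EQN. (2)
    def integrand(E):
        dos = E**2 / np.sqrt(E**2 - Delta**2)
        return dos * (2*fermi(E, Ts) - fermi(E - e*V/2, Te)
                      - fermi(E + e*V/2, Te))
    P, _ = quad(integrand, Delta * (1 + 1e-9), 30 * Delta, limit=200)
    return sinis_current(V, Te) * V + 2.0 * P / (e**2 * RN)

V = 2 * Delta / e                     # bias near the cooling optimum
print(sinis_current(V, 0.35), island_power(V, 0.35, 0.35))
\end{verbatim}
Sweeping $V$ in this sketch reproduces the qualitative behaviour described above: the power is negative (cooling) over a range of bias below the gap edge and turns positive at high bias.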
In a Cold Electron Bolometer, when the absorber is heated by incident optical power it is this cooling power, associated with the most energetic charges tunnelling out of the absorber, which removes the heat. Since the cooling (thermal resetting) of the bolometer is carried out directly by electron diffusion (as opposed to the long, weak thermal links required by many of today's most sensitive bolometers \cite{Mauskopf97, Audley12, Holland13}), the thermal time constant associated with the Cold Electron Bolometer is governed by the tunnelling time. This can be \cite{Kuzmin04} as low as $10~\mathrm{ns}$, whereas other types of detector \cite{Jackson12} have response times of the order of $1~\mathrm{ms}$.
In addition to this cooling power, the electrons are also heated or cooled by the weak thermal link to the phonons. This heating term, $P_{e-ph}$, is given by:
\begin{align}
P_{e-ph} &= \Sigma \Omega \left(T^{\beta}_{e} - T^{\beta}_{ph} \right), \label{eq:Pe-ph} \\
\intertext{where $\Sigma$ is a material constant that has been measured\cite{Prest11} to be $2 \times 10^{7}~\mathrm{W\,K^{-6}\,m^{-3}}$; $\Omega$ is the volume of the bolometer's absorber; $T_{ph}$ and $T_{e}$ are the phonon and electron temperatures respectively and the power $\beta$ has been found\cite{Prest11} to be $6$. From this we can define a thermal conductance, $G$, from the phonons to the electrons as:}
G = \frac{\mathrm{d}P}{\mathrm{d}T_{e}} &= \beta \Sigma \Omega T^{\beta-1}_{e}.
\end{align}
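As an illustrative worked example (not a measurement), taking $\Sigma = 2 \times 10^{7}~\mathrm{W\,K^{-6}\,m^{-3}}$, the absorber volume given later in Section~\ref{sec:Device}, $\Omega \approx 38 \times 14 \times 0.03~\mathrm{\upmu m^{3}} \approx 1.6 \times 10^{-17}~\mathrm{m^{3}}$, and $T_{e} = 350~\mathrm{mK}$ gives
\begin{align*}
G = 6 \times \left(2 \times 10^{7}\right) \times \left(1.6 \times 10^{-17}\right) \times 0.35^{5}~\mathrm{W\,K^{-1}} \approx 1 \times 10^{-11}~\mathrm{W\,K^{-1}},
\end{align*}
i.e.\ of order $10~\mathrm{pW\,K^{-1}}$, illustrating how weak the electron-phonon thermal link is at these temperatures.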
The total noise equivalent power (NEP) for the Cold Electron Bolometer is comprised of several terms and has been fully derived by Golubev and Kuzmin (2001)\cite{Golubev01} to be:
\begin{align}
NEP^{2}_{CEB} &= \frac{\left<\delta V^{2}\right>_{\mathrm{amp}}}{S^{2}} + 2\beta k_{B} \Sigma \Omega\left(T_{e}^{\beta+1} + T_{ph}^{\beta+1}\right) \nonumber \\
&\qquad+ \left<\delta P^{2} \right> - 2\frac{\left<\delta P \, \delta I\right>}{\nicefrac{\partial I}{\partial V}S} +\frac{\left<\delta I^{2}\right>}{\left(\nicefrac{\partial I}{\partial V}S\right)^{2}} \, , \label{eq:CEB_NEP}
\end{align}
where $\left<\delta V^{2}\right>_{\mathrm{amp}}$ is the noise of the readout amplifier and $S$ is the responsivity of the detector, which is a function of bias; $\left<\delta P\right>$ is the heat flow noise and $\left<\delta I\right>$ is the current noise. The use of strained silicon reduces the constant $\Sigma$ by a factor of $25$ compared to unstrained silicon\cite{Prest11}, which results in a corresponding improvement in the second term of EQN.~\ref{eq:CEB_NEP} (the phonon noise).
The other dominant limiting factor to the noise equivalent power will be due to the absorption of photons into the strained silicon. This photon noise term is:
\begin{align}
NEP^{2}_{photon} &= 2h\nu P_{opt} + \frac{P_{opt}^{2}}{\delta \nu}, \label{eq:photonNEP}
\end{align}
where $\nu$ and $P_{opt}$ are the frequency and power of the incident radiation respectively and $\delta \nu$ is the optical bandwidth.
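For scale, using the $160~\mathrm{GHz}$ design frequency, the $10.5~\mathrm{pW}$ absorbed power derived later for the $77~\mathrm{K}$ load, and an assumed optical bandwidth of $\delta\nu = 50~\mathrm{GHz}$ (an illustrative value only), EQN.~\ref{eq:photonNEP} gives
\begin{align*}
NEP_{photon} \approx \sqrt{2.2 \times 10^{-33} + 2.2 \times 10^{-33}}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}} \approx 7 \times 10^{-17}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}},
\end{align*}
of the same order as the optical noise equivalent power measured in Section~\ref{sec:Results}.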
\par
\section{Device Design} \label{sec:Device}
One advantage of the silicon-based Cold Electron Bolometer compared to those utilising a metal absorber (SINIS) is that, since the tunnel barrier is formed by a Schottky contact, there is no need to fabricate separate insulating layers. The Strained Silicon Cold Electron Bolometer studied in this work consists of three elements. Firstly, the silicon substrate has an epitaxially grown $2.5~\mathrm{\upmu m}$ thick relaxed SiGe (80 \% silicon) straining layer. On top of the straining layer is a $30~\mathrm{nm}$ thick layer of n\textsuperscript{++} doped silicon ($N_{D} = 4 \times 10^{19}~\mathrm{cm}^{-3}$) etched to form a rectangular mesa with an area of $38~\mathrm{\upmu m} \times 14~\mathrm{\upmu m}$. Finally, the top layer is a $100~\mathrm{nm}$ thick film of e-beam evaporated aluminium. This final layer is patterned to form both the contacts to the doped silicon absorber and a twin-slot antenna. The contacts to the absorber are both $30~\mathrm{\upmu m} \times 5~\mathrm{\upmu m}$ and give a tunnelling resistance of $290~\mathrm{\Omega}$. The twin-slot antenna has been designed to couple $160~\mathrm{GHz}$ radiation to the central absorber, and the coupling was simulated with Ansoft's HFSS software prior to fabrication. The device design is shown in FIG.~\ref{fig:device_design}.
\begin{figure}[ht]
\includegraphics[width = 0.8\columnwidth]{CEB_structure_strained_APL}
\caption{(a) Cross-sectional view of the Cold Electron Bolometer structure. (b) Optical image of a Cold Electron Bolometer. A small island absorber of n\textsuperscript{++} doped silicon ((a) - green, (b) - highlighted green) sits atop a strained SiGe virtual substrate ((a) - light green, (b) - brown); the top layer of aluminium ((a) - blue, (b) - beige) forms both the antenna structure and the contacts to the absorber; the small slots, which can be seen at the edges of the device, allow DC measurement of the cold electron bolometer without affecting the antenna coupling.}\label{fig:device_design}
\end{figure}
\section{Experimental Procedure}\label{sec:exp_procedure}
\begin{figure}[ht]
\includegraphics[width = 0.8\columnwidth]{experimental_setup}
\caption{Experimental setup: radiation is focussed onto the detector chip via a pair of back-to-back horns and a silicon lens. Optical filters placed before and after the horns limit the radiation seen by the detector to frequencies below $300~\mathrm{GHz}$. The detector is biased via a simple voltage generator and biasing resistors. The voltage output of the detector is sent into two JFET-based amplifiers (each with an input-referred noise of $2~\mathrm{\mbox{nV\,Hz}^{\nicefrac{-1}{2}}}$) and the outputs of these are correlated to achieve a final input-referred noise of $300~\mathrm{\mbox{pV\,Hz}^{\nicefrac{-1}{2}}}$.}
\label{fig:exp_setup}
\end{figure}
A schematic of the testing setup is shown in FIG. \ref{fig:exp_setup}. The detector was housed in a liquid helium cryostat and cooled to $350~\mathrm{mK}$ using a helium-3 refrigerator. Radiation, entering through a window in the outer cryostat shield, was fed into a pair of back-to-back horns; the beam from this horn pair was then focussed onto the detector's antenna by a hemispherical silicon lens. This optical coupling scheme was not optimised for high efficiency but was designed to minimise stray light coupling to the device.
The detector was current biased using a differential voltage source and a pair of cold $1~\mathrm{M\Omega}$ biasing resistors. The voltage output of the detector was fed into two matched JFET differential amplifiers, each of which had an input-referred noise of $2~\mathrm{\mbox{nV\,Hz}^{\nicefrac{-1}{2}}}$. The output of each of these amplifiers was then passed to a computer, which cross-correlated the signals in real time, resulting in a final input-referred correlated noise, after averaging, of $300~\mathrm{\mbox{pV\,Hz}^{\nicefrac{-1}{2}}}$ for the readout system. For optical testing we used an Eccosorb load chopped between $300~\mathrm{K}$ and $77~\mathrm{K}$.
\section{Results} \label{sec:Results}
\begin{figure}[ht]
\includegraphics[width = 0.8\columnwidth]{01_IVs_APL}
\caption{IV characteristics and model fit. Solid lines: optical measurements; dashed lines: dark measurements. Red - $77~\mathrm{K}$ source; green - $300~\mathrm{K}$ source; blue - $T_{ph} = 350~\mathrm{mK}$; black - $T_{ph} = 550~\mathrm{mK}$. There is a clear shift of the IV towards the linear as the incident power is increased. Lines - model fit based on fitting $T_{e}$ in EQN. \ref{eq:IV}. Circles - heavily reduced experimental data. Inset - variation in electron temperature with bias; colours as in main figure.}
\label{fig:IV_data_model}
\end{figure}
The Silicon Cold Electron Bolometer has been tested both dark and optically loaded. Dark measurements consist of current-voltage (IV) characterisation at various bath (phonon) temperatures. The optical response of the device to a variable temperature blackbody source has also been measured. FIG.~\ref{fig:IV_data_model} compares the current-voltage relationship for the detector in these various conditions; it can be seen that the optically loaded measurements correspond to higher electron temperature in the device and therefore more linear current-voltage curves compared to the corresponding unloaded measurement. In fact the optically loaded curves are similar to a dark measurement at a much higher phonon temperature.
From the measured voltage at a given current bias, and using EQN.~\ref{eq:IV}, we can calculate the temperature of the electrons. This model, shown as the lines in FIG.~\ref{fig:IV_data_model}, shows that a high-quality fit to the data (open circles) can be achieved in all cases. The electron temperatures found from this fit were $570~\mathrm{mK}$ and $640~\mathrm{mK}$ at zero bias for the $77~\mathrm{K}$ and $300~\mathrm{K}$ illuminations respectively. The increase from the phonon temperature of $350~\mathrm{mK}$ is accounted for by the incident power heating the electrons. At a bias corresponding to a voltage of $\sim 2\Delta$ across the detector, the minimum electron temperatures achieved for the two illumination levels were $350~\mathrm{mK}$ and $500~\mathrm{mK}$. By use of EQN.~\ref{eq:Pe-ph} at zero bias, combined with the dimensions of the absorbing island and the measured value of $\Sigma$ $(2 \times 10^{7}~\mathrm{W\,K^{-6}\,m^{-3}})$, and assuming the electron temperature is significantly greater than that of the phonons, we compute the absorbed power to be $10.5~\mathrm{pW}$ and $21.5~\mathrm{pW}$ for the two load temperatures. We believe there is a contribution of approximately $5~\mathrm{pW}$ from stray light to both of these powers.
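These absorbed-power estimates follow directly from EQN.~\ref{eq:Pe-ph}; the short Python check below, assuming the full mesa volume, reproduces them to within rounding.
\begin{verbatim}
Sigma, beta = 2e7, 6                 # W K^-6 m^-3 and exponent, from the text
Omega = 38e-6 * 14e-6 * 30e-9        # absorber volume: 38 x 14 x 0.03 um
T_ph = 0.35                          # phonon temperature, K

for T_e in (0.57, 0.64):             # zero-bias electron temperatures
    P = Sigma * Omega * (T_e**beta - T_ph**beta)
    print(f"T_e = {T_e} K -> P = {P * 1e12:.1f} pW")
# -> 10.4 pW and 21.3 pW, consistent with the quoted 10.5 pW and 21.5 pW
\end{verbatim}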
\begin{figure}[t]
\includegraphics[width = 0.8\columnwidth]{NEP_ampNoise_APL}
\caption{Noise equivalent power for a SiCEB, as a function of readout frequency, operating at optimum bias ($eV=2\Delta$) with $10.5~\mathrm{pW}$ of absorbed optical power. Left inset - Measured device noise (blue) and amplifier noise limit (red). Right inset - Reduction in amplifier noise with averaging for two JFET amplifiers operating in cross-correlated mode.}
\label{fig:NEP_ampNoise}
\end{figure}
The responsivity of the Cold Electron Bolometer at a particular current bias can be calculated from the change in the voltage when the incident power changes by a known amount. From the calculated absorbed powers for the two illuminations and the resulting voltage changes (seen in FIG.~\ref{fig:IV_data_model}), we calculate the responsivity to have a maximum of $7.9 \times 10^{6}~\mathrm{\mbox{V\,W}^{-1}}$ for the $77~\mathrm{K}$ $(10.5~\mathrm{pW})$ source and $2.8 \times 10^{6}~\mathrm{\mbox{V\,W}^{-1}}$ for the room-temperature $(21.5~\mathrm{pW})$ source. In both cases the maximum responsivity occurs when the voltage across the device is just below $2\Delta$, as expected. FIG.~\ref{fig:NEP_ampNoise} shows the noise equivalent power calculated from these results. For both the $77~\mathrm{K}$ and the $300~\mathrm{K}$ loading this is dominated by photon noise. From FIG.~\ref{fig:NEP_ampNoise} we see that the $77~\mathrm{K}$ noise equivalent power is $1.1 \times 10^{-16}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}}$.
The speed of the detector can be found from the roll-off in the white noise level from the photon noise or from measuring the change in responsivity for a modulated signal as a function of frequency. We attempted to measure this using a coherent $150~\mathrm{GHz}$ tunable source, which could be chopped on and off at frequencies up to $6~\mathrm{kHz}$, but did not see any reduction in the signal; nor did we see any roll-off in the noise power (as seen in FIG.~\ref{fig:NEP_ampNoise}) up to the bandwidth of the readout amplifier $\left(100~\mathrm{kHz}\right)$. From this, we conclude that the time constant of this detector is less than $1~\mathrm{\upmu s}$.
From EQN.~\ref{eq:CEB_NEP} we compute that the limit on the electrical (dark) noise equivalent power from the electron-phonon interaction, for optical loading less than $1~\mathrm{pW}$, is $8.3 \times 10^{-18}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}}$; this compares well to the `dark' noise equivalent power estimations for hot electron bolometer type devices operating at comparable phonon temperatures\cite{Karasik2011}, which share a common noise limit in these circumstances. The current proof-of-concept detector has a very large absorbing element; if this were reduced by a factor of $10$ (which is still larger than the absorbing element of the comparable hot electron bolometer\cite{Karasik2011} and still possible with standard photolithography) the phonon noise limit would be reduced to $2.6\times 10^{-18}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}}$ for the same operating temperature.
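These dark limits can be verified from the second (phonon noise) term of EQN.~\ref{eq:CEB_NEP}. With $\beta = 6$, $\Sigma = 2 \times 10^{7}~\mathrm{W\,K^{-6}\,m^{-3}}$, $\Omega \approx 1.6 \times 10^{-17}~\mathrm{m^{3}}$ and $T_{e} = T_{ph} = 350~\mathrm{mK}$,
\begin{align*}
NEP_{e\mathrm{-}ph} = \sqrt{2\beta k_{B} \Sigma \Omega \left(T_{e}^{\beta+1} + T_{ph}^{\beta+1}\right)} \approx 8.3 \times 10^{-18}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}},
\end{align*}
and reducing $\Omega$ by a factor of $10$ scales this by $\nicefrac{1}{\sqrt{10}}$ to $\approx 2.6 \times 10^{-18}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}}$, as stated.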
\begin{figure}[ht]
\includegraphics[width = 0.8\columnwidth]{01_FTS_response}
\caption{Response of the Strained Silicon Cold Electron Bolometer to a Fourier Transform Spectrometer with a mercury arc lamp source. Red - Response to a vertically polarised source; green - Horizontally polarised source; highlighted region - expected frequency range of the antenna's $3~\mathrm{dB}$ response.}
\label{fig:FTS_response}
\end{figure}
We have also measured the response of the Strained Silicon Cold Electron Bolometer as a function of the frequency of incident radiation. This was performed in both linear polarisations; since the detector used a twin-slot antenna to couple radiation, more response was expected in one polarisation. The measured spectral response is shown in FIG.~\ref{fig:FTS_response}. The measured response has a cutoff at $300~\mathrm{GHz}$ due to the optical filters in place. The highlighted region denotes the expected frequency range of the twin-slot antenna. There is a clear excess response in this region in the vertical polarisation, parallel to the twin-slot antenna. The peak in the horizontal polarisation may be attributed to response in the coplanar waveguide (CPW), which couples radiation to the absorber, and to the cuts in the aluminium (seen in FIG.~\ref{fig:device_design}b), which break the DC continuity around the detector. Both these cuts and the coplanar waveguide are orthogonal to the twin-slot antenna. The plateau level, at around half of the maximum response, is due to a combination of photons directly splitting Cooper pairs in the aluminium, direct absorption in the doped silicon mesa, and general broadening of the absorption spectrum by the silicon lens and the integrating cavity in which the detector was housed.
\section{Conclusion}\label{sec:conclusion}
We have demonstrated a detector that utilises direct electron cooling via Schottky tunnelling contacts between aluminium and strained silicon. We have shown that this detector has a photon noise limited noise equivalent power of $1.1 \times 10^{-16}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}}$ when observing a $77~\mathrm{K}$ blackbody and under low optical loading conditions has an electrical or dark noise equivalent power, at $350~\mathrm{mK}$, of $8.3 \times 10^{-18}~\mathrm{\mbox{W\,Hz}^{\nicefrac{-1}{2}}}$. The time constant of this detector has been determined to be less than $1~\mathrm{\upmu s}$, which compares extremely favourably to other detector types with similar noise equivalent power.
This work has been financially supported by the EPSRC through grant numbers EP/F040784/1 and EP/J001074/1, and by the Academy of Finland through grant number 252598.
\section*{Introduction}\label{sec:introduction}
\subsection*{Real World Data}\label{subsec:rwd}
Health Information Systems (HIS) are increasingly collecting routine care data
\cite{jha_use_2009,sheikh_adoption_2014,kim_rate_2017,esdar_diffusion_2019,kanakubo_comparing_2019,liang_adoption_2021,apathy_decade_2021}.
This source of Real World Data (RWD) \cite{fda_real-world_2021} bears great
promise to improve the quality of care. On the one hand, the use of these data
translates into direct benefits --primary uses-- for the patient by serving as
the cornerstone of the developing field of personalized medicine
\cite{talukder_diseasomics_2022,mann_artificial_2022,ziegler_high_2022}. On the other hand, they bring
indirect benefits --secondary uses-- by accelerating and improving knowledge
production: on pathologies \cite{campbell_characterizing_2022}, on the conditions of use of health products and
technologies \cite{safran_toward_2007,tuppin_value_2017}, and on measures of
their safety \cite{wisniewski_development_2003}, efficacy or usefulness in
everyday practice \cite{richesson_electronic_2013}. They can also be used to
assess the organizational impact of health products and technologies
\cite{has_guide_2020,has_real-world_2021}.
In recent years, health agencies in many countries have conducted extensive work
to better support the generation and use of real-life data
\cite{has_real-world_2021,kent_nice_2022,plamondongenevieve_integration_2022,fda_real-world_2021}.
Study programs have been launched by regulatory agencies: the DARWIN EU
program by the European Medicines Agency and the Real World Evidence Program by
the Food and Drug Administration \cite{fda_real_2018}.
\subsection*{Clinical Data Warehouse}
In practice, the possibility of mobilizing these routinely collected data
depends very much on their degree of concentration, on a gradient that goes from
centralization in a single, homogeneous HIS to fragmentation across a multitude of
HISs with heterogeneous formats. The structure of the HIS reflects the governance
structure. Thus, the ease of working with these data depends heavily on the
organization of the healthcare actors.
Healthcare actors are sometimes concentrated in a small number of
organizations, resulting in uniform sources of real-life
data. For example, in Israel, the largest healthcare provider (Clalit) insures
and cares for more than half of the population. In South Korea, the government
agency responsible for healthcare system performance and quality (HIRA) is
connected to the HIS of all healthcare stakeholders. England has a centralized
health care system under the National Health Service. This organization has
enabled it to bring together primary care data in two large databases
that correspond to the two major software publishers. Currently,
OpenSAFELY \cite{opensafely_2022}, a first operating platform for research on Covid-19, exists and
should be followed by other similar platforms for more general themes.
Conversely, the production of real-life data may be distributed among many
entities that have made different choices, without common management. Despite
heterogeneous insurance systems and hospitals in the United States, the grouping
of insurers into large entities nevertheless makes it possible to create large
databases such as Medicare, Medicaid or IBM MarketScan. Germany has found that
its data collection systems are very heterogeneous, limiting the potential of
health data. Through the Medical Informatics Initiative \cite{gehring_german_2018},
it created four consortia in 2018 to develop technical and organizational
solutions to improve the consistency of clinical data.
In France, the national insurer collects all hospital activity and ambulatory care
claims into a unique reimbursement database. However, clinical data are scattered
across care sites in numerous HISs.
Whatever the organizational framework, an infrastructure is required that pools
data from one or more medical information systems into homogeneous
formats for management, research or care reuse
\cite{chute_enterprise_2010,pavlenko_implementation_2020}. Figure
\ref{background:CDW:fig:ehr_flow} illustrates, for a Clinical Data Warehouse, the
three phases of data flow from the various sources that make up the HIS:
\begin{enumerate}
\item \textbf{Collection} and copying of original sources.
\item \textbf{Transformation}: Integration and harmonization
\begin{itemize}
\item Integration of sources into a unique database.
\item Deduplication of identifiers.
\item Standardization: A unique data model, independent of the
software vendors' models, harmonizes the different sources in a common schema,
possibly with common nomenclatures.
\item Pseudonymization: Removal of directly identifying elements.
\end{itemize}
\item \textbf{Provision} of sub-population data sets and transformed datamarts
for primary and secondary reuse.
\end{enumerate}
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{figures/ehr_flow_en.png}
\caption{Clinical Data Warehouse: Three steps of data flow from the Hospital Information System: 1) collection, 2) transformations and 3) provisioning.}
\label{background:CDW:fig:ehr_flow}
\end{figure*}
In France, several hospitals have deployed efforts for about ten years to create
CDWs from electronic medical records
\cite{cuggia_roogle_2011,jannot_georges_2017,garcelon_finding_2017,wack_installation_2017,daniel_initializing_2018,malafaye_mise_2018,artemova_predimed_2019,lelong_building_2019,conan_les_2021,lamer_development_2022}. This work has accelerated recently, with the structuring
of CDWs beginning at the regional and national levels. Regional cooperation
networks are being set up --such as the Ouest Data Hub \cite{hugo_2022}. In July
2022, the Ministry of Health opened a 50-million-euro call for projects to set
up and strengthen a network of hospital CDWs coordinated with the national
platform, the Health Data Hub, by 2025.
\subsection*{Objective}\label{objective}
Based on an overview of university hospital CDWs in France, this study makes
general recommendations for properly leveraging the potential of CDWs to improve
healthcare. It focuses on: governance, transparency, types of
data, data reuse, technical tools, documentation and data quality control
processes.
\section*{Material and methods}\label{methods}
Interviews were conducted from March to November 2022 with 32 French regional
and university hospitals, both with existing and prospective CDWs.
\subsection*{Interviews}\label{methods:interviews}
Semi-structured interviews were conducted on the following
themes: the initiation and construction of the CDWs; the current status of the
project and the studies carried out; opportunities and obstacles; and quality
criteria for observational research. Appendix \ref{apd:table:expert_teams} lists
all interviewed people with their team titles. The complete form, with the
precise questions, is available in Appendix \ref{apd:interview_form}.
The interview form was sent to participants in advance, and then used as a
guide during the interviews. The interviews lasted 90 minutes and were
recorded for reference.
\subsection*{Quantitative methods}\label{methods:quantitative}
Three tables in Appendix \ref{apd:study_tables} detail the structured answers.
The first two tables deal with the characteristics of the actors and those of
the data warehouses. We completed them based on the notes taken during the
interviews, the recordings, and by asking the participants for additional
information. The third table focuses on ongoing studies in the CDWs. We
collected the list of these studies from the dedicated reporting portals, which
we found for 8 out of 14 operational CDWs. We developed a classification of
studies, based on the typology of retrospective studies described by the OHDSI
research network \cite{schuemie_book_2021}. We enriched this typology by
comparing it with the collected studies, resulting in the following six categories:
\begin{itemize}
\item \textbf{Outcome frequency}: Incidence or prevalence estimation for a
medically well-defined target population.
\item \textbf{Population characterization}: Characterization of a specific set
of covariates. Feasibility and pre-screening studies belong to this category \cite{pasco_pre-screening_2019}.
\item \textbf{Risk factors}: Identification of covariates most associated with
      a well-defined clinical target (disease course, care event). These are
      association studies that do not quantify the causal effect
      of the factors on the outcome of interest.
\item \textbf{Treatment Effect}: Evaluation of the effect of a well-defined
intervention on a specific outcome target. These studies intend to show
a causal link between these two variables \cite{hernan_methods_2021}.
\item \textbf{Development of decision algorithms}: Improving or automating a
      diagnostic or prognostic process, based on clinical data from a given
      patient. This can take the form of a risk score, a preventive score, or the
      implementation of a diagnostic assistance system. These studies are part
      of the individualized medicine approach, with the goal of inferring
      relevant information at the level of the individual patient's file.
\item \textbf{Medical informatics}: Methodologically or tool oriented. These
      studies aim to improve the understanding and capacity for action of
      researchers and clinicians. This type of study includes the evaluation
      of decision support tools, the extraction of information from
      unstructured data, and automatic phenotyping methods.
\end{itemize}
Studies were classified according to this nomenclature based on their title and
description.
\section*{Results}\label{results}
Figure \ref{results:image:eds_map} summarizes the state of development
of CDWs in France. Out of 32 regional and university hospitals in France, 14
have a CDW in production, 5 are experimenting, 5 have a prospective CDW project,
and 8 did not have any CDW project at the time of writing. The results are described
for all projects that are at least at the prospective stage, minus the three
hospitals that we were unable to interview after multiple reminders (Orléans, Metz and Caen),
resulting in a denominator of 21 university hospitals.
\begin{figure*}[!b]
\centering
\includegraphics[width=0.67\linewidth]{figures/eds_map.pdf}
\caption{Repartition of CDWs in France.}
\label{results:image:eds_map}
\end{figure*}
\subsection*{Governance}
Figure \ref{results:governance:image:timeline} shows the history of the
implementation of CDWs. A distinction must be made between the first works (in
blue), which systematically precede the regulatory authorization (in green)
from the French Commission on Information Technology and Liberties (CNIL).
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{figures/timeline_eds.png}
\hspace{5em}
\caption{French CDW implementations date back to the first academic work on data reuse in the early 2010s and have accelerated recently.}
\label{results:governance:image:timeline}
\end{figure*}
The CDWs have so far been initiated by one or two people from the hospital world
with an academic background in bioinformatics, medical informatics or
statistics. The sustainability of the CDW is accompanied by the construction of
a cooperative environment between different actors: the Medical Information
Department (MID), the Information Systems Department (IT), the Clinical Research
Department (CRD), clinical users, and the support of the management or the
Institutional Medical Committee. It is also accompanied by the creation of a
team, or entity, dedicated to the maintenance and operationalization of the CDW.
More recent initiatives, such as those of the HCL (Hospitals of the city of
Lyon) or the \textit{Grand-Est} region, are distinguished by initial,
high-level institutional support.
The CDW has a federating potential for the different business departments of the
hospital with the active participation of the CRD, the IT Department and the
MID. Although there is always an operational CDW team, the human resources
allocated to it vary greatly: from half a full-time equivalent to 80 people for
the AP-HP, with a median of 6.0 people. The team systematically includes a
coordinating physician. It is multidisciplinary with skills in public health,
medical informatics, informatics (web service, database, network,
infrastructure), data engineering and statistics.
Historically, the first CDWs were based on in-house development. More
recently, private actors have been offering their services for the implementation and
operationalization of CDWs (15/21). These services range from technical
expertise for building and cleaning the data flows to the delivery
of a platform integrating the different stages of data processing.
\subsection*{Management of studies}
Before starting, projects are systematically analyzed by a scientific and
ethical committee. A local submission and follow-up platform is often mentioned
(12/21), but its functional scope is not well defined. It ranges from simple
authorization of the project to the automatic provision of data into a Trusted
Research Environment (TRE) \cite{goldacre_better_2022}. The processes
for starting a new project on the CDW are always communicated internally but rarely documented publicly (8/21).
\subsection*{Transparency}
Studies underway in CDWs are unevenly referenced publicly on hospital websites.
In total, we found 8 of these portals out of 14 CDWs in production. Uses other
than ongoing scientific studies are very rarely publicly documented.
\subsection*{Data}
\subsubsection*{Strong dependence on the HIS}
CDW data reflect the HIS used on a daily basis by hospital staff. Stakeholders
point out that the quality of CDW data and the amount of work required for rapid
and efficient reuse are highly dependent on the source HIS. The possibility of
accessing data from an HIS in a structured and standardized format greatly
simplifies its integration into the CDW and then its reuse.
\subsubsection*{Categories of data}
Although the software landscape is varied across the country, the main
functionalities of HIS are the same. We can therefore conduct an analysis of the
content of the CDWs, according to the main categories of common data present in
the HIS.
The common base for all CDWs is constituted by data from the Patient
Administrative Management software (patient identification, hospital movements)
and from billing codes. Then, data flows are progressively developed from the
various software packages that make up the HIS. The goal is to build a homogeneous data
schema, linking the sources together, controlled by the CDW team. The
prioritization of sources is done through thematic projects, which feed the CDW
construction process. These projects improve the understanding of the sources
involved, by confronting the CDW team with the quality issues present in the
data.
Table \ref{results:data:img:data_categories} presents the proportions of each
data category integrated in French CDWs. Structured biology results and texts are
almost always integrated (20/21 and 20/21). The texts contain a large amount of
information. They constitute unstructured data and are therefore more difficult
to use than structured tables. Other integrated sources are the hospital drug
circuit (prescriptions and administrations, 16/21), Intensive Care Unit (ICU, 2/21)
and nurse forms (4/21). Imaging is rarely integrated (4/21), notably for reasons
of volume. Genomic data are well identified but never integrated, even though
they are sometimes considered important and included in the CDW work program.
\begin{table}[!ht]
\centering
\begin{tabular}{lrl}
\thickhline
Category of data & Number of CDW & Ratio \\
\thickhline
Administrative & 21 & 100 \% \\
Billing Codes & 20 & 95 \% \\
Biology & 20 & 95 \% \\
Texts & 20 & 95 \% \\
Drugs & 16 & 76 \% \\
Imagery & 4 & 19 \% \\
Nurse Forms & 4 & 19 \% \\
Anatomical pathology & 3 & 14 \% \\
ICU & 2 & 10 \% \\
Medical devices & 2 & 10 \% \\
\thickhline
\end{tabular}
\vspace{1em}
\caption{Type of data integrated into the French CDWs: Text, billing codes and biology are the foundations that enrich the core administrative data.}\label{results:data:img:data_categories}
\end{table}
\subsection*{Data reuse}
\subsubsection*{Today, the main use put forward for the constitution of CDWs is that of scientific research.}
The studies are mainly observational (non-interventional). Figure
\ref{results:usage:image:study_objective} presents the distribution of the six
categories defined in \nameref{methods:quantitative} for 231 studies collected
on the study portals of nine hospitals. The studies focus first on population
characterization (25 \%), followed by the development of decision support
processes (24 \%), the study of risk factors (18 \%) and the treatment effect
evaluations (16 \%).
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{figures/pourcentage_types_etudes.pdf}
\caption{Percentage of studies by objective.}
\label{results:usage:image:study_objective}
\end{figure*}
The CDWs are used extensively for internal projects such as student theses (at
least in 9/21) and serve as an infrastructure for single-department research, their
great benefit being the de-siloing of different information systems. For most
of the institutions interviewed, there is still a lack of resources and maturity
of methods and tools for conducting inter-institutional research (such as in the
\textit{Grand-Ouest} region of France) or via European calls for projects
(EHDEN). These two research networks are made possible by supra-local governance
and a common data schema, respectively eHop \cite{madec_ehop_2019} and OMOP
\cite{hripcsak_observational_2015}. The Paris hospital group, thanks to its regional
coverage and its choice of OMOP, is also well advanced in multi-centric
research. At the same time, the \textit{Grand-Est} region is building a network
of CDWs based on the model of the \textit{Grand-Ouest} region, also using eHop.
\subsubsection*{CDWs are used for monitoring and management (16/21)}
CDWs have sometimes been initiated to improve and optimize billing coding
(4/21). The clinical texts gathered in the same database are queried using
keywords to facilitate the structuring of information. The data are then
aggregated into indicators, some of which are reported at the national level.
The construction of indicators from clinical data can also be used for the
administrative management of the institution. Finally, closer to the clinic,
some actors state that the CDW could also be used to provide regular and
appropriate feedback to healthcare professionals on their practices. This
feedback would help to increase the involvement and interest of healthcare
professionals in CDW projects. The CDW is sometimes of interest for health
monitoring (e.g., during Covid-19) or pharmacovigilance (13/21).
\subsubsection*{Strong interest for CDW in the context of care (13/21)}
Some CDWs develop specific applications that provide new functionalities
compared to care software. Search engines can be used to query all the
hospital's data gathered in the CDW, without data compartmentalization between
different software systems. Dedicated interfaces can then offer a unified view of the
history of a patient's data across specialties, which is
particularly valuable in internal medicine. These cross-disciplinary search
tools also enable healthcare professionals to conduct rapid searches in all the
texts, for example to find similar patients \cite{garcelon_finding_2017}. Uses
for prevention, automation of repetitive tasks and care coordination are also
highlighted. Concrete examples are the automatic sorting of hospital
prescriptions by order of complexity, or the setting up of specialized channels
for primary or secondary prevention.
\subsection*{Technical architecture}
The technical architecture of modern CDWs has several layers:
\begin{itemize}
\item Data processing: connection and export of source data, diverse
transformation (cleaning, aggregation, filtering, standardization).
\item Data storage: database engines, file storage (on file servers or object
storage), indexing engines to optimize certain queries.
\item Data exposure: raw data, APIs, dashboards, development and analysis
environments, specific web applications.
\end{itemize}
Supplementary cross-functional components ensure the efficient and secure
operation of the platform: identity and authorization management, activity
logging, automated administration of servers and applications.
The analysis environment (Jupyterhub or RStudio datalabs) is a key component of
the platform, as it allows data to be processed within the CDW infrastructure. A
few CDWs had such an operational datalab at the time of our study (6/21), and almost
all of them have decided to provide one to researchers. Currently, clinical
research teams still often work on data extractions, in less secure
environments.
\subsection*{Data quality, standard formats}
\subsubsection*{Quality tools} Systematic data quality monitoring processes are
being built in some CDWs. Often (8/21), scripts are run at regular intervals to
detect technical anomalies in data flows. A few data quality investigation tools,
in the form of dashboards, are beginning to be developed internally (3/21).
Theoretical reflections are underway on the possibility of automating data
consistency checks, for example demographic or temporal checks. Some facilities
randomly pull records from the EHR to compare them with the information in the
CDW.
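To illustrate what such automated checks can look like in practice, the following is a minimal, hypothetical Python sketch; the table and column names are invented for the example and are not taken from any of the interviewed CDWs.
\begin{verbatim}
import pandas as pd

def quality_report(visits: pd.DataFrame, previous_count: int) -> dict:
    """Basic flow, demographic and temporal checks for a visit table.

    Expected (hypothetical) columns: patient_id, birth_date,
    admit_date, discharge_date.
    """
    report = {}
    # Flow check: a sudden volume drop often signals a broken export.
    report["row_count"] = len(visits)
    report["volume_drop"] = len(visits) < 0.9 * previous_count
    # Completeness check: null rate per column.
    report["null_rates"] = visits.isna().mean().to_dict()
    # Temporal consistency: discharge must not precede admission.
    report["negative_stays"] = int(
        (visits.discharge_date < visits.admit_date).sum())
    # Demographic consistency: plausible age at admission.
    age = (visits.admit_date - visits.birth_date).dt.days / 365.25
    report["implausible_ages"] = int(((age < 0) | (age > 120)).sum())
    return report
\end{verbatim}
Such scripts are easy to schedule at each refresh of the warehouse, and their outputs can feed the dashboards mentioned above.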
\subsubsection*{Standard format}
No single standard data model stands out as being used by all CDWs. All are
aware of the existence of the OMOP (research standard)
\cite{hripcsak_observational_2015} and HL7 FHIR (communication standard) models
\cite{braunstein_health_2019}. Several CDWs consider the OMOP model to be a
central part of the warehouse, particularly for research purposes (9/21). This
tendency has been encouraged by the European call for projects EHDEN, launched
by the OHDSI research consortium, the originator of this data model. In the
\textit{Grand-Ouest} region of France, the CDWs use the eHop warehouse software.
The latter uses a common data model, also named eHop. This model will be extended
with the future warehouse network of the \textit{Grand-Est} region, which is also
choosing this solution. Including this grouping and the other establishments
that have chosen eHop, this model covers 12 of the 32
university hospitals. This allows eHop adopters to launch ambitious
interregional projects. However, eHop does not define a standard nomenclature to
be used in its model and is not aligned with emerging international standards.
\subsubsection*{Documentation}
Half of the CDWs have put in place documentation, accessible within the
organization, on data flows and the meaning and proper use of qualified data (10/21
mentioned). This documentation is used by the team that develops and maintains
the warehouse. It is also used by users to understand the transformations
performed on the data. However, it is never publicly available. No schema of the
data, once transformed and prepared for analysis, is published.
\section*{Discussion}\label{discussion}
\subsection*{Principal findings}
We give the first overview of the CDWs in university hospitals of France, with 32
hospitals reviewed. The first CDW implementations date from 2011, and deployment
accelerated in late 2020. Today, 24 of the university hospitals have an ongoing CDW
project.
From this case study, some general considerations can be drawn that should be
valuable to all healthcare systems implementing CDWs on a national scale.
\subsubsection*{Governance}
As the CDW becomes an essential component of data management in the hospital,
the creation of an autonomous internal team dedicated to data architecture,
process automation and data documentation should be encouraged
\cite{goldacre_better_2022}. This multidisciplinary team should develop an
excellent knowledge of the data collection process and potential reuses in order
to qualify the different flows coming from the source IS, standardize them
towards a homogeneous schema and harmonize the semantics. It should combine
strong public health competences with technical and statistical
competences to develop high-quality software facilitating the reuse of data.
The resources specific to the warehouse are scarce and often taken from other
budgets or from project-based credits. While this is natural for an initial
prototyping phase, it is not suited to the long-term and transversal
nature of the tool. As a research infrastructure of growing importance, the CDW must
have the financial and organizational means to plan for the long term.
The governance of the CDW has multiple layers: local within the university
hospital, interregional, and national/international. The first level ensures the
quality of data integration as well as the pertinence of data reuse by
clinicians themselves. The interregional level is well suited to resource
pooling and collaboration. Finally, the national and international levels
ensure coordination, encourage consensus on committing choices such as metadata
or interoperability, and provide financial, technical and regulatory support.
\subsubsection*{Transparency}
International recommendations
\cite{pavlenko_implementation_2020,has_real-world_2021,kohane_what_2021} favour
public referencing of ongoing projects, with prior publication of research
protocols, which is essential from a scientific point of view to control bias.
All institutions should publish all of their studies on
\url{https://clinicaltrials.gov/} in the observational research category.
Introducing EHR-based studies as a new subtype of observational study would make it
possible to better follow the utilization of this emerging data source.
From a patient's perspective, there is currently no way to know whether their
personal data are included in a specific project. Better patient information
about the reuse of their data is needed to build trust over the long term. A
strict minimum is the establishment and regular update of the declarative portals of
ongoing studies for each institution.
\subsubsection*{Data and data usage}
When using a CDW, the analyst has not defined the data collection process and is
generally unaware of the context in which the information is logged. This new
dimension of medical research requires a much greater development of data
science skills to shift the focus from the implementation of the statistical
design to the data engineering process. Data reuse requires more effort to
prepare the data and to document the transformations performed.
International recommendations insist on the need for common data formats
\cite{zhang_best_2022,kohane_what_2021}. However, adoption is still lacking, whether
of research standards by hospital CDWs to conduct robust multi-site
studies, or by EHR vendors to allow sufficient data
interoperability for efficient data communication. Building open-source tools on
top of these standards, such as those of OHDSI \cite{schuemie_book_2021}, could
foster their adoption.
Many ongoing studies concern the development of decision support processes
whose goal is to save time for healthcare professionals. These are often
research projects, not yet integrated into routine care. Data reuse oriented
towards primary care is still rare and rarely supported by appropriate funding.
\subsubsection*{Technical architecture}
Tools, methods and data formats of CDWs lack harmonization due to rapid
technical innovation and the presence of many actors. As suggested by the recent
report on the use of data for research in the UK \cite{goldacre_better_2022}, it
would be wise to focus on a small number of model technical platforms.
These platforms should favor open-source solutions to ensure transparency by
default, foster collaboration and consensus, and avoid technological lock-in of
the hospitals.
\subsubsection*{Data quality and documentation}
Quality is not sufficiently considered a relevant scientific topic in itself.
However, it is the backbone of all research done within a CDW. In order to
improve the quality of the data with respect to research uses, it is necessary
to conduct continuous studies dedicated to this topic
\cite{zhang_best_2022,kohane_what_2021,shang_conceptual_2018,looten_what_2019}.
These studies should contribute to a reflection on methodologies and standard
tools for data quality, such as those developed by the OHDSI research network
\cite{schuemie_book_2021}.
Finally, there is a need for open source publication of research code to ensure
quality retrospective research
\cite{shang_conceptual_2018,seastedt_global_2022}. Recent research in data
analysis has shown that innumerable biases can lurk in training data sets
\cite{gebru_datasheets_2021,mehrabi_survey_2021}. Open publication of data
schemas is considered an indispensable prerequisite for all data science and
artificial intelligence uses \cite{gebru_datasheets_2021}. Inspired by dataset
cards \cite{gebru_datasheets_2021} and dataset publication guides, it would be
interesting to define a standard CDW card documenting the main data flows.
\subsection*{Limitations}
The interviews were conducted in a semi-structured manner within a limited time
frame. As a result, some topics were covered more quickly, and only those
explicitly mentioned by the participants could be recorded. The uneven existence
of study portals introduces a bias in the recording of the types of studies
conducted on CDWs: those with a transparency portal already have more mature
use cases.
With only one specialized oncology center and four non-university
hospital groups, including two private health care institutions, we have not
covered the full health care landscape in France. CDW initiatives also
exist in primary care, in smaller hospital groups and in private companies.
\section*{Conclusion}\label{conclusion}
The French CDW ecosystem is beginning to take shape, benefiting
from an acceleration thanks to national funding, the multiplication of
industrial players specializing in health data and the beginning of a
supra-national reflection on the European Health Data Space \cite{ehds_2022}.
However, some points require special attention to ensure
that the potential of the CDW translates into patient benefits.
The priority is the creation and perpetuation of multidisciplinary warehouse
teams capable of operating the CDW and supporting the various projects. A
combination of public health, data engineering, data stewardship, statistics and
IT competences is a prerequisite for the success of the CDW. The team should be
the primary point of contact for data exploitation issues and should
collaborate closely with the existing hospital departments.
The constitution of a multi-level collaboration network is another priority. The
local level is essential to structure the data and understand its possible uses.
Interregional, national and international coordination would make it possible to
create thematic working groups, in order to stimulate a dynamic of cooperation
and mutualization.
A common data model should be encouraged, with precise metadata making it possible to map
the integrated data, in order to qualify the uses that can be developed today from
CDWs. More broadly, open-source documentation of data flows and of the transformations
performed for quality enhancement deserves stronger incentives, to unleash the
potential for innovation for all health data reusers.
Finally, the question of expanding the scope of the data beyond the purely
hospital domain must be asked. Many risk factors and patient follow-up data are
missing from the CDWs but are crucial for understanding pathologies. Combining
primary care data and hospital data would provide a complete view of patient care.
\section*{Ethics Statement}
This work has been authorized by the board of the French National Authority for
Health (HAS). Every interviewed participant was asked by email for their participation and
informed of the possible forms of publication: a French official report and an
international publication. Furthermore, at each interview, every participant was
asked for their agreement before recording the interview. Only one
participant declined to have the video recorded.
\section*{Acknowledgments}
\subsection*{Funding}
The Haute Autorité de Santé (HAS) funded this research; Inria supervised
Matthieu Doutreligne in the Social Data team.
\subsection*{Author contributions}
Conceptualization: PAJ, MD
Data curation: MD
Formal analysis: MD
Methodology: MD, AD
Project Administration: PAJ
Software: MD
Writing - Original Draft Preparation: MD
Writing - Review \& Editing: MD, AD, PJ, AL, XT
\subsection*{Acknowledgments}
We want to thank all participants and experts interviewed for this study. We
also want to thank the other people who proofread the manuscript for external
review: Judith Fernandez (HAS), Pierre Liot (HAS), Bastien Guerry (Etalab),
Aude-Marie Lalanne Berdouticq (Institut Santé numérique en Société), Albane
Miron de L’Espinay (ministère de la Santé et de la Prévention), and Caroline Aguado
(ministère de la Santé et de la Prévention). We also thank Gaël
Varoquaux for his support and advice.
\printbibliography
\clearpage
\section{Introduction} \label{sec:intro}
Much of the study of the formation of stars focusses on the process by which a denser part of the \textquotedblleft Interstellar Medium" (ISM) evolves into a pre-main-sequence star. However, few stars form in isolation. There have been a number of studies in the past investigating the feedback of individual HII regions, including some that find evidence for feedback and others that have found no, or very limited, signs of feedback. In this paper, we use submillimeter-wavelength observations of dust emission to look at the relationship between HII regions and the dense interstellar material associated with them. We measure the physical properties of this material, determine how it is distributed around the HII regions, and look for signs of interactions taking place. Overall, we were able to amass a large number of observations of many HII regions in order to set some constraints on their feedback in general, as well as to understand how important feedback is on average.
\\ \\
An HII region is the product of one or more massive OB stars embedded in a molecular cloud. The ionizing radiation provided by the parent OB star(s) not only drives the outward expansion of the HII region, but can also heat nearby clumps while progressively eroding them away by gradually ionizing them. Sandford et al. (1982) have suggested that dusty clumps bombarded by ionizing radiation can be led to collapse because of the large radiation pressure exerted on their outer layers, while the large extinction of their dusty content delays the ionization of gas in their interior regions. These claims have been followed up with analytic models (Kovalenko \& Shchekinov 1992) as well as simulations (Motoyama, Umemoto, \& Shang 2007), (Bisbas et al. 2011). This process of forming stars is now commonly referred to as \textquotedblleft Radiative Driven Implosion" (RDI).
\\ \\
Furthermore, older (post-Str\"{o}mgren-sphere) HII regions have slowly expanding shockwave-ionization-front structures whose propagation can have dramatic effects on the local ISM. More specifically, the pressure differential between the shockwave-ionization-front bundle and the local ISM is large and sharp, so material encountered along the way becomes swept up and compressed into clumps. Elmegreen and Lada (1977) suggested that these clump condensations can become gravitationally unstable as they grow beyond a certain column density, and collapse to form stars. This process of forming stars is now commonly referred to as Collect and Collapse (CC). In later work, Lada used this process to explain how star formation could propagate in sequential waves through a \textquotedblleft Giant Molecular Cloud" (GMC): swept-up molecular material forms a shell filament along the boundary of an expanding HII region; gravitational instabilities can develop along this filament, leading to its fragmentation into several dense cores, which ultimately collapse to give rise to massive stars. These maintain the velocity of the original layer in which they formed, and give rise to subsequent HII regions deeper within the cloud structure. Lada (1987) describes the end of this chain as the point at which the massive stars completely strip their parent GMC of gas and dust material.
\\ \\
One of the first HII regions observed to have extended molecular condensations forming along its boundary was Sh-2 104, where emission from various molecular CO transitions was used to trace molecular material while radio-continuum emission was used to trace the location of the HII region itself (Deharveng et al. 2003). With the rise of submillimeter astronomy, an increasing number of similar HII regions were found using the cooler dust components to trace the associated gas. Some of these include the case of RCW 79 (Zavagno et al. 2005), Sh-2 219 (Deharveng et al. 2006), RCW 120 (Zavagno et al. 2007), Sh-2 212 (Deharveng et al. 2008), the Sh-2 254 -- Sh-2 258 complex (Chavarria et al. 2008), Sh-2 217 (Brand et al. 2011), Sh-2 90 (Samal et al. 2014), Sh-2 39 (Duronea et al. 2017) and Sh-2 242 (Dewangan et al. 2017), all of which show signs of feedback acting on material around the HII region. On the other hand, in a study of 25 HII regions conducted by Xu et al. (2014) only 3 had timescale estimates that allowed for the action of feedback.
\\ \\
In this work we assemble observations of dust emission from 38 SCUBA-2 images containing a total of 53 HII regions. Many of these results are only accessible through the use of such a large sample, which, to the best of our knowledge, has never been available before. The HII regions comprising this sample are all larger, more evolved, mature objects, in which the effects of feedback are expected to be most pronounced.
\section{Observations}
The sample of HII region systems analyzed is comprised exclusively of mature, galactic HII regions taken from the \textquotedblleft Sharpless" (Sh-2) (Sharpless 1959) and \textquotedblleft Blitz-Fich-Stark" (BFS) (Blitz, Fich, \& Stark 1982) catalogues. These catalogues select HII regions from the Palomar Sky Survey, a set of large (6 degree) optical images of the sky. This results in a sample of objects that have evolved beyond the compact stage, in which they are still deeply embedded in, and obscured by, dense clouds. The smallest HII regions visible on these images are approximately half an arcminute in angular size, but the largest may cover several degrees. Our sample is drawn from those that have VLA observations available, which in practice limits the sample to objects less than $\approx20'$ in diameter.
\\ \\
The properties of these HII regions are determined using 1.46 GHz and 4.89 GHz VLA data (Fich 1986, 1993). These data were only available for 48 of the 53 HII regions located on these SCUBA-2 images. The position and size of each HII region are determined using the 10\% flux contour of its radio-continuum emission, which is subsequently fitted to a circle. Number densities are determined using the simplified expression from Mezger \& Henderson (1967), along with small corrections for the radio observing band and the use of a cylindrical approximation, and an electron temperature of $\approx 8 \times 10^3 \ K$, equal to the average electron temperature of the HII regions analyzed in the work of Rudolph et al. (2006). To determine the physical radii used in this expression, radial distances provided by Foster \& Brunt (2015) are used, and when those are not available, older estimates compiled by Chan \& Fich (1995) are used instead. The masses are determined by incorporating the overall number density and assuming abundance ratios $He^{++}/H^{+} \approx 0$ and $He^{+}/H^{+} = He/H \approx 0.06$ (Rudolph et al. 2006).
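\\ \\
For readers wishing to reproduce this step, a minimal sketch of the electron density calculation is given below. Rather than reproducing the simplified Mezger \& Henderson (1967) expression itself, it uses the standard optically thin free-free relations that underlie it, and it assumes a uniform, face-on cylinder with depth equal to its diameter as our reading of the cylindrical approximation; the numerical coefficients are the usual approximations and should be treated as illustrative.
\begin{verbatim}
import numpy as np

K_B = 1.380649e-16   # Boltzmann constant (erg/K)
C   = 2.99792458e10  # speed of light (cm/s)
JY  = 1.0e-23        # 1 Jy in erg s^-1 cm^-2 Hz^-1

def electron_density(s_nu_jy, nu_ghz, theta_arcsec, dist_kpc, t_e=8.0e3):
    """rms electron density (cm^-3) from an optically thin free-free
    flux density, for a uniform face-on cylinder of depth 2R."""
    nu = nu_ghz * 1.0e9
    theta = theta_arcsec * np.pi / (180.0 * 3600.0)  # radius in radians
    omega = np.pi * theta**2                         # source solid angle
    # Rayleigh-Jeans brightness temperature of the source
    t_b = s_nu_jy * JY * C**2 / (2.0 * K_B * nu**2 * omega)
    # Optically thin: T_B = T_e * tau, with the standard approximation
    # tau = 3.28e-7 (T_e/1e4 K)^-1.35 (nu/GHz)^-2.1 (EM / pc cm^-6)
    em = (t_b / t_e) / (3.28e-7 * (t_e / 1.0e4)**-1.35 * nu_ghz**-2.1)
    r_pc = dist_kpc * 1.0e3 * theta                  # physical radius (pc)
    return np.sqrt(em / (2.0 * r_pc))                # EM = n_e^2 * L, L = 2R

# A 100'' radius region at 3.9 kpc with S(1.46 GHz) = 1 Jy gives a few
# tens of cm^-3, comparable to the median density reported below.
print(electron_density(1.0, 1.46, 100.0, 3.9))
\end{verbatim}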
\\ \\
For the identification of submillimeter-emitting clumps in the vicinity of these HII regions, 450$\mu$m and 850$\mu$m SCUBA-2 data were used, collected from various projects in the \textquotedblleft Canadian Astronomy Data Centre" (CADC) archives and reduced using a standard data reduction pipeline procedure (Holly \& Currie 2014). The resolution of the SCUBA-2 instrument allowed the separation of these clumps into a warm outer layer (\textquotedblleft cloud") and one or more inner, dense condensations (\textquotedblleft cores").
\\ \\
Photometry of the clumps was obtained using a scripted routine in Python. The routine treats the cloud and embedded cores individually, with the cloud flux always removed from that of the embedded cores. In addition, negative bowl artifacts persisting after the application of a mask during data reduction are treated on a clump-by-clump basis. This is done by approximating the residual negative bowl as a step function, whose value is estimated using small, circular apertures that are employed along the periphery of the source with the goal of finding the most negative mean flux per pixel value occurring there. Once found, this value is used to characterize the negative bowl, and allows its removal from the source flux. Finally, the contamination in the 850$\mu$m band from the molecular CO(3-2) transition is treated using a correction factor of 10\%. The choice for this value comes from consideration of SCUBA-2 surveys of NGC 1333, NGC 2071 and NGC 2024 by Drabek et al. (2012) in which the majority of SCUBA-2 sources experienced contamination levels less than 20\%, but also, a survey of the Taurus star-forming region by Buckle et al. (2015) within which all SCUBA-2 sources experienced a contamination level less than 15\%, with a large number of sources not exceeding a contamination level of 5\%.
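\\ \\
A minimal sketch of the negative-bowl step of this routine follows; the probe aperture radius and the number of probe positions are illustrative choices, not necessarily the values used in the actual routine.
\begin{verbatim}
import numpy as np

def bowl_level(image, x0, y0, r_src, r_probe=8.0, n_probe=24):
    """Most negative mean flux per pixel among small circular apertures
    placed along the periphery of a source: the step-function estimate
    of the residual negative bowl described in the text."""
    yy, xx = np.indices(image.shape)
    levels = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_probe, endpoint=False):
        xc = x0 + (r_src + r_probe) * np.cos(phi)
        yc = y0 + (r_src + r_probe) * np.sin(phi)
        probe = (xx - xc)**2 + (yy - yc)**2 <= r_probe**2
        if probe.any():
            levels.append(np.nanmean(image[probe]))
    return min(levels) if levels else 0.0

def clump_flux(image, x0, y0, r_src):
    """Aperture flux with the negative bowl level removed per pixel."""
    yy, xx = np.indices(image.shape)
    src = (xx - x0)**2 + (yy - y0)**2 <= r_src**2
    return np.nansum(image[src] - bowl_level(image, x0, y0, r_src))
\end{verbatim}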
\\ \\
\begin{figure}
\centering
\subfloat[Sh-2 168]{
\fbox{\includegraphics[width=0.48\columnwidth]{S168.pdf}}}
\subfloat[Sh-2 201]{
\fbox{\includegraphics[width=0.48\columnwidth]{S201.pdf}}}
\subfloat[Sh-2 242]{
\fbox{\includegraphics[width=0.48\columnwidth]{S242.pdf}}}
\subfloat[Sh-2 305]{
\fbox{\includegraphics[width=0.48\columnwidth]{S305.pdf}}}
\caption{A collection of 4 HII regions representing the variety of SCUBA-2 condensation morphologies encountered in this work. The images consist of SCUBA-2 850$\mu$m emission overlaid with VLA 1.46 GHz contours. The contours are color-coded by confidence level multiples (blue/cyan/green/orange/red $\rightarrow$ 1$\sigma$, 2$\sigma$, 3$\sigma$, 4$\sigma$, 5$\sigma$). The yellow \textquotedblleft x" ticks indicate the locations of identified cores.}
\label{fig:HII_regions}
\end{figure}
The opacity model used is the one originally proposed by Ossenkopf \& Henning (1994) for use with protostellar cores. Because only two submillimeter bands (450$\mu$m and 850$\mu$m) are available, a prescribed value of $\beta = 1.8$ is used with this opacity model. A dust-to-gas mass ratio of 1:100 and a composition of 70\% $H_{2}$ by mass are also assumed throughout these calculations.
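\\ \\
These recipes can be summarized in a short sketch: the 450/850 flux ratio is inverted for a modified-blackbody temperature with $\beta = 1.8$, and the 850$\mu$m flux then gives a total mass. The opacity value below is an illustrative number broadly consistent with the Ossenkopf \& Henning (1994) tables and a 1:100 dust-to-gas ratio, not necessarily the exact value adopted in this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

H, K_B, C = 6.62607e-27, 1.380649e-16, 2.99792458e10   # cgs
NU450, NU850 = C / 450.0e-4, C / 850.0e-4               # Hz
BETA = 1.8                                              # prescribed value
KAPPA850 = 0.012   # cm^2 per g of gas+dust; illustrative (see caveat above)
M_SUN, PC = 1.989e33, 3.0857e18

def planck(nu, t):
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * t))

def dust_temperature(ratio_450_850):
    """Solve S450/S850 = (nu450/nu850)**BETA * B(nu450,T)/B(nu850,T)."""
    f = lambda t: ((NU450 / NU850)**BETA
                   * planck(NU450, t) / planck(NU850, t) - ratio_450_850)
    return brentq(f, 5.0, 300.0)

def clump_mass(s850_jy, dist_kpc, t):
    """Total (gas+dust) mass, M = S * d^2 / (kappa * B(nu,T)), in M_sun."""
    d = dist_kpc * 1.0e3 * PC
    return s850_jy * 1.0e-23 * d**2 / (KAPPA850 * planck(NU850, t)) / M_SUN

t = dust_temperature(5.0)            # a flux ratio of 5 gives T ~ 12 K
print(t, clump_mass(0.826, 3.9, t))  # of order a few 10^2 M_sun
\end{verbatim}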
\\ \\
A comprehensive list of properties was made for all identified cores and clouds of this sample. This included measured properties such as positions, size and 450$\mu$m/850$\mu$m flux, but also several derived properties, such as average temperature, total mass, average column density and average number density using the same recipes as those used for the analysis of SCUBA-2 clumps in surveys of the Gould Belt (Buckle et al. 2015).
\\ \\
From the total of 38 SCUBA-2 450$\mu$m and 850$\mu$m systems investigated, 31 (82\%) had one or more dusty clumps identified in the vicinity of an HII region from the considered sample. A total of 185 clumps and 333 cores were identified; after discarding a few of these objects on the grounds that they were too far from the nearest HII region for an association to exist, 176 clumps (95\%) and 315 embedded cores (95\%) were retained for analysis.
\\ \\
We portray the variety of structures we encountered with a set of 4 representative cases in Figure \ref{fig:HII_regions}. The most common structure (58\% of our fields) has 4 or fewer dense cores embedded in a few clumps, as seen in Sh-2 201 in this Figure. 81\% of our image fields have 20 or fewer dense cores, with a few cases where most or all of the cores are embedded in one massive clump, as in Sh-2 242 in Figure \ref{fig:HII_regions}. Only six of our fields (the remaining 19\%) show large numbers of cores (28 to 49 cores), as seen in Sh-2 168 and Sh-2 305 in Figure \ref{fig:HII_regions}. However, these six fields contain 69\% of the dense cores in our sample. Many of the dense cores are found far beyond the boundary of the HII region: we examine the radial distribution of the positions of these cores below.
\section{Results}
\subsection{Summarized Properties}
Our sample was selected to include virtually all of the HII regions with data in the SCUBA-2 archives. Nonetheless, we excluded a handful of more extreme HII regions, such as those that are very nearby and/or large in angular extent (e.g., the Orion HII region). Nor was our sample selected for similarity, such as covering only a small range in distance, angular size, or brightness. Despite this, the measured and calculated properties of this sample were remarkably uniform. Table \ref{table:Results} summarizes the properties of the sample, including median values, the \textquotedblleft central population" (i.e., the range of each property when outliers are not considered), and finally, the full range.
\begin{table}[!htb]
\centering
\caption{Table of summarized properties for all HII regions and their associated clouds and cores.}
\begin{footnotesize}
\setlength\tabcolsep{2pt}
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Property} & \textbf{Median} & \textbf{Range (Excluding Outliers)} & \textbf{Sample Fraction} & \textbf{Full Range} \\
\hline
Distance (kpc) & $3.9$ & $2 \leq d \leq 9$ & $87\%$ & $1.15 \leq d \leq 12.6$ \\
\hline
HII Region Angular Radius ($''$) & $100$ & $30 \leq R \leq 300$ & $90\%$ & $24 \leq R \ \leq 600$ \\
\hline
HII Region Physical Radius (pc) & $2.20$ & $0.5 \leq R \leq 5$ & $80\%$ & $0.35 \leq R \leq 20.9$ \\
\hline
HII Region $n_{e}$ ($cm^{-3}$) & $40.7$ & $10 \leq n_{e} \leq 100$ & $79\%$ & $7.62 \leq n_{e} \leq 574$ \\
\hline
HII Region Total Mass ($M_{\odot}$) & $36.6$ & $2 \leq M \leq 300$ & $84\%$ & $0.32 \leq M \leq 8500$ \\
\hline
Cloud Angular Radius ($''$) & $36$ & $20 \leq R \leq 90$ & $96\%$ & $18 \leq R \leq 186$ \\
\hline
Cloud Physical Radius (pc) & $0.64$ & $0.2 \leq R \leq 2$ & $95\%$ & $0.17 \leq R \leq 7.6$ \\
\hline
Cloud 450$\mu$m Integrated Flux (Jy) & $7.27$ & $1 \leq F_{450} \leq 40$ & $83\%$ & $0.314 \leq F_{450} \leq 421$ \\
\hline
Cloud 850$\mu$m Integrated Flux (Jy) & $0.826$ & $0.2 \leq F_{850} \leq 5$ & $80\%$ & $0.059 \leq F_{850} \leq 49$ \\
\hline
Cloud Average Temperature (K) & $15$ & $9 \leq T \leq 30$ & $82\%$ & $7.4 \leq T \leq 243$ \\
\hline
Cloud Total Mass ($M_{\odot}$) & $106.5$ & $10 \leq M \leq 2000$ & $81\%$ & $2.21 \leq M \leq 15700$ \\
\hline
Cloud Average $N_{H_{2}}$ ($10^{21} cm^{-2}$) & $2.36$ & $0.5 \leq N_{H_2} \leq 6$ & $81\%$ & $0.045 \leq N_{H_2} \leq 32$ \\
\hline
Cloud Average $n_{H_{2}}$ ($cm^{-3}$) & $727$ & $150 \leq n_{H_{2}} \leq 2000$ & $80\%$ & $8.31 \leq n_{H_{2}} \leq 15300$ \\
\hline
Core Angular Radius ($''$) & $12$ & $10 \leq R \leq 20$ & $85\%$ & $4 \leq R \leq 54$ \\
\hline
Core Physical Radius (pc) & $0.26$ & $0.1 \leq R \leq 0.5$ & $86\%$ & $0.04 \leq R \leq 2.2$ \\
\hline
Core 450$\mu$m Integrated Flux (Jy) & $1.41$ & $0.2 \leq F_{450} \leq 7$ & $82\%$ & $0.08 \leq F_{450} \leq 307$ \\
\hline
Core 850$\mu$m Integrated Flux (Jy) & $0.196$ & $0.04 \leq F_{850} \leq 0.8$ & $81\%$ & $0.023 \leq F_{850} \leq 41.4$ \\
\hline
Core Average Temperature (K) & $19.4$ & $10 \leq T \leq 40$ & $83\%$ & $6.6 \leq T \leq 219$ \\
\hline
Core Total Mass ($M_{\odot}$) & $22.7$ & $2 \leq M \leq 150$ & $84\%$ & $0.37 \leq M \leq 12300$ \\
\hline
Core Average $N_{H_{2}}$ ($10^{21}cm^{-2}$) & $3.7$ & $1.5 \leq N_{H_{2}} \leq 15$ & $80\%$ & $0.253 \leq N_{H_{2}} \leq 35$ \\
\hline
Core Average $n_{H_{2}}$ ($cm^{-3}$) & $2990$ & $1000 \leq n_{H_{2}} \leq 15000$ & $80\%$ & $247 \leq n_{H_{2}} \leq 197000$ \\
\hline
\label{table:Results}
\end{tabular}
\end{footnotesize}
\end{table}
\\ \\
The measured quantities in Table \ref{table:Results} (angular sizes, fluxes) have uncertainties that are typically $\leq$ 10$\%$. Properties that depend on distance (physical radii, HII region density and mass) are typically uncertain by 20$\%$. Values that depend on the ratio of the submillimeter fluxes (temperature, mass, and densities) are uncertain by 20-30$\%$ for the lower-temperature objects (e.g., for T$\le$15K), but can be uncertain by 100$\%$ on the upper side due to the strong non-linearity of the temperature-flux-ratio relationship at higher temperatures.
\\ \\
None of these quantities were normally distributed: there was always a strong skew to their distributions, usually with several extreme outliers. However, after removing the small number of such outliers, these properties were found to have a relatively small range in which most (typically over 80\%) of the sample formed a central population. For example, the distances from the Sun varied by more than a factor of 10, and the angular radii of the HII regions varied by a factor of 25. Nonetheless, the physical radii of 80\% of these were within a factor of 10 of each other (and within a factor of 4 of the median).
\\ \\
The central population of most calculated properties varied by roughly one order of magnitude. The noticeable exceptions to this rule are the total mass and the average number density of the clouds and cores. This is likely because the calculation of these is very sensitive to the precision of the distance provided, the value of which is prone to larger uncertainties.
\\ \\
The central population of the HII region masses also varied by more than one order of magnitude. This is mostly due to the large span of sizes in the HII region sample considered. Note that the mass of the HII regions is used later in this paper in order to calculate star formation efficiency. However, the gas mass contributed by the HII regions is most commonly far less than that from other gaseous components (i.e the clouds and cores).
\\ \\
From the 53 HII regions analyzed, all had a distance, an angular radius and a physical radius estimate; while 48 (91\%) had an electron number density and total mass estimate. Furthermore, from the total of 176 clouds and 315 cores analyzed, all had an angular and physical radius estimate; all clouds and 275 (87\%) cores had a 450$\mu$m integrated flux measurement; all clouds and 312 (99\%) cores had an 850$\mu$m integrated flux measurement; 136 (77\%) clouds and 206 (65\%) cores had an average temperature estimate; 129 (73\%) clouds and 192 (61\%) cores had a total mass estimate; 133 (76\%) clouds and 203 cores (64\%) had an average $H_{2}$ column density estimate; and finally 134 (76\%) clouds and 203 (64\%) cores had an average $H_{2}$ number density estimate.
\\ \\
A detailed listing of all HII regions and associated clouds and cores along with their individual properties and their accompanying SCUBA-2 450$\mu$m and 850$\mu$m images can be found in the unpublished MSc thesis of Bobotsis (2018). A detailed discussion of all sources of uncertainty can also be found there.
\subsection{Core Number Counts} \label{sec:enhanced_condensation}
As discussed earlier, the systems analyzed in this paper contained a large range of numbers of cores, with only 19\% of the sample containing more than 28 cores. Even the most populated of these systems do not have enough cores for a radial distribution profile to be fitted with any significant degree of certainty.
\\ \\
To further elaborate on this problem, we present the unscaled radial core distribution histograms of the 4 representative cases of Figure \ref{fig:HII_regions} in Figure \ref{fig:HII_region_radial_distributions}. In the first 3 panels we have Sh-2 168 with 19 cores, Sh-2 201 with 4 cores, and Sh-2 242 with 10 cores. It is evident that no meaningful radial distribution profile can be fitted with such small numbers of cores. For Sh-2 305, even though it contains 31 cores, the extended condensation along the HII region boundary is only marginally distinguishable, while the large span of unpopulated bins at intermediate distances from the HII region gives rise to diverging Poisson counting errors, rendering these locations unusable for the radial profile fitting procedure.
\\ \\
To circumvent this issue, we stacked the counts from all the objects in our entire sample, seeking an average radial distribution fit rather than fitting on a case-by-case basis. Furthermore, to achieve a distance-independent result, we scaled the stacked data by the associated HII region radius; this new distance is referred to simply as the \textquotedblleft scaled separation distance" from here onwards. It should be noted that when multiple HII regions lie in the vicinity of a core, the associated HII region was taken to be the one that shared the smallest scaled separation distance with the core of interest.
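\\ \\
In practice, this association and scaling amount to the following computation (a sketch; the array names are placeholders):
\begin{verbatim}
import numpy as np

def scaled_separations(core_xy, hii_xy, hii_radius):
    """For each core, the scaled separation to its associated HII region,
    the association being the region that minimises (angular separation)
    / (HII region radius).  core_xy: (N,2); hii_xy: (M,2); hii_radius:
    (M,) angular radii -- all in the same angular units."""
    core_xy, hii_xy = np.asarray(core_xy), np.asarray(hii_xy)
    sep = np.linalg.norm(core_xy[:, None, :] - hii_xy[None, :, :], axis=-1)
    theta = sep / np.asarray(hii_radius)[None, :]
    return theta.min(axis=1), theta.argmin(axis=1)

# theta, assoc = scaled_separations(cores, regions, radii)
# keep = theta <= 12.0      # the cutoff applied below
\end{verbatim}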
\\ \\
\begin{figure}
\centering
\subfloat[Sh-2 168]{
\fbox{\includegraphics[width=0.48\columnwidth]{S168_core_separation_distance_30_bin_histogram.pdf}}}
\subfloat[Sh-2 201]{
\fbox{\includegraphics[width=0.48\columnwidth]{S201_core_separation_distance_30_bin_histogram.pdf}}}
\subfloat[Sh-2 242]{
\fbox{\includegraphics[width=0.48\columnwidth]{S242_core_separation_distance_30_bin_histogram.pdf}}}
\subfloat[Sh-2 305]{
\fbox{\includegraphics[width=0.48\columnwidth]{S305_core_separation_distance_30_bin_histogram.pdf}}}
\caption{The core unscaled radial distribution profiles of the systems presented in Figure \ref{fig:HII_regions}, with 30 equally spaced bins used for each. The dashed red line indicates the boundary of each HII region.}
\label{fig:HII_region_radial_distributions}
\end{figure}
A cutoff was placed at $\Theta_{SCALED} = 12$ to segregate cores that were less likely to be associated with their nearest HII region. This cutoff corresponds to an angular separation distance anywhere between $6'$ and $150'$ (or $5.2 \ pc$ and $313 \ pc$), with the most likely being $25'$ (or $33 \ pc$), although the specific value strictly depended on the radius of the HII region at hand. These distant cores (18 in total) were not considered in further data analysis. Furthermore, the clouds that ended up with no cores associated to an HII region because of this segregation (9 in total) were also not considered in further data analysis.
\begin{figure}[!htb]
\centering
\setlength{\fboxsep}{0pt}
\setlength{\fboxrule}{1pt}
\fbox{\includegraphics[width=\linewidth]{core_h2_scaled_separation_40_bins.pdf}}
\fbox{\includegraphics[width=\linewidth]{core_h2_scaled_separation_40_bins_noS104S305.pdf}}
\caption{Two histograms of core-to-HII region, center-to-center separation distances, scaled by the radius of each core's associated HII region. The best-fit power law is displayed in red, while the black dashed line indicates the boundary of the HII regions. The top histogram includes the entire core sample, while the bottom histogram excludes the cores from the two \textquotedblleft shell-like" HII regions Sh-2 104 and Sh-2 305.}
\label{fig:scaled_distance_histogram}
\end{figure}
\\ \\
A scaled separation distance histogram was made for the cores within $\Theta_{SCALED} \leq 12$ and is presented in the top plot of Figure \ref{fig:scaled_distance_histogram}. Inspection of this plot reveals that there are many cores far beyond the boundaries of the associated HII regions. The cores comprising these outer bins have been fitted with a power law of the form $N = c_{0} \Theta^{n}$. Different binning options were tested, and all cores in bins with $\Theta_{SCALED} \geq 1$ were used for the fit; the reason for this choice is two-fold. First, this functional form diverges for small $\Theta_{SCALED}$ values, which is unphysical. Second, the HII regions have almost certainly affected the counts of cores at $\Theta_{SCALED} \leq 1$.
\\ \\
Overall, the binning option that best represented the data, giving the lowest uncertainty in the fit parameters, was 40 equally spaced bins. This choice yielded a fit of $N = (31.6 \pm 7.3) \ \Theta_{SCALED}^{(-1.1 \pm 0.2)}$ cores per bin (of width 12/40 in $\Theta_{SCALED}$), consistent with a volume number density power-law index of -3. This result is robust to variations of the binning used and of the minimum $\Theta_{SCALED}$ bin included in the fit: the other tested fits were within the quoted error limits, with power-law indices $n$ that varied between -1.3 and -0.8. Note that bins with only 1 counted core did not contribute at all to the fitting procedure, as their Poisson counting uncertainty is $\pm 100\%$, which when translated to a log-log scale yields a value of $0^{+0.333}_{-\infty}$. Integration of this fit suggests that out of the 315 identified cores, $70 \pm 3$ (22\%) cores should lie between $1 \leq \Theta_{SCALED} \leq 2$, while the actual count was 90 (29\%), significantly more than the expected amount. Fitting only the region $\Theta_{SCALED} \ge 2$ gives a lower curve and the excess number of cores increases substantially, but the best-fit curve in this case, while still a similar power law (i.e., index $\approx -1$), is not as well constrained.
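\\ \\
A sketch of this fitting procedure, with single-count bins dropped and Poisson weights applied, is given below; the implementation details (initial guess, use of curve_fit) are our own choices rather than a description of the original analysis code.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_power_law(theta, n_bins=40, theta_max=12.0, theta_min_fit=1.0):
    """Bin the scaled separations and fit N = c0 * Theta**n to the bins
    with Theta >= theta_min_fit, weighting by the Poisson errors sqrt(N)
    and dropping single-count bins."""
    counts, edges = np.histogram(theta, bins=n_bins, range=(0.0, theta_max))
    centres = 0.5 * (edges[:-1] + edges[1:])
    use = (centres >= theta_min_fit) & (counts >= 2)
    model = lambda t, c0, n: c0 * t**n
    popt, pcov = curve_fit(model, centres[use], counts[use],
                           p0=(30.0, -1.0), sigma=np.sqrt(counts[use]))
    return popt, np.sqrt(np.diag(pcov))   # (c0, n) and their uncertainties
\end{verbatim}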
\\ \\
In order to estimate the level of background cores, a detailed investigation was carried out by Bobotsis (2018), in which a toy model of the form $N = N_{0} \pi \Theta^{2}$ was fitted between $20 \leq \Theta_{SCALED} \leq 25$, as cores identified at or beyond this scaled distance are expected to be background/foreground objects. The best-fit values for $N_{0}$ were $4 \times 10^{-4}$ and $1.5 \times 10^{-3}$, depending on which of the considered bins were assigned a higher weight. These values of $N_{0}$ translate to a total of 2 to 6 cores expected to be part of the foreground/background in the entire core sample, and a probability of 0.2\% to 0.6\% of encountering a background/foreground core within 20$'$ of the center of any HII region in this sample, two results that render the issue of background/foreground contamination insignificant.
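\\ \\
Since the toy model is simply a uniform surface density in scaled units, the background estimate reduces to counting cores in the distant annulus and dividing by its area (a sketch of the Bobotsis 2018 procedure, without the bin weighting):
\begin{verbatim}
import numpy as np

def background_surface_density(theta, t1=20.0, t2=25.0):
    """Uniform background surface density N0 (cores per unit scaled area)
    from the counts in the distant t1 <= Theta <= t2 annulus, where any
    cores are taken to be unrelated fore/background objects."""
    n_far = np.count_nonzero((theta >= t1) & (theta <= t2))
    return n_far / (np.pi * (t2**2 - t1**2))
\end{verbatim}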
\\ \\
Even though we did not fit for $\Theta_{SCALED} \le 1$, we can see that the behavior there is very different from that at $\Theta_{SCALED} \geq 1$, as expected given the different environment. Specifically, a dramatic decrease in the number of cores is seen interior to $\Theta_{SCALED} \le 1$, while an excess is seen near $\Theta_{SCALED} \approx 1$.
\\ \\
To further establish the significance of the observed core excess, the cores contributed from the shell-like HII regions Sh-2 104 and Sh-2 305, which populated mostly $\Theta_{SCALED} \leq 2$, were all removed from the core counts and the resulting histogram is presented at the bottom plot of Figure \ref{fig:scaled_distance_histogram}. A separate power-law fit was made for this histogram for the cores lying outside their associated HII region ($\Theta_{SCALED} \geq 1$). The resulting fit was $N = (25.1 \pm 5.8) \ \Theta_{SCALED}^{(-1.2 \pm 0.1)}$ with power-law indices $n$ that varied between -1.3 and -0.4; a result less robust than when considering the full sample. Integration of this fit made for the cores at $\Theta_{SCALED} \ge 1$ suggests that $53 \pm 3$ (17\%) cores should lie between $1 \leq \Theta_{SCALED} \leq 2$, while the actual count was 75 (31\%), which is greater than the expected amount by a larger and more significant amount than in the complete data set calculation above. It is evident that the observation of a large excess in the number of cores near the boundary of the HII regions ($\Theta_{SCALED} \approx 1$) was unaffected by this experiment, suggesting that the two shell-like HII regions do not introduce any significant bias in the interpretation of the earlier result.
\subsection{Cloud and Core Temperatures}
\begin{figure}
\centering
\subfloat{
\fbox{\includegraphics[width=0.48\columnwidth]{HII_physical_separation_vs_cloud_temp.pdf}}
}
\subfloat{
\fbox{\includegraphics[width=0.48\columnwidth]{HII_physical_separation_vs_core_temp.pdf}}
}
\subfloat{
\fbox{\includegraphics[width=0.48\columnwidth]{OB_physical_separation_vs_cloud_temp.pdf}}
}
\subfloat{
\fbox{\includegraphics[width=0.48\columnwidth]{OB_physical_separation_vs_core_temp.pdf}}
}
\caption{Average temperature against physical separation distance. In order of appearance, the HII Region - Cloud (Top-Left), HII Region - Core (Top-Right), nearest OB star - Cloud (Bottom-Left) and nearest OB star - Core (Bottom-Right) comparisons are displayed.}
\label{fig:heating}
\end{figure}
\begin{figure}
\centering
\subfloat{
\fbox{\includegraphics[width=0.47\columnwidth]{cloud_T_vs_NH2.pdf}}
}
\subfloat{
\fbox{\includegraphics[width=0.47\columnwidth]{core_T_vs_NH2.pdf}}
}
\caption{Average temperature against average $H_{2}$ column density for clouds (Left) and cores (Right).}
\label{fig:internal_heating}
\end{figure}
To investigate any heating effect taking place due to either the HII region or the HII region's parent star(s), the average temperature of each core and cloud is compared against the physical distance between them and their associated HII region, as well as their nearest OB star. The results from these comparisons are presented in Figure \ref{fig:heating}. There is no significant heating effect indicated in any of the relationships plotted in this figure. There is a slight trend, apparent to the eye, for a larger number of high-temperature points at physical separation distances $\leq 10 \ pc$, but the statistical significance of this trend is quite low.
\\ \\
However, a comparison of cloud and core average temperature against average $H_{2}$ column density, shown in Figure \ref{fig:internal_heating}, does show a highly significant dependence for the clouds. The clouds surrounding these dense cores are generally found to be warmer when they have smaller $H_{2}$ column densities. A power-law fit was made for the clouds, with the result $ N_{H_{2}} = (7.14\times 10^{25} \pm 7.70\times 10^{25}) \ T^{(-3.94 \pm 0.43)}$. This is not surprising, since a lower column density corresponds to a lower extinction, which in turn means greater penetration by incoming photons from nearby stars. This observation is consistent with various cooling models for molecular clouds, which suggest a negative power-law dependence on both column and number density (Juvela, Padoan, \& Nordlund 2001). On the other hand, the equivalent comparison for the cores does not show any convincing correlation between average temperature and average $H_{2}$ column density.
\\ \\
We compared the average temperature of all cores to that of the clouds surrounding them and present the result of this comparison in Figure \ref{fig:cloud_core_temp}. Of the 315 cores considered, we were able to measure a temperature for 199 (63\%) of the cores and their surrounding clouds. Of these 199 cores, 147 (74\%) were found to be warmer than their surrounding cloud. No significant correlation between core and cloud average temperature was found.
\begin{figure}[!htb]
\centering
\setlength{\fboxsep}{0pt}
\setlength{\fboxrule}{1pt}
\fbox{\includegraphics[width=\linewidth]{core_temp_vs_cloud_temp.pdf}}
\caption{Average cloud temperature against average embedded core temperature. A black, dashed line is used to display cloud-core temperature equivalence.}
\label{fig:cloud_core_temp}
\end{figure}
\pagebreak
\subsection{Star Formation Efficiency}
The dust emission measured by the SCUBA-2 instrument provides a sensitive measure of the total mass of material around the young stars near the HII regions considered in this sample. Consequently, it is possible to use the masses derived from SCUBA-2 measurements to determine the \textquotedblleft Star Formation Efficiency" (SFE) for these systems. The SCUBA-2 measurements provide a separate measure from that of spectral line observations of molecules, such as CO.
\\ \\
The SFE ($\epsilon$) was determined for HII region systems with a complete, or almost-complete mass budget by simply comparing their gaseous and stellar mass budgets in the following manner:
\begin{equation}
\epsilon = 100 \times \bigg( \frac{M_{STAR}}{M_{STAR} + M_{GAS}} \bigg)
\end{equation}
For the gaseous component, the mass of the ionized gas plus the masses of all clouds and cores are summed together. The ionized gas mass is generally less uncertain than the cloud and core masses because the estimates of the latter are derived from dust mass which can sometimes suffer excessively from noisy 450$\mu$m photometry.
\\ \\
For the stellar component, the mass of the massive OB stars is summed separately from that of the low- and intermediate-mass stars, which is estimated using a Kroupa (2001) \textquotedblleft Initial Mass Function" (IMF). A maximum stellar mass must be set to use the Kroupa IMF. For each object we determined the OB stars associated with the HII region, and we only calculate the SFE for those objects in which such OB stars are located within, or near enough to, the HII region to be the exciting stars. We use the least massive OB star in each HII region as the upper mass limit for the Kroupa IMF determination of the mass of the lower-mass stars.
\\ \\
This assumes that all HII region systems have a complete accounting of their associated massive OB stars, and consequently of the mass contributed by them. Identifying the OB stars associated with each HII region is likely the largest contributor of uncertainty to the stellar mass budget. The calculated SFE values are presented in Table \ref{table:SFE}, with detailed descriptions of each system, including image diameters and 450$\mu$m/850$\mu$m noise-per-pixel values.
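\\ \\
A sketch of this stellar-mass bookkeeping follows. The Kroupa (2001) segmented power law is standard; the normalization choice, matching the expected number of stars above the least massive OB star to the observed OB count, is one plausible reading of the procedure and is flagged here as an assumption.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def kroupa(m):
    """Kroupa (2001) IMF xi(m) ~ m**-alpha, continuous at the breaks;
    the overall normalisation is set separately."""
    if m < 0.08:
        return (m / 0.08)**-0.3
    if m < 0.5:
        return (m / 0.08)**-1.3
    return (0.5 / 0.08)**-1.3 * (m / 0.5)**-2.3

def stellar_mass(ob_masses, m_low=0.08, m_max=120.0):
    """Observed OB masses plus the IMF extrapolation below the least
    massive OB star; the normalisation (expected number of stars above
    that limit = observed OB count) is an assumption."""
    ob_masses = np.asarray(ob_masses, dtype=float)
    m_cut = ob_masses.min()
    n_above, _ = quad(kroupa, m_cut, m_max)
    norm = len(ob_masses) / n_above
    m_below, _ = quad(lambda m: m * kroupa(m), m_low, m_cut)
    return ob_masses.sum() + norm * m_below

def sfe(m_star, m_gas):
    return 100.0 * m_star / (m_star + m_gas)   # Equation (1)

m_star = stellar_mass([10.0, 12.0])  # e.g. two B-type stars (M_sun)
print(m_star, sfe(m_star, 1500.0))   # with 1500 M_sun of gas
\end{verbatim}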
\begin{table}[!htb]
\centering
\caption{Table of HII region systems whose SFE was obtainable from our data. Columns in order of appearance indicate (1) System ID (2) Contained HII regions (3) SFE, and (4) a short description of each system justifying assigned uncertainty. Systems in red are of very high uncertainty.}
\begin{footnotesize}
\setlength\tabcolsep{2pt}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{System} & \textbf{HII Regions} & \textbf{SFE (\%)} & \textbf{Description} \\
\hline
G70 & Sh-2 99,100 & $1.02\pm0.3$ & \specialcell[c]{2 interacting HII regions, primary targets, many HM stars \\ many filaments, $1800''$, $[N_{450}, N_{850}] = [5, 1.3]$ mJy/beam} \\
\hline
G74 & Sh-2 104 & $7.85\pm1.5$ & \specialcell[c]{1 HII region, primary target, several HM stars, few filaments \\ $1800''$, $[N_{450}, N_{850}] = [5.7, 0.3]$ mJy/beam} \\
\hline
G97 & Sh-2 128 & $2.37\pm0.5$ & \specialcell[c]{1 HII region, primary target, 1 HM exciting star, several filaments \\ $1800''$, $[N_{450}, N_{850}] = [5.8, 1.3]$ mJy/beam} \\
\hline
G108 & Sh-2 152 & $7.97\pm1$ & \specialcell[c]{1 HII region, legacy survey, 1 HM exciting star,\\ few filaments, $3600''$, $[N_{450}, N_{850}] = [81.2, 3.4]$ mJy/beam} \\
\hline
G115 & Sh-2 168 & $29.5^{+2}_{-6}$ & \specialcell[c]{1 HII region, primary target, many HM stars in vicinity, \\ lots of filaments, $1800''$, $[N_{450}, N_{850}] = [4.7, 1.4]$ mJy/beam} \\
\hline
G120 & Sh-2 175 & $26.2^{+2}_{-5}$ & \specialcell[c]{1 HII region, primary target, 1 HM exciting star, \\ several filaments, $240''$, $[N_{450}, N_{850}] = [25.6, 1.2]$ mJy/beam} \\
\hline
G173 & \specialcell[c]{Sh-2 231,232\\233,235} & $19.2\pm4$ & \specialcell[c]{2 interacting HII regions, legacy survey, several HM stars in vicinity, \\ several filaments, $3600''$, $[N_{450}, N_{850}] = [138,3.4]$ mJy/beam} \\
\hline
\color{red} G173B & \color{red} \specialcell[c]{Sh-2 234,237} & \color{red} $72.6^{+2}_{-30}$ & \color{red} \specialcell[c]{2 non-interacting HII regions, legacy survey, lots of HM stars without \\ certain association, few filaments, $3600''$, $[N_{450}, N_{850}] = [187, 3.3]$ mJy/beam} \\
\hline
G182 & Sh-2 242 & $9.10\pm2$ & \specialcell[c]{1 HII region, primary target, 1 HM exciting star, \\ several filaments, $1800''$, $[N_{450}, N_{850}] = [8.1, 1.7]$ mJy/beam} \\
\hline
G188 & Sh-2 247 & $8.09\pm3$ & \specialcell[c]{1 HII region, legacy survey, 1 HM exciting star, \\ several filaments, $3600''$, $[N_{450}, N_{850}] = [16.5,0.8]$ mJy/beam} \\
\hline
G192 & \specialcell[c]{Sh-2 254,255,\\256,257,258} & $6.65^{+4}_{-1}$ & \specialcell[c]{5 interacting HII regions, legacy survey, several HM stars, \\ several filaments, $3600''$, $[N_{450}, N_{850}] = [11.4, 3.0]$ mJy/beam} \\
\hline
G192B & Sh-2 255B,259 & $3.41\pm1$ & \specialcell[c]{2 non-interacting HII regions, legacy survey, 1 HM exciting star, \\ few filaments, $3600''$, $[N_{450}, N_{850}] = [11.4,3.0]$ mJy/beam} \\
\hline
G210 & Sh-2 283 & $29.7^{+3}_{-8}$ & \specialcell[c]{1 HII region, primary target, few HM stars with uncertain association, \\ several filaments, $240''$, $[N_{450}, N_{850}] = [95.5, 4.5]$ mJy/beam} \\
\hline
\color{red} G219 & \color{red} Sh-2 288 & \color{red} $69.1^{+5}_{-35}$ & \color{red} \specialcell[c]{1 HII region, legacy survey, 1 HM star, unknown filamentary amount, \\ $7200''$, $[N_{450}, N_{850}] = [106, 4.6]$ mJy/beam} \\
\hline
G221 & BFS 64 & $37.5^{+5}_{-15}$ & \specialcell[c]{1 HII region, primary target, few HM stars with uncertain association, \\ few filaments, $240''$, $[N_{450}, N_{850}] = [141,4.5]$ mJy/beam} \\
\hline
G233 & Sh-2 305 & $3.28\pm0.3$ & \specialcell[c]{1 HII region, primary target, few HM stars, several filaments \\ $1800''$, $[N_{450}, N_{850}] = [5.4, 4.2]$ mJy/beam} \\
\hline
\label{table:SFE}
\end{tabular}
\end{footnotesize}
\end{table}
\\ \\
Of the 31 HII region systems in our sample, only 16 (52\%) had a sufficiently complete gas and star mass budget for an SFE estimate to be made. Of these, a majority of 9 systems displayed SFE values below 10\%, while 5 systems were large-value outliers, and another 2 systems had very incomplete gas mass budgets, leading to only an upper-limit estimate of their SFE (indicated in red in Table \ref{table:SFE}). The first of these 2 systems was G173B, which had an SFE value of 72.6\%. This system is comprised of 2 HII regions (Sh-2 234 and Sh-2 237), 4 submillimeter-emitting clumps, of which only 1 had a determined total mass, and an unusually large list of 15 potentially associated OB stars, several of which may not be associated with the 2 HII regions of the system. The second of the two systems was G219, which had an SFE value of 69.1\%. This system is comprised of 1 HII region (Sh-2 288), 1 submillimeter-emitting clump, and 1 associated OB star. The mass of the single clump is expected to be severely underestimated, mostly due to poor atmospheric conditions at the time of observation, but also due to the short integration time per pixel used in the scan itself, which would most certainly render any low-mass clumps in the system practically undetectable.
\section{Discussion}
In our analysis, even the 6 most populated HII regions did not have enough cores to produce a statistically significant radial profile fit. However, the entire sample of cores, brought together in the scaled fashion adopted in this work, clearly shows an extended core population. Attempts to analyze this population using un-binned statistics have so far been unsuccessful, in part due to the presence of a (small) background of cores affecting the large scales, and a divergence in the expected number of cores at the smallest scales (e.g., one must also account for the sizes of the cores, especially at the center of the radial distribution).
\\ \\
The results of this analysis of the distribution are: (1) cores well beyond the HII region are distributed around it such that their number is consistent with a spherical population; (2) there is an excess number of dense cores just outside the HII regions, even when the obvious shell-like objects are removed from the sample; (3) the number of dense cores at small distances is consistent with no dense cores existing inside the HII region; (4) the amount of background core contamination is negligible.
\\ \\
Regarding our first result, our number counts are given as the number of cores within equally spaced circular rings. At larger distances from the HII region, far beyond the ionized gas boundary, the number of cores decreases approximately as $r^{-1}$; therefore the surface density of the cores decreases as $r^{-2}$. We consider two opposite extremes for the large-scale distribution of these dense cores: (i) a spherical distribution versus (ii) a filamentary structure.
\\ \\
It is easy to show that filamentary structures of common length, with dense cores uniformly distributed along each filament, are a poor fit to the observed radial distribution found here. If filaments have a common (scaled) length $L_{max}$, then the number of dense cores in each ring, $N(\Theta_{scaled})$, would follow a functional form proportional to $(L_{max}^{2} - \Theta_{scaled}^{2})^{1/2}$, which is very different from the observed number counts, even having the opposite curvature in the plot. By invoking more complex filamentary structures, such as a distribution of filament lengths and/or a non-uniform distribution of the dense cores along the filaments, it is possible to fit these number counts; however, with this level of freedom in choosing parameters one could fit almost any number count distribution.
\\ \\
Alternatively if the dense cores are in a spherical distribution at large distances from the HII region, then the determination of the distribution function is simple: the volume density of the dense cores follows a $r^{-3}$ distribution. There would likely be significant dynamical differences between these two extremes (i.e filament versus spherical distribution). This suggests that a kinematic investigation of these cores (e.g. radial velocity observations) would be very useful in making progress on this question. Furthermore, the spherical distribution model would appear to contradict models in which massive stars are formed near the edges of GMCs, where an external trigger has been applied to start the star formation process. Kirk et al. (2016) have examined clusters of dense cores in Orion B and found that the most massive of these cores is near the center of each \textquotedblleft dense core cluster". They suggest that mass segregation has already occurred before the first star forms, and that the most massive star will then form in the center of this cluster. This is consistent with our result, where the massive OB star forming an HII region is at the center of a cluster of dense cores.
\\ \\
Our second result that there are additional cores in a shell around the outer edge of the HII region, seems to be true in general, not only for the few obvious, generally well-studied, shell-like HII regions. This may suggest that collect and collapse models are useful to describe most HII regions. Perhaps the lack of obvious shells around many HII regions is simply the result of lower densities of surrounding material.
\\ \\
The idea that these HII regions have shells of dense cores around and immediately outside the ionized region does not contradict the selection criterion that these HII regions are all very obvious in visible light. In theory, these shells are expected to be far from uniform in density, with strong instabilities producing denser regions within the shells. This is observed: many, perhaps most, visible HII regions show patches at a few positions where the emission from recombination is completely absorbed. However, when averaged over much of their surface area, the extinction to these HII regions is typically only a few magnitudes, as seen in many studies, including measurements of the Balmer decrement over large apertures (Fich and Silkey, 1991). Even deeply embedded, presumably younger, HII regions show the neutral material in very clumpy structures within spherical shells in 3D (Topchieva et al. 2018, 2019).
\\ \\
This is further complicated by the observation that many HII regions that are seen in the visible are on the near side of large molecular clouds and emerging towards the observer, with much less neutral material on the near side than on the far side. However, even these HII regions will still show strong enhancements in the numbers of dense cores along the edges of the visible region when seen in projection on the sky, with smaller numbers, from the far side, seen at smaller projected distances.
\\ \\
Our third result from the core radial distribution, that the number of dense cores within the HII region boundary is small, is not a great surprise. A significant number of cores will be seen in projection through the HII regions, both from the large-scale distribution and from a shell around the boundary of the HII regions. The number seen from the boundary shell, through or in front of the ionized gas, depends on the thickness of the shell, and the number in this shell is always less than the number seen around the projected edges of the HII region. There is no need, to match the number counts, for any dense cores to exist within the ionized gas region. This does not mean that such objects (dense neutral cores) will never be seen within HII regions, but the numbers strongly suggest that they should be rare. Any dense cores from the initial cluster that find themselves within the HII region will eventually be destroyed by the action of the ionizing star. In our sample of older, mature HII regions one would expect this core destruction process to be quite advanced.
\\ \\
Our result on the temperatures of the cores and their surrounding clouds was unexpected. Structures at the edge of the HII region, perhaps impacted by its expanding shock, or closer to the very luminous OB stars in or near the HII region, might reasonably be expected to be at higher temperatures than more distant clouds and cores. However, this is not seen in this work, an observation consistent with recent work by Rumble et al. (2014), in which the main B-type exciting star MCW 297 was found to have no noticeable heating effect on any but its nearest clump, from which it was separated by only 0.05 pc.
\\ \\
We also searched for correlations of average temperature with other properties, such as projected stellar heating flux (i.e., the luminosity of nearby OB stars divided by the projected physical distance squared). The only correlation we found was that the clouds were hotter when they had lower column densities. Taken together, these results are consistent with cloud heating being dominated by the diffuse interstellar radiation field and not necessarily by any one nearby star. One caveat to this result is that our sample does not include any HII regions that are heated by the most luminous, earlier-type O stars.
\\ \\
The cloud $T$ and $N_{H_{2}}$ correlation also suggests that a \textquotedblleft shielding" effect is in place, in which the outer cloud layer, by virtue of its extinction, allows progressively less external radiation to reach the inner core condensations, which in turn prevents the interior cores from being heated to any significant extent by external radiation.
\\ \\
Our last temperature result was that most cores ($\approx 74\%$) were hotter than the clouds that surround them. There are many ongoing searches for cold cores as the progenitors of star-forming events; however, it appears that such cold cores are relatively rare near HII regions. This suggests that the presence of the HII region has caused most of the nearby cores to begin collapse, becoming warmer in the process. An alternative is that the cores are all on similar schedules, beginning to form into stars at the same time, but the most massive core has evolved faster, producing an OB star and consequently an HII region, which attracts our attention to that part of the sky.
\\ \\
Our final result involves the calculation of the \textquotedblleft Star Formation Efficiency" (SFE). This required a determination of the mass in stars compared to the total mass of the system (i.e stars + gas). The measurements described here are amongst the few to use the dust emission to measure the gas mass. The dust traces both the atomic and molecular material, an advantage over molecular spectral line studies. However, on larger scales the submillimeter emission is faint and larger uncertainties arise as a result. Identifying all of the stellar mass is also problematic, as the parent stars for some HII regions are not seen, while for some others several candidates exist. Because of these difficulties we were only able to reliably estimate the SFE for a small fraction of our HII region systems.
\\ \\
About half (9) of these HII region systems had very reliable mass budgets and collectively suggested SFE values lower than 10\%. On the other hand a very small portion (2) of these systems had unreasonably high SFE estimates, likely due to the much lower reliability of their gas mass budget as compared to the rest of the sample. Nonetheless, a considerable fraction (5) of these HII region systems with modestly reliable mass budgets suggested larger-than-typical SFE values (19.2, 26.2, 29.5, 29.7 and 37.5 $\%$). It may be important that for these 5 systems most of the identified ionizing stars are B-type, with only 3 consisting of earlier type, more luminous stars (O9V or O9.5V). It should be noted that the lowest SFE estimates were generally made in systems with earlier type stars (O5V, O7V).
\\ \\
In order to validate the use of a Kroupa IMF in the determination of SFE values in the vicinity of ionized gas, we focus on the Sh-2 254 complex, which is the only system investigated sufficiently by other authors to allow for direct comparisons to be made. In our present work, we establish the mass of the ionized gas in the complex to be $\approx$ 127 $M_{\odot}$ using VLA 1.46 GHz data (Bobotsis, 2018). In addition, a lower limit of 3600 $M_{\odot}$ is placed on the total $H_2$ mass by summing the $H_2$ mass present within our identified SCUBA-2 clumps (Bobotsis, 2018). These two values together provide a lower limit of 3727 $M_{\odot}$ to the total gas mass of the complex. A total star mass of $\approx$ 265 $M_{\odot}$ is obtained by extrapolating the Kroupa IMF backwards from the least massive star associated with the region, assuming sample completeness for all mass ranges above that. This results in an upper limit of 6.6\% to the average SFE of the complex.
\\ \\
In Chavarria et al. (2008) the total gas mass of the complex was found to be $\approx$ 6385 $M_{\odot}$ using $^{13}CO$ and $^{12}CO$ data. The total star mass was found by converting total star counts to mass using the median YSO mass (0.5 $M_{\odot}$). The obtained SFE values vary between 4\% and 54\% across different components of the complex, with an uncertainty of up to a factor of 2. In order to make these SFE estimates comparable to our result, we determine the mass-weighted average of the various SFE values from Chavarria et al. (2008) to be $\approx$ 8.2\%. Our value of 6.6\% is close to this value, and the difference between the two is considerably smaller than the uncertainty estimated for either value.
\\ \\
Furthermore, in Lim et al. (2015) an extensive photometric study of the Sh-2 254 complex shows evidence for an IMF of slope -1.6 for the mass range $10 \le M/M_{\odot} \le 100$, slightly steeper than Kroupa's -1.3 slope for the same mass range. In addition, a lower limit of 169 $M_{\odot}$ is placed on the total star mass contained in the complex, which is consistent with our 265 $M_{\odot}$ estimate.
\\ \\
Finally, an extended Chandra X-ray survey of the Sh-2 254 complex (Mucciarelli, Preibisch, \& Zinnecker 2011) shows a population of young stars very similar to that expected from extrapolating Kroupa's IMF from the lower mass limit of the completely determined star sample down to star masses of 0.5 $M_{\odot}$. In summary, the use of a Kroupa IMF to determine the total stellar mass gives a result consistent with those found by others working in this field using somewhat different assumptions. However, caution should be exercised regarding its use, as many HII region systems, such as the Sh-2 254 complex, tend to favor low-mass star production (Lim et al. 2015).
\\ \\
Regarding the uncertainties in the SFE values presented in Table \ref{table:SFE}, the strong, non-linear dependence of our gas mass calculations on the SCUBA-2 450 and 850$\mu$m fluxes is expected to dominate. Rigorous calculation of this uncertainty is a complicated statistical problem, due to the involvement of non-Gaussian variables, as discussed in Bobotsis (2018). However, we do have a good understanding of this uncertainty level. SFE values prone to high levels of uncertainty arise from systems that:
\begin{itemize}
\item{Were detected in one of the JCMT Legacy Surveys}
\item{Have an incomplete accounting of massive OB stars}
\item{Contain multiple HII regions}
\item{Contain a lot of filamentary gas structure}
\end{itemize}
Systems that are part of a JCMT Legacy Survey are prone to much higher gas mass uncertainties than those specifically targeted as part of a project, simply due to lower integration times and consequently much higher noise-per-pixel levels across the image, a natural consequence of being part of a wide-field survey.
\\ \\
An incomplete budget of massive OB stars influences star mass estimates two-fold. Clearly, an unaccounted-for massive star would yield a significant change in the total star mass budget. In addition, missing such information can interfere with the choice of the upper mass limit used in the IMF for determining the mass of the intermediate- and low-mass stars in the system.
\\ \\
In addition, the interaction between multiple HII region fronts in a particular system clearly influences clump formation and, consequently, star formation in their immediate vicinity. Contrary to isolated HII region fronts, these interactions tend to drive SFEs up through enhancement of the collect-and-collapse mechanism. For this reason, such systems should be considered as a category of their own, and their SFEs should not be expected to trace those of the isolated cases.
\\ \\
Finally, cases where filamentary gas structure is more prevalent than clump structure are also prone to high systematic uncertainties, since this paper identifies only the molecular gas mass in dense clumps near HII regions. One might expect the cumulative clump mass to significantly outweigh the filamentary gas mass, and this appears to hold true for our sample in general.
\\ \\
Efficiencies of a few percent are typical for regions where high-mass star formation is taking place. However, a number of studies have suggested higher SFE values for these types of systems. A notable example is the case of Sh-2 104 and RCW 79, for which SFE estimates were 40\% and 45\%, respectively (Zavagno et al. 2005), while our SFE estimate for Sh-2 104 is only 7.9\%. The large discrepancy is largely due to differences in the mass incorporated in the gas component in each work. In Zavagno et al. (2005) only the mass of the most massive clump is incorporated, while in this work the masses of all clumps near the targeted HII regions, as well as the mass of their ionized gas, are all incorporated in the gas component, leading to a substantially lower SFE estimate.
\\ \\
Overall, then, we have shown the utility of a larger sample in determining the effects of HII regions on nearby condensations. It would be useful in the future to investigate HII regions with parent OB stars of earlier types (i.e., earlier than O9) in search of a heating effect on such condensations. Furthermore, observations over a larger angular scale around each HII region target would allow a closer investigation of the extended core population found in this paper. Finally, if the cluster of dense cores is gravitationally bound, there should be a signature of this in the velocity dispersion profile of the cluster. Specifically, a decrease should be seen far from the center, while the velocity field of the cores inside the shell component will also be different.
\\ \\
On the systematics side, further treatment of contamination sources, including line contamination from molecules such as $CH_{3}OH$ and $SO_{2}$, as well as radio-continuum contamination from the HII regions themselves, would provide even more reliable photometry. Finally, incorporation of multiple submillimeter wavelengths would allow an individual $\beta$ fit tailored to each source, as well as the use of the band pair with the lowest uncertainty when forming flux ratios to determine temperature and subsequent derivative properties, lowering the overall uncertainty of the obtained properties.
\\ \\
Nonetheless, we have measured the properties of the material around a large sample of HII regions, all of them at a late stage in their evolution. Our sample was moderately uniform in most measured properties. However our sample does not contain any of the very large and luminous HII regions that are traditionally used as star formation tracers on galactic scales. The clouds of interstellar material surrounding the HII regions in our sample are not the Giant Molecular Clouds which receive much attention in such studies. The typical masses in this sample are only $\approx 1\%$ of a typical GMC mass. However our sample is probably representative of most HII regions. It remains to be seen how these contribute to overall star formation budgets as compared to the contributions of the small number of very large HII regions associated with the largest GMCs.
\section{Acknowledgments}
The James Clerk Maxwell Telescope has historically been operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council of Canada and the Netherlands Organization for Scientific Research. Additional funds for the construction of SCUBA-2 were provided by the Canada Foundation for Innovation.
\pagebreak
\section{Introduction}
How giant planets form and evolve is one of the biggest challenges of modern astronomy and remains a subject of heated debate. This major goal is directly connected to the ultimate search for life over the horizon of 2030 to 2040, although several astrophysical (formation, evolution, dynamics, structure, and atmosphere), biological (bio-markers), and technical (new technologies developed for the next generation of instrumentation) steps must be taken along the way. Understanding how giant planets are formed and structured, and how they evolve and interact, is critical, as they completely shape planetary system architectures and therefore the possibility of forming telluric planets capable of hosting life. More than two decades ago, the only planets we knew were those of our Solar System. With the wealth of exoplanet discoveries since that of 51~Peg \citep{Mayor1995} and the diversity of systems found (hot Jupiters, irradiated and evaporating planets, planets misaligned with the stellar spin, planets in binaries, telluric planets in habitable zones, Mars-sized planets...), theories of planetary formation have drastically evolved to digest these observational constraints. However, we are still missing the full picture, and some key fundamental questions remain unanswered, for example: i/ the physical processes at play in passing the km-size barrier to form planetary cores; ii/ the physics of accretion in forming planetary atmospheres; iii/ the formation mechanisms that could explain the existence of giant planets at wide orbits; iv/ the physical properties of young Jupiters; v/ the impact of planet-planet and planet-disk interactions on the final planetary system architecture; and vi/ the influence of the stellar mass and stellar environment on planetary formation processes. Neither core accretion plus gas capture (CA; \citealt{Pollack1996}) nor disk fragmentation driven by gravitational instabilities (GI; \citealt{Cameron1978}) can globally explain all current observables from planet-hunting techniques. Alternative mechanisms have therefore been proposed, such as pebble accretion to enable core accretion to operate at wide orbits \citep{Lambrechts2012}, inward/outward migration or planet-planet scattering \citep{Crida2009,Bromley2014}, or simply the possibility that several mechanisms form giant planets \citep{Boley2009}. In this context, each individual direct-imaging discovery of a giant planet and young planetary system is rich in terms of scientific exploitation and characterization, as these systems offer the possibility of i/ directly probing the presence of planets in their birth environments, ii/ enabling the orbital, physical, and spectral characterization of young massive Jupiters, and iii/ characterizing the population of giant planets at all separations in synergy with complementary techniques, such as astrometry (\textit{GAIA}) and radial velocity adapted to filter stellar activity.\\
Dusty debris disks around pre- and main-sequence stars are possible signposts for the existence of planetesimals and exoplanets \citep{Matthews2014}. Numerous T Tauri and Herbig stars indicate that the characteristic timescale for the dispersal of a surrounding dusty, gaseous disk is a few million years \citep{Kennedy2008b}. Giant planet formation is therefore expected to play a key role in the evolution of the disk. This is indirectly confirmed by extant submillimeter and near-infrared images of cool dusty debris disks around main-sequence stars, which usually show substantial spatial structure (e.g., $\epsilon$ Eri, Vega, Fomalhaut, $\beta$ Pic; see \citealt{Schneider2014}). It is striking to note that a majority of recent discoveries of imaged giant planets have been obtained around young, dusty, early-type stars. This includes the breakthrough discoveries of Fomalhaut b (3~$M_{\rm{Jup}}$ at 110~au, A4V star; \citealt{Kalas2008}), HR\,8799 bcde (5-10~$M_{\rm{Jup}}$ at 10-64~au, F0V star; \citealt{Marois2010}), $\beta$\,Pictoris\,b (8-13~$M_{\rm{Jup}}$ at 9~au, A5V star; \citealt{Lagrange2010}), HD\,95086\,b (3-5~$M_{\rm{Jup}}$ at 56~au, A8V star; \citealt{Rameau2013}), and more recently 51\,Eri\,b (2~$M_{\rm{Jup}}$ at 14~au, F0V star; \citealt{Macintosh2015}). The presence of dust and of spatial substructure (rings, gaps, warps, and other asymmetries) are possible indirect indicators of the presence of giant planets \citep{Mouillet1997,Dipierro2015,Pinte2020}. Direct imaging is here a unique and viable technique to complete our view of planetary system characteristics at wide orbits ($\ge5$~au). This technique enables us to directly study the planet-disk connection to constrain the planet's and disk's physical properties, evolution, and formation. In the case of $\beta$\,Pictoris, \cite{Lagrange2012} confirmed that $\beta$\,Pic\,b was actually responsible for the disk inner warp geometry, perturbing the planetesimal field and shaping the warp up to 40-60~au. The stars HD\,95086 and HR\,8799 share a common two-component architecture consisting of a warm inner belt ($\le5~$au) and a cold outer disk ($100-200~$au) (see \citealt{Su2015}). \cite{Kennedy2014} actually showed that the spectral energy distributions of both systems are consistent with two-temperature components compatible with dust emission arising from two distinct radial locations. Such an architecture would be analogous to the outer Solar System's configuration of asteroid and Kuiper belts separated by giant planets. Therefore, following the strategy of our NaCo DUSTIES (Dusty, yoUng, and early-type STar Imaging for ExoplanetS) survey \citep{Rameau2013} that led to the discovery of HD\,95086\,b, we initiated a search for giant planets with SPHERE at VLT around a newly identified sample of young early-type stars with indications, in some cases, of a multi-belt architecture, to maximize the chances of discoveries. The sample, the observations, and the data reduction and analysis are presented in Sections \ref{sec:target_prop}, \ref{sec:observations} and \ref{sec:data_reduc_analysis}, respectively. The results are reported in Section \ref{sec:cc_detection} and discussed in Section \ref{sec:detection_limits}.
\section{Target properties}
\label{sec:target_prop}
The target selection of the survey was obtained
from a large sample of young, nearby early-type stars according to the following criteria: declination ($\delta \leq 25^{\circ}$), age ($\leq 100$ Myr), distance ($\leq 100$\,pc), and R-band brightness ($\leq 9.5$) to favor good adaptive optics performance. Age selection criteria were applied based on different youth diagnostics (kinematics, isochrones, Lithium, H$_\alpha$ emission, X-ray activity, stellar rotation, and chromospheric activity). We also used, as selection criteria, the presence of a significant $60-70\,\mu$m excess from the \textit{IRAS} and \textit{Spitzer} missions in the spectral energy distributions \citep{Zuckerman1995,Zuckerman2001,Rhee2007,Zuckerman2004,Zuckerman2004b,Zuckerman2011,Zuckerman2013,David2015,Moor2016} or the existence of a multi-belt component analysis from \cite{Kennedy2014}. A final total of 30 late-B-, A-, and early-F-type young stars, observable from the southern hemisphere, were then kept, 22 of which were observed between October 2016 and August 2019. Their stellar properties are reported in Table\,\ref{table_p99}. The age, distance, spectral type, and IR excess properties are shown in Figure~\ref{targets_prop}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./pictures/fig_targets_properties2.pdf}
\caption{Summary of the target properties: age (with error bars), distance, spectral type, and infrared excess.}
\label{targets_prop}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.3\textwidth]{./pictures/hist_airmass.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/hist_seeing.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/hist_parang.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/hist_strehl.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/hist_R0.pdf}
\caption{Distribution of the SAXO real-time parameters, averaged over each observing sequence, for the complete survey: airmass, DIMM seeing ($\omega$), parallactic angle variation ($\Delta \theta$), the Strehl ratio at 1.6\,$\mu$m, and the Fried parameter of the atmosphere ($r_0$). }
\label{sparta}
\end{figure*}
\defcitealias{David2015}{D15}
\defcitealias{Zuckerman1995}{Z95}
\defcitealias{Zuckerman2001}{Z01}
\defcitealias{Zuckerman2004}{Z04}
\defcitealias{Zuckerman2004b}{Z04b}
\defcitealias{Zuckerman2011}{Z11}
\defcitealias{Zuckerman2012}{Z12}
\defcitealias{Zuckerman2013}{Z13}
\defcitealias{Rhee2007}{R07}
\defcitealias{Kennedy2014}{K14}
\defcitealias{Moor2016}{M16}
\defcitealias{Bell2015}{B15}
\defcitealias{Nielsen2019}{N19}
\defcitealias{Galicher2016}{G16}
\defcitealias{Meshkat2017}{M17}
\defcitealias{Vigan2017}{V17}
\begin{table*}
\caption{Description and properties of the sample. The Exc. column indicates the presence of an IR excess. The symbol "/" means no IR excess, and "Y" means with IR excess. References: \citepalias{Bell2015} \cite{Bell2015}; \citepalias{David2015} \cite{David2015}; \citepalias{Galicher2016} \cite{Galicher2016}; \citepalias{Kennedy2014} \cite{Kennedy2014}; \citepalias{Moor2016} \cite{Moor2016}; \citepalias{Meshkat2017} \cite{Meshkat2017}; \citepalias{Rhee2007} \cite{Rhee2007}; \citepalias{Vigan2017} \cite{Vigan2017}; \citepalias{Zuckerman1995} \cite{Zuckerman1995}; \citepalias{Zuckerman2001} \cite{Zuckerman2001}; \citepalias{Zuckerman2004} \cite{Zuckerman2004}; \citepalias{Zuckerman2004b} \cite{Zuckerman2004b}; \citepalias{Zuckerman2011} \cite{Zuckerman2011}; \citepalias{Zuckerman2012} \cite{Zuckerman2012}; \citepalias{Zuckerman2013} \cite{Zuckerman2013}. }
\centering
\begin{tabular}{lllllllllll}
\hline
Target & RA(2000) & DEC(2000) & $\mu_{\alpha}$ & $\mu_{\delta}\cos(\delta)$ & H & SpT & Dist. & Age & Exc. & References \\
&&& (mas/yr) & (mas/yr) & (mag) && (pc) & (Myr) &&\\
\hline
\noalign{\vskip 1mm}
HIP3277 & 00 41 46.3 & -56 30 04.73 & 90.79 & 57.19 & 5.6 & A3V & 67 & $93_{-76}^{+283}$ & / & \citetalias{David2015} \\
\noalign{\vskip 1mm}
HIP7345 & 01 34 37.7 & -15 40 34.89 &94.84 & -3.14 & 5.5 & A1V & 61 & $35_{-5}^{+5}$ & Y & \citetalias{Zuckerman1995,Zuckerman2012,Galicher2016} \\
\noalign{\vskip 1mm}
HIP7805 & 01 40 24.0 & -60 59 53.62 & 61.94 & -10.50 & 6.7 & F2V & 66 & $30_{-15}^{+15}$ & Y & \citetalias{Zuckerman2001,Zuckerman2004,Meshkat2017} \\
\noalign{\vskip 1mm}
HIP8832 & 01 53 31.8 & +19 17 37.87 & 79.20 & -97.63 & 2.8 & A0 & 50 & $87_{-71}^{+195}$ & / & \citetalias{David2015} \\
\noalign{\vskip 1mm}
HIP9902 & 02 07 26.1 & -59 40 45.942 & 91.11 & -18.29 & 6.2 & F7V & 44 & $45_{-4}^{+4}$ & Y & \citetalias{Kennedy2014,Bell2015} \\
\noalign{\vskip 1mm}
HIP13141 & 02 49 01.4 & -62 48 23.47 & 94.02 & 29.10 & 5.2 & A2V & 50 & $100_{-70}^{+200}$ & Y & \citetalias{Rhee2007,Galicher2016} \\
\noalign{\vskip 1mm}
HIP16095 & 03 27 18.6 & +12 44 07.03 & 10.36 & -7.56 & 6.3 & A0V & 88 & $194_{-138}^{+171}$ & / & \citetalias{Zuckerman2013,David2015} \\
\noalign{\vskip 1mm}
HIP18437 & 03 56 29.3 & -38 57 43.80 & 29.46 & 0.10 & 6.8 & A0V & 100 & $187_{-177}^{+150}$ & Y & \citetalias{Rhee2007,Meshkat2017} \\
\noalign{\vskip 1mm}
HIP19990 & 04 17 15.6 & +20 34 42.93 & -39.41 & -60.79 & 4.6 & A3 & 29 & $70_{-40}^{+30}$ & / & \citetalias{Zuckerman2013,Galicher2016} \\
\noalign{\vskip 1mm}
HIP22192 & 04 46 25.7 & -28 05 14.8 & -3.82 & 17.58 & 5.7 & A3V & 56 & $12_{-5}^{+5}$ & / & \citetalias{Zuckerman2013,Galicher2016} \\
\noalign{\vskip 1mm}
HIP22226 & 04 46 49.5 & -26 18 08.84 & 34.52 & -4.13 & 6.9 & F3V & 78 & $30_{-20}^{+20}$ & Y & \citetalias{Rhee2007,Galicher2016} \\
\noalign{\vskip 1mm}
HIP22845 & 04 54 53.7 & +10 09 02.99 & 41.49 & -128.73 & 4.5 & A3V & 34 & $100_{-70}^{+200}$ & Y & \citetalias{Zuckerman2004b,Galicher2016} \\
\noalign{\vskip 1mm}
HIP26309 & 05 36 10.2 & -28 42 28.847 & 25.80 & -3.04 & 5.9 & A2V & 56 & $30_{-10}^{+20}$ & / & \citetalias{Zuckerman2011,Galicher2016} \\
\noalign{\vskip 1mm}
HIP26990 & 05 43 35.8 & -39 55 24.7145 & 25.82 & 15.08 & 6.8 & G0V & 55 & $42_{-7}^{+8}$ & Y & \citetalias{Moor2016,Vigan2017} \\
\noalign{\vskip 1mm}
HIP34276 & 07 06 20.9 & -43 36 38.69 & 5.80 & 13.20 & 6.5 & A0V & 102 & $185_{-170}^{+120}$ & Y & \citetalias{Rhee2007,Meshkat2017} \\
\noalign{\vskip 1mm}
HIP41307 & 08 25 39.6 & -03 54 23.11 & -66.43 & -23.41 & 3.9 & A0V & 37 & $203_{-100}^{+100}$ & Y & \citetalias{Rhee2007,Meshkat2017} \\
\noalign{\vskip 1mm}
HIP93542 & 19 03 06.8 & -42 05 42.38 & 56.41 & -46.43 & 5.0 & B9V & 59 & $76_{-62}^{+148}$ & Y & \citetalias{Rhee2007,David2015} \\
\noalign{\vskip 1mm}
HIP95619 & 19 26 56.4 & -29 44 35.617 & 18.63 & -50.13 & 5.7 & B8.5 & 70 & $86_{-69}^{+138}$ & Y & \citetalias{David2015} \\
\noalign{\vskip 1mm}
HIP97749 & 19 51 50.6 & -39 52 27.7 & 18.42 & -11.27 & 5.4 & A & 100 & $82_{-67}^{+177}$ & / & \citetalias{David2015} \\
\noalign{\vskip 1mm}
HIP101800 & 20 37 49.1 & +11 22 39.63 & 39.15 & -8.26 & 5.4 & A1V & 57 & $225_{-43}^{+311}$ & Y & \citetalias{Rhee2007,David2015} \\
\noalign{\vskip 1mm}
HIP101958 & 20 39 38.2 & +15 54 43.46 & 53.82 & 8.47 & 3.9 & B9V & 77 & $60_{-49}^{+164}$ & / & \citetalias{David2015} \\
\noalign{\vskip 1mm}
HIP117452 & 23 48 55.5 & -28 07 48.97 & 100.80 & -105.34 & 4.6 & A0V & 42 & $70_{-40}^{+30}$ & Y & \citetalias{Zuckerman2011,David2015} \\
\noalign{\vskip 1mm}
\hline
\end{tabular}
\label{table_p99}
\end{table*}
\section{Observations}
\label{sec:observations}
The SPHERE planet-finder instrument installed at the VLT \citep{Beuzit2019} is a highly specialized instrument, dedicated to high-contrast imaging and spectroscopy of young giant exoplanets. It is based on the SAXO extreme adaptive optics (XAO) system \citep{Fusco2006,Sauvage2010,Petit2014}, which controls a deformable mirror with $41\times41$ actuators, and four control loops (fast visible tip-tilt, high-orders, near-infrared differential tip-tilt, and pupil stabilization). The common path optics employ several stress-polished toric mirrors \citep{Hugot2012} to transport the beam to the coronagraphs and scientific instruments. Several types of coronagraphic devices for stellar diffraction suppression are provided, including apodized pupil Lyot coronagraphs \citep{Soummer2005} and achromatic four-quadrant phase masks \citep{Boccaletti2008}. The instrument has three science subsystems: the infrared dual-band imager and spectrograph (IRDIS, \citealt{Dohlen2008}), an integral field spectrograph (IFS; \citealt{Claudi2008}), and the Zimpol rapid-switching imaging polarimeter (ZIMPOL; \citealt{Thalmann2008}).\\
The sample of young early-type stars was observed using the IRDIFS-EXT mode, with IRDIS in the dual-band imaging (DBI, \citealt{Vigan2010}) mode with $K_1K_2$ filters ($\lambda_{K_1} = 2.1025 \pm 0.1020\,\mu$m - $\lambda_{K_2} = 2.2550 \pm 0.1090\,\mu$m), and IFS in the $Y-H$ ($0.97-1.66\,\mu$m) mode in pupil-tracking. This combination enables the use of angular and/or spectral differential imaging techniques to improve the contrast performance at the subarcsecond level \citep{Racine1999,Marois2006}. The choice between the IRDIFS mode and the IRDIFS-EXT mode is critical to optimizing the detection of young, early-T, or warm, mid-L dwarf planets, considering the primary age and distance. Indeed, it was crucial in the cases of the $\beta$ Pic\,b \citep{Lagrange2009} and HD\,95086\,b \citep{Rameau2013} discoveries to properly remove quasi-static speckles that dominate performance detection at close inner angles ($0.1-2.0\,\!''$, i.e., $3-60$\,au at 30 pc), but also to maximize the flux emitted by the giant planets. For young ages ($10-50$\,Myr), as the potential planets to which we are mostly sensitive are warm and dusty L-type planets with no methane absorption, the IRDIFS-EXT mode is more appropriate and was chosen for this observing campaign. For the follow-up, as candidates were only detected in the IRDIS field of view, we opted for the IRDIS DBI mode with $J_2J_3$ filters in pupil-tracking. Thus, this second epoch provides, in addition to the possibility of checking for common proper motion of the candidates relative to the primary star, the possibility to better discriminate background stars from physically young, early-T, or warm mid-L dwarf planets in the color-magnitude diagram \citep{Bonnefoy2018}.
\begin{table*}
\caption{Observing Log}
\begin{center}
\begin{tabular}{lllllllllll}
\hline
\hline
UT Date & Target & Instrument & Mode & Filter & NDIT $\times$ DIT & $N_{\rm{exp}}$ & $\Delta \theta$ & $\omega$ & Strehl & Airmass \\
&&&&& (s) && ($^\circ$) & ($''$) & @$1.6\,\upmu \mathrm{m}$ \\
\hline
\multicolumn{11}{c}{Survey}\\
\hline
\multirow{4}{*}{05-10-2016} & \multirow{2}{*}{HIP9902} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{20.7} & \multirow{2}{*}{0.62} & \multirow{2}{*}{0.75} & \multirow{2}{*}{1.22} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
& \multirow{2}{*}{HIP18437} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{44.2} & \multirow{2}{*}{0.47} & \multirow{2}{*}{0.77} & \multirow{2}{*}{1.03} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
\multirow{4}{*}{07-10-2016} & \multirow{2}{*}{HIP7805} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{20.0} & \multirow{2}{*}{0.53} & \multirow{2}{*}{0.83} & \multirow{2}{*}{1.24} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
& \multirow{2}{*}{HIP16095} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{19.0} & \multirow{2}{*}{0.46} & \multirow{2}{*}{0.87} & \multirow{2}{*}{1.26} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
\multirow{2}{*}{08-10-2016} & \multirow{2}{*}{HIP13141} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{20.8} & \multirow{2}{*}{0.41} & \multirow{2}{*}{0.83} & \multirow{2}{*}{1.30} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
\multirow{2}{*}{10-11-2016} & \multirow{2}{*}{HIP19990} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{22.6} & \multirow{2}{*}{0.27} & \multirow{2}{*}{0.94} & \multirow{2}{*}{1.30} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
\multirow{2}{*}{12-11-2016} & \multirow{2}{*}{HIP26309} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{107.4} & \multirow{2}{*}{0.41} & \multirow{2}{*}{0.87} & \multirow{2}{*}{1.01} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
\multirow{2}{*}{13-11-2016} & \multirow{2}{*}{HIP22192} & IRDIS & DBI & $K_1$$K_2$ & $7 \times 32$ & \multirow{2}{*}{46} & \multirow{2}{*}{130.9} & \multirow{2}{*}{0.33} & \multirow{2}{*}{0.86} & \multirow{2}{*}{1.01} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
\multirow{2}{*}{04-12-2016} & \multirow{2}{*}{HIP7345} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{17} & \multirow{2}{*}{81.4} & \multirow{2}{*}{0.44} & \multirow{2}{*}{0.90} & \multirow{2}{*}{1.02} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
\multirow{2}{*}{05-12-2016} & \multirow{2}{*}{HIP22226} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{15.2} & \multirow{2}{*}{0.42} & \multirow{2}{*}{0.82} & \multirow{2}{*}{1.00} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
\multirow{2}{*}{07-12-2016} & \multirow{2}{*}{HIP22845} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{19.3} & \multirow{2}{*}{0.44} & \multirow{2}{*}{0.82} & \multirow{2}{*}{1.27} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
\multirow{2}{*}{13-12-2016} & \multirow{2}{*}{HIP34276} & IRDIS & DBI & $K_1$$K_2$ & $8 \times 32$ & \multirow{2}{*}{46} & \multirow{2}{*}{39.5} & \multirow{2}{*}{0.55} & \multirow{2}{*}{0.84} & \multirow{2}{*}{1.06} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
\multirow{4}{*}{15-12-2016} & \multirow{2}{*}{HIP26990} & IRDIS & DBI & $K_1$$K_2$ & $3 \times 64$ & \multirow{2}{*}{46} & \multirow{2}{*}{42.6} & \multirow{2}{*}{0.55} & \multirow{2}{*}{0.76} & \multirow{2}{*}{1.04} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 64$ & \\
& \multirow{2}{*}{HIP41307} & IRDIS & DBI & $K_1$$K_2$ & $17 \times 16$ & \multirow{2}{*}{46} & \multirow{2}{*}{43.0} & \multirow{2}{*}{0.35} & \multirow{2}{*}{0.92} & \multirow{2}{*}{1.03} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 16$ & \\
\multirow{4}{*}{17-06-2017} & \multirow{2}{*}{HIP93542} & IRDIS & DBI & $K_1$$K_2$ & $7 \times 32$ & \multirow{2}{*}{46} & \multirow{2}{*}{59.5} & \multirow{2}{*}{0.83} & \multirow{2}{*}{0.69} & \multirow{2}{*}{1.05} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
& \multirow{2}{*}{HIP97749} & IRDIS & DBI & $K_1$$K_2$ & $7 \times 32$ & \multirow{2}{*}{46} & \multirow{2}{*}{43.3} & \multirow{2}{*}{0.81} & \multirow{2}{*}{0.52} & \multirow{2}{*}{1.06} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
\multirow{2}{*}{06-07-2017} & \multirow{2}{*}{HIP101800} & IRDIS & DBI & $K_1$$K_2$ & $7 \times 32$ & \multirow{2}{*}{42} & \multirow{2}{*}{22.1} & \multirow{2}{*}{0.58} & \multirow{2}{*}{0.86} & \multirow{2}{*}{1.24} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
\multirow{2}{*}{15-07-2017} & \multirow{2}{*}{HIP117452} & IRDIS & DBI & $K_1$$K_2$ & $6 \times 32$ & \multirow{2}{*}{46} & \multirow{2}{*}{117.1} & \multirow{2}{*}{0.45} & \multirow{2}{*}{0.87} & \multirow{2}{*}{1.01} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
\multirow{2}{*}{20-07-2017} & \multirow{2}{*}{HIP101958} & IRDIS & DBI & $K_1$$K_2$ & $15 \times 16$ & \multirow{2}{*}{46} & \multirow{2}{*}{23.4} & \multirow{2}{*}{0.45} & \multirow{2}{*}{0.90} & \multirow{2}{*}{1.36} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 16$ & \\
\multirow{2}{*}{31-07-2017} & \multirow{2}{*}{HIP95619} & IRDIS & DBI & $K_1$$K_2$ & $7 \times 32$ & \multirow{2}{*}{46} & \multirow{2}{*}{110.0} & \multirow{2}{*}{0.77} & \multirow{2}{*}{0.62} & \multirow{2}{*}{1.01} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
\multirow{2}{*}{09-08-2017} & \multirow{2}{*}{HIP8832} & IRDIS & DBI & $K_1$$K_2$ & $15 \times 16$ & \multirow{2}{*}{46} & \multirow{2}{*}{22.5} & \multirow{2}{*}{0.35} & \multirow{2}{*}{0.89} & \multirow{2}{*}{1.40} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 16$ & \\
\multirow{2}{*}{10-09-2017} & \multirow{2}{*}{HIP3277} & IRDIS & DBI & $K_1$$K_2$ & $7 \times 32$ & \multirow{2}{*}{46} & \multirow{2}{*}{26.5} & \multirow{2}{*}{0.54} & \multirow{2}{*}{0.83} & \multirow{2}{*}{1.20} \\
&& IFS & $R_{\lambda} = 30$ & $YJH$ & $1 \times 32$ & \\
\hline
\multicolumn{11}{c}{Follow-up}\\
\hline
\multirow{1}{*}{27-09-2018} & \multirow{1}{*}{HIP117452} & IRDIS & DBI & $J_2$$J_3$ & $ 6 \times 32 $ & \multirow{1}{*}{22} & \multirow{1}{*}{112.7} & \multirow{1}{*}{0.41} & \multirow{1}{*}{0.88} & \multirow{1}{*}{1.00} \\
\multirow{1}{*}{10-10-2018} & \multirow{1}{*}{HIP8832} & IRDIS & DBI & $J_2$$J_3$ & $ 4 \times 48 $ & \multirow{1}{*}{23} & \multirow{1}{*}{20.4} & \multirow{1}{*}{0.61} & \multirow{1}{*}{0.78} & \multirow{1}{*}{1.00} \\
\multirow{1}{*}{22-11-2018} & \multirow{1}{*}{HIP34276} & IRDIS & DBI & $J_2$$J_3$ & $ 4 \times 64 $ & \multirow{1}{*}{23} & \multirow{1}{*}{46.3} & \multirow{1}{*}{0.39} & \multirow{1}{*}{0.82} & \multirow{1}{*}{1.44} \\
\multirow{1}{*}{09-05-2019} & \multirow{1}{*}{HIP95619} & IRDIS & DBI & $J_2$$J_3$ & $ 7 \times 32 $ & \multirow{1}{*}{23} & \multirow{1}{*}{22.3} & \multirow{1}{*}{0.51} & \multirow{1}{*}{0.75} & \multirow{1}{*}{1.02} \\
\multirow{1}{*}{18-06-2019} & \multirow{1}{*}{HIP101800} & IRDIS & DBI & $J_2$$J_3$ & $ 7 \times 32 $ & \multirow{1}{*}{23} & \multirow{1}{*}{20.2} & \multirow{1}{*}{0.68} & \multirow{1}{*}{0.83} & \multirow{1}{*}{1.36} \\
\hline
\end{tabular}
\end{center}
\label{table_obs_log}
\end{table*}
The observing sequence used for the survey is as follows: PSF flux reference, coronagraphic centering using the waffle spots, deep coronagraphic observation of about 70\,min in total on target, new coronagraphic centering using the waffle spots, PSF flux reference, and sky. The PSF flux references were used to estimate the relative photometry of the companion candidates detected in the IRDIS and IFS fields of view, as well as the detection limits. The coronagraphic centering sequence using the waffle spots is critical to obtaining the position of the star behind the coronagraph and the relative astrometry of the companion candidates. The deep coronagraphic observation was obtained close to meridian to maximize the field rotation. Finally, the sky background was used to optimize the background subtraction and the flat field correction. The typical observing sequence lasts approximately 90\,min, including pointing and overheads. The details of the observations per target are reported in Table\,\ref{table_obs_log}. As a by-product of the SPHERE observation, one can access the evolution of the different atmospheric parameters seen and registered by the SPHERE XAO system (SAXO). These real-time parameters are good diagnostics of the turbulence conditions ($\tau_0$, $r_0$, integrated wind over the line of sight) and of the XAO correction (Strehl ratio at 1.6\,$\mu$m) during the observing sequence. The summary of these SAXO parameters over the full survey is reported in Table\,\ref{table_obs_log} and shown in Figure\,\ref{sparta}. Given the brightness of our targets, about $70\,\%$ of the survey was obtained under median or good conditions for Paranal, with a typical Strehl ratio larger than $80\,\%$. Prior to the UT3 intervention at VLT in 2017, a few cases were affected by the low-wind effect, despite good atmospheric conditions.
\begin{figure*}[t]
\centering
\includegraphics[width=\columnwidth]{./pictures/dusties_hip16095_irdisz.pdf}
\includegraphics[width=\columnwidth]{./pictures/dusties_hip41307_ifsz.pdf}
\caption{Left: IRDIS full-frame image of HIP\,16095 in the combined $K_1$ and $K_2$ filters, reduced using SpeCal with the TLOCI algorithm \citep{Galicher2011}. A bright companion candidate is clearly identified a few arcseconds to the east of the star. North is up, and east is left. Right: IFS image of HIP\,41307 reduced with PCA ASDI.
}
\label{obj_hip16095}
\end{figure*}
\section{Data reduction and analysis}
\label{sec:data_reduc_analysis}
In order to calibrate the IRDIS and IFS datasets on sky, the platescale and true north solution at each epoch were corrected based on the long-term analysis of the SPHERE Guaranteed Time Observation astrometric calibration described by \cite{Maire2016}. The rotation correction considered to align images to the detector vertical in pupil-tracking observations is $-135.99\pm0.11\degr$. The anamorphism correction was obtained by stretching the image $Y$-direction with a factor of $1.0060\pm0.0002$. All IRDIS and IFS datasets were reduced using the SPHERE Data Reduction and Handling (DRH) automated pipeline \citep{Pavlov2008} and additional IDL routines for the IFS data reduction \citep{Mesa2015} at the SPHERE Data Center \citep{Delorme2017} to correct each data cube for bad pixels, dark current, flat field, and sky background. After combining all data cubes with an adequate calculation of the parallactic angle for each individual frame of the deep coronagraphic sequence, all frames were shifted to the position of the stellar centroid calculated from the initial star center position.\\
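To make this calibration step concrete, the following Python sketch converts a detector offset into a sky offset using the corrections quoted above. The platescale value is a placeholder and the sign conventions are one common choice; neither is necessarily identical to the exact calibration of \cite{Maire2016} used by our pipelines.
\begin{verbatim}
import numpy as np

# Values quoted in the text; the platescale is a PLACEHOLDER (assumption),
# the actual value comes from the Maire et al. (2016) calibration.
PUPIL_OFFSET_DEG = -135.99  # pupil-tracking rotation correction (deg)
ANAMORPHISM_Y = 1.0060      # stretch factor applied to the Y direction
PLATESCALE_MAS = 12.25      # mas/pixel (placeholder)

def pixel_to_sky_offset(dx_pix, dy_pix, parang_deg):
    """Convert a detector offset (pixels) into a sky offset (mas).
    The sign convention (East positive to the left) is an assumption."""
    dy_pix = dy_pix * ANAMORPHISM_Y                  # fix anamorphism
    theta = np.deg2rad(parang_deg + PUPIL_OFFSET_DEG)
    d_ra = -(dx_pix * np.cos(theta) - dy_pix * np.sin(theta))
    d_dec = dx_pix * np.sin(theta) + dy_pix * np.cos(theta)
    return d_ra * PLATESCALE_MAS, d_dec * PLATESCALE_MAS

# e.g. a 100-pixel offset along +X in a frame taken at parang = 30 deg:
print(pixel_to_sky_offset(100.0, 0.0, 30.0))
\end{verbatim}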
For an independent check, two pipelines were then used to process the data in angular differential imaging (ADI) and in combined spectral and angular differential imaging (ASDI): the IPAG-ADI pipeline \citep{Chauvin2012} and SpeCal \citep{Galicher2018}. These routines allowed us to reduce the data cubes with almost the same set of algorithms (classical ADI, \citealt{Marois2006}; LOCI, \citealt{Lafreniere2007}; PCA, \citealt{Soummer2012}; Andromeda, \citealt{Cantalloube2015}), and to exploit the spectral diversity of the IRDIS and IFS observations using ASDI techniques in addition to ADI only. Following the principles described in \cite{Galicher2018}, SpeCal (and IPAG-ADI) delivers, for various algorithms and observing techniques (ADI, ASDI), contrast curves, signal-to-noise ratio (S/N) maps, and the possibility to locally characterize the astrometric, photometric, and spectroscopic signal of any companion candidate using either a template or a negative fake planet injection approach. As consistent results were found with both pipelines, the full set of observations was reduced with SpeCal (routinely used with the SPHERE GTO) using the TLOCI algorithm (in ADI and ASDI) for IRDIS, and the PCA algorithm (in ASDI) for IFS. Spatial filtering of each data cube was automatically applied to the deep coronagraphic observations and the reference PSFs before the use of SpeCal. \\
The TLOCI algorithm, as implemented in SpeCal, attenuates the stellar speckle pattern by locally subtracting it for each frame in annuli of $1.5\times\textit{FWHM}$ further divided into sectors. The subtraction is based on a linear combination of the best 20 ($N$ parameter) correlated reference images calculated in the optimization region and selected to limit the self-subtraction to a maximum of $20\%$ ($\tau$ parameter); see \citealt{Galicher2011} and \citealt{Marois2014} for a further description of the reference frame selection and the subtraction and optimization regions. For IFS, in the PCA version, each frame is subtracted by its average over the field of view before estimating the principal components. The spectral diversity is exploited after proper rescaling and renormalization of the IFS data cubes, as detailed by \cite{Mesa2015}. Considering the significant field rotation of our observations, the first 100 principal components were subtracted.
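As an illustration of this reduction step, the following Python sketch implements a minimal PCA-ADI: per-frame mean subtraction, removal of the leading principal components, derotation, and stacking. The spectral rescaling step of ASDI is omitted, the derotation sign is a convention, and the function name is ours.
\begin{verbatim}
import numpy as np
from scipy.ndimage import rotate

def pca_adi(cube, parangs, n_comp=100):
    """Minimal PCA-ADI sketch. cube: (n_frames, ny, nx); parangs in deg.
    Each frame is mean-subtracted, the leading principal components of
    the stack are removed, then residuals are derotated and combined."""
    nf, ny, nx = cube.shape
    X = cube.reshape(nf, -1)
    X = X - X.mean(axis=1, keepdims=True)        # per-frame mean subtraction
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:n_comp]                              # speckle modes
    resid = (X - (X @ V.T) @ V).reshape(nf, ny, nx)
    # derotate to align the sky frame (sign is a convention) and stack
    derot = [rotate(resid[i], -parangs[i], reshape=False, order=1)
             for i in range(nf)]
    return np.median(derot, axis=0)

# toy call on random data:
out = pca_adi(np.random.rand(20, 64, 64), np.linspace(0, 40, 20), n_comp=5)
\end{verbatim}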
\section{Companion candidate detection and characterization}
\label{sec:cc_detection}
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{./pictures/dusties_K1_K1K2.pdf}
\includegraphics[width=0.48\textwidth]{./pictures/dusties_J3_J3J2.pdf}
\caption{Left: absolute $K_1$-band magnitude versus $K_1-K_2$ color for brown dwarfs and discovered companions. Right: same, but for the absolute $J_3$-band magnitude versus $J_3-J_2$ color. The companion candidates from our survey are shown in red.}
\label{diag_mag}
\end{figure*}
\begin{table*}[t]
\caption{Companion candidate characterization and identification. The target name and observing date (modified Julian day) are given, as well as the different sources identified with their relative position and relative photometry.}
\begin{center}
\begin{tabular}{llllllll}
\hline
\hline
Target & UT Date & Candidate & Filter & Separation & Position angle & $\Delta_{\rm Filter-1}$ & $\Delta_{\rm Filter-2}$ \\
& & & & (mas) & (deg) & (mag) & (mag) \\
\hline
HIP16095 & 57669.2937186 & cc-1 & DK12 & $3368\pm2$ & $111.38\pm0.04$ & $11.46\pm0.12$ & $11.28\pm0.12$ \\
& 58092.1556576 & cc-1 & DJ23 & $3385\pm2$ & $111.21\pm0.02$ & $12.88\pm0.08$ & $12.55\pm0.09$ \\
HIP95619 & 57965.1627630 & cc-1 & DK12 & $4564\pm3$ & $254.25\pm0.03$ & $11.11\pm0.51$ & $10.94\pm0.54$ \\
& 58613.3454076 & cc-1 & DJ23 & $4597\pm2$ & $255.23\pm0.01$ & $12.17\pm0.24$ & $11.83\pm0.29$\\
HIP101800&57940.3125070 & cc-1 & DK12 & $4513\pm4$ & $89.84\pm0.037$ & $12.42\pm0.12$ & - \\
&58653.3759935 & cc-1 & DJ23 & $4418\pm2$ & $89.82\pm0.01 $ & $13.34\pm0.17$ & $13.07\pm0.15$ \\
&58653.3759935 & cc-2 & DJ23 & $4021\pm2$ & $89.83\pm0.01 $ & $14.42\pm0.19$ & $14.17\pm0.17$ \\
HIP34276 & 57736.2557381 & cc-1 & DK12 & $3108\pm7$ & $132.55\pm0.11$ & $12.90\pm0.12$ & $12.72\pm0.13$\\
& 57736.2557381 & cc-2 & DK12 & $4407\pm4$ & $138.56\pm0.06$ & $12.26\pm0.12$ & $12.30\pm0.12$ \\
& 58445.3349875 & cc-1 & DJ23 & $3124\pm4$ & $133.01\pm0.06$ & $14.58\pm0.29$ & $14.34\pm0.12$\\
& 58445.3349875 & cc-2 & DJ23 & $4421\pm5$ & $138.95\pm0.06$ & $14.30\pm0.29$ & $14.02\pm0.13$\\
HIP117452 & 57949.3975893 & Ba & DK12 & $3708\pm9$ & $238.09\pm0.15$ & $3.84\pm0.05$ & $3.76\pm0.05$\\
& & Bb & DK12 & $3318\pm10$& $239.13\pm0.17$ & $4.58\pm0.05$ & $4.51\pm0.05$\\
HIP8832 & 57974.3996411 & cc-1 & DK12 & $5674\pm3$ & $213.71\pm0.04$ & $11.47\pm0.50$ & $11.54\pm0.51$ \\
\hline
\end{tabular}
\end{center}
\label{table_candidates}
\end{table*}
Using the IRDIS and IFS S/N maps provided by SpeCal, we identified a total of eight companion candidates by eye at relatively large separation ($\geq\,3.0\,\!''$) in the IRDIS fields of view of six targets (HIP\,16095, HIP\,95619, HIP\,101800, HIP\,34276, HIP\,117452, and HIP\,8832) of the complete survey. One companion candidate was identified at relatively close separation in the IFS field of view of HIP\,41307, but later flagged as a bright quasi-static speckle through various processing tests, and was therefore discarded. \\
Figure \ref{obj_hip16095} shows the IRDIS image of HIP\,16095 reduced in TLOCI ADI (bright companion located at $3.3\,\!''$ in the combined $K_1$ and $K_2$ filters), and the IFS image of HIP\,41307 reduced in PCA ASDI (discarded quasi-static speckle located at $0.5\,\!''$ in the combined \textit{YJH}-bands) as an illustration of the detection process. All companion candidates were then characterized using SpeCal with the TLOCI algorithm in ADI only, and according to a template approach. The relative astrometry and photometry are reported in Table\,\ref{table_candidates}. As a first diagnostic, in Figure\,\ref{diag_mag} (\textit{Left}), we report the location of all our companion candidates in the $K_1$-band- and $K_2$-band-based color-magnitude diagram (CMD). Details on the diagrams are given in \cite{Mesa2016,Samland2017,Chauvin2018,Bonnefoy2018}. We used the most recent parallaxes of the young objects from \cite{Greco2016}, and added additional companions \citep{Gauza2015,Stone2016,Derosa2014} at the L/T transition. At first glance, we see that all detected companion candidates fall on the expected sequence of possible bound companions, from the early-M spectral type for the candidates around HIP\,117452, late-L spectral types for HIP\,95619, HIP\,16095 and HIP\,8832, to early-T for HIP\,34276. The companion around HIP\,101800 was detected only in the $K_1$-band during the first epoch. After a verification of the public archive, the companion candidates around HIP\,34276 (cc1 and cc2) and HIP\,101800 (cc1 and cc2) were previously known and characterized as stationary background sources by \cite{Wahhaj2013} as part of the NICI campaign targeting debris disk stars. Both companion candidates around HIP\,117452 were earlier identified by \cite{Derosa2011} in the course of the Volume-limited A-Star (VAST) survey as a candidate binary companion. They were later confirmed by \cite{Matthews2018} as physically bound, showing that this system is actually a quadruple system with an A0 primary (HIP\,117452\,A), orbited by a close binary pair Ba and Bb also resolved in this survey, and additionally by a K-type star at about $75\,\!''$.
\begin{figure*}[t]
\centering
\includegraphics[width=0.3\textwidth]{./pictures/diag_radec_HIP16095_cc1.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/diag_radec_HIP95619_cc1.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/diag_radec_HIP34276_cc1.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/diag_radec_HIP34276_cc2.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/diag_radec_HIP101800_cc1.pdf}
\includegraphics[width=0.3\textwidth]{./pictures/diag_radec_hip107412.pdf}
\caption{SPHERE measurements (in blue) of the offset positions of the companion candidates relative to their primary stars. For each diagram, the expected variation of the offset positions if the candidate is a stationary background object is shown (solid line). This is based on the distance and proper motion of the primary star, as well as the initial offset position of the candidate relative to the primary. The predicted offset positions of a stationary background object at the second epoch are shown in red with uncertainties. For HIP\,117452, measurements of both components Ba and Bb at various epochs are plotted in dark and light blue, respectively.
}
\label{astrometry}
\end{figure*}
Follow-up observations of the candidates were automatically scheduled and obtained using the DBI mode of IRDIS with the $J_2J_3$ filters, which is well adapted to distinguishing background stars from physically young, early-T, or warm mid-L dwarf planets, and offers an additional epoch for a proper motion test. The follow-up observations were then processed using SpeCal with the TLOCI algorithm in ADI only and a template approach, as before. All companion candidates were re-detected, except the one around HIP\,8832, which fell outside the IRDIS field given its large separation and an observing sequence that was not perfectly centered on the meridian passage. The results are reported in Table\,\ref{table_candidates}. The use of a different pair of filters enabled us to explore the companion candidate photometric properties in the $J_2$-band- and $J_3$-band-based CMD, for which we also report the distribution of background stars observed in previous crowded fields (see Figure\,\ref{diag_mag}, \textit{Right}). One can directly see that most of our late-L to early-T potential companion candidates, including the ones previously identified as stationary background stars around HIP\,34276 (cc1 and cc2) and HIP\,101800 (cc1 and cc2), fall onto the background contaminant sequence, indicating that they are most likely background stars. As a further check, we used the relative astrometry obtained at two epochs to estimate the proper motion of the companion candidates relative to their primary stars. Figure\,\ref{astrometry} shows the proper motion plots of each candidate and confirms that the companion candidates around HIP\,34276, HIP\,101800, and HIP\,95619 are not co-moving with their primary stars. The distances and proper motions of the stars, with their uncertainties, are taken from the \textit{Gaia} Data Release 2 catalog \citep{Gaia2018}. For HIP\,16095, given the relatively low proper motion of the star, the status of the companion candidate HIP\,16095-cc1 remains ambiguous. However, the $J_2$-band- and $J_3$-band-based CMD still supports a background contamination. If bound, this candidate would have an estimated mass between 7 and 12 $M_{\rm{Jup}}$ at the system age ($\leq100$ Myr) and distance (88\,pc), illustrative of the SPHERE detection performance around young nearby stars beyond 10\,au.
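As an illustration of this test, the following Python sketch predicts the offset of a stationary background object between two epochs from the proper motion of the star alone; a complete treatment would also include the parallactic motion of the star. The example values are taken from Tables\,\ref{table_p99} and \ref{table_candidates} for HIP\,16095, with a $\sim$1.16\,yr baseline between the two epochs.
\begin{verbatim}
import numpy as np

def background_track(sep0_mas, pa0_deg, pm_ra, pm_dec, dt_yr):
    """Offset expected for a STATIONARY background source relative to a
    star of proper motion (pm_ra, pm_dec) [mas/yr] after dt_yr years.
    Parallactic motion is neglected in this sketch."""
    pa = np.deg2rad(pa0_deg)
    d_ra0, d_dec0 = sep0_mas * np.sin(pa), sep0_mas * np.cos(pa)
    # the star moves by +mu*dt, so the relative offset moves by -mu*dt
    d_ra, d_dec = d_ra0 - pm_ra * dt_yr, d_dec0 - pm_dec * dt_yr
    sep = np.hypot(d_ra, d_dec)
    pa_new = np.degrees(np.arctan2(d_ra, d_dec)) % 360.0
    return sep, pa_new

# HIP 16095 candidate: first-epoch astrometry and stellar proper motion
# from the tables above, ~1.16 yr baseline:
print(background_track(3368.0, 111.38, 10.36, -7.56, 1.16))
\end{verbatim}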
\begin{figure*}[t]
\centering
\includegraphics[width=0.435\textwidth]{./pictures/dusties_contrasts_ifs.pdf}
\hspace{0.2cm}
\includegraphics[width=0.435\textwidth]{./pictures/dusties_contrasts_irdis.pdf}
\caption{Magnitude contrast limit curves for all targets, obtained with the TLOCI algorithm.}
\label{detlim}
\end{figure*}
For HIP\,117452 Ba and Bb, the colors and magnitudes in $K_1$ and $K_2$, compared to the predictions of the evolutionary models of \cite{Siess2000}, suggest that Ba and Bb are a pair of M1 and M2 low-mass stars, considering an age of 40\,Myr at a distance of 42\,pc. Combining our relative astrometry with the one reported by \cite{Matthews2018} and shown in Table\,~\ref{table_candidates}, we performed a first orbit fitting of the pair. Following the method developed by \cite{Chauvin2012}, we used a Markov chain Monte Carlo (MCMC) Bayesian analysis technique \citep{Ford2007}, which is well suited for observations covering a small part of the whole orbit (for large orbital periods). We did not consider any prior information using the proximity of the primary star. The results are reported in Figure\,~\ref{mcmc} and favor a relatively inclined orbit $i\sim98_{-5}^{+8}$\,deg, a longitude of ascending node fairly well-constrained at $\Omega=20\pm2$\,deg, a tight semi-major axis $a\sim14_{-4}^{+7}$\,au, but surprisingly large eccentricities $e\ge0.4$. These large values of eccentricity are not dynamically expected, given the proximity of the primary star located at a physical projected separation of $\sim150$\,au, although the orbit of the binary companion around HIP\,117452 is not known. Fitting solutions using a least squares Levenberg-Marquardt (LSLM) algorithm \citep{Press1992} to search for the model with the minimal reduced $\chi^2$ are also reported for comparison. A further dynamical study of the global system, considering the debris disk architecture around HIP\,117452 and the binary companion HIP\,117452\,BaBb configuration, is needed.
\section{Detection limits and survey completeness}
\label{sec:detection_limits}
To exploit the information from the nondetections in the IFS and IRDIS observations of the survey, the detection limits of each individual observation were then estimated. Based on the SpeCal results, we derived a standard pixel-to-pixel noise map for each observing sequence, corrected for the flux loss related to the ADI or ASDI processing by injecting fake planets. The detection limit maps at $5\sigma$ were then obtained using the pixel-to-pixel noise map divided by the flux loss and normalized by the relative calibration with the primary star (considering the different exposure times, the neutral density, and the coronagraph transmission). These detection limits were finally corrected for small-number statistics following the prescription of \cite{Mawet2014} to adapt our $5\sigma$ confidence level at small angles with IRDIS and IFS. The $5\sigma$ contrast curves, resulting from the azimuthal average of the detection maps, are reported for IFS and IRDIS in Figure~\ref{detlim}.
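For illustration, the following Python sketch computes a $5\sigma$ contrast curve from an azimuthal noise profile, inflating the threshold at small angles with a Student-t distribution. This is one common reading of the \cite{Mawet2014} prescription rather than the exact implementation used here, and the input noise profile in the example is synthetic.
\begin{verbatim}
import numpy as np
from scipy import stats

def five_sigma_contrast_mag(radii, noise, fwhm, star_flux):
    """Azimuthal 5-sigma contrast curve with a small-sample correction.
    radii and fwhm in the same unit; noise is the 1-sigma azimuthal
    noise (already corrected for ADI/ASDI throughput); star_flux is the
    flux of the unsaturated PSF reference."""
    fpf = stats.norm.sf(5.0)            # Gaussian 5-sigma FPF (~2.9e-7)
    n = np.maximum(2.0 * np.pi * radii / fwhm, 3.0)  # resolution elements
    tau = stats.t.ppf(1.0 - fpf, df=n - 2) * np.sqrt(1.0 + 1.0 / (n - 1))
    return -2.5 * np.log10(tau * noise / star_flux)

r = np.linspace(0.1, 2.0, 40)           # arcsec; synthetic noise profile
print(five_sigma_contrast_mag(r, 1e-5 / np.sqrt(r), fwhm=0.05,
                              star_flux=1.0))
\end{verbatim}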
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{./pictures/mass_limit_mean.pdf}
\caption{Combined mean detection probability map for the whole survey.}
\label{mass_limit_mean}
\end{figure}
To convert the detection limits in terms of the mass and semi-major axis parameter space explored with SPHERE, we used the multi-purpose exoplanet simulation system (MESS) code, a Monte Carlo tool for the statistical analysis and prediction of exoplanet search results \citep{Bonavita2012}. This code has been used extensively in previous direct imaging surveys for that same purpose \citep{Chauvin2010,Chauvin2015,Chauvin2018,Vigan2012,Vigan2017,Rameau2013,Lannier2016}. With MESS, we then generated a uniform grid of mass and semi-major axis in the interval [1, 80]~M$_{\rm{Jup}}$ and [1, 1000]~au with a sampling of 0.5~M$_{\rm{Jup}}$ and 1~au, respectively. \\
For each point in the grids, 100 orbits were generated, randomly oriented in space from uniform distributions in $\cos(i)$, $\omega$, $\Omega$, $e \le 0.8$, and $T_p$. We built detection probability maps by counting the number of detected planets over the number of generated ones, simply comparing the on-sky projected position (separation and position angle) of each synthetic planet with the SPHERE 2D detection limit maps at $5\sigma$ converted into masses based on the COND (hot-start) model predictions \citep{Baraffe2003}. The primary age, distance, and magnitude reported in Table\,\ref{table_p99} are considered for the luminosity-mass conversion.\\
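The following Python sketch reproduces the spirit of this Monte Carlo procedure on a toy detection limit map: orbits are drawn from the distributions above, projected on the sky, and compared with the $5\sigma$ mass limit map. The input/output conventions, function name, and toy map are ours, not those of MESS, and the mean anomaly stands in for the $T_p$ sampling.
\begin{verbatim}
import numpy as np

def detection_probability(mass_grid, sma_grid, detlim_mass, pix_au,
                          n_orbits=100, seed=0):
    """Monte Carlo completeness map in the spirit of MESS.
    detlim_mass: 2D map of the 5-sigma mass limit (M_Jup), already
    converted from contrast; pix_au: physical pixel size in au."""
    rng = np.random.default_rng(seed)
    ny, nx = detlim_mass.shape
    cx, cy = nx // 2, ny // 2
    prob = np.zeros((len(mass_grid), len(sma_grid)))
    for i, m in enumerate(mass_grid):
        for j, a in enumerate(sma_grid):
            ndet = 0
            for _ in range(n_orbits):
                e = rng.uniform(0.0, 0.8)
                inc = np.arccos(rng.uniform(-1.0, 1.0))  # uniform cos(i)
                w, O, M0 = rng.uniform(0.0, 2.0 * np.pi, 3)
                E = M0                       # solve Kepler's equation
                for _ in range(50):
                    E = M0 + e * np.sin(E)
                x = a * (np.cos(E) - e)
                y = a * np.sqrt(1.0 - e * e) * np.sin(E)
                ci = np.cos(inc)             # project onto the sky
                X = (np.cos(O)*np.cos(w) - np.sin(O)*np.sin(w)*ci) * x \
                    - (np.cos(O)*np.sin(w) + np.sin(O)*np.cos(w)*ci) * y
                Y = (np.sin(O)*np.cos(w) + np.cos(O)*np.sin(w)*ci) * x \
                    - (np.sin(O)*np.sin(w) - np.cos(O)*np.cos(w)*ci) * y
                px, py = cx + int(X / pix_au), cy + int(Y / pix_au)
                if 0 <= px < nx and 0 <= py < ny and m >= detlim_mass[py, px]:
                    ndet += 1
            prob[i, j] = ndet / n_orbits
    return prob

# toy map: a 3 M_Jup floor outside ~10 au, opaque inside (1 au per pixel)
lim = np.full((101, 101), 3.0); lim[40:61, 40:61] = np.inf
print(detection_probability([1, 2, 5], [15, 50, 90], lim, pix_au=1.0,
                            n_orbits=50))
\end{verbatim}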
The resulting detection probability map of the complete survey is reported in Figure\,\ref{mass_limit_mean}. This result shows that, despite the relatively wide range of ages and distances in our sample, we achieved a detection probability larger than $50\,\%$ for giant planets with masses larger than 5\,M$_{\rm{Jup}}$ and semi-major axes between 10 and 500\,au, sufficient for the detection of system analogs to HR\,8799 or HD\,95086. In principle, the degeneracy between mass and initial entropy could change these limits considerably (e.g., \citealp{Marleau2014,Brandt2014}). In practice, however, taking more realistic post-formation entropies into account strongly mitigates this problem, as shown for instance in the case of HIP~65426~b by \citet{Marleau2019}.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{./pictures/limit_mass_2.pdf}
\caption{Constraints on planetary systems for 12 targets in our survey. The positions of the inner and outer debris belts are shown in orange, with the regions inside the inner belt and beyond the outer belt shaded. Our mass contrast limits are based on SPHERE/IRDIS and COND model predictions. Dynamical mass constraints for a slightly closer planet spacing of 20 mutual Hill radii from \cite{Shannon2016} are shown in green, with masses below this value shaded. Np is the number of planets of the mass indicated in green required to open the gap. The uncertainties on the debris belt positions and the dynamical mass limits are calculated based on the uncertainty in the debris belt temperature, and indicated with hatching. }
\label{mass_limit}
\end{figure*}
\section{Discussion}
Our survey is composed of relatively old, gas-free systems, a number of which host debris disks. We assume that planets are a valid explanation for the formation of debris structures, as in the Solar System, where planets are known to reside between two belts of debris, and in HR\,8799 and HD\,95086, where planets are known to reside within two-temperature debris disks. The analysis of our survey follows the work by \cite{Matthews2018}. The temperature values of the debris belts are taken from \cite{Chen2014}, where they were estimated using a two-temperature black-body model and a Bayesian parameter estimation to select the best model fitting the SED. The disk radii were calculated following \cite{Pawellek2015}, assuming that the dust is composed of 50\% astrosilicate and 50\% ice. In addition, we complemented our SPHERE/IRDIS observations with dynamical arguments on the possible planetary systems hiding within the debris gaps \citep{Shannon2016}.\\
Mass limits were calculated with the MESS code as described in Section~\ref{sec:detection_limits} and shown in Figure\,\ref{mass_limit}. The theoretical mass for a single planet to clear the observed gap is large ($\geq 25\,M_\mathrm{J}$; \citealt{Nesvold2014}). Therefore, in our cases, we infer that the systems must be in a multi-planet configuration, as in HR\,8799, in which several planets with lower masses clear the gap. In Figure~\ref{mass_limit}, we plot the minimum masses of planets required to clear the debris gaps, as well as their location and their number "Np" based on the N-body simulations of \cite{Shannon2016}. This model considers only planets with low eccentricities; the mass and the Np number change if the eccentricity is larger. The mass, shown in green in Figure~\ref{mass_limit}, is the minimum mass per planet, with uncertainties based on the age of the system and on the belt radii. The minimum mass calculation assumes that planets are spaced by a typical value of $\sim 20$ mutual Hill radii ($R_\mathrm{H}$), which is consistent with the value of $21.7 \pm 9.5\,R_\mathrm{H}$ predicted by \cite{Fang2013}. \\
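The geometric part of this argument is easy to reproduce: for equal-mass planets on circular orbits spaced by $K$ mutual Hill radii, the ratio of consecutive semi-major axes is fixed, which gives the number of planets needed to span a gap. The Python sketch below implements only this geometry, not the age-dependent clearing timescale of \cite{Shannon2016}; the example numbers are illustrative.
\begin{verbatim}
import numpy as np

def n_planets_to_span(m_planet, r_in, r_out, m_star, K=20.0):
    """Geometric sketch: number of equal-mass planets, spaced by K mutual
    Hill radii, needed to span a debris gap from r_in to r_out (au).
    Masses in the same unit (e.g. M_Jup, with m_star ~ 1000 M_Jup for a
    solar-type star). Clearing timescales are NOT included."""
    h = (2.0 * m_planet / (3.0 * m_star)) ** (1.0 / 3.0)
    if K * h / 2.0 >= 1.0:
        return 1   # one planet already dominates; the formula breaks down
    # neighbours at a1 < a2 with a2 - a1 = K * R_H give a fixed ratio:
    ratio = (1.0 + K * h / 2.0) / (1.0 - K * h / 2.0)
    return int(np.ceil(np.log(r_out / r_in) / np.log(ratio)))

# e.g. 0.5 M_Jup planets spanning a 5-100 au gap around a ~1.6 M_Sun star:
print(n_planets_to_span(0.5, 5.0, 100.0, m_star=1670.0))   # -> 3
\end{verbatim}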
By combining the observational upper and theoretical lower mass constraints, only a small region of parameter space is left unconstrained. For the 12 targets in our survey for which temperature values are available in \cite{Chen2014}, we infer a multi-planet system based on the large theoretical clearing masses. In such a multi-planet system, the widest separation planet would have a physical separation close to that of the outer debris belt, where our direct imaging limits are relatively tight. In most cases, planets must be at least $\sim 0.1\,M_\mathrm{J}$ to clear the observed gap based on dynamical arguments, and in some cases the dynamical mass limit exceeds $1\,M_\mathrm{J}$. In Figure~\ref{mass_limit}, for the target HIP\,7345, the direct imaging mass limit in the gap ($\sim 1.3\,M_\mathrm{J}$ at the 90\% completeness level) is close to the dynamical mass limit ($\sim 0.9\,M_\mathrm{J}$). \\
Among our 12 targets for which we note the presence of two debris belts, no exoplanetary-mass companions were detected. Our sample is too small for a detailed statistical analysis. However, a nondetection in a sample of 12 stars is not inconsistent with the occurrence rate of 6.27\% found for $5-20\,M_{\mathrm{J}}$ planets at 10-1000\,au around debris disk stars \citep{Meshkat2017}, since we would expect that some companions might be below our detection limits. Our nondetections are also consistent with the lower occurrence rate of $\sim 1 \%$ found by \cite{Bowler2016} and \cite{Galicher2016}. The results of this 12-target sample are not incompatible with the theory that planets are carving the wide debris gaps, since in each case our direct imaging mass limits are higher than the theoretical mass limits that we calculate.\\
The existence of planetary perturbers beyond 5\,au, and more generally these architectures, will be explored in future observations: i/ observations combining radial velocity and astrometry with \textit{GAIA} for the inner parts ($\le5$\,au), ii/ observations with the next generation of planet imagers from the ground (SCExAO, KPIC, SPHERE+, GPI2.0 on 10m-class telescopes, then with the ELTs) and from space (\textit{JWST}, \textit{WFIRST}).
\section{Conclusions}
We reported the observations and analysis of a survey of 22 stars with VLT/SPHERE, using IRDIS in the DBI mode with the $K_1K_2$ filters ($J_2J_3$ for the follow-up observations) and IFS in the $Y-H$ bands, with the goal of detecting and characterizing giant planets on wide orbits. The selected sample favors young ($\leq 100$ Myr), nearby ($\leq 100$ pc), dusty, and early-type stars to maximize the range of mass and separation over which the observations are sensitive. The optimized observation strategy with angular differential imaging in the thermal bands and a dedicated data reduction using various algorithms allow us to reach a typical contrast of 12.5 mag at $0.25\,\!''$ and 14 mag at $1.0\,\!''$ with IRDIS. These contrasts are converted into mass limits for each target. Despite the good sensitivity of our survey, we did not detect any new giant planets. We confirmed that the sources detected around HIP\,34276, HIP\,101800, HIP\,16095, and HIP\,95619 are stationary background sources by analyzing the $K_1$-band, $K_2$-band, $J_2$-band, and $J_3$-band images and their relative motions. The status of the candidate around HIP\,8832 still requires further follow-up. HIP\,117452\,BaBb is resolved and confirmed as a binary companion \citep{Derosa2011,Matthews2018}. For the 12 targets of our survey for which we determined the radii of the debris belts, we derived upper and lower mass limits. We used Monte Carlo simulations to estimate the survey sensitivity in terms of planetary mass and semi-major axis to derive the upper limits, and we calculated the minimum mass required for planets in the system to have cleared the observed debris gap to derive the lower limits. Combining our upper and lower mass limits, we are able to tightly constrain the unexplored parameter space around these systems: typically, planets must be at least $\sim 0.1\,M_\mathrm{J}$ in most cases to clear the observed gap based on dynamical arguments, and in some cases the dynamical limit exceeds $1\,M_\mathrm{J}$. Direct imaging data from VLT/SPHERE are sensitive to planets of $\sim 3\,M_\mathrm{J}$ for a typical target in our survey. Several of the planetary systems will likely be detectable with the next generation of high-contrast imagers.
\begin{acknowledgements}
First, we thank the referee for providing useful comments.
This project was partly supported by the IDEXLyon project (contract ANR-16-IDEX-0005) under the auspices of the University of Lyon. It was supported by CNRS and by the Agence Nationale de la Recherche (ANR-14-CE33-0018). It has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie
Skłodowska-Curie grant agreement No 823823. G.-D. Marleau acknowledges the support of the DFG priority program SPP 1992 ``Exploring the Diversity of Extrasolar Planets'' (KU 2849/7-1). C.~Mordasini and G.-D.~Marleau acknowledge support from the Swiss National Science Foundation under grant BSSGI0\_155816 ``PlanetsInTime''. Parts of this work have been carried out within the frame of the National Centre for Competence in Research PlanetS supported by the SNSF. A. Bayo acknowledges support from ICM (Iniciativa Cient\'ifica Milenio) via the N\'ucleo Milenio de Formaci\'on Planetaria, and from FONDECYT (grant 1190748). Finally, this work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CESAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon, also supported by a grant from Labex OSUG@2020 (Investissements d'avenir -- ANR10 LABX56).
\end{acknowledgements}
\bibliographystyle{aa}
We are interested in solving large, sparse, unsymmetric linear systems,
\[
Ax =b, \quad A \in \mathbb{R}^{N \times N}.
\]
Iterative methods are preferred for sparse linear systems as they depend only on matrix-vector products, which can be computed in $\mathcal{O}\big(\text{nnz}(A)\big)$ time. Popular examples include Krylov subspace methods such as CG~\cite{Hestenes&Stiefel:1952}, GMRES~\cite{Saad1986GMRESAG}, and MINRES~\cite{citeulike:10745617}. However, iterative methods rarely work well without good preconditioners, which are essential for fast convergence to the solution.
A naive LU or QR factorization of the matrix can cost $\mathcal{O}(N^3)$ even for sparse matrices due to the fill-in introduced during the factorization. However, one can ignore some of the fill-in entries to get an ``incomplete'' factorization of the matrix, which can then be used as a preconditioner for solving the associated linear system. For example, preconditioners like Incomplete LU~\cite{Saad1994ILUTAD}, Incomplete QR~\cite{jennings, Saad1988PreconditioningTF} and Incomplete Cholesky~\cite{Manteuffel1980AnIF} limit fill-in based on thresholding and on a prescribed maximum number of non-zeros in a row/column. While such methods are common in the literature, they come with neither convergence guarantees nor provable efficiency. In practice, they can fail for a large number of problems~\cite{Chow1997ExperimentalSO}. However, better preconditioners can be built when additional information on the problem is available.
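As a concrete illustration of this workflow, the following Python sketch builds a thresholded incomplete LU factorization of a small unsymmetric system with SciPy and uses it as a preconditioner for GMRES; the test matrix and the drop parameters are arbitrary choices for the example.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# small unsymmetric test system (convection-diffusion-like stencil)
n = 200
A = sp.diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# thresholded incomplete LU: drop_tol / fill_factor trade the accuracy
# of the factors against their sparsity
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 on convergence
\end{verbatim}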
In recent years, another class of preconditioners has been developed based on the observation that certain off-diagonal blocks of $A$ or $A^{-1}$ are numerically low-rank. The matrices that exhibit this property are termed Hierarchical ($\mathcal{H}$) matrices~\cite{hmatrix_2, hmatrix_1, hmatrix_3, Hackbusch2000ASH}. While these methods were originally developed for dense matrices, there have been efforts to extend these ideas to sparse matrices, especially matrices arising out of PDE discretizations. These efforts have been focused on incorporating fast $\mathcal{H}$-algebra with a nested dissection based multifrontal elimination~\cite{mumps, H_QR,C:LaBRI::CIMI15, Ghysels2016AnEM, blr_pastix, Schmitz2012AFD, Xia2013EfficientSM, Xia2009SuperfastMM}. For instance, a matrix-vector product can be done in almost linear time when the dense fronts are represented using low-rank bases.
In contrast, we focus on another approach: continually decrease the size of the nested dissection separators by applying a low-rank approximation. As the size of the separators is reduced at every step, the algorithm never deals with large dense fronts. Some examples of these fast hierarchical solvers are the Hierarchical Interpolative Factorization (HIF)~\cite{feliufaba2020hierarchical, Ho2016HierarchicalIF}, LoRaSp~\cite{lorasp1, lorasp2} and Sparsified Nested Dissection (spaND)~\cite{2019arXiv190102971C, klockiewicz2020second}. All three algorithms were developed to perform fast Cholesky factorization of symmetric positive definite matrices. HIF and spaND have been extended to perform a fast LU factorization on unsymmetric matrices~\cite{Ho2016HierarchicalIF}. However, LU is known to be unstable unless a robust pivoting strategy is used, which can be difficult for sparse matrices. Current sparse direct solvers often rely on \textit{ad hoc} techniques such as ignoring small pivots and replacing them with some large value $\epsilon^{-1}$, or postponing the elimination, leading to significant fill-in and an increase in the computational cost.
In this work, we propose a novel fast hierarchical solver to perform QR factorization on sparse, square matrices using low-rank approximations. The algorithm can be extended, with some changes, to solve sparse linear least-squares problems. This will be discussed in a future work. The use of orthogonal transformations in the QR decomposition ensures stability and allows for a more robust treatment of unsymmetric matrices. The resulting approximate factorization can then be used as a preconditioner with GMRES to solve general linear systems. Specifically, our algorithm produces a sparse approximate factorization of $A$ in near linear time, such that,
\[ A \approx QW = \prod_i Q_i \prod_j W_j \]
where each $Q_i$ is a sparse orthogonal matrix and $W_j$ is either sparse orthogonal or sparse upper triangular. While $W$ is not necessarily upper triangular, we still use the term ``fast QR solver'' as the algorithm is built on top of classical Householder QR.
\subsection{Contribution}
We propose, implement, and provide theoretical guarantees on a novel QR algorithm for unsymmetric, sparse matrices with full rank. We henceforth refer to the algorithm as spaQR, or Sparsified QR. Our algorithm is built upon the ideas of the spaND algorithm, which was originally developed for SPD matrices. However, the existence and intuition behind spaQR are more involved, as explained in \Cref{spars_s} and \Cref{Related_chol}. We summarize our main contributions as follows:
\begin{itemize}
\item We propose and implement a novel fast QR algorithm with tunable accuracy for sparse square matrices.
\item We provide a systematic analysis of the approximation error and effectiveness of the preconditioner.
\item We implement an additional block diagonal scaling that significantly improves the error and effectiveness of the preconditioner. The improvements from scaling are shown both theoretically and numerically.
\item We show that the factorization time scales as $\mathcal{O}(N \log N)$ and the solve time as $\mathcal{O}(N)$, under some assumptions.
\item We perform numerical tests on benchmark unsymmetric problems.
\item The C++ code for the algorithm is freely available for download and use at this \href{https://github.com/Abeynaya/spaQR_public}{link}. The benchmarks can be reproduced by running the scripts available in the repository.
\end{itemize}
The rest of the paper is organized as follows. \Cref{Sec: Algo} introduces the algorithm and the block scaling. This is followed by theoretical guarantees on the approximation error, effectiveness of the preconditioner and the complexity of the algorithm in \Cref{theoretical_results_sec}. Numerical results are discussed in \Cref{benchmarks}. Finally, we discuss directions for future research. We also give some intuition behind the algorithm and different variants of the algorithm in \Cref{Related_chol}.
\section{Algorithm}
\label{Sec: Algo}
We begin with a discussion of classical sparse QR factorization based on Householder transformations and Nested Dissection, giving an overview of the fill-in generated during the factorization. This is followed by a high-level overview of the spaQR algorithm, a detailed description of its steps, and a discussion of the block diagonal scaling.
\subsection{Sparse QR}
\label{QR}
Consider the Householder-based QR factorization of a sparse matrix $A\in \mathbb{R}^{m\times n}$ with $m \geq n$. Let $A^{[k]}$ denote the product $H_k H_{k-1} \dots H_1A$, where $H_k$ is the $k$-th Householder matrix. The sparsity of row $k$ in $R$ (and $A^{[k]}(k:m,:)$) can be understood in relation to the sparsity of $A^{[k-1]}$. When column $k$ of $A^{[k-1]}$ is operated on, all the rows $r$ that have non-zero entries in that column are affected. We introduce fill-in (or modify the existing entries) in all columns $c$ such that $A^{[k-1]}_{rc} \ne 0$ for any $r$ such that $A^{[k-1]}_{rk} \ne 0$. This can be seen as interactions between distance 1 and distance 2 neighbors (ignoring the direction of the edges) of node $k$ in \Cref{fillin}. This is in contrast to performing Gaussian Elimination on a matrix $A$, where we only have new interactions between distance 1 neighbors. Thus, the fill-in in Householder QR is higher compared to the fill-in in a Cholesky or LU factorization of the matrix. However, if $A$ has full column rank, then the QR decomposition of $A$ and the Cholesky decomposition of $A^TA$ are related. In particular, if $A^TA = LL^T$, then $L = R(1:n, 1:n)^T$~\cite{10.5555/248979}.
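This relation is easy to verify numerically. The short Python sketch below compares the $R$ factor of a random full-column-rank matrix with the Cholesky factor of $A^TA$; the row signs of $R$ are fixed first, since $R$ is only unique up to the signs of its rows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))          # full column rank (generically)

R = np.linalg.qr(A, mode='r')            # upper-triangular factor of A
L = np.linalg.cholesky(A.T @ A)          # Cholesky factor of A^T A

D = np.sign(np.diag(R))                  # R is unique up to row signs
print(np.allclose(D[:, None] * R, L.T))  # True: L = R(1:n,1:n)^T
\end{verbatim}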
\begin{figure}[tbhp]
\centering
\scalebox{0.75}{
\begin{tikzpicture}[node distance=2cm]
\node (s) [seps] {$k$};
\node (n1) at ($ (s) + (45:2) $) [seps] {$n_1$};
\node (n2) at ($ (s) + (-45:2) $) [seps] {$n_2$};
\node (p) [seps, right of=n1] {$p$};
\node (q) [seps, right of=n2] {$q$};
\draw [arrow] (s.45) -- (n1);
\draw [arrow] (s.-45) -- (n2);
\draw [arrow] (p) -- (n1);
\draw [arrow] (q) -- (n2);
\end{tikzpicture}}
\hspace{2cm}
\scalebox{0.75}{
\begin{tikzpicture}[node distance=2cm]
\node (s) [seps] {$k$};
\node (n1) at ($ (s) + (45:2) $) [seps] {$n_1$};
\node (n2) at ($ (s) + (-45:2) $) [seps] {$n_2$};
\node (p) [seps, right of=n1] {$p$};
\node (q) [seps, right of=n2] {$q$};
\draw [arrow] (p) -- (n1);
\draw [arrow] (q) -- (n2);
\draw [darrow, dashed, color=red] (n1) -- (n2);
\draw [arrow, dashed, color=red] (q) -- (n1);
\draw [arrow, dashed, color=red] (p) -- (n2);
\draw [arrow] (n1) -- (s);
\draw [arrow] (n2) -- (s);
\draw [arrow, dashed, color=red] (p) -- (s);
\draw [arrow, dashed, color=red] (q) -- (s);
\end{tikzpicture}}
\quad
\[
\begin{matrix}
& k & n_1 & n_2 & p & q \\
k & \star & & & & \\
n_1 & \star & \star & & \star & \\
n_2 & \star & & \star & & \star
\end{matrix}
\hspace{3cm}
\begin{matrix}
& k & n_1 & n_2 & p & q \\
k & \star & \r{\times} & \r{\times} & \r{\times} & \r{\times} \\
n_1 & & \star & \r{\times} & \star & \r{\times} \\
n_2 & & \r{\times} & \star & \r{\times} & \star
\end{matrix}\]
\caption{The graph of a sample matrix shown before and after one step of a Householder transformation on column $k$. There is a directed edge from node $j$ to node $i$ in the graph if $A(i,j) \ne 0$. The fill-in entries are represented by red $\times$ symbols and the corresponding edges are denoted by red dashed lines.}
\label{fillin}
\end{figure}
The relationship between the two factorizations allows us to extend the column reordering strategies developed for Cholesky to QR. The problem of finding an optimal permutation matrix $P$ for an SPD matrix $S$, such that the Cholesky factor in $PSP^T = LL^T$ has minimum fill-in, is NP-hard. However, practical techniques based on heuristics have been developed and studied over the years. Some examples include minimum degree ordering, nested dissection, and Cuthill-McKee ordering. The reordering strategy that we use is Nested Dissection (ND), as it provides a convenient way to define separators and reinterpret the matrix as a block matrix. ND is a type of graph partitioning that works by recursively subdividing a graph while minimizing the number of edge cuts.
Consider the sparse symmetric matrix $A^TA = S \in \mathbb{R}^{N \times N}$ and its graph $G_S = (V,E)$ where $V = \{1, 2, \dots, N\}$ and $E = \{(i,j): S_{ij}\ne 0\}$. ND works by finding vertex separators, which are groups of vertices that divide the graph into two disconnected components. \Cref{ND_seps} shows the vertex separators when recursively subdividing the graph three times. The process stops when the cluster sizes are small enough to be factored using a dense factorization scheme.
The matrix factorization starts at the \textit{leaves}, which are the vertex \textit{clusters} at the last level (for example, $l=4$ in \Cref{ND_etree}) of the ND ordering. Once these are factorized, the factorization proceeds to the separators one level up the tree ($l=3$ in \Cref{ND_etree}) and continues to the top of the tree. This can be represented using an elimination tree as shown in \Cref{ND_etree}. The edges in the elimination tree indicate the dependencies between operations. Clusters at the same level can be operated on independently of one another. By factorizing from the leaves to the root of the elimination tree, we never create an edge (fill-in) between vertex clusters that are originally separated. The vertex separators obtained from the ND process on the matrix $A^TA$ provide a column partition for the matrix $A$, with the same fill-in guarantees. We discuss row partitioning ideas in \Cref{ord_clus}.
\begin{figure}[tbhp]
\centering
\begin{subfigure}[t]{0.35\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 4) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (0, 2.5) rectangle (1.325, 2.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (1.625, 1.5) rectangle (3, 1.75) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0.65, 0) rectangle (0.9, 2.5) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0, 3.25) rectangle (1.325, 3.5) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (2.275, 1.75) rectangle (2.525, 4) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (1.625, 0.75) rectangle (3, 1) {};
\end{tikzpicture}
}
\caption{Vertex separators}
\label{ND_seps}
\end{subfigure}%
~
\begin{subfigure}[t]{0.6\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}[level distance=0.9cm,
level 1/.style={sibling distance=3cm},
level 2/.style={sibling distance=1.5cm},
level 3/.style={sibling distance=0.75cm},
edge from parent/.append style = {line width = 0.25mm}]
\node (Root) [circle, draw=black, fill=darkgray] {}
child {node [circle, draw=black, fill=gray]{}
child {node [circle, draw=black, fill=lightgray]{}
child {node [circle, draw=black]{}}
child {node [circle, draw=black]{}}
}
child {node [circle, draw=black, fill=lightgray]{}
child {node [circle, draw=black]{}}
child {node [circle, draw=black]{}}
}
}
child {node [circle, draw=black, fill=gray]{}
child {node [circle, draw=black, fill=lightgray]{}
child {node [circle, draw=black]{}}
child {node [circle, draw=black]{}}
}
child {node [circle, draw=black, fill=lightgray]{}
child {node [circle, draw=black]{}}
child {node [circle, draw=black]{}}
}
};
\begin{scope}[every node/.style={right}]
\path (Root -| Root-2-2-2) ++(5mm,0) node {$l=1$};
\path (Root-1 -| Root-2-2-2) ++(5mm,0) node {$l=2$};
\path (Root-1-1-| Root-2-2-2) ++(5mm,0) node {$l=3$};
\path (Root-1-1-1-| Root-2-2-2) ++(5mm,0) node {$l=4$};
\end{scope}
\end{tikzpicture}}
\caption{Elimination tree}
\label{ND_etree}
\end{subfigure}
\caption{A four level nested dissection on an arbitrary graph. The figure on the left shows the vertex separators when recursively subdividing the graph and the figure on the right shows the corresponding elimination tree.}
\label{ND}
\end{figure}
Nested Dissection ordering is usually used for elliptic partial differential equations discretized on 2D and 3D meshes. The cost of the Cholesky factorization on the reordered matrix reduces to $\mathcal{O}(N^{3/2})$ for 2D problems and $\mathcal{O}(N^2)$ for 3D problems, whereas the fill-in reduces to $\mathcal{O}(N \log N)$ in 2D and $\mathcal{O}(N^{4/3})$ in 3D~\cite{10.5555/248979}.
Even with Nested Dissection, the fill-in is still significant. For 3D problems, the top separator has size $\mathcal{O}(N^{2/3})$ and its matrix block is dense by the time all its descendants are eliminated. Hence, the factorization of the top separator block alone costs $\mathcal{O}(N^2)$. These arguments extend to the QR factorization, which has the same asymptotic cost. We can bring down the cost of performing QR on these problems to $\mathcal{O}(N \log N)$ by `sparsifying' subsets of the separators, as discussed next.
\subsection{Sparsified QR (spaQR)}
\label{spaQR_s}
The spaQR algorithm works by continually decreasing the size of a vertex separator in the trailing matrix, using a low-rank approximation of its connections to its neighbors. The algorithm alternates between factoring (block QR) the separators at a level $l$ and `sparsifying' the interfaces at all levels $l'<l$.
We define an interface as a connected subset of a separator whose size is comparable to the diameter of the subdomains at that level. \Cref{interfaces_full} shows the distinction between separators and interfaces on a 3-level ND partition of a regular grid; \Cref{seps} shows the separators and \Cref{int} shows the interfaces. Denote the total number of levels as $L$, where the leaves correspond to $l = L$ and the root is at $l = 1$. For all $l < L$, let $\hat{A}^l$ be the trailing matrix corresponding to levels $1, 2, \dots, l$ of the matrix $A^{[l+1]} = H_{l+1}H_{l+2}\dots H_{L}A$. Note that each of the Householder matrices $H_k$ corresponds to a block reflector for the clusters at level $k$.
\begin{figure}[tbhp]
\centering
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 4) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0, 3) rectangle (1.325, 3.25) {};
\filldraw[fill= lightgray, line width=0.25mm, rounded corners] (1.625, 1) rectangle (3, 1.25) {};
\end{tikzpicture}}
\caption{Vertex separators}
\label{seps}
\end{subfigure}
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 1) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.325, 1) rectangle (1.625, 1.25) {};
\filldraw[fill =gray, rounded corners, line width= 0.25mm] (1.325, 1.25) rectangle (1.625, 3) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.325, 3) rectangle (1.625, 3.25) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.325, 3.25) rectangle (1.625, 4) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0, 3) rectangle (1.325, 3.25) {};
\filldraw[fill= lightgray, line width=0.25mm, rounded corners] (1.625, 1) rectangle (3, 1.25) {};
\end{tikzpicture}}
\caption{Interfaces}
\label{int}
\end{subfigure}
\caption{A three level nested dissection on an arbitrary graph. The figure on the left shows the usual nested dissection separators and the one on the right shows the interfaces.}
\label{interfaces_full}
\end{figure}
\[A^{[l+1]} =
\begin{bmatrix}
R_{l+1:L,l+1:L} & R_{l+1:L,1:l} \\
& \hat{A}^l
\end{bmatrix}\]
where $R_{l+1:L,l+1:L}$ is an upper-triangular block. The notation $R_{l+1:L,l+1:L}$ may appear confusing. Recall that $l=L$ corresponds to the leaf level in the tree (that is, the ``top left'' part of the matrix), while $l=1$ is the top of the tree (the ``bottom right'' of the matrix). There is a slight inconsistency between the numbering of the levels in the tree ($l=1$ is the top) and the usual row/column numbering of the matrix (which starts at $l=L$ with our numbering). For consistency, we stick to indices associated with levels in the tree.
We can rewrite this as,
\[A^{[l+1]} = \begin{bmatrix}
I_{l+1:L, l+1:L} & \\
& \hat{A}^{l}
\end{bmatrix}
\begin{bmatrix}
R_{l+1:L,l+1:L} & R_{l+1:L,1:l} \\
& I_{1:l,1:l}
\end{bmatrix}\]
and focus only on $\hat{A}^l$ (the trailing matrix).
Let $p$ be a subset of the top ND separator (in dark grey) in \Cref{int} at the interface between two interiors (that have been eliminated), and let $n$ be all the nodes it is connected to ($\hat{A}^l_{np} \ne 0$). Consider the submatrix of $\hat{A}^l$ corresponding to this interface $p$,
\[ \hat{A}^l_p = \begin{bmatrix}
\hat{A}^l_{pp} & \hat{A}^l_{pn} \\
\hat{A}^l_{np} & \hat{A}^l_{nn}
\end{bmatrix}\]
We work under the assumption that the off-diagonal blocks $\hat{A}_{np}^{l}$, $\hat{A}_{pn}^l$ corresponding to an interface are low rank. We begin by computing a rank-revealing factorization of $\begin{bmatrix}
\hat{A}_{np}^{lT} & \sigma\hat{A}_{pp}^{lT}\hat{A}^l_{pn}
\end{bmatrix}$, for a constant $\sigma$ to be defined later. The two terms in the rank-revealing factorization are necessary for specific reasons. The first term $\hat{A}_{np}^{lT}$ is present to decouple a part of the interface $p$ from $n$. The second term $\sigma\hat{A}_{pp}^{lT}\hat{A}^l_{pn}$ ensures that the structure of the elimination tree is not broken by the sparsification. Since the fill-in guarantees are directly related to the elimination tree, this ensures that we do not introduce additional non-zeros in the matrix as the algorithm proceeds. Alternatively, we can think of it as finding an orthogonal transformation such that a subset of $p$ is decoupled from $n$ both during QR on $A$ and during Cholesky on $A^TA$. More discussion on this connection to Cholesky is given in \Cref{Related_chol}.
Begin by computing a low-rank approximation of,
\[\begin{bmatrix}
\hat{A}_{np}^{lT} & \sigma\hat{A}_{pp}^{lT}\hat{A}^l_{pn}
\end{bmatrix} = Q_{pp}W_{pn} = \begin{bmatrix}
Q_{pf} & Q_{pc}
\end{bmatrix}\begin{bmatrix}
W_{fn} \\
W_{cn}
\end{bmatrix} \text{with } \|W_{fn}\|_{_2}=\mathcal{O}(\epsilon)\]
where $\sigma$ is a scalar that will be defined in \Cref{spars_s}. This gives us,
\[ \begin{bmatrix}
\hat{A}^l_{pp} & \hat{A}^l_{pn} \\
\hat{A}^l_{np} & \hat{A}^l_{nn}
\end{bmatrix} \begin{bmatrix}
Q_{pp} & \\
& I
\end{bmatrix} = \begin{bmatrix}
\hat{A}^l_{ff} & \hat{A}^l_{fc} & \hat{A}^l_{fn} \\
\hat{A}^l_{cf} & \hat{A}^l_{cc} & \hat{A}^l_{cn} \\
\mathcal{O}(\epsilon) & W_{cn}^T & \hat{A}^l_{nn}
\end{bmatrix} \text{ where, } \hat{A}_{pn}^l = \begin{bmatrix}
\hat{A}^l_{fn} \\
\hat{A}^l_{cn}
\end{bmatrix}\]
The orthogonal transformation $Q$ splits the nodes in interface $p$ into `fine' $f$ and `coarse' $c$ nodes. Ignoring the $\mathcal{O}(\epsilon)$ terms and applying a block Householder transform on the columns of the $f$ block,
\begin{align*}
\begin{bmatrix}
H_{pf}^T & \\
& I
\end{bmatrix}\begin{bmatrix}
\hat{A}^l_{pp} & \hat{A}^l_{pn} \\
\hat{A}^l_{np} & \hat{A}^l_{nn}
\end{bmatrix} \begin{bmatrix}
Q_{pp} & \\
& I
\end{bmatrix} &= \begin{bmatrix}
R_{ff} & R_{fc} & \mathcal{O}(\epsilon)\\
& \Tilde{A}_{cc}^{l} & \Tilde{A}_{cn}^l \\
& W_{cn}^T & \hat{A}^l_{nn}
\end{bmatrix} \\
&= \begin{bmatrix}
I_f & & \\
& \Tilde{A}_{cc}^{l} & \Tilde{A}_{cn}^l \\
& W_{cn}^T & \hat{A}^l_{nn}
\end{bmatrix} \begin{bmatrix}
R_{ff} & R_{fc} & \mathcal{O}(\epsilon)\\
& I_c & \\
& & I_n
\end{bmatrix}
\end{align*}
The $\mathcal{O}(\epsilon)$ terms are dropped. With this, the fine nodes are disconnected from the rest. Hence, the number of nodes in the interface $p$ has been reduced by $|f|$. In other words, interface $p$ has been sparsified. We can once again focus on the trailing matrix and continue the algorithm.
Following this procedure, we can sparsify all the remaining interfaces. Detailed proofs (for instance, why $R_{fn} = \mathcal{O}(\epsilon)$ and its significance) and a discussion of why the sparsification does not affect the elimination tree ordering (and hence the fill-in guarantees that come with it) are given in \Cref{spars_s}.
\begin{algorithm}
\caption{High level spaQR algorithm}
\begin{algorithmic}[1]
\REQUIRE {Sparse matrix A, Maximum level L, Tolerance $\epsilon$}
\STATE {Compute column and row partitioning of A, infer separators and interfaces (see \Cref{ord_clus})}
\FORALL{$l=L, L-1, \dots 1$}
\FORALL{Interiors $\mathcal{I}$ at level $l$}
\STATE {Factorize $\mathcal{I}$ using block Householder (see \Cref{sparseQR_S})}
\ENDFOR
\FORALL{Interfaces $\mathcal{S}$ between interiors}
\STATE {Sparsify $\mathcal{S}$ using tolerance $\epsilon$ (see \Cref{spaQR_s} and \Cref{spars_s})}
\ENDFOR
\ENDFOR
\end{algorithmic}
\label{highlevel_Algo}
\end{algorithm}
The spaQR algorithm alternates between factorizing the interiors at a level $l$ and sparsifying the interfaces at all levels $l'< l$. \Cref{highlevel_Algo} gives a high-level overview of spaQR. In the next few subsections, we provide a detailed explanation of row/column reordering, the definition of interfaces, interior factorization, and interface sparsification.
\subsection{Ordering and Clustering}
\label{ord_clus}
As we discussed earlier, Nested Dissection on the graph of $A^TA$ ($G_{A^TA}$) can be used to define the separators, which provides a column ordering for the matrix $A$. However, explicitly forming $A^TA$ can cost up to $\mathcal{O}(N^3)$ operations and is best avoided. Instead, we use a hypergraph-based partitioning technique that uses only the structure of $A$. The algorithm, referred to as hypergraph-based unsymmetric nested dissection (HUND) and developed in~\cite{Grigori_hypergraph-basedunsymmetric}, is used for partitioning general matrices. Partitioning of hypergraphs is a well-studied problem and there are multiple software options for this purpose, such as PaToH~\cite{atalyrek2011PaToHT}, hMetis~\cite{Karypis1998HmetisAH} and Zoltan~\cite{ZoltanHypergraphIPDPS06}. The problem of finding vertex separators in $A^TA$ is equivalent to finding hyperedge separators in $A$, as shown in~\cite{Catalyurek_hypergraph-partitioningbased, Grigori_hypergraph-basedunsymmetric, atalyrek1999HypergraphMF}.
\begin{figure}[tbhp]
\centering
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.25, 0) rectangle (1.75, 4) {};
\node at (0.75,2) {$\mathcal{I}_1$};
\node at (1.5,2) {$\mathcal{B}$};
\node at (2.25,2) {$\mathcal{I}_2$};
\end{tikzpicture}}
\caption{One level partition}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (1.75, 4) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.25, 0) rectangle (1.75, 4) {};
\node at (0.75,2) {$\mathcal{I}_1$};
\node at (1.5,2) {$\mathcal{B}$};
\end{tikzpicture}
}
\quad
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (1.75, 4) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (0, 0) rectangle (0.5, 4) {};
\node at (0.25,2) {$\mathcal{B}$};
\node at (1,2) {$\mathcal{I}_2$};
\end{tikzpicture}}
\caption{$\mathcal{I}_1 \cup \mathcal{B}$ and $\mathcal{I}_2 \cup \mathcal{B}$ }
\end{subfigure}%
~
\begin{subfigure}[t]{0.35\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (1.75, 4) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.25, 0) rectangle (1.75, 4) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0, 2.5) rectangle (1.75, 3) {};
\end{tikzpicture}}
\quad
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (1.75, 4) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.25, 0) rectangle (1.75, 2.5) {};
\filldraw[fill = gray, rounded corners, line width= 0.25mm] (1.25, 2.5) rectangle (1.75, 3) {};
\filldraw[fill =gray, rounded corners, line width= 0.25mm] (1.25, 3) rectangle (1.75, 4) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0, 2.5) rectangle (1.25, 3) {};
\end{tikzpicture}}
\caption{Subdivide $\mathcal{I}_1 \cup \mathcal{B}$ and define interfaces on the top separator}
\end{subfigure}
\caption{The first figure shows a one level partition of an arbitrary graph (hypergraph) using nested dissection (HUND). The next two figures depict the process of identifying the interfaces by subdividing $\mathcal{I}_1 \cup \mathcal{B}$.}
\label{mnd}
\end{figure}
However, in addition to defining separators, we need a clustering of the unknowns in a separator to define interfaces. In spaND~\cite{2019arXiv190102971C}, the technique of modified nested dissection was developed to find the interfaces. This is done by keeping track of the boundary $\mathcal{B}$ of each interior $\mathcal{I}$ in the dissection process. Then, instead of recursively subdividing $\mathcal{I}$, the recursion is done on $\mathcal{I}\cup \mathcal{B}$. One level of this process is shown in \Cref{mnd}. Note how subdividing $\mathcal{I}_1 \cup \mathcal{B}$ helps identify the interfaces. This process is defined as Modified Nested Dissection (MND) in~\cite{2019arXiv190102971C}. \Cref{mnd_multilvl} shows the application of MND to a three-level partitioning of an arbitrary graph. We refer the readers to Algorithm 2.2 of~\cite{2019arXiv190102971C} for details on the implementation of MND. Conceptually, this idea extends to hypergraph-based partitioning, and we adopt it in this work.
\begin{figure}[tbhp]
\centering
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 4) {};
\end{tikzpicture}}
\end{subfigure}%
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 4) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (0, 2.5) rectangle (1.625, 2.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (1.325, 1.5) rectangle (3, 1.75) {};
\end{tikzpicture}}
\end{subfigure}%
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 4) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (0, 2.5) rectangle (1.625, 2.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (1.325, 1.5) rectangle (3, 1.75) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0.65, 0) rectangle (0.9, 2.75) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0, 3.25) rectangle (1.625, 3.5) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (2.275, 1.5) rectangle (2.525, 4) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (1.325, 0.75) rectangle (3, 1) {};
\end{tikzpicture}}
\end{subfigure}%
\vspace{0.25cm}
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 4) {};
\end{tikzpicture}}
\caption{$l=1$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 1.5) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 1.5) rectangle (1.625, 1.75) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 1.75) rectangle (1.625, 2.5) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 2.5) rectangle (1.625, 2.75) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 2.75) rectangle (1.625, 4) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (0, 2.5) rectangle (1.325, 2.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (1.625, 1.5) rectangle (3, 1.75) {};
\end{tikzpicture}}
\caption{$l=2$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\filldraw[fill = white, rounded corners, line width = 0.25mm] (0, 0) rectangle (3, 4) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 0) rectangle (1.625, 0.75) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 0.75) rectangle (1.625, 1) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 1) rectangle (1.625, 1.5) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 1.5) rectangle (1.625, 1.75) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 1.75) rectangle (1.625, 2.5) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 2.5) rectangle (1.625, 2.75) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 2.75) rectangle (1.625, 3.25) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 3.25) rectangle (1.625, 3.5) {};
\filldraw[fill = darkgray, rounded corners, line width= 0.25mm] (1.325, 3.5) rectangle (1.625, 4) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (0, 2.5) rectangle (0.65, 2.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (0.65, 2.5) rectangle (0.9, 2.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (0.9, 2.5) rectangle (1.325, 2.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (1.625, 1.5) rectangle (2.275, 1.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (2.275, 1.5) rectangle (2.525, 1.75) {};
\filldraw[fill= gray, line width =0.25mm, rounded corners] (2.525, 1.5) rectangle (3, 1.75) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0.65, 0) rectangle (0.9, 2.5) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (0, 3.25) rectangle (1.325, 3.5) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (2.275, 1.75) rectangle (2.525, 4) {};
\filldraw[fill= lightgray, line width =0.25mm, rounded corners] (1.625, 0.75) rectangle (3, 1) {};
\end{tikzpicture}}
\caption{$l=3$}
\end{subfigure}
\vspace{0.25cm}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}[level distance=0.5cm,
level 1/.style={sibling distance=1.2cm},
level 2/.style={sibling distance=0.6cm},
edge from parent/.append style = {line width = 0.25mm}]
\node (Root) [circle, draw=black, fill=darkgray] at (0,0) {}
child {node [circle, draw=black, fill=darkgray]{}
child {node [circle, draw=black, fill=darkgray]{}}
child {node [circle, draw=black, fill=darkgray]{}}
child {node [circle, draw=black, fill=darkgray]{}}
}
child {node [circle, draw=black, fill=darkgray]{}
child {node [circle, draw=black, fill=darkgray]{}}
}
child {node [circle, draw=black, fill=darkgray]{}
child {node [circle, draw=black, fill=darkgray]{}}
}
child {node [circle, draw=black, fill=darkgray]{}
child {node [circle, draw=black, fill=darkgray]{}}
}
child {node [circle, draw=black, fill=darkgray]{}
child {node [circle, draw=black, fill=darkgray]{}}
child {node [circle, draw=black, fill=darkgray]{}}
child {node [circle, draw=black, fill=darkgray]{}}
};
\end{tikzpicture}}
\caption{$l=1$ separator clustering hierarchy}
\label{lvl1_ch}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}[level distance=0.5cm,
level 1/.style={sibling distance=1.2cm},
edge from parent/.append style = {line width = 0.25mm}]
\node (Root) [circle, draw=black, fill=gray] at (0, -0.5){}
child {node [circle, draw=black, fill=gray]{}}
child {node [circle, draw=black, fill=gray]{}}
child {node [circle, draw=black, fill=gray]{}};
\node (Root2) [circle, draw=black, fill=gray] at (3, -0.5){}
child {node [circle, draw=black, fill=gray]{}}
child {node [circle, draw=black, fill=gray]{}}
child {node [circle, draw=black, fill=gray]{}};
\end{tikzpicture}}
\caption{$l=2$ separators clustering hierarchy}
\label{lvl2_ch}
\end{subfigure}
\caption{The first row depicts the creation of separators by recursive application of modified nested dissection. The second row shows the creation of interfaces in each separator. The last row shows the clustering hierarchy within each separator.}
\label{mnd_multilvl}
\end{figure}
Modified Nested Dissection on $A^TA$, or modified HUND on $A$, defines the separators and interfaces. The columns of the matrix are reordered following the ND/HUND ordering. The rows of the matrix are reordered after the column ordering and clustering are done. The row ordering has to be chosen such that the off-diagonal blocks are low rank and the diagonal blocks are full rank.
We employ different heuristics to assign the rows to the clusters. For diagonally dominant matrices, the reordering of the rows can be the same as that of the columns. For general matrices, one heuristic is to assign each row to the cluster in which the weight of the row is maximized. In other words, row $r_i$ is assigned to cluster $c$ where $c = \arg \max_{c_k}\sum_{j \in c_k} A_{ij}^2$. However, this can lead to too many rows being assigned to a single cluster, resulting in rectangular diagonal blocks. Typically, we want to avoid this situation as we want all the diagonal blocks to be square and full rank.
Another heuristic is to permute large entries to the diagonal of the matrix. This is done by computing a bipartite matching between the rows and the columns of the matrix. We use the MC64 routine from the HSL Mathematical Software Library~\cite{hsl_mc64} to perform the matching. One can test the performance of the different heuristics and choose the best one for a given problem.
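As an illustration, here is a minimal sketch of the first (maximum-weight) heuristic for a matrix stored in CSR format. The data layout (\texttt{rowptr}, \texttt{colind}, \texttt{val}), the array \texttt{col\_cluster} mapping columns to clusters, and the function name are hypothetical; ties and cluster load balancing are ignored.
\begin{verbatim}
#include <algorithm>
#include <vector>

// Assign each row to the column cluster where it carries the most
// weight, i.e., c = argmax_{c_k} sum_{j in c_k} A_ij^2.
std::vector<int> assignRows(int nrows, int nclusters,
                            const std::vector<int>& rowptr,
                            const std::vector<int>& colind,
                            const std::vector<double>& val,
                            const std::vector<int>& col_cluster) {
    std::vector<int> row_cluster(nrows);
    std::vector<double> w(nclusters);
    for (int i = 0; i < nrows; ++i) {
        std::fill(w.begin(), w.end(), 0.0);
        for (int k = rowptr[i]; k < rowptr[i + 1]; ++k)
            w[col_cluster[colind[k]]] += val[k] * val[k]; // weight of row i in cluster
        row_cluster[i] = int(std::max_element(w.begin(), w.end()) - w.begin());
    }
    return row_cluster;
}
\end{verbatim}
A practical implementation would need an additional rebalancing pass so that the diagonal blocks remain square.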
\subsection{Householder QR on Separators}
\label{sparseQR_S}
The factorization of interiors or separators at a level $l$ is done by applying a block Householder step (regular sparse QR). Here, we describe the QR factorization of a separator $s$ reinterpreted in our notation. Let $s$ be the separator of interest, $n$ be all its neighbors (i.e., $(A^TA)_{ns} \ne 0$), and $w$ be the rest of the nodes, disconnected from $s$ in the graph of $A^TA$. Let the nodes in $n$ be further categorized into $n=\{n_1, n_2, n_3 \}$. Nodes $n_1$ are such that $A_{n_1s} \ne 0$, while $A_{sn_1}$ may or may not be zero. Nodes $n_2$ are such that $A_{n_2s} = 0$ and $A_{sn_2} \ne 0$, and nodes $n_3$ are such that $A_{n_1n_3} \ne 0$, $A_{sn_3}=0$ and $A_{n_3s}=0$. All such nodes $n$ correspond to $(A^TA)_{ns} \ne 0$. Consider the matrix $A$ blocked in the following form,
\[ A = \begin{bmatrix}
A_{ss} & A_{sn_1} & A_{sn_2} & & \\
A_{n_1s} & A_{n_1n_1} & & A_{n_1n_3} & \\
& A_{n_2n_1} & A_{n_2n_2} & A_{n_2n_3} & A_{n_2w} \\
& A_{n_3n_1} & A_{n_3n_2} & A_{n_3n_3} & A_{n_3w} \\
& A_{wn_1} & A_{wn_2} & A_{wn_3} & A_{ww}
\end{bmatrix} \]
All the diagonal blocks are square as explained in the previous section. Consider the block Householder matrix $H$ such that,
\[ H^T \begin{bmatrix}
A_{ss} \\
A_{n_1s}
\end{bmatrix} = \begin{bmatrix}
R_{ss} \\
\\
\end{bmatrix}\]
where $R_{ss} \in \mathbb{R}^{|s|\times |s|}$ is upper triangular. Define,
\[H_s =
\begin{bmatrix}
H & \\
& I
\end{bmatrix}\] Then, \[ H_s^{T}A = \begin{bmatrix}
R_{ss} & R_{sn_1} & R_{sn_2} & R_{sn_3} & \\
& \Tilde{A}_{n_1n_1} & \Tilde{A}_{n_1n_2}& \Tilde{A}_{n_1n_3} & \\
& A_{n_2n_1} & A_{n_2n_2} & A_{n_2n_3} & A_{n_2w} \\
& A_{n_3n_1} & A_{n_3n_2} & A_{n_3n_3} & A_{n_3w} \\
& A_{wn_1} & A_{wn_2} & A_{wn_3} & A_{ww}
\end{bmatrix} = \begin{bmatrix}
R_{ss} & R_{sn} & \\
& \Tilde{A}_{nn}& A_{nw} \\
& A_{wn} & A_{ww}
\end{bmatrix} \]
Define, \[R_s = \begin{bmatrix}
R_{ss} & R_{sn} & \\
& I_n & \\
& & I_w
\end{bmatrix} \]
Then,
\[H_s^{T} A R_s^{-1} = \begin{bmatrix}
I_s & & \\
& \Tilde{A}_{nn} & {A}_{nw} \\
& A_{wn} & A_{ww}
\end{bmatrix}
\]
Hence, the cluster $s$ has been disconnected from the rest. In this process, we have introduced fill-in only between the neighbors $n$. There are no additional non-zeros in the blocks involving $w$ ($A_{nw}$, $A_{wn}$, and $A_{ww}$). This property is the key benefit of the ND ordering.
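The following dense toy example illustrates this step numerically: a Householder QR of the pivot panel $[A_{ss}; A_{n_1s}]$ zeroes out $A_{n_1s}$, places $R_{ss}$ on top, and never touches the rows associated with $w$. Eigen and the block sizes are assumptions made for illustration only.
\begin{verbatim}
#include <Eigen/Dense>
#include <iostream>
using Eigen::MatrixXd;

int main() {
    const int s = 3, n1 = 4, rest = 5, N = s + n1 + rest;
    MatrixXd A = MatrixXd::Random(N, N);
    A.bottomLeftCorner(rest, s).setZero(); // only rows {s, n1} touch columns of s

    // Householder QR of the pivot panel [A_ss; A_{n1 s}].
    Eigen::HouseholderQR<MatrixXd> qr(A.topLeftCorner(s + n1, s));

    // Apply H^T to the first s+n1 rows only: A_{n1 s} is zeroed out,
    // R_ss appears on top, and the remaining ("w") rows are never touched.
    MatrixXd top = qr.householderQ().transpose() * A.topRows(s + n1);
    A.topRows(s + n1) = top;

    std::cout << "||A(n1, s)|| after elimination = "
              << A.block(s, 0, n1, s).norm() << std::endl; // ~1e-15
    return 0;
}
\end{verbatim}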
\subsection{Sparsification of Interfaces}
\label{spars_s}
Once the interiors/separators at a level $l$ have been factorized, the algorithm goes through each interface and sparsifies it. Consider an interface $p$,
\[A =
\begin{bmatrix}
A_{pp} & A_{pn} & \\
A_{np} & A_{nn} & A_{nw} \\
& A_{wn} & A_{ww}
\end{bmatrix}\]
Assume the off-diagonal blocks $A_{np}$ and $A_{pn}$ are low-rank. Hence, the matrix $\begin{bmatrix}
A_{np}^T & \sigma A_{pp}^TA_{pn}
\end{bmatrix}$ can be well-approximated by a low rank matrix (for a scalar $\sigma$ to be defined later).
\[\begin{bmatrix}
A_{np}^{T} & \sigma A_{pp}^{T}A_{pn}
\end{bmatrix} = Q_{pp}W_{pn} = \begin{bmatrix}
Q_{pf} & Q_{pc}
\end{bmatrix}\begin{bmatrix}
W_{fn} \\
W_{cn}
\end{bmatrix} \text{with } \|W_{fn}\|_{_2}=\mathcal{O}(\epsilon)\]
\[\begin{bmatrix}
W_{fn}\\
W_{cn}
\end{bmatrix} = \begin{bmatrix}
W_{fn}^{(1)} & W_{fn}^{(2)} \\
W_{cn}^{(1)} & W_{cn}^{(2)}
\end{bmatrix} \]
Then,
\[A_{np}Q_{pc} = W_{cn}^{(1)T}, \text{ } A_{np}Q_{pf} = W_{fn}^{(1)T} = \mathcal{O}(\epsilon)\]
Define, \[Q_p = \begin{bmatrix}
Q_{pp} & & \\
& I & \\
& & I
\end{bmatrix}\]
\[ AQ_p = \begin{bmatrix}
\Tilde{A}_{ff} & \Tilde{A}_{fc} & A_{fn} & \\
\Tilde{A}_{cf} & \Tilde{A}_{cc} & A_{cn} &\\
\mathcal{O}(\epsilon) & W_{cn}^{(1)T} & A_{nn} & A_{nw} \\
& & A_{wn} & A_{ww}
\end{bmatrix} \quad \text{where,} \quad
A_{pn} = \begin{bmatrix}
A_{fn} \\
A_{cn}
\end{bmatrix}\]
where $\Tilde{A}_{ff}$ is a square block of size $|f| \times |f|$. Dropping the $\mathcal{O}(\epsilon)$ terms and applying a block Householder $H_f$ on the $f$ block (see \Cref{sparseQR_S}),
\[ H_f = \begin{bmatrix}
H & \\
& I
\end{bmatrix}\] where $H \in \mathbb{R}^{|p|\times |p|}$. If $H_{ff}$ represents the first $|f|$ columns of $H$, then $H_{ff}^T (AQ_p)_{(:,1:f)} = R_{ff}$
\[H_f^T A Q_p = \begin{bmatrix}
R_{ff} & R_{fc} & \r{R_{fn}} & \\
& \hat{A}_{cc} & \hat{A}_{cn} & \\
& W_{cn}^{(1)T} & A_{nn} & A_{nw} \\
& & A_{wn} & A_{ww}
\end{bmatrix} \]
The term $R_{fn}=\mathcal{O}(\epsilon)$ for an appropriate choice of the scalar $\sigma$. The value of $\sigma$ for which this is true is given by \Cref{lemma1}. The proof is given in \Cref{proof:lem1}.
\begin{restatable}{lemma}{sigchoice} \label{lemma1}
$\|R_{fn}\|_{_2} \leq \epsilon$, for $\sigma = \frac{1}{\sigma_{\text{min}}(A_p)}$ where $A_p = \begin{bmatrix}
A_{pp} \\
A_{np}
\end{bmatrix}$
\end{restatable}
Finally define, \[R_f = \begin{bmatrix}
R_{ff} & R_{fc} & & \\
& I_c & & \\
& & I_n & \\
& & & I_w
\end{bmatrix}\]
to get,
\[H_f^T AQ_p R_f^{-1} = \begin{bmatrix}
I_f & & & \\
& \hat{A}_{cc} & \hat{A}_{cn} & \\
& W_{cn}^{(1)T} & A_{nn} & A_{nw} \\
& & A_{wn} & A_{ww}
\end{bmatrix}\]
Hence, the fine nodes $f$ are disconnected from all the remaining nodes. The size of interface $p$ is decreased by $|f|$. The $A_{nn}$, $A_{nw}$, $A_{wn}$, and $A_{ww}$ blocks are not affected during the sparsification process. Thus, we can eliminate a part of $p$ without introducing additional non-zeros in the rest of the matrix. Note that the last two statements are true even if the term $R_{fn}$ were not $\mathcal{O}(\epsilon)$.
However, it is important that $\|R_{fn}\|_{_2}\leq \epsilon$ to ensure that the elimination tree structure of $A^TA$ is not affected. Remember that the QR factorization on $A$ and Cholesky on $A^TA$ are directly related. Hence, we need to ensure that we have not introduced fill-in in the $n-n$, $n-w$, $w-w$ blocks of $A^TA$ as well.
To understand this better, consider two nodes $n_1$ and $n_2$ such that $n_1, n_2\in n$ and they belong to two disjoint subtrees of the elimination tree (of $A^TA$). Then, by definition (see Corollary 3.2 in~\cite{elimination_tree}), $R_{n_1n_2} = 0$ during a direct QR factorization of $A$. However, suppose that $(A^TA)_{n_1n_2} \neq 0$ after sparsification of an interface in spaQR. This implies that a Householder transformation on the column $A_{:,n_1}$ will modify the column $A_{:,n_2}$, since the columns are not orthogonal \big($(A^TA)_{n_1n_2} \neq 0$\big). Ignoring any spurious cancellations that can occur, this leads to $R_{n_1n_2}\neq 0$. Thus, the fill-in guarantees that come with following the elimination tree ordering of the unknowns no longer hold.
In \Cref{well_sep}, we show that sparsification does not affect the elimination tree of $A^TA$, that is, any two disjoint subtrees of the elimination tree remain disjoint after sparsification of any interface. The proof depends on \Cref{lemma1} and is given in \Cref{proof:wellsep}.
\begin{restatable}{theorem}{wellsep}
\label{well_sep}
For any two interfaces $l$, $m$ such that the block $R_{lm} = 0$ in the direct QR factorization, we have $R_{lm} \approx 0$ in spaQR as well.
\end{restatable}
\subsection{Scaling of Interfaces}
\label{scale_s}
The $\sigma$ factor in the sparsification step was chosen to be $\sigma_{\text{min}}(A_p)^{-1}$. This factor was necessary to ensure that $R_{fn} = \mathcal{O}(\epsilon)$ in \Cref{lemma1}, which in turn was necessary to prove \Cref{well_sep}. However, when $A_p$ (or $A$) is ill-conditioned, $\sigma$ can be large, which leads to a slower decay of the singular values of $\begin{bmatrix}
A_{np}^T & \sigma A_{pp}^TA_{pn}
\end{bmatrix}$. Thus, even if the off-diagonal blocks have a fast decay of singular values, we cannot take full advantage of it. In addition to fixing this, we get improved accuracy by scaling the diagonal blocks corresponding to all interfaces before sparsification. This gives better error guarantees, as shown in \Cref{scaling_err_s}. Similar rescaling ideas have been shown to improve accuracy in~\cite{2019arXiv190102971C,FeliuFab2018RecursivelyPH, Xia2017EffectiveAR} for sparse Cholesky factorization of hierarchical matrices.
Consider an interface $p$ and its neighbors $n$,
\[A = \begin{bmatrix}
A_{pp} & A_{pn} \\
A_{np} & A_{nn}
\end{bmatrix}\]
Find the QR decomposition of $A_{pp}$; $A_{pp} = U_{pp} R_{pp}$. Then \[U_{pp}^T A_{pp} R_{pp}^{-1} = I\]
Define, \[U_p = \begin{bmatrix}
U_{pp}^T &\\
& I
\end{bmatrix} \qquad R_p = \begin{bmatrix}
R_{pp}^{-1} & \\
& I_n
\end{bmatrix}\]
Then,
\[U_p^TAR_p = \begin{bmatrix}
I_p & \Tilde{A}_{pn} \\
\Tilde{A}_{np} & A_{nn}
\end{bmatrix} \]
Similarly we scale the diagonal blocks corresponding to all the remaining interfaces. Once the interfaces are scaled, sparsification is straightforward; compress, \[ \begin{bmatrix}
\Tilde{A}_{np}^T & \Tilde{A}_{pn}
\end{bmatrix} = Q_{pp}W_{pn} = \begin{bmatrix}
Q_{pf} & Q_{pc}
\end{bmatrix}\begin{bmatrix}
W_{fn} \\
W_{cn}
\end{bmatrix} \quad \text{with} \quad \|W_{fn}\|_{_2}=\mathcal{O}(\epsilon) \]
Defining $Q_p$ as in \Cref{spars_s}, we find that sparsification and factorization of the `fine' nodes boils down to applying $Q_p$ on the left and right of the matrix.
\[Q_p^T U_p^TAR_p Q_p = \begin{bmatrix}
I_f & & \r{E_2} \\
& I_c & \hat{A}_{cn}\\
\r{E_1} & \hat{A}_{nc} & A_{nn}
\end{bmatrix}\]
where $\r{E_1} = W_{fn}^{(1)T}$, $\r{E_2} = W_{fn}^{(2)}$ and $\|E_1\|_{_2}\approx\|E_2\|_{_2}\leq \epsilon$. Since \Cref{lemma1} holds true, \Cref{well_sep} also holds. Hence, the algorithm can proceed without breaking the elimination tree structure.
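As a concrete illustration of one scaled sparsification step, the sketch below compresses the couplings of a toy interface, with an SVD standing in for the column-pivoted QR used in the actual implementation; the block sizes, the rank of the couplings, and the use of Eigen are all assumptions for illustration.
\begin{verbatim}
#include <Eigen/Dense>
#include <iostream>
using Eigen::MatrixXd;

int main() {
    const int p = 20, n = 30, rank = 5;
    const double eps = 1e-8;

    // Toy low-rank couplings of a scaled interface p (A_pp = I) with n.
    MatrixXd G = MatrixXd::Random(p, rank);
    MatrixXd Anp_T = G * MatrixXd::Random(rank, n); // = A_np^T
    MatrixXd Apn   = G * MatrixXd::Random(rank, n);

    // Compress [A_np^T  A_pn] = Q W; the coarse basis Q_pc consists of
    // the left singular vectors with singular values above eps.
    MatrixXd M(p, 2 * n);
    M << Anp_T, Apn;
    Eigen::BDCSVD<MatrixXd> svd(M, Eigen::ComputeThinU);
    int r = 0;
    while (r < svd.singularValues().size() && svd.singularValues()(r) > eps) ++r;
    MatrixXd Qpc = svd.matrixU().leftCols(r);

    // The dropped coupling of the fine nodes is O(eps):
    MatrixXd Wfn = M - Qpc * (Qpc.transpose() * M);
    std::cout << "coarse size |c| = " << r
              << ", ||W_fn|| = " << Wfn.norm() << std::endl; // ~1e-14
    return 0;
}
\end{verbatim}
Here the norm of the dropped block plays the role of $\|W_{fn}\|_{_2}$, and the coarse size $|c|$ equals the numerical rank of the couplings at tolerance $\epsilon$.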
\subsection{Merging of clusters}
\label{sec:merge}
Once the factorization of separators at a level is done, the interfaces of the remaining ND separators are merged following the cluster hierarchy. For example, in \Cref{mnd_multilvl}, once the leaves $l=4$ and the $l=3$ separators are factorized, the interfaces of the separators at $l=1,2$ are merged following the clustering hierarchy shown in \Cref{lvl1_ch}, \Cref{lvl2_ch}. Merging simply means combining the block rows and columns of the interfaces into a single block matrix.
\subsection{Sparsified QR}
We now have all the building blocks to write down the spaQR algorithm. Given a matrix, we typically pre-process it so that the 2-norm of each column is a constant. Then the matrix is partitioned to identify separators and interfaces (\Cref{ord_clus}) and is appropriately reordered. The spaQR algorithm involves applying a sequence of block Householder factorizations $H_s, R_s$ (\Cref{sparseQR_S}), scaling $U_p, R_p$ (\Cref{scale_s}), sparsification of the interfaces $Q_p$ (\Cref{spars_s}), permutations to take care of the fine nodes, and merging of the clusters (\Cref{sec:merge}), at each level $l$, such that,
\[ Q^T A W^{-1} \approx I\]
where,
\begin{align*}
Q &= \prod_{l=1}^{L}\Bigg( \prod_{s\in S_l}H_s \prod_{p\in C_l}U_p \prod_{p\in C_l} Q_p\Bigg)\\
W &= \prod_{l=L}^{1}\Bigg(\prod_{p\in C_l} Q_p^T\prod_{p\in C_l}R_p \prod_{s\in S_l}R_s \Bigg)
\end{align*}
\begin{algorithm}
\caption{Sparsified QR (spaQR) algorithm}
\begin{algorithmic}[1]
\REQUIRE {Sparse matrix A, Tolerance $\epsilon$}
\STATE {Compute column and row partitioning of A, infer separators and interfaces (see \Cref{ord_clus})}
\FORALL{$l=L, L-1, \dots 1$}
\FORALL{separators $s$ at level $l$}
\STATE {Factorize $s$ using block Householder (see \Cref{sparseQR_S})}
\STATE {Append $H_s$ to $Q$ and $R_s$ to $W$}
\ENDFOR
\FORALL{interfaces $p$ remaining at level $l$}
\STATE {Perform block diagonal scaling on $p$ (see \Cref{scale_s})}
\STATE{Append $U_p$ to $Q$ and $R_p$ to $W$}
\ENDFOR
\FORALL{interfaces $p$ remaining at level $l$}
\STATE {Sparsify interface $p$ (see \Cref{spars_s}, \Cref{scale_s})}
\STATE{Append $Q_p$ to $Q$ and $Q_p^T$ to $W$}
\ENDFOR
\FORALL{separators $s$ remaining at level $l$}
\STATE {Merge interfaces of $s$ one level following the cluster hierarchy (see \Cref{sec:merge})}
\ENDFOR
\ENDFOR
\RETURN {$Q = \prod_{l=1}^{L}\Bigg( \prod_{s\in S_l}H_s \prod_{p\in C_l}U_p \prod_{p\in C_l} Q_p\Bigg)$\\
\qquad \qquad $W = \prod_{l=L}^{1}\Bigg(\prod_{p\in C_l} Q_p^T\prod_{p\in C_l}R_p \prod_{s\in S_l}R_s \Bigg)$ such that $Q^TAW^{-1} \approx I$ }
\end{algorithmic}
\label{Algo: spaQR}
\end{algorithm}
Here, $S_l$ is the set of all separators at level $l$ in the elimination tree and $C_l$ is the set of all interfaces remaining after the factorization of separators at level $l$. $Q$ is a product of orthogonal matrices and $W$ is a product of upper triangular and orthogonal matrices. Since $Q$ and $W$ are available as a sequence of elementary transformations, they are easy to invert. The complete algorithm is presented in \Cref{Algo: spaQR}.
\section{Theoretical results}
\label{theoretical_results_sec}
In this section, we study the error introduced during the sparsification process, the effect of scaling, and the effectiveness of using spaQR as a preconditioner with iterative methods. Finally, we discuss the theoretical complexity of the spaQR algorithm.
\subsection{Error Analysis}
\label{Error_s}
Consider a simple $2\times 2$ block matrix $A$.
\[A = \begin{bmatrix}
A_{pp} & A_{pn} \\
A_{np} & A_{nn}
\end{bmatrix}\]
After sparsification, interface $p$ is split into fine $f$ and coarse $c$ nodes,
\[AQ_p =
\begin{bmatrix}
A_{ff} & A_{fc} & A_{fn} \\
A_{cf} & A_{cc} & A_{cn} \\
\r{E} & A_{nc} & A_{nn}
\end{bmatrix}\]
where $\|\r{E}\|_{_2}\leq \epsilon$. After performing Householder QR on the $f$ columns,
\begin{align*}
H_f^TA Q_p &= \begin{bmatrix}
R_{ff} & R_{fc} & \r{R_{fn}} \\
& \hat{A}_{cc} & \hat{A}_{cn} \\
\r{E} & A_{nc} & A_{nn}
\end{bmatrix} \\
&= \begin{bmatrix}
I_f & & \r{R_{fn}} \\
& \hat{A}_{cc} & \hat{A}_{cn} \\
\r{E}R_{ff}^{-1} & A_{nc}-\r{E}R_{ff}^{-1}R_{fc} & A_{nn}
\end{bmatrix} \begin{bmatrix}
R_{ff} & R_{fc} & \\
& I_c & \\
& & I_n
\end{bmatrix}
\end{align*}
where $\|\r{R_{fn}}\|_{_2}\leq \epsilon$. Then,
\[H_f^TAQ_pR_f^{-1} = \begin{bmatrix}
I_f & & \r{R_{fn}} \\
& \hat{A}_{cc} & \hat{A}_{cn} \\
\r{E}R_{ff}^{-1} & A_{nc}-\r{E}R_{ff}^{-1}R_{fc} & A_{nn}
\end{bmatrix}\]
Define,
\[H_f^T\Tilde{A}Q_pR_f^{-1} = \begin{bmatrix}
I_f & & \\
& \hat{A}_{cc} & \hat{A}_{cn} \\
& A_{nc} & A_{nn}
\end{bmatrix}\]
as the approximation when $\r{E}$ and $\r{R_{fn}}$ are dropped in our algorithm. Then the error in the approximation is,
\[H_f^T(A-\Tilde{A})Q_pR_f^{-1} =\begin{bmatrix}
& & \r{R_{fn}} \\
& & \\
\r{E}R_{ff}^{-1} & -\r{E}R_{ff}^{-1}R_{fc} &
\end{bmatrix} \]
\begin{align*}
\|H_f^T(A-\Tilde{A})Q_pR_f^{-1}\|_{_2} & \leq c_1 \|\r{E}R_{ff}^{-1}R_{fc}\|_{_2}
\leq c_1 \|E\|_{_2} \; \|R_{ff}^{-1}\|_{_2} \; \|R_{fc}\|_{_2} \\
&\leq c_1\epsilon \; \frac{1}{\sigma_{\text{min}}(A_p)} \; \sigma_{\text{max}}(A_p)
= c_1\kappa(A_p) \; \epsilon
\end{align*}
where $c_1$ is a constant. We have used the facts that,
$$\begin{bmatrix}
R_{fc} \\
R_{cc} \\
A_{nc}
\end{bmatrix} = H_f^T \begin{bmatrix}
A_{fc} \\
A_{cc} \\
A_{nc}
\end{bmatrix}
\quad \text{and} \quad
\|R_{fc}\|_{_2} \leq \Bigg\|\begin{bmatrix}
A_{fc} \\
A_{cc} \\
A_{nc}
\end{bmatrix}\Bigg\|_{_2} \leq \Bigg\|\begin{bmatrix}
A_{pp} \\
A_{np} \\
\end{bmatrix}\Bigg\|_{_2}
= \sigma_{\text{max}}(A_p)
$$
in proving the above result. Thus, when $A_p$ is ill-conditioned, it is possible that $R_{ff}$ is ill-conditioned and the error in the approximation is worse than $\epsilon$. We can improve the upper bound on the error by first scaling the interfaces as we prove next.
\subsection{Accuracy of scaling}
\label{scaling_err_s}
Scale the diagonal blocks of all interfaces before sparsification as outlined in \Cref{scale_s}. If $U$ is the scaled version of $A$, then $H_f = Q_p$ and $R_f = I$. Then,
\[ Q_p^TUQ_p = \begin{bmatrix}
I_f & & \\
& I_c & \hat{A}_{cn}\\
& \hat{A}_{nc} & I_n
\end{bmatrix} + \begin{bmatrix}
& & \r{E_2} \\
& & \\
\r{E_1} & &
\end{bmatrix} \]
Define,
\[ Q_p^T \Tilde{U} Q_p = \begin{bmatrix}
I_f & & \\
& I_c & \hat{A}_{cn}\\
& \hat{A}_{nc} & I_n
\end{bmatrix} \]
Then the approximation error is,
\[
\|Q_p^T(U-\Tilde{U})Q_p\|_{_2} = \|E_1\|_{_2} = \|E_2\|_{_2} \leq \epsilon
\]
Thus, we have a better error bound by rescaling the diagonal blocks before sparsification.
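To make the effect concrete, consider an illustrative case (hypothetical numbers) with $\kappa(A_p) = 10^6$ and $\epsilon = 10^{-8}$. The unscaled bound of \Cref{Error_s} only guarantees an error of order $c_1\kappa(A_p)\,\epsilon = c_1\cdot 10^{-2}$, whereas with scaling the error remains bounded by $\epsilon = 10^{-8}$.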
\subsection{Effectiveness of the preconditioner}
Consider the same $2 \times 2$ block matrix A. After scaling and sparsification of interface $p$, we have
\[
Q_p^TU Q_p =\begin{bmatrix}
I_f & & \\
& I_c & \hat{A}_{cn}\\
& \hat{A}_{nc} & I_n
\end{bmatrix} + \begin{bmatrix}
& & \r{E_2} \\
& & \\
\r{E_1} & &
\end{bmatrix}
\]
Let us complete the factorization by performing an exact QR factorization on the $c$ and $n$ blocks as follows
\begin{align*}
H_c^T Q_p^T U Q_p &=\begin{bmatrix}
I_f & & \\
& R_{cc} & R_{cn}\\
& & \hat{A}_{nn}
\end{bmatrix} + H_c^T \begin{bmatrix}
& & \r{E_2} \\
& & \\
\r{\Tilde{E}_1} & &
\end{bmatrix}\\
H_c^T Q_p^TU Q_p R_c^{-1}
&= \begin{bmatrix}
I_f & & \\
& I_c & \\
& & \hat{A}_{nn}
\end{bmatrix} + H_c^T \begin{bmatrix}
& & \r{E_2} \\
& & \\
\r{\Tilde{E}_1} & &
\end{bmatrix} \\
S = H_n^TH_c^T Q_p^TU Q_p R_c^{-1} R_n^{-1} &=\begin{bmatrix}
I_f & & \\
& I_c & \\
& & I_n
\end{bmatrix} + H_n^TH_c^T \begin{bmatrix}
& & \r{E_2}R_{nn}^{-1} \\
& & \\
\r{\Tilde{E}_1} & &
\end{bmatrix}
\end{align*}
With this, we have $S$ as the preconditioned matrix. The final error is,
\[E = H_n^TH_c^T \begin{bmatrix}
& & \r{E_2}R_{nn}^{-1} \\
& & \\
\r{\Tilde{E}_1} & &
\end{bmatrix}
\]
If we write $H_c = \begin{bmatrix}
H_{cc} & H_{cn}
\end{bmatrix}$, then $\hat{A}_{nn} = H_{cn}^T \begin{bmatrix}
\hat{A}_{cn} \\
I_n
\end{bmatrix}$. Since $\hat{A}_{nn}$ is the product of an orthogonal matrix and a well-conditioned matrix, $\hat{A}_{nn}$ is also well-conditioned. Therefore, $\|R_{nn}^{-1}\|_{_2} = \mathcal{O}(1)$.
Then,
\[ \|E\|_{_2} = \mathcal{O}(\epsilon)
\]
The condition number of the preconditioned matrix $S = I+E$ can be calculated as follows,
\[ \sigma_{\max} (S) = \max_{x\in \mathbb{R}^{M}} \frac{\|Ix+Ex\|_{_2}}{\|x\|_{_2}} \leq \max_{x\in \mathbb{R}^{M}} \frac{\|Ix\|_{_2}}{\|x\|_{_2}} + \max_{x\in \mathbb{R}^{M}} \frac{\|Ex\|_{_2}}{\|x\|_{_2}} = 1+\|E\|_{_2} \]
\[ \sigma_{\min} (S) = \min_{x\in \mathbb{R}^{M}} \frac{\|Ix+Ex\|_{_2}}{\|x\|_{_2}} \geq \min_{x\in \mathbb{R}^{M}} \frac{\|Ix\|_{_2}}{\|x\|_{_2}} - \max_{x\in \mathbb{R}^{M}} \frac{\|Ex\|_{_2}}{\|x\|_{_2}} = 1-\|E\|_{_2} \]
Therefore,
\[ \kappa(S) \leq \frac{1+ \|E\|_{_2}}{1-\|E\|_{_2}}\]
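For instance, a tolerance for which $\|E\|_{_2} = 10^{-2}$ (an illustrative value) gives
\[ \kappa(S) \leq \frac{1+10^{-2}}{1-10^{-2}} \approx 1.02, \]
so GMRES on the preconditioned system can be expected to converge in a handful of iterations, essentially independently of $N$.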
\subsection{Complexity Analysis}
\label{sec:complexity}
In this section, we discuss the complexity of the spaQR algorithm under some assumptions. Consider the Nested Dissection process on the graph of $A^TA$ ($G_{A^TA}$). Define a node as a subgraph of $G_{A^TA}$. The root of the tree corresponds to $l=1$ and the root node is the entire graph $G_{A^TA}$. The children nodes are subgraphs of $G_{A^TA}$ disconnected by a separator.
We assume that the matrices and their graphs satisfy the following properties.
\begin{enumerate}
\item The leaf nodes in the elimination tree contain at most $N_0$ nodes, where $N_0 \in \mathcal{O}(1)$.
\item Let $D_i$ be the set of all descendants $j$ of a node $i$ whose size $n_j$ is at least $n_i/2$. We assume that the size of $D_i$ is bounded, that is, $|D_i|=\mathcal{O}(1)$ for all $i$.
\item All the Nested Dissection separators are minimal. That is, every vertex in the separator connects two disconnected nodes in $G_{A^TA}$.
\item The number of edges leaving a node (subgraph) of size $n_i$ is at most $n_i^{2/3}$. In other words, a node of size $n_i$ is connected to at most $n_i^{2/3}$ vertices in $G_{A^TA}$. Most matrices that arise in the discretization of 2D and 3D PDEs satisfy this property.
\end{enumerate}
\paragraph{Direct Householder QR} We first recover the cost of direct QR on $A$ with Nested Dissection partitioning on PDEs discretized on a 3D grid. Consider a node $i$ of size $2^{-l+1}N \leq n_i \leq 2^{-l+2}N$ at a level $l$ in the elimination tree. By assumption 4, the associated separator has size at most \[c_l \in \mathcal{O}\Big(2^{-2l/3}N^{2/3}\Big)\]
The fill-in from Householder QR on the interiors results in at most $\mathcal{O}(2^{-2l/3}N^{2/3})$ non-zeros per row and column. This is because of assumption 4 and the fact that new connections are introduced only between the distance 1 neighbors of a node in $G_{A^TA}$. Thus, the cost of Householder QR on a separator is
\[
h_l \in \mathcal{O}\Big(\big(2^{-2l/3}N^{2/3}\big)^3\Big)
= \mathcal{O}\big(2^{-2l}N^2\big)
\]
By the pigeonhole principle, the number of nodes of size $n_i$, with $2^{-l+1}N \leq n_i \leq 2^{-l+2}N$ is bounded by $2^{l-1}$. Then, the total cost of a direct Householder QR on the matrix is,
\[t_{\text{QR, fact}} \in \mathcal{O}\Bigg(\sum_{l=1}^{L}2^{l}h_l\Bigg) = \mathcal{O}\Bigg(\sum_{l=1}^{L}2^{-l}N^2\Bigg) = \mathcal{O}\big(N^2\big) \qquad L \in \Theta(\log(N/N_0))\]
The cost of applying the factorization can be derived similarly. Solving with a given right-hand side $b$ involves applying a sequence of orthogonal and upper triangular transformations corresponding to the factorization of each interior/separator. Since, for a node of size $2^{-l+1}N \leq n_i \leq 2^{-l+2}N$, the associated separator has a size of $c_l$ with at most $\mathcal{O}(2^{-2l/3}N^{2/3})$ non-zeros per row/column, the total cost of applying the factorization is,
\[t_{\text{QR, apply}} \in \mathcal{O}\Bigg(\sum_{l=1}^{L}2^{l} \Big(2^{-2l/3}N^{2/3}\Big)^2 \Bigg) = \mathcal{O}\big(N^{4/3}\big)\]
\paragraph{spaQR} Next, we show that the complexity of spaQR factorization is $\mathcal{O}(N\log N)$. To show this, we need additional assumptions on the sparsification process and the size of interfaces defined in \Cref{ord_clus}. Remember that an interface is a multilevel partitioning of a separator constructed such that its size is comparable to the diameter of the subdomains at that level (see \Cref{int}). Assume that sparsification reduces the size of an interface at level $l$ to,
\[c_l' \in \mathcal{O}(2^{-l/3}N^{1/3})\]
Thus the size of a separator decreases from $c_l$ to $c_l'$ before it is factorized. This means that the rank scales roughly as the diameter of the separator. This assumption is a consequence of low rank interactions between separators that are far away in $G_{A^TA}$. This is comparable to complexity assumptions in the fast multipole method~\cite{FMM_1, greengard_rokhlin_1997}, spaND~\cite{2019arXiv190102971C}, and HIF~\cite{Ho2016HierarchicalIF}. Further, assume that an interface has $\mathcal{O}(1)$ neighbor interfaces.
The fill-in in the sparsified QR process results in at most $\mathcal{O}(2^{-l/3}N^{1/3})$ entries in each row and column. This is in part due to the assumption on the size of the interfaces, the number of neighbor interfaces and the fact that new connections are only made between distance 1 neighbors of a node in $G_{A^TA}$.
The total cost of spaQR factorization can be split into two parts:
\begin{itemize}
\item Householder QR on interiors/separators. The size of a separator is $c_l' \in \mathcal{O}(2^{-l/3}N^{1/3})$ right before it is factorized and has at most $\mathcal{O}(2^{-l/3}N^{1/3})$ non-zeros per row/column. Then the cost of Householder QR on a separator is
\[h_l' \in \mathcal{O}\Big(\big(2^{-l/3}N^{1/3}\big)^3\Big) = \mathcal{O}\big(2^{-l}N\big)\]
\item Scaling and sparsification of interfaces. The cost of scaling (QR on a block of size $c_l'\times c_l'$) an interface is $\mathcal{O}\big(2^{-l}N\big)$. Similarly, the cost of sparsifying (rank-revealing QR) an interface is also $\mathcal{O}\big(2^{-l}N\big)$ because of the assumptions on the size and number of non-zeros per row/column of an interface.
\end{itemize}
Hence, the total cost of the spaQR algorithm is
\[t_{\text{spaQR}} \in \mathcal{O}\Bigg(\sum_{l=1}^{L}2^{l}2^{-l}N\Bigg) = \mathcal{O}\Bigg(\sum_{l=1}^{L} N\Bigg) = \mathcal{O}(N \log N), \qquad L \in \Theta(\log(N/N_0))\]
The total cost of applying the factorization is
\[t_{\text{spaQR, apply}} \in \mathcal{O}\Bigg(\sum_{l=1}^{L}2^{l} \Big(2^{-l/3}N^{1/3}\Big)^2 \Bigg) = \mathcal{O}(N)\]
The memory requirements scale as the cost of applying the factorization. We show some numerical results on the size of the interfaces, the number of non-zeros per row and column of an interface block, and the cost of sparsification per level on a typical example in \Cref{Sec: Profiling}. These experimental results corroborate the assumptions made here.
\section{Benchmarks}
\label{benchmarks}
In this section, we benchmark the performance of the algorithm in solving unsymmetric systems of linear equations (high and low contrast advection diffusion problems) on uniform 2D and 3D grids, and on sparse matrices from the SuiteSparse Matrix Collection~\cite{suitesparse} and the SPARSKIT collection~\cite{Boisvert1997}. We use geometric partitioning on $A^TA$ to get the separators and interfaces for the advection diffusion problems on regular grids, and hypergraph-based partitioning on $A$ using PaToH~\cite{atalyrek2011PaToHT} for the non-regular problems. For a given matrix $A$ and a tolerance $\epsilon$, the spaQR algorithm (\Cref{Algo: spaQR}) is used to compute an approximate factorization, which is then used as a preconditioner with a suitable iterative solver. GMRES is used as the iterative solver and the convergence criterion is set as $\|Ax-b\|_{_2}/\|b\|_{_2} \leq 10^{-12}$.
The algorithm is written in C++. We use GCC 8.1.0 and Intel(R) MKL 2019 for Linux for the BLAS and LAPACK operations. The number of levels in the nested dissection process is chosen as $\lceil\log (N/64)/\log 2\rceil$ for a matrix of size $N \times N$. Low-rank approximations are performed using LAPACK's dlaqps routine, which performs a column-pivoted QR on $r$ columns. The value $r$ is chosen such that $\frac{|R_{ii}|}{|R_{11}|} \geq \epsilon$ for $1\leq i \leq r$, where $R$ is the upper triangular matrix returned by the column-pivoted QR method. We typically begin sparsification at level 3 or 4.
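For reference, the rank-selection rule can be expressed as follows on top of Eigen's column-pivoted QR; this is only an illustrative equivalent, since the implementation calls dlaqps directly.
\begin{verbatim}
#include <Eigen/Dense>
#include <cmath>

// Rank-selection rule |R_ii|/|R_11| >= eps, expressed with Eigen's
// column-pivoted QR (the actual code uses LAPACK's dlaqps).
int chooseRank(const Eigen::MatrixXd& M, double eps) {
    Eigen::ColPivHouseholderQR<Eigen::MatrixXd> qr(M);
    Eigen::MatrixXd R = qr.matrixR().triangularView<Eigen::Upper>();
    const double r11 = std::abs(R(0, 0));
    const int kmax = (int)std::min(M.rows(), M.cols());
    int r = 0;
    while (r < kmax && std::abs(R(r, r)) / r11 >= eps) ++r;
    return r; // number of columns kept in the coarse block
}
\end{verbatim}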
\subsection{Impact of Scaling}
We compare the performance of the spaQR algorithm with and without the block diagonal scaling described in \Cref{scale_s}, first on flow problems on regular grids and then on non-regular problems.
\subsubsection{High contrast Advection Diffusion equations in 2D}
Consider the variable coefficient advection diffusion equation,
\[ -\nabla \cdot \big(a(\mathbf{x}) \nabla u(\mathbf{x})\big) + q \nabla \cdot \big(b(\mathbf{x}) u(\mathbf{x})\big) = f \quad \forall \mathbf{x} \in \Omega =[0,1]^2, \quad u|_{\partial\Omega}=0 \]
where $a(\mathbf{x})$ and $b(\mathbf{x})$ are sufficiently regular functions. In this example, the function $a(\mathbf{x})$ is a high contrast field quantized by a parameter $\rho$. Specifically, the field is built as follows on an $n \times n$ grid:
\begin{itemize}
\item For every grid point $(i,j)$ choose $\hat{a}_{ij}$ uniformly at random between 0 and 1
\item Smooth $\hat{a}$ by convolving with a unit-width Gaussian
\item Define \[a_{ij} = \begin{cases}
\rho & \text{if }\hat{a}_{ij} \geq 0.5 \\
\rho^{-1} & \text{otherwise }
\end{cases}\]
\end{itemize}
The values of $b(\mathbf{x})$ and $q$ are set to 1. The equation is discretized on a uniform 2D $n \times n$ grid. The matrices corresponding to this discretization are generated using the open source code from~\cite{leopold_matrixgen}.
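For concreteness, a hypothetical re-implementation of this field construction is sketched below (the benchmarks themselves use the generator from~\cite{leopold_matrixgen}); the kernel truncation width and the boundary clamping are our own choices.
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Build the piecewise-constant field a taking values {rho, 1/rho}.
std::vector<std::vector<double>> buildField(int n, double rho, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<std::vector<double>> a(n, std::vector<double>(n));
    for (auto& row : a) for (auto& v : row) v = unif(gen);

    // Truncated unit-width Gaussian kernel for the smoothing step.
    const int w = 3;
    std::vector<double> k(2 * w + 1);
    double ksum = 0.0;
    for (int d = -w; d <= w; ++d) ksum += (k[d + w] = std::exp(-0.5 * d * d));
    for (auto& v : k) v /= ksum;

    // Separable convolution (horizontal pass, then vertical pass).
    auto tmp = a;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double s = 0.0;
            for (int d = -w; d <= w; ++d)
                s += k[d + w] * a[i][std::clamp(j + d, 0, n - 1)];
            tmp[i][j] = s;
        }
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            double s = 0.0;
            for (int d = -w; d <= w; ++d)
                s += k[d + w] * tmp[std::clamp(i + d, 0, n - 1)][j];
            a[i][j] = s;
        }

    // Threshold at 0.5 to obtain the high contrast field.
    for (auto& row : a) for (auto& v : row) v = (v >= 0.5) ? rho : 1.0 / rho;
    return a;
}
\end{verbatim}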
\begin{figure}[tbhp]
\centering
\begin{subfigure}{\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{loglogaxis}[
ylabel style = {align=center},
title style = {align = center},
title = {$\rho = 1$ \\ $\kappa(A)\approx 10^4\text{--}10^6$},
scale = 0.55,
ylabel= {$\#$ GMRES \\ $\epsilon=10^{-3}$},
xmin=90, xmax=3200,
ymin=2, ymax=300,
xtick = \empty,
extra x ticks = {127, 255, 511, 1023, 2047},
extra x tick labels = \empty,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed
]
\addplot coordinates {
(127, 5)(255, 6) (511, 7) (1023, 7) (2047, 8)
};
\addplot coordinates {
(127, 17)(255,40) (511, 81) (1023, 159) (2047, 300)
};
\end{loglogaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\begin{loglogaxis}[
title style = {align = center},
title = {$\rho = 10$ \\ $\kappa(A)\approx 10^4\text{--}10^7$},
scale = 0.55,
xmin=90, xmax=3200,
ymin=2, ymax=300,
xtick = \empty,
yticklabel = \empty,
extra x ticks = {127, 255, 511, 1023, 2047},
extra x tick labels = \empty,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed
]
\addplot coordinates {
(127, 5)(255, 7) (511, 8) (1023, 10) (2047,12)
};
\addplot coordinates {
(127, 22)(255,90) (511, 174) (1023,nan ) (2047, nan)
};
\addplot[draw = red, dashed] coordinates {
(127, 22)(255,90) (511, 174) (1023, 360 ) (2047, nan)
};
\end{loglogaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west,
text width=40pt},
legend cell align=left}
\begin{loglogaxis}[
title style = {align = center},
title = {$\rho = 100$ \\ $\kappa(A)\approx 10^6\text{--}10^8$},
legend columns = 1,
legend style = {draw = none},
scale = 0.55,
xmin=90, xmax=3200,
ymin=2, ymax=300,
xtick = \empty,
extra x ticks = {127, 255, 511, 1023, 2047},
extra x tick labels = \empty,
ymajorgrids=true,
yticklabel = \empty,
line width=0.25mm,
grid style=dashed,
legend entries = {spaQR, spaQR w/o scaling}
]
\addplot coordinates {
(127, 6)(255, 9) (511, 13) (1023, 21) (2047, 27)
};
\addplot coordinates {
(127, 22)(255,121) (511, nan) (1023, nan) (2047, nan)
};
\addplot[draw = red, dashed] coordinates {
(127, 22)(255,121) (511, 660) (1023, nan) (2047, nan)
};
\end{loglogaxis}
\end{tikzpicture}}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{loglogaxis}[
ylabel style = {align=center},
scale = 0.55,
ylabel= {$\#$ GMRES \\ $\epsilon=10^{-5}$},
xlabel = {$n$},
xmin=90, xmax=3200,
ymin=2, ymax=300,
xtick = \empty,
extra x ticks = {127, 255, 511, 1023, 2047},
extra x tick labels = {127, ,511, , 2047},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed
]
\addplot coordinates {
(127, 3)(255, 3) (511,3) (1023, 3) (2047, 4)
};
\addplot coordinates {
(127, 8)(255,13) (511, 23) (1023, 46) (2047, 91)
};
\end{loglogaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\begin{loglogaxis}[
scale = 0.55,
xmin=90, xmax=3200,
ymin=2, ymax=300,
xtick = \empty,
yticklabel = \empty,
xlabel = $n$,
extra x ticks = {127, 255, 511, 1023, 2047},
extra x tick labels = {127, ,511, , 2047},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed
]
\addplot coordinates {
(127, 3)(255,3) (511, 4) (1023, 4) (2047,4)
};
\addplot coordinates {
(127, 11)(255,27) (511,63) (1023,123 ) (2047, 267)
};
\end{loglogaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west,
text width=40pt},
legend cell align=left}
\begin{loglogaxis}[
legend columns = 1,
legend style = {draw = none},
scale = 0.55,
xmin=90, xmax=3200,
ymin=2, ymax=300,
xtick = \empty,
extra x ticks = {127, 255, 511, 1023, 2047},
extra x tick labels = {127, ,511, , 2047},
ymajorgrids=true,
yticklabel = \empty,
xlabel = $n$,
line width=0.25mm,
grid style=dashed,
legend entries = {spaQR, spaQR w/o
scaling}
]
\addplot coordinates {
(127, 3)(255, 4) (511, 5) (1023, 5) (2047, 5)
};
\addplot coordinates {
(127, 13)(255,57) (511, 259) (1023, nan) (2047, nan)
};
\end{loglogaxis}
\end{tikzpicture}}
\end{subfigure}
\caption{Comparison of the spaQR algorithm with and without scaling on 2D $n \times n$ High Contrast Advection Diffusion problems for three values of the parameter $\rho$. The two variations of the spaQR algorithm are compared for two values of the tolerance $\epsilon=10^{-3}$, $10^{-5}$. Notice that the spaQR algorithm (with scaling) outperforms the variant without scaling in all cases. Moreover, for small enough $\epsilon$, the spaQR algorithm converges in a constant number of iterations irrespective of the problem size, for all three values of the parameter $\rho$.}
\label{fig:hc_ad_comparison}
\end{figure}
In \Cref{fig:hc_ad_comparison}, we compare the number of GMRES iterations needed for convergence by the two variants of the algorithm for three values of the parameter $\rho$. The problem becomes increasingly ill-conditioned as the parameter $\rho$ increases. The spaQR algorithm (with scaling) performs much better than the variant without block diagonal scaling. For small enough tolerance $\epsilon$, the convergence of the spaQR algorithm is independent of the problem size $N=n^2$.
\subsubsection{Non-regular problems}
Next, we test the two variants of the spaQR algorithm on a set of matrices taken from the SuiteSparse Matrix Collection~\cite{suitesparse}. The names of the matrices and their properties, such as the size, the number of non-zero entries, the pattern symmetry, the numerical symmetry, and the application domain, are given in \Cref{Table: suite sparse}. The matrices are partitioned using the modified HUND approach, and the row ordering is performed based on the heuristics discussed in \Cref{ord_clus}.
The number of GMRES iterations taken by the two variants of the spaQR algorithm for the ten matrices listed in \Cref{Table: suite sparse} is given in \Cref{Table: suite_sparse_results}. At tolerance $\epsilon = 10^{-3}$, the spaQR algorithm (with scaling) performs better than the variant without block diagonal scaling in nine out of the ten cases. With the lower tolerance $\epsilon = 10^{-6}$, the gap between the two variants narrows considerably.
\begin{table}[tbhp]
\centering
\caption{List of test matrices and their properties: number of rows and columns (size), number of non-zeros (nnz), pattern symmetry (pat. sym.), numerical symmetry (num. sym.) and the problem domain (Kind).}
\label{Table: suite sparse}
\begin{tabular}{rrrrrrp{115pt}}
\toprule
\# & Matrix & size & nnz & Pat. & Num. & Kind\\
& & & & sym. & sym. & \\
\midrule
1 & cavity15 & 2195 & 71601 & 5.9 & 0.0 & Subsequent CFD Problem \\
2 & cavity26 & 4562 & 138187 & 5.9 & 0.0 & Subsequent CFD Problem \\
3 & dw4096 & 8192 & 41746 & 96.3 & 91.5 & Electromagnetics problem \\
4 & Goodwin\_030 & 10142 & 312814 & 96.6 & 6.3 & CFD problem \\
5 & inlet & 11730 & 328323 & 60.8 & 0 & Model Reduction Problem \\
6 & Goodwin\_040 & 17922 & 561677 & 97.5 & 6.4 & CFD problem \\
7 & wang4 & 26068 & 177196 & 100 & 4.6 & Semiconductor device problem \\
8 & Zhao1 & 33381 & 166453 & 92.2 &0.0 & Electromagnetics problem\\
9 & Chevron1 & 37365 & 330633 & 99.5 & 71.0 & Seismic modelling \\
10 & cz40948 & 40948 & 412148 & 43.5 & 23.7 & Closest Point Method \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{2D flow in a driven cavity}
The lid-driven flow in a cavity is a well-studied problem: a viscous incompressible fluid flows in a square cavity whose three rigid walls have no-slip conditions, while the lid moves with unit tangential velocity. This results in a recirculating flow.
\begin{table}[tbhp]
\centering
\caption{Performance of the spaQR algorithm with and without scaling in terms of the number of GMRES iterations needed to converge. The test problems are listed in \Cref{Table: suite sparse}.}
\label{Table: suite_sparse_results}
\begin{tabular}{@{}rrrrr@{}}
\toprule
& \multicolumn{2}{c}{\# GMRES, $\epsilon=10^{-3}$} & \multicolumn{2}{c}{\# GMRES, $\epsilon=10^{-6}$} \\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\# & spaQR & spaQR & spaQR & spaQR \\
& & w/o scaling & & w/o scaling\\
\midrule
1 & 58 & \textbf{43} & \textbf{5} & 10\\
2 & \textbf{25} &87& \textbf{4}& 11\\
3 & \textbf{23} & 45 & 4 & 4\\
4 & \textbf{7} & 16 & \textbf{3} & 4\\
5 & \textbf{75} & 138 & \textbf{5} & 7\\
6 & \textbf{7} & 22 & \textbf{3} & 4 \\
7 & \textbf{6} & 17 & \textbf{3} & 4 \\
8 & \textbf{6} & 7 & 5 & 5\\
9 & \textbf{21} & 108 & \textbf{4} & 6\\
10 & \textbf{5} & 77 & \textbf{2} & 9 \\
\bottomrule
\end{tabular}
\end{table}
The matrices arising from this problem are real and unsymmetric (symmetric indefinite in the case of $\text{Re}=0$). They are good test cases for iterative solvers, as they are difficult to solve without an efficient preconditioner~\cite{Boisvert1997}. Incomplete LU based preconditioners fail on these matrices; they are unstable due to singular pivots. The spaND algorithm also fails on these matrices for the same reason.
On the other hand, spaQR provides increased stability, and the spaQR preconditioned system converges in fewer than 50 GMRES iterations for a wide range of Reynolds numbers. The matrices used for testing are taken from the SPARSKIT collection~\cite{Boisvert1997} and have a size of 17,281 with 553,956 non-zero entries. The performance of the two variants of the spaQR algorithm, in terms of the number of GMRES iterations needed to converge, is shown in \Cref{Table: cavity_flow} for $0 \leq \text{Re} \leq 5000$. The spaQR algorithm (with scaling) outperforms the variant without scaling over the entire range of Reynolds numbers tested. Notably, neither of the two variants breaks down during the factorization phase.
\begin{table}[tbhp]
\centering
\caption{Performance of spaQR algorithm on 2D fluid flow in a driven cavity. spaQR w/o scaling failed to converge in less than 300 iterations for the last two matrices. }
\label{Table: cavity_flow}
\begin{tabular}{@{}crcc@{}}
\toprule
& & \multicolumn{2}{c}{\# GMRES, $\epsilon=10^{-5}$} \\
\cmidrule(lr){3-4}
Matrix & Re & spaQR & spaQR \\
& & & w/o scaling\\
\midrule
E40R0000 & 0& 6 & 39 \\
E40R0100 &100 & 7 & 42 \\
E40R0500 &500 & 6 & 46\\
E40R1000 & 1000 & 11 & 62 \\
E40R2000 & 2000 & 23 & 138\\
E40R3000 & 3000 & 19 & 225 \\
E40R4000 & 4000 & 36 & ---\\
E40R5000 & 5000 & 21 & ---\\
\bottomrule
\end{tabular}
\end{table}
Along with the theoretical results on scaling (see \Cref{theoretical_results_sec}), the numerical experiments show that scaling is in general advantageous and leads to better performance. However, scaling should be used with caution on highly ill-conditioned problems. For such problems, scaling can be applied only on alternate levels, or selectively based on the condition number of the diagonal blocks; this is a topic for future research. In the rest of the section, we only consider the variant with block diagonal scaling (spaQR).
\subsection{Scaling with problem size} Next, we study the variation in the time to build the preconditioner and the number of GMRES iterations with the problem size on 2D and 3D Advection Diffusion problems.
\subsubsection{2D Advection Diffusion problem}
Let us consider the variable coefficient advection diffusion equation with $a(\mathbf{x})=1$. The constant $q$ controls the magnitude of the convective term. The equation is discretized on a uniform $n \times n$ 2D grid using the centered finite difference scheme. The resulting linear system becomes strongly unsymmetric as the convective term becomes dominant (larger values of $q$) and is hence challenging to solve. We test the performance of our algorithm on these problems for different choices of $b(\mathbf{x})$ and $q$. The spaQR algorithm is used as a preconditioner to accelerate the convergence of the GMRES iterative solver.
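For reference, the following is a minimal sketch of such a centered-difference discretization in triplet form, with homogeneous Dirichlet boundary conditions and the convective term in conservation form; in our experiments the matrices are generated with the code from~\cite{leopold_matrixgen}, so the conventions here (grid spacing, boundary handling) are illustrative simplifications.
\begin{verbatim}
#include <functional>
#include <vector>

struct Triplet { int row, col; double val; };

// Assemble -Laplace(u) + q*div(b(x,y) u) on a uniform n x n grid
// (h = 1/(n+1)), centered differences, homogeneous Dirichlet BCs.
std::vector<Triplet> assemble_ad2d(
    int n, double q, std::function<double(double, double)> b) {
  std::vector<Triplet> T;
  const double h = 1.0 / (n + 1), d = 1.0 / (h * h);
  auto id = [n](int i, int j) { return i * n + j; };
  for (int i = 0; i < n; ++i)
    for (int j = 0; j < n; ++j) {
      const double x = (i + 1) * h, y = (j + 1) * h;
      const int p = id(i, j);
      T.push_back({p, p, 4.0 * d});  // diffusion diagonal
      // Off-diagonals: diffusion plus centered convection terms
      if (i > 0)     T.push_back({p, id(i-1, j), -d - q*b(x-h, y)/(2*h)});
      if (i < n - 1) T.push_back({p, id(i+1, j), -d + q*b(x+h, y)/(2*h)});
      if (j > 0)     T.push_back({p, id(i, j-1), -d - q*b(x, y-h)/(2*h)});
      if (j < n - 1) T.push_back({p, id(i, j+1), -d + q*b(x, y+h)/(2*h)});
    }
  return T;
}
\end{verbatim}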
\begin{figure}[tbhp]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{loglogaxis}[
scale = 0.75,
ylabel={$\#$ GMRES},
xlabel = {$N$},
ymin = 2, ymax = 100,
xtick = \empty,
extra x ticks = {16000, 6.5*10^4, 2.5*10^5, 10^6, 4*10^6},
extra x tick labels = {16k, , 0.25M, ,4M},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
]
\addplot coordinates {
(128^2, 9)(256^2, 11) (512^2, 14) (1024^2, 17) (2048^2, 23)
};
\addplot coordinates {
(128^2, 8)(256^2, 9) (512^2, 11) (1024^2, 13) (2048^2, 16)
};
\addplot+[mark = triangle*] coordinates {
(128^2, 8)(256^2, 9) (512^2, 9) (1024^2, 11) (2048^2, 13)
};
\end{loglogaxis}
\end{tikzpicture}}
\hspace{0.2cm}
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west},
legend cell align=left}
\begin{loglogaxis}[
legend columns = 1,
legend style = {draw = none},
scale = 0.75,
xlabel = {$N$},
ylabel = {Time to factorize ($s$)},
ymin = 0.1, ymax = 100,
xtick = \empty,
extra x ticks = {16000, 6.5*10^4, 2.5*10^5, 10^6, 4*10^6},
extra x tick labels = {16k, , 0.25M, ,4M},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
legend entries = {$q = 1$, $q=25$, $q=1000$}
]
\addplot coordinates {
(16384, 0.29)(65536, 0.753) (262144, 2.83) (1048576, 13.5) (4194304, 54.25)
};
\addplot coordinates {
(128^2, 0.19)(256^2, 0.786) (512^2, 2.769) (1024^2, 12.8) (2048^2, 47.8)
};
\addplot+[mark = triangle*] coordinates {
(128^2, 0.233)(256^2, 0.796) (512^2, 2.884) (1024^2, 12.75) (2048^2, 48)
};
\addplot [black, domain = 128^2:2048^2] {x/150000};
\node [ anchor=center] at (3.5*10^5,1) {$\mathcal{O}(N)$};
\end{loglogaxis}
\end{tikzpicture}}
\end{subfigure}
\caption{Results for the 2D advection diffusion problem for varying values of $q$. The threshold $\epsilon$ for ignoring singular values in the spaQR algorithm is $\epsilon = 10^{-2}$. Note that the number of iterations grows slowly and the factorization time scales linearly with problem size for all three values of $q$.}
\label{ad_figure}
\end{figure}
\begin{figure}[tbhp]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{loglogaxis}[
scale = 0.75,
ylabel={$\#$ GMRES},
xlabel = {$N$},
ymin = 2, ymax = 100,
xtick = \empty,
extra x ticks = {16000, 6.5*10^4, 2.5*10^5, 10^6, 4*10^6},
extra x tick labels = {16k, , 0.25M, ,4M},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed
]
\addplot coordinates {
(128^2, 14)(256^2, 16) (512^2, 23) (1024^2, 33) (2048^2, 44)
};
\addplot coordinates {
(128^2, 7)(256^2, 9) (512^2, 9) (1024^2, 10) (2048^2, 11)
};
\addplot+[mark = triangle*] coordinates {
(128^2, 4)(256^2, 4) (512^2, 4) (1024^2, 4) (2048^2, 4)
};
\end{loglogaxis}
\end{tikzpicture}}
\hspace{0.2cm}
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west},
legend cell align=left}
\begin{loglogaxis}[
legend columns = 1,
legend style = {draw = none},
scale = 0.75,
xlabel = {$N$},
ylabel = {Time to factorize ($s$)},
ymin = 0.1, ymax = 300,
xtick = \empty,
extra x ticks = {16000, 6.5*10^4, 2.5*10^5, 10^6, 4*10^6},
extra x tick labels = {16k, , 0.25M, ,4M},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
legend entries = {$\epsilon = 10^{-1}$, $\epsilon = 10^{-2}$, $\epsilon = 10^{-4}$,
Direct}]
\addplot coordinates {
(128^2, 0.217)(256^2, 0.944) (512^2, 3.29) (1024^2, 12.73) (2048^2, 46.2)
};
\addplot coordinates {
(128^2, 0.168)(256^2, 0.716) (512^2, 3.07) (1024^2, 17.42) (2048^2, 71.2)
};
\addplot+[mark = triangle*] coordinates {
(128^2, 0.232)(256^2, 0.924) (512^2, 3.38) (1024^2, 14) (2048^2, 56.6)
};
\addplot+[mark = diamond*] coordinates {
(128^2, 0.161)(256^2, 0.838) (512^2, 4.9) (1024^2, 30.4) (2048^2, 194.8)
};
\addplot [black, domain = 128^2:2048^2] {x/150000};
\node [ anchor=center] at (3.5*10^5,1) {$\mathcal{O}(N)$};
\end{loglogaxis}
\end{tikzpicture}}
\end{subfigure}
\caption{Variation in the number of iterations and time to factorize with tolerance $\epsilon$ for the 2D advection diffusion problem with $a = 1$, $b(x,y) = e^{x+y}$, $q=1000$. The iteration count is constant for small enough tolerance $\epsilon$ and the factorization time scales linearly with the problem size. The direct method with the same partition scales as $\mathcal{O}(N^{3/2})$.}
\label{ad_eps}
\end{figure}
\Cref{ad_figure} compares the number of GMRES iterations needed for convergence and the time taken to factorize for the 2D advection diffusion problem with $a=1$, $b=1$, and $q=1$, $25$, $1000$. The time to factorize the matrix scales as $\mathcal{O}(N)$, in contrast to nested dissection Householder QR, which scales as $\mathcal{O}(N^{3/2})$. Combined with the slow increase in the number of iterations needed to converge, this gives the algorithm an approximate overall complexity of $\mathcal{O}(N)$.
In \Cref{ad_eps}, we compare the iteration count and the time to factorize for various values of the tolerance $\epsilon$. Note that the time to factorize scales as $\mathcal{O}(N)$ independently of the value of $\epsilon$ used. The convergence of the residual $\|Ax-b\|_{_2}/\|b\|_{_2}$ with the GMRES iterations is shown in \Cref{fig:2d_ad_gmres_residual}; the rate of convergence increases greatly as the tolerance $\epsilon$ is decreased from $10^{-1}$ to $10^{-4}$. The optimal value of $\epsilon$ is problem dependent and should be chosen such that the overall time (factorization $+$ solve) is minimized.
\begin{figure}[tbhp]
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west},
legend cell align=left}
\begin{semilogyaxis}[
legend columns = 1,
legend style = {draw = none},
scale = 0.75,
ylabel={Residual},
xlabel = {Iterations},
ymin = 1e-14, ymax =1,
xmin=1, xmax=50,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
legend entries = {$\epsilon = 10^{-1}$, $\epsilon = 10^{-2}$, $\epsilon = 10^{-4}$}
]
\addplot table {results/2d_ad_q1000_p3_t0_1.dat};
\addplot table {results/2d_ad_q1000_p3_t0_01.dat};
\addplot+[mark = triangle*] table {results/2d_ad_q1000_p3_t0_0001.dat};
\end{semilogyaxis}
\end{tikzpicture}}
\caption{The convergence of the residual $\|Ax-b\|_{_2}/\|b\|_{_2}$ with the number of GMRES iterations for different values of the tolerance $\epsilon$ for the 2D advection diffusion problem on the $2048 \times 2048$ grid.}
\label{fig:2d_ad_gmres_residual}
\end{figure}
\subsubsection{3D Advection Diffusion problem}
\begin{figure}[tbhp]
\centering
\begin{subfigure}[t]{\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{loglogaxis}[
scale = 0.75,
ylabel={$\#$ GMRES},
xlabel = {$N$},
ymin = 5, ymax = 500,
xmin=200000, xmax=18000000,
xtick = {2.5*10^5, 5*10^5, 10^6, 2*10^6, 4*10^6, 8*10^6, 16*10^6},
xticklabels = {0.25M, ,1M, ,4M, ,16M},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed
]
\addplot coordinates {
(262144, 68) (512000, 87)
(884736, 102) (2097152, 130 ) (4096000, 162) (7077888, 199) (16777216, nan)
};
\addplot coordinates {
(262144, 12) (512000, 14) (884736, 14) (2097152, 14) (4096000, 17) (7077888, 19) (16777216, nan)
};
\end{loglogaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west},
legend cell align=left}
\begin{loglogaxis}[
legend columns = 1,
legend style = {draw = none},
scale = 0.75,
xlabel = {$N$},
ylabel = {Time to factorize ($s$)},
ymin = 30, ymax = 400000,
xmin=200000, xmax=18000000,
xtick = {2.5*10^5, 5*10^5, 10^6, 2*10^6, 4*10^6, 8*10^6, 16*10^6},
xticklabels = {0.25M, ,1M, ,4M, ,16M},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
legend entries = {$\epsilon = 10^{-1}$, $\epsilon = 10^{-2}$, Direct}
]
\addplot coordinates {
(262144, 42.07) (512000, 101.8 )
(884736, 211) (2097152,651 ) (4096000, 1616.8) (7077888, 3462.4) (16777216, 14206)
};
\addplot coordinates {
(262144, 140) (512000, 339.58)
(884736, 701.8) (2097152, 2298) (4096000, 6217) (7077888,15503) (16777216, nan)
};
\addplot+[draw=black, mark = diamond*, mark options = {fill=black}] coordinates {
(262144, 411.3) (512000, 1618) (884736, 5471) (2097152, nan) (4096000, nan) (7077888, nan) (16777216, nan)
};
\addplot[draw=black, dashed, mark= diamond, mark options={solid}] coordinates {
(262144, nan) (512000, nan) (884736, 5471) (2097152, 32000) (4096000, 122070) (7077888, 364500) (16777216, nan)
};
\addplot [black, domain = 250000:16000000] {x*log2(x)/150000};
\node [anchor=center] at (5000000,170) {$\mathcal{O}(N\log N)$};
\end{loglogaxis}
\end{tikzpicture}}
\end{subfigure}
\caption{Variation in the number of iterations and time to factorize with tolerance $\epsilon$ for the 3D $n \times n \times n$ advection diffusion problem with $a=1$, $b=1$, $q=1$. The iteration count increases slowly for small enough tolerance $\epsilon$. Empirically, the factorization time scales as $\mathcal{O}(N^{1.4})$. The missing data points with spaQR either indicate that the factorization time was more than 5 hours or that GMRES took more than 200 iterations to converge. The scaling of the direct method has been extrapolated for $N= 128^3$, $160^3$, $192^3$. }
\label{Figure:3d_ad}
\end{figure}
Consider the advection diffusion problem on a uniform $n \times n \times n$ 3D grid; the size of the matrix is $N = n^3$. The performance of the algorithm is reported in terms of the time to factorize and the number of GMRES iterations needed to converge in \Cref{Figure:3d_ad} for various values of the tolerance $\epsilon$. Theoretically, we expect the factorization time to scale as $\mathcal{O}(N\log N)$ (see \Cref{sec:complexity}). However, the empirical complexity is $\mathcal{O}(N^{1.4})$, likely due to non-asymptotic effects. The convergence of the residual $\|Ax-b\|_{_2}/\|b\|_{_2}$ with the iteration count is shown in \Cref{fig:3d_ad_gmres_residual} for $N=192^3$. Similar to the 2D case, we notice that the rate of convergence of the residual increases drastically as the tolerance $\epsilon$ is decreased from $10^{-1}$ to $10^{-2}$.
\begin{figure}[tbhp]
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west},
legend cell align=left}
\begin{semilogyaxis}[
legend columns = 1,
legend style = {draw = none},
scale = 0.75,
ylabel={Residual},
xlabel = {Iterations},
ymin = 1e-14, ymax =1,
xmin=1, xmax=200,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
legend entries = { $\epsilon = 10^{-1}$, $\epsilon = 10^{-2}$}
]
\addplot table {results/3d_ad_192_t0_1.dat};
\addplot+[mark = triangle*] table {results/3d_ad_192_t0_01.dat};
\end{semilogyaxis}
\end{tikzpicture}}
\caption{Convergence of the residual $\|Ax-b\|_{_2}/\|b\|_{_2}$ with the number of GMRES iterations for different values of tolerance $\epsilon$ for the 3D advection diffusion problem on the $192 \times 192 \times 192$ grid.}
\label{fig:3d_ad_gmres_residual}
\end{figure}
\begin{figure}[tbhp]
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west},
legend cell align=left}
\begin{semilogyaxis}[
legend columns = 1,
legend style = {draw = none},
scale = 0.75,
ylabel={$|R_{ii}|/|R_{11}|$},
ymin = 1e-8, ymax = 1,
xmin=1, xmax=350,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
legend entries = { $l=8$, $l=6$, $l=4$, $l=2$}
]
\addplot+ table [x expr=\coordindex, y index=0] {results/3d_64_sv_3.dat};
\addplot+ table [x expr=\coordindex, y index=0] {results/3d_64_sv_5.dat};
\addplot+ table [x expr=\coordindex, y index=0] {results/3d_64_sv_7.dat};
\end{semilogyaxis}
\end{tikzpicture}}
\caption{The singular value decay of the block $\begin{bmatrix} A_{np}^T & A_{pn} \end{bmatrix}$ corresponding to an interface $p$ of the top separator at various levels of sparsification. The diagonal entries $|R_{ii}|$ of a column pivoted QR on the block is used as a substitute for the singular values. The results shown are on the 3D advection diffusion problem with $N=64^3$.}
\label{fig:sv_decay}
\end{figure}
\subsection{Profiling}
\label{Sec: Profiling}
\begin{figure}[tbhp]
\centering
\begin{subfigure}{\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
scale = 0.7,
ylabel={Size of interface},
ymin = 25, ymax = 5000,
xmin=1, xmax = 12,
xticklabel = \empty,
x dir=reverse,
enlarge x limits=0.1,
bar width = 8pt,
ymajorgrids=true,
grid style=dashed,
line width=0.25mm,
xtick align = inside
]
\addplot[ybar, draw=blue, fill=blue!30, error bars/.cd,
y explicit,
y dir=both,
error bar style={line width=0.3mm, black}]
table [
x = level,
y = median,
y error plus expr=\thisrow{q2}-\thisrow{median},
y error minus expr=\thisrow{median}-\thisrow{q1}
]
{results/3d_64_ranks.dat};
\addplot [red, dashed, domain = 1:10] {2^(-(x-1)/3)*700};
\end{semilogyaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
scale = 0.7,
ymin = 25, ymax = 5000,
xmin=1, xmax = 15,
xticklabel = \empty,
x dir=reverse,
yticklabel = \empty,
enlarge x limits=0.07,
bar width = 7pt,
ymajorgrids=true,
line width =0.25mm,
grid style=dashed,
xtick align = inside
]
\addplot[ybar, draw=blue, fill=blue!30,error bars/.cd,
y explicit,
y dir=both,
error bar style={line width=0.3mm, black}]
table [
x = level,
y = median,
y error plus expr=\thisrow{q2}-\thisrow{median},
y error minus expr=\thisrow{median}-\thisrow{q1}
]
{results/3d_128_ranks.dat};
\addplot [red, dashed, domain = 1:13] {2^(-(x-1)/3)*1200};
\end{semilogyaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
scale = 0.7,
ymin = 25, ymax = 5000,
xmin=1, xmax = 18,
xticklabel = \empty,
x dir=reverse,
yticklabel = \empty,
enlarge x limits=0.06,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
xtick align = inside,
bar width = 5pt
]
\addplot[ybar, draw=blue, fill=blue!30, error bars/.cd,
y explicit,
y dir=both,
error bar style={line width=0.3mm, black} ]
table [
x = level,
y = median,
y error plus expr=\thisrow{q2}-\thisrow{median},
y error minus expr=\thisrow{median}-\thisrow{q1},
] {results/3d_256_ranks.dat};
\addplot [red, dashed, domain = 1:16] {2^(-(x-1)/3)*2400};
\end{semilogyaxis}
\end{tikzpicture}}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
ylabel style = {align=center},
scale = 0.7,
ylabel= {Median non-zeros},
ymin = 900, ymax = 20000,
xmin=1, xmax = 12,
xticklabel = \empty,
x dir=reverse,
enlarge x limits=0.1,
bar width = 8pt,
ymajorgrids=true,
grid style=dashed,
line width=0.25mm,
xtick align = inside
]
\addplot[ybar, draw=blue, fill=blue!30, error bars/.cd,
y explicit,
y dir=both,
error bar style={line width=0.3mm, black}]
table [
x = level,
y = median,
y error plus expr=\thisrow{q2}-\thisrow{median},
y error minus expr=\thisrow{median}-\thisrow{q1}
]
{results/3d_64_nbrs.dat};
\addplot [red, dashed, domain = 4:10] {2^(-(x-1)/3)*612*24};
\end{semilogyaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
scale = 0.7,
ymin = 900, ymax = 20000,
xmin=1, xmax = 15,
xticklabel = \empty,
x dir=reverse,
yticklabel = \empty,
enlarge x limits=0.07,
bar width = 7pt,
ymajorgrids=true,
line width =0.25mm,
grid style=dashed,
xtick align = inside
]
\addplot[ybar, draw=blue, fill=blue!30,error bars/.cd,
y explicit,
y dir=both,
error bar style={line width=0.3mm, black}]
table [
x = level,
y = median,
y error plus expr=\thisrow{q2}-\thisrow{median},
y error minus expr=\thisrow{median}-\thisrow{q1}
]
{results/3d_128_nbrs.dat};
\addplot [red, dashed, domain = 5:13] {2^(-(x-1)/3)*1124*30};
\end{semilogyaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
scale = 0.7,
ymin = 900, ymax = 20000,
xmin=1, xmax = 18,
xticklabel = \empty,
x dir=reverse,
yticklabel = \empty,
enlarge x limits=0.06,
bar width = 5pt,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
xtick align = inside
]
\addplot[ybar, draw=blue, fill=blue!30, error bars/.cd,
y explicit,
y dir=both,
error bar style={line width=0.3mm, black}]
table [
x = level,
y = median,
y error plus expr=\thisrow{q2}-\thisrow{median},
y error minus expr=\thisrow{median}-\thisrow{q1}
]
{results/3d_256_nbrs.dat};
\addplot [red, dashed, domain = 1:16] {2^(-(x-1)/3)*2348*30};
\end{semilogyaxis}
\end{tikzpicture}}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel style = {align=center},
scale = 0.7,
ybar,
ylabel={Time to sparsify (s)},
xlabel = {Level \\
$N = 64^3$},
ymin = 1, ymax = 2000,
xmin=1, xmax = 12,
xticklabel = \empty,
x dir=reverse,
extra x ticks = { 12, 11, 10, 9,8,7,6,5,4,3,2,1 },
extra x tick labels = {12,,10,,8,,6,,4,,2,},
enlarge x limits=0.1,
bar width = 8pt,
ymajorgrids=true,
grid style=dashed,
line width=0.25mm,
xtick align = inside
]
\addplot+ table {results/3d_64_sparsify_time.dat};
\end{semilogyaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel style = {align=center},
scale = 0.7,
ybar,
ymin = 1, ymax = 2000,
xmin=1, xmax = 15,
xlabel={Level \\
$N=128^3$},
xticklabel = \empty,
x dir=reverse,
extra x ticks = { 15,14,13,12, 11, 10, 9,8,7,6,5,4,3,2,1 },
extra x tick labels = {15,,13,,11,,9,,7,,5,,3,,1},
yticklabel = \empty,
enlarge x limits=0.07,
bar width = 7pt,
ymajorgrids=true,
line width =0.25mm,
grid style=dashed,
xtick align = inside
]
\addplot+ table {results/3d_128_sparsify_time.dat};
\end{semilogyaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel style = {align=center},
scale = 0.7,
ybar,
ymin = 1, ymax = 2000,
xmin=1, xmax = 18,
xlabel={Level \\
$N=256^3$},
xticklabel = \empty,
x dir=reverse,
extra x ticks = { 18,17,16,15,14,13,12, 11, 10, 9,8,7,6,5,4,3,2,1 },
extra x tick labels = {18,,,15,,,12,,,9,,,6,,4,,2,},
yticklabel = \empty,
enlarge x limits=0.06,
bar width = 5pt,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
xtick align = inside
]
\addplot+
table {results/3d_256_sparsify_time.dat};
\end{semilogyaxis}
\end{tikzpicture}}
\end{subfigure}
\caption{The median size of an interface, the median number of non-zero entries per row and column (precisely, the $\#$ of non-zero columns in $[A_{np}^T \; A_{pn}]$), and the total time to sparsify the interfaces per level, shown for the 3D advection diffusion problem on the $64 \times 64 \times 64$, $128 \times 128 \times 128$ and $256 \times 256 \times 256$ grids. The red dashed line indicates that the interface size and the number of neighbors vary as $2^{-(l-1)/3}$, as assumed in the complexity analysis. The total time to sparsify exhibits a long plateau for a given problem size.}
\label{fig: 3d_sparsification}
\end{figure}
In this section, we give more details on sparsification and on the time and memory requirements of the spaQR algorithm. We start by analyzing the singular value decay of a representative block that we compress in \Cref{spars_s} for the 3D advection diffusion problem on the $64 \times 64 \times 64$ grid. \Cref{fig:sv_decay} shows the singular value decay of the block $\begin{bmatrix} A_{np}^T & A_{pn} \end{bmatrix}$ corresponding to a representative interface of the top separator at various levels of sparsification. The interface is chosen such that its size is close to the median interface size at that level of sparsification. Roughly $50\%$ of the singular values are below $\epsilon = 0.1$. Also, note the exponential decay of the singular values after an initial plateau. This observation forms the basis of this work.
Next, we show experimental evidence to back the assumptions made in the complexity analysis. \Cref{fig: 3d_sparsification} shows the median size of an interface ($\#$ rows in $\begin{bmatrix} A_{np}^T & A_{pn} \end{bmatrix}$), the number of non-zero rows and columns in the off-diagonal blocks of an interface ($\#$ columns in $\begin{bmatrix} A_{np}^T & A_{pn} \end{bmatrix}$), and the total time for sparsification at a given level. The error bars show the inter-quartile range. The red dashed line indicates that the size of the interface scales as $2^{-(l-1)/3}$, where $l$ is the level of the separator of which the interface is a part. The number of non-zero rows and columns corresponding to an interface is at most $\mathcal{O}(2^{-(l-1)/3})$, again as indicated by the red dashed line.
\begin{figure}
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\begin{loglogaxis}[
scale = 0.75,
ylabel={$\text{size}_{\text{top}}$},
xlabel = {$N$},
ymin = 400, ymax = 4000,
xmin=200000, xmax=18000000,
xtick = {2.5*10^5, 5*10^5, 10^6, 2*10^6, 4*10^6, 8*10^6, 16*10^6},
xticklabels = {0.25M, ,1M, ,4M, ,16M},
ymajorgrids=true,
line width=0.25mm,
grid style=dashed
]
\addplot coordinates {
(262144, 512) (512000, 640)
(884736, 768) (2097152, 1024) (4096000, 1280) (7077888, 1536) (16777216, 2048)
};
\addplot+[mark = triangle*] coordinates {
(262144, 823) (512000, 1027) (884736, 1230) (2097152, 1643) (4096000, 2067) (7077888, 2477) (16777216, nan)
};
\addplot [black, domain = 250000:17000000] {x^(1/3)*7};
\node [ anchor=center] at (5*10^6,900) {$\mathcal{O}(N^{1/3})$};
\end{loglogaxis}
\end{tikzpicture}}
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west},
legend cell align=left}
\begin{loglogaxis}[
legend columns = 1,
legend style = {draw = none},
scale = 0.75,
xlabel = {$N$},
ylabel = {$\text{mem}_\text{F}$},
ymin =8*10^7, ymax = 3*10^10,
xmin=200000, xmax=18000000,
xtick = {2.5*10^5, 5*10^5, 10^6, 2*10^6, 4*10^6, 8*10^6, 16*10^6},
xticklabels = {0.25M, ,1M, ,4M, ,16M},
ymajorgrids=true,
grid style=dashed,
line width=0.25mm,
legend entries = {$\epsilon = 10^{-1}$, $\epsilon = 10^{-2}$}
]
\addplot coordinates {
(262144, 2*10^8) (512000, 4*10^8 )
(884736, 7.3*10^8) (2097152, 1.98*10^9) (4096000, 3.93*10^9) (7077888, 7*10^9) (16777216, 1.8*10^10)
};
\addplot+[mark = triangle*] coordinates {
(262144, 3.6*10^8) (512000, 7.8*10^8) (884736, 1.48*10^9) (2097152, 4*10^9) (4096000, 8.3*10^9) (7077888, 1.5*10^10) (16777216, nan)
};
\addplot [black, domain = 250000:17000000] {x*10^3/2};
\node [ anchor=center] at (4*10^6,10^9) {$\mathcal{O}(N)$};
\end{loglogaxis}
\end{tikzpicture}}
\caption{The growth in the size of the top separator and the memory required to store the preconditioner with the problem size $N$ for the 3D advection diffusion problem.}
\label{fig:3d_stop_mem}
\end{figure}
\begin{figure}[tbhp]
\centering
\scalebox{0.75}{
\begin{tikzpicture}
\pgfplotsset{
every axis legend/.append style={
at={(1.02,1)},
anchor=north west},
legend cell align=left}
\begin{axis}[
xlabel style = {align=center},
xscale = 1.5,
yscale = 0.75,
ybar stacked,
ymin = 1, ymax = 2500,
xmin=1, xmax = 18,
xlabel={Level},
ylabel={Time (s)},
xticklabel = \empty,
x dir=reverse,
extra x ticks = { 18,17,16,15,14,13,12, 11, 10, 9,8,7,6,5,4,3,2,1 },
extra x tick labels = {18,,,15,,,12,,,9,,,6,,4,,2,},
enlarge x limits=0.06,
bar width = 8.5pt,
ymajorgrids=true,
line width=0.25mm,
grid style=dashed,
xtick align = inside,
legend entries = {Factorize, Scale, Sparsify, Merge}
]
\addplot[ybar, black, pattern=north east lines] table[
x = Level,
y = Elimination ] {results/runtime_per_level_256_3d.dat};
\addplot+[ybar, black, pattern=horizontal lines] table[
x = Level,
y = Scale] {results/runtime_per_level_256_3d.dat};
\addplot+[ybar, black, pattern=dots] table[
x = Level,
y = Sparsify] {results/runtime_per_level_256_3d.dat};
\addplot+[ybar, black, pattern=vertical lines] table[
x = Level,
y = Merge] {results/runtime_per_level_256_3d.dat};
\end{axis}
\end{tikzpicture}}
\caption{The runtime per level of the spaQR algorithm split into the four phases: factorize interiors/separators, scale interfaces, sparsify interfaces, and merge the clusters. We skip sparsification for two levels. The results are shown for the 3D advection diffusion problem on a $256 \times 256 \times 256$ grid. }
\label{fig:runtime_per_level}
\end{figure}
The size of the top separator grows as $\mathcal{O}(N^{1/3})$, as shown in \Cref{fig:3d_stop_mem}. Hence, the cost of factorizing the corresponding block matrix, which is cubic in its size, is $\mathcal{O}(N)$. As the cost per level is roughly the same (see \Cref{sec:complexity}) and there are $\Theta(\log(N/N_0))$ levels, the total cost is $\mathcal{O}(N \log N)$. From \Cref{fig:runtime_per_level}, we see that there is a spike in the runtime at the first level of interface sparsification. Starting sparsification sooner is inefficient, as the off-diagonal blocks might not yet be of sufficiently low rank for compression to pay off. The runtimes at the next few levels show smaller variations, which become negligible as we run on bigger matrices. Finally, from \Cref{fig:3d_stop_mem}, we see that the memory requirement scales as $\mathcal{O}(N)$, as expected.
\section{Conclusions}
In this work, we develop a novel fast hierarchical QR solver with tunable accuracy for sparse square matrices. We propose an improvement to the base algorithm through a simple block diagonal scaling, and we provide theoretical bounds on the error and on the condition number of the preconditioned matrix. Under certain assumptions (primarily on the required ranks), we prove that the spaQR algorithm scales as $\mathcal{O}(N \log N)$ with an $\mathcal{O}(N)$ solve cost and an $\mathcal{O}(N)$ memory requirement. Finally, we provide numerical benchmarks on large sparse unsymmetric linear systems and non-regular problems, which show the advantages of the algorithm in terms of the time and the number of iterations needed to converge to high accuracy. The additional profiling results give more insight into the algorithm and confirm the validity of the assumptions made in the complexity analysis.
We believe that the spaQR solver opens up exciting new areas that can benefit from fast hierarchical solvers. The algorithm can be extended, with some changes, to rectangular matrices, in particular for solving linear least squares problems; this will be investigated in future work. Further improvements to the algorithm and the implementation are also possible. While the current implementation is sequential, the spaQR algorithm can also be parallelized.
\section{Connection between 4d and 6d Lorentz invariants containing supermomenta} \label{sec:Connection_six_four_Invariants}
Similar to the 6d Lorentz invariants \eqref{eq:type1}, we try to find a combination of the invariants \eqref{eq:type2} and \eqref{eq:type3} whose four-dimensional projection is manifestly $R$-symmetry invariant. Because of the non-chiral nature of the six-dimensional amplitudes, the numbers of chiral and anti-chiral supermomenta are equal, and the invariants \eqref{eq:type2} and \eqref{eq:type3} can only appear in the pairs
\begin{equation}
\begin{aligned} \label{eq:BadCorr6D4D}
6d: \qquad \langle q_{i}|k_1 \dots k_{2 r + 1}|q_{j}\rangle [\tilde{q}_{k}| p_1 \dots p_{2 s + 1}|\tilde{q}_{l}]
\end{aligned}
\end{equation}
leading to the following four-dimensional projection
\begin{equation}\label{eq:4dProjectionBadCorr6D4D}
\begin{aligned}
\Bigl( \langle q^{1}_{i}|k_1 \dots k_{2 r + 1}|\tilde{q}_{j 3}] - [\tilde{q}_{i 3}|k_1 \dots k_{2 r + 1}|q_{j}^{1}\rangle \Bigr) \Bigl( \langle q^{4}_{k}| p_1 \dots p_{2 s + 1}|\tilde{q}_{l 2}] - [\tilde{q}_{k 2}| p_1 \dots p_{2 s + 1}|q_{l}^{4}\rangle \Bigr)
\end{aligned}
\end{equation}
Unfortunately, this combination is in general not $R$-symmetry invariant. There are two possibilities to make \cref{eq:4dProjectionBadCorr6D4D} $R$-symmetry invariant. First, we could impose restrictions on the supermomenta appearing in \cref{eq:BadCorr6D4D}. However, only the very special case $i=j=k=l$, where all supermomenta belong to the same external leg, has an $R$-invariant projection. Since this strong restriction is completely unnatural from both the 6d and the 4d perspective and does not allow for a construction of manifestly dual conformal covariant Lorentz invariants, it should be discarded. The second possibility is to antisymmetrize \cref{eq:4dProjectionBadCorr6D4D} in the indices $(1,4)$ and $(2,3)$, corresponding to $m$ and $m'$ $SU(2)$ contractions:
\begin{align}
\left( \langle q^{[1}_{i}|k_1 ... k_{2 r + 1}|\tilde{q}_{j [3}] - [\tilde{q}_{i [3}|k_1 ... k_{2 r + 1}|q_{j}^{[1}\rangle \right) \left( \langle q^{4]}_{k}| p_1 ... p_{2 s + 1}|\tilde{q}_{l 2]}] - [\tilde{q}_{k 2]}| p_1 ... p_{2 s + 1}|q_{l}^{4]}\rangle \right) \\
= \Bigl( \langle q^{m}_{i}|k_1 ... k_{2 r + 1}|\tilde{q}_{j\,m'}] - [\tilde{q}_{i\,m'}|k_1 ... k_{2 r + 1}|q_{j}^{m}\rangle \Bigr) \left( \langle q_{k\,m}| p_1 ... p_{2 s + 1}|\tilde{q}_{l}^{m'}] - [\tilde{q}_{k}^{m'} | p_1 ... p_{2 s + 1}|q_{l\,m}\rangle \right)
\end{align}
This combination would require, among others, the following projection to arise from six dimensions
\begin{align} \label{eq:BadProjection}
... - \langle q^{1}_{i}|k_1 ... k_{2 r + 1}|\tilde{q}_{j 2}]
\langle q^{4}_{k}| p_1 \dots p_{2 s + 1}|\tilde{q}_{l 3}] + ... \end{align}
However, from \cref{eq:type2,eq:type3} it follows that such a term cannot arise from a six-dimensional projection. Even in the chiral self-conjugate case, with identical momenta ($k = p$, $r = s$) and mutually conjugate supermomenta ($i = k$, $j = l$), contributions of the form \eqref{eq:BadProjection} do not cancel. Consequently, the blocks in \cref{eq:BadCorr6D4D} are irrelevant for a connection between the superamplitudes in six and four dimensions, since the latter are manifestly $R$-symmetry invariant. Therefore only the invariants of the type \cref{eq:correspondence6d4d} are natural objects for establishing such a bridge.
\section{Spinor Conventions}
\label{appendix:Spinors}
In this appendix we summarize our conventions for the four- and six-dimensional spinors and provide the identities relevant for calculations within the spinor helicity formalism.
\subsection{Four-Dimensional Spinors}
Raising and lowering of spinor indices is defined by left multiplication with the $\epsilon$ symbol and its inverse:
\begin{align}
\lambda_\alpha&=\epsilon_{\alpha\beta}\lambda^\beta\,,&\lambda^\alpha&=\epsilon^{\alpha\beta}\lambda_\beta\,,\\
\tilde{\lambda}_{\dot{\alpha}}&=\epsilon_{\dot{\alpha}\dot{\beta}}\tilde{\lambda}^{\dot{\beta}}\,,&\tilde{\lambda}^{\dot{\alpha}}&=\epsilon^{\dot{\alpha}\dot{\beta}}\tilde{\lambda}_{\dot{\beta}}\,,
\end{align}
where the antisymmetric $\epsilon$ symbol is defined as
\begin{align}
\epsilon&=i\sigma_2&
\epsilon_{12}&=\epsilon_{\dot{1}\dot{2}}=-\epsilon^{12}=-\epsilon^{\dot{1}\dot{2}}=1\,
\end{align}
and obeys the equations
\begin{equation}
\begin{aligned}\label{Schouten_4D_1}
\epsilon_{\alpha\beta}\epsilon^{\beta\gamma}&=\delta_\alpha^\gamma\,,&\epsilon_{\dot{\alpha}\dot{\beta}}\epsilon^{\dot{\beta}\dot{\gamma}}&=\delta_{\dot{\alpha}}^{\dot{\gamma}}\,,\\
\epsilon_{\beta \gamma} \delta^{\alpha}_{\delta} + \epsilon_{\gamma \delta} \delta^{\alpha}_{\beta} +\epsilon_{\delta \beta} \delta^{\alpha}_{\gamma} &= 0& \epsilon^{\dot{\beta}\dot{\gamma}} \delta^{\dot{\delta}}_{\dot{\alpha}} +\epsilon^{\dot{\gamma}\dot{\delta}} \delta^{\dot{\beta}}_{\dot{\alpha}} + \epsilon^{\dot{\delta}\dot{\beta}} \delta^{\dot{\gamma}}_{\dot{\alpha}} &= 0
\end{aligned}
\end{equation}
For the spinor products we choose the conventions
\begin{align}
\ang{\lambda}{\mu}&=\lambda^\alpha\mu_\alpha&&\text{and} & [\tilde{\lambda}\,\tilde{\mu}]&=\tilde{\lambda}_{\dot{\alpha}}\tilde{\mu}^{\dot{\alpha}}\,,
\end{align}
which implies
\begin{align}
\lambda_\alpha\mu_\beta-\lambda_\beta\mu_\alpha&=\epsilon_{\alpha\beta}\,\ang{\lambda}{\mu}&
\tilde\lambda_{\dot\alpha}\tilde\mu_{\dot\beta}-\tilde\lambda_{\dot\beta}\tilde\mu_{\dot\alpha}&=-\epsilon_{\dot\alpha\dot\beta}\,[\tilde\lambda\,\tilde\mu]
\end{align}
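As an illustration, contracting the first Schouten identity in \cref{Schouten_4D_1} with three spinors $\lambda^\beta\mu^\gamma\nu^\delta$ yields the familiar three-term identity for spinor products,
\begin{align}
\ang{\lambda}{\mu}\,\nu^\alpha+\ang{\mu}{\nu}\,\lambda^\alpha+\ang{\nu}{\lambda}\,\mu^\alpha&=0\,,
\end{align}
with an analogous identity for the dotted spinors following from the second line.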
The four-dimensional sigma matrices are defined as
\begin{align}
\sigma^\mu_{\phantom{\mu}\,\alpha\dot{\alpha}}&=(1,\vec{\sigma})_{\alpha\dot{\alpha}}&&\text{and}&\bar{\sigma}^{\mu\,\dot{\alpha}\alpha}&=(1,-\vec{\sigma})^{\dot{\alpha}\alpha}
\end{align}
and have the properties
\begin{align}
\sigma^{\mu}\bar{\sigma}^{\nu}+\sigma^{\nu}\bar{\sigma}^{\mu}&=2\eta^{\mu\nu}\,,&\bar{\sigma}^{\mu}\sigma^{\nu}+\bar{\sigma}^{\nu}\sigma^{\mu}&=2\eta^{\mu\nu}\,,\\
\sigma^\mu_{\alpha\dot{\alpha}}\bar{\sigma}^{\dot{\beta}\beta}_\mu&=2\delta_\alpha^\beta\delta_{\dot{\alpha}}^{\dot{\beta}}\,,&\sigma^{\mu\,\alpha\dot{\beta}}&=\bar{\sigma}^{\mu\,\dot{\beta}\alpha}\,,
\end{align}
which are consequences of the properties of the ordinary three-dimensional Pauli matrices $\vec{\sigma}=\begin{pmatrix}\sigma_1&\sigma_2&\sigma_3\end{pmatrix}$
\begin{align}
\sigma_1&=\begin{pmatrix}0&1\\1&0\end{pmatrix}\,,&\sigma_2&=\begin{pmatrix}0&-i\\i&0\end{pmatrix}\,,&\sigma_3&=\begin{pmatrix}1&0\\0&-1\end{pmatrix}\,.
\end{align}
Raising and lowering of spinor indices on derivatives with respect to a spinor leads to an additional minus sign
\begin{align}
\frac{\partial}{\partial \lambda_\alpha}=\frac{\partial \lambda^\beta}{\partial \lambda_\alpha}\frac{\partial}{\partial \lambda^\beta}=-\epsilon_{\alpha\beta}\frac{\partial}{\partial \lambda^\beta}\,,
\end{align}
which is a general feature of derivatives carrying $su(2)$ indices.
\subsection{Six-Dimensional Spinors}
The six-dimensional Pauli matrices fulfill the algebra
\begin{equation}\label{eq:Sigma}
\Sigma^\mu\widetilde\Sigma^\nu+\Sigma^\nu\widetilde\Sigma^\mu=2\eta^{\mu\nu}\,.
\end{equation}
We choose the antisymmetric representation
\begin{align}\label{eq:Pauli6d}
\Sigma^0&=i\sigma_1\otimes\sigma_2\,,&\widetilde\Sigma^0&=-\Sigma^0\,,\\
\Sigma^1&=i\sigma_2\otimes\sigma_3\,,&\widetilde\Sigma^1&=\Sigma^1\,,\\
\Sigma^2&=-\sigma_2\otimes\sigma_0\,,&\widetilde\Sigma^2&=-\Sigma^2\,,\\
\Sigma^3&=-i\sigma_2\otimes\sigma_1\,,&\widetilde\Sigma^3&=\Sigma^3\,,\\
\Sigma^4&=-\sigma_3\otimes\sigma_2\,,&\widetilde\Sigma^4&=-\Sigma^4\,,\\
\Sigma^5&=i\sigma_0\otimes\sigma_2\,,&\widetilde\Sigma^5&=\Sigma^5\,.
\end{align}
They satisfy the following identities
\begin{align}
\Sigma^\mu_{AB}&=\tfrac{1}{2}\epsilon_{ABCD}\widetilde\Sigma_{\mu}^{CD}\,,&\widetilde\Sigma_{\mu}^{AB}&=\tfrac{1}{2}\epsilon^{ABCD}\Sigma^\mu_{CD}\\
\Sigma^\mu_{AB}\Sigma_{\mu\,CD}&=-2\epsilon_{ABCD}\,, &
\widetilde\Sigma^{\mu\,AB}\widetilde\Sigma_{\mu}^{CD}&=-2\epsilon^{ABCD}\,,\\
\widetilde\Sigma_\mu^{AB}\Sigma^{\mu}_{CD}&=-2(\delta_C^A\delta_D^B-\delta_C^B\delta_D^A)\,,&\mathop{\mathrm{Tr}}(\widetilde\Sigma^\mu\Sigma^\nu)&=4\eta^{\mu\nu}\,.
\end{align}
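A useful consequence of the Clifford algebra \eqref{eq:Sigma} is that the chiral and anti-chiral bispinors associated with a vector $p_\mu$ square to $p^2$; contracting \eqref{eq:Sigma} with $p_\mu p_\nu$ gives
\begin{align}
p_{AB}\,\widetilde{p}^{BC}&=p_\mu p_\nu\bigl(\Sigma^\mu\widetilde\Sigma^\nu\bigr)_{A}^{\;\;C}=p^2\,\delta_A^{C}\,,&p_{AB}&=p_\mu\Sigma^\mu_{AB}\,,&\widetilde{p}^{AB}&=p_\mu\widetilde\Sigma^{\mu\,AB}\,.
\end{align}
In particular, for null momenta the antisymmetric bispinor $p_{AB}$ degenerates to rank two, which underlies its decomposition in terms of the six-dimensional spinors of \cite{Cheung:2009dc}.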
The six-dimensional Schouten identity reads
\begin{equation}
\delta_A^F\epsilon_{BCDE}+\delta_B^F\epsilon_{CDEA}+\delta_C^F\epsilon_{DEAB}+\delta_D^F\epsilon_{EABC}+\delta_E^F\epsilon_{ABCD}=0\,,
\end{equation}
and contractions of epsilon tensors may be deduced from
\begin{multline}
\epsilon_{ABCD}\epsilon^{EFGD}=\delta_A^E\delta_B^F\delta_C^G+\delta_A^F\delta_B^G\delta_C^E+\delta_A^G\delta_B^E\delta_C^F
-\delta_C^E\delta_B^F\delta_A^G-\delta_C^F\delta_B^G\delta_A^E-\delta_C^G\delta_B^E\delta_A^F
\end{multline}
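Contracting this identity once more, over the index pair $G=C$, yields the double contraction
\begin{equation}
\epsilon_{ABCD}\epsilon^{EFCD}=2\left(\delta_A^E\delta_B^F-\delta_A^F\delta_B^E\right)\,.
\end{equation}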
The first four of the six-dimensional sigma matrices are simply related to the Weyl representation of the four-dimensional gamma matrices
\begin{align}\label{eq:SigmaGamma}
\Sigma^\mu&= 1\otimes\epsilon\cdot\gamma^\mu=\begin{pmatrix}
0&-\sigma^{\mu\,\alpha}_{\phantom{\mu\,\alpha}\dot{\beta}}\\
\bar{\sigma}^{\mu\,\phantom{\dot\alpha}\beta}_{\phantom{\mu\,}\dot\alpha}&0
\end{pmatrix}\,,&\widetilde\Sigma^\mu&= \gamma^\mu\cdot1\otimes\epsilon^{-1}=\begin{pmatrix}
0&-\sigma_{\phantom{\mu}\alpha}^{\mu\,\phantom{\alpha}\dot\beta}\\
\bar\sigma^{\mu\,\dot\alpha}_{\phantom{\mu\,\dot\alpha}\beta}&0
\end{pmatrix}\,.
\end{align}
\subsubsection{Three-Point Kinematics}\label{appendix:threePoint}
The three-point kinematics
\begin{align}
p_1+p_2+p_3&=0\,, &p_i^2=0
\end{align}
imply the vanishing of all invariants
\begin{equation}\label{eq:invariants}
p_1\cdot p_2=p_1\cdot p_3=p_2\cdot p_3=0\,.
\end{equation}
As a consequence of \cref{eq:invariants}, the spinor products $\langle i|j]$ have rank one and possess a bispinor representation. A consistent set of spinors $\{u_i,\tilde u_i\}$ associated with the external legs has been introduced by C.~Cheung and D.~O'Connell in \cite{Cheung:2009dc} and reads
\begin{align}
\langle i_a|j_{\dot{a}}]&=u_{i\,a}\tilde{u}_{j\,\dot{a}}\,,& \langle j_a|i_{\dot{a}}]&=-u_{j\,a}\tilde{u}_{i\,\dot{a}}\,,&&\text{for $\{i,j\}$ cyclic.}
\end{align}
Due to momentum conservation, these spinors are subject to the constraints
\begin{align}\label{eq:P1}
u_1^a\langle 1_a|&=u_2^a\langle 2_a|=u_3^a\langle 3_a|\,,&\tilde{u}_1^{\dot a}[ 1_{\dot a}|&=\tilde{u}_2^{\dot a}[ 2_{\dot a}|=\tilde{u}_3^{\dot a}[ 3_{\dot a}|\,.
\end{align}
Furthermore, pseudoinverses of the spinors can be introduced
\begin{align}\label{eq:P2}
u_aw_b-u_bw_a&=\epsilon_{ab}\,,&\tilde{u}_{\dot{a}}\tilde{w}_{\dot{b}}-\tilde{u}_{\dot{b}}\tilde{w}_{\dot{a}}&=\epsilon_{\dot{a}\dot{b}}\,.
\end{align}
In order to reduce the redundancy in the definition of the spinors $w_i$ and $\tilde{w}_i$, it is convenient to impose the constraints
\begin{align}\label{eq:P3}
w_1^a\langle 1_a|+ w_2^a\langle 2_a|+ w_3^a\langle 3_a|&=0\,,& \tilde{w}_1^{\dot{a}}[ 1_{\dot{a}}|+\tilde{w}_2^{\dot{a}}[ 2_{\dot{a}}|+\tilde{w}_3^{\dot{a}}[ 3_{\dot{a}}|&=0\,.
\end{align}
\section{The Non-Chiral Superconformal Algebra} \label{sec:Algebra_Non_Chiral}
The $su(2)\times su(2)$ Lorentz generators $\mathds{M}_{\alpha \beta}$, $\overline{\mathds{M}}_{\dot{\alpha} \dot{\beta}}$ and the $su(2) \times su(2)$ $R$-symmetry generators $\mathfrak{M}_{n m}$, $\widetilde{\mathfrak{M}}_{n'm'}$ act canonically on the remaining generators carrying Lorentz and $R$-symmetry indices:
\begin{align}
\left[\mathds{M}_{\alpha \beta},\mathds{M}^{\gamma \delta}\right] &= \delta_{(\beta}^{\;(\gamma} \mathds{M}_{\alpha)}^{\;\;\;\delta)} &[\overline{\mathds{M}}_{\dot{\alpha} \dot{\beta}}, \overline{\mathds{M}}^{\dot{\gamma} \dot{\delta}}] &= \delta_{(\dot{\beta}}^{\;(\dot{\gamma}} \overline{\mathds{M}}_{\dot{\alpha})}^{\;\;\;\dot{\delta})} \\
[\mathds{M}_{\alpha \beta}, \mathds{P}^{\gamma \dot{\delta}}] & = \delta_{(\beta}^{\;\gamma} \mathds{P}_{\alpha)}^{\;\dot{\delta}} & [\overline{\mathds{M}}_{\dot{\alpha} \dot{\beta}}, \mathds{P}^{\gamma \dot{\delta}}] &= \delta_{(\dot{\beta}}^{\;\dot{\delta}} \mathds{P}_{\;\dot{\alpha})}^{\gamma} \\
[\mathds{M}_{\alpha \beta}, \mathds{K}^{\gamma \dot{\delta}}] &= -\delta_{(\beta}^{\;\gamma} \mathds{K}_{\alpha)}^{\;\dot{\delta}} & [\overline{\mathds{M}}_{\dot{\alpha} \dot{\beta}}, \mathds{K}^{\gamma \dot{\delta}}] &= -\delta_{(\dot{\beta}}^{\;\dot{\delta}} \mathds{K}_{\;\dot{\alpha})}^{\gamma} \\
[\mathds{M}_{\alpha \beta}, \mathds{Q}^{\gamma n}] &= \delta_{(\beta}^{\gamma} \mathds{Q}^{n}_{\alpha)} & [\overline{\mathds{M}}_{\dot{\alpha} \dot{\beta}}, \overline{\mathds{Q}}^{\dot{\gamma}}_{n}] &= \delta_{(\dot{\beta}}^{\dot{\gamma}} \overline{\mathds{Q}}_{\dot{\alpha}) n} \\
[\mathds{M}_{\alpha \beta}, \overline{\widetilde{\mathds{Q}}}^{\gamma n'}] &= \delta_{(\beta}^{\gamma} \overline{\widetilde{\mathds{Q}}}^{n'}_{\alpha)}& [\overline{\mathds{M}}_{\dot{\alpha} \dot{\beta}}, \widetilde{\mathds{Q}}^{\dot{\gamma}}_{n'}] &= \delta_{(\dot{\beta}}^{\dot{\gamma}} \widetilde{\mathds{Q}}_{\dot{\alpha}) n'}\\
[\mathds{M}_{\alpha \beta}, \mathds{S}^{\gamma}_{n}] &= - \delta^{\gamma}_{(\beta} \mathds{S}_{\alpha) n} & [\overline{\mathds{M}}_{\dot{\alpha} \dot{\beta}}, \overline{\mathds{S}}^{\dot{\gamma} n}] &= - \delta^{\dot{\gamma}}_{\;(\dot{\beta}} \overline{\mathds{S}}^{n}_{\dot{\alpha})} \\
[\mathds{M}_{\alpha \beta}, \overline{\widetilde{\mathds{S}}}^{\gamma}_{n'}]&= - \delta^{\gamma}_{(\beta} \overline{\widetilde{\mathds{S}}}_{\alpha) n'} & [\overline{\mathds{M}}_{\dot{\alpha} \dot{\beta}}, \widetilde{\mathds{S}}^{\dot{\gamma} n'}] &= - \delta^{\dot{\gamma}}_{\;(\dot{\beta}} \widetilde{\mathds{S}}^{n'}_{\dot{\alpha})}\\
\left[\mathfrak{M}_{n m},\mathfrak{M}^{k l}\right] &= \delta_{(m}^{\;(k} \mathfrak{M}_{n)}^{\;l)} & [\widetilde{\mathfrak{M}}_{n' m'}, \widetilde{\mathfrak{M}}^{k' l'}] &= \delta_{(m'}^{\;(k'} \widetilde{\mathfrak{M}}_{n')}^{\;l')} \\
[\mathfrak{M}_{n m}, \mathfrak{P}^{k l'}] &= \delta_{(m}^{\;k} \mathfrak{P}_{n)}^{\;l'} & [\widetilde{\mathfrak{M}}_{n' m'}, \mathfrak{P}^{k l'}] &= \delta_{(m'}^{\;l'} \mathfrak{P}_{\;n')}^{k} \\
[\mathfrak{M}_{n m}, \mathfrak{K}^{k l'}] &= - \delta_{(m}^{\;k} \mathfrak{K}_{n)}^{\;l'} & [\widetilde{\mathfrak{M}}_{n' m'}, \mathfrak{K}^{k l'}] &= - \delta_{(m'}^{\;l'} \mathfrak{K}_{\;n')}^{k} \\
[\mathfrak{M}_{n m}, \mathds{Q}^{\gamma k}] &= \delta_{(m}^{k} \mathds{Q}^{\gamma}_{n)} & [\widetilde{\mathfrak{M}}_{n' m'}, \widetilde{\mathds{Q}}^{k'}_{\dot{\alpha}}] &= \delta_{(n'}^{k'} \widetilde{\mathds{Q}}_{\dot{\alpha} m')} \\
[\mathfrak{M}_{n m}, \overline{\mathds{S}}_{\dot{\gamma}}^{k}] &= \delta_{(m}^{k} \overline{\mathds{S}}_{\dot{\gamma} n)} & [\widetilde{\mathfrak{M}}_{n' m'}, \overline{\widetilde{\mathds{S}}}^{k'}_{\alpha}] &= \delta_{(n'}^{k'}\overline{\widetilde{\mathds{S}}}_{\alpha m')} \\
[\mathfrak{M}_{n m}, \mathds{S}_{\alpha}^{k}] &= - \delta^{k}_{(n} \mathds{S}_{\alpha m)} & [\mathfrak{M}_{n m}, \overline{\mathds{Q}}_{\dot{\alpha}}^{k}] &= - \delta^{k}_{(n} \overline{\mathds{Q}}_{\dot{\alpha} m)} \\ [\widetilde{\mathfrak{M}}_{n' m'}, \widetilde{\mathds{S}}_{\dot{\alpha}}^{k'}] &= - \delta_{(m'}^{\;k'} \widetilde{\mathds{S}}_{\dot{\alpha} n')} & [\widetilde{\mathfrak{M}}_{n' m'}, \overline{\widetilde{\mathds{Q}}}_{\alpha}^{k'}] &= - \delta_{(m'}^{\;k'} \overline{\widetilde{\mathds{Q}}}_{\alpha n')}
\end{align}
The action of the dilatation $\mathds{D}$ and hypercharge $\mathds{B}$ on a generator $\mathds{G}$ is given by:
\begin{equation}
\left[\mathds{D}, \mathds{G} \right] =\dim\left(\mathds{G}\right) \mathds{G} \qquad \left[\mathds{B}, \mathds{G} \right] = \hyp\left(\mathds{G}\right) \mathds{G}
\end{equation}
The non-zero dimensions and hypercharges of the various generators are
\begin{equation}
\begin{gathered}
\begin{aligned}
\dim\left(\mathds{P}\right)& = 1\,, \qquad &\dim\left(\mathds{Q}\right) &= \dim \left(\widetilde{\mathds{Q}}\right) = \dim\left(\overline{\mathds{Q}}\right) = \dim (\overline{\widetilde{\mathds{Q}}}) = \tfrac{1}{2}\,,\\
\dim\left(\mathds{K}\right) &= -1\,, \qquad& \dim\left(\mathds{S}\right) &= \dim(\widetilde{\mathds{S}}) = \dim\left(\overline{\mathds{S}}\right) = \dim (\overline{\widetilde{\mathds{S}}}) = - \tfrac{1}{2}\,,
\end{aligned}\\
\begin{aligned}
\hyp\left(\mathds{Q}\right) &= \hyp(\overline{\widetilde{\mathds{Q}}}) = \hyp\left(\overline{\mathds{S}}\right) = \hyp(\widetilde{\mathds{S}}) = \tfrac{1}{2}\,, \\
\hyp\left(\overline{\mathds{Q}}\right) &= \hyp(\widetilde{\mathds{Q}}) = \hyp\left(\mathds{S}\right) = \hyp(\overline{\widetilde{\mathds{S}}}) = - \tfrac{1}{2} \,.
\end{aligned}
\end{gathered}
\end{equation}
The action of the $R$-dilatation $\mathfrak{D}$ on some generator $\mathds{G}$ is given by:
\begin{equation}
\left[\mathfrak{D}, \mathds{G} \right] =\ferm\left(\mathds{G}\right) \mathds{G}\,.
\end{equation}
The non-zero fermionic dimensions of the superconformal generators are:
\begin{equation}
\begin{aligned}
\ferm\left(\mathfrak{P}\right) &= 1\,, \qquad& \ferm\left(\mathds{Q}\right) &= \ferm (\widetilde{\mathds{Q}}) = \ferm\left(\overline{\mathds{S}}\right) = \ferm (\overline{\widetilde{\mathds{S}}}) = \tfrac{1}{2}\\
\ferm\left(\mathfrak{K}\right) &= -1 \qquad& \ferm\left(\overline{\mathds{Q}}\right) &= \ferm (\overline{\widetilde{\mathds{Q}}}) = \ferm\left(\mathds{S}\right) = \ferm (\widetilde{\mathds{S}}) = -\tfrac{1}{2}\,.
\end{aligned}
\end{equation}
The remaining non-trivial commutation relations are
\begin{align}\label{eq:NCsuperconformal}
\{\mathds{Q}^{n}_\alpha , \overline{\mathds{Q}}_{\dot{\alpha}m}\} &= \delta^{n}_{m} \mathds{P}_{\alpha \dot{\alpha}} \hspace{3cm}& \{\widetilde{\mathds{Q}}_{\dot{\alpha}}^{n'},\overline{\widetilde{\mathds{Q}}}_{\alpha m'}\} &= \delta^{n'}_{m'} \mathds{P}_{\alpha \dot{\alpha}}\\
[\mathds{K}_{\alpha \dot{\alpha}}, \mathds{Q}^{\beta n}] &= \delta^{\beta}_{\alpha} \overline{\mathds{S}}_{\dot{\alpha}}^{n}
& [\mathds{K}_{\alpha \dot{\alpha}}, \widetilde{\mathds{Q}}^{\dot{\beta} n'}] &= \delta^{\dot{\beta}}_{\dot{\alpha}} \overline{\widetilde{\mathds{S}}}_{\alpha}^{n'}\\
[\mathds{K}_{\alpha \dot{\alpha}},\overline{\mathds{Q}}^{\dot{\beta}}_{n}] &= \delta^{\dot{\beta}}_{\dot{\alpha}} \mathds{S}_{\alpha n}
& [\mathds{K}_{\alpha \dot{\alpha}},\overline{\widetilde{\mathds{Q}}}^{\beta}_{n'}] &= \delta^{\beta}_{\alpha} \widetilde{\mathds{S}}_{\dot{\alpha} n'}\\
[\mathds{S}_{\alpha n}, \mathds{P}^{\beta \dot{\beta}}] &= \delta^{\beta}_{\alpha} \overline{\mathds{Q}}^{\dot{\beta}}_{n} &
[\widetilde{\mathds{S}}_{\dot{\alpha} n'}, \mathds{P}^{\beta \dot{\beta}}] &= \delta^{\dot{\beta}}_{\dot{\alpha}} \overline{\widetilde{\mathds{Q}}}^{\beta}_{n'}\\
[\overline{\mathds{S}}_{\dot{\alpha}}^{n}, \mathds{P}^{\beta \dot{\beta}}] &= \delta^{\dot{\beta}}_{\dot{\alpha}} \mathds{Q}^{\beta n} & [\overline{\widetilde{\mathds{S}}}_{\alpha}^{n'}, \mathds{P}^{\beta \dot{\beta}}] &= \delta^{\beta}_{\alpha} \widetilde{\mathds{Q}}^{\dot{\beta} n'}\\
\{\mathds{S}_{\alpha n}, \overline{\mathds{S}}_{\dot{\alpha}}^{m}\} &= \delta^{m}_{n} \mathds{K}_{\alpha \dot{\alpha}} &
\{\widetilde{\mathds{S}}_{\dot{\alpha} n'}, \overline{\widetilde{\mathds{S}}}_{\alpha}^{m'}\} &= \delta^{m'}_{n'} \mathds{K}_{\alpha \dot{\alpha}} \\
\{\mathds{S}_{\alpha n}, \overline{\widetilde{\mathds{Q}}}^{\beta}_{n'}\} &= \delta^{\beta}_{\alpha} \mathfrak{K}_{n n'} & \{\widetilde{\mathds{S}}_{\dot{\alpha} n'}, \overline{\mathds{Q}}^{\dot{\beta}}_{n}\} &= - \delta^{\dot{\beta}}_{\dot{\alpha}} \mathfrak{K}_{n n'}\\
\{\overline{\mathds{S}}_{\dot{\alpha}}^{n}, \widetilde{\mathds{Q}}^{\dot{\beta} n'}\} &= \delta^{\dot{\beta}}_{\dot{\alpha}} \mathfrak{P}^{n n'} & \{\overline{\widetilde{\mathds{S}}}_{\alpha}^{n'}, \mathds{Q}^{\beta n}\} &= - \delta^{\beta}_{\alpha} \mathfrak{P}^{n n'}\\
\left[\right.\overline{\mathds{Q}}^{\dot{\beta}}_{m}, \mathfrak{P}^{n n'}\left. \right] &= \delta^{n}_{m} \widetilde{\mathds{Q}}^{\dot{\beta} n'} &
[\overline{\widetilde{\mathds{Q}}}^{\beta}_{m'}, \mathfrak{P}^{n n'}] &= - \delta^{n'}_{m'} \mathds{Q}^{\beta n} \\
[\mathds{S}_{\alpha m}, \mathfrak{P}^{n n'}] &= \delta^{n}_{m}\overline{\widetilde{\mathds{S}}}_{\alpha}^{n'} & [\widetilde{\mathds{S}}_{\dot{\alpha} m'}, \mathfrak{P}^{n n'}] &= - \delta^{n'}_{m'} \overline{\mathds{S}}_{\dot{\alpha}}^{n}\\
[\mathfrak{K}_{n n'}, \mathds{Q}^{\alpha m}] &= - \delta^{m}_{n} \overline{\widetilde{\mathds{Q}}}^{\alpha}_{n'} &
[\mathfrak{K}_{n n'}, \widetilde{\mathds{Q}}^{\dot{\alpha} m'}] &= \delta^{m'}_{n'} \overline{\mathds{Q}}^{\dot{\alpha}}_{n} \\
[\mathfrak{K}_{n n'}, \overline{\mathds{S}}_{\dot{\alpha}}^{m}] &= - \delta^{m}_{n} \widetilde{\mathds{S}}_{\dot{\alpha} n'} &
[\mathfrak{K}_{n n'}, \overline{\widetilde{\mathds{S}}}_{\alpha}^{m'}] &= \delta^{m'}_{n'} \mathds{S}_{\alpha n}
\end{align}
as well as
\begin{align}
\{\mathds{S}_{\alpha n}, \mathds{Q}^{\beta m}\} &= \delta^{m}_{n} \mathds{M}^{\beta}_{\;\;\alpha} - \delta^{\beta}_{\alpha} \mathfrak{M}^{m}_{\;\;n} + \tfrac{1}{2} \delta^{m}_{n} \delta^{\beta}_{\alpha} (\mathds{D} - \mathds{C} - \mathfrak{D})&\\
\{\widetilde{\mathds{S}}_{\dot{\alpha} n'}, \widetilde{\mathds{Q}}^{\dot{\beta} m'}\} &= \delta^{m'}_{n'} \overline{\mathds{M}}^{\dot{\beta}}_{\;\;\dot{\alpha}} - \delta^{\dot{\beta}}_{\dot{\alpha}} \widetilde{\mathfrak{M}}^{m'}_{\;\;n'} + \tfrac{1}{2} \delta^{m'}_{n'} \delta^{\dot{\beta}}_{\dot{\alpha}} (\mathds{D} + \mathds{C} - \mathfrak{D})&\\
\{\overline{\mathds{S}}_{\dot{\alpha}}^{n}, \overline{\mathds{Q}}^{\dot{\beta}}_{m}\} &= \delta^{n}_{m} \overline{\mathds{M}}^{\dot{\beta}}_{\;\;\dot{\alpha}} + \delta^{\dot{\beta}}_{\dot{\alpha}} \mathfrak{M}^{n}_{\;\;m} + \tfrac{1}{2} \delta^{n}_{m} \delta^{\dot{\beta}}_{\dot{\alpha}} (\mathds{D} + \mathds{C} + \mathfrak{D})&\\
\{\overline{\widetilde{\mathds{S}}}_{\alpha}^{n'}, \overline{\widetilde{\mathds{Q}}}^{\beta}_{m'}\} &= \delta^{n'}_{m'} \mathds{M}^{\beta}_{\;\;\alpha} + \delta^{\beta}_{\alpha} \widetilde{\mathfrak{M}}^{n'}_{\;\;m'} + \tfrac{1}{2} \delta^{n'}_{m'} \delta^{\beta}_{\alpha} (\mathds{D} - \mathds{C} + \mathfrak{D})&\\
[\mathfrak{K}_{m m'}, \mathfrak{P}^{n n'}] &= \delta^{n'}_{m'} \delta^{n}_{m} \mathfrak{D} + \delta^{n}_{m} \widetilde{\mathfrak{M}}^{n'}_{\;\;m'} + \delta^{n'}_{m'} \mathfrak{M}^{n}_{\;\;m}&\\
[ \mathds{K}_{\alpha \dot{\alpha}} , \mathds{P}^{\beta \dot{\beta}} ] &= \delta^{\beta}_{\alpha} \delta^{\dot{\beta}}_{\dot{\alpha}} \mathds{D} + \delta^{\dot{\beta}}_{\dot{\alpha}} \mathds{M}_{\;\alpha}^{\beta} + \delta^{\beta}_{\alpha} \overline{\mathds{M}}_{\;\dot{\alpha}}^{\dot{\beta}}\,.&
\end{align}
\subsection{The On-Shell Representation}\label{sec:on_shell_non_chiral}
We denote the generators of the on-shell representation of the non-chiral superconformal algebra by small letters $a,b,c$ and $\mathpzc{a}, \mathpzc{b}, \mathpzc{c}$. We introduce the following abbreviations
\begin{align}
\partial_{i \a} &= \frac{\partial}{\partial \lambda^{\a}_i}\,,&\partial_{i \dot{\alpha}} &= \frac{\partial}{\partial \tilde\lambda^{\dot{\alpha}}_i}\,,&\partial_{i n} &= \frac{\partial}{\partial \eta^{n}_i}\,,&\partial_{i n'} &= \frac{\partial}{\partial \tilde{\eta}_{i}^{n'}}
\end{align}
for derivatives with respect to the on-shell variables. The on-shell generators are
\begin{align}
p^{\dot{\alpha} \alpha} &= \sum_{i} \lambda_{i}^{\alpha} \tilde{\lambda}_{i}^{\dot{\alpha}} & k_{\dot{\alpha} \alpha} &= \sum_{i} \partial_{i\alpha} \partial_{i \dot{\alpha}}\\
m_{\alpha \beta} &= \sum_{i} \lambda_{i (\alpha} \partial_{i \beta)} & \overline{m}_{\dot{\alpha} \dot{\beta}} &= \sum_{i} \tilde{\lambda}_{i (\dot{\alpha}} \partial_{i \dot{\beta})} \\
q^{\alpha n} &= \sum_{i} \lambda^{\alpha}_{i} \eta^{n}_{i} & \tilde{q}^{\dot\alpha n'} &= \sum_{i} \tilde{\lambda}^{\dot\alpha}_{i} \tilde{\eta}_{i}^{n'} \\
\bar{q}^{\dot{\alpha}}_{n} &= \sum_{i} \tilde{\lambda}^{\dot{\alpha}}_{i} \partial_{i n}
& \bar{\tilde{q}}^{\alpha}_{ n'} &= \sum_{i} \lambda^{\alpha}_{i} \partial_{i n'}\\
s_{\alpha n} &= \sum_{i} \partial_{i \alpha} \partial_{i n} &\tilde{s}_{\dot{\alpha} n'} &= \sum_{i} \partial_{i \dot{\alpha}}\partial_{i n'} \\
\bar{s}^{n}_{\dot{\alpha}} &= \sum_{i} \eta^{n}_{i} \partial_{i \dot{\alpha}} &\bar{\tilde{s}}_{\alpha}^{n'} &= \sum_{i} \tilde{\eta}_{i}^{n'} \partial_{i \alpha} \\
d &= \tfrac{1}{2} \sum_{i} \left( \lambda^{\alpha}_{i} \partial_{i \alpha} + \tilde{\lambda}^{\dot{\alpha}}_{i} \partial_{i \dot{\alpha}} + 2 \right)& b &= \tfrac{1}{2} \sum_{i}\left(\eta^{n}_{i} \partial_{i n} - \tilde{\eta}^{n'}_{i} \partial_{i n'}\right)\\
c &= \tfrac{1}{2} \sum_{i}\left(-\lambda^{\alpha}_{i} \partial_{i \alpha} + \tilde{\lambda}^{\dot{\alpha}}_{i} \partial_{i \dot{\alpha}} + \eta^{n}_{i} \partial_{i n} - \tilde{\eta}^{n'}_{i} \partial_{i n'}\right)\hspace{-1cm}&\\
\mathpzc{p}^{n n'} &= \sum_{i} \eta_{i}^{n} \tilde{\eta}_{i}^{n'} &\mathpzc{k}_{\,\;n n'} &= \sum_{i} \partial_{i n} \partial_{i n'}\\
\mathpzc{m}_{\,n m} &= \sum_{i} \eta_{i(n} \partial_{i m)} & \widetilde{\mathpzc{m}}_{\,n' m'} &= \sum_{i} \tilde{\eta}_{i (n'} \partial_{i m')}\\
\mathpzc{d} &= \tfrac{1}{2} \sum_{i} \left(\eta^{n}_{i} \partial_{i n} + \tilde{\eta}^{n'}_{i} \partial_{i n'} - 2\right)&
\end{align}
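As a quick consistency check of these expressions one may verify, e.\,g., the anticommutator of $\bar{s}$ and $\tilde{q}$ directly:
\begin{equation}
\{\bar{s}^{n}_{\dot{\alpha}}, \tilde{q}^{\dot{\beta} n'}\}=\sum_{i,j}\{\eta^{n}_{i}\partial_{i\dot{\alpha}},\tilde{\lambda}^{\dot{\beta}}_{j}\tilde{\eta}^{n'}_{j}\}=\delta^{\dot{\beta}}_{\dot{\alpha}}\sum_{i}\eta^{n}_{i}\tilde{\eta}^{n'}_{i}=\delta^{\dot{\beta}}_{\dot{\alpha}}\,\mathpzc{p}^{n n'}\,,
\end{equation}
reproducing the abstract relation $\{\overline{\mathds{S}}_{\dot{\alpha}}^{n}, \widetilde{\mathds{Q}}^{\dot{\beta} n'}\} = \delta^{\dot{\beta}}_{\dot{\alpha}} \mathfrak{P}^{n n'}$ of \eqref{eq:NCsuperconformal}.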
\subsection{The Dual Representation} \label{sec:dual_non_chiral}
We denote the generators of the dual representation of the non-chiral superconformal algebra by capital letters $A,B,C$ and $\mathpzc{A}, \mathpzc{B}, \mathpzc{C}$. We present the dual representation in dual non-chiral superspace $(x,y,\theta,\tilde{\theta})$ using the following abbreviations
\begin{align}
\partial_{i \alpha \dot{\alpha}} &= \frac{\partial}{\partial x_{i}^{\dot{\alpha}\alpha }}=\tfrac{1}{2}\sigma^{\mu}_{\alpha\dot\alpha}\frac{\partial}{\partial x_i^\mu}\,,&
\partial_{i n n'} &= \frac{\partial}{\partial y_{i}^{n n'}}\,, &\partial_{i \alpha n} &= \frac{\partial}{\partial \theta_{i}^{\alpha n}}\,,& \partial_{i\dot{\alpha} n'} &= \frac{\partial}{\partial \tilde{\theta}_{i}^{\dot{\alpha} n'}}
\end{align}
for derivatives with respect to the dual variables.
In the dual superspace $\{x_i^{\dot{\alpha}\alpha},\theta_i^{m\,\alpha},\tilde\theta_i^{m'\,\dot\alpha}\}$ the generators of the dual non-chiral superconformal symmetry are given by
\begin{equation}\label{eq:dualConformalNC}
\begin{gathered}
\begin{aligned}
P_{\alpha \dot{\alpha}}& = \sum_{i} \partial_{i \alpha \dot{\alpha}} \qquad &\mathpzc{P}_{n n'} &= -\sum_{i} \partial_{i n n'} \\
Q_{\alpha n} &= -\sum_{i} \partial_{i \alpha n} \qquad& \widetilde{Q}_{\dot{\alpha} n'} &= -\sum_{i} \partial_{i \dot{\alpha} n'} \\
\overline{Q}^{n}_{\dot{\alpha}} &= \sum_{i} (\theta^{\alpha n}_{i} \partial_{i \alpha \dot{\alpha}} + y_{i}^{n n'} \partial_{i\dot{\alpha} n' }) \qquad& \overline{\widetilde{Q}}^{n'}_{\alpha} &= \sum_{i} (\tilde{\theta}^{\dot{\alpha} n'}_{i} \partial_{i \alpha \dot{\alpha}}-y_{i}^{n n'} \partial_{i \alpha n } ) \\
M_{\alpha \beta} &= \sum_{i} \left( \theta^{n}_{i (\alpha} \partial_{i \beta) n} + x_{i (\alpha}^{\dot{\alpha}} \partial_{i \beta) \dot{\alpha}} \right)&
\overline{M}_{\dot{\alpha} \dot{\beta}} &= \sum_{i} \left( \tilde{\theta}^{n'}_{i (\dot{\alpha}} \partial_{i \dot{\beta}) n'} + x_{i (\dot{\alpha}}^{\alpha} \partial_{i \dot{\beta}) \alpha} \right)\\
\mathpzc{M}_{\,n m} &= \sum_{i} \left( \theta_{i \alpha(n} \partial_{i m)}^{\alpha} + y_{i (n}^{\;\;\;\;n'} \partial_{i m) n'} \right)&
\widetilde{\mathpzc{M}}_{\,n' m'} &= \sum_{i} \left(\tilde{\theta}_{i \dot{\alpha}(n'} \partial_{i m')}^{\dot{\alpha}} + y_{i n (n'} \partial_{i m')}^{n} \right)\\
\overline{S}^{\dot{\alpha}}_{n} &= -\sum_{i} (\tilde{\theta}_{i}^{\dot{\alpha} n'} \partial_{i n n'} + x_{i}^{\alpha \dot{\alpha}} \partial_{i \alpha n}) \qquad&
\overline{\widetilde{S}}^{\alpha}_{n'} &= \sum_{i} ( \theta_{i}^{\alpha n} \partial_{i n n'} -x_{i}^{\alpha \dot{\alpha}} \partial_{i \dot{\alpha} n'} )\end{aligned}\\
\begin{aligned}
S^{\alpha n} &= \sum_{i} \left(- \theta_{i}^{\alpha m} \theta_{i}^{\beta n} \partial_{i \beta m} + x_{i}^{\alpha \dot{\beta}} \theta_{i}^{\beta n} \partial_{i \beta \dot{\beta}} - \theta_{i }^{\alpha m} y_{i}^{n m'} \partial_{im m'} + y_{i}^{n m'} x_{i}^{\alpha \dot{\alpha}} \partial_{i\dot{\alpha} m' }\right)\\
\widetilde{S}^{\dot{\alpha} n'} &= \sum_{i} \left(-\tilde{\theta}_{i}^{\dot{\alpha} m'} \tilde{\theta}_{i}^{\dot{\beta} n'} \partial_{i \dot{\beta} m'} + x_{i}^{\dot{\alpha} \beta} \tilde{\theta}_{i}^{\dot{\beta} n'} \partial_{i \beta \dot{\beta}} -\tilde{\theta}_{i}^{ \dot{\alpha} m'} y_{i}^{m n'} \partial_{i m m'} - y_{i}^{m n'} x_{i}^{\dot{\alpha} \alpha} \partial_{i \alpha m}\right)\\
K_{\alpha \dot{\alpha}} &= \sum_{i} \left( x_{i \alpha}^{\;\;\; \dot{\beta}} x_{i \dot{\alpha}}^{\;\;\; \beta} \partial_{i \beta \dot{\beta}} + x_{i \dot{\alpha}}^{\;\;\;\beta} \theta_{i \alpha}^{n} \partial_{i n \beta} + x_{i \alpha}^{\;\;\; \dot{\beta}} \tilde{\theta}_{i \dot{\alpha}}^{n'} \partial_{i n' \dot{\beta}}
+ \theta_{i \alpha}^{n} \tilde{\theta}_{i\dot{\alpha}}^{n'} \partial_{i n n'}\right)\\
\mathpzc{K}^{n n'} &= \sum_{i} \left(- y_{i}^{n m'} y_{i}^{m n'} \partial_{i m m'} - \tilde{\theta}^{\dot{\alpha} n'}_{i} y_{i}^{n m'} \partial_{i \dot{\alpha} m'} - \theta^{\alpha n}_{i} y_{i}^{m n'} \partial_{i \alpha m} +\theta_{i}^{n \alpha} \tilde{\theta}_{i}^{n' \dot{\alpha}} \partial_{i\,\alpha \dot{\alpha}} \right)\\
D &=- \tfrac{1}{2} \sum_{i} \left(\theta_{i}^{n \alpha} \partial_{i \alpha n} + \tilde{\theta}_{i}^{\dot{\alpha} n'} \partial_{i \dot{\alpha} n'} + 2 x^{\alpha \dot{\alpha}}_{i} \partial_{i \alpha \dot{\alpha}} \right)\\
\mathpzc{D} &= -\tfrac{1}{2} \sum_{i} \left( \theta_{i}^{n \alpha} \partial_{i \alpha n} + \tilde{\theta}_{i}^{\dot{\alpha} n'} \partial_{i \dot{\alpha} n'} + 2 y^{n n'}_{i} \partial_{i n n'}\right)\\
B &= \tfrac{1}{2} \sum_{i} \left( \tilde{\theta}_{i}^{\dot{\alpha} n'} \partial_{i \dot{\alpha} n'}-\theta_{i}^{n \alpha} \partial_{i \alpha n} \right)
\end{aligned}
\end{gathered}
\end{equation}
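As an illustration of how these differential operators realize the algebra, consider the anticommutator of $Q$ and $\overline{Q}$:
\begin{equation}
\{Q_{\alpha n}, \overline{Q}^{m}_{\dot{\beta}}\} = -\sum_{i} \{\partial_{i \alpha n}\,,\, \theta^{\beta m}_{i} \partial_{i \beta \dot{\beta}}+y_{i}^{m n'} \partial_{i \dot{\beta} n'}\} = -\delta^{m}_{n} \sum_{i} \partial_{i \alpha \dot{\beta}} = -\delta^{m}_{n}\, P_{\alpha \dot{\beta}}\,,
\end{equation}
where the $y$-term drops out because the two fermionic derivatives anticommute, and the overall sign corresponds to one of the eight admissible sign choices mentioned below.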
We note that there are seven other possibilities to choose the signs of the generators such that they fulfill the non-chiral superconformal algebra listed at the beginning of \cref{sec:Algebra_Non_Chiral}. It is straightforward to obtain the generators in full non-chiral superspace $\{\lambda_i^\alpha,\tilde\lambda_i^{\dot\alpha},x_i^{\dot{\alpha}\alpha},\eta_{i}^m,\tilde\eta_{i}^{m'},\theta_i^{m\,\alpha},\tilde\theta_i^{m'\,\dot\alpha}\}$ by extending the action of the generators in dual non-chiral superspace such that they commute with the constraints \cref{eq:constraints_full_nonchiral}. Alternatively, one could derive the action of the conformal and superconformal generators $K_{\a\dot{\alpha}}$, $S^{\alpha n}$, $\widetilde{S}^{\dot{\alpha} n'}$, $\overline{S}^{\dot{\alpha}}_{n}$ and $\overline{\widetilde{S}}^{\alpha}_{n'}$ in full superspace from their definition \eqref{eq:superconformalGenerators_NC} and the inversion rules \eqref{eq:inversion4dNC} of the on-shell variables. The action of all remaining generators on the on-shell variables can then be obtained from the non-chiral superconformal algebra.
\section{Introduction}\label{section:intro}
Scattering amplitudes of maximally supersymmetric Yang-Mills theories in 3, 4, 6 and 10
dimensions possess remarkable properties. Next to their constitutional maximally extended
super-Poincar\'e symmetries they all enjoy a hidden dual conformal symmetry -- at least
at the tree-level \cite{Lipstein:2012kd,Drummond:2008vq,Brandhuber:2008pf,Dennen:2010dh,CaronHuot:2010rj}.
The four dimensional ${\cal N}=4$ super
Yang-Mills (SYM) theory is distinguished in this series as it also has superconformal symmetry in the standard sense. The standard superconformal symmetry then further enhances the dual conformal symmetry to a dual superconformal
symmetry \cite{Drummond:2008vq,Brandhuber:2008pf}.
On top of that, the closure of the two sets of superconformal symmetry algebras leads to an infinite-dimensional symmetry algebra of Yangian type \cite{Drummond:2009fd}. It is the manifestation
of an underlying integrable structure in planar ${\cal N}=4$ SYM. The key to the discoveries of
these rich symmetry structures of maximally supersymmetric Yang-Mills theories in various dimensions is the use of a suitable on-shell superspace formalism
along with spinor helicity variables to package the component field amplitudes into
superamplitudes, which was pioneered in 4d in \cite{Nair:1988bq}. In this work we shall focus
on the four- and six-dimensional maximally supersymmetric theories: the 4d ${\cal N}=4$ SYM
and the 6d ${\cal N}=(1,1)$ SYM models.
While the massless tree amplitudes of 4d ${\cal N}=4$ SYM are very well studied and in fact known
analytically \cite{Drummond:2008cr}, not so much is known about the massive amplitudes on the Coulomb branch of this theory. These amplitudes are obtained by giving a vacuum
expectation value to the
scalar fields and yield -- arguably -- the simplest massive amplitudes in four
dimensions. Alternatively, these massive amplitudes
arise from the amplitudes of the maximally supersymmetric
6d ${\cal N}=(1,1)$ SYM theory upon dimensional reduction, where the higher dimensional momenta
yield the masses in the 4d theory.
Indeed, compact arbitrary multiplicity amplitudes for particular subclasses of
Coulomb branch amplitudes have been obtained in \cite{Craig:2011ws} by making use
of modern on-shell techniques.
The massive 4d ${\cal N}=4$ SYM amplitudes
are invariant under a dual conformal symmetry which is inherited from the
6d ${\cal N}=(1,1)$ SYM theory as shown in \cite{Dennen:2010dh}.
Moreover, this symmetry remains
intact also at loop-level if one restricts the loop-momentum integrations
to a four-dimensional subspace.
This prescription is equivalent to the Higgs regularization for infrared divergences in 4d proposed in \cite{Alday:2009zm},
where such an extended dual conformal invariance was conjectured and tested at the one-loop
four-point level. The dimensional reduction of 6d ${\cal N}=(1,1)$ SYM to four dimensions
yields ${\cal N}=4$ superamplitudes expressed on a non-chiral superspace \cite{Huang:2011um} which
is distinct to the usual chiral superspace of \cite{Nair:1988bq}.
In this work we explicitly construct all generators of the standard and dual (super)
conformal symmetry generators acting in the non-chiral ${\cal N}=4$ on-shell superspace
as well as in the
${\cal N}=(1,1)$ on-shell superspace. We also determine the standard and dual symmetries of
massive ${\cal N}=4$ amplitudes as they are induced from an enhanced super-Poincar\'e and
enhanced dual conformal symmetry of the 6d ${\cal N}=(1,1)$ SYM theory.
The most efficient method to analytically construct tree-level amplitudes is
based on an on-shell recursive technique due to Britto, Cachazo, Feng and Witten
(BCFW) \cite{Britto:2004ap,Britto:2005fq}.
In contrast to the earlier Berends-Giele off-shell recursion relations~\cite{Berends:1987me},
the BCFW relation uses only on-shell lower-point amplitudes, evaluated at complex, shifted
momenta. The BCFW recursion relation is easily generalizable to an on-shell recursion
for superamplitudes, as was done for ${\cal N}=4$ SYM in \cite{ArkaniHamed:2009dn}
(see also \cite{Bianchi:2008pu}). In fact the knowledge of the dual superconformal invariance
of superamplitudes motivates an ansatz in terms of dual conformal invariants.
Together with the super
BCFW recursion this allowed for the complete analytic solution \cite{Drummond:2008cr}.
In fact the variant of the BCFW recursion for 4d ${\cal N}=4$ SYM in non-chiral superspace has
not been written down before and we will do so in this work. The BCFW recursion for 6d
${\cal N}=(1,1)$ SYM theory was established in \cite{Dennen:2009vk,Bern:2010qa} and
tree-level amplitudes of multiplicities up to five were derived. The one loop
corrections were obtained in \cite{Brandhuber:2010mm}.
In this work we point out how a numerical implementation of the BCFW recursion for ${\cal N}=(1,1)$
SYM amplitudes in combination with a suitable set of dual conformal invariant basis
functions may be used to derive compact five and six-point amplitudes as well as
arbitrary multiplicity amplitudes for certain subclasses related to the 4d amplitudes with
two neighboring massive legs mentioned above \cite{Craig:2011ws}.
In fact, the method we propose is very general and could be applied to further cases
as well.
A very tempting option to obtain massive 4d amplitudes of ${\cal N}=4$ SYM
was introduced by Huang in \cite{Huang:2011um}. He indicated that it should be possible
to invert the dimensional reduction of ${\cal N}=(1,1)$ to massive ${\cal N}=4$ by uplifting the
massless non-chiral superamplitudes of $\mathcal{N}=4$ SYM to six-dimensional
superamplitudes of ${\mathcal N}=(1,1)$ SYM. Non-chiral superamplitudes of $\mathcal{N}=4$ SYM
are straightforward to obtain using the non-chiral BCFW recursion, which would give a potential uplift eminent practical relevance. It is indeed very surprising that the massive Coulomb branch amplitudes, or equivalently the six-dimensional amplitudes, might not contain any more information than the massless four-dimensional amplitudes of $\mathcal{N}=4$ SYM.
It is the aim of this paper to provide a self-consistent and detailed exposition of
the theory of superamplitudes for 4d ${\cal N}=4$ SYM and 6d ${\cal N}=(1,1)$ SYM. The paper
is organized as follows.
We discuss the needed spinor helicity formalisms in section 2. Sections 3 and 4 are devoted to
the on-shell superspaces of both theories and the standard and hidden symmetries of the
associated superamplitudes. In section 5 we discuss the dimensional reduction from
massless 6d to massive 4d amplitudes and establish the inherited (hidden) symmetries of the 4d
amplitudes. Section 6 exposes the on-shell BCFW recursion relations for ${\cal N}=4$ SYM in
non-chiral superspace as well as for ${\cal N}=(1,1)$ SYM. We also provide a proof of dual
conformal symmetry of ${\cal N}=(1,1)$ superamplitudes, thereby correcting some minor mistakes in the literature. Finally, in section 8 we analyze in detail the proposal of Huang for
uplifting 4d massless ${\cal N}=4$ superamplitudes in non-chiral superspace to 6d
${\cal N}=(1,1)$ superamplitudes and point out why this uplift is non-trivial and in fact not of real practical use for multiplicities larger than five. Notational details and
extended formulae are relegated to the appendices.
\section{Spinor helicity formalism}
\subsection{General remarks}
The spinor helicity formalism has become a powerful tool for calculating scattering amplitudes of massless particles, yielding compact expressions at tree-level and one loop. The basic idea is to use a set of commuting spinor variables instead of the parton momenta $\{p_i\}$. These spinors trivialize the on-shell conditions for the momenta
\begin{equation}
(p_i)^2=0\,.
\end{equation}
In what follows we will briefly review the spinor helicity formalism in four and six dimensions. Additional details and conventions can be found in \cref{appendix:Spinors}.
\subsection{Four dimensions}\label{section:spinor4d}
The starting point of the spinor helicity formalism in four dimensions
\cite{DeCausmaecker:1981bg,Berends:1981uq,Kleiss:1985yh,Xu:1986xb},
which we briefly review here,
is to express all momenta by $(2\times2)$ matrices
\begin{align}
p_{\alpha\dot\alpha}&= \sigma^{\mu}_{\alpha\dot\alpha}\,p_{\mu}\,,&p^{\dot\alpha\alpha}&= \bar{\sigma}^{\mu\,\dot\alpha\alpha}\,p_{\mu},&\text{or inversely}&&p_\mu=\tfrac{1}{2}p_{\alpha\dot\alpha}\bar\sigma_\mu^{\dot\alpha\alpha}=\tfrac{1}{2}p^{\dot\alpha\alpha}\sigma_{\mu\,\alpha\dot\alpha}\,,
\end{align}
where we take $\sigma^{\mu}=(\mathbf{1},\vec{\sigma})$ and $\bar{\sigma}^{\mu}=(\mathbf{1},-\vec{\sigma})$ with
$\vec{\sigma}$ being the Pauli matrices. Raising and lowering of the $\alpha$ and $\dot\alpha$ indices may be conveniently defined by left multiplication with the antisymmetric $\epsilon$ symbol for which we choose the following conventions
\begin{align}
\epsilon_{12}&=\epsilon_{\dot{1}\dot{2}}=-\epsilon^{12}=-\epsilon^{\dot{1}\dot{2}}=1\,,&
\epsilon_{\alpha\beta}\epsilon^{\beta\gamma}&=\delta_\alpha^\gamma&\epsilon_{\dot{\alpha}\dot{\beta}}\epsilon^{\dot{\beta}\dot{\gamma}}&=\delta_{\dot{\alpha}}^{\dot{\gamma}}\,.
\end{align}
Besides being related by $p_{\alpha\dot{\alpha}}=\epsilon_{\alpha\beta}\epsilon_{\dot\alpha\dot\beta}p^{\dot\beta\beta}=p_{\dot\alpha\alpha}$, these matrices satisfy $p^2=\det(p_{\alpha\dot\alpha})=\det(p^{\dot\alpha\alpha})$, $p_{\alpha\dot\alpha}p^{\dot\alpha\beta}=p^2\delta_\alpha^\beta$ and $p^{\dot\alpha\alpha}p_{\alpha\dot\beta}=p^2\delta^{\dot\alpha}_{\dot\beta}$. Hence, the matrices $p^{\dot\alpha\alpha}$ and $p_{\alpha\dot\alpha}$ have rank one for massless momenta, implying the existence of chiral spinors $\lambda_\alpha$ and anti-chiral spinors $\tilde\lambda^{\dot\alpha}$ solving the massless Weyl equations
\begin{align}
p_{\alpha\dot\alpha}\tilde\lambda^{\dot\alpha}&=0\,,&p^{\dot\alpha\alpha}\lambda_{\alpha}&=0\,.
\end{align}
These spinors can be normalized such that
\begin{align}\label{eq:bispinor}
p_{\alpha\dot\alpha}&=\lambda_{\alpha}\, \tilde \lambda_{\dot\alpha}\, .
\end{align}
For complex momenta the spinors $\lambda$ and
$\tilde\lambda$ are independent. However, for real momenta we have the reality condition $p_{\alpha\dot\beta}^*=p_{\dot\alpha\beta}$, implying $\tilde \lambda_{\dot\alpha}=c\, \lambda^{*}_\alpha$ for some $c\in \mathds{R}$. Hence, the spinors can be normalized such that
\begin{equation}
\tilde\lambda_{\dot\alpha}=\pm\, \lambda^{*}_\alpha\,.
\end{equation}
An explicit representation is
\begin{equation}
|\lambda\rangle := \lambda_\alpha = \sfrac{\sqrt{p_0+p_3}}{p_1-ip_2}\,
\begin{pmatrix}
p_1-ip_2 \\
p_0-p_3 \\
\end{pmatrix} \, ,\qquad
|\tilde\lambda] := \tilde\lambda^{\dot\alpha} =
\sfrac{\sqrt{p_0+p_3}}{p_1+ip_2}\,
\begin{pmatrix}
-p_0+p_3 \\ p_1+ip_2 \\
\end{pmatrix} \, ,
\label{eq:reducedspinors}
\end{equation}
with $\tilde\lambda_{\dot\alpha}=\mathop{\mathrm{sign}}(p_0+p_3)\, \lambda^{*}_\alpha$.
Obviously, \cref{eq:bispinor} is invariant under the $SO(2)$ little group transformations
\begin{align} \label{4D_Littlegroup}
\lambda_{\alpha} \rightarrow z \lambda_{\alpha}\,, && \tilde{\lambda}_{\dot\alpha} \rightarrow z^{-1} \tilde{\lambda}_{\dot\alpha}\,&&&\text{with}&|z|=1\,.
\end{align}
Labeling the external particles by $i$, each parton momentum is invariant under its own little group transformation $\lambda_{i} \rightarrow z_i\, \lambda_{i}$. The simplest Lorentz invariant and little group covariant objects that can be built out of the chiral and anti-chiral spinors are the anti-symmetric spinor products
\begin{align}\label{spinor_kontaktion}
\ang{i}{j}&= \ang{\lambda_{i}}{ \lambda_{j}} = \lambda_{i}{^\alpha} \lambda_{j\alpha}\,, &&[i \,j] = [\tilde{\lambda}_{i}\, \tilde{\lambda}_{j}] =\tilde{\lambda}_{i\dot\alpha} \tilde{\lambda}_{j}^{\dot\alpha}
\end{align}
The little group invariant scalar products of massless momenta are then given by a product of two spinor brackets
\begin{equation} \label{skalarprodukt}
2 p_i p_j =p_{i\,\alpha\dot\alpha}p_j^{\dot\alpha\alpha} = \langle i\, j\rangle [j\, i]\,.
\end{equation}
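This identity follows in one line from the bispinor decomposition \eqref{eq:bispinor} and the definitions \eqref{spinor_kontaktion}:
\begin{equation}
p_{i\,\alpha\dot\alpha}\,p_j^{\dot\alpha\alpha}=\lambda_{i\,\alpha}\tilde\lambda_{i\,\dot\alpha}\,\tilde\lambda_{j}^{\dot\alpha}\lambda_{j}^{\alpha}=\big(\lambda_{j}^{\alpha}\lambda_{i\,\alpha}\big)\big(\tilde\lambda_{i\,\dot\alpha}\tilde\lambda_{j}^{\dot\alpha}\big)=\ang{j}{i}\,[i\,j]=\ang{i}{j}\,[j\,i]\,,
\end{equation}
where the last equality uses the antisymmetry of both spinor brackets.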
The spinor helicity formalism allows for a compact treatment of polarizations.
Each external gluon carries helicity $h_{i}=\pm 1$ and a momentum specified by the spinors $\lambda_{i}$
and $\tilde\lambda_{i}$. Given
this data the associated polarization vectors are
\begin{align}
\left(\varepsilon^{+}_{i}\right)^{\dot\alpha\alpha}
&= \sqrt{2}\frac{\tilde\lambda_{i}^{\dot\alpha}\, \mu_{i}^\alpha}{\ang{\lambda_{i}}{\mu_{i}}}\, , &
\left(\varepsilon^{-}_{i}\right)^{\dot\alpha\alpha}
&= \sqrt{2}\frac{\tilde\mu_{i}^{\dot\alpha}\,\lambda_{i}^{\alpha}}
{\sqb{\tilde\mu_{i}}{\tilde\lambda_{i}}}\, ,&\left(\varepsilon^{\pm}_{i}\right)^{\mu}&=\tfrac{1}{2}\sigma_{\alpha\dot\alpha}^\mu\left(\varepsilon^{\pm}_{i}\right)^{\dot\alpha\alpha}\,,
\end{align}
where $(q_i)_{\alpha\dot\alpha}=\mu_{i\,\alpha}\tilde\mu_{i\,\dot\alpha}$ are auxiliary light-like momenta reflecting the freedom of on-shell gauge transformations. It is straightforward to verify that the polarization vectors fulfill
\begin{align}
\varepsilon_i^{\pm}\cdot p_i&=0\,,&\varepsilon_i^{\pm}\cdot q_i&=0\,,&\varepsilon_i^{\pm}\cdot\varepsilon_i^{\pm}&=0\,,&\varepsilon_i^{\pm}\cdot\varepsilon_i^{\mp}&=-1\,,&(\varepsilon_i^{+})_\mu^*&=(\varepsilon_i^{-})_\mu\,,
\end{align}
as well as the completeness relation
\begin{equation}
\sum_{h=\pm}(\varepsilon^h_{i})_\mu(\varepsilon^h_{i})_\nu^*=-\eta_{\mu\nu}+\frac{p_{i\,\mu}q_{i\,\nu}+p_{i\,\nu}q_{i\,\mu}}{p_{i}\cdot q_{i}}\,.
\end{equation}
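For illustration, the normalization $\varepsilon^{+}_{i}\cdot\varepsilon^{-}_{i}=-1$ can be checked directly, using $2\,a\cdot b=a_{\alpha\dot\alpha}b^{\dot\alpha\alpha}$ and the $\epsilon$-conventions above:
\begin{equation}
\varepsilon^{+}_{i}\cdot\varepsilon^{-}_{i}=\tfrac{1}{2}\left(\varepsilon^{+}_{i}\right)^{\dot\alpha\alpha}\left(\varepsilon^{-}_{i}\right)_{\alpha\dot\alpha}=\frac{\big(\mu^{\alpha}_{i}\lambda_{i\,\alpha}\big)\big(\tilde\mu_{i\,\dot\alpha}\tilde\lambda^{\dot\alpha}_{i}\big)}{\ang{\lambda_{i}}{\mu_{i}}\,\sqb{\tilde\mu_{i}}{\tilde\lambda_{i}}}=\frac{\ang{\mu_{i}}{\lambda_{i}}}{\ang{\lambda_{i}}{\mu_{i}}}=-1\,,
\end{equation}
since the square bracket cancels and the angle bracket is antisymmetric.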
A summary of all our conventions for four dimensional spinors can be found in \cref{appendix:Spinors}.
\subsection{Six dimensions}\label{section:spinor6d}
Similar to four dimensions, the six-dimensional spinor-helicity formalism \cite{Cheung:2009dc} provides a solution to the on-shell condition $p^2=0$ for massless momenta by expressing them in terms of spinors. As a first step one uses the six-dimensional analog of the
Pauli matrices $\Sigma^\mu$ and $\widetilde \Sigma^\mu$ to represent a six-dimensional vector by an antisymmetric $4\times 4$ matrix
\begin{align}
p_{AB}&=p_\mu\Sigma^\mu_{AB}\,,& p^{AB}&=p_\mu\widetilde\Sigma^{\mu\,AB}\,,\,&&\text{or inversely} &p^\mu&=\tfrac{1}{4}\,p_{AB}\widetilde\Sigma^{\mu\,BA}=\tfrac{1}{4}\,p^{AB}\Sigma^\mu_{BA}\,.
\end{align}
Besides being related by $p_{AB}=\tfrac{1}{2}\,\epsilon_{ABCD}\,p^{CD}$, these matrices satisfy $p_{AB}p^{BC}=\delta_A^C p^2$ and $\det (p^{AB})=\det (p_{AB})=(p^2)^2$. Hence, for massless momenta, $p_{AB}$ and $p^{AB}$ have rank 2 and therefore the chiral and anti-chiral parts of the Dirac equation
\begin{align}\label{eq:Weyl6d}
p_{AB}\lambda^{B\,a}&=0\,,& p^{AB}\tilde\lambda_{B\,\dot a}&=0
\end{align}
each have two independent solutions, labeled by their little group indices $a=1,2$ and $\dot a= \dot 1,\dot 2$ respectively. Raising and lowering of the $SU(2)\times SU(2)$ little group indices may be conveniently defined by contraction with the antisymmetric tensors $\epsilon_{ab}$ and $\epsilon^{\dot a \dot b}$
\begin{align}
\lambda^{A}_{\phantom{A}\,a}&=\epsilon_{ab}\lambda^{A\,b}\,,&\tilde\lambda_{A}^{\phantom{A}\,\dot a}&=\epsilon^{\dot a \dot b}\tilde\lambda_{A\,\dot b}\,.
\end{align}
The anti-symmetry of $p_{AB}$ and $p^{AB}$ together with the on-shell condition
$p_{AB}\, p^{BC}=0$ yields the bispinor representation
\begin{align}\label{eq:bispinor6d}
p_{AB}&=\tilde\lambda_{A\,\dot a}\tilde\lambda_B^{\phantom{B}\,\dot a}\, , \qquad p^{AB}=\lambda^{A\,a}\lambda^{B}_{\phantom{B}\,a}\, \quad \text{and}
\quad \lambda^{A\,a}\tilde\lambda_{A\,\dot b}=0\, .
\end{align}
An explicit representation of the chiral and anti-chiral spinors is given by
\begin{align}
\lambda^{A\, a}&=\begin{pmatrix}
0 &\sqrt{p_0+p_{3}}\\
\frac{-p_5+ip_4}{\sqrt{p_0+p_{3}}} &\frac{p_1+ip_2}{\sqrt{p_0+p_{3}}}\\
\frac{-p_1+ip_2}{\sqrt{p_0+p_{3}}} &\frac{-p_5-ip_4}{\sqrt{p_0+p_{3}}}\\
\sqrt{p_0+p_{3}} &0
\end{pmatrix}\,,&
\tilde\lambda_{A\,\dot a}&=\begin{pmatrix} 0 &\sqrt{p_0-p_{3}}\\
\frac{p_5+ip_4}{\sqrt{p_0-p_{3}}} &\frac{-p_1+ip_2}{\sqrt{p_0-p_{3}}}\\
\frac{p_1+ip_2}{\sqrt{p_0-p_{3}}} &\frac{p_5-ip_4}{\sqrt{p_0-p_{3}}}\\
\sqrt{p_0-p_{3}} &0
\end{pmatrix}\,.
\end{align}
As a consequence of the properties of the six-dimensional Pauli matrices, the spinors are subject to the constraint
\begin{equation}\label{eq:6dspinorConstraint}
\lambda^{A\, a}\lambda^{B}_{\,a}=\tfrac{1}{2}\epsilon^{ABCD}\tilde\lambda_{C\, \dot a}\tilde\lambda_{D}^{\dot a}\,.
\end{equation}
It is convenient to introduce the bra-ket notation
\begin{align}
\lambda_i^a&=|p_i^a\rangle=|i^a\rangle\,,&\tilde\lambda_{i\,\dot a}&=|p_{i\,\dot a}]=|i_{\dot a}]
\end{align}
By fully contracting all $SU(4)$ Lorentz indices it is possible to construct little group covariant and Lorentz invariant objects. The simplest Lorentz invariants are the products of chiral and anti-chiral spinors
\begin{align}
\langle i^a|j_{\dot a}]=[j_{\dot a} |i^a\rangle=\lambda_i^{A\,a}\tilde\lambda_{j\,A\,\dot a}
\end{align}
These little group covariant spinor products are related to the little group invariant scalar products by
\begin{equation}\label{eq:invariants}
2p_i\cdot p_j=\tfrac{1}{2}p_i^{AB}p_{j\,BA}=\det\left(\langle i|j]\right)\,.
\end{equation}
The spinor products are $2\times 2$ matrices whose inverse is
\begin{equation}
( \langle i^a|j_{\dot b}])^{-1}=-\frac{ [j^{\dot b}|i_a\rangle}{2p_i\cdot p_j}
\end{equation}
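This is nothing but the adjugate formula $M^{-1}=\mathrm{adj}(M)/\det M$ for a $2\times 2$ matrix applied to $M^{a}_{\;\;\dot b}=\langle i^a|j_{\dot b}]$: the determinant equals $2p_i\cdot p_j$ by \eqref{eq:invariants}, while raising and lowering the little group indices with $\epsilon$ assembles the adjugate into the spinor product with reversed arguments,
\begin{equation}
(\langle i^a|j_{\dot b}])^{-1}=\frac{\mathrm{adj}\big(\langle i|j]\big)^{\dot b}_{\;\;a}}{\det\big(\langle i|j]\big)}=-\frac{[j^{\dot b}|i_a\rangle}{2p_i\cdot p_j}\,,
\end{equation}
with the overall sign fixed by our $\epsilon$-conventions.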
Each set of four linearly independent spinors labeled by $i$, $j$, $k$, $l$ can be contracted with the antisymmetric tensor to give the Lorentz invariant four brackets
\begin{align}
\langle i^a j^b k^c l^d \rangle&=\epsilon_{ABCD} \lambda_i^{A\,a}\lambda_j^{B\,b}\lambda_k^{C\,c}\lambda_l^{D\,d}=\det(\lambda_i^a\lambda_j^b\lambda_k^c\lambda_l^d)\,,\\
[i_{\dot a} j_{\dot b}k_{\dot c}l_{\dot d} ]&=\epsilon^{ABCD} \tilde\lambda_{i\,A\,\dot a}\tilde\lambda_{j\,B\,\dot b}\tilde\lambda_{k\,C\,\dot c}\tilde\lambda_{l\,D\,\dot d}=
\det(\tilde\lambda_{i\,\dot a}\tilde\lambda_{j\,\dot b}\tilde\lambda_{k\,\dot c}\tilde\lambda_{l\,\dot d})\,.
\end{align}
Note that in the above expressions the $4\times 4$ matrix appearing in the determinants is defined through its four column vectors $\{\lambda_i^a,\lambda_j^b,\lambda_k^c,\lambda_l^d\}$, and similarly for the second expression.
The four brackets are related to the spinor products by
\begin{equation}
\langle I_1 I_2 I_3 I_4 \rangle[J_1 J_2 J_3 J_4]=\det(\langle I_i|J_j])\,,
\end{equation}
where $I_k=(i_k)^{a_k}$, $J_k=(j_k)_{\dot a_k}$ are multi indices labeling the spinors.
Finally, it is convenient to define the following Lorentz invariant objects
\begin{align}
\langle i^a|k_1 k_2 \cdots k_{2m+1}|j^b\rangle&=\lambda_i^{A_1\,a}(k_1)_{A_1 A_2}(k_2)^{A_2 A_3}\dots(k_{2m+1})_{A_{2m+1} A_{2m+2}}\lambda_j^{A_{2m+2}\,b}\,,\\
\langle i^a|k_1 k_2 \cdots k_{2m}|j_{\dot b}]&=\lambda_i^{A_1\,a}(k_1)_{A_1 A_2}(k_2)^{A_2 A_3}\dots(k_{2m})^{A_{2m} A_{2m+1}}\tilde\lambda_{j\,A_{2m+1}\,\dot b}\,,\\
[ i_{\dot a}|k_1 k_2 \cdots k_{2m+1}|j_{\dot b}]&=\tilde\lambda_{i\,A_1\,\dot a}(k_1)^{A_1 A_2}(k_2)_{A_2 A_3}\dots(k_{2m+1})^{A_{2m+1} A_{2m+2}}\tilde\lambda_{j\,A_{2m+2}\,\dot b}\,.
\end{align}
Similar to the four dimensional case, the polarization vectors of the gluons can be expressed in terms of spinors by introducing some light-like reference momentum $q$ with $q\cdot p\neq 0$, where $p$ denotes the gluon momentum. The four polarization states are labeled by $SO(4)\simeq SU(2)\times SU(2)$ little group indices and can be defined as
\begin{equation}
\varepsilon_{a \dot a}^\mu=\frac{1}{\sqrt{2}}\langle p_a|\Sigma^\mu|q_b\rangle(\langle q_b|p^{\dot a}])^{-1}=\frac{1}{\sqrt{2}}[p_{\dot a}|\widetilde\Sigma^\mu|q_{\dot b}](\langle p^a|q_{\dot b}])^{-1}\,.
\end{equation}
It is straightforward to verify the properties
\begin{align}
\varepsilon_{a \dot a}\cdot p&=0\,,&\varepsilon_{a \dot a}\cdot q&=0\,,&
\varepsilon_{a \dot a}\cdot \varepsilon_{b \dot b} &=-\epsilon_{a b}\epsilon_{\dot a \dot b}\,,
\end{align}
as well as the completeness relation
\begin{equation}
\varepsilon_{a \dot a}^\mu \varepsilon^{\nu\,a \dot a}=-\eta^{\mu\nu}+\frac{p^\mu q^\nu +p^\nu q^\mu}{p\cdot q}\,.
\end{equation}
\section{\texorpdfstring{Four-dimensional $\mathcal{N}=4$}{N=4} SYM theory}\label{section:superamps4d}
\subsection{On-shell superspaces and superamplitudes}
Dealing with scattering amplitudes of supersymmetric gauge theories is most conveniently done using appropriate on-shell superspaces. Most common for treating ${\cal N}=4$ super Yang-Mills theory are \cite{Nair:1988bq,Witten:2003nn,Georgiou:2004by}
\begin{align}
\text{chiral superspace:}&\quad\{\lambda_i,\tilde\lambda_i,\eta_i^A\}\,, &&&\text{anti-chiral superspace:}&\quad\{\lambda_i,\tilde\lambda_i,\tilde\eta_{i\,A}\}\,.
\end{align}
The Grassmann variables $\eta_i^A$, $\tilde{\eta}_{i A}$ transform in the fundamental and anti-fundamental representations of $SU(4)$, respectively, and can be assigned the helicities
\begin{align}\label{eq:helicities}
h_i \eta_i^A& = \tfrac{1}{2} \eta_i^A \,,&&& h_i \tilde{\eta}_{i A} &= - \tfrac{1}{2} \tilde{\eta}_{i A}\,,
\end{align}
with $h_i$ denoting the helicity operator acting on leg $i$.
With their help it is possible to encode the sixteen on-shell states
\begin{align}
\mbox{gluons:}&\; G_{\pm}& \mbox{scalars:}&\; \phi_{A B} = \tfrac{1}{2} \epsilon_{ABCD} \phi^{CD}& \mbox{gluinos:}& \; \psi_{A}& \mbox{anti-gluinos:} &\; \overline{\psi}^{A}
\end{align}
into a chiral or an anti-chiral superfield $\varPhi\left(\eta\right)$, $\overline{\varPhi}\left(\tilde{\eta}\right)$, defined by
\begin{align}\label{eq:superfield_N=4}
\varPhi\left(\eta\right) &= G_{+} + \eta^A \psi_{A} + \frac{1}{2!} \eta^A \eta^B \phi_{A B} + \frac{1}{3!} \eta^A \eta^B \eta^C \epsilon_{ABCD} \overline{\psi}^{D} + \frac{1}{4!} \eta^A \eta^B \eta^C \eta^D \epsilon_{ABCD} G_{-}\,,\\
\overline{\varPhi}\left(\tilde{\eta}\right)& = G_{-} + \tilde{\eta}_A \overline{\psi}^{A} - \frac{1}{2!} \tilde{\eta}_A \tilde{\eta}_B \phi^{A B} + \frac{1}{3!} \tilde{\eta}_A \tilde{\eta}_B \tilde{\eta}_C \epsilon^{ABCD} \psi_D + \frac{1}{4!} \tilde{\eta}_A \tilde{\eta}_B \tilde{\eta}_C \tilde{\eta}_D \epsilon^{ABCD} G_{+}\,.
\end{align}
As a consequence of \cref{eq:helicities} the superfields carry the helicities
\begin{align}
h_i \varPhi_i\left(\eta\right) &= \varPhi_i\left(\eta\right)\,,&&& h_i \overline{\varPhi}_i\left(\tilde{\eta}\right)& = - \overline{\varPhi}_i\left(\tilde{\eta}\right)\,.
\end{align}
The chiral and anti-chiral superfields are related by a Grassmann Fourier transformation
\begin{align}\label{eq:fourier}
\overline{\varPhi}\left(\tilde{\eta}\right) &= \int d^4\eta \,e^{\eta^A \tilde{\eta}_A} \,\varPhi\left(\eta\right)\,, &&& \varPhi\left(\eta\right) &= \int d^4 \tilde{\eta}\, e^{-\eta^A \tilde{\eta}_A} \,\overline{\varPhi}\left(\tilde{\eta}\right)\,.
\end{align}
Chiral and anti-chiral color ordered superamplitudes $\mathcal{A}_n$ can be defined as functions of the respective superfields
\begin{align}
\mathcal{A}_n&=\mathcal{A}_n (\Phi_1, \Phi_2,\dots,\Phi_n)\,,&&&\overline{\mathcal{A}}_n&=\overline{\mathcal{A}}_n (\overline{\Phi}_1, \overline{\Phi}_2,\dots,\overline{\Phi}_n)\,.
\end{align}
Due to \cref{eq:fourier} both superamplitudes are related by a Grassmann Fourier transformation
\begin{equation}\label{eq:voll_ft}
\mathcal{A}_n (\Phi_1, \Phi_2,\dots,\Phi_n) = \prod_i \int d_i^4 \tilde{\eta}\; e^{-\sum_j \eta_j^A \tilde{\eta}_{jA} } \;\overline{\mathcal{A}}_n (\overline{\Phi}_1, \overline{\Phi}_2,\dots,\overline{\Phi}_n)
\end{equation}
The superamplitudes are inhomogeneous polynomials in the Grassmann odd variables $\eta_i^A$, $\tilde{\eta}_{i\,A}$, whose coefficients are given by the color ordered component amplitudes. A particular component amplitude can be extracted by projecting upon the relevant term
in the $\eta_{i}$ expansion of the super-amplitude via
\begin{align}
G^{+}_{i} &\to \eta^{A}_{i}=0\, ,&
G^{-}_{i} &\to \int d^{4}\eta_{i}\,,&\phi_{i\,AB}&\to \int d\eta^B_{i}d\eta^A_{i}\,,\\
\psi_{i,A} &\to \int d\eta^{A}_i\, , &\bar{\psi}^{A}_{i}
&\to -\int d^{4}\eta_{i} \, \eta_{i}^{A}\, ,&
\end{align}
and similarly in anti-chiral superspace. By construction the chiral and anti-chiral superamplitudes have a manifest $SU(4)_R$ symmetry. The only $SU(4)_R$ invariants are contractions with the epsilon tensor
\begin{align}\label{eq:RsymmetryInvariants}
\eta_{i}^A\eta_{j}^B\eta_{k}^C\eta_{l}^D\epsilon_{ABCD}\,,&&\text{or}&&\tilde\eta_{i\,A}\tilde\eta_{j\,B}\tilde\eta_{k\,C}\tilde\eta_{l\,D}\epsilon^{ABCD}\,.
\end{align}
Consequently, the powers of the Grassmann variables appearing in the superamplitudes have to be multiples of four. As a consequence of supersymmetry the superamplitudes are proportional to the supermomentum conserving delta function
\begin{align}
\delta^{(8)}(q^{\alpha A}):=\prod_{\alpha=1}^2\prod_{A=1}^4q^{\alpha\,A}&&\text{or}&&\delta^{(8)}( \tilde{q}^{\dot{\alpha}}_{A}):=\prod_{\dot\alpha=1}^2\prod_{A=1}^4\tilde{q}^{\dot\alpha}_A\,,
\end{align}
with the chiral $q^{\alpha A}=\sum_i\lambda_{i}^{\a}\eta_i^A$ or anti-chiral conserved supermomentum $\tilde{q}^{\dot{\alpha}}_{A}=\sum_i\tilde{\lambda}_i^{\dot{\alpha}}\tilde{\eta}_{i\,A}$.
Since the Grassmann variables carry helicity, \cref{eq:helicities}, their powers keep track of the amount of helicity violation present in the component amplitudes. Hence, decomposing the superamplitudes into homogeneous polynomials is equivalent to categorizing the component amplitudes according to their degree of helicity violation
\begin{align}\label{eq:MHV_decomposition}
\mathcal{A}_n (\Phi_1, \Phi_2,\dots,\Phi_n) &= \mathcal{A}^{\text{MHV}}_n + \mathcal{A}^{\text{NMHV}}_n + \mathcal{A}^{\text{N}^2\text{MHV}}_n + \dots + \mathcal{A}^{N^{(n-4)}\text{MHV}}_n\,,\\
\overline{\mathcal{A}}_n (\overline{\Phi}_1, \overline{\Phi}_2,\dots,\overline{\Phi}_n) &= \overline{\mathcal{A}}^{\overline{\text{MHV}}}_n + \overline{\mathcal{A}}^{N\overline{\text{MHV}}}_n + \overline{\mathcal{A}}^{N^2\overline{\text{MHV}}}_n + \dots + \overline{\mathcal{A}}^{N^{(n-4)}\overline{\text{MHV}}}_n\,,\end{align}
with
\begin{align}
\mathcal{A}^{\text{N}^p\text{MHV}}_n&=\mathcal{O}(\eta^{4(p+2)})\,,&&&\overline{\mathcal{A}}^{N^p\overline{\text{MHV}}}_n&=\mathcal{O}(\tilde{\eta}^{4(p+2)})\,.
\end{align}
The highest amount of helicity violation is present in the maximally helicity violating (MHV) superamplitude or in the $\overline{\text{MHV}}$ superamplitude in anti-chiral superspace. In general, $\mathcal{A}^{\text{N}^p\text{MHV}}_n$ and $\overline{\mathcal{A}}^{N^p\overline{\text{MHV}}}_n$ are the (Next to)${}^p$ MHV and the (Next to)${}^p$ $\overline{\text{MHV}}$ superamplitudes. The complexity of the amplitudes increases with the degree $p$ of helicity violation, the simplest being the
MHV superamplitude in chiral superspace \cite{Nair:1988bq}
\begin{equation}\label{MHV_super}
\mathcal{A}^{\text{MHV}}_n =i \frac{\delta^{(4)}(\sum_i p^{\alpha \dot{\alpha}}_i) \delta^{(8)}(\sum_i q^{\alpha A}_i)}{\left<1 2\right> \left<2 3\right> \dots \left<n 1\right>} \,,
\end{equation}
and the $\overline{\text{MHV}}$ superamplitude in anti-chiral superspace
\begin{equation}\label{anti_MHV_super}
\mathcal{A}^{\overline{\text{MHV}}}_n = i(-1)^n\frac{\delta^{(4)}(\sum_i p^{\alpha \dot{\alpha}}_i) \delta^{(8)}(\sum_i \tilde{q}^{\dot{\alpha}}_{iA})}{\left[1 2\right] \left[2 3\right] \dots \left[n 1\right]}\,,
\end{equation}
which are supersymmetric versions of the well known Parke-Taylor formula \cite{Parke:1986gb}. The increasingly complicated formulae for the amplitudes $\mathcal{A}^{\text{N}^p\text{MHV}}_n$ have been obtained in reference \cite{Drummond:2008cr}. Plugging the MHV decomposition, \cref{eq:MHV_decomposition}, into \cref{eq:voll_ft} we obtain the relation
\begin{equation}\label{eq:MHV_MHVbar}
\mathcal{A}_n ^{\text{N}^p\text{MHV}} = \prod_i \int d_i^4 \tilde{\eta} \;e^{-\sum_j \eta_j^A \tilde{\eta}_{jA} } \overline{\mathcal{A}}_n^{N^{n-4-p}\overline{\text{MHV}}}\,,
\end{equation}
simply stating that $\mathcal{A}_n ^{\text{N}^p\text{MHV}}$ and $\overline{\mathcal{A}}_n^{N^{n-4-p}\overline{\text{MHV}}}$ contain the same component amplitudes. Depending on whether $p<n-4-p$ or $p>n-4-p$ it is therefore more convenient to use the chiral or the anti-chiral description of the amplitudes, e.\,g.~the $\text{N}^{n-4}\text{MHV}=\overline{\text{MHV}}$ amplitudes are complicated in chiral superspace whereas they are trivial in anti-chiral superspace. Hence the most complicated amplitudes appearing in an $n$ point chiral or anti-chiral superamplitude are the helicity amplitudes of degree $p=\lfloor\tfrac{n}{2}\rfloor-2$, called minimal helicity violating (minHV) amplitudes.
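As a standard illustration of the projection rules above, the two-negative-helicity gluon amplitude is obtained from the MHV superamplitude \eqref{MHV_super} by setting $\eta_3=\dots=\eta_n=0$ and integrating over $\eta_1$ and $\eta_2$; this picks out the coefficient $\left<1 2\right>^4$ of the fermionic delta function, so that (up to the sign conventions of the Grassmann measure)
\begin{equation}
A_n\big(1^{-}2^{-}3^{+}\dots n^{+}\big)=\int d^{4}\eta_{1}\,d^{4}\eta_{2}\;\mathcal{A}^{\text{MHV}}_n\Big|_{\eta_{3}=\dots=\eta_{n}=0}=i\,\frac{\delta^{(4)}(\sum_i p_i)\left<1 2\right>^{4}}{\left<1 2\right> \left<2 3\right> \dots \left<n 1\right>}\,,
\end{equation}
which is the Parke-Taylor formula \cite{Parke:1986gb}.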
\subsection{Non-chiral superspace}
\label{ncsuperspace4d}
Besides the well studied chiral and anti-chiral superspaces there is also the non-chiral superspace
\begin{equation}
\{\lambda_i,\tilde\lambda_i,\eta_i^m,\tilde\eta_{i\,m'}\}\,,
\end{equation}
which is more natural from the perspective of the massive amplitudes and the six dimensional
parent theory that we are interested in.
Here the $SU(4)$ indices of the fields get split into two $SU(2)$ indices $m$ and $m'$ according to
\begin{align}
\psi_A&=\{\psi_m,\psi_{m'}\}\,,&&&\bar\psi^A&=\{\bar\psi^m,\bar\psi^{m'}\}\,,&&&\phi_{AB}&=\{\phi_{mn},\phi_{m'n},\phi_{mn'},\phi_{m'n'}\}\,.
\end{align}
Note that due to their antisymmetry the fields $\phi_{mn}=-\phi_{nm}$ and $\phi_{m'n'}=-\phi_{n'm'}$ each represent only one scalar field, whereas the $\phi_{mn'}=-\phi_{n'm}$
account for the four remaining scalars. If raising and lowering of the $SU(2)$ indices are defined by left multiplication with $\epsilon=i\sigma_2$ and $\epsilon^{-1}$, the non-chiral superfield reads
\begin{multline}\label{eq:nonChiralSuperfield}
\varUpsilon= \tfrac{1}{2} \phi^{m'}_{\phantom{m}m'} + \eta^{m} \overline{\psi}_{m} + \tilde{\eta}_{m'}\psi^{m'} +\eta^{m} \tilde{\eta}_{m'} \phi^{\phantom{m}m'}_{m} + \eta^2 G_{-} + \tilde{\eta}^2 G_{+} \\+ \eta^2 \tilde{\eta}_{m'} \overline{\psi}^{m'} + \tilde{\eta}^2 \eta^{m} \psi_{m} + \tfrac{1}{2} \tilde{\eta}^2 \eta^2 \phi^{m}_{\;\;m}\,,
\end{multline}
with the abbreviations $\eta^2=\tfrac{1}{2}\eta^m\eta_m$, $\tilde\eta^2=\tfrac{1}{2}\tilde\eta_{m'}\tilde\eta^{m'}$. The non-chiral superfield is a scalar and has zero helicity. Obviously, the non-chiral superamplitudes will not have an $SU(4)_R$ symmetry, but rather will be invariant under $SU(2,2)_R$ transformations.
With the convention $m\in\{1,4\}$, $m'\in\{2,3\}$ the non-chiral superfield is related to the chiral and anti-chiral superfield by the half Grassmann Fourier transformations
\begin{align}\label{eq:relationOfsuperfields}
\varUpsilon = \int d\eta^{3} d\eta^{2} \;e^{ \eta^{2}\tilde{\eta}_{2} + \eta^{3}\tilde{\eta}_{3}} \varPhi=\int d\tilde\eta_{1} d\tilde\eta_{4} \;e^{ -\eta^{1}\tilde{\eta}_{1} - \eta^{4}\tilde{\eta}_{4}} \overline{\varPhi}\,.
\end{align}
As a consequence of supersymmetry, the superamplitudes are proportional to the supermomentum conserving delta functions
\begin{align}
\delta^{(4)}(q^{\alpha m}):=\prod_{\alpha=1}^2\prod_{m=1}^2q^{\alpha\,m}&&\text{and}&&\delta^{(4)}( \tilde{q}^{\dot{\alpha}}_{m'}):=\prod_{\dot\alpha=1}^2\prod_{m'=1}^2\tilde{q}^{\dot\alpha}_{m'}\,,
\end{align}
with the conserved supermomenta $q_{\alpha}^m=\sum_i\eta^m_i\lambda_{i\,\alpha}$ and $\tilde q_{\dot\alpha}^{m'}=\sum_i\tilde\eta^{m'}_i\tilde\lambda_{i\,\dot\alpha}$. Since we additionally have $h_i \varUpsilon_i=0$, the non-chiral superamplitudes have the general form
\begin{equation}
\label{325}
\mathcal{A}_n(\varUpsilon_1,\dots,\varUpsilon_n)=\delta^{4}(\sum_i q_{i\,\alpha}^m)\delta^{4}(\sum_i \tilde q_{i\,\dot\alpha}^{m'})f_n(\{p_i,q_i,\tilde q_i\})\,.
\end{equation}
It should be stressed that the dependence of $f_{n}$ only on the momenta $\{p_i,q_i,\tilde q_i\}$ is distinct from the situation for the chiral or anti-chiral superamplitudes, where we have a dependence on the super-spinors $\{\lambda_i,\tilde\lambda_i,\eta_i^A\}$ or $\{\lambda_i,\tilde\lambda_i,\tilde\eta_{i\,A}\}$.
Analyzing the half Fourier transform \eqref{eq:relationOfsuperfields} relating the superfields, we see that the non-chiral superamplitudes are homogeneous polynomials of degree $2n$ in the variables $q_i$ and $\tilde{q}_i$, and that the MHV decomposition \eqref{eq:MHV_decomposition} of the chiral superamplitudes translates into an MHV decomposition of the non-chiral superamplitudes
\begin{equation}
f_n=f_n^{\text{MHV}}+f_n^{\text{NMHV}}+\dots+f_n^{\overline{\text{MHV}}}\,,
\end{equation}
where the N${}^p$MHV sector corresponds to a fixed degree in the variables $q_i$ and $\tilde{q}_i$
\begin{equation}
f_n^{\text{N${}^p$MHV}}=\mathcal{O}(q^{2p}\tilde{q}^{2n-8-2p})\,.
\end{equation}
This reflects the chiral nature of ${\cal N}=4$ SYM theory.
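The total degree $2n$ can be made explicit using only the $SU(4)_R$ structure: every invariant in \eqref{eq:RsymmetryInvariants} carries exactly two indices from $\{1,4\}$ and two from $\{2,3\}$, so a chiral N${}^p$MHV amplitude of Grassmann degree $4(p+2)$ has degree $2(p+2)$ in the $\eta^{m}$ as well as in the $\eta^{m'}$. Since the half Fourier transform \eqref{eq:relationOfsuperfields} maps a monomial of degree $k$ in the $2n$ variables $\eta_i^{m'}$ to one of degree $2n-k$ in the $\tilde\eta_i^{m'}$, the total non-chiral Grassmann degree is
\begin{equation}
2(p+2)+\big(2n-2(p+2)\big)=2n\,,
\end{equation}
in agreement with the degrees of $\delta^{4}(q)\,\delta^{4}(\tilde q)\,f_n^{\text{N}^p\text{MHV}}$.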
Each of the three superspaces presented above has an associated dual superspace. In general, dual superspaces naturally arise when studying dual conformal properties of color ordered scattering amplitudes. Part of the spinor variables get replaced by
the region momenta $x_i$, which are related to the ordinary momenta of the external legs by
\begin{equation}\label{eq:regions}
x_i-x_{i+1}=p_i
\end{equation}
and a new set of dual fermionic variables $\theta_i$ or $\tilde\theta_i$ is introduced, related to the fermionic momenta by
\begin{align}\label{eq:theta}
\theta_i-\theta_{i+1}&=q_i\,,&\tilde\theta_i-\tilde\theta_{i+1}&=\tilde{q}_i\,.
\end{align}
Obviously, the amplitudes will depend on differences of dual variables $x_{ij}=x_i-x_j$, $\theta_{ij}=\theta_i-\theta_{j}$ and $\tilde\theta_{ij}=\tilde\theta_i-\tilde\theta_{j}$, as the dual variables are only defined up to an overall shift. With the identifications $x_1=x_{n+1}$, $\theta_1=\theta_{n+1}$, and $\tilde\theta_1=\tilde\theta_{n+1}$, the dual variables trivialize the momentum and supermomentum conservation.
The dual chiral superspace is given by
\begin{equation}\label{eq:dual_chiral}
\{\lambda_i^\alpha,x_i^{\dot{\alpha}\alpha},\theta_i^{A\,\alpha}\}
\end{equation}
with the constraints
\begin{align}\label{eq:constraints_dual_chiral}
x_{i\,i+1}^{\dot{\alpha}\alpha}\lambda_{i\,\alpha}&=0\,,&\theta_{i\,i+1}^{A\,\alpha}\lambda_{i\,\alpha}&=0\,.
\end{align}
Analogously, the dual anti-chiral superspace is given by
\begin{equation}\label{eq:dual_antichiral}
\{\tilde\lambda_i^{\dot\alpha},x_i^{\dot{\alpha}\alpha},\tilde\theta_{i\,A}^{\dot\alpha}\}
\end{equation}
with the constraints
\begin{align}
(x_{i\,i+1})_{\alpha\dot{\alpha}}\tilde\lambda_{i}^{\dot\alpha}&=0\,,&(\tilde\theta_{i\,i+1})_A^{\dot\alpha}\tilde\lambda_{i\,\dot\alpha}&=0\,.
\end{align}
In the case of the dual non-chiral superspace it is possible to completely eliminate all spinor variables and express the superamplitudes solely with the dual variables
\begin{equation}\label{eq:dual_nonchiral}
\{x_i^{\dot{\alpha}\alpha},\theta_i^{m\,\alpha},\tilde\theta_i^{m'\,\dot\alpha},y_i^{nn'}\}
\end{equation}
which are subject to the constraints
\begin{align}\label{eq:constraints_dualnonchiral}
x_{i\,i+1}^{\dot{\alpha}\alpha}\theta_{i\,i+1\,\alpha}^m&=0\,,&(x_{i\,i+1})_{\alpha\dot{\alpha}}\tilde\theta_{i\,i+1}^{m'\,\dot\alpha}&=0\,,&x_{i\,i+1}^{\dot{\alpha}\alpha}y_{i\,i+1}^{mm'}&=\theta_{i\,i+1}^{m\,\alpha}\tilde\theta_{i\,i+1}^{m'\,\dot\alpha}\,.
\end{align}
Note that $x_{i\,i+1}^2=0$ is a consequence of \cref{eq:constraints_dualnonchiral}, as we show below. In fact, the Grassmann even dual variables $y_i^{mm'}$ are not independent as they can be expressed by $\{x_i^{\dot{\alpha}\alpha},\theta_i^{m\,\alpha},\tilde\theta_i^{m'\,\dot\alpha}\}$. Hence, the amplitudes will not depend on them. However, the variables $y_i^{mm'}$ are necessary for the construction of the dual non-chiral superconformal symmetry algebra presented in \cref{section:symmetries_N=4,sec:Algebra_Non_Chiral}.
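Indeed, the vanishing of $x_{i\,i+1}^2$ follows by contracting the first constraint in \eqref{eq:constraints_dualnonchiral} with $(x_{i\,i+1})_{\beta\dot{\alpha}}$ and using $x_{\beta\dot{\alpha}}\,x^{\dot{\alpha}\alpha}=x^2\,\delta_{\beta}^{\alpha}$:
\begin{equation}
0=(x_{i\,i+1})_{\beta\dot{\alpha}}\,x_{i\,i+1}^{\dot{\alpha}\alpha}\,\theta_{i\,i+1\,\alpha}^{m}=x_{i\,i+1}^2\;\theta_{i\,i+1\,\beta}^{m}\,,
\end{equation}
which implies $x_{i\,i+1}^2=0$ for generic $\theta^{m}_{i\,i+1}$.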
A further possibility is to study superamplitudes using the full superspaces obtained by adding the dual variables to the chiral, anti-chiral and non-chiral superspaces.
The full chiral superspace is given by
\begin{equation}\label{eq:full_chiral}
\{\lambda_i^\alpha,\tilde\lambda_i^{\dot\alpha},x_i^{\dot{\alpha}\alpha},\eta_i^A,\theta_i^{A\,\alpha}\}
\end{equation}
with the constraints
\begin{align}\label{eq:constraints_full_chiral}
x_{i\,i+1}^{\dot{\alpha}\alpha}&=\lambda_{i}^{\alpha}\tilde\lambda_i^{\dot\alpha}\,,&\theta_{i\,i+1}^{A\,\alpha}&=\lambda_i^{\alpha}\eta_i^A\,.
\end{align}
Analogously, the full anti-chiral superspace has the variables
\begin{equation}
\{\lambda_i^\alpha,\tilde\lambda_i^{\dot\alpha},x_i^{\dot{\alpha}\alpha},\tilde\eta_{i\,A},\tilde\theta_{i\,A}^{\dot\alpha}\}
\end{equation}
subject to the constraints
\begin{align}
x_{i\,i+1}^{\dot{\alpha}\alpha}&=\lambda_{i}^\alpha\tilde\lambda_i^{\dot\alpha}\,,&(\tilde\theta_{i\,i+1})_A^{\dot\alpha}&=\tilde\lambda_i^{\dot\alpha}\tilde\eta_{i\,A}\,.
\end{align}
Finally, the full non-chiral superspace is given by
\begin{equation}
\{\lambda_i^\alpha,\tilde\lambda_i^{\dot\alpha},x_i^{\dot{\alpha}\alpha},\eta_{i}^m,\tilde\eta_{i}^{m'},\theta_i^{m\,\alpha},\tilde\theta_i^{m'\,\dot\alpha},y_i^{nn'}\}
\end{equation}
with the constraints
\begin{align}\label{eq:constraints_full_nonchiral}
x_{i\,i+1}^{\dot{\alpha}\alpha}&=\lambda_{i}^\alpha\tilde\lambda_i^{\dot\alpha}\,,&\theta_{i\,i+1}^{m\,\alpha}&=\lambda_i^\alpha\eta_{i}^m\,,&\tilde\theta_{i\,i+1}^{m'\,\dot\alpha}&=\tilde\lambda_{i}^{\dot\alpha}\tilde\eta_{i}^{m'}\,,&y_{i\,i+1}^{m m'}&=\eta_i^m\tilde\eta_i^{m'}\,.
\end{align}
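One immediately checks that these identifications are compatible with the purely dual constraints \eqref{eq:constraints_dualnonchiral}; for the third one, e.\,g.,
\begin{equation}
x_{i\,i+1}^{\dot{\alpha}\alpha}\,y_{i\,i+1}^{m m'}=\lambda_{i}^{\alpha}\tilde\lambda_{i}^{\dot\alpha}\,\eta_{i}^{m}\tilde\eta_{i}^{m'}=\theta_{i\,i+1}^{m\,\alpha}\,\tilde\theta_{i\,i+1}^{m'\,\dot\alpha}\,.
\end{equation}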
\subsection{Symmetries of non-chiral superamplitudes}\label{section:symmetries_N=4}
We are going to give a complete derivation of the symmetry generators of the non-chiral superamplitudes at tree level, which has not yet been done in full detail in the literature. Part of the results presented here can be found in reference \cite{Huang:2011um}. For recent
textbook treatments of the superconformal and dual superconformal symmetry of the chiral superamplitudes see \cite{Henn:2014yza,Elvang:2013cua}. A detailed presentation of the non-chiral superconformal algebra and its relevant representations is given in \cref{sec:Algebra_Non_Chiral}.
\subsubsection{Superconformal symmetry of non-chiral superamplitudes}
Due to the half Fourier transformation connecting the non-chiral and the chiral superspace, the $SU(4)_R$ symmetry is turned into an $SU(2,2)_R$ symmetry. The conformal symmetry does not involve Grassmann variables, hence the tree-level non-chiral superamplitudes are invariant under the
conformal algebra $su(2,2)$, with generators
\begin{equation}
\{p^{\dot{\alpha} \alpha},m_{\alpha \beta},\overline{m}_{\dot{\alpha} \dot{\beta}},d,k_{\alpha\dot{\alpha}}\}\,.
\end{equation}
As a consequence of the supersymmetry of the chiral and anti-chiral superamplitudes and \cref{eq:relationOfsuperfields} relating the superfields, the non-chiral superamplitudes are invariant under the $(2,2)$-supersymmetry generators
\begin{align}
q^{\alpha n} &= \sum_{i} \lambda^{\alpha}_{i} \eta^{n}_{i}\,, &\tilde{q}^{\dot{\alpha} n'} &= \sum_{i} \tilde{\lambda}^{\dot{\alpha}}_{i} \tilde{\eta}_{i}^{n'}
\end{align}
and their conjugates
\begin{align}
\overline{q}^{\dot{\alpha}}_{n} &= \sum_{i} \tilde{\lambda}^{\dot{\alpha}}_{i} \partial_{i n} \,, &\overline{\tilde{q}}^{\alpha}_{n'} &= \sum_{i} \lambda^{\alpha}_{i} \partial_{i n'} \,,&\qquad \mbox{with} \qquad \partial_{i n} &= \frac{\partial}{\partial \eta^{n}_{i}}\,,&\partial_{i n'} &= \frac{\partial}{\partial \tilde{\eta}^{n'}_{i}}\,.
\end{align}
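These generators close onto the momentum in the expected way; for instance,
\begin{equation}
\{q^{\alpha n},\overline{q}^{\dot{\beta}}_{m}\}=\sum_{i,j}\{\lambda^{\alpha}_{i}\eta^{n}_{i}\,,\tilde{\lambda}^{\dot{\beta}}_{j}\partial_{j m}\}=\delta^{n}_{m}\sum_{i}\lambda^{\alpha}_{i}\tilde{\lambda}^{\dot{\beta}}_{i}=\delta^{n}_{m}\,p^{\dot{\beta}\alpha}\,,
\end{equation}
which is the on-shell realization of the abstract relation $\{\mathds{Q}^{n}_{\alpha},\overline{\mathds{Q}}_{\dot{\beta}\,m}\}=\delta^{n}_{m}\,\mathds{P}_{\alpha\dot{\beta}}$.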
All other symmetry generators now follow from the non-chiral superconformal symmetry algebra listed in \cref{sec:Algebra_Non_Chiral}.
Commuting the supersymmetry generators $q^{\alpha n}$, $\tilde{q}^{\dot{\alpha} n'}$, $\overline{q}^{\dot{\alpha}}_{n}$, $\overline{\tilde{q}}$ with the conformal boost generator $k_{\alpha \dot{\alpha}}$ yields the superconformal generators
\begin{equation}
\begin{aligned}
\begin{aligned}
s_{\alpha n} &= \sum_{i} \partial_{i \alpha} \partial_{i n}\,,& \qquad \overline{s}^{n}_{\dot{\alpha}} &= \sum_{i} \eta^{n}_{i} \partial_{i \dot{\alpha}} \,,\\
\tilde{s}_{\dot{\alpha} n'} &=\sum_{i} \partial_{i n'} \partial_{i \dot{\alpha}} \,,& \qquad \overline{\tilde{s}}^{n'}_{\alpha} &=\sum_{i} \tilde{\eta}_{i}^{n'} \partial_{i \alpha} \,,
\end{aligned}&&\text{with}& &\partial_{i \a} &= \frac{\partial}{\partial \lambda^{\a}_i}\,,&\partial_{i \dot{\alpha}} &= \frac{\partial}{\partial \tilde\lambda^{\dot{\alpha}}_i}\,.
\end{aligned}
\end{equation}
The central charge $c$ and the hypercharge $b$ are given by:
\begin{equation}
\begin{aligned}
c &= \tfrac{1}{2} \sum_{i}\left(-\lambda^{\alpha}_{i} \partial_{i \alpha} + \tilde{\lambda}^{\dot{\alpha}}_{i} \partial_{i \dot{\alpha}} + \eta^{n}_{i} \partial_{i n} - \tilde{\eta}^{n'}_{i} \partial_{i n'}\right)\,,& \qquad b &= \tfrac{1}{2} \sum_{i}\left(\eta^{n}_{i} \partial_{i n} - \tilde{\eta}^{n'}_{i} \partial_{i n'}\right) \,.
\end{aligned}
\end{equation}
As already stated at the beginning, the non-chiral superamplitudes have an $su(2,2)_R$ symmetry. Up to the constant in the R-dilatation $\mathpzc{d}$ and some sign ambiguities, its generators $\{\mathpzc{p}^{n n'}$, $\mathpzc{m}_{\,n m}$, $\widetilde{\mathpzc{m}}_{\,n' m'}$, $\mathpzc{d}$, $\mathpzc{k}_{\,\,n n'}\}$ are related to the conformal generators $\{p^{\dot{\alpha} \alpha}$, $m_{\alpha \beta}$, $\overline{m}_{\dot{\alpha} \dot{\beta}}$, $\,d$, $k_{\alpha\dot{\alpha}}\}$ by the replacements $\lambda\leftrightarrow\eta$ and $\tilde{\lambda}\leftrightarrow\tilde{\eta}$
\begin{equation}
\begin{gathered}
\begin{aligned}
\mathpzc{p}^{n n'} &= \sum_{i} \eta_{i}^{n} \tilde{\eta}_{i}^{n'}\,,& \qquad \mathpzc{k}_{\,\,n n'} &= \sum_{i} \partial_{i n} \partial_{i n'}\,,\\
\mathpzc{m}_{\,n m}& = \sum_{i} \eta_{i(n} \partial_{i m)} \,,&\qquad \widetilde{\mathpzc{m}}_{\,n' m'} &= \sum_{i} \tilde{\eta}_{i (n'} \partial_{i m')}\,,\\
\end{aligned} \\
\mathpzc{d} = \tfrac{1}{2} \sum_{i} \left(\eta^{n}_{i} \partial_{i n} + \tilde{\eta}^{n'}_{i} \partial_{i n'} - 2\right)\,.
\end{gathered}
\end{equation}
Whereas the generators $\mathpzc{m}_{\,n m}$, $\widetilde{\mathpzc{m}}_{\,n' m'}$ and $\mathpzc{d}$ are obvious symmetries of the non-chiral superamplitudes, invariance under $\mathpzc{p}^{n n'}$ and $\mathpzc{k}_{\,\,n n'} $ is unexpected.
\subsubsection{Dual superconformal symmetry of non-chiral superamplitudes}\label{dual-shell-nonchiral}
By analogy to the chiral superamplitudes we expect the non-chiral superamplitudes to have a dual superconformal symmetry as well. The starting point is the dual non-chiral superspace $\{x_i^{\dot{\alpha}\alpha},\theta_i^{m\,\alpha},\tilde\theta_i^{m'\,\dot\alpha},y_i^{mm'}\}$, introduced in \cref{eq:dual_nonchiral}, and the invariance of the non-chiral superamplitudes under the dual super Poincar\'e symmetry
\begin{equation}
\{P_{\alpha\dot\alpha},M_{\alpha\beta},\overline{M}_{\dot\alpha\dot\beta},Q_{\alpha m},\overline{Q}_{\dot\alpha m},\widetilde{Q}_{\dot\alpha m'},\overline{\widetilde{Q}}_{\alpha m'}\}\,
\end{equation}
where
\begin{equation}
\begin{aligned}
M_{\alpha \beta} &= \sum_{i} \left( \theta^{n}_{i (\alpha} \partial_{i \beta) n} + x_{i (\alpha}^{\dot{\alpha}} \partial_{i \beta) \dot{\alpha}} \right)\,,\\
\overline{M}_{\dot{\alpha} \dot{\beta}} &= \sum_{i} \left( \tilde{\theta}^{n'}_{i (\dot{\alpha}} \partial_{i \dot{\beta}) n'} + x_{i (\dot{\alpha}}^{\alpha} \partial_{i \dot{\beta}) \alpha} \right)
\end{aligned}
\end{equation}
are just the ordinary Lorentz generators $m_{\alpha \beta}$, $\overline{m}_{\dot{\alpha} \dot{\beta}}$ acting in dual non-chiral superspace and we used the abbreviations $\partial_{i \alpha \dot{\alpha}} = \frac{\partial}{\partial x_{i}^{\dot{\alpha}\alpha }}=\tfrac{1}{2}\sigma^{\mu}_{\alpha\dot\alpha}\frac{\partial}{\partial x_i^\mu}$, $\partial_{i \alpha n} = \frac{\partial}{\partial \theta^{\alpha n}_{i}}$, $\partial_{i \dot{\alpha} n'} = \frac{\partial}{\partial \tilde{\theta}^{\dot{\alpha} n'}_{i}}$.
The dual momentum $P_{\alpha \dot{\alpha}}$ and the dual supermomenta $Q_{\alpha m}$, $\widetilde{Q}_{\dot\alpha m'}$ are the generators of translations with respect to the dual variables $x$ and $\theta$, $\tilde\theta$
\begin{equation}
\begin{aligned}
P_{\alpha \dot{\alpha}} &= \sum_{i} \partial_{i \alpha \dot{\alpha}}\,,&\qquad Q_{\alpha n} &= -\sum_{i} \partial_{i \alpha n}\,,& \qquad \widetilde{Q}_{\dot{\alpha} n'} &=- \sum_{i} \partial_{i \dot{\alpha} n'}\,.
\end{aligned}
\end{equation}
The trivial translation invariance in the dual $y$ variable leads to the dual R-symmetry generator
\begin{align}
\mathpzc{P}_{m m'} &= -\sum_{i} \partial_{i m m'}&&\text{with}&\partial_{i m m'}=\frac{\partial}{\partial y_i^{mm'}}\,.
\end{align}
The conjugate dual supermomenta $\overline{Q}^{n}_{\dot{\alpha}}$, $\overline{\widetilde{Q}}^{n'}_{\alpha}$ are given by the action of the superconformal generators $\overline{s}^{n}_{\dot{\alpha}}$, $\overline{\widetilde{s}}^{n'}_{\alpha}$ in dual non-chiral superspace. Hence, we have
\begin{equation}
\begin{aligned}
\overline{Q}^{n}_{\dot{\alpha}} &= \sum_{i} (\theta^{\alpha n}_{i} \partial_{i \alpha \dot{\alpha}} + y_{i}^{n n'} \partial_{i n' \dot{\alpha}})\,,& \qquad \overline{\widetilde{Q}}^{n'}_{\alpha}& = \sum_{i} (\tilde{\theta}^{\dot{\alpha} n'}_{i} \partial_{i \alpha \dot{\alpha}} - y_{i}^{n n'} \partial_{i n \alpha}) \,.
\end{aligned}
\end{equation}
Similar to the chiral case, the non-chiral dual superconformal symmetry can be obtained by adding the discrete transformation of dual conformal inversion $I$ to the super Poincar\'e group. The conformal generator $K_{\alpha \dot{\alpha}}$ and the superconformal generators $S_{\alpha m}$, $\widetilde{S}_{\dot\alpha m'}$, $\overline{S}_{\dot\alpha m}$, $\overline{\widetilde{S}}_{\alpha m'}$ are then given by
\begin{equation}\label{eq:superconformalGenerators_NC}
\begin{gathered}
K_{\alpha\dot{\beta}}=I P_{\beta\dot{\alpha}} I\\
\begin{aligned}
S_{\alpha m}&=I \overline{Q}_{\dot{\alpha} m} I\,,&\qquad\overline{S}_{\dot\alpha m}&=I Q_{\alpha m} I\,,\\
\widetilde{S}_{\dot\alpha m'}&=I \overline{\widetilde{Q}}_{\alpha m'} I\,,&\qquad\overline{\widetilde{S}}_{\alpha m'}&=I \widetilde{Q}_{\dot\alpha m'} I\,,
\end{aligned}
\end{gathered}
\end{equation}
and their commutators and anti-commutators immediately follow from the dual super Poincar\'e algebra and the fact that the inversion is an involution, i.\,e.~$I^2=\mathds{1}$. As we are going to show in \cref{section:BCFWnonChiral}, using the BCFW recursion, the tree-level non-chiral superamplitudes transform covariantly under inversions
\begin{equation}\label{eq:Inversion_Amp_NC}
I\left[\mathcal{A}_n\right]=x_1^2x_2^2\dots x_n^2 \,\mathcal{A}_n
\end{equation}
if the coordinates of full non-chiral superspace invert as
\begin{equation}\label{eq:inversion4dNC}
\begin{aligned}
I \left[ x_i^{\dot\alpha\beta} \right]&=-(x_i^{-1})^{\dot\beta\alpha}\,,&I \left[ y_i^{mm'} \right]& = y_i^{mm'} -\langle\theta_i^m|x_i^{-1}|\tilde\theta_i^{m'}]\,, \\
I [ \theta^{\alpha m}_{i} ] &= (x^{-1}_{i})^{\dot{\alpha}\beta} \theta_{i\,\beta}^{m}\,,&I [ \tilde{\theta}^{\dot{\alpha}m'}_{i} ] &= \tilde{\theta}_{i\,\dot{\beta}}^{m'}(x^{-1}_{i})^{\dot\beta\alpha} \,,\\
I \left[ \lambda^{\alpha}_{i} \right] &= (x^{-1}_{i})^{\dot{\alpha}\beta} \lambda_{i\,\beta} \,,& I [ \tilde{\lambda}^{\dot{\alpha}}_{i} ] &= \tilde{\lambda}_{i\,\dot{\beta}}(x^{-1}_{i + 1})^{\dot\beta\alpha} \,, \\
I [ \eta^{m}_{i} ] &=\frac{x_i^2}{x_{i+1}^2}\left(\eta_i^m-\langle\theta_i^m|x_i^{-1}|\tilde\lambda_i]\right)\,,&I [ \tilde\eta^{m'}_{i} ] &=\tilde\eta_i^{m'}-[\tilde\theta_i^{m'}|x_i^{-1}|\lambda_i\rangle\,.\end{aligned}
\end{equation}
The inversion rules of the Levi-Civita tensors
\begin{align}\label{eq:inversion4depsilon}
I[\epsilon_{\alpha\beta}]&=\epsilon_{\dot\alpha\dot\beta}\,,& I[\epsilon_{\dot\alpha\dot\beta}]&=\epsilon_{\alpha\beta}
\end{align}
can be deduced from $I^2[\lambda^{\alpha}_{i}]=\lambda_{i}^\alpha$ and $I^2 [ \tilde{\lambda}^{\dot{\alpha}}_{i} ]= \tilde{\lambda}^{\dot{\alpha}}_{i}$, since the inversion is an involution.
Note that the inversion defined in \cref{eq:inversion4dNC} is compatible with the constraints \cref{eq:constraints_full_nonchiral} in full non-chiral superspace.
The simplest purely bosonic dual conformal covariants are
\begin{align}
I[\,x_{ij}^2\,]&=\frac{x_{ij}^2}{x_{i}^2x_{j}^2}\,,&I[\,\ang{i}{i+1}\,]&=\frac{\ang{i}{i+1}}{x_i^2}\,,&I[\,[i\,i+1]\,]&=\frac{[i\,i+1]}{x_{i+2}^2}\,.
\end{align}
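The first of these relations follows in one line from the inversion of the dual points alone: writing $x_{ij}=x_i-x_j$ and using the matrix identity $x_j^{-1}-x_i^{-1}=x_j^{-1}\,x_{ij}\,x_i^{-1}$ together with the multiplicativity of the determinant, one finds
\begin{equation}
I[\,x_{ij}^2\,]=\bigl(I[x_i]-I[x_j]\bigr)^2=\bigl(x_j^{-1}\,x_{ij}\,x_i^{-1}\bigr)^2=\frac{x_{ij}^2}{x_i^2\,x_j^2}\,,
\end{equation}
while the latter two relations follow analogously using $\langle i|x_i=\langle i|x_{i+1}$ and $[\,i|x_i=[\,i|x_{i+1}$, which hold on account of $x_i-x_{i+1}=\lambda_i\tilde\lambda_i$ and $\langle ii\rangle=[\,ii\,]=0$.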
With the help of the inversion rules \eqref{eq:inversion4dNC} and the definition \eqref{eq:superconformalGenerators_NC}, the action of the dual conformal boost generator in dual non-chiral superspace can be calculated by applying the chain rule,
\begin{align}\label{eq:calculationOfK}
K_{\a\dot{\beta}}&=\sum_i\sum_j\bigg[I\biggl[\frac{\partial I[x_j^{\gamma\dot\delta}]}{\partial x_{i}^{\dot{\alpha}\b}}\biggr]\partial_{j\,\gamma\dot{\delta}}+I\biggl[\frac{\partial I[y_j^{mm'}]}{\partial x_{i}^{\dot{\alpha}\b}}\biggr]\partial_{j\,mm'}\notag\\
&\=\phantom{\sum_i\sum_j\bigg[}{}+I\biggl[\frac{\partial I[\theta_j^{\gamma\,m}]}{\partial x_{i}^{\dot{\alpha}\b}}\biggr]\partial_{j\,\gamma\,m} + I\biggl[\frac{\partial I[\tilde\theta_j^{\dot\gamma\,m'}]}{\partial x_{i}^{\dot{\alpha}\b}}\biggr]\partial_{j\,\dot\gamma\,m'}\biggr]\,.
\end{align}
Applying the Schouten identity \eqref{Schouten_4D_1} we obtain
\begin{equation}
\frac{\partial x_{i\delta\dot{\gamma}}^{-1}}{\partial x_{i}^{\dot{\alpha}\b}}=\frac{\epsilon_{\b\delta}\epsilon_{\dot{\alpha}\dot{\gamma}}}{x_i^2}-\frac{x_{i\,\delta\dot{\gamma}}x_{i\,\b\dot{\alpha}}}{x_i^4}=-x_{i\b\dot{\gamma}}^{-1}x_{i\delta\dot{\alpha}}^{-1}\,,
\end{equation}
immediately leading to e.\,g.
\begin{equation}
I\biggl[\frac{\partial I[x_j^{\gamma\dot\delta}]}{\partial x_{i}^{\dot{\alpha}\b}}\biggr]=\delta_{ij}\,x_{i\a}^{\;\;\;\;\dot{\gamma}}\, x_{i\dot{\beta}}^{\;\;\;\;\delta}\,.
\end{equation}
The final result is
\begin{equation}
\begin{gathered}
K_{\alpha \dot{\alpha}} = \sum_{i} \left( x_{i \alpha}^{\;\;\; \dot{\beta}} x_{i \dot{\alpha}}^{\;\;\; \beta} \partial_{i \beta \dot{\beta}} + x_{i \dot{\alpha}}^{\;\;\;\beta} \theta_{i \alpha}^{m} \partial_{i m \beta} + x_{i \alpha}^{\;\;\; \dot{\beta}} \tilde{\theta}_{i \dot{\alpha}}^{n'} \partial_{i n' \dot{\beta}} +\theta_{i \alpha}^{n} \tilde{\theta}_{i\dot{\alpha}}^{n'} \partial_{i n n'}\right)\,.
\end{gathered}
\end{equation}
Note that it would be equally straightforward to obtain the action of $K_{\alpha \dot{\alpha}}$ in full non-chiral superspace from \cref{eq:superconformalGenerators_NC,eq:inversion4dNC}. All other generators of the dual non-chiral superconformal symmetry now follow from the algebra listed in \cref{eq:NCsuperconformal} of appendix~\ref{sec:Algebra_Non_Chiral}. Similar to the chiral case, some of the generators of the dual non-chiral superconformal algebra are directly given by the action of chiral generators in dual non-chiral superspace
\begin{equation}
\begin{gathered}
\begin{aligned}
m_{\alpha\beta}&=M_{\alpha\beta}\,,&\overline{m}_{\dot\alpha\dot\beta}&=\overline{M}_{\dot\alpha\dot\beta}\,,\\
\mathpzc{m}_{\,n m}&=\mathpzc{M}_{\,n m}\,,&\widetilde{\mathpzc{m}}_{\,n' m'}&=\widetilde{\mathpzc{M}}_{\,n' m'}\,,\\
\overline{s}^{n}_{\dot{\alpha}}&=\overline{Q}^{n}_{\dot{\alpha}}\,, &\overline{\widetilde{s}}^{n'}_{\alpha}&=\overline{\widetilde{Q}}^{n'}_{\alpha}\,,\\
\overline{q}^{n}_{\dot{\alpha}}&=-\overline{S}^{n}_{\dot{\alpha}}\,, &\overline{\widetilde{q}}^{n'}_{\alpha}&=-\overline{\widetilde{S}}^{n'}_{\alpha}\,,
\end{aligned}\\
\begin{aligned}
d&=-D+n\,,&\mathpzc{d}&=-\mathpzc{D}-n\,,&b=-B\,.
\end{aligned}
\end{gathered}
\end{equation}
Non-trivial are the dual superconformal generators
\begin{equation}
\begin{aligned}
S^{\alpha n} &= \sum_{i} \left(- \theta_{i}^{\alpha m} \theta_{i}^{\beta n} \partial_{i \beta m} + x_{i}^{\alpha \dot{\beta}} \theta_{i}^{\beta n} \partial_{i \beta \dot{\beta}} - \theta_{i }^{\alpha m} y_{i}^{n m'} \partial_{im m'} + y_{i}^{n m'} x_{i}^{\alpha \dot{\alpha}} \partial_{i\dot{\alpha} m' }\right)\,,\\
\widetilde{S}^{\dot{\alpha} n'} &= \sum_{i} \left(-\tilde{\theta}_{i}^{\dot{\alpha} m'} \tilde{\theta}_{i}^{\dot{\beta} n'} \partial_{i \dot{\beta} m'} + x_{i}^{\dot{\alpha} \beta} \tilde{\theta}_{i}^{\dot{\beta} n'} \partial_{i \beta \dot{\beta}} -\tilde{\theta}_{i}^{ \dot{\alpha} m'} y_{i}^{m n'} \partial_{i m m'} - y_{i}^{m n'} x_{i}^{\dot{\alpha} \alpha} \partial_{i \alpha m}\right)\,,
\end{aligned}
\end{equation}
and the dual R-symmetry boost generator
\begin{equation}
\mathpzc{K}^{n n'} = \sum_{i} \left(- y_{i}^{n m'} y_{i}^{m n'} \partial_{i m m'} - \tilde{\theta}^{\dot{\alpha} n'}_{i} y_{i}^{n m'} \partial_{i \dot{\alpha} m'} - \theta^{\alpha n}_{i} y_{i}^{m n'} \partial_{i \alpha m} +\theta_{i}^{n \alpha} \tilde{\theta}_{i}^{n' \dot{\alpha}} \partial_{i\,\alpha \dot{\alpha}} \right)\,.
\end{equation}
Due to the covariance of the non-chiral superamplitudes under dual conformal inversions, \cref{eq:Inversion_Amp_NC}, some of the generators only act covariantly on the amplitude. From \cref{eq:superconformalGenerators_NC,eq:Inversion_Amp_NC} and the algebra \cref{eq:NCsuperconformal} it follows
\begin{align}
K^{\dot\alpha\alpha}\mathcal{A}_n&=-\sum_i x_i^{\dot\alpha\alpha}\mathcal{A}_n\,,&\mathpzc{K}^{m m'}\mathcal{A}_n&=-\sum_i y_i^{mm'}\mathcal{A}_n\,,\\
S^{\alpha m}\mathcal{A}_n&=-\sum_i \theta_i^{\alpha m}\mathcal{A}_n\,,&\widetilde{S}^{\dot\alpha m'}\mathcal{A}_n&=-\sum_i \tilde\theta_i^{\dot\alpha m'}\mathcal{A}_n\,,\\
D\mathcal{A}_n&=n\,\mathcal{A}_n\,,&\mathpzc{D}\mathcal{A}_n&=-n\,\mathcal{A}_n\,.
\end{align}
For a complete list of the non-chiral superconformal algebra and its dual representation we refer to \cref{sec:Algebra_Non_Chiral}.
\subsubsection{Yangian symmetry of superamplitudes}\label{section:Yangian}
The conventional and dual superconformal algebras present at tree level
close into an infinite-dimensional symmetry algebra known as the Yangian $Y[\text{psu}(2,2|4)]$, as was shown for the chiral and anti-chiral
superamplitudes in \cite{Drummond:2009fd}.
This symmetry algebra is a loop algebra with a positive integer
level structure. Its level-zero generators $J_a^{[0]}=\sum_i J_{a\,i}^{[0]}$ with local densities $ J_{a\,i}^{[0]}$ are given by the original superconformal generators and satisfy
\begin{equation}
[J_a^{[0]},J_b^{[0]}\}=f_{ab}^{\phantom{ab}c}\,J_c^{[0]}\,,
\end{equation}
where $[\cdot,\cdot\}$ denotes the graded commutator and $f_{ab}^{\phantom{ab}c}$ are the structure constants of the superconformal algebra. Invariance under the level-one Yangian generators $J_a^{[1]}$ with the bi-local representation
\begin{equation}\label{eq:level1}
J_a^{[1]}=f_a^{\phantom{a}cb}\sum_{i<j} J_{b\,i}^{[0]} J_{c\,j}^{[0]}
\end{equation}
then follows from the covariance under the non-trivial dual superconformal generators $K_{\a\dot{\alpha}}$, $S_{A}^{\a}$.
The level-one generators obey the commutation relations
\begin{equation}\label{eq:commutatorsLevel1}
[J_a^{[1]},J_b^{[0]}\}=f_{ab}^{\phantom{ab}c}\,J_c^{[1]}
\end{equation}
as well as the Serre relation; for details we refer to \cite{Drummond:2009fd}.
Similar to the chiral superamplitudes, the non-chiral superamplitudes have a Yangian symmetry as well, which has been investigated in \cite{Huang:2011um}. The infinite-dimensional Yangian symmetry of the tree-level superamplitudes is a manifestation of the expected integrability of the planar sector of ${\cal N}=4$ SYM. In principle it should be possible to exploit the algebraic constraints that Yangian invariance puts on the amplitudes to determine the amplitudes efficiently. The fact that the Yangian symmetry is obscured by the manifestly local and unitary Lagrangian formulation of ${\cal N}=4$ SYM theory led to the development of alternative formulations \cite{ArkaniHamed:2009vw,ArkaniHamed:2012nw,Arkani-Hamed:2013jha} that enjoy a manifest Yangian symmetry but lack manifest locality and manifest unitarity.
\section[Six-Dimensional \texorpdfstring{${\cal N}=(1,1)$}{N=(1,1)} SYM theory]{Six-Dimensional \texorpdfstring{$\bm{{\cal N}=(1,1)}$}{N=(1,1)} SYM Theory}
\subsection{On-shell superspace and superamplitudes}
In this section we introduce the maximal supersymmetric $\mathcal{N} = (1,1)$ SYM theory in six dimensions based on references
\cite{Dennen:2009vk, Bern:2010qa, Brandhuber:2010mm, Dennen:2010dh, Huang:2010rn, Huang:2011um, Elvang:2011fx}. The $\mathcal{N} = (1,1)$ SYM theory can be obtained by dimensionally reducing the $\mathcal{N} = 1$ SYM theory in ten dimensions and the dimensional reduction of $\mathcal{N} = (1,1)$ SYM to four dimensions is given by $\mathcal{N} = 4$ SYM theory. Hence, without presenting its Lagrangian we can immediately write down its on-shell degrees of freedom:
\begin{align}
\mbox{gluons:}&\quad g^{a}_{\;\;\dot{a}} & \mbox{scalars:}&\quad s, s', s'', s''' & \mbox{gluinos:}&\quad \chi^{a}, \lambda^{a}& \mbox{anti-gluinos:}&\quad \tilde{\chi}^{\dot{a}}, \tilde{\lambda}_{\dot{a}}
\end{align}
The amplitudes of $\mathcal{N} = (1,1)$ SYM theory are most conveniently studied using the six-dimensional spinor helicity formalism introduced in \cref{section:spinor6d} and the non-chiral on-shell superspace introduced in \cite{Dennen:2009vk}
\begin{equation}
\{\,
\lambda^{A\, a}_{i}\, , \, \tilde\lambda_{i\, A\, \dot a}\, ,\,
\xi_{i\, a}\, ,\, \tilde\xi^{\dot a}_{i}\, \}\, ,
\end{equation}
whose Grassmann variables $\xi_{a}$, $\tilde\xi^{\dot a}$ carry little group indices and can be used to
encode all the on-shell degrees of freedom into the scalar superfield
\begin{multline}\label{eq:superfield6d}
\Omega=s+\chi^a\,\xi_a+s'\,\xi^2+\tilde\chi_{\dot a}\,\tilde\xi^{\dot a} +g^a_{\phantom{a}\dot b}\,\xi_a\tilde\xi^{\dot b}+\tilde\lambda_{\dot b} \, \tilde\xi^{\dot b}\xi^2+s''\,\tilde\xi^2+\lambda^a\,\xi_a \tilde\xi^2+s'''\, \xi^2\tilde\xi^2\,,
\end{multline}
with the abbreviations $\tilde\xi^2=\tfrac{1}{2} \tilde\xi_{\dot a}\tilde\xi^{\dot a}$, $\xi^2=\tfrac{1}{2}\xi^a\xi_a$. Superamplitudes can now be defined as functions of the superfields
\begin{equation}
\begin{gathered}
\mathcal{A}_n=\mathcal{A}_n(\Omega_1, \Omega_2,\dots,\Omega_n)\,.
\end{gathered}
\end{equation}
By construction these superamplitudes are invariant under the $SU(2)\times SU(2)$ little group but, as explained in \cite{Dennen:2009vk}, do not have the $SU(2)_{R}\times SU(2)_{R}$ symmetry of $\mathcal{N} = (1,1)$ SYM theory. As a consequence of the missing $R$-symmetry, the superamplitudes cannot be decomposed according to the degree of helicity
violation as in four dimensions \eqref{eq:MHV_decomposition}.
The non-chiral superamplitudes are homogeneous polynomials of degree $n$ in the $\xi$ and of degree $n$ in the $\tilde\xi$ Grassmann variables
\begin{equation}
\mathcal{A}_n(\{\,
\lambda^{A\, a}_{i}\, , \, \tilde\lambda_{i\, A\, \dot a}\, ,\,
\alpha\xi_{i\, a}\, ,\, \tilde\alpha\tilde\xi^{\dot a}_{i}\, \}) = \alpha^n \tilde\alpha^n\mathcal{A}_n(\{\,
\lambda^{A\, a}_{i}\, , \, \tilde\lambda_{i\, A\, \dot a}\, ,\,
\xi_{i\, a}\, ,\, \tilde\xi^{\dot a}_{i}\, \})\,.
\end{equation}
The tree-level superamplitudes of $\mathcal{N} = (1,1)$ SYM theory are known only up to five external legs \cite{Dennen:2009vk}.
We now review the known amplitudes starting with $n=3$. The special three-point kinematics require the introduction \cite{Cheung:2009dc} of the bosonic spinor variables $u_i^{a}$, $w_i^{a}$, $\tilde{u}_{i \dot{a}}$ and $\tilde{w}_{i \dot{a}}$, defined in appendix~\ref{appendix:threePoint}. With the definition
\begin{align}
{\bf u}_i &= u_i^{a} \xi_{i a}\,,& \qquad \tilde{{\bf u}}_i &= \tilde{u}_{i \dot{a}} \tilde{\xi}_i^{\dot{a}}\,,& \qquad {\bf w}_i &= w_i^{a} \xi_{i a}\,,& \qquad \tilde{{\bf w}}_i &= \tilde{w}_{i \dot{a}} \tilde{\xi}_i^{\dot{a}}
\end{align}
the three point amplitude reads \cite{Cheung:2009dc}
\begin{equation}\label{eq:A3_6D}
\mathcal{A}_3 = -i \delta^{6}\bigl( \sum_{i} p_i^{AB} \bigr) \left({\bf u}_1 {\bf u}_2 + {\bf u}_2 {\bf u}_3 + {\bf u}_3 {\bf u}_1 \right) \left( \sum_{i = 1}^3 {\bf w}_i \right) \left(\tilde{{\bf u}}_1 \tilde{{\bf u}}_2 + \tilde{{\bf u}}_2 \tilde{{\bf u}}_3 + \tilde{{\bf u}}_3 \tilde{{\bf u}}_1 \right) \left( \sum_{i = 1}^3 \tilde{{\bf w}}_i \right)\,,
\end{equation}
and is manifestly cyclic as well as symmetric under chiral conjugation.
The four-point amplitude has the remarkably simple form
\begin{equation}\label{eq:A4_6D}
\mathcal{A}_4 = - \delta^{6}\left(p \right) \delta^{4}\left(q^A \right) \delta^{4}\left( \tilde{q}_{A} \right) \frac{i}{x_{1 3}^2 x_{2 4}^2} \,,
\end{equation}
with the conserved supermomenta being given by
\begin{align}
q^A&=\sum_{i}\lambda_i^{Aa}\xi_{ia}\,,&\tilde{q}_{A}&=\sum_{i}\tilde\lambda_{iA\dot{a}}\tilde\xi_{i}^{\dot{a}}\,,
\end{align}
and the Grassmann delta functions
\begin{align}
\delta^{4}\left(q^A \right)&=\tfrac{1}{4!}\epsilon_{ABCD}q^Aq^Bq^Cq^D\,,&\delta^{4}\left( \tilde{q}_{A} \right)&=\tfrac{1}{4!}\epsilon^{ABCD}\tilde{q}_{A}\tilde{q}_{B}\tilde{q}_{C}\tilde{q}_{D}\,.
\end{align}
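For orientation we note that the denominator consists of the familiar Mandelstam invariants: since $x_i-x_{i+1}=p_i$, the region momenta give
\begin{equation}
x_{13}^2=(p_1+p_2)^2=s\,,\qquad x_{24}^2=(p_2+p_3)^2=t\,,
\end{equation}
so that on the support of the delta functions $\mathcal{A}_4\propto 1/(st)$.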
The five-point amplitude can be computed using the BCFW recursion presented in \cref{section:BCFW6d}. The result, obtained in \cite{Bern:2010qa}, has the form
\begin{equation}\label{eq:5pkt_6d}
\begin{gathered}
\mathcal{A}_5 = - \delta^{6}\left( p \right) \delta^{4}\left( q \right) \delta^{4}\left( \tilde{q} \right) \frac{i}{x_{1 3}^2 x^2_{2 4} x_{3 5}^2 x_{4 1}^2 x_{5 2}^2} \bigl(\langle q_1 |p_2 p_3 p_4 p_5 |\tilde{q}_1] + \mbox{cyclic permutations}\\
\begin{aligned}
&+ \tfrac{1}{2} \langle q_1 | p_2 p_3 p_4 p_5 - p_2 p_5 p_4 p_3 | \tilde{q}_2] + \tfrac{1}{2} \langle q_3 | p_4 p_5 p_1 p_2 - p_4 p_2 p_1 p_5 | \tilde{q}_4] +\text{c.c.}\\
&+ \tfrac{1}{2} \langle q_4 | p_5 p_1 p_2 p_3 - p_5 p_3 p_2 p_1 | \tilde{q}_5] + \tfrac{1}{2} \langle q_3 | p_5 p_1 p_2 p_3 - p_5 p_3 p_2 p_1 | \tilde{q}_5] +\text{c.c.}\;\bigr)\,.
\end{aligned}
\end{gathered}
\end{equation}
This representation of the five-point superamplitude lacks any manifest non-trivial symmetry apart from supersymmetry and is much more complicated than the four-point amplitude \cref{eq:A4_6D}. As the five-point amplitude indicates, superamplitudes with more than three partons have the general form
\begin{equation}
\mathcal{A}_n=\delta^{(6)}\left(p\right)\delta^{(4)}\left(q\right)\delta^{(4)}\left(\tilde q\right)f_n(\{p_i,q_i,\tilde q_i\})\,.
\end{equation}
Judging from the increase in complexity in going from $n=4$ to $n=5$, any straightforward application of the BCFW recursion, using \cref{eq:5pkt_6d} as initial data, cannot be expected to yield manageable expressions for amplitudes with more than five external legs. Evidently, new strategies are necessary to investigate higher-point tree amplitudes of $\mathcal{N} = (1,1)$ SYM theory.
\subsection{Symmetries of superamplitudes}\label{generators_max_6d}
\subsubsection{Superpoincar\'e symmetry}
Although some of the symmetries of the tree-level $\mathcal{N} = (1,1)$ SYM theory amplitudes appear in the literature, e.\,g. in \cite{Elvang:2011fx, Dennen:2010dh}, a complete list of all generators and their algebra is missing. This section aims to close this gap.
We start with the symmetries of the tree level superamplitudes in on-shell superspace $\{\,
\lambda^{A\, a}_{i}\, , \, \tilde\lambda_{i\, A\, \dot a}\, ,\,
\xi_{i\, a}\, ,\, \tilde\xi^{\dot a}_{i}\, \}$.
In contrast to its four-dimensional daughter theory, ${\cal N} = 4$ SYM theory, the six-dimensional ${\cal N} = (1,1)$ SYM theory has no conformal symmetry since the gauge coupling constant in six dimensions is not dimensionless. However, we have a super Poincar\'e symmetry
\begin{equation}
\{p_{AB},m^A_{\;\;B},q^A,\overline{q}^A,\widetilde{q}_A,\overline{\widetilde{q}}_A\}\,.
\end{equation}
The super Poincar\'e algebra is given by the supersymmetry algebra
\begin{equation}\label{komutator_susy}
\begin{aligned}
\left\{q^A, \overline{q}^{B}\right\} &= {p}^{A B} \,,&\qquad \left\{\widetilde{{q}}_A, \widetilde{\overline{{q}}}_{B}\right\} = {p}_{A B}
\end{aligned}
\end{equation}
and the commutators involving the $m^{A}_{\;\;B}$ of the $SO(1,5)$ Lorentz symmetry
with covering group $SU^{\ast}(4)$ read
\begin{equation}
\begin{aligned}
{}[m^{A}_{\;\;B}, m^{C}_{\;\;D}] &= \delta^{C}_{B} m^{A}_{\;\;D}-\delta^{A}_{D} m^{C}_{\;\;B}\,,&\qquad [m^{A}_{\;\;B}, p_{CD}] &= \delta^{A}_{[C} p_{D]B}+\tfrac{1}{2}\delta^{A}_{B} p_{CD}\,,\\
[m^{A}_{\;\;B}, {q}^C] &= \delta^{C}_{B} {q}^A - \tfrac{1}{4} \delta^{A}_{B} {q}^C \,,&\qquad [m^{A}_{\;\;B}, \overline{q}^C]& = \delta^{C}_{B} \overline{{q}}^A - \tfrac{1}{4} \delta^{A}_{B} \overline{{q}}^C\,, \\
[m^{A}_{\;\;B}, \widetilde{{q}}_C] &= - \delta^{A}_{C} \widetilde{{q}}_B + \tfrac{1}{4} \delta^{A}_{B} \widetilde{{q}}_C \,,&\qquad [m^{A}_{\;\;B}, \overline{\widetilde{{q}}}_C] &= - \delta^{A}_{C} \overline{\widetilde{{q}}}_B + \tfrac{1}{4} \delta^{A}_{B} \overline{\widetilde{{q}}}_C \, .
\end{aligned}
\end{equation}
The translation symmetry is trivially given by momentum conservation
\begin{align}
p_{AB}=\sum_i \tilde\lambda_{iA\dot a}\tilde\lambda_{iB}^{\dot a}\,,
\end{align}
and the representation of the $(1,1)$ supersymmetry generators and their conjugates is
\begin{align}
&\begin{aligned}
q^A &= \sum_{i} \lambda_{i}^{A a} \xi_{i a}\,,& \qquad \widetilde{q}_A &= \sum_{i} \tilde{\lambda}_{i A \dot{a}} \tilde{\xi}_{i}^{\dot{a}}\,, \\
\overline{q}^A &= \sum_{i} \lambda_{i}^{A a} \partial_{i a}\,,& \qquad \overline{\widetilde{q}}_A &= \sum_{i} \tilde{\lambda}_{i A \dot{a}}\partial_{i}^{\dot{a}} \,,
\end{aligned}&& \mbox{ with } & \partial_{i a} &= \frac{\partial}{\partial \xi_{i}^{a}}\,, & \partial_{i}^{\dot{a}} &= \frac{\partial}{\partial \tilde{\xi}_{i \dot{a}}} \,.
\end{align}
The correct form of the $su(4)$ Lorentz generators
\begin{equation}
m^{A}_{\;\;B}=\sum_i \lambda_i^{Aa}\partial_{iBa}-\tilde\lambda_{iB\dot a}\partial_i^{A\dot a}-\tfrac{1}{4}\delta^A_B\lambda_i^{Ca}\partial_{iCa}+\tfrac{1}{4}\delta^A_B\tilde\lambda_{iC\dot a}\partial_i^{C\dot a}\,.
\end{equation}
is a bit more involved since the chiral and anti-chiral spinors are subject to the constraints
\begin{align}
\lambda_i^{A\, a}\lambda^{B}_{i\,a}&=\tfrac{1}{2}\epsilon^{ABCD}\tilde\lambda_{iC\, \dot a}\tilde\lambda_{iD}^{\dot a}\,,&\lambda_i^{A\, a}\tilde\lambda_{iA\, \dot a}&=0\,.
\end{align}
However, it is straightforward to show that the generators $m^{A}_{\;\;B}$ given above commute with these constraints.
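As a quick illustration of this representation, the first anticommutator in \eqref{komutator_susy} can be checked in one line: with $\{\xi_{i}^{a},\partial_{j\,b}\}=\delta_{ij}\,\delta^a_b$ and the little group conventions of \cref{appendix:Spinors} one finds
\begin{equation}
\{q^A,\overline{q}^B\}=\sum_{i}\lambda_i^{Aa}\lambda^{B}_{i\,a}=\sum_i p_i^{AB}=p^{AB}\,,
\end{equation}
where the chiral bispinor $p_i^{AB}=\lambda_i^{Aa}\lambda^{B}_{i\,a}$ is equivalent to $p_{i\,AB}$ by the first of the constraints above.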
Besides the super Poincar\'e symmetry there are a few additional trivial symmetries. First of all, we have the dilatation symmetry whose generator
\begin{equation}
d=\tfrac{1}{2}\sum_i\bigl[ \lambda_i^{Aa}\partial_{iAa} +\tilde\lambda_{iA\dot a}\partial_i^{A\dot a}\bigr]+n+2
\end{equation}
simply measures the dimension of a generator $\mathds{G}$
%
\begin{equation}
[{d}, \mathds{G}] = \dim(\mathds{G})\mathds{G}\,.
\end{equation}
The non-zero dimensions are
\begin{align}
\dim(\,p\,)&=1\,,&\dim\left(\,q\,\right)&=\dim\left(\,\overline{q}\,\right)=\dim\left(\,\widetilde q\,\right)=\dim(\,\overline{\widetilde q}\,)=\tfrac{1}{2}\,.
\end{align}
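For instance, $\dim(\,p\,)=1$ is immediate, since $p_{AB}$ is quadratic in the $\tilde\lambda$'s, each of which is counted with weight $\tfrac{1}{2}$ by $d$:
\begin{equation}
[d,p_{AB}]=\tfrac{1}{2}\,(1+1)\,p_{AB}=p_{AB}\,.
\end{equation}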
As already mentioned, the on-shell superfield and consequently the superamplitudes are manifestly invariant under the $SO(4)\simeq SU(2)\times SU(2)$ little group, whose generators are given by
\begin{align}
h_{a b} &= \sum_{i} \lambda^{A}_{i (a}\partial_{iA b)} - \xi_{i (a} \partial_{i b)} \,,& \tilde{h}_{\dot{a} \dot{b}} &= \sum_{i} \tilde{\lambda}_{iA(\dot{a}}\partial_{i\dot{b})}^A - \tilde{\xi}_{i (\dot{a}} \partial_{i \dot{b})}\,.
\end{align}
Finally there are two hyper charges
\begin{align}
b &= \sum_{i} \left(\xi_{i a} \partial_{i}^{a} - 1\right)\,,& \qquad \tilde{b} &= \sum_{i} \left(\tilde{\xi}_{i}^{\dot{a}} \partial_{i \dot{a}} - 1\right)
\end{align}
that correspond to a $U(1) \times U(1)$ subgroup of the $SU(2) \times SU(2)$ $R$-symmetry that we sacrificed for the manifest little group invariance. The action of the hyper charges on some generator $\mathds{G}$ is given by
\begin{align}
[b, \mathds{G}]& = \text{hyper}(\mathds{G})\mathds{G}\,,& [\tilde{b}, \mathds{G}] &= \widetilde{\text{hyper}}(\mathds{G})\mathds{G}\,,
\end{align}
and the non-zero values are
\begin{align}\label{eq:Rcharges}
\text{hyper}(\,q\,)&=\widetilde{\text{hyper}}\left(\,\widetilde q\,\right)=1\,,&\text{hyper}(\,\overline{q}\,)&=\widetilde{\text{hyper}}\left(\,\overline{\widetilde q}\,\right)=-1\,.
\end{align}
Note that the constants in $d$, $b$, $\tilde{b}$ are not fixed by the algebra and have been chosen such that they annihilate the superamplitude.
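For the hyper charges this choice is easily made explicit: by the homogeneity of the superamplitudes, the operator $\sum_i\xi_{i a}\partial_{i}^{a}$ measures the $\xi$-degree of $\mathcal{A}_n$, which is $n$, so that
\begin{equation}
b\,\mathcal{A}_n=\Bigl(\sum_i\xi_{i a}\partial_{i}^{a}-n\Bigr)\mathcal{A}_n=(n-n)\,\mathcal{A}_n=0\,,
\end{equation}
and analogously for $\tilde b$.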
\subsubsection{Enhanced dual conformal symmetry}
All the symmetries presented up to this point exactly match the expectations. Remarkably,
there is an additional non-trivial symmetry of the superamplitudes \cite{Dennen:2010dh}. Similar to $\mathcal{N} = 4$ SYM theory in four dimensions, the $\mathcal{N} = (1,1)$ SYM theory in six dimensions has a tree-level dual conformal symmetry. Due to the lack of a superconformal symmetry, the dual conformal symmetry does not get promoted to a full dual superconformal symmetry.
In analogy to four dimensions we extend the on-shell superspace by dual variables to the full non-chiral superspace
\begin{equation}\label{eq:fullSuper6d}
\{\,
\lambda^{A\, a}_{i}\, , \, \tilde\lambda_{i\, A\, \dot a}\, ,\,
\xi_{i\, a}\, ,\, \tilde\xi^{\dot a}_{i}\,,\,x_i^{AB},\theta_i^A,\tilde\theta_{i\,A} \}\,.
\end{equation}
The variables are subject to the constraints
\begin{equation}\label{eq:constraints6d}
\begin{aligned}
x_{i i+1}^{A B} & = \lambda^{A a}_{i} \lambda^{B}_{i a}\,,& x_{i i+1 \; A B} &=\tilde{\lambda}_{i A \dot{a}} \tilde{\lambda}_{i B}^{\dot{a}}\,, \\
\theta_{i i+1}^{A} & =\lambda^{A a}_{i} \xi_{i a} \,,& \tilde{\theta}_{i i+1 \; A} &= \tilde{\lambda}_{i A \dot{a}} \tilde{\xi}_{i}^{\dot{a}}\,.
\end{aligned}
\end{equation}
Similar to the non-chiral superamplitudes of $\mathcal{N} = 4$ SYM theory, it is possible to express the superamplitudes of $\mathcal{N} = (1,1)$ SYM solely using the dual superspace variables $\{x,\theta,\tilde\theta\}$. The amplitudes only depend on differences of dual variables, resulting in translation symmetries with respect to each of the dual variables. Hence, we define the dual translation generator to be
\begin{equation}
\begin{aligned}
P_{A B} &= \sum_i \partial_{i A B} \,,& \mbox{ with } && \partial_{i A B} &= \frac{\partial}{\partial x_{i}^{A B}}=\tfrac{1}{2}\widetilde{\Sigma}^{\mu\,BA}\frac{\partial}{\partial x_i^{\mu}}\,,
\end{aligned}
\end{equation}
and the dual supermomenta are
\begin{align}
Q_{A} &= \sum_i \partial_{i A}\,, & \widetilde{Q}^{A} &= \sum_i \partial_{i}^{A}\,, &\mbox{ with }&& \partial_{i A}& = \frac{\partial}{\partial \theta_{i}^{A}}\,,& \partial_{i}^{A} &= \frac{\partial}{\partial \tilde{\theta}_{i A}}\,.
\end{align}
Although it is easy to algebraically construct conjugates $\overline{Q}_{A}$, $\overline{\widetilde{Q}}^{A}$ to the dual supermomenta, these conjugates would imply the invariance under the superconformal generators $\overline{s}_{A}=\sum_i\xi_i^a\partial_{iAa}$ and $\overline{\widetilde{s}}^{A}=\sum_i\widetilde{\xi}_{i\dot{a}}\partial_{i}^{A\dot a}$, which is not the case. We conclude that the amplitudes have a supersymmetry-enhanced dual Poincar\'e symmetry
\begin{equation}
\{P_{AB},M^A_{\;\;B},Q_A,\widetilde{Q}^A\}\,.
\end{equation}
Though we do not have a full dual super Poincar\'e symmetry we have a dual conformal symmetry, which we are going to derive in what follows. First we recall that for $n>3$ the superamplitudes have the form
\begin{equation}
\mathcal{A}_n=\delta^{(6)}\left(p\right)\delta^{(4)}\left(q\right)\delta^{(4)}\left(\tilde q\right)f_n\,.
\end{equation}
It is possible to define a dual conformal inversion $I$ of the variables of the full superspace \cref{eq:fullSuper6d} such that the function $f_n$ inverts covariantly
\begin{equation}\label{eq:inversionA6d}
I[f_n]=\left(\prod_i x_i^2\right)\,f_n\,.
\end{equation}
In contrast to four dimensions, the product of the momentum and supermomentum conserving delta functions is not invariant under dual conformal inversions, because the inversion weights of the bosonic and fermionic delta functions do not cancel:
\begin{equation}
I[\delta^{(6)}(x_{1\,n+1})\delta^{(4)}(\theta_{1\,n+1})\delta^{(4)}(\tilde{\theta}_{1\,n+1})]=(x_1^2)^2\delta^{(6)}(x_{1\,n+1})\delta^{(4)}(\theta_{1\,n+1})\delta^{(4)}(\tilde{\theta}_{1\,n+1})\,.
\end{equation}
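Schematically, this weight can be understood by counting: on the support of the delta functions the inversion acts on $x_{1\,n+1}$ as the linear map $x\mapsto x_1^{-1}\,x\,x_1^{-1}$, which rescales the six components by $1/x_1^2$ up to a rotation, so the bosonic delta function contributes a weight $(x_1^2)^{6}$, while each Grassmann delta function transforms with $\det\bigl[(x_1^{-1})_{AB}\bigr]=(x_1^2)^{-2}$. The net weight is therefore
\begin{equation}
(x_1^2)^{6-2-2}=(x_1^2)^{2}\,,
\end{equation}
whereas the analogous counting in four dimensions, $(x_1^2)^{4-2-2}=1$, renders the product $\delta^{(4)}(x_{1\,n+1})\,\delta^{(4)}(\theta_{1\,n+1})\,\delta^{(4)}(\tilde\theta_{1\,n+1})$ invariant.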
The inversion leading to \cref{eq:inversionA6d} is defined as
\begin{align}
I[x_{i}^{\mu}]&=-(x^{-1}_{i})_{\mu}=-\frac{x_{i\,\mu}}{x_i^2}\,,&I[x_{i}^{AB}]&=(x^{-1}_{i})_{AB}\,,\label{eq:inversion6d_first}\\
I[\theta_{i}^{A}]&=\theta_{i}^{B}(x^{-1}_{i})_{BA}\,,&I[\tilde\theta_{i\,A}]&=(x^{-1}_{i})^{AB}\tilde\theta_{i\,B}\,,\\
I[\lambda_{i}^{Aa}]&=\frac{x_{i\,AB}\lambda_{i\,a}^{B}}{\sqrt{x_i^2x_{i+1}^2}}\,,&I[\tilde\lambda_{iA\dot a}]&=\frac{x_{i}^{AB}\tilde\lambda_{iB}^{\dot a}}{\sqrt{x_i^2x_{i+1}^2}}\,,\\
I[\xi_{i\,a}]&=\sqrt{\frac{x_i^2}{x_{i+1}^2}}\left(\xi_{i}^a+\langle\theta_i|x_i^{-1}|i^a\rangle\right)\,,&I[\tilde\xi_{i}^{\dot{a}}]&=-\sqrt{\frac{x_i^2}{x_{i+1}^2}}\left(\tilde\xi_{i\,\dot{a}}+[\tilde\theta_i|x_i^{-1}|i_{\dot a}]\right)\,,\\
I[u_{i\,a}]&=\frac{\beta \,u_i^a}{\sqrt{x_{i+2}^2}}\,,&I[\tilde{u}_{i\,\dot a}]&=\frac{\tilde{u}_i^{\dot a}}{\beta\sqrt{x_{i+2}^2}}\,,
\end{align}
where $\beta$ is some arbitrary constant.
Equations \eqref{eq:inversion6d_first} and the fact that the inversion needs to be an involution on the dual variables, i.\,e.~$I^2=\mathds{1}$, imply the inversion rules of the sigma matrices
\begin{align}
I[\Sigma_{AB}^\mu]&=\widetilde{\Sigma}_{\mu}^{BA}\,,&I[\widetilde{\Sigma}_{\mu}^{AB}]&=\Sigma_{BA}^\mu\,.
\end{align}
Consistency between the inversions of $x$ and the chiral and anti-chiral spinors requires the following inversion of the epsilon tensors of the little group
\begin{align}\label{eq:inversion6d_last}
I[\epsilon_{ab}]&=\epsilon^{ba}\,,&I[\epsilon_{\dot{a}\dot{b}}]&=\epsilon^{\dot{b}\dot{a}}\,.
\end{align}
Consequently, we have $I^2=-\mathds{1}$ on all variables carrying a little group index. Since the superamplitude is little group invariant this is no obstacle.
We note that the inversion defined in eqs.~\eqref{eq:inversion6d_first} to \eqref{eq:inversion6d_last} differs from the one presented in \cite{Dennen:2010dh} by some signs which are necessary in order to yield the desired inversion of the amplitudes. The proof of \cref{eq:inversionA6d} is straightforward using the BCFW recursion and will be presented in \cref{section:ProofDual}.
Similar to the four dimensional case we now define the generators
\begin{align}\label{eq:dualconformal6d}
K^{AB}&=IP_{AB}I\,,& \overline{S}^A&=IQ_AI\,,&\overline{\widetilde{S}}_A&=I\widetilde{Q}^AI\,.
\end{align}
From \cref{eq:inversionA6d} it immediately follows that $f_n$ is annihilated by the dual superconformal generators $ \overline{S}^A$, $\overline{\widetilde{S}}_A$, but is covariant under dual conformal boosts
\begin{equation}\label{eq:actionK6d}
K^{AB}\,f_n=-\left(\sum_i x_i^{AB}\right)f_n\,.
\end{equation}
From the inversion rules eqs.~\eqref{eq:inversion6d_first} to \eqref{eq:inversion6d_last} and the defining equation \eqref{eq:dualconformal6d} we can obtain the action of the dual conformal boost generator by applying the chain rule, cf.\ the four-dimensional case \cref{eq:calculationOfK}. Since the inversion operator squares to $I^2=-\mathds{1}$ on all variables carrying little group indices, the action of $K^{AB}$ on a little group invariant object is given by
\vspace*{0.5cm}
\begin{align}
K^{AB} &=\sum_i\sum_j\bigg[I\biggl[\frac{\partial I[x_j^{CD}]}{\partial x_{i\,AB}}\biggr]\partial_{j\,CD}+I\biggl[\frac{\partial I[\theta_j^{C}]}{\partial x_{i\,AB}}\biggr]\partial_{j\,C}+I\biggl[\frac{\partial I[\tilde\theta_{j\,D}]}{\partial x_{i\,AB}}\biggr]\partial_{j}^{D}\notag\\[+0.5cm]
&\=\phantom{\sum_i\sum_j\bigg[}{}-I\left[\frac{\partial I[\lambda_j^{Ca}]}{\partial x_{i\,AB}}\right]\partial_{j\,C a}-I\left[\frac{\partial I[\tilde{\lambda}_{j\,E\dot{a}}]}{\partial x_{i\,AB}}\right]\partial_{j}^{E\dot{a}}\notag\\[+0.5cm]
&\=\phantom{\sum_i\sum_j\bigg[}{}-I\left[\frac{\partial I[\xi_{j \,a}]}{\partial x_{i\,AB}}\right]\partial_{j}^{a}-I\left[\frac{\partial I[\tilde\xi_{j }^{\dot a}]}{\partial x_{i\,AB}}\right]\partial_{j\,\dot{ a}}\biggr]\,.
\end{align}
The coefficients of the derivatives are straightforward to obtain, leading to
\begin{equation}\label{eq:K6d}
\begin{aligned}
K^{AB} &= \sum_{i} \biggl[ x_{i}^{AC} x_{i}^{BD} \partial_{i\, CD} - \theta^{[A}_{i} x_{i}^{B] C} \partial_{i C} - \epsilon^{ABCD} \tilde{\theta}_{i C} x_{i DE} \partial_{i}^{E} \\
&\=\phantom{\sum_{i} \biggl[}- \tfrac{1}{2}\lambda_{i}^{[A a} \left(x_i + x_{i+1}\right)^{B] C} \partial_{i C a} - \tfrac{1}{2}\epsilon^{ABCD} \tilde{\lambda}_{i C \dot{a}} \left(x_{i} + x_{i+1}\right)_{DE} \partial_{i}^{E \dot{a}} \\
&\=\phantom{\sum_{i} \biggl[}+ \tfrac{1}{2}\left(\theta_{i} + \theta_{i+1}\right)^{[A}\lambda^{B]}_{i a} \partial_{i}^{a}+\tfrac{1}{2} \epsilon^{ABCD} (\tilde{\theta}_{i} + \tilde{\theta}_{i+1})_{C} \tilde{\lambda}_{i D}^{\dot{a}} \partial_{i \dot{a}}\biggr]\,.
\end{aligned}
\end{equation}
In an analogous calculation, or by computing the commutators of $K^{AB}$ with the dual supermomenta $Q_A$, $\widetilde{Q}^A$, we obtain
\begin{align}\label{eq:S_dual}
\overline{S}^{A} &= \sum_i \left(x_{i}^{A B} \partial_{i B} - \lambda^{A}_{i\,a} \partial_{i}^{a}\right)\,,& \overline{\widetilde{S}}_A &=\sum_i \left(x_{i A B} \partial_{i}^{B} - \tilde{\lambda}_{i A \dot{a}} \partial_{i}^{\dot{a}}\right)\,.
\end{align}
Obviously the dual superconformal generators $\overline{S}^{A}$, $\overline{\widetilde{S}}_A$ are related to the conjugate supersymmetry generators $\overline{q}^{A}$, $\overline{\widetilde{q}}_A$ by $\overline{S}^{A}=-\overline{q}^{A}$ and $\overline{\widetilde{S}}_A=-\overline{\widetilde{q}}_A$.
Adding dual conformal inversions promotes the enhanced Poincar\'e symmetry to an enhanced dual conformal symmetry
\begin{equation}
\{P_{AB},M^A_{\;\;B},D,K_{AB},Q_A,\widetilde{Q}^A,\overline{S}^{A},\overline{\widetilde{S}}_A\}\,.
\end{equation}
The generators $M^A_{\;\;B}$ of the $SU(4)$ Lorentz symmetry\footnote{We drop the star
of $SU^{\ast}(4)$ from now on.} act canonically on all generators carrying $SU(4)$ indices
\begin{equation}
\begin{gathered}
{}[M^{A}_{\;\;B}, M^{C}_{\;\;D}] = \delta^{C}_{B} M^{A}_{\;\;D}-\delta^{A}_{D} M^{C}_{\;\;B}\,,\\[+.2cm]
\begin{aligned}
{} [M^{A}_{\;\;B}, P_{CD}] &= \delta^{A}_{[C} P_{D]B}+\tfrac{1}{2}\delta^{A}_{B} P_{CD}\,,&[M^{A}_{\;\;B}, K_{CD}] &= \delta^{A}_{[C} K_{D]B}+\tfrac{1}{2}\delta^{A}_{B} K_{CD}\,,\\
[M^{A}_{\;\;B}, Q_C] &= - \delta^{A}_{C}Q_B + \tfrac{1}{4} \delta^{A}_{B} Q_C \,,& [M^{A}_{\;\;B}, \overline{\widetilde{{S}}}_C] &= - \delta^{A}_{C} \overline{\widetilde{{S}}}_B + \tfrac{1}{4} \delta^{A}_{B} \overline{\widetilde{{S}}}_C\,,\\
[M^{A}_{\;\;B}, \widetilde{{Q}}^C] &= \delta^{C}_{B} \widetilde{{Q}}^A - \tfrac{1}{4} \delta^{A}_{B} \widetilde{{Q}}^C \,,&\qquad [M^{A}_{\;\;B}, \overline{S}^C]& = \delta^{C}_{B} \overline{{S}}^A - \tfrac{1}{4} \delta^{A}_{B} \overline{S}^C\,.
\end{aligned}
\end{gathered}
\end{equation}
The remaining non-zero commutation relations are
\begin{equation}\label{eq:Dualconformal6d}
\begin{gathered}
\begin{aligned}[t]
[{K}^{AB}, {Q}_{C}] &= \delta^{[A}_{C} \overline{{S}}^{B]} \,,&&\hspace{3cm}& [{K}_{AB}, \widetilde{{Q}}^{C}] &= \delta_{[A}^{C} \overline{\widetilde{S}}_{B]}\,,\\
[{P}^{AB}, \overline{\widetilde{S}}_{C}] &= \delta^{[A}_{C} \widetilde{{Q}}^{B]} \,,&&\hspace{3cm}&[{P}_{AB}, \overline{{S}}^{C}] &= \delta_{[A}^{C} Q_{B]}\,,\\
\end{aligned}\\[+.2cm]
[K_{AB},P^{CD}]=\delta_{[A}^{[C}M^{D]}_{\;\;B]}+\delta_{[A}^{C}\delta_{B]}^{D}D\,.
\end{gathered}
\end{equation}
The dual dilatation generator is given by
\begin{equation}
\begin{gathered}
D = -\tfrac{1}{2}\sum_{i}\left(\lambda_{i a}^{A} \partial_{i A}^{a} + \tilde{\lambda}_{i A \dot{a}} \partial_{i}^{A \dot{a}} + \theta_{i}^{A} \partial_{i A} + \tilde{\theta}_{i A} \partial_{i}^{A} + x_{i}^{A B} \partial_{i\,AB} \right)
\end{gathered}
\end{equation}
and, as a consequence of \cref{eq:actionK6d,eq:Dualconformal6d}, acts covariantly
\begin{equation}
D\,f_n=n\,f_n\,.
\end{equation}
The dual Lorentz generators $M^{A}_{\;\;B}$ are equal to the action of the on-shell Lorentz generators $m^{A}_{\;\;B}$ in the full superspace. Their representation can be obtained from the dual conformal algebra \cref{eq:Dualconformal6d} and is given by
\begin{equation}
\begin{aligned}
M^{A}_{\;\;B} = \sum_i\bigl[ x_{i}^{AC} \partial_{i\,B C} - \tfrac{1}{4} \delta^{A}_{B} x_{i}^{C D} \partial_{i\,CD}&+\lambda_{i}^{A a} \partial_{i B a} - \tfrac{1}{4} \delta^{A}_{B} \lambda_{i}^{C a} \partial_{i C a} + \theta_{i}^{A} \partial_{i B} - \tfrac{1}{4} \delta^{A}_{B} \theta_{i}^{C} \partial_{i C} \\
&- \tilde{\lambda}_{i B}^{\dot{a}} \partial_{i}^{A \dot{a}\phantom{i}} + \tfrac{1}{4} \delta^{A}_{B} \tilde{\lambda}_{i C}^{\dot{a}} \partial_{i}^{C \dot{a}\phantom{i}} - \tilde{\theta}_{i B} \partial_{i}^{A} + \tfrac{1}{4} \delta^{A}_{B} \tilde{\theta}_{i C} \partial_{i}^{C} \bigr]\,.
\end{aligned}
\end{equation}
Finally, we define the dual $R$-symmetry hyper charges to be
\begin{align}
B &= \sum_{i} \left(\xi_{i a} \partial_{i}^{a} + \theta_{i}^{A} \partial_{i A}\right) -n+4\,,& \widetilde{B} &= \sum_{i} \left(\tilde{\xi}_{i}^{\dot{a}} \partial_{i \dot{a}} + \tilde{\theta}_{i A} \partial_{i}^{A} \right) -n+4\,.
\end{align}
The non-zero charges are $\text{hyper}(Q)=\widetilde{\text{hyper}}(\widetilde{Q})=-\text{hyper}(\overline{S})=-\widetilde{\text{hyper}}(\overline{\widetilde{S}})=1$, and the constants in the definitions of $B$ and $\widetilde{B}$ have been fixed such that $f_n$ gets annihilated.
\subsection{Dimensional reduction to massless ${\mathcal{N} = 4}$ SYM}\label{section:dimensional_reduction}
In this section we explain how the six dimensional tree-level superamplitudes can be mapped to non-chiral superamplitudes of massless $\mathcal{N} = 4$ SYM. Similar mappings can be found in references \cite{Elvang:2011fx, Bern:2010qa, Dennen:2009vk,Huang:2011um}.
In order to perform the dimensional reduction we restrict the six-dimensional momenta to the preferred four-dimensional subspace $p_4=p_5=0$. Because of our special choice of six-dimensional Pauli matrices, cf.~\cref{eq:SigmaGamma}, one can express the six-dimensional spinors in terms of four-dimensional ones
\begin{align}\label{eq:MapSpinors6d4d}
\lambda^{Aa} &= \left(\begin{array}{cc} 0 & \lambda_{\alpha} \\ \tilde{\lambda}^{\dot{\alpha}} & 0 \end{array}\right)\,,& \tilde{\lambda}_{A\dot{a}} &= \left(\begin{array}{cc} 0 & \lambda^{\alpha} \\ -\tilde{\lambda}_{\dot{\alpha}} & 0 \end{array}\right)\,.
\end{align}
In the four-dimensional subspace the contractions with the six-dimensional Pauli matrices read
\begin{align}\label{eq:MapMomenta6d4d}
p_{AB} &= \left(\begin{array}{cc} 0 & -p^{\alpha}_{\;\;\dot \beta} \\ p^{\;\;\beta}_{\dot \alpha} & 0 \end{array}\right)\,,& p^{AB} &= \left(\begin{array}{cc} 0 & -p_{\alpha}^{\;\;\dot \beta} \\ p^{\dot{\alpha}}_{\;\;\beta} & 0 \end{array}\right)\,,
\end{align}
and the supermomenta are
\begin{align}\label{eq:456}
q^{A} &= \lambda^{A a} \xi_{a} = \left(\begin{array}{c} \lambda_{\alpha} \xi_{2} \\ \tilde{\lambda}^{\dot{\alpha}} \xi_{1} \end{array}\right) \,,&
\tilde{q}_{A} &= \tilde{\lambda}_{A \dot{a}} \tilde{\xi}^{\dot{a}} = \left(\begin{array}{cc} \lambda^{\alpha} \tilde\xi^{\dot{2}}, & -\tilde{\lambda}_{\dot{\alpha}} \tilde{\xi}_{\dot{1}} \end{array}\right)\,.
\end{align}
Obviously, both $\xi_{a}$ and $\tilde\xi^{\dot{a}}$ have to be mapped to $\eta^{m}$ and $\tilde{\eta}_{m'}$. Here we make the choice
\begin{align}\label{eq:grassmann_map}
\xi_{a} &= \left(\tilde{\eta}_{3},\eta^{1} \right)\,,& \tilde{\xi}^{\dot{a}}& = \left(\tilde{\eta}_{2}, -\eta^{4}\right)\,,
\end{align}
recall that we are using the convention $m\in\{1,4\}$ and $m'\in\{2,3\}$ for the
non-chiral 4d superspace of section \ref{ncsuperspace4d}.
This implies the maps of the supermomenta
\begin{align}\label{eq:MapSupermomenta6d4d}
q^{A} &= \left(\begin{array}{c} q_\alpha^1 \\ \tilde{q}^{\dot{\alpha}}_{3} \end{array}\right) \,,&
\tilde{q}_{A} & = \left(\begin{array}{cc} -q^{\alpha\,4} , & -\tilde{q}_{\dot{\alpha}\,2} \end{array}\right)\,,
\end{align}
and supermomentum conserving delta functions
\begin{equation}\label{eq:projektion_delta}
\delta^{4} \left(\sum_{i} q_{i}^{A} \right) \delta^{4} \left(\sum_{i} \tilde{q}_{i A} \right)= \delta^{4} \left(\sum_{i} q_{i \alpha}^{m} \right) \delta^{4} \left(\sum_{i} \tilde{q}^{\dot{\alpha}}_{i m'} \right)\,.
\end{equation}
Applying the map of the Grassmann variables \cref{eq:grassmann_map} to the six-dimensional superfield \cref{eq:superfield6d} and comparing it with the four-dimensional non-chiral superfield \cref{eq:nonChiralSuperfield} yields the following map of the six- and four-dimensional on-shell states
\begin{equation}\label{eq:Map4d6d}
\begin{aligned}
\mbox{scalars:}&&\hspace{1cm}& \begin{aligned} s &= \phi_{2 3}\,, & s' &= \phi_{ 2 1}\,, & s''& = \phi_{4 3}\,,& s''' &= \phi_{4 1} \,,\end{aligned}\\
\mbox{gluinos:}&&\hspace{1cm}& \begin{aligned}\chi^{a} &= \left(\overline{\psi}^{4} ,-\psi_{2}\right)\,,& \lambda^{a} &= \left(\psi_{4},-\overline{\psi}^{2} \right)\,, \\
\tilde{\chi}_{\dot a} &=\left(-\psi_{3}, -\overline{\psi}^{1} \right)\,, & \tilde{\lambda}_{\dot{a}} &= \left(-\psi_{1}, -\overline{\psi}^{3} \right)\,,\end{aligned}\\
\mbox{gluons:}& &\hspace{1cm}& g^a_{\phantom{a}\dot{a}} = \left(\begin{array}{cc}G_{+} &\phi_{4 2} \\\phi_{3 1} & -G_{-}\end{array}\right)\,.
\end{aligned}
\end{equation}
With the help of \cref{eq:MapSpinors6d4d,eq:MapMomenta6d4d,eq:grassmann_map,eq:MapSupermomenta6d4d} it is possible to perform the dimensional reduction of any six-dimensional superamplitude.
For a detailed analysis of the connection between the massless amplitudes in six and four dimensions and an investigation of a potential uplift from four to six dimensions we refer to \cref{section:uplift_huang}.
%
\section{From massless 6d to massive 4d superamplitudes}\label{section:DimRedmassive}
\subsection{On-shell massive superspace in 4d from dimensional reduction}
In \cref{section:dimensional_reduction} we dimensionally reduced the massless six-dimensional amplitudes to massless four-dimensional ones. In analogy, we now want to perform the dimensional reduction of the superamplitudes of ${\mathcal N}=(1,1)$ SYM to the massive Coulomb branch amplitudes of $\mathcal{N}=4$ SYM.
When performing the dimensional reduction we need to choose an appropriate set of massive four-dimensional on-shell variables. For the bosonic part of the on-shell variables we choose \emph{two} sets of helicity spinors $\{\lambda_{\alpha},\tilde{\lambda}_{\dot \alpha}\}$
and $\{\mu_{\alpha},\tilde{\mu}_{\dot \alpha}\}$ to write the bispinor representation of a four-dimensional massive momentum as
\begin{align}
p_\mu\sigma^\mu_{\a\dot{\alpha}}&=p_{\a\dot{\alpha}}= \lambda_{\a}\tilde{\lambda}_{\dot{\alpha}} + \mu_{\a}\tilde{\mu}_{\dot{\alpha}}\,.
\end{align}
We introduce abbreviations for the spinor contractions
\begin{equation}
\label{massshell}
\ang{\lambda}{\mu} = m \, ,\qquad
[\tilde\mu \,\tilde\lambda] = \bar m \, ,
\end{equation}
where the mass parameters $m$ and $\bar m$ are in general complex numbers, related to the physical mass by $p^2=m \bar m$.
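Indeed, since $\ang{\lambda}{\lambda}=[\tilde\mu\,\tilde\mu]=0$, only the cross terms between the two bispinors survive when squaring the momentum, and with the spinor conventions of \cref{appendix:Spinors} one finds
\begin{equation}
p^2=\tfrac{1}{2}\,\epsilon^{\alpha\beta}\epsilon^{\dot\alpha\dot\beta}\,p_{\alpha\dot\alpha}\,p_{\beta\dot\beta}=\ang{\lambda}{\mu}\,[\tilde\mu\,\tilde\lambda]=m\,\bar m\,.
\end{equation}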
For the particular representation of the six-dimensional Pauli matrices listed in \cref{appendix:Spinors}, the six-dimensional spinors can be expressed using the two sets of four-dimensional spinors introduced above
\begin{align}\label{eq:DimRedSpinors}
\lambda^{A\,a}&=\begin{pmatrix}
-\mu_\alpha&\lambda_\alpha\\
\tilde\lambda^{\dot\alpha}&\tilde\mu^{\dot\alpha}
\end{pmatrix}&\text{and}&&
\tilde\lambda_{A\,\dot a}&=\begin{pmatrix}
\bar\rho\mu^\alpha&\lambda^\alpha\\
-\tilde\lambda_{\dot\alpha}&\rho\tilde\mu_{\dot\alpha}
\end{pmatrix}&\text{with}&&\rho&=\bar\rho^{-1}=\frac{m}{\bar m}\,,
\end{align}
and the six-dimensional momenta and dual momenta are given by
\begin{align}\label{eq:DimRedMomenta}
p_{AB}&=\begin{pmatrix}
-\bar m\,\epsilon^{\alpha\beta}&-p^\alpha_{\phantom{\alpha}\dot\beta}\\
p_{\dot \alpha}^{\phantom{\dot \alpha}\beta}&m\,\epsilon_{\dot\alpha\dot\beta}
\end{pmatrix}&p^{AB}&=\begin{pmatrix}m \,\epsilon_{\alpha\beta}&-p_{\alpha}^{\phantom{\alpha}\dot\beta}\\
p^{\dot\alpha}_{\phantom{\dot\alpha}\beta}&-\bar m\,\epsilon^{\dot\alpha\dot\beta}\end{pmatrix}
\end{align}
and
\begin{align}\label{eq:DimRedRegions}
x_{AB}&=\begin{pmatrix}
-\bar n\,\epsilon^{\alpha\beta}&-x^\alpha_{\phantom{\alpha}\dot\beta}\\
x_{\dot \alpha}^{\phantom{\dot \alpha}\beta}&n\,\epsilon_{\dot\alpha\dot\beta}
\end{pmatrix}&x^{AB}&=\begin{pmatrix}n\,\epsilon_{\alpha\beta}&-x_{\alpha}^{\phantom{\alpha}\dot\beta}\\
x^{\dot\alpha}_{\phantom{\dot\alpha}\beta}&-\bar n\,\epsilon^{\dot\alpha\dot\beta}\end{pmatrix}\,.
\end{align}
Here $p_{\a\dot{\alpha}}=p_\mu\sigma^\mu_{\a\dot{\alpha}}$, $x_{\a\dot{\alpha}}=x_\mu \sigma^\mu_{\a\dot{\alpha}}$ are the contractions of the first four components of the six-dimensional vectors with the four-dimensional Pauli matrices and $m=p_5-ip_4$, $n=x_5-ix_4$. Our conventions for four-dimensional spinors can be found in \cref{appendix:Spinors}.
Since we are interested in massive four-dimensional amplitudes in the following, from now on we set the fourth spatial component of all six-dimensional vectors to zero, thereby effectively performing the dimensional reduction from a massless five-dimensional to a massive four-dimensional theory. This is equivalent to setting $n=\bar{n}=x_5$ and imposing the constraint $m=\bar m$ on the spinor variables, which together with the reality condition for the momenta $\lambda^{\ast}=\pm \tilde\lambda$, $\mu^{\ast}=\pm \tilde\mu$ results in the 5 real degrees of freedom of a massive four-dimensional momentum and a spin quantization axis\footnote{Each helicity spinor starts out with 4 real degrees of freedom; the reality condition
$\lambda^{\ast}=\pm \tilde\lambda$ and the $U(1)$ helicity scaling $\lambda\to\exp[i\alpha]\lambda$
cut this down to 3 real degrees of freedom. The further condition
$\ang{\lambda}{\mu}=\bracket{\tilde\mu}{\tilde\lambda}$ brings us to 5=3+3-1 degrees of freedom.}.
Inserting the dimensional reduction of the spinors into the definition of the supermomenta we obtain
\begin{align}
q^{A} &= \lambda^{A a} \xi_{a} = \left(\begin{array}{c}-\mu_{\alpha} \xi_{1}+ \lambda_{\alpha} \xi_{2} \\ \tilde{\lambda}^{\dot{\alpha}} \xi_{1}+\tilde{\mu}^{\dot{\alpha}} \xi_{2} \end{array}\right) \,,&
\tilde{q}_{A} &= \tilde{\lambda}_{A \dot{a}} \tilde{\xi}^{\dot{a}} = \left(\begin{array}{cc} \mu^{\alpha} \tilde\xi^{\dot{1}}+\lambda^{\alpha} \tilde\xi^{\dot{2}}, & -\tilde{\lambda}_{\dot{\alpha}} \tilde{\xi}_{\dot{1}} +\tilde{\mu}_{\dot{\alpha}} \tilde{\xi}_{\dot{2}} \end{array}\right)\,,
\end{align}
generalizing the four-dimensional massless case of \cref{eq:456}. It is then
convenient to define the Grassmann part of our four-dimensional massive on-shell variables to be
\begin{align}\label{eq:MapGrassmann}
\zeta^a&=\begin{pmatrix}\xi_{1}\\-\tilde\xi^{\dot 1}\end{pmatrix}\,,&\bar\zeta^a&=\begin{pmatrix}\xi_{2}\\\tilde\xi^{\dot 2}\end{pmatrix}\,,
\end{align}
leading to the four-dimensional supermomenta
\begin{align}
q_{\alpha}^a&=\lambda_{\alpha}\bar\zeta^a-\mu_{\alpha}\zeta^a&
\tilde{q}_{\dot\alpha}^a&=\tilde\lambda_{\dot\alpha}\zeta^a+\tilde\mu_{\dot\alpha}\bar\zeta^a\,,
\end{align}
related to the six-dimensional ones by
\begin{align}\label{eq:DimRedSuper}
q^{A} &= \left(\begin{array}{c}q_\alpha^1 \\ \tilde{q}^{\dot\alpha\,1} \end{array}\right) \,,&
\tilde{q}_{A} &= \left(\begin{array}{cc} q^{\alpha\,2}, & \tilde{q}_{\dot\alpha}^2 \end{array}\right)\,.
\end{align}
The dual fermionic momenta $\theta^{a}_{i\,\alpha}$, $\tilde\theta^{a}_{i\,\dot\alpha}$ are defined by
\begin{align}
(\theta_i-\theta_{i+1})^{a}_\alpha&= q^{a}_{i\,\alpha}\\
(\tilde\theta_i-\tilde\theta_{i+1})^{a}_{\dot\alpha}&= \tilde{q}^{a}_{i\,\dot\alpha}\,,
\end{align}
and are related to the six-dimensional dual fermionic momenta by
\begin{align}\label{eq:DimRedDual}
\theta^{A} &= \left(\begin{array}{c}\theta_\alpha^1 \\ \tilde{\theta}^{\dot\alpha\,1} \end{array}\right) \,,&
\tilde{\theta}_{A} &= \left(\begin{array}{cc} \theta^{\alpha\,2}, & \tilde{\theta}_{\dot\alpha}^2 \end{array}\right)\,.
\end{align}
In conclusion the massive Coulomb branch amplitudes of $\mathcal{N}=4$ SYM may be expressed either by the on-shell variables
\begin{equation}\label{eq:onshellMassive}
\{\lambda_i^\alpha,\mu_i^\alpha,\tilde\lambda_i^{\dot\alpha},\tilde\mu_i^{\dot\alpha};\zeta_i^a,\bar{\zeta}_i^a\}
\end{equation}
or the dual variables
\begin{equation}\label{eq:dualMassive}
\{x_i^{\dot{\alpha}\beta},n_i,\theta^{a}_{i\,\alpha},\tilde\theta^{a}_{i\,\dot\alpha}\}\,.
\end{equation}
In the associated full superspace the constraints on the variables read
\begin{align}
(x_i-x_{i+1})_{\alpha\dot\alpha}&=p_{i\,\alpha\dot\alpha}\label{eq:constraint1}\,,\\
n_i-n_{i+1}&=m_i\,,\\
m_i&=\bar m_i\label{eq:constraint3}\,,\\
(\theta_i-\theta_{i+1})^{a}_\alpha&= q^{a}_{i\,\alpha}\label{eq:constraint4}\,,\\
(\tilde\theta_i-\tilde\theta_{i+1})^{a}_{\dot\alpha}&= \tilde{q}^{a}_{i\,\dot\alpha}\label{eq:constraint7}\,.
\end{align}
With the help of the maps \cref{eq:DimRedSpinors,eq:DimRedMomenta,eq:MapGrassmann,eq:DimRedRegions,eq:DimRedDual,eq:DimRedSuper} it is straightforward to translate any representation of a six-dimensional superamplitude into our four-dimensional variables. From the general form of the six-dimensional superamplitudes we can deduce the general form of the massive amplitudes to be
\begin{equation}\label{eq:massiveAMPS}
\mathcal{A}_n=\delta^{(1)}(n_{1\,n+1})\delta^{(4)}(x_{1\,n+1})\delta^{(4)}(\theta_{1\,n+1}^{\a\,a})\delta^{(4)}(\tilde\theta_{1\,n+1}^{\dot{\alpha}\,a})f_n(\{x_{ij},n_{ij},\theta_{ij},\tilde\theta_{ij}\})\,.
\end{equation}
\subsection{Symmetries of massive $\mathcal{N}=4$ superamplitudes on the Coulomb branch}\label{section:symmetriesMassive}
\subsubsection{Super-Poincar\'e symmetry}
We now want to investigate the symmetries of the massive amplitudes using the on-shell variables \cref{eq:onshellMassive} introduced in the last section. To be more precise, we are interested in the symmetries of $f_n$, defined in \cref{eq:massiveAMPS}, on the support of the delta functions. Similar to the massless four-dimensional case we define shorthand notations for derivatives with respect to the spinors
\begin{align}
\partial_{i\,\alpha} &= \frac{\partial}{\partial \lambda^{\alpha}_{i}}\,,&
\partial_{i\,\dot{\alpha}} &= \frac{\partial}{\partial
\tilde{\lambda}^{\dot{\alpha}}_{i}}\,, &
\delta_{i\, \alpha} &= \frac{\partial}{\partial \mu_{i}^{\alpha}}\,, &
\delta_{i\,\dot{\alpha}} &= \frac{\partial}{\partial
\tilde{\mu}_{i}^{\dot{\alpha}}}\,.
\end{align}
Judging from the symmetries of the six-dimensional superamplitudes, presented in \cref{generators_max_6d}, and the imposed constraint $m=\bar{m}$, we expect a five-dimensional super Poincar\'e symmetry. It remains to show how this symmetry is realized on the on-shell variables \cref{eq:onshellMassive}.
Obviously we have translation invariance
\begin{align}
p^{\alpha\dot\alpha}&=\sum_i \lambda_i^{\a}\tilde{\lambda}_i^{\dot{\alpha}} + \mu_i^{\a}\tilde{\mu}_i^{\dot{\alpha}}\,,&m&= \sum_{i} \ang{\lambda_{i}}{\mu_{i}}=\sum_{i} \sqb{\tilde\mu_{i}}{\tilde\lambda_{i}}\,,
\end{align}
as well as the Lorentz generators
\begin{align}
l_{\alpha\beta}&=\sum_i\lambda_{i\,(\alpha}\partial_{i\,\beta)}+\mu_{i\,(\alpha}\delta_{i\,\beta)}\,,&
\bar l_{\dot\alpha\dot\beta}&=\sum_i\tilde\lambda_{i\,(\dot\alpha}\partial_{i\,\dot\beta)}+\tilde\mu_{i\,(\dot\alpha}\delta_{i\,\dot\beta)}\,,
\end{align}
associated with Lorentz transformations within the four-dimensional subspace. Lorentz rotations $l^{\mu 5}$ involving the fifth spatial dimension correspond to the generator
\begin{align}
w_{\alpha\dot\alpha}&=\sum_i\tilde\mu_{i\,\dot\alpha}\partial_{i\,\alpha}-\tilde\lambda_{i\,\dot\alpha}\delta_{i\,\alpha}+\mu_{i\,\alpha}\partial_{i\,\dot\alpha}-\lambda_{i\,\alpha}\delta_{i\,\dot\alpha}\,.
\end{align}
Supersymmetry is realized as
\begin{align}
q_{\alpha}^a&=\sum_i\lambda_{i\,\alpha}\bar\zeta_i^a-\mu_{i\,\alpha}\zeta_i^a\,,&
\tilde{q}_{\dot\alpha}^a&=\sum_i\tilde\lambda_{i\,\dot\alpha}\zeta_i^a+\tilde\mu_{i\,\dot\alpha}\bar\zeta_i^a\,,\\
\bar q_{\dot\alpha\,a}&=\sum_i\tilde\lambda_{i\,\dot\alpha}\frac{\partial}{\partial\bar\zeta_i^a}-
\tilde\mu_{i\,\dot\alpha}\frac{\partial}{\partial\zeta_i^a}\,,&\bar{\tilde{q}}_{\alpha\,a}&=\sum_i\lambda_{i\,\alpha}\frac{\partial}{\partial\zeta_i^a}+
\mu_{i\,\alpha}\frac{\partial}{\partial\bar{\zeta}_i^a}\,.
\end{align}
Trivially we have a dilatation symmetry with the generator
\begin{align}\label{eq:dilatation}
d&=\tfrac{1}{2}\sum_i(\lambda^{\alpha}_i\partial_{i\,\alpha}+\tilde\lambda^{\dot\alpha}_i\partial_{i\,\dot\alpha}+\mu^{\alpha}_i\delta_{i\,\alpha}+\tilde\mu_i^{\dot\alpha}\delta_{i\,\dot\alpha}+2)\,.
\end{align}
Under the dimensional reduction of the spinors, \cref{eq:DimRedSpinors}, the chiral and anti-chiral spinors are no longer independent. As a consequence only one $SU(2)$ factor of the $SU(2)\times SU(2)$ little group symmetry survives the dimensional reduction. Indeed we have the $SU(2)$ helicity generators
\begin{align}
h_+&=\frac{1}{\sqrt{2}}\sum_i\left(\lambda_i^\a\delta_{i\,\a}-\tilde\mu_i^{\dot{\alpha}}\partial_{i\,\dot{\alpha}}+\zeta_i^a\frac{\partial}{\partial\bar \zeta_i^a}\right),&h_-&=\frac{1}{\sqrt{2}}\sum_i\left(\mu_i^\a\partial_{i\,\a}-\tilde\lambda_i^{\dot{\alpha}}\delta_{i\,\dot{\alpha}}+\bar\zeta_i^a\frac{\partial}{\partial \zeta_i^a}\right),\notag\\
h&=\frac{1}{2}\sum_i\left(\lambda_i^\a\partial_{i\,\a}+\tilde\mu_i^{\dot{\alpha}}\delta_{i\,\dot{\alpha}}-\mu_i^{\a}\delta_{i\,\a}-\tilde\lambda_i^{\dot{\alpha}}\partial_{i\,\dot{\alpha}}+\zeta_i^a\frac{\partial}{\partial \zeta_i^a}-\bar\zeta_i^a\frac{\partial}{\partial \bar\zeta_i^a}\right).\hspace{-3.7cm}&&
\end{align}
They satisfy the following closed algebra
\begin{equation}\label{eq:OnShellAlgebra}
\begin{aligned}
{}[h_+,h_-]&=h&\hspace{2cm}[h,h_{\pm}]&=\pm h_{\pm}\\
[l_{\a\b},l_{\gamma\delta}]&=2\epsilon_{\gamma(\a}l_{\b)\delta}+2\epsilon_{\delta(\a}l_{\b)\gamma}&[\bar l_{\dot{\alpha}\dot{\beta}},\bar l_{\dot\gamma\dot\delta}]&=2\epsilon_{\dot\gamma(\dot{\alpha}}\bar l_{\dot{\beta})\dot\delta}+2\epsilon_{\dot\delta(\dot{\alpha}}\bar l_{\dot{\beta})\dot\gamma}\\
[w_{\alpha\dot\alpha}, w_{\beta\dot\beta}]&=2\epsilon_{\alpha\beta}\bar l_{\dot\alpha\dot\beta}+2\epsilon_{\dot\alpha\dot\beta} l_{\alpha\beta}&&\\
[\, l_{\b\gamma}, w_{\a\dot{\alpha}}\, ] &= \epsilon_{\a(\b}\, w_{\gamma)\dot{\alpha}}&[\, \bar l_{\dot{\beta}\dot{\gamma}}, w_{\a\dot{\alpha}}\, ] &= - w_{\a(\dot{\beta}}\,\epsilon_{\dot{\gamma})\dot{\alpha}}\\
[\, l_{\b\gamma}, p_{\a\dot{\alpha}}\, ] &= \epsilon_{\a(\b}\, p_{\gamma)\dot{\alpha}}&[\, \bar l_{\dot{\beta}\dot{\gamma}}, p_{\a\dot{\alpha}}\, ] &= - p_{\a(\dot{\beta}}\,\epsilon_{\dot{\gamma})\dot{\alpha}}\\
[w_{\alpha\dot\alpha},m]&=p_{\alpha\dot\alpha}&[w_{\alpha\dot\alpha},p_{\beta\dot\beta}]&=2\epsilon_{\alpha\beta}\epsilon_{\dot\alpha\dot\beta}m\\
[\, l_{\b\gamma}, q_{\a}^a\, ] &= \epsilon_{\a(\b}\, q_{\gamma)}^a&[\, l_{\b\gamma}, \bar{\tilde{q}}_{\a\,a}\, ] &= \epsilon_{\a(\b}\, \bar{\tilde{q}}_{\gamma)\,a}\\
[\, \bar l_{\dot{\beta}\dot{\gamma}}, \tilde{q}_{\dot{\alpha}}^a\, ] &= \epsilon_{\dot{\alpha}(\dot{\beta}}\, \tilde{q}_{\dot{\gamma})}^a&[\, \bar l_{\dot{\beta}\dot{\gamma}}, \bar q_{\dot{\alpha}\,a}\, ] &= \epsilon_{\dot{\alpha}(\dot{\beta}}\, \bar q_{\dot{\gamma})\,a}\\
[w_{\alpha\dot\alpha},q_{\beta}^a]&=-\epsilon_{\alpha\beta}\tilde{q}_{\dot\alpha}^a&[w_{\alpha\dot\alpha},\tilde{q}_{\dot\beta}^a]&=\epsilon_{\dot\alpha\dot\beta}q_{\alpha}^a\\
[w_{\alpha\dot\alpha},\bar{\tilde{q}}_{\beta\,a}]&=\epsilon_{\alpha\beta}\bar q_{\dot\alpha\,a}&[w_{\alpha\dot\alpha},\bar q_{\dot\beta\, a}]&=-\epsilon_{\dot\alpha\dot\beta}\bar{\tilde{q}}_{\alpha\, a}\\
\{q_{\alpha}^a,\bar q_{\dot\alpha\,b}\}&=p_{\alpha\dot\alpha}\,\delta^a_b&\{\tilde{q}_{\dot\alpha}^a,\bar{\tilde{q}}_{\alpha\,b}\}&=p_{\alpha\dot\alpha}\,\delta^a_b\\
\{q_{\alpha}^a,\bar{\tilde{q}}_{\beta\,b}\}&=m\,\epsilon_{\alpha\beta}\,\delta_b^a&\{\tilde{q}_{\dot\alpha}^a,\bar q_{\dot\beta\,b}\}&=-m\,\epsilon_{\dot\alpha\dot\beta}\,\delta_b^a
\end{aligned}
\end{equation}
along with the generic $[d, j] =\dim(j)\, j$ for any generator $j$; all other commutators vanish. A necessary condition for the generators to be well defined on the massive amplitudes under consideration is that they commute with the constraint $m=\bar{m}$. Indeed, one can show that this is the case, e.g.
\begin{equation}
[\, w_{\a\dot{\alpha}}, \ang{\lambda_{i}}{\mu_{i}} - [\tilde\mu_{i}\,\tilde\lambda_{i}]\, ] = 0\,.
\end{equation}
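In the same manner the mass terms in the algebra \eqref{eq:OnShellAlgebra} can be verified directly in the above representation, e.\,g.
\begin{equation}
\{q_{\alpha}^a,\bar{\tilde{q}}_{\beta\,b}\}=\sum_i\bigl(\lambda_{i\,\alpha}\mu_{i\,\beta}-\mu_{i\,\alpha}\lambda_{i\,\beta}\bigr)\,\delta^a_b=\sum_i\epsilon_{\alpha\beta}\,\ang{\lambda_{i}}{\mu_{i}}\,\delta^a_b=m\,\epsilon_{\alpha\beta}\,\delta^a_b\,,
\end{equation}
where in the second step $\lambda_{i\,[\alpha}\mu_{i\,\beta]}\propto\epsilon_{\alpha\beta}\ang{\lambda_i}{\mu_i}$ was used with the sign conventions of \cref{appendix:Spinors}; the mass thus plays the role of a central charge in the massive supersymmetry algebra.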
Clearly the nice form of the algebra suggests the existence of an $SU(2)$ symmetry with respect to the Grassmann label $a$ introduced in \cref{eq:MapGrassmann}. However, at this point we see no indication that such a symmetry is realized on the massive superamplitudes \eqref{eq:massiveAMPS} for multiplicities larger than four; the introduction of the Grassmann variables $\zeta^a$, $\bar\zeta^a$ and their dual partners $\theta^a$, $\tilde\theta^a$ should therefore be regarded as a very convenient way to compactly write down the algebra. Indeed, the $SU(2)$ symmetry of the algebra is explicitly broken once we include the generators $r_1$, $r_2$ of the $U(1)\times U(1)$ $R$-symmetry realized on the massive superamplitudes \eqref{eq:massiveAMPS}
\begin{align}
r_1&=\sum_i \left(\zeta_i^1\frac{\partial}{\partial \zeta_i^1}+\bar\zeta_i^1\frac{\partial}{\partial \bar\zeta_i^1}\right) -n+4\,,&r_2&=\sum_i\left( \zeta_i^2\frac{\partial}{\partial \zeta_i^2}+\bar\zeta_i^2\frac{\partial}{\partial \bar\zeta_i^2}\right)-n+4\,.
\end{align}
Invariance under $r_a$ follows from the invariance of the six-dimensional superamplitudes under the hyper charges $b$, $\tilde{b}$, cf.~\eqref{eq:Rcharges}. We have
\begin{align}
[r_a,q_{\alpha}^b]&=\delta_a^b q_{\alpha}^b\,,&[r_a,\tilde{q}_{\dot{\alpha}}^b]&=\delta_a^b \tilde{q}_{\dot{\alpha}}^b\,,&[r_a,\bar{q}_{\dot{\alpha}\,b}]&=-\delta_a^b \bar{q}_{\dot{\alpha}\,b}\,,&[r_a,\bar{\tilde q}_{\a\,b}]&=-\delta_a^b \bar{\tilde q}_{\a\,b}\,.
\end{align}
\subsubsection{Enhanced dual conformal symmetry}
We now want to investigate the symmetries in the dual superspace \eqref{eq:dualMassive}. Similar to the on-shell case, we already know from the six-dimensional amplitudes that we will have an extended dual conformal symmetry. Obviously the massive amplitudes have an extended
dual Poincar\'e symmetry with generators
\begin{equation}
\{P_{\alpha\dot{\alpha}},\,M,\,L_{\a\b},\,\bar L_{\dot{\alpha}\dot{\beta}},\,W_{\a\dot{\alpha}}\} \,.
\end{equation}
Translation invariance in the dual variables implies the symmetries
\begin{align}
P_{\alpha\dot{\alpha}}&=\sum_i \frac{\partial}{\partial x_{i}^{\dot{\alpha}\alpha}}\,,& M&=\sum_i \frac{\partial}{\partial n_{i}}\,,
\end{align}
and
\begin{align}
Q_{\a\,a}&=\sum_i \frac{\partial}{\partial \theta_{i}^{\a\,a}}\,,& \tilde{Q}_{\dot{\alpha}\,a}&=\sum_i \frac{\partial}{\partial \tilde\theta_{i}^{\dot{\alpha}\,a}}\,.
\end{align}
The Lorentz generators $L_{\a\b}$, $\bar L_{\dot{\alpha}\dot{\beta}}$, $W_{\a\dot{\alpha}}$ are simply given by the action of the on-shell Lorentz generators $l_{\a\b}$, $\bar l_{\dot{\alpha}\dot{\beta}}$, $w_{\a\dot{\alpha}}$ in dual superspace
\begin{align}
L_{\a\b}&=\sum_i\left(x_{i(\alpha}^{\dot{\alpha}}\partial_{i\beta)\dot{\alpha}}+\theta_{i(\alpha}^a\frac{\partial}{\partial\theta_i^{\beta)a}}\right)\,,&\bar{L}_{\dot{\alpha}\dot{\beta}}&=\sum_i\left(x_{i(\dot{\alpha}}^{\a}\partial_{i\dot{\beta})\a}+\tilde\theta_{i(\dot{\alpha}}^a\frac{\partial}{\partial\tilde\theta_i^{\dot{\beta})a}}\right)\,,
\end{align}
and
\begin{equation}
W_{\a\dot{\alpha}}=\sum_i\left(x_{i\,\a\dot{\alpha}}\frac{\partial}{\partial n_i}+2\,n_i\frac{\partial}{\partial x_{i}^{\dot{\alpha}\alpha}}+\tilde{\theta}^a_{i\,\dot{\alpha}}\frac{\partial}{\partial \theta_i^{\a a}}-\theta^a_{i\,\a}\frac{\partial}{\partial \tilde\theta_i^{\dot{\alpha} a}}\right)
\end{equation}
making the relation of $W_{\a\dot{\alpha}}$ to the Lorentz rotations $l_{\mu 5}$ more obvious than in on-shell superspace. The dual dilatation is given by
\begin{equation}
D=-\tfrac{1}{2}\sum_i\bigl[2x_i^{\dot{\alpha}\a}\partial_{i\,\a\dot{\alpha}}+2n_i\frac{\partial}{\partial n_i}+{\theta}^{\a\,a}_{i}\frac{\partial}{\partial \theta_i^{\a a}}+\tilde{\theta}^{\dot{\alpha}\,a}_{i}\frac{\partial}{\partial \tilde\theta_i^{\dot{\alpha} a}}\bigr]
\end{equation}
and acts covariantly on the amplitude
\begin{equation}
D f_n=n\,f_n\,.
\end{equation}
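To illustrate this covariance on a simple, purely illustrative example: any Grassmann-independent function that is homogeneous of degree $-4$ in the dual variables $(x_i,n_i)$, for instance $f=\bigl[(x_{13}^2-n_{13}^2)(x_{24}^2-n_{24}^2)\bigr]^{-1}$ with $n_{ij}=n_i-n_j$, satisfies the relation with $n=4$,
\begin{equation}
D\,f=-\tfrac{1}{2}\sum_i\bigl[2x_i^{\dot{\alpha}\a}\partial_{i\,\a\dot{\alpha}}+2n_i\tfrac{\partial}{\partial n_i}\bigr]f=-\tfrac{1}{2}\cdot 2\cdot(-4)\,f=4\,f\,,
\end{equation}
since the operator in brackets simply measures twice the homogeneity degree.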
From the six-dimensional superamplitude we know that the massive tree amplitudes are covariant under dual conformal inversion
\begin{equation}\label{eq:inversionMassive}
I[f_n]=\left(\prod_i (x_i^2-n_i^2)\right) f_n\,,
\end{equation}
and we only need to find the representation of the dual conformal boost generator in the dual variables \cref{eq:dualMassive}. We emphasize that in order to obtain the correct expression for the $\mu=0,1,2,3$ components of the dual conformal boost generator we cannot simply plug the four-dimensional variables into the expression for $K^{AB}$ given in \cref{eq:K6d}, since this leads to the wrong result. The four-dimensional spinor variables solve the constraint \eqref{eq:6dspinorConstraint} on the six-dimensional spinors and thus spoil the assumed independence of chiral and anti-chiral spinors $\frac{\partial \tilde\lambda_A}{\partial \lambda^B}=0$ in the six-dimensional representation of the dual conformal boost generator $K^{AB}$.
Since there is no obstacle in translating the inversion rules of the six-dimensional dual momenta \eqref{eq:inversion6d_first}, one possibility to obtain the action of the dual conformal boost generator $K_{\a\dot{\beta}}=IP_{\b\dot{\alpha}}I$ in the full superspace is to start with the inversion rules for the bosonic dual variables
\begin{align}
I[x_{\a\dot{\beta}}]&=-\frac{x_{\b\dot{\alpha}}}{x^2-n^2}\,,& I[n]&=\frac{n}{x^2-n^2}\,,
\end{align}
and extend the corresponding part of the dual conformal boost generator $K_{\a\dot{\alpha}}$ acting only on the bosonic dual variables
\begin{equation}\label{eq:Kx}
K_{\alpha\dot\alpha}\bigr\rvert_{x,n}=\sum_{i}\left( x_{i\, \alpha\dot\gamma}\,x_{i\, \dot\alpha\gamma}\,\frac{\partial}{\partial
x_{i\, \gamma\dot\gamma}} + x_{i\, \alpha\dot\alpha}\, n_{i}\frac{\partial}{\partial n_{i}} + n_{i}^2\,
\, \frac{\partial}
{\partial x_i^{\dot\alpha\alpha}}\right)
\end{equation}
such that it commutes with the constraints \eqref{eq:constraint1} to \eqref{eq:constraint7}. Note that the additional relative minus sign between the inversion rules for $x_{\a\dot{\beta}}$ and $n$ originates from the six-dimensional mostly minus metric $\eta_{55}=-1$.
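As a quick consistency check, these rules square to the identity, as they must for an inversion: since $I$ acts multiplicatively and the rules above give $I[x^2-n^2]=(x^2-n^2)^{-1}$, one finds
\begin{align}
I^2[n]&=I\left[\frac{n}{x^2-n^2}\right]=\frac{n}{x^2-n^2}\,(x^2-n^2)=n\,,&I^2[x_{\a\dot{\beta}}]&=x_{\a\dot{\beta}}\,.
\end{align}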
Requiring that the dual conformal generator $K_{\alpha\dot\alpha}\bigr\rvert_{x,n}$ commutes with the bosonic constraints \eqref{eq:constraint1} to \eqref{eq:constraint3} leads to
\begin{align}\label{eq:Kboson}
K_{\alpha\dot\alpha}^{\text{boson}}&=K_{\alpha\dot\alpha}\bigr\rvert_{x,n}+\tfrac{1}{2}\sum_i\biggl[(x_i+x_{i+1})^\beta_{\phantom{\beta}\dot\alpha}\;l_{i\,\alpha\beta}+(x_i+x_{i+1})^{\phantom{\alpha}\dot\beta}_\alpha\; \bar l_{i\,\dot\alpha\dot\beta} \notag\\
&\=\phantom{K_{\alpha\dot\alpha}^{\text{$x$ space}}+\sum_i\biggl[}+(x_i+x_{i+1})_{\alpha\dot\alpha}\,(d_i-1)+(n_i+n_{i+1})\,w_{i\,\alpha\dot\alpha}\biggr]
\end{align}
Since $K_{\alpha\dot\alpha}^{\text{boson}}$ has a non-vanishing commutator with the right hand side of the fermionic constraints \eqref{eq:constraint4} and \eqref{eq:constraint7}, we have to introduce the following fermionic terms:
\begin{align}\label{eq:Kfermion}
K_{\alpha\dot\alpha}^{\text{fermion}}&=\sum_i\biggl[\,\theta^a_{i\,\alpha}\,x_{i\,\beta\dot\alpha}\,\frac{\partial}{\partial\theta^a_{i\,\beta}}+\,\tilde\theta^a_{i\,\dot\alpha}\,x_{i\,\alpha\dot\beta}\,\frac{\partial}{\partial\tilde\theta^a_{i\,\dot\beta}}+\,n_i\,\tilde\theta^a_{i\,\dot\alpha}\,\frac{\partial}{\partial\theta_{i}^{\alpha\,a}}-\, n_i\,\theta^a_{i\,\alpha}\,\frac{\partial}{\partial\tilde\theta_{i}^{\dot\alpha\,a}}\notag\\
&\=\phantom{\sum_i\biggl[}+\tfrac{1}{2}(\theta_{i}+\theta_{i+1})_\alpha^a\,\bar q_{i\,\dot\alpha\,a}+\tfrac{1}{2}(\tilde\theta_{i}+\tilde\theta_{i+1})_{\dot\alpha}^a\,\bar{\tilde{q}}_{i\,\alpha\,a}\biggr]
\end{align}
Their sum $K_{\alpha\dot\alpha}=K_{\alpha\dot\alpha}^{\text{boson}}+K_{\alpha\dot\alpha}^{\text{fermion}}$ commutes with all constraints. The part of $K_{\alpha\dot\alpha}$ acting on the on-shell variables $\{\lambda_{i}, \tilde\lambda_{i}, \mu_{i},\tilde\mu_{i};
\zeta_{i},\bar\zeta_{i}\}$ is given by
\begin{align}\label{eq:K_onshell}
\hspace{-.3cm}K_{\alpha\dot\alpha}\bigr\rvert_{\text{on-shell}}&=\tfrac{1}{2}\sum_i\Bigl[ (x_i+x_{i+1})^\beta_{\phantom{\beta}\dot\alpha}\;l_{i\,\alpha\beta}+(x_i+x_{i+1})^{\phantom{\alpha}\dot\beta}_\alpha\; \bar l_{i\,\dot\alpha\dot\beta}+(n_i+n_{i+1})\,w_{i\,\alpha\dot\alpha}\hspace{.6cm}\\*
&\=\phantom{\tfrac{1}{2}\sum_i}{}+(x_i+x_{i+1})_{\alpha\dot\alpha}\,(d_i-1)+(\theta_{i}+\theta_{i+1})_\alpha^a\,\bar q_{i\,\dot\alpha\,a}+(\tilde\theta_{i}+\tilde\theta_{i+1})_{\dot\alpha}^a\,\bar{\tilde{q}}_{i\,\alpha\,a}\Bigr]\,.\notag
\end{align}
The representation of $K_5=IMI$ in four-dimensional variables may be obtained in a similar way or by Lorentz rotation $[W_{\a\dot{\alpha}},K_{\b\dot{\beta}}]=\epsilon_{\a\b}\epsilon_{\dot{\alpha}\dot{\beta}}K_{5}$ of $K_{\alpha\dot\alpha}$. The representations of $K_{\alpha\dot\alpha}$ and $K_5$ in dual superspace are
\begin{align}
K_{\a\dot{\alpha}}&=\sum_{i}\begin{aligned}[t]\biggl[& x_{i\, \alpha\dot\gamma}\,x_{i\, \dot\alpha\gamma}\,\frac{\partial}{\partial
x_{i\, \gamma\dot\gamma}} + x_{i\, \alpha\dot\alpha}\, n_{i}\frac{\partial}{\partial n_{i}}
+ n_{i}^2\,
\, \frac{\partial}
{\partial x_i^{\dot\alpha\alpha}}\label{eq:K}\\
&+\theta^a_{i\,\alpha}\,x_{i\,\beta\dot\alpha}\,\frac{\partial}{\partial\theta^a_{i\,\beta}}+\,\tilde\theta^a_{i\,\dot\alpha}\,x_{i\,\alpha\dot\beta}\,\frac{\partial}{\partial\tilde\theta^a_{i\,\dot\beta}}+\,n_i\,\tilde\theta^a_{i\,\dot\alpha}\,\frac{\partial}{\partial\theta_{i}^{\alpha\,a}}-\, n_i\,\theta^a_{i\,\alpha}\,\frac{\partial}{\partial\tilde\theta_{i}^{\dot\alpha\,a}}\biggr]\,,\end{aligned}\\
K_5&=\sum_{i}\begin{aligned}[t]\biggl[& n_i^2\,\frac{\partial}{\partial
n_{i}} + 2\,n_{i}\,x_i^{\dot\alpha\alpha} \frac{\partial}
{\partial x_i^{\dot\alpha\alpha}}+x_i^{2}\,\frac{\partial}{\partial n_i}
\\&+\theta^{\a \,a}_{i}\,x_{i\,\a\dot{\beta}}\,\frac{\partial}{\partial\tilde\theta^a_{i\,\dot{\beta}}}+\,\tilde\theta^a_{i\,\dot\alpha}\,x_{i}^{\dot{\alpha}\beta}\,\frac{\partial}{\partial\theta^{\b a}_{i}}+\,n_i\,\theta^{\a\,a}_{i}\,\frac{\partial}{\partial\theta_{i}^{\alpha\,a}}+\, n_i\,\tilde\theta^{a\,\dot{\alpha}}_{i}\,\frac{\partial}{\partial\tilde\theta_{i}^{\dot\alpha\,a}}\biggr]\label{eq:K5}\,.\end{aligned}
\end{align}
and the action of $K_5$ on the on-shell variables is given by
\begin{multline}\label{eq:K5onshell}
K_5\bigr\rvert_{\text{on-shell}}=\tfrac{1}{2}\sum_i\biggl[w_{i\,\a\dot{\alpha}}(x_i+x_{i+1})^{\dot{\alpha}\a}+2(d_i-1)(n_i+n_{i+1})\\-(\tilde{\theta}_i-\tilde{\theta}_{i+1})^{\dot{\alpha}\,a}\bar{q}_{i\,\dot{\alpha}\,a}+({\theta}_i-{\theta}_{i+1})^{\a\,a}\bar{\tilde q}_{i\,\a\,a}\biggr]
\end{multline}
The dual superconformal generators
\begin{align}
\bar{S}_{\dot{\alpha}\,a}&=\sum_i x_{i\,\a\dot{\alpha}}\frac{\partial}{\partial \theta_{i\,\a}^a}-n_i\frac{\partial}{\partial \tilde\theta_{i}^{\dot{\alpha}\,a}}\,,&\bar{\tilde S}_{\a\,a}&=\sum_i x_{i\,\a\dot{\alpha}}\frac{\partial}{\partial \tilde\theta_{i\,\dot{\alpha}}^a}+n_i\frac{\partial}{\partial \theta_{i}^{\a\,a}}
\end{align}
can be obtained from the commutators of $K_{\a\dot{\alpha}}$ with the dual supermomenta $Q_a^{\b}$ and $\tilde{Q}_a^{\dot{\beta}}$. In full superspace they coincide with the supersymmetry generators $\bar{q}_{\dot{\alpha}\,a}$, $\bar{\tilde q}_{\a\,a}$
\begin{align}
\bar{S}_{\dot{\alpha}\,a}&=\bar{q}_{\dot{\alpha}\,a}\,,& \bar{\tilde{S}}_{\a\,a}&=\bar{\tilde{q}}_{\a\,a}\,,
\end{align}
similar to the massless case. The dual conformal algebra reads
\begin{equation}\label{eq:algebraDualMassive}
\begin{gathered}
\begin{aligned}[t]
[M,K_{\a\dot{\alpha}}]&=W_{\a\dot{\alpha}}&[M,K_5]&=-2\,D \\
[W_{\a\dot{\alpha}},K_{\b\dot{\beta}}]&=\epsilon_{\a\b}\epsilon_{\dot{\alpha}\dot{\beta}}K_{5}&[W_{\a\dot{\alpha}},K_5]&=2K_{\a\dot{\alpha}}\\
[K_{\a\dot{\alpha}},Q_a^{\b}]&=\delta_{\a}^{\b}\bar{S}_{\dot{\alpha}\,a}&[K_{\a\dot{\alpha}},\tilde{Q}_a^{\dot{\beta}}]&=\delta_{\dot{\alpha}}^{\dot{\beta}}\bar{\tilde{S}}_{\a\,a}\\
[K_{5},Q_a^{\a}]&=-\bar{\tilde{S}}_{\a\,a}&[K_{5},\tilde{Q}_a^{\dot{\alpha}}]&=\bar{S}_{\dot{\alpha}\,a}\\
\end{aligned}\\
[K_{\a\dot{\alpha}},P^{\dot{\beta}\b}]=\delta_{\a}^{\b}\delta_{\dot{\alpha}}^{\dot{\beta}}D+\delta_{\a}^{\b}\bar{L}_{\dot{\alpha}}^{\;\;\dot{\beta}}+\delta_{\dot{\alpha}}^{\dot{\beta}}{L}_{\a}^{\;\;\b}
\end{gathered}
\end{equation}
along with the generic $[D,J]=\dim(J)J$ for all generators $J$.
We omitted all commutators that are either vanishing or equal to the corresponding commutators in the on-shell algebra \cref{eq:OnShellAlgebra}.
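As an example of such a check, the entry $[M,K_5]=-2D$ follows directly from the explicit representations given above: only the terms of $K_5$ in \eqref{eq:K5} carrying explicit factors of $n_i$ contribute to the commutator with $M=\sum_i\partial/\partial n_i$, so that
\begin{equation}
[M,K_5]=\sum_i\Bigl[2n_i\frac{\partial}{\partial n_i}+2x_i^{\dot{\alpha}\a}\partial_{i\,\a\dot{\alpha}}+{\theta}^{\a\,a}_{i}\frac{\partial}{\partial \theta_i^{\a a}}+\tilde{\theta}^{\dot{\alpha}\,a}_{i}\frac{\partial}{\partial \tilde\theta_i^{\dot{\alpha} a}}\Bigr]=-2\,D\,.
\end{equation}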
The action of the $R$-symmetry charges $r_a$ in dual superspace is given by
\begin{align}
R_1&=\sum_i \left(\theta_{i\a}^1\frac{\partial}{\partial \theta_i^{\a\,1}}+\tilde\theta_{i\dot{\alpha}}^1\frac{\partial}{\partial \tilde\theta_i^{\dot{\alpha}\,1}}\right) -n+4\,&R_2&=\sum_i \left(\theta_{i\a}^2\frac{\partial}{\partial \theta_i^{\a\,2}}+\tilde\theta_{i\dot{\alpha}}^2\frac{\partial}{\partial \tilde\theta_i^{\dot{\alpha}\,2}}\right) -n+4\,,
\end{align}
with the non-vanishing commutators
\begin{align}
[R_a,Q_{b}]&=-\delta_a^b Q_{b}\,,&[R_a,\tilde{Q}_{b}]&=-\delta_a^b \tilde{Q}_{b}\,,&[R_a,\bar{S}_{b}]&=-\delta_a^b \bar{S}_{b}\,,&[R_a,\bar{\tilde S}_{b}]&=-\delta_a^b \bar{\tilde S}_{b}\,.
\end{align}
Some further remarks are in order here. As we already mentioned, the generator $w_{\a\dot{\alpha}}$
arises from the Lorentz-generators $l^{\mu 5}$, just as $m$ is related to the momentum in the extra dimensional direction $p^{5}$.
As has been shown in \cite{Dennen:2010dh}, if the loop momentum is restricted to be four-dimensional, which is equivalent to the Higgs regularization described in \cite{Alday:2009zm}, the cut-constructible parts of the loop amplitudes invert as
\begin{equation}\label{eq:inversionLoop}
I\left[\int\left(\prod_i^Ld^4x_{l_i}\right)\mathcal{I}_n^L\right]=\left(\prod_i^nx_i^2\right) \int\left(\prod_i^Ld^4x_{l_i}\right)\mathcal{I}_n^L\,.
\end{equation}
Due to the four-dimensional loop momenta, the five-dimensional Lorentz invariance as well as the dual translation invariance in the $x^{5}$ direction are lost. Hence, $w_{\a\dot{\alpha}}$ is a manifest symmetry of the tree superamplitudes but not a symmetry of the Higgs-regularized loop amplitudes. Since the dual conformal boost generator is given by $K^\mu=IP_\mu I$, the inversion properties \eqref{eq:inversionLoop} only imply that $(K^\mu+2\sum_i x_i^\mu)$ is a symmetry of the regularized loop amplitudes for $\mu=0,1,2,3$, whereas the tree amplitudes enjoy the full five-dimensional dual conformal symmetry.
\subsubsection{Yangian symmetry}
The obvious question now arises: can one reinterpret the dual conformal operator in six dimensions
as a level-one Yangian generator in a four-dimensional massive theory? To answer this we proceed in close analogy to the work \cite{Drummond:2009fd}, where a Yangian symmetry
of tree superamplitudes was established for ${\mathcal N}=4$ SYM, as reviewed in \cref{section:Yangian}.
We continue by translating the expression for $K_{\alpha\dot\alpha}+\sum_i x_{i\,\alpha\dot\alpha}$ to four-dimensional on-shell variables. Inserting
\begin{align}
x_i^{\dot{\alpha}\a}&=x_1^{\dot{\alpha}\a}-\sum_{j=1}^{i-1}p_j^{\dot{\alpha}\a}&n_i&=n_1-\sum_{j=1}^{i-1}m_j\\
\theta^a_{i\,\a}&=\theta^a_{1\,\a}-\sum_{j=1}^{i-1}q^a_{j\,\a}&\tilde\theta^a_{i\,\dot{\alpha}}&=\tilde\theta^a_{1\,\dot{\alpha}}-\sum_{j=1}^{i-1}\tilde{q}^a_{j\,\dot{\alpha}}
\end{align}
into the part of the dual conformal boost generator acting on the on-shell variables \cref{eq:K_onshell}, one finds the non-local result
\begin{align}
K_{\alpha\dot\alpha}+\sum_i x_{i\,\alpha\dot\alpha}&=-\sum_{j<i}\biggl[p_{j\phantom{\beta}\dot\alpha}^{\phantom{j}\beta}l_{i\,\alpha\beta}+p_{j\,\alpha}^{\phantom{j\,\alpha}\dot\beta}\bar l_{i\,\dot\alpha\dot\beta}+p_{j\,\alpha\dot\alpha}d_i+m_j w_{i\,\alpha\dot\alpha}+q_{j\,\dot\alpha}^a\bar{\tilde{q}}_{i\,\alpha\, a}+q_{j\,\alpha}^a\bar q_{i\,\dot\alpha\, a}\biggr]\notag\\
&\=-\tfrac{1}{2}\sum_{i=1}^n\biggl[p_{i\phantom{\beta}\dot\alpha}^{\phantom{i}\beta}l_{i\,\alpha\beta}+p_{i\,\alpha}^{\phantom{i\,\alpha}\dot\beta}\bar l_{i\,\dot\alpha\dot\beta}+p_{i\,\alpha\dot\alpha}d_i+m_i w_{i\,\alpha\dot\alpha}+q_{i\,\dot\alpha}^a\bar{\tilde{q}}_{i\,\alpha\, a}+q_{i\,\alpha}^a\bar q_{i\,\dot\alpha\, a}\biggr]
\label{Kadaonshell}
\end{align}
Here we dropped the terms
\begin{align}\label{eq:drop}
+\,(x_1)_{\dot\alpha}^{\phantom{\dot\alpha}\beta}l_{\alpha\beta}+\,(x_1)_{\alpha}^{\phantom{\alpha}\dot\beta}\bar l_{\dot\alpha\dot\beta}+\,(x_{1})_{\alpha\dot\alpha}\,d+\, n_1\, w_{\a\dot\alpha}
+(\theta_{1})_\alpha^a\,\bar q_{\dot\alpha\,a}+(\tilde\theta_{1})_{\dot\alpha}^a\,\bar{\tilde{q}}_{\alpha\,a}+\tfrac{1}{2}p_{\a\dot{\alpha}}
\end{align}
which annihilate the tree amplitudes on their own, because they are each proportional to symmetry generators. Since the tree superamplitude is independent of $x_1$, $\theta_1$, $\tilde\theta_1$, $n_1$ and $K_{\alpha\dot\alpha}+\sum_i x_{i\,\alpha\dot\alpha}$ annihilates it, one could also apply the reverse logic and conclude from \eqref{eq:drop} that $d, l_{\alpha\beta}, \bar l_{\dot\alpha\dot\beta}, w_{\a\dot\alpha}, \bar q_{\dot\alpha\,a}, \bar{\tilde{q}}_{\alpha\,a}$ are symmetries of the tree amplitudes. The Higgs-regularized loop amplitudes explicitly depend on $n_1$ and are not invariant under
$w_{\a\dot\alpha}$. Consequently, the term $n_1\, w_{\a\dot\alpha}$ cannot be dropped at loop level.
Let us proceed by investigating the structure of the dual conformal boost generator in on-shell variables a bit further. Upon adding to $(K_{\a\dot{\alpha}} +\sum_{i}x_{i\,\a\dot{\alpha}})$ of \cref{Kadaonshell} the quantity
\begin{equation}
\Delta K_{\a\dot{\alpha}} = \,
\tfrac{1}{2}\biggr[p_{\dot\alpha}^{\beta}\, l_{\alpha\beta}+p_{\alpha}^{\dot\beta}
\,\bar l_{\dot\alpha\dot\beta}+p_{\alpha\dot\alpha}\,
d+ m\, w_{\alpha\dot\alpha}
+ q^{a}{}_{\dot{\alpha}}\, \bar{\tilde q}_{\a a} + q^{a}{}_{\a}\,
\bar q_{\dot{\alpha} a}
\biggr] \, ,
\end{equation}
which is a manifest symmetry of the superamplitudes, as the total momenta and supermomenta $\{p_{\a\dot{\alpha}},m,q^{a}_{\a},q^{a}_{\dot{\alpha}}\}$ multiplying the generators annihilate the amplitude, we find the bi-local representation of the level-one generator $p^{(1)}_{\a\dot{\alpha}}$,
\begin{align}
p^{(1)}_{\a\dot{\alpha}} &= K_{\a\dot{\alpha}}+ \Delta K_{\a\dot{\alpha}} +\sum_{i}x_{i\,\a\dot{\alpha}} \nonumber\\
&=- \tfrac{1}{2}\sum_{j<i}\, \Biggl [ p_{j}^{\b\dot{\beta}}\, (\epsilon_{\dot{\alpha}\dot{\beta}}\, l_{i\, \a\b}
+ \epsilon_{\a\b}\, \bar l_{i\, \dot{\alpha}\dot{\beta}} + \epsilon_{\a\b}\, \epsilon_{\dot{\alpha}\dot{\beta}}\, d_{i}\, )
+ m_{j}\, w_{i\, \a\dot{\alpha}}\nonumber\\ &\=\phantom{- \sum_{j<i}\, \Biggl [}
+ q_{j}^{a}{}_{\dot{\alpha}}\, \bar{\tilde q}_{i\, \a a} + q_{j}^{a}{}_{\a}\,
\bar q_{i\, \dot{\alpha} a} - (i\leftrightarrow j) \Biggr ]\,,
\end{align}
which indeed obeys a level-one Yangian-like relation, \cref{eq:commutatorsLevel1},
\begin{equation}
[\, w_{\a\dot{\alpha}}, p^{(1)}_{\b\dot{\beta}}\, ] = 2\, \epsilon_{\a\b}\, \epsilon_{\dot{\alpha}\dot{\beta}}\, m^{(1)}\, ,
\end{equation}
giving rise to the novel level-one generator
\begin{equation}
m^{(1)} = - \tfrac{1}{4}\sum_{j<i} \Biggl [ p_{j}^{\gamma\dot{\gamma}}\, w_{i\, \gamma\dot{\gamma}} + 2m_{j}\, d_{i}
+ q^{a\gamma}_{j}\, \bar{\tilde q}_{i\, \gamma a}+ q_{j}^{a}{}_{\dot{\gamma}}\, {\bar q}_{i}^{\dot{\gamma}}{}_{a}
- (i\leftrightarrow j)\Biggr]\, .
\end{equation}
One checks that it indeed obeys the commutation relation
\begin{equation}
[\, w_{\a\dot{\alpha}}, m^{(1)}\, ] = p^{(1)}_{\a\dot{\alpha}}\, .
\end{equation}
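Taken together, the last two commutators show that the pair $(p^{(1)}_{\a\dot{\alpha}},m^{(1)})$ rotates into itself under $w_{\a\dot{\alpha}}$, mirroring the behaviour of the level-zero pair $(p_{\a\dot{\alpha}},m)$ under the five-dimensional Lorentz rotations; for instance
\begin{equation}
[\, w_{\a\dot{\alpha}},[\, w_{\b\dot{\beta}}, m^{(1)}\,]\,]=[\, w_{\a\dot{\alpha}}, p^{(1)}_{\b\dot{\beta}}\,]=2\,\epsilon_{\a\b}\,\epsilon_{\dot{\alpha}\dot{\beta}}\,m^{(1)}\,.
\end{equation}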
We note that $m^{(1)}$ can also be obtained from the action of $K_5$ on the on-shell variables \eqref{eq:K5onshell} in the same way as $p^{(1)}$ has been obtained from the on-shell action \eqref{eq:K_onshell} of $K_{\a\dot{\alpha}}$.
A natural question to be addressed in future work is whether or not there exist level-one fermionic generators $q_{a\,\a}^{(1)}$, $q_{a\,\dot{\alpha}}^{(1)}$. However, already at
this point it is clear that the non-local symmetry generators found here will not lift to
the complete super-Poincar\'e algebra but rather stay confined to its super-translational
piece. In particular, there will be no level-one symmetry generator $w^{(1)}_{\a\dot{\alpha}}$.
%
\section{BCFW on-shell recursion relations for tree-level amplitudes}
\subsection{General remarks}
The BCFW on-shell recursion \cite{Britto:2004ap, Britto:2005fq} is a valuable tool for calculating color-ordered tree-level amplitudes in gauge theories, as it allows one to recursively construct an $n$-point amplitude from lower-point amplitudes. As a direct consequence, the knowledge of the three-point amplitudes and the BCFW recursion relation is sufficient to obtain all color-ordered tree amplitudes of a particular gauge theory. In what follows we briefly outline the general form of the BCFW recursion; for more details we refer to the excellent review \cite{Bern:2007dw}.
The basic idea is to analytically continue two external momenta by introducing light-like shifts proportional to a complex parameter $z$ that neither spoil the on-shell conditions of the two shifted momenta nor violate overall momentum conservation. If the shift vector $r$ has the properties
\begin{align} \label{eq:shiftvector}
r^2 &= 0\,, &r \cdot p_1 &= 0\,,& r \cdot p_n &= 0\,,
\end{align}
then the shift
\begin{align}\label{eq:shifts}
p_1\rightarrow p_{\hat{1}}(z) &= p_1 + z r\,,& p_n\rightarrow p_{\hat{n}}(z) &= p_n - z r\,,&A_n&\rightarrow \widehat{A}_n(z)\,,
\end{align}
has the desired properties
\begin{align}
p_{\hat{1}}^2 &= p_{\hat{n}}^2 = 0\,,& p_{\hat{1}} + p_{\hat{n}} &= p_1 + p_n\,.
\end{align}
Using region momenta instead, the shifts in \cref{eq:shifts} can be reproduced by the single shift
\begin{equation}
x_1\rightarrow x_{\hat{1}}=x_1+z\,r\,.
\end{equation}
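Indeed, assuming the usual identification $p_i=x_i-x_{i+1}$ of external and region momenta (indices understood mod $n$), this single dual shift reproduces both momentum shifts of \cref{eq:shifts},
\begin{align}
p_{\hat{1}}&=x_{\hat{1}}-x_2=p_1+z\,r\,,& p_{\hat{n}}&=x_n-x_{\hat{1}}=p_n-z\,r\,,
\end{align}
while all other momenta remain untouched.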
Color-ordered tree amplitudes have a simple analytic structure since they only have poles where sums of consecutive momenta go on-shell, i.\,e.~where $x_{i\,j}^2=0$. As a consequence, $\widehat{A}_n(z)$ is an analytic function with only the simple poles $z_j$ solving the on-shell condition
\begin{align}
x_{\hat{1}\,j+1}^2=(x_{1\,j+1}+z\,r )^2&=x_{1\,j+1}^2+2z \,r\cdot x_{1\,j+1}=0\,,
\end{align}
i.\,e. the poles are given by
\begin{equation}\label{eq:poles}
z_{j} = -\frac{x^2_{1\,j+1}}{2r\cdot x_{1\,j+1}}\,.
\end{equation}
If the analytically continued amplitude $\widehat{A}_n$ vanishes as $|z|\rightarrow \infty$, the contour integral of $\frac{\widehat{A}_n}{z}$ over a circle at infinity vanishes. By virtue of the residue theorem this allows one to relate the physical amplitude to the residues of $\frac{\widehat{A}_n}{z}$ at the poles $z_j$
\begin{equation}\label{eq:residues}
\frac{1}{2\pi i}\oint d z\frac{\widehat{A}_n}{z}=A_n+\sum_{j=2}^{n-2}\mathop{\mathrm{Res}}_{z=z_j}\frac{\widehat{A}_n}{z}=0\,.
\end{equation}
Due to the general factorization properties of tree amplitudes, these residues are given by products of lower-point on-shell amplitudes multiplied by the residue
\begin{equation}
-\mathop{\mathrm{Res}}_{z=z_j}\left(\frac{1}{z}\frac{i}{x_{\hat{1}\,j+1}^2}\right)=\frac{i}{x_{1\,j+1}^2}\,.
\end{equation}
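For completeness, this residue follows in one line from the linear $z$-dependence of the shifted propagator; using \cref{eq:poles},
\begin{equation}
\frac{i}{x_{\hat{1}\,j+1}^2}=\frac{i}{2\,r\cdot x_{1\,j+1}\,(z-z_j)}\,,\qquad
-\mathop{\mathrm{Res}}_{z=z_j}\left(\frac{1}{z}\,\frac{i}{x_{\hat{1}\,j+1}^2}\right)=-\frac{i}{2\,r\cdot x_{1\,j+1}\,z_j}=\frac{i}{x_{1\,j+1}^2}\,.
\end{equation}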
Introducing the abbreviations
\begin{align}\label{eq:Def_Pj}
\hat{P}_j&=x_{\hat{1}\,j+1}\,,&P_j&=x_{1\,j+1}\,,
\end{align}
the final form of the BCFW on-shell recursion is
\begin{equation} \label{eq:BCFW}
A_n= \sum_{j = 2}^{n-2} \sum_{h} A_{j+1}(p_{\hat{1}}, p_2,\dots,p_j, -\hat{P}^{(-h)}_{j}) \frac{i}{P^2_{j}} A_{n-j+1}(\hat{P}^{(h)}_{j}, p_{j+1},\dots,p_{\hat{n}})\Biggr\rvert_{z=z_j}
\end{equation}
where the sum runs over all poles $z_j$ and over all helicities of the intermediate states. Note that we assumed the vanishing of $\widehat{A}_n(z)$ for large $z$ to derive the recursion relation, which is not a general feature of all gauge theories and all possible shifts. For details we refer to \cite{Cheung:2008dn} and \cite{ArkaniHamed:2008yf}.
In the following sections we will derive supersymmetric versions of the BCFW recursion \cref{eq:BCFW} for the four-dimensional ${\cal N}=4$ SYM theory and the six-dimensional ${\cal N}=(1,1)$ SYM theory.
\subsection{Supersymmetric BCFW for $\mathcal{N}=4$ SYM in non-chiral superspace}\label{section:BCFWnonChiral}
As it has not been done in the literature before, we are going to present the BCFW recursion in the non-chiral superspace $\{\lambda_i^\alpha,\tilde\lambda_i^{\dot\alpha},\eta_i^m,\tilde\eta_i^{m'}\}$ introduced in \cref{section:superamps4d}. Additionally, we will use the BCFW recursion to prove the postulated covariance, eq.~\eqref{eq:Inversion_Amp_NC}, of the non-chiral superamplitudes under the dual conformal inversions \eqref{eq:inversion4dNC}, as well as to calculate the four-, five- and six-point superamplitudes.
Based on the previous section it is straightforward to write down a set of shifts preserving both bosonic and fermionic momentum conservation
\begin{align}
\lambda_1\rightarrow \lambda_{\hat{1}}(z) &= \lambda_1 + z \lambda_n \,,& \tilde{\lambda}_n\rightarrow\tilde{\lambda}_{\hat{n}}(z) &= \tilde{\lambda}_n - z \tilde{\lambda}_1\,,\\
\eta_n\rightarrow\eta_{\hat{n}}(z) &= \eta_n - z \eta_1\,,&\tilde{\eta}_{1}\rightarrow \tilde{\eta}_{\hat{1}}(z) &= \tilde{\eta}_{1} + z \tilde{\eta}_{n}\,,
\end{align}
leading to the poles, \cref{eq:poles},
\begin{equation}
z_{j} = - \frac{x_{1\, j+1}^2}{\langle n| x_{1\, j+1}| 1]}
\end{equation}
of the shifted superamplitude. The corresponding dual shifts are
\begin{align}
x_{\hat{1}}^{\dot\alpha\alpha}&=x_1 ^{\dot\alpha\alpha}+z\, \tilde\lambda^{\dot\alpha}_1\lambda^\alpha_n\,,&
\tilde{\theta}_{\hat{1}}^{\dot{\alpha}\, m'} &= \tilde{\theta}_{1}^{\dot{\alpha}\,m'} + z \tilde{\lambda}^{\dot{\alpha}}_1 \tilde{\eta}_{n}^{m'}\,,&
\theta_{\hat{1}\,m}^{\alpha}&=\theta_{1\,m}^{\alpha}+ z\, \lambda^{\alpha}_n \eta_{1\,m}\,.
\end{align}
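For completeness we note that these shifts correspond to the shift vector $r^{\dot{\alpha}\alpha}=\tilde\lambda_1^{\dot{\alpha}}\lambda_n^{\alpha}$, as can be read off from the dual shift of $x_{\hat{1}}$ above. Assuming the contraction convention $2\,r\cdot x=r^{\dot{\alpha}\alpha}x_{\alpha\dot{\alpha}}$, the denominator of the poles is then recovered from the general formula \cref{eq:poles},
\begin{equation}
2\,r\cdot x_{1\,j+1}=\lambda_n^{\alpha}\,x_{1\,j+1\,\alpha\dot{\alpha}}\,\tilde\lambda_1^{\dot{\alpha}}=\langle n|x_{1\,j+1}|1]\,.
\end{equation}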
According to the same arguments as in chiral and anti-chiral superspace, the BCFW recursion in non-chiral superspace is given by
\begin{equation}\label{eq:BCFW_non-chiral}
\mathcal{A}_n = \sum_{j = 2}^{n-2} \int d^2 \eta_{\hat{P}_j}\int d^2 \tilde\eta_{\hat{P}_j} \mathcal{A}_{j+1}(p_{\hat{1}},\dots,p_j,-\hat{P}_j)\frac{-i}{P^2_{j}}\mathcal{A}_{n-j+1}(\hat{P}_j, p_{j+1},\dots,p_{\hat{n}})\Biggr\rvert_{z=z_j}\,,
\end{equation}
with the explicit minus sign originating from the definition of the Grassmann integration measures $d^2\eta=\tfrac{1}{2}d\eta^m d\eta_{m}$, $d^2 \tilde\eta=\tfrac{1}{2}d\tilde\eta_{m'}d \tilde\eta^{m'}$.
The starting point for this recursion can be obtained by a half Fourier transform of the MHV and $\overline{\text{MHV}}$ three-point amplitudes in chiral or anti-chiral superspace, yielding
\begin{align}
\mathcal{A}_3^{\text{MHV}} &=-i \frac{\delta^4(p) \delta^4(q) \delta^2(\tilde{\eta}_{1} \langle2 3\rangle + \tilde{\eta}_{2} \langle3 1\rangle + \tilde{\eta}_{3} \langle1 2\rangle)}{\langle1 2\rangle \langle2 3\rangle \langle3 1\rangle}\,,\\
\mathcal{A}_3^{\overline{\text{MHV}}} &=i \frac{\delta^4(p)\delta^4(\tilde q) \delta^2(\eta_1 [2 3] + \eta_2 [3 1] + \eta_3 [1 2] ) }{[1 2] [2 3] [3 1]}\,,
\end{align}
where the two-dimensional delta functions of objects $\chi_m$, $\tilde\chi_{m'}$ carrying Grassmann indices are defined as $\delta^2(\chi_m)=\tfrac{1}{2}\chi_m\chi^m$, $\delta^2(\tilde\chi_{m'})=\tfrac{1}{2}\tilde\chi^{m'}\tilde\chi_{m'}$ such that $\int d^2 \eta\, \delta^2(\eta)=\int d^2 \tilde\eta \,\delta^2(\tilde\eta)=1$.
We recall from eq.~(\ref{325}) that the superamplitudes with $n>3$ partons in non-chiral superspace have the form
\begin{equation}
\mathcal{A}_n=\delta^4(q)\delta^4(\tilde q)f_n(\{x_{ij},\theta_{ij},\tilde\theta_{ij}\})
\end{equation}
i.\,e.~the only $\eta_{\hat{P}_j}$, $\tilde\eta_{\hat{P}_j}$ dependence of the integrand in the BCFW recursion \cref{eq:BCFW_non-chiral} originates from the delta functions of the three-point amplitudes and the delta functions of the fermionic momenta, making the Grassmann integrations straightforward. For the four-point amplitude we obtain
\begin{equation}\label{eq:A4nonchiral}
\mathcal{A}_4=-i\delta^4(q)\delta^4(\tilde q)\frac{1}{x_{13}^2x_{24}^2}\,,
\end{equation}
in agreement with \cite{Huang:2011um}.
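As a first sanity check of the covariance to be proven below, assuming the standard inversion weight $I[x_{ij}^2]=x_{ij}^2/(x_i^2\,x_j^2)$ implied by the rules \eqref{eq:inversion4dNC}, the four-point function inverts with unit weight on every dual point,
\begin{equation}
I\left[\frac{1}{x_{13}^2\,x_{24}^2}\right]=\frac{x_1^2\,x_2^2\,x_3^2\,x_4^2}{x_{13}^2\,x_{24}^2}=\left(\prod_{i=1}^{4}x_i^2\right)\frac{1}{x_{13}^2\,x_{24}^2}\,,
\end{equation}
in accordance with the covariance $I[f_n]=\bigl(\prod_i x_i^2\bigr)f_n$ established below.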
Introducing the definitions
\begin{align}
|B_{ijk}\rangle&=x_{ij}x_{jk}|\theta_{ki}\rangle+x_{ik}x_{kj}|\theta_{ji}\rangle\,,& |\tilde B_{ijk}]&=x_{ij}x_{jk}|\tilde\theta_{ki}] +x_{ik}x_{kj}|\tilde\theta_{ji}]
\end{align}
we present the results of the Grassmann integrations in \cref{eq:BCFW_non-chiral} for the three different cases $j=2$, $2<j<n-2$ and $j=n-2$. In the case $j=2$ the left superamplitude has to be $\mathcal{A}_3^{\overline{\text{MHV}}}$ since $\mathcal{A}_3^{\text{MHV}}$
does not exist for the three point kinematics of this case. We obtain
\begin{equation}\label{eq:B1}
\begin{aligned}
\mathcal{B}_2&=\int d^2 \eta_{\hat{P}_2}\int d^2 \tilde\eta_{\hat{P}_2} \mathcal{A}_3^{\overline{\text{MHV}}}(p_{\hat{1}},p_2,-\hat{P}_2)\frac{i}{P^2_{2}}\mathcal{A}_{n-1}(\hat{P}_2, p_{3},\dots,p_{\hat{n}})\Biggr\rvert_{z=z_2}\\
&=\frac{\delta^4(q)\delta^4(\tilde{q})[1\,2]\delta^2([P_2|\tilde{\theta}_{\hat{1}3}])f_{n-1}(x_{\hat{1}},x_3,\dots,x_n)}{x_{2n}^2 x_{13}^2[1\,P_2] [2\,P_2]}\biggr\rvert_{z=z_2}\,.
\end{aligned}
\end{equation}
For practical applications it is convenient to rewrite $\mathcal{B}_2$ as
\begin{equation}
\mathcal{B}_2=\frac{\delta^4(q)\delta^4(\tilde{q})\delta^2([1|\tilde{B}_{13n}])f_{n-1}(x_{\hat{1}},x_3,\dots,x_n)\bigr\rvert_{z=z_2}}{x_{2n}^2 x_{13}^2[1|x_{13}|n\rangle [1\,n]}\,.
\end{equation}
For $2<j<n-2$ we have
\begin{multline}\label{eq:B2}
\mathcal{B}_j= \int d^2 \eta_{\hat{P}_j}\int d^2 \tilde\eta_{\hat{P}_j} \mathcal{A}_{j+1}(p_{\hat{1}},\dots,p_j,-\hat{P}_j)\frac{i}{P^2_{j}}\mathcal{A}_{n-j+1}(\hat{P}_j, p_{j+1},\dots,p_{\hat{n}})\Biggr\rvert_{z=z_j}=\\
\frac{i\,\delta^4(q)\delta^4(\tilde{q})\delta^2([P_j|\tilde{\theta}_{\hat{1}j+1}])\delta^2(\langle P_j|\theta_{\hat{1} j+1}\rangle)f_{j+1}(x_{\hat{1}},\dots,x_{j+1})f_{n-j+1}(x_{\hat{1}},x_{j+1},\dots,x_n)\bigr\rvert_{z=z_j}}{x_{1j+1}^2}\,.
\end{multline}
For practical applications it is more convenient to use the following expression for $\mathcal{B}_j$
\begin{equation}
\frac{i\,\delta^4(q)\delta^4(\tilde{q})\delta^2([1|\tilde{B}_{1j+1n}])\delta^2(\langle 2|B_{21j+1}\rangle)f_{j+1}(x_{\hat{1}},\dots,x_{j+1})f_{n-j+1}(x_{\hat{1}},x_{j+1},\dots,x_n)\bigr\rvert_{z=z_j}}{x_{1j+1}^2\langle n|x_{1j+1}|1]^2[n\,1]^2\ang{1}{2}^2}\,.\!\!
\end{equation}
In the case $j=n-2$ the right superamplitude has to be $\mathcal{A}_3^{\text{MHV}}$ due to the special three-point kinematics, and the integration gives
\begin{equation}\label{eq:B3}
\begin{aligned}
\mathcal{B}_{n-2} &=\int\!\! d^2 \eta_{\hat{P}_{n-2}}\int\!\! d^2 \tilde\eta_{\hat{P}_{n-2}} \mathcal{A}_{n-1}(p_{\hat{1}},\dots,p_{n-2},-\hat{P}_{n-2})\frac{i}{P^2_{n-2}}\mathcal{A}^{\text{MHV}}_{3}(\hat{P}_{n-2}, p_{n-1},p_{\hat{n}})\Biggr\rvert_{z=z_{n-2}}\\
&=\frac{\delta^4(q)\delta^4(\tilde{q})\ang{n}{n-1}\delta^2(\langle P_{n-2}|\theta_{\hat{1}n-1}\rangle)f_{n-1}(x_{\hat{1}},x_2,\dots,x_{n-1})\bigr\rvert_{z=z_{n-2}}}{x_{1n-1}^2\ang{P_{n-2}}{n-1}\ang{P_{n-2}}{n}}\,,
\end{aligned}
\end{equation}
which may be rewritten as
\begin{equation}
\mathcal{B}_{n-2} =\frac{\delta^4(q)\delta^4(\tilde{q})\delta^2(\langle 2|B_{21n-1}\rangle)f_{n-1}(x_{\hat{1}},x_2,\dots,x_{n-1})\bigr\rvert_{z=z_{n-2}}}{x_{1n-1}^2\langle n|x_{1n-1}|1]\ang{1}{2}^2[1\,n]}\,.
\end{equation}
Now the integrated non-chiral BCFW recursion relation reads
\begin{equation}\label{eq:BCFWnonChiralIntegrated}
\mathcal{A}_n=\sum_{j=2}^{n-2}\mathcal{B}_{j}\,.
\end{equation}
In this form it is straightforward to prove the dual conformal symmetry of the non-chiral superamplitudes. Applying the inversion rules \cref{eq:inversion4dNC}, we find
\begin{equation}
\begin{aligned}
I\left(\,[j\,P_j]\,\right)&=\frac{[j\,P_j]}{x_{j+1}^2}\,,&I\left(\,[1\,P_j]\,\right)&=\frac{x_{\hat{1}}^2}{x_2^2x_{j+1}^2}[1\,P_j]\,,\\
I\left(\,\langle j+1\,P_j\rangle\,\right)&=\frac{\langle j+1\,P_j\rangle}{x_{\hat{1}}^2}\,,&I\left(\,\langle n\,P_j\rangle\,\right)&=\frac{\langle n\,P_j\rangle}{x_n^2}\,,\\
I\left(\,[P_{j}|\tilde{\theta}_{\hat{1}j+1}]\,\right)&=\frac{[P_{j}|\tilde{\theta}_{\hat{1}j+1}]}{x_{j+1}^2}\,,&I\left(\,\langle P_{j}|\theta_{\hat{1}j+1}\rangle\,\right)&=\frac{\langle P_{j}|\theta_{\hat{1}j+1}\rangle}{x_{\hat{1}}^2}\,.\end{aligned}
\end{equation}
Hence, it follows from \cref{eq:B1,eq:B2,eq:B3} together with the inductive assumption
on $I[f_{k<n}]$ of eq.~\eqref{eq:inversionA6d} that
\begin{equation}
I[\mathcal{B}_j]=\left(\prod_i x_i^2\right)\mathcal{B}_j
\end{equation}
which proves the covariance \eqref{eq:Inversion_Amp_NC} of the non-chiral superamplitude under the dual conformal inversions \eqref{eq:inversion4dNC}.
In order to obtain useful representations of the non-chiral superamplitudes from the integrated BCFW recursion \cref{eq:BCFWnonChiralIntegrated} it remains to remove the hats from the shifted dual point $\hat{1}$ by using identities like e.\,g.
\begin{equation}
x_{\hat{1}k\,}^2\bigr\rvert_{z=z_j}=-\frac{\langle n|x_{nj+1}x_{j+1k}x_{k1}|1]}{\langle n| x_{1\, j+1}| 1]}\,,
\end{equation}
or
\begin{equation}
\langle B_{kk+1\hat{1}}|\,\bigr\rvert_{z=z_j}=\frac{1}{\langle 1|x_{1k}|k]\langle n|x_{nj+1}|1]}\Bigl(\begin{aligned}[t]
&\langle n|x_{nj+1}x_{j+12}x_{2k}|k]\langle B_{kk+11}|\\
& \qquad\qquad+x_{1j+1}^2\langle n|x_{nk}|k]\langle B_{kk+12}|\Bigr)\,.
\end{aligned}
\end{equation}
After removing all hats the obtained expression may still contain spinors. However, these spinors can be removed by multiplying and dividing by the chiral conjugate spinor brackets. The final expression will only depend on $\{x_i,\,\theta_i,\,\tilde\theta_i\}$ and, besides $x_{ij}^2$, it can be expressed through the dual conformal covariant objects
\begin{equation}\label{eq:defBB}\begin{gathered}
\mathop{\mathrm{Tr}}(i_1 \dots i_{2k}):=\mathop{\mathrm{Tr}}(x_{i_1\,i_2}x_{i_2\,i_3}\,\dots\, x_{i_{2k-1}\,i_{2k}}x_{i_{2k}\,i_1})\,,\qquad\qquad\qquad x_{ij}^2 =-\tfrac{1}{2}\mathop{\mathrm{Tr}}(i\,j)\,,\\
\langle B_{ijk}|i_1\,\dots \,i_{2k+1}|B_{ijk}\rangle:=\tfrac{1}{2}\langle (B_{ijk})^m | x_{i\,i_1}x_{i_1\,i_2}\dots x_{i_{2k+1}\,i} | (B_{ijk})_m\rangle\\
[ \tilde B_{ijk}|i_1\,\dots \,i_{2k+1}|\tilde B_{ijk}]:=\tfrac{1}{2}[ (\tilde B_{ijk})_{m'} |x_{i\,i_1}x_{i_1\,i_2}\dots x_{i_{2k+1}\,i}| (\tilde B_{ijk})^{m'}]\,,\end{gathered}
\end{equation}
where the prefactor of $\frac{1}{2}$ has been introduced for convenience. Note that the stated relation $x_{ij}^2=-\tfrac{1}{2}\mathop{\mathrm{Tr}}(i\,j)$ is consistent with the trace definition: since $x_{ji}=-x_{ij}$ and, assuming the standard bispinor identity $x_{\alpha\dot\beta}\,x^{\dot\beta\gamma}=x^2\,\delta_\alpha^{\gamma}$, one has $\mathop{\mathrm{Tr}}(i\,j)=\mathop{\mathrm{Tr}}(x_{ij}\,x_{ji})=-2\,x_{ij}^2$. Carrying out the recursion step from four to five points we obtain
\begin{align}\label{eq:f_5_4d}
\mathcal{A}_5=
i\delta^4(q)\delta^4(\tilde q)\frac{ \langle B_{5 4 2}|\, 1\, 2\, 3\,| B_{5 4 2}\rangle +
[ \tilde B_{5 4 2}|\, 1\, 2\, 3\,| \tilde B_{5 4 2}]}{x_{1 3}^2 x_{2 4}^4 x_{2 5}^4 x_{3 5}^2 x_{4 1}^2}
\end{align}
and for the six-point amplitude we get
\begin{align}\label{eq:f_6_4d}
\mathcal{A}_6&=
i\delta^4(q)\delta^4(\tilde q)\Bigl(\frac{\langle B_{625}|\,4\,3 \,2 \,| B_{625}\rangle\langle B_{235}|\,6 \,5 \,1 \,| B_{235}\rangle}{x_{1 3}^2x_{24}^2x_{35}^4x_{46}^2x_{51}^2x_{52}^4x_{62}^4\mathop{\mathrm{Tr}}(6235)}+\text{chiral conjugate}\notag\\
&\=\phantom{i\delta^4(q)\delta^4(\tilde q)\Bigl(}+\frac{[ \tilde B_{136}|\, 2\,3 \,5 \,| \tilde B_{136}]\langle B_{436}|\,5 \,6 \,1 \,| B_{436}\rangle}{x_{1 3}^4x_{26}^2x_{35}^2x_{36}^2x_{46}^4x_{51}^2\mathop{\mathrm{Tr}}(2356)\mathop{\mathrm{Tr}}(3461)}\notag\\
&\=\phantom{i\delta^4(q)\delta^4(\tilde q)\Bigl(}-\frac{[ \tilde B_{325}|\, 4\,5 \,1 \,| \tilde B_{325}]\langle B_{215}|\,6 \,5 \,3 \,| B_{215}\rangle}{x_{1 3}^2x_{15}^2x_{24}^2x_{25}^2x_{35}^4x_{51}^2x_{62}^2\mathop{\mathrm{Tr}}(1245)\mathop{\mathrm{Tr}}(2356)}\notag\\
&\=\phantom{i\delta^4(q)\delta^4(\tilde q)\Bigl(}-\frac{[ \tilde B_{146}|\,2\,4 \,5 \,| \tilde B_{146}]\langle B_{214}|\,6 \,4 \,3 \,| B_{214}\rangle}{x_{1 3}^2x_{14}^2x_{24}^4x_{46}^4x_{51}^2x_{62}^2\mathop{\mathrm{Tr}}(1245)\mathop{\mathrm{Tr}}(3461)}\Bigr)
\end{align}
Dual conformal invariance of these expressions is easy to verify by simply counting the inversion weights on each dual point.
In principle all non-chiral amplitudes could be obtained by a half Fourier transform of the known chiral or anti-chiral superamplitudes. However, it is in general nontrivial to carry out these integrations in a way that leads to a useful representation of the amplitude. Exceptions are the MHV and $\overline{\text{MHV}}$ parts of the non-chiral superamplitude, which can be obtained by either solving the BCFW recursion or by performing the half Fourier transform in the way described in \cite{Huang:2011um}. The result we found, and also checked numerically, is
\begin{multline}
\hspace{-0.4cm}\mathcal{A}_n^{\overline{\text{MHV}}}=\\\hspace{0.4cm}\frac{i\delta^4(q)\delta^4(\tilde q)}{\prod_{k=1}^nx_{kk+2}^2} \frac{\langle B_{n\,2\,n-1}|\,n\!-\!2\,n\!\!-\!\!3 \,n\!-\!4 \,| B_{n\,2\,n-1}\rangle}{x_{n-3n-1}^2x_{n2}^2}\prod_{k=1}^{n-5}\frac{\langle B_{k + 1\, k + 2\, n - 1}|\,n\, n\! -\!1\, k \,| B_{k + 1\, k + 2\, n - 1}\rangle}{x_{n-1k+1}^2\mathop{\mathrm{Tr}}(n\,k\!+\!1\,k\!+\!2\,n\!-\!1)},\hspace{-0.4cm}
\end{multline}
and similarly for the MHV part. Note that our result differs from the one presented in \cite{Huang:2011um}.
\subsection[Supersymmetric BCFW for \texorpdfstring{${\cal N}=(1,1)$}{N=(1,1)} SYM]{Supersymmetric BCFW for \texorpdfstring{$\bm{{\cal N}=(1,1)}$}{N=(1,1)} SYM}\label{section:BCFW6d}
The supersymmetric BCFW recursion of $\mathcal{N} = (1,1)$ SYM theory in six dimensions will play a central role when investigating massive amplitudes in \cref{section:6damps,section:uplift_huang}. It has been introduced in reference \cite{Dennen:2009vk}. In what follows we will closely follow the detailed review presented in reference \cite{Bern:2010qa}. At the end of this section we will use the BCFW recursion relation to prove the dual conformal covariance, \cref{eq:dualconformal6d}, of the superamplitudes.
As a first step we introduce the shift vector
\begin{equation}\label{eq:shift6d}
r^\mu=\frac{1}{2 s_{1n}}X_{a\dot{a}}\langle 1^a|\Sigma^\mu p_n|1^{\dot a}]\,,
\end{equation}
that obviously has the desired properties $r\cdot p_1=0=r\cdot p_n$. The requirement $r^2=0$ implies $0=\epsilon^{ab}\epsilon^{\dot{a}\dot{b}}X_{a\dot{a}}X_{b\dot{b}}=2\det(X)$. Hence $X_{a\dot{a}}$ is an arbitrary rank-one matrix and has the spinor helicity representation $X_{a\dot{a}}=x_a\tilde{x}_{\dot a}$. \Cref{eq:shift6d} implies
\begin{align}\label{eq:shift6d_matrix}
r^{AB}&=X_{a\dot a}\frac{{}^{[A}| p_n|1^{\dot{a}}]\langle 1^a|^{B]}}{s_{1n}}\,,&r_{AB}&=-X_{a\dot a}\frac{{}_{[A}|1^{\dot{a}}]\langle 1^a|p_n|_{B]}}{s_{1n}}\,,
\end{align}
and the shifts of the momenta $p_1$ and $p_n$ \eqref{eq:shifts} can be reinterpreted as shifts of the chiral and anti-chiral spinors. The equations
\begin{equation}
\begin{aligned}
p_{\hat{1}}^{AB}&=\lambda_{\hat{1}}^{A\,a}\lambda_{\hat{1}\,a}^{B}\,,& p_{\hat{n}}^{AB}&=\lambda_{\hat{n}}^{A\,a}\lambda_{\hat{n}\,a}^{B}\,,\\[+0.3cm]
p_{\hat{1}\,AB}&=\tilde\lambda_{\hat{1}\,A\,\dot{a}}\tilde\lambda_{\hat{1}\,B}^{\dot{a}}\,,\qquad&p_{\hat{n}\,AB}&=\tilde\lambda_{\hat{n}\,A\,\dot{a}}\tilde\lambda_{\hat{n}\,B}^{\dot{a}}
\end{aligned}
\end{equation}
have the simple solutions
\begin{equation}\label{eq:shiftSpinors6d}
\begin{aligned}
\lambda_{\hat{1}}^{A\,a}&= s_{1n}^{-1}\langle 1^a|p_n\,p_{\hat{1}}|^A=\lambda_{1}^{A\,a} + \frac{z}{s_{1n}} \langle 1^a|p_n\,r|^A\,,\\[+0.3cm]
\lambda_{\hat{n}}^{A\,a}&= s_{1n}^{-1}\langle n^a|p_1\,p_{\hat{n}}|^A=\lambda_{n}^{A\,a} - \frac{z}{s_{1n}} \langle n^a|p_1\,r|^A\,,\\[+0.3cm]
\tilde{\lambda}_{\hat{1}A\dot{a}} &= s_{1n}^{-1}[1_{\dot{a}}|p_n\,p_{\hat{1}}|_A=\tilde\lambda_{1\,A\,\dot{a}} + \frac{z}{s_{1n}} [1_{\dot{a}}|p_n\,r|_A\,,\\[+0.3cm]
\tilde{\lambda}_{\hat{n}A\dot{a}} &= s_{1n}^{-1}[n_{\dot{a}}|p_1\,p_{\hat{n}}|_A=\tilde\lambda_{n\,A\,\dot{a}} - \frac{z}{s_{1n}} [n_{\dot{a}}|p_1\,r|_A\,.
\end{aligned}
\end{equation}
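As a quick check of the first solution (assuming the Clifford-type relation $p_i^{AB}p_{j\,BC}+p_j^{AB}p_{i\,BC}=s_{ij}\,\delta^A_C$ for massless momenta, cf.\ the identity used below, together with $\langle 1^a|p_1=0$ for the massless leg $1$), it reduces to the unshifted spinor at $z=0$:
\begin{equation}
s_{1n}^{-1}\langle 1^a|p_n\,p_1|^A=s_{1n}^{-1}\langle 1^a|\bigl(s_{1n}-p_1\,p_n\bigr)|^A=\lambda_1^{A\,a}\,,
\end{equation}
and analogously for the remaining three solutions.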
Inserting the definition \eqref{eq:shift6d_matrix} of the shift vector, these become
\begin{equation}\label{eq:shiftSpinors6d_inserted}
\begin{aligned}
\lambda_{\hat{1}}^{Aa}&= \lambda^{Aa}_{1} + \frac{z}{s_{1n}}X^{a\dot{a}} [1_{\dot a}| n_b\rangle \lambda^{A\,b}_{n}\,,&
\lambda_{\hat{n}}^{Ab}& = \lambda^{Ab}_{n} - \frac{z}{s_{1n}}X_{a\dot{a}} [1^{\dot a}| n^b\rangle \lambda^{A\,a}_{1} \,,\\
\tilde{\lambda}_{\hat{1}A\dot{a}} &= \tilde{\lambda}_{1A\dot{a}} + \frac{z}{s_{1n}}X_{a\dot{a}} [n_{\dot b}| 1^a\rangle \tilde{\lambda}_{nA}^{\dot b}\,,&
\tilde{\lambda}_{\hat{n}A\dot{b}} &= \tilde{\lambda}_{nA\dot{b}} + \frac{z}{s_{1n}}X_{a\dot{a}} [n_{\dot b}| 1^a\rangle \tilde{\lambda}_{1A}^{\dot a}\,.
\end{aligned}
\end{equation}
Supermomentum conservation can only be maintained if the Grassmann variables of legs $1$ and $n$ are shifted as well
\begin{equation}\label{eq:shiftGrassmann6d}
\begin{aligned}
\xi_{\hat{1} a} &= \xi_{1 a}+ z X_{a \dot{a}} [1^{\dot{a}}|q_n\rangle/s_{1n}\,,& \xi_{\hat{n} b} &= \xi_{n b} + z X_{a\dot{a}} [1^{\dot{a}}|n_{b}\rangle \xi_{1}^a/s_{1n}\,,\\
\tilde{\xi}^{\dot{a}}_{\hat{1}} &= \tilde{\xi}^{\dot{a}}_{1}-z X^{a \dot{a}} [\tilde{q}_n|1_{a}\rangle /s_{1n}\,,&\tilde{\xi}^{\dot{b}}_{\hat{n}} &= \tilde{\xi}^{\dot{b}}_{n} - z X_{a\dot{a}} [n^{\dot{b}}|1^{a}\rangle \tilde{\xi}^{\dot{a}}_{1}/s_{1n}\,,
\end{aligned}
\end{equation}
resulting in the following shifts of the supermomenta
\begin{equation}
\begin{aligned}\label{eq:shiftsSuperMomenta6d}
q^A_{\hat{1}} &=[\tilde{\chi}|p_{\hat{1}}|^A =q^A_1 + z\, s^A \,,& q^A_{\hat{n}} &= [\tilde\chi|p_{\hat{n}}|^A=q^A_n -z\,s^A\,, \\
\tilde{q}_{\hat{1} A}&=\langle\chi|p_{\hat{1}}|_A= \tilde{q}_{1 A} + z\,\tilde{s}_A\,,&\tilde{q}_{\hat{n} A} &=\langle\chi|p_{\hat{n}}|_A= \tilde{q}_{n A} -z \,\tilde{s}_A \,,
\end{aligned}
\end{equation}
with
\begin{equation}
\begin{aligned}
\chi&=s_{1n}^{-1}([ \tilde{q}_1|p_n|+[\tilde{q}_n|p_1| )\,,&\tilde\chi&=s_{1n}^{-1}(\langle q_1|p_n|+\langle q_n|p_1|)\\
s&= [\tilde\chi|r|\,,&\tilde{ s}&=\langle \chi|r|\,,
\end{aligned}
\end{equation}
or, with the definition of $r$ inserted,
\begin{equation}
\begin{aligned}
s^A&= \frac{X_{a\dot{a}}}{s^2_{1n}}\left( \langle q_{1} | p_n| 1^a\rangle[1^{\dot a}| p_n|^A + s_{1n}\langle q_{n}|1^{\dot{a}}]\lambda_{1}^{A \,a} \right)\,,\\
\tilde s_A&=\frac{X_{a\dot{a}}}{s^2_{1n}}\left( - [\tilde{q}_1| p_n|1^{\dot{a}}]\langle 1^a | p_n|_A - s_{1n} [\tilde{q}_n|1^{\dot{a}}\rangle\tilde{\lambda}_{1\,A}^{\dot{a}}\right)\,.
\end{aligned}
\end{equation}
The dual shifts are given by
\begin{align}\label{eq:shiftDual6d}
x_{\hat{1}}&=x_1+z\,r &\theta_{\hat{1}}&=\theta_1+z\,s&\tilde{\theta}_{\hat{1}}&=\tilde{\theta}_1+z\,\tilde{s}\,.
\end{align}
Note that the Grassmann shift variables $s^A$ and $\tilde{s}_A$ can alternatively be obtained by solving the equations
\begin{align}
\langle \theta_{\hat{1}2}|x_{\hat{1}2}|&=0\,,&\langle \theta_{n\hat{1}}|x_{n\hat{1}}|&=0\,,&[\tilde\theta_{\hat{1}2}|x_{\hat{1}2}|&=0\,,&[\tilde\theta_{n\hat{1}}|x_{n\hat{1}}|&=0\,.
\end{align}
The above set of supersymmetry-preserving shifts leads to a shifted superamplitude whose residues at the poles \cref{eq:poles} are given by products of two lower-point superamplitudes. Similar to the supersymmetric BCFW recursion of $\mathcal{N} = 4$ SYM, the sum over intermediate states is realized by an integration with respect to the Grassmann variables of the intermediate leg.
Using the abbreviations introduced in \cref{eq:Def_Pj} the BCFW recursion of ${\cal N}=(1,1)$ SYM theory in six dimensions reads
\begin{equation}\label{eq:BCFW_6D}
\!\mathcal{A}_n\!\left(p_1,\dots,p_n\right) = \sum_{j = 2}^{n-2} \int\!\! d^2 \tilde{\xi}_{\hat{P}_{j}} \int\!\! d^2 \xi_{\hat{P}_{j}} \mathcal{A}_{j+1}(\hat{p}_1,\dots,p_j, -\hat{P}_{j}) \frac{-i}{P^2_{j}} \mathcal{A}_{n-j+1}(\hat{P}_{j}, p_{j+1},\dots,p_{\hat{n}})\Bigr\rvert_{z=z_j}\!
\end{equation}
Similar to the non-chiral BCFW recursion in four dimensions, \cref{eq:BCFW_non-chiral}, the explicit minus sign originates from the choice $d^2\xi=\tfrac{1}{2}d\xi^a d\xi_{a}$, $d^2 \tilde\xi=\tfrac{1}{2}d\tilde\xi_{\dot{a}}d \tilde\xi^{\dot{a}}$ for the integration measure and can be fixed by projecting the four point function resulting from the six-dimensional BCFW recursion \cref{eq:BCFW_6D} to four dimensions and comparing it with \cref{eq:A4nonchiral}.
The starting point for the recursion is the three-point superamplitude of \cref{eq:A3_6D}
\cite{Cheung:2009dc}. For applications of the BCFW recursion it is more convenient to use the following alternative representation of the three-point amplitude
\begin{equation}\label{eq:A3_6d_alternative}
\mathcal{A}_3 =i \delta^{6}(p) ({\bf u}_1 -{\bf u}_2)({\bf\tilde u}_1- {\bf \tilde u}_2)\left({\bf u}_3 -\tfrac{1}{2}({\bf u}_1+{\bf u}_2) \right)\left({\bf \tilde u}_3 -\tfrac{1}{2}({\bf\tilde u}_1+{\bf\tilde u}_2) \right) \delta\left( {\bf w} \right) \delta\left( \tilde{{\bf w}} \right)\,.
\end{equation}
As has been shown in \cite{Dennen:2009vk}, the BCFW recursion yields the four-point function
\begin{equation}\label{eq:A4_6d}
\mathcal{A}_4 = - \delta^{6}\left( p \right) \delta^{4}\left( q\right) \delta^{4}\left(\tilde{q} \right) \frac{i}{x_{1 3}^2 x_{2 4}^2}\,.
\end{equation}
Note that the four-point amplitude is fixed up to a numerical factor by supersymmetry and dual conformal symmetry.
In the remainder of this section we will explicitly carry out the Grassmann integrations in the BCFW recursion \cref{eq:BCFW_6D}. First of all we recall that for $n \geq 4$ an $n$-point superamplitude has the form
\begin{equation}\label{eq:superamps6d}
\mathcal{A}_{n} = \delta^6(p)\delta^{4} \left(q\right) \delta^{4} \left(\tilde{q} \right) f_n(\{x_i,\theta_i,\tilde\theta_i\})
\end{equation}
In order to consistently treat incoming and outgoing particles we adopt the prescription
\begin{equation}
\begin{aligned}
\lambda_{(-p)} &= i\, \lambda_{p}\,,& \tilde{\lambda}_{(-p)} &= i\, \tilde{\lambda}_{p}\,,&\xi_{(-p)} &= i\, \xi_{p}\,,&\tilde{\xi}_{(-p)}& = i\, \tilde{\xi}_{p}\,.
\end{aligned}
\end{equation}
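A minimal consistency check of these factors of $i$ (assuming the bispinor representations $p^{AB}=\lambda^{A\,a}\lambda^{B}_{a}$ and $q^{A}=\lambda^{A\,a}\xi_{a}$ used above): the prescription indeed flips the signs of momenta and supermomenta,
\begin{equation}
p_{(-p)}^{AB}=\lambda_{(-p)}^{A\,a}\lambda_{(-p)\,a}^{B}=i^2\,\lambda_{p}^{A\,a}\lambda_{p\,a}^{B}=-p^{AB}\,,\qquad
q^{A}_{(-p)}=\lambda^{A\,a}_{(-p)}\,\xi_{(-p)\,a}=-q^{A}_{p}\,,
\end{equation}
and analogously for $p_{AB}$ and $\tilde{q}_{A}$.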
Structurally there are the three different cases $j=2$, $2<j<n-2$ and $j=n-2$ to be analyzed. Starting with the contribution $j=2$ in \cref{eq:BCFW_6D}, we want to evaluate
\begin{equation}
\mathcal{B}_{2}=\frac{-i}{x_{13}^2} \int d^2 \xi_{\hat{P}_2} d^2\tilde{\xi}_{\hat{P}_2}\mathcal{A}_3\left(p_{\hat{1}},p_2,-\hat{P}_2\right)\delta^4 \left(q_{\hat{P}_2}+\theta_{3\hat{1}}\right) \delta^4 \left(\tilde{q}_{\hat{P}_2}+\tilde\theta_{3\hat{1}} \right) f_{n-1} \left(x_{\hat 1},x_3,\dots,x_n\right)
\end{equation}
Taking the representation \cref{eq:A3_6d_alternative} of $\mathcal{A}_3$, the only dependence on $\xi_{\hat{P}_2}$, $\tilde{\xi}_{\hat{P}_2}$ is contained in Grassmann delta functions, and the integration boils down to solving the linear equations
\begin{align}
{\bf u}_{K}&=\tfrac{1}{2}({\bf u}_{\hat 1}+{\bf u}_{2})\,,&{\bf w}_{K}&=-{\bf w}_{\hat 1}-{\bf w}_{2}\,,\\
{\bf \tilde u}_{K}&=\tfrac{1}{2}({\bf \tilde u}_{\hat 1}+{\bf \tilde u}_{2})\,,&{\bf \tilde w}_{K}&=-{\bf \tilde w}_{\hat 1}-{\bf \tilde w}_{2}\,,
\end{align}
for $\xi_{\hat{P}_2}$, $\tilde{\xi}_{\hat{P}_2}$, with the abbreviation $K=-\hat{P}_2$. The solution is
\begin{align}
\xi_{\hat{P}_2}^a &=- \tfrac{i}{2} \left({\bf u}_{\hat{1}} + {\bf u}_{2}\right) w_{K}^a - i\left({\bf w}_{\hat{1}} + {\bf w}_{2}\right)u_{K}^a\,,& \tilde{\xi}_{\hat{P}_2}^{\dot{a}} &= \tfrac{i}{2} \left(\tilde{{\bf u}}_{\hat{1}} + \tilde{{\bf u}}_{2}\right)\tilde{w}_{K}^{\dot{a}} +i \left(\tilde{{\bf w}}_{\hat{1}} + \tilde{{\bf w}}_{2}\right)\tilde{u}_{K}^{\dot{a}}\,.
\end{align}
Using \cref{eq:P1,eq:P2,eq:P3} it is straightforward to show that on the support of $({\bf u}_{\hat 1} -{\bf u}_2)({\bf\tilde u}_{\hat 1}- {\bf \tilde u}_2)$ this implies
\begin{align}\label{deltas_ersetzung}
q_{\hat{P}_2} &= q_{\hat{1}} + q_{2}\,,& \tilde{q}_{\hat{P}_2}& = \tilde{q}_{\hat{1}} + \tilde{q}_{2} \,,
\end{align}
and therefore
\begin{equation}
\mathcal{B}_{2}=\delta^4 \left(q\right) \delta^4 \left(\tilde{q}\right) \frac{-i}{x_{13}^2} f_{n-1} \left(x_{\hat 1},x_3,\dots,x_n\right) \int d^2 \xi_{\hat{P}_2} d^2\tilde{\xi}_{\hat{P}_2}\, \mathcal{A}_3\left(p_{\hat{1}},p_2,-\hat{P}_2\right)\,.
\end{equation}
The integral of the three-point amplitude evaluates to
\begin{equation}\label{eq:B26d}
i\left({\bf u}_{\hat{1}} - {\bf u}_{2}\right)\left({{\bf\tilde u}}_{\hat{1}} - {{\bf\tilde u}}_{2}\right) = i \left(\frac{\langle q_{2}|k_2 p_{\hat{1}}|\tilde{q}_{2}]}{2\,p_{2}\cdot k_2}- \langle q_{\hat{1}} |\tilde{q}_{2}] + \langle q_{2}| \tilde{q}_{\hat{1}}] - \frac{\langle q_{\hat{1}}| k_1 p_2|\tilde{q}_{\hat{1}}]}{2\,p_{\hat{1}}\cdot k_1} \right)\,,
\end{equation}
where $k_1$ and $k_2$ are some arbitrary reference vectors and $u^a w_a=1=\tilde{u}^{\dot{a}}\tilde{w}_{\dot{a}}$ has been used. The final result is
\begin{equation}\label{p_Ref}
\mathcal{B}_{2} = \delta^4 \left(q \right) \delta^4 \left(\tilde{q}\right) \frac{f_{n-1} \left(x_{\hat 1},x_3,\dots,x_n\right)}{x_{13}^2} \left(\frac{\langle q_{2}|k_2 p_{\hat{1}}|\tilde{q}_{2}]}{2\,p_{2}\cdot k_2} - \langle q_{\hat{1}} |\tilde{q}_{2}] + \langle q_{2}| \tilde{q}_{\hat{1}}]-\frac{\langle q_{\hat{1}}| k_1 p_2|\tilde{q}_{\hat{1}}]}{2\,p_{\hat{1}}\cdot k_1} \right),
\end{equation}
evaluated at $z=z_2$. In the case $j=n-2$ we need to evaluate
\begin{equation}\label{eq:B_2}
\mathcal{B}_{n-2}=\frac{-i}{x_{1\,n-1}^2}\delta^4 \left(q\right) \delta^4 \left(\tilde{q} \right) f_{n-1} \left(x_{\hat 1},\dots,x_{n-1}\right) \int d^2 \xi_{\hat{P}_{n-2}} d^2\tilde{\xi}_{\hat{P}_{n-2}}\mathcal{A}_3\left(p_{n-1},p_{\hat{n}},\hat{P}_{n-2}\right)\,.
\end{equation}
Here we already exploited that on the support of the three-point amplitude we have
\begin{align}
\xi_{\hat{P}_{n-2}}^a &= \tfrac{1}{2} \left({\bf u}_{\hat{n}} + {\bf u}_{n-1}\right) w_{\hat{P}_{n-2}}^a + \left({\bf w}_{\hat{n}} + {\bf w}_{n-1}\right)u_{\hat{P}_{n-2}}^a\,,\\ \tilde{\xi}_{\hat{P}_{n-2}}^{\dot{a}} &= -\tfrac{1}{2} \left(\tilde{{\bf u}}_{\hat{n}} + \tilde{{\bf u}}_{n-1}\right)\tilde{w}_{\hat{P}_{n-2}}^{\dot{a}} - \left(\tilde{{\bf w}}_{\hat{n}} + \tilde{{\bf w}}_{n-1}\right)\tilde{u}_{\hat{P}_{n-2}}^{\dot{a}}\,
\end{align}
or more conveniently
\begin{align}
q_{\hat{P}_{n-2}}&=-q_{\hat{n}}-q_{n-1}\,,&\tilde{q}_{\hat{P}_{n-2}}&=-\tilde{q}_{\hat{n}}-\tilde{q}_{n-1}\,.
\end{align}
The remaining integral of the three-point amplitude in \cref{eq:B_2} is given by
\begin{multline}
i\left( {\bf u}_{n-1}-{\bf u}_{\hat{n}}\right)\left({{\bf\tilde u}}_{n-1}-{{\bf\tilde u}}_{\hat{n}}\right) =\\ i \left(\frac{\langle q_{\hat{n}}| k_n p_{n-1}|\tilde{q}_{\hat{n}}]}{2\,p_{\hat{n}}\cdot k_n}- \langle q_{n-1}| \tilde{q}_{\hat{n}}]+ \langle q_{\hat{n}} |\tilde{q}_{n-1}] - \frac{\langle q_{n-1}|k_{n-1} p_{\hat{n}}|\tilde{q}_{n-1}]}{2\,p_{n-1}\cdot k_{n-1}} \right)\,,
\end{multline}
leading to
\begin{multline}
\mathcal{B}_{n-2}=\delta^4 \left(q\right) \delta^4 \left(\tilde{q} \right) \frac{f_{n-1} \left(x_{\hat 1},\dots,x_{n-1}\right)}{x_{1\,n-1}^2} \times\\
\left(\frac{\langle q_{\hat{n}}| k_n p_{n-1}|\tilde{q}_{\hat{n}}]}{2\,p_{\hat{n}}\cdot k_n}- \langle q_{n-1}| \tilde{q}_{\hat{n}}]+ \langle q_{\hat{n}} |\tilde{q}_{n-1}] - \frac{\langle q_{n-1}|k_{n-1} p_{\hat{n}}|\tilde{q}_{n-1}]}{2\,p_{n-1}\cdot k_{n-1}} \right)\,,
\end{multline}
evaluated at $z=z_{n-2}$. Similar to the case $j=2$, arbitrary reference momenta $k_{n}$, $k_{n-1}$ have been introduced in order to get rid of the $u$, $\tilde u$ variables. Finally, there is the general case $2<j<n-2$, with no three-point amplitudes involved,
\begin{multline}
\mathcal{B}_{j} = \frac{-i}{x_{1\,j+1}^2}\delta^4 \left(q \right) \delta^4 \left(\tilde{q} \right) f_{j + 1}\left(x_{\hat{1}},\dots,x_{j+1}\right)f_{n - j + 1}\left(x_{\hat{1}},x_{j+1},\dots,x_n\right)\times\\\int d^2 \xi_{\hat{P}_j} d^2\tilde{\xi}_{\hat{P}_j} \delta^4 \left(q_{\hat{P}_j}+\theta_{j+1\,\hat{1}} \right) \delta^4 \left(\tilde{q}_{\hat{P}_j} +\tilde\theta_{j+1\,\hat{1}}\right)
\end{multline}
To carry out the integration we want to rewrite the fermionic delta functions. Due to the algebra \cref{eq:Sigma} of the six-dimensional Pauli matrices, we have the identity
\begin{align}
\delta^A_C=s_{ij}^{-1}(p_i^{AB}p_{j\,BC}+p_j^{AB}p_{i\,BC})\,,
\end{align}
which implies
\begin{equation}
\begin{aligned}
q^A_i+ q^A_j+ Q^A&=(\xi_{i\,a}+s_{ij}^{-1} \langle i_a|p_j|Q\rangle) \lambda_i^{A\,a}+(\xi_{j\,a}+s_{ij}^{-1} \langle j_a|p_i|Q\rangle)\lambda_j^{A\,a}\,,\\
\tilde{q}_{i\,A}+\tilde{q}_{j\,A}+\tilde{Q}_A&=(\tilde\xi_{i}^{\dot a}+s_{ij}^{-1}[i^{\dot a}|p_j|\tilde{Q}])\tilde{\lambda}_{i\,A\,\dot a}+(\tilde\xi_{j}^{\dot a}+s_{ij}^{-1}[j^{\dot a}|p_i|\tilde{Q}])\tilde{\lambda}_{j\,A\,\dot a}\,.
\end{aligned}
\end{equation}
Consequently the fermionic delta functions can be rewritten as follows
\begin{equation}
\begin{aligned}
\delta^4( q_i+ q_j+ Q)&=-s_{ij}\delta^2(\xi_{i\,a}+s_{ij}^{-1}\langle i_a|p_j|Q\rangle)\delta^2(\xi_{j\,a}+s_{ij}^{-1} \langle j_a|p_i|Q\rangle)\,,\\
\delta^4(\tilde{q}_i+\tilde{q}_j+\tilde{Q})&=-s_{ij}\delta^2(\tilde\xi_{i}^{\dot a}+s_{ij}^{-1}[i^{\dot a}|p_j|\tilde{Q}])\delta^2(\tilde\xi_{j}^{\dot a}+s_{ij}^{-1}[j^{\dot a}|p_i|\tilde{Q}])\,.
\end{aligned}
\end{equation}
The two-dimensional Grassmann delta functions are defined as $\delta^2(\chi_a)=\tfrac{1}{2}\chi_a\chi^a$ and $\delta^2(\tilde\chi^{\dot{a}})=\tfrac{1}{2}\tilde\chi^{\dot{a}}\tilde\chi_{\dot{a}}$ such that $\int d^2\xi \,\delta^2(\xi_a)=1=\int d^2\tilde\xi \,\delta^2(\tilde\xi^{\dot a})$.
This allows us to easily carry out the Grassmann integrations
\begin{align}
\int d^2 \xi_{\hat{P}_j} \,\delta^4 \left(q_{\hat{P}_j}+\theta_{j+1\,\hat{1}} \right)&=-s_{\hat{P}_j\,\hat{n}}^{-1}\delta^2(\langle \hat{n}_a|\hat{P}_j|\theta_{j+1\,\hat{1}}\rangle)\notag\\
&=-\tfrac{1}{2}s_{\hat{P}_j\,\hat{n}}^{-1}\langle \hat{n}_a|\hat{P}_j|\theta_{j+1\,\hat{1}}\rangle \langle\hat{n}^a|\hat{P}_j|\theta_{j+1\,\hat{1}}\rangle\notag\\
&=-\tfrac{1}{2}s_{\hat{P}_j\,\hat{n}}^{-1}\langle \theta_{j+1\,\hat{1}}|\hat{P}_jp_{\hat{n}}\hat{P}_j|\theta_{j+1\,\hat{1}}\rangle\notag\\
&=-\tfrac{1}{2}\langle \theta_{j+1\,\hat{1}}|x_{\hat{1}\,j+1}|\theta_{j+1\,\hat{1}}\rangle\,,
\end{align}
and similarly for the anti-chiral integration
\begin{equation}
\int d^2\tilde{\xi}_{\hat{P}_j} \delta^4 \left(\tilde{q}_{\hat{P}_j} +\tilde\theta_{j+1\,\hat{1}}\right)=-\tfrac{1}{2}[\tilde\theta_{j+1\,\hat{1}}|x_{\hat{1}\,j+1}|\tilde\theta_{j+1\,\hat{1}}]\,.
\end{equation}
The full contribution is
\begin{multline}\label{eq:Bj}
\mathcal{B}_{j} =- i\,\delta^4 \left(q \right) \delta^4 \left(\tilde{q} \right) f_{j + 1}\left(x_{\hat{1}},\dots,x_{j+1}\right)f_{n - j + 1}\left(x_{\hat{1}},x_{j+1},\dots,x_n\right)\frac{1}{x_{1\,j+1}^2}\times\\
\tfrac{1}{4}\langle \theta_{j+1\,\hat{1}}|x_{\hat{1}\,j+1}|\theta_{j+1\,\hat{1}}\rangle[\tilde\theta_{j+1\,\hat{1}}|x_{\hat{1}\,j+1}|\tilde\theta_{j+1\,\hat{1}}]\,,
\end{multline}
evaluated at $z=z_j$. Hence, given all lower-point amplitudes, the $n$-point superamplitude is simply given by
\begin{equation}\label{eq:BCFW_6D_integrated}
\mathcal{A}_n = \sum_{j = 2}^{n-2} \mathcal{B}_{j}\,.
\end{equation}
This expression is straightforward to implement numerically. Unfortunately, it is ill suited for directly obtaining reasonable analytical expressions for higher-point amplitudes because of the auxiliary variable $X_{a\dot{a}}=x_a\tilde{x}_{\dot a}$ contained in the shift \cref{eq:shift6d}. In contrast to four dimensions, the shift vector is not fixed by requiring $r^2=0$, $r\cdot p_1=0=r\cdot p_n$. This ambiguity is reflected by the presence of $X_{a\dot{a}}$ in the definition of the shift vector. Obviously the amplitudes are independent of the shift vector, i.\,e.~independent of $X_{a\dot{a}}$. In principle it should be possible to remove the shift vector from the right-hand side of \cref{eq:BCFW_6D_integrated} without inserting its definition \cref{eq:shift6d_matrix}, using only its general properties \cref{eq:shiftvector}. Unfortunately, even in the simplest case of the five-point superamplitude this is very hard to achieve. As long as it is not understood
how to obtain $f_n(\{x_i,\theta_i,\tilde\theta_i\})$ from the output of the BCFW recursion, \cref{eq:BCFW_6D_integrated} will be limited to numerical applications.
Indeed, in \cref{section:6damps,section:uplift_huang} we will extensively use a \texttt{Mathematica} implementation of the integrated BCFW recursion \eqref{eq:BCFW_6D_integrated}. Independence of $X_{a\dot{a}}$ and of the arbitrary reference momenta entering $\mathcal{B}_{2}$ and $\mathcal{B}_{n-2}$ provides a nontrivial check of the numerical results obtained from the implementation. In fact, taking the four-point amplitude \eqref{eq:A4_6d} as initial data, independence of the six-point component amplitudes of $X_{a\dot{a}}$ requires the explicit minus sign appearing in the BCFW recursion relation \cref{eq:BCFW_6D}.
\subsection{Proof of dual conformal symmetry of $\mathcal{N}=(1,1)$ superamplitudes}\label{section:ProofDual}
With the help of the BCFW recursion and the inversion rules \eqref{eq:inversion6d_first} to \eqref{eq:inversion6d_last} it is straightforward to inductively prove the dual conformal covariant inversion of the $\mathcal{N}=(1,1)$ superamplitudes by showing that each term $\mathcal{B}_{j}$ in the integrated BCFW recursion \cref{eq:BCFW_6D_integrated} inverts as
\begin{equation}
I[\mathcal{B}_{j}]=\left(\prod_i x_i^2\right)\mathcal{B}_{j}\,.
\end{equation}
Since the BCFW diagrams involving three-point amplitudes $\mathcal{B}_{2}$, $\mathcal{B}_{n-2}$ are related by cyclic relabeling of the indices, we only need to consider one of them as well as the general diagram $\mathcal{B}_{j}$ without three-point functions.
We start out with $\mathcal{B}_{2}$, \cref{eq:B26d}, and investigate the inversion of $\left({\bf u}_{\hat{1}} - {\bf u}_{2}\right)\left({{\bf\tilde u}}_{\hat{1}} - {{\bf\tilde u}}_{2}\right)$. Simply plugging in the inversion rules yields
\begin{align}
I[{{\bf\tilde u}}_{2}-{{\bf\tilde u}}_{\hat{1}}]=\beta^{-1}\sqrt{\frac{x^2_2}{x_{\hat{1}}^2x_{3}^2}}{\bf\tilde u}_{2}-\beta^{-1}\sqrt{\frac{x_{\hat{1}}^2}{x_{2}^2 x_{3}^2}}{{\bf\tilde u}}_{\hat{1}}+\frac{\beta^{-1}}{\sqrt{x_{\hat{1}}^2x^2_2x^2_3}}\left(\tilde{u}_{\hat 1}^{\dot{a}}[\tilde{\theta}_{\hat 1}|x_{\hat 1}|\hat{1}_{\dot{a}}]-\tilde{u}_{2}^{\dot{a}}[\tilde{\theta}_{2}|x_{2}|2_{\dot{a}}]\right)
\end{align}
Using $\tilde{u}_{2}^{\dot{a}}[2_{\dot{a}}|=\tilde{u}_{\hat 1}^{\dot{a}}[{\hat 1}_{\dot{a}}|$ and $x_2|\hat{1}_{\dot{a}}]=x_{\hat{1}}|\hat{1}_{\dot{a}}]=\tfrac{1}{2}(x_{\hat{1}}+x_2)|\hat{1}_{\dot{a}}]$ the inhomogeneous term can be rewritten
\begin{align}
\left(\tilde{u}_{\hat 1}^{\dot{a}}[\tilde{\theta}_{\hat 1}|x_{\hat 1}|\hat{1}_{\dot{a}}]-\tilde{u}_{2}^{\dot{a}}[\tilde{\theta}_{2}|x_{2}|2_{\dot{a}}]\right)&=\tfrac{1}{2}\,\tilde{u}_{\hat 1}^{\dot{a}}\,\tilde{\xi}_{\hat 1}^{\dot b}\;[1_{\dot b}|x_{\hat 1}+x_2|1_{\dot a}]=\tfrac{1}{4}\,{{\bf\tilde u}}_{\hat{1}}\;\mathop{\mathrm{Tr}}\left[\,(x_{\hat 1}+x_2)(x_{\hat 1}-x_2)\;\right]\notag\\
&={{\bf\tilde u}}_{\hat{1}}(x_{\hat 1}^2-x_2^2)
\end{align}
and leads to the result
\begin{equation}
I[{{\bf\tilde u}}_{2}-{{\bf\tilde u}}_{\hat{1}} ]=\beta^{-1}\sqrt{\frac{x^2_2}{x_{\hat{1}}^2x_{3}^2}}\left({{\bf\tilde u}}_{2}-{{\bf\tilde u}}_{\hat{1}}\right)\,.
\end{equation}
Similarly we find
\begin{align}
I[{{\bf u}}_{2}-{{\bf u}}_{\hat{1}} ]&=\beta\sqrt{\frac{x^2_2}{x_{\hat{1}}^2x_{3}^2}}{\bf u}_{2}-\beta\sqrt{\frac{x_{\hat{1}}^2}{x_{2}^2 x_{3}^2}}{{\bf u}}_{\hat{1}}+\frac{\beta}{\sqrt{x_{\hat{1}}^2x^2_2x^2_3}}\left({u}_{\hat{1}\,a}\langle \theta_{\hat 1}|x_{\hat 1}|\hat{1}^{a}\rangle-u_{2\,a}\langle{\theta}_{2}|x_{2}|2^{a}\rangle\right)\notag\\
&=\beta\sqrt{\frac{x^2_2}{x_{\hat{1}}^2x_{3}^2}}\left({{\bf u}}_{2}-{{\bf u}}_{\hat{1}}\right)\,,
\end{align}
which together with
\begin{equation}
I\left[\frac{f_{n-1} \left(x_{\hat 1},x_3,\dots,x_n\right)}{x_{13}^2} \right]=\frac{x_{\hat 1}^2x_{ 3}^2}{x_{ 2}^2}\left(\prod_{i=1}^n x_{i}^2\right)\;\frac{f_{n-1} \left(x_{\hat 1},x_3,\dots,x_n\right)}{x_{13}^2}\,,
\end{equation}
proves the desired inversion of $\mathcal{B}_{2}$. What remains is to check the inversion of $\mathcal{B}_{j}$ given in \cref{eq:Bj}. Again inserting the inversion rules we obtain
\begin{align}
I\bigl[\,\langle \theta_{j+1\,\hat{1}}|x_{\hat{1}\,j+1}|\theta_{j+1\,\hat{1}}\rangle\,\bigr]&=\left(\langle \theta_{j+1}|x^{-1}_{j+1}-\langle\theta_{\hat{1}}|x^{-1}_{\hat{1}}\right)\,x_{\hat{1}}^{-1}x_{\hat{1}\,j+1}x_{j+1}^{-1}\,\left(x^{-1}_{j+1}|\theta_{j+1}\rangle-x^{-1}_{\hat{1}}|\theta_{\hat{1}}\rangle\right)\notag\\
&=\frac{1}{x_{\hat{1}}^2x_{j+1}^2}\,\langle \theta_{j+1\,\hat{1}}|x_{\hat{1}\,j+1}|\theta_{j+1\,\hat{1}}\rangle\,,\label{eq:inversion1}
\end{align}
where we have used $x_{\hat{1}\,j+1}^2=0$. The inversion of $[\tilde\theta_{j+1\,\hat{1}}|x_{\hat{1}\,j+1}|\tilde\theta_{j+1\,\hat{1}}]$ can be obtained by chiral conjugation\footnote{The relative minus sign in the inversion of $\theta$, $\tilde\theta$ drops out.} of \eqref{eq:inversion1} and together with
\begin{multline}
I\left[ \frac{f_{j + 1}\left(x_{\hat{1}},\dots,x_{j+1}\right)f_{n - j + 1}\left(x_{\hat{1}},x_{j+1},\dots,x_n\right)}{x_{1\,j+1}^2} \right]=\\
x_{\hat{1}}^4\,x_{j+1}^4\,\left(\prod_{i=1}^n x_{i}^2\right)\;\frac{f_{j + 1}\left(x_{\hat{1}},\dots,x_{j+1}\right)f_{n - j + 1}\left(x_{\hat{1}},x_{j+1},\dots,x_n\right)}{x_{1\,j+1}^2}
\end{multline}
this completes the proof of the dual conformal covariance of the tree-level superamplitudes.
\section{Tree-level superamplitudes of $\mathcal{N}=(1,1)$ SYM theory}\label{section:6damps}
In four dimensions the supersymmetric BCFW recursion together with the dual conformal invariance
allowed for the construction of analytical formulae for all superamplitudes of ${\cal N}=4$ SYM theory \cite{Drummond:2008cr}. The key to this remarkable result was the use of dual conformal invariant functions for the construction of a manifestly dual conformal covariant solution to the BCFW recursion. Of similar importance was the MHV decomposition \eqref{eq:MHV_decomposition} of the superamplitudes, allowing one to successively solve the recursion for the increasingly complex N${}^p$MHV superamplitudes. Although the non-chiral superamplitudes of ${\cal N}=(1,1)$ SYM possess neither a conformal symmetry nor an analogue of the helicity violation decomposition of the four-dimensional theory,
they still have a dual conformal symmetry and obey a supersymmetric BCFW recursion relation. Hence, it is natural to try to find dual conformal invariant functions suitable to construct a solution to the super-BCFW recursion of $\mathcal{N}=(1,1)$ SYM. Unfortunately, the six-dimensional
BCFW recursion, as reviewed in \cref{section:BCFW6d}, is ill-suited to produce compact analytical expressions. In contrast to four dimensions, the shift \eqref{eq:shift6d} is not uniquely fixed and contains the auxiliary spinor variables $x_a$, $x_{\dot a}$. Although the amplitudes are independent of these variables, their removal is non-trivial. The main obstacle is that the individual BCFW diagrams are in general not independent of $x_a$, $x_{\dot a}$; only their sum is, precluding any obvious elimination of the auxiliary variables. In spite of its limitations, the six-dimensional BCFW recursion is a powerful tool to obtain numerical values for arbitrary tree amplitudes of ${\cal N}=(1,1)$ SYM theory. As we will explain in what follows, this can be exploited to determine manifestly dual conformal covariant representations of the superamplitudes.
\subsection{Analytical superamplitudes from numerical BCFW}\label{section:IdeaBCFW}
The general idea is to fix a sufficiently large set of dual conformal covariant functions $\Omega_{n,i}$ which are invariant under the dual symmetries $\{P_{AB},M^A_{\;\;B}, Q_A,\tilde{Q}^A,B,\tilde{B}\}$, covariant under the dual dilatation $D$, and symmetric under chiral conjugation. In other words, the $\Omega_{n,i}$ are Lorentz invariant functions of differences of dual variables, have Grassmann degree $\mathcal{O}(\theta^{n-4}\tilde{\theta}^{n-4})$, are of mass dimension $-n$, and invert in the same way as $f_n$
\begin{equation}
I[\Omega_{n,j}]=\left(\prod_i x_i^2\right) \Omega_{n,j}\,.
\end{equation}
On the support of the momentum and supermomentum conserving delta functions, the $\Omega_{n,i}$ possess all continuous symmetries of $f_n$. Note that the invariance under the supersymmetry generators $\overline{q}^A$ and $\overline{\widetilde{q}}_A$ follows from the invariance under $Q_A$, $\tilde{Q}^A$ and the covariance under dual conformal boosts $K_{AB}$, compare \cref{eq:S_dual,eq:Dualconformal6d}. Besides chiral symmetry, we could equally enforce the other discrete symmetries, i.\,e.~cyclic invariance and the reflection symmetry. As will become clear in what follows, enforcing symmetry under chiral conjugation is essential.
Given a set of functions $\{\Omega_{n,j}\}$, we can make the ansatz
\begin{equation}\label{eq:ansatz6d}
f_n=\sum_i \alpha_i \Omega_{n,i}\,.
\end{equation}
By construction, the coefficients $\alpha_{i}$ are dimensionless, dual conformal invariant functions of differences $x_{ij}$ of the region momenta $x_{i}$.
The only dual conformal covariant objects that can be built from the $x_{ij}$ are the traces
\begin{align}
\widetilde{\mathop{\mathrm{Tr}}}(i_1\dots i_{2k})&:=\left(x_{i_1i_2}\right)_{A_1A_2}\left(x_{i_2i_3}\right)^{A_2A_3}\dots\left(x_{i_{2k-1}i_{2k}}\right)_{A_{2k-1}A_{2k}}\left(x_{i_{2k}i_1}\right)^{A_{2k}A_1}\,.
\end{align}
However, these traces contain six-dimensional Levi-Civita tensors if six linearly independent momenta are present, i.\,e.~if $k>3$ and $n>6$. Since ${\cal N}=(1,1)$ SYM is a non-chiral theory, all of its component amplitudes should be free of Levi-Civita tensors. Consequently, all Levi-Civita tensors present in the coefficients $\alpha_i$ have to cancel out if we project the ansatz \cref{eq:ansatz6d} onto any component amplitude. The functions $\Omega_{n,i}$ are symmetric under chiral conjugation and therefore cannot produce Levi-Civita tensors. Hence, we conclude that only the chiral symmetric traces
\begin{equation}
\mathop{\mathrm{Tr}}(i_1\dots i_{2k})=\tfrac{1}{2}\left(\widetilde{\mathop{\mathrm{Tr}}}(i_1\dots i_{2k})+\widetilde{\mathop{\mathrm{Tr}}}(i_2\dots i_{2k}i_1)\right)
\end{equation}
can appear in the coefficients. These traces are given by
\begin{equation}
\begin{aligned}
\mathop{\mathrm{Tr}}(i\, j\, k\, l)&=2(x_{ij}^2x_{kl}^2-x_{ik}^2x_{jl}^2+x_{il}^2x_{jk}^2) \,\\
\mathop{\mathrm{Tr}}(i_1\,\dots \, i_{2k})&=-\tfrac{1}{2}\sum_{\alpha=2}^{2k}(-1)^\alpha x_{i_1i_{\alpha}}^2\mathop{\mathrm{Tr}}(i_2\,\dots\, i_{\alpha-1}\,i_{\alpha+1}\,\dots\, i_{2k})\,.
\end{aligned}
\end{equation}
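Since the reduction of an arbitrary even trace to the invariants $x_{ij}^2$ via the base case and the recursion above is purely combinatorial, it is easily automated. The following minimal Python sketch implements both lines literally, using exact rational arithmetic; the names \texttt{trace} and \texttt{s} are ours and not taken from any amplitude package:
\begin{verbatim}
from fractions import Fraction

def trace(idx, s):
    # Expand the chiral-symmetric trace Tr(i_1 ... i_{2k}) into the
    # invariants x_{ij}^2, supplied through the symmetric function s(i, j).
    if len(idx) == 4:
        i, j, k, l = idx
        return 2 * (s(i, j)*s(k, l) - s(i, k)*s(j, l) + s(i, l)*s(j, k))
    i1, rest = idx[0], list(idx[1:])
    total = Fraction(0)
    for a in range(2, len(idx) + 1):           # alpha runs from 2 to 2k
        ia = rest[a - 2]
        reduced = rest[:a - 2] + rest[a - 1:]  # drop i_1 and i_alpha
        total += Fraction(-1, 2) * (-1)**a * s(i1, ia) * trace(reduced, s)
    return total
\end{verbatim}
Supplying $s(i,j)=x_{ij}^2$ as exact rational numbers keeps every intermediate result exact, which is what the rational phase space points introduced below rely on.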
We draw the important conclusion that the coefficients $\alpha_i$ are rational functions of dual conformal invariant cross ratios
\begin{align}
u_{ijkl}&=\frac{x_{ij}^2x_{kl}^2}{x_{il}^2x_{kj}^2}\,,&&\text{with}&x_{ij}^2&\neq0\,,\;x_{kl}^2\neq0\,,\;x_{il}^2\neq0\,,\;x_{kj}^2\neq0\,.
\end{align}
At multiplicity $n$, only $\nu_n=\tfrac{1}{2}n(n-5)$ of these cross ratios are independent. Since there are no cross ratios at four and five points, the $\alpha_i$ will be rational numbers in these cases. Unless the choice of the $\Omega_{n,i}$ has been extremely good, the $\alpha_i$ will depend on the cross ratios for multiplicities greater than five. Nevertheless, it is straightforward to determine them using a numerical implementation of the BCFW recursion relation. Evaluating both sides of \cref{eq:ansatz6d} for a given phase space point $\pi_j$ on a sufficiently large number of component amplitudes, the resulting linear equations can be solved for $\alpha_i(\pi_j)$. Numbering the cross ratios $\{u_1,u_2,\dots,u_{\nu_n}\}$ we make an ansatz for each of the coefficients
\begin{equation}\label{eq:ansatzAlpha_i}
\alpha_i=\frac{\displaystyle a_0+\sum\limits_{m=1}^k\sum\limits_{\{n_j\}_m}a_{n_1\,\dots\, n_{\nu_n}}\prod\limits_{\sigma=1}^{\nu_n} u_{\sigma}^{n_{\sigma}}}{\displaystyle b_0+\sum\limits_{m=1}^k\sum\limits_{\{n_j\}_m}b_{n_1\,\dots\, n_{\nu_n}}\prod\limits_{\sigma=1}^{\nu_n} u_{\sigma}^{n_{\sigma}}}\,,
\end{equation}
where $\{n_j\}_m$ denotes all different distributions of $m$ powers among the cross ratios. Inserting the values of the cross ratios and the calculated values of the coefficients $\alpha_i(\pi_j)$ for a sufficiently large number of phase space points, the resulting linear equations can be solved for $\{a_I,b_I\}$.
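Multiplying \cref{eq:ansatzAlpha_i} by its denominator shows that the problem is linear and homogeneous: each phase space point $\pi_j$ contributes one equation $\sum_I a_I\,U_I(\pi_j)-\alpha_i(\pi_j)\sum_I b_I\,U_I(\pi_j)=0$, where the $U_I$ run over all monomials in the cross ratios up to the chosen degree, including the constant. A minimal sketch of this linear-algebra step (assuming exact rational input; the names \texttt{monomials} and \texttt{fit\_rational} are ours):
\begin{verbatim}
from itertools import combinations_with_replacement
from sympy import Matrix, Rational

def monomials(us, k):
    # All monomials in the cross ratios us up to total degree k
    # (including the constant), evaluated at one phase space point.
    mons = [Rational(1)]
    for deg in range(1, k + 1):
        for combo in combinations_with_replacement(us, deg):
            m = Rational(1)
            for u in combo:
                m = m * u
            mons.append(m)
    return mons

def fit_rational(points, k):
    # points: list of (cross_ratio_values, alpha_value), all exact
    # rationals.  Each point gives one homogeneous linear equation
    # numerator - alpha * denominator = 0; nullspace vectors collect
    # the sought coefficients (a_I, b_I).
    rows = []
    for us, alpha in points:
        mons = monomials(us, k)
        rows.append(mons + [-alpha * m for m in mons])
    return Matrix(rows).nullspace()
\end{verbatim}
Any nullspace vector with a non-vanishing denominator polynomial is a candidate $(a_I,b_I)$; if the nullspace is empty, the degree $k$ has to be increased.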
Some remarks are in order here. It is very important to choose the set of component amplitudes used to calculate the $\alpha_i(\pi_j)$ at random. As will be demonstrated later, picking only amplitudes of a particular sector, like e.\,g.~only gluon amplitudes, can lead to dual conformal extensions of this particular sector that are not equal to the full superamplitude. In practice one will successively increase the rank $k$ of the polynomials in \cref{eq:ansatzAlpha_i} until a solution is found. In order not to have to worry about numerical uncertainties or instabilities, we chose to use rational phase space points. Using momentum twistors it is straightforward to generate four-dimensional rational phase space points, which can be used to obtain rational six-dimensional phase space points of the form $p_i^\mu=\{p_i^0,0,p_i^2,p_i^3,0,p_i^5\}$. Although these phase space points only have four non-zero components, they are sufficiently generic to yield non-zero results for all massive amplitudes.\footnote{The only structures they cannot probe are six-dimensional Levi-Civita contractions, which are ruled out anyway.} The obvious benefit of the rational phase space points is that all solutions found for the ansatz \cref{eq:ansatz6d} are exact. An important property of the described method for the determination of the superamplitudes is that the obtained representations will contain only linearly independent subsets of the basis functions $\Omega_{n,i}$. This may become an obstacle when looking for nice solutions with very simple coefficients $\alpha_i$, or ultimately for master formulae valid for arbitrary multiplicities, since these do not necessarily consist only of linearly independent $\Omega_{n,i}$.
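For orientation, the momentum-twistor step can be sketched as follows. This minimal Python fragment (all names are ours) only produces the four-dimensional rational seed kinematics; the embedding into six-dimensional points of the above form is omitted. Random rational twistors automatically yield rational spinors with exact momentum conservation built in:
\begin{verbatim}
from fractions import Fraction
from random import randint

def ang(a, b):
    # antisymmetric two-bracket <a b> of the lambda components
    return a[0] * b[1] - a[1] * b[0]

def rational_kinematics(n):
    # Random rational momentum twistors Z_i = (lambda_i, mu_i); the
    # standard incidence relations then give lambdatilde_i such that
    # sum_i lambda_i lambdatilde_i = 0 holds exactly.  Re-seed in the
    # (unlikely) event that an adjacent bracket vanishes.
    Z = [[Fraction(randint(-20, 20)) for _ in range(4)] for _ in range(n)]
    lam = [z[:2] for z in Z]
    mu = [z[2:] for z in Z]
    lamt = []
    for i in range(n):
        im, ip = (i - 1) % n, (i + 1) % n
        den = ang(lam[im], lam[i]) * ang(lam[i], lam[ip])
        lamt.append([(ang(lam[i], lam[ip]) * mu[im][a]
                      + ang(lam[ip], lam[im]) * mu[i][a]
                      + ang(lam[im], lam[i]) * mu[ip][a]) / den
                     for a in range(2)])
    for al in range(2):                 # exact momentum conservation
        for ad in range(2):
            assert sum(lam[i][al] * lamt[i][ad] for i in range(n)) == 0
    return lam, lamt
\end{verbatim}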
Essential for making the ansatz \cref{eq:ansatz6d} is the knowledge of the possible dual conformal covariant objects involving dual fermionic momenta $\theta_i$, $\tilde\theta_i$. Therefore we recall the inversion of the dual coordinates, compare \eqref{eq:inversion6d_first}-\eqref{eq:inversion6d_last},
\begin{align}
I[x_{ij}^{AB}]&=-(x_i^{-1}x_{ij}x_j^{-1})_{AB}\,,& I[(x_{ij})_{AB}]&=-(x_i^{-1}x_{ij}x_j^{-1})^{AB}\,,\\
I[\theta_{i}^{A}]&=\theta_{i}^{B}(x^{-1}_{i})_{BA}\,,&I[\tilde\theta_{i\,A}]&=(x^{-1}_{i})^{AB}\tilde\theta_{i\,B}\,.
\end{align}
Clearly the objects
\begin{equation}
\begin{gathered}\label{eq:Basis}
\begin{aligned}
&\langle \theta_{i_1}|x_{i_1i_2}\dots x_{i_{2k-1}i_{2k}}|\theta_{i_{2k}}\rangle&\qquad\quad&[\tilde\theta_{i_1}|x_{i_1i_2}\dots x_{i_{2k-1}i_{2k}}|\tilde\theta_{i_{2k}}]\\
\end{aligned}\\
\langle \theta_{i_1}|x_{i_1i_2}\dots x_{i_{2k}i_{2k+1}}|\tilde\theta_{i_{2k+1}}]
\end{gathered}
\end{equation}
have inversion weight minus one on each of the appearing dual points but lack a translation invariance in $\theta$ and $\tilde\theta$. Fortunately there is a unique way to obtain manifest dual translation invariant objects from the dual conformal covariants \cref{eq:Basis}. We define the dual translation invariant objects
\begin{align}
\langle B_{ijk}|&=\langle \theta_{ij}|x_{jk}x_{ki}|+\langle \theta_{ik}|x_{kj}x_{ji}|\,,&[ \widetilde{B}_{ijk}|&=[ \tilde\theta_{ij}|x_{jk}x_{ki}|+[ \tilde\theta_{ik}|x_{kj}x_{ji}|\,.
\end{align}
Because of
$\langle B_{ijk}|=-|x_{ij}x_{jk}|\theta_{ki}\rangle-|x_{ik}x_{kj}|\theta_{ji}\rangle$
\footnote{In the sense of e.g.~$
\langle \theta_{ij}| x_{jk}x_{ki}|^{C}=\theta^{A}_{ij}\, x_{jk\, AB}\, x_{ki}^{BC}=
-x_{ik}^{CB}\, x_{kj\, BA}\, \theta_{ji}^{A}=-{}^{C}|x_{ik}\,x_{kj}|\theta_{ji}\rangle
$.} we define $|B_{ijk}\rangle=-\langle B_{ijk}|$ and similarly for the chiral conjugate. The dual conformal inversion properties become obvious if we expand them in $\theta$ and $\tilde\theta$, leading to
\begin{equation}
\langle B_{ijk}|=-x_{jk}^2\langle \theta_i|+\langle \theta_{j}|x_{jk}x_{ki}|+\langle \theta_{k}|x_{kj}x_{ji}|\,.
\end{equation}
Hence, the dual conformal covariant, dual translation invariant building blocks for the superamplitudes are
\begin{align}
\langle B_{i_1i_2i_3}|m_1\, \dots \,m_{2k}|B_{j_{1}j_{2}j_{3}}\rangle&=\langle B_{i_1i_2i_3}|x_{i_1 m_1}x_{m_1 m_2} \dots x_{m_{2k}j_1}|B_{j_{1}j_{2}j_{3}}\rangle\,,\label{eq:BxB}\\
[ \widetilde{B}_{i_1i_2i_3}|m_1\, \dots \,m_{2k}|\widetilde{B}_{j_{1}j_{2}j_{3}}]&=[ \widetilde{B}_{i_1i_2i_3}|x_{i_1 m_1}x_{m_1 m_2}\dots x_{m_{2k}j_1}|\widetilde{B}_{j_{1}j_{2}j_{3}}]\,,\label{eq:BtxBt}
\intertext{and}
\langle B_{i_1i_2i_3}|m_1\, \dots \,m_{2k+1}|\widetilde{B}_{j_{1}j_{2}j_{3}}]&=\langle B_{i_1i_2i_3}|x_{i_1 m_1}x_{m_1 m_2}\dots x_{m_{2k+1}j_1}|\widetilde{B}_{j_{1}j_{2}j_{3}}]\,.\label{eq:BBt}
\end{align}
They all have inversion weight minus one on every appearing dual point, e.\,g.
\begin{equation}
I\left[\,\langle B_{i_1i_2i_3}|m_1\, \dots \,m_{2k+1}|\widetilde{B}_{j_{1}j_{2}j_{3}}]\,\right]=\frac{\langle B_{i_1i_2i_3}|m_1\, \dots \,m_{2k+1}|\widetilde{B}_{j_{1}j_{2}j_{3}}]}{x_{i_1}^2x_{i_2}^2x_{i_3}^2x_{m_1}^2\dots x_{m_{2k+1}}^2x_{j_1}^2x_{j_2}^2x_{j_3}^2}\,.
\end{equation}
Keeping in mind that the degree in both $\theta$ and $\tilde\theta$ always increases by one if we successively increase the multiplicity, the last of the building blocks appears most natural. The first two building blocks necessarily appear in pairs and lead to a partial decoupling of the chiral and anti-chiral supermomenta. Consequently the building blocks \cref{eq:BxB,eq:BtxBt} alone cannot be sufficient to construct an even multiplicity amplitude. Furthermore, they are very unfavorable from the four-dimensional perspective, as the massless projection of amplitudes containing them has an obscured $R$ symmetry; for details we refer to \cref{section:uplift_huang}. Although we found solutions to \cref{eq:ansatz6d} containing all three types of building blocks, we will neglect the building blocks \cref{eq:BxB,eq:BtxBt} in what follows.
To be more precise, we will try to find representations of the superamplitudes of the general form
\begin{equation}\label{eq:MasterFormula}
f_n=\sum_{I\,J\,K}\beta_{IJK}\prod_{i=1}^{n-4}\langle B_{I_i}|J_i|\widetilde{B}_{K_i}]\,,
\end{equation}
where the coefficients $\beta_{IJK}$ are functions of the dual conformal covariants $x_{ij}^2$ with the correct mass dimension and the correct inversion weights on each of the dual points in the multi-indices $I$, $J$, $K$. Manifest symmetry under chiral conjugation implies $\beta_{IJK}=(-1)^{n-4}\beta_{KJI}$.
Clearly not all of the building blocks \eqref{eq:BBt} are independent. All simple relations follow from
\begin{gather}\label{eq:propertiesB}
\begin{aligned}
\langle B_{i\,j\,k}|&=\langle B_{i\,k\,j}|\,,\qquad&\langle B_{i\,i+1\,k}|&=-\langle B_{i+1\,i\,k}|\,,\\
\langle B_{i\,j\,j+1}|&=0\,,\qquad&\langle B_{i\,i+1\,j}|x_{i\,i+1}&=0\,,
\end{aligned}\\
\intertext{and}
\langle B_I|\dots\, i\, j\, k\, j\, l\,\dots |\widetilde{B}_J]=-x_{jk}^2 \langle B_I|\dots\, i\, l\,\dots |\widetilde{B}_J]\,.
\end{gather}
\subsection{Compact analytical results}
\subsubsection{The four and five-point amplitudes}\label{section:A5}
As an instructive illustration of the severe restrictions the dual conformal covariance, \cref{eq:inversionA6d}, puts on the functional form of the superamplitudes, we consider the four point amplitude. Indeed, dual conformal covariance fixes the four point amplitude up to a constant and the only possible ansatz is
\begin{equation}\label{eq:ansatzA4}
f_{4} = \frac{\alpha}{x^2_{1 3} x^2_{2 4}}\,.
\end{equation}
The constant can be fixed by performing the dimensional reduction onto any massless four-dimensional amplitude. For the MHV gluon amplitude with negative helicity gluons at positions three and four we obtain
\begin{align}
A_4(1^1_{\;\;\dot{1}},2^1_{\;\;\dot{1}},3^2_{\;\;\dot{2}},4^2_{\;\;\dot{2}}) &= \frac{\alpha}{x^2_{1 3} x^2_{2 4}} \langle 1^{1} 2^{1} 3^{2} 4^{2}\rangle \left[ 1_{\dot{1}} 2_{\dot{1}} 3_{\dot{2}} 4_{\dot{2}}\right]\notag\\
&= \frac{\alpha}{x^2_{1 3} x^2_{2 4}}
\det\begin{pmatrix} 0 & 0 & \lambda_{3\alpha} & \lambda_{4\alpha}\\ \tilde{\lambda}_1^{\dot{\alpha}} & \tilde{\lambda}_2^{\dot{\alpha}} & 0 & 0\end{pmatrix}
\det\begin{pmatrix}0 & 0 & \lambda_3^{\alpha} & \lambda_4^{\alpha}\\ -\tilde{\lambda}_{1\dot{\alpha}} & -\tilde{\lambda}_{2\dot{\alpha}} & 0 & 0\end{pmatrix}\notag\\
&= \frac{\alpha \left<3 4\right>^2 \left[1 2\right]^2}{\left<1 2\right> \left[2 1\right] \left<2 3\right> \left[3 2\right]} = - \alpha\frac{\left<3 4\right>^4}{\left<1 2\right> \left<2 3\right> \left<3 4\right> \left<4 1\right>} \,.
\end{align}
Comparison with the well-known Parke-Taylor formula yields $\alpha=-i$. This trivial calculation should be compared to the considerably more involved calculation using the BCFW recursion in references \cite{Cheung:2009dc,Dennen:2009vk}.
Recalling the known result for the five point amplitude, \cref{eq:5pkt_6d}, we want to find the simplest representation of $f_5$ that is manifestly dual conformal covariant. Hence we are searching for dual translation invariant functions of mass dimension $-5$ that are of degree one in both $\theta$ and $\tilde\theta$ and invert as
\begin{equation}\label{Inversion_5Pkt}
I[f_{5}] = x^{2}_{1} x^{2}_{2} x^{2}_{3} x^{2}_{4} x^{2}_{5} f_{5}\,.
\end{equation}
The simplest dual conformal covariant building blocks invariant under chiral conjugation are given by
\begin{align}\label{eq:def_Omega}
\Omega_{i\,j\,k\,l\,m}&:=\tfrac{1}{2}\left(\langle B_{ijl}|\widetilde{B}_{ikm}]-\langle B_{ikm}|\widetilde{B}_{ijl}]\right)\,,&&\text{with}&I[\Omega_{i\,j\,k\,l\,m}]&=\frac{\Omega_{i\,j\,k\,l\,m}}{x_i^2x_j^2x_k^2x_l^2x_m^2}\,.
\end{align}
Obviously $\Omega_{ijklm}$ is zero if fewer than three of its indices are distinct. From the properties of $\langle B_{ijk}|$, \cref{eq:propertiesB}, and its definition above follow the properties
\begin{equation}\label{eq:propertiesOmega}
\begin{aligned}
\Omega_{i\,j\,k\,l\,m}&=\Omega_{i\,l\,k\,j\,m}\,,\qquad&\Omega_{i\,j\,k\,l\,m}&=-\Omega_{i\,k\,j\,m\,l}\,,\qquad&\Omega_{i\,i+1\,k\,l\,m}&=-\Omega_{i+1\,i\,k\,l\,m}\,,\\
\Omega_{i\,j\,k\,j+1\,m}&=0\,,& \Omega_{i\,j\,k\,j\,m}&=0\,,&\Omega_{i\,i\,k\,j\,m}&=0\,.
\end{aligned}
\end{equation}
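As a quick illustration of how these relations act, assuming nothing beyond the properties just listed, a single application of the first and of the second relation to $\Omega_{1\,2\,3\,4\,5}$ gives
\begin{align}
\Omega_{1\,2\,3\,4\,5}&=\Omega_{1\,4\,3\,2\,5}\,,&\Omega_{1\,2\,3\,4\,5}&=-\Omega_{1\,3\,2\,5\,4}\,,
\end{align}
which already exhibits the pattern behind the following observation.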
At five point level the indices of $\Omega_{i\,j\,k\,l\,m}$ need to be a permutation of $\{1,2,3,4,5\}$. Applying the symmetry properties \eqref{eq:propertiesOmega} to all these permutations of the indices reveals that they are either zero or, up to a sign, equal to $\Omega_{1\,2\,3\,4\,5}$. Furthermore, $\Omega_{1\,2\,3\,4\,5}$ is cyclically symmetric
\begin{equation}
\Omega_{1\,2\,3\,4\,5}=\Omega_{2\,3\,4\,5\,1}=\Omega_{3\,4\,5\,1\,2}=\Omega_{4\,5\,1\,2\,3}=\Omega_{5\,1\,2\,3\,4}\,,
\end{equation}
and has the reflection symmetry
\begin{equation}
\Omega_{1\,2\,3\,4\,5}=-\Omega_{5\,4\,3\,2\,1}\,.
\end{equation}
Therefore the simplest possible structure for the five point amplitude is
\begin{align}\label{eq:ansatz_f5}
\frac{\Omega_{1\,2\,3\,4\,5}}{x_{13}^2x_{24}^2x_{35}^2x_{41}^2x_{52}^2}&&&\text{with}&I\left[\frac{\Omega_{1\,2\,3\,4\,5}}{x_{13}^2x_{24}^2x_{35}^2x_{41}^2x_{52}^2}\right]&=x_1^2x_2^2x_3^2x_4^2x_5^2\frac{\Omega_{1\,2\,3\,4\,5}}{x_{13}^2x_{24}^2x_{35}^2x_{41}^2x_{52}^2}\,.
\end{align}
Since there are no dual conformal invariant cross ratios at five point level, we know that \cref{eq:ansatz_f5} is either equal to $f_5$ up to a constant, or we need to make a more complicated ansatz including the building blocks $\langle B_{ijk}|x_{il}x_{lk}|\widetilde{B}_{kmn}]$. Comparing this ansatz with the numerical BCFW recursion we indeed find the beautiful result
\begin{equation}\label{eq:f_5symmetric}
f_{5} = -i \frac{\Omega_{1\,2\,3\,4\,5}}{x^{2}_{13} x^{2}_{24} x^{2}_{35} x^{2}_{4 1} x^{2}_{5 2}}\,.
\end{equation}
This remarkably compact representation of the five point amplitude makes all continuous and discrete symmetries of the superamplitude manifest. Interestingly it can be simplified even more if we do not require manifest symmetry under chiral conjugation. On the support of the momentum and supermomentum conserving delta functions $\langle B_{124}|\widetilde{B}_{135}]$ is symmetric under chiral conjugation
\begin{equation}\label{eq:selfconjugate}
\langle B_{124}|\widetilde{B}_{135}]=-\langle B_{135}|\widetilde{B}_{124}]
\end{equation}
and the five point amplitude is given by
\begin{equation}\label{eq:A5compact}
\mathcal{A}_5=-i\delta^{(4)}(q)\delta^{(4)}(\tilde q)\frac{\langle B_{124}|\widetilde{B}_{135}]}{x^{2}_{13} x^{2}_{24} x^{2}_{35} x^{2}_{4 1} x^{2}_{5 2}}\,.
\end{equation}
This is the most compact dual conformal covariant expression of the five point amplitude
available and should be compared to the form \eqref{eq:5pkt_6d} of \cite{Bern:2010qa}. Making the dual conformal properties manifest led to a significant simplification.
Another manifest dual conformal covariant representation has been reported in \cite{Huang:2011um} by uplifting the four-dimensional five point amplitude of non-chiral superspace. We will discuss the potential uplift of massless four-dimensional amplitudes in \cref{section:uplift_huang}.
\subsubsection{The six-point amplitude}\label{section:A6}
As it turned out, the four and also the five point amplitudes were trivial examples of our general ansatz \cref{eq:ansatz6d}, since the coefficients $\alpha_i$ were constants. At six points they will in general no longer be constant but rational functions of the three dual conformal invariant cross ratios
\begin{align}\label{crossrations_definition}
u_{1} &= \frac{x_{13}^{2} x_{4 6}^{2}}{x_{14}^{2} x_{3 6}^{2}}\,,& \qquad u_{2} &= \frac{x_{15}^{2} x_{2 4}^{2}}{x_{14}^{2} x_{2 5}^{2}}\,,& \qquad u_{3} &= \frac{x_{2 6}^{2} x_{3 5}^{2}}{x_{2 5}^{2} x_{3 6}^{2}}\,.
\end{align}
Similar to the five point case we try to find a representation of the six point amplitude using only the simplest of the building blocks of \cref{eq:BBt}. To further reduce the resulting basis, we require chiral symmetry of the building blocks. Hence we only use the $\Omega_{i\,j\,k\,l\,m}$ defined in \cref{eq:def_Omega}. In contrast to five points the objects $\Omega_{i\,j\,j\,l\,m}$ are not all zero at multiplicity six. Nevertheless, we neglect them and stick to the $\Omega_{i\,j\,k\,l\,m}$ with distinct indices. What we are left with are the six building blocks
\begin{equation}\label{BB_Bloecke}
\begin{aligned}
\Omega_{1} &:= \Omega_{1\,2\,3\,4\,5}\,,\\[+0.2cm] \Omega_{2} &:= \Omega_{2\,3\,4\,5\,6}\,,\\[+0.2cm]
\Omega_{3} &:= \Omega_{3\,4\,5\,6\,1}\,,\\[+0.2cm] \Omega_{4} &:= \Omega_{4\,5\,6\,1\,2}\,, \\[+0.2cm]
\Omega_{5} &:= \Omega_{5\,6\,1\,2\,3}\,,\\[+0.2cm] \Omega_{6} &:= \Omega_{6\,1\,2\,3\,4}\,.
\end{aligned}
\end{equation}
The basis of fifteen terms that we built from the $\Omega_{i}$ is
\begin{equation}
\Omega_{i\,j}=\frac{\beta_{ij}\Omega_i \Omega_j}{x^2_{13}x^2_{24}x^2_{35}x^2_{46}x^2_{51}x^2_{62}}\,,
\end{equation}
where the $\beta_{ij}$ cancel out the inversion weights of the four overlapping indices present in $\Omega_i \Omega_j$. Because of the existence of the three cross ratios, the $\beta_{ij}$ are not uniquely fixed. One possible choice is
\begin{equation}\label{eq:betaij}
\beta_{ij}=\begin{pmatrix}
0&(x_{24}^2 x_{35}^2)^{-1}&(x_{14}^2 x_{35}^2)^{-1}&(x_{15}^2 x_{24}^2)^{-1}&(x_{13}^2 x_{25}^2)^{-1}&(x_{13}^2 x_{24}^2)^{-1}\\
0&0&(x_{35}^2 x_{46}^2)^{-1}&(x_{25}^2x_{46}^2)^{-1}&(x_{26}^2 x_{35}^2)^{-1}&(x_{24}^2 x_{36}^2)^{-1}\\
0&0&0&(x_{15}^2 x_{46}^2)^{-1}&(x_{35}^2 x_{46}^2)^{-1}&(x_{13}^2 x_{46}^2)^{-1}\\
0&0&0&0&(x_{15}^2 x_{26}^2)^{-1}&(x_{14}^2 x_{26}^2)^{-1}\\
0&0&0&0&0&(x_{13}^2 x_{26}^2)^{-1}\\
0&0&0&0&0&0
\end{pmatrix}\,.
\end{equation}
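The counting that fixes an admissible $\beta_{ij}$ is mechanical: each $\Omega$ carries inversion weight $-1$ on every one of its five dual points, each pole $1/x_{kl}^2$ contributes $+1$ to the points $k$ and $l$, and the total weight must come out as $+1$ per point, as for the full amplitude. A minimal sketch of this bookkeeping in Python (the helper names are ours), checking the entry $\beta_{12}$ of \cref{eq:betaij}:
\begin{verbatim}
from collections import Counter

def weights(omega_points, pole_pairs):
    # Inversion weight per dual point: each Omega_{i1..i5} contributes
    # -1 to each of its points, each pole 1/x_{kl}^2 contributes +1
    # to the points k and l.
    w = Counter()
    for pts in omega_points:
        for p in pts:
            w[p] -= 1
    for k, l in pole_pairs:
        w[k] += 1
        w[l] += 1
    return w

# Omega_{12}: prefactor 1/(x13^2 x24^2 x35^2 x46^2 x51^2 x62^2)
# together with beta_12 = 1/(x24^2 x35^2), cf. the matrix above
prefactor = [(1, 3), (2, 4), (3, 5), (4, 6), (5, 1), (6, 2)]
beta_12 = [(2, 4), (3, 5)]
w = weights([(1, 2, 3, 4, 5), (2, 3, 4, 5, 6)], prefactor + beta_12)
assert all(w[p] == 1 for p in range(1, 7))  # weight +1 on every point
\end{verbatim}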
We exclude terms of the form $(\Omega_i)^2$ and make the following ansatz for the six point amplitude
\begin{equation}\label{eq:ansatz_f6}
f_6=\sum_{i<j}\alpha_{ij}\Omega_{i\,j}\,,
\end{equation}
with $\alpha_{ij}=\alpha_{ij}(u_1,u_2,u_3)$ being a rational function of the cross ratios. Making an ansatz of the form \cref{eq:ansatzAlpha_i}, it is straightforward to determine the $\alpha_{ij}$. The first observation is that out of our fifteen basis elements only eleven are linearly independent, leading to a large number of different representations of the form
\eqref{eq:ansatz_f6}. The highly nontrivial linear relations between the $\Omega_{i\,j}$ are only valid on the support of the momentum and supermomentum conserving delta functions and can be determined in the same way as the amplitude. The two ten-term and two eleven-term identities involving complicated functions of the cross ratios can be used to transform a particular solution to \cref{eq:ansatz_f6} to any other solution of this form. The complexity of the coefficients $\alpha_{ij}$ varies greatly with the choice of linearly independent $\Omega_{i\,j}$ in the solution, e.\,g.~some solutions involve rational functions of degree twelve in the cross ratios $u_i$. The three simplest of the solutions involve nine $\Omega_{i\,j}$ and rational functions of degrees less than three. One of these simple solutions is
\begin{equation}\label{eq:solutionf6}
\left(\alpha_{ij}\right)=\frac{1}{1+u_1-u_2-u_3}
\begin{pmatrix}
0&u_2 u_3&-u_3&u_2
\left(u_3-u_1\right)&0&0\\
0&0&0&0&u_3\left(u_2-u_1\right)&-u_2\\
0&0&0&0&-u_2&u_1
\left(u_2+u_3\right)\\
0&0&0&0&u_2 u_3&-u_3\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{pmatrix}\,.
\end{equation}
Inserting the coefficients, the definitions of the $\Omega_{ij}$ and the cross ratios $u_i$, as well as the identity
\begin{equation}
\mathop{\mathrm{Tr}} \left(1\, 2\, 3\, 5\, 6\, 4\right) = x^{2}_{1 4} x^{2}_{2 5} x^{2}_{3 6} \left(1 + u_1 - u_2 - u_3\right)\,,
\end{equation}
into the ansatz \cref{eq:ansatz_f6}, the six point amplitude reads
\begin{equation}\label{eq:f_6}
\begin{aligned}
f_{6} &= \frac{1}{x^{2}_{1 3} x^{2}_{2 4} x^{2}_{3 5} x^{2}_{4 6} x^{2}_{5 1} x^{2}_{6 2}} \frac{i}{\mathop{\mathrm{Tr}} \left(1\, 2\, 3\, 5\, 6\, 4\right)}\Biggl(- x^{2}_{26} \Omega_{1} \Omega_{3} - x^{2}_{1 5} \Omega_{2} \Omega_{6} - x^{2}_{2 4} \Omega_{3} \Omega_{5}- x^{2}_{35} \Omega_{4} \Omega_{6} \\
&\hspace{3.5cm} +\frac{x_{26}^2 x_{15}^2}{x_{25}^{2}} \Omega_{1} \Omega_{2}+\frac{x_{24}^2 x_{35}^2}{x_{25}^{2}} \Omega_{4} \Omega_{5}+ \left(\frac{x_{15}^{2} x_{2 4}^{2}}{x_{2 5}^{2}} - \frac{x_{13}^{2} x_{4 6}^{2}}{x_{3 6}^{2}}\right) \Omega_{2} \Omega_{5}\\
&\hspace{3.5cm}+ \left(\frac{x_{2 6}^{2} x_{3 5}^{2}}{x_{2 5}^{2}} - \frac{x_{13}^{2} x_{4 6}^{2}}{x_{14}^{2}}\right) \Omega_{1} \Omega_{4} + \left(\frac{x_{15}^{2} x_{2 4}^{2}}{x_{14}^{2}} + \frac{x_{2 6}^{2} x_{3 5}^{2}}{x_{3 6}^{2}}\right) \Omega_{3} \Omega_{6} \Biggr)\,.
\end{aligned}
\end{equation}
Note that this representation of the six point amplitude has an unphysical pole at $ u_2 + u_3 - u_1=1$, contained in the trace. From the analysis of the solutions to \cref{eq:ansatz_f6} we conclude that unphysical poles are a general feature of representations in terms of the $\Omega_{ij}$. Of course, all the unphysical poles are only spurious and cancel out if we project onto any component amplitude. The other two nine term solutions can be obtained by cyclic permutations of \cref{eq:f_6}.
Although all continuous symmetries and the symmetry under chiral conjugation of the six point amplitude are manifest in the solutions to \cref{eq:ansatz_f6}, the cyclic\footnote{\Cref{eq:f_6} is manifestly invariant under the cyclic permutation $i\rightarrow i+3$.} and reflection symmetries are not obvious. However, there is no obstacle to finding manifestly cyclically symmetric representations by constructing manifestly cyclically symmetric basis elements from the $\Omega_i$. As a consequence of the manifest cyclic invariance of the basis, the coefficients in the general ansatz \cref{eq:ansatz6d} are cyclically symmetric as well, i.\,e.~are rational functions of symmetric polynomials of the cross ratios.
There are three types of such manifest cyclically symmetric basis elements
\begin{equation}
\begin{aligned}
&g_1(u_1,u_2,u_3)\Omega_{12}+\text{five cyclic rotations}\,,\\
&g_2(u_1,u_2,u_3)\Omega_{13}+\text{five cyclic rotations}\,,\\
&g_3(u_1,u_2,u_3)\Omega_{14}+g_3(u_2,u_3,u_1)\Omega_{25}+g_3(u_3,u_1,u_2)\Omega_{36}\,.
\end{aligned}
\end{equation}
The functions $g_i$ are arbitrary rational functions of the cross ratios leaving a lot of freedom to define a cyclic basis. Looking at the solution \cref{eq:solutionf6}, reasonable choices are $g_1\in \{u_1u_2,u_1u_3,u_2u_3\}$, $g_2\in \{u_1,u_2,u_3\}$, and $g_3\in \{u_1(u_2\pm u_3),u_2(u_3\pm u_1),u_3(u_1\pm u_2)\}$. Indeed, this leads to a solution involving only three cyclically symmetric basis elements. Choosing $g_1=u_2u_3$, $g_2=u_3$, and $g_3=u_2(u_1+ u_3)$ we find
\begin{equation}
(\alpha_i)=\frac{1}{3-u_1-u_2-u_3}\begin{pmatrix}
1&-2&1
\end{pmatrix}
\end{equation}
or equivalently
\begin{multline}\label{eq:f6cyclic}
f_{6} = \frac{1}{x^{2}_{1 3} x^{2}_{2 4} x^{2}_{3 5} x^{2}_{4 6} x^{2}_{5 1} x^{2}_{6 2}} \frac{i}{x^{2}_{1 4} x^{2}_{2 5} x^{2}_{3 6}(3 - u_{1} - u_{2} - u_{3})}\left(\frac{x^{2}_{15}x^{2}_{26}}{x^{2}_{25}}\,\Omega_{1} \Omega_{2} - 2\, x^{2}_{26}\, \Omega_{1} \Omega_{3} + \right. \\
+ \left. \left(\frac{x^{2}_{13}x^{2}_{46}}{x^{2}_{14}} + \frac{x^{2}_{26}x^{2}_{35}}{x^{2}_{25}} \right) \Omega_{1} \Omega_{4} + \mbox{cyclic permutations}\right)\,.
\end{multline}
Clearly this representation is not minimal, as it consists of all fifteen $\Omega_{ij}$. The unphysical pole at $u_{1} + u_{2} + u_{3} = 3$ can be expressed in terms of the traces
\begin{equation}
\begin{gathered}
x^{2}_{1 4} x^{2}_{2 5} x^{2}_{3 6} \left(3 - u_{1} - u_{2} - u_{3}\right) = \tfrac{1}{2}\left(\;\mathop{\mathrm{Tr}} \left(1\, 2\, 3\, 5\, 6\, 4\right) + \text{cyclic permutations}\;\;\right)\,.
\end{gathered}
\end{equation}
As emphasized in \cref{section:IdeaBCFW} it is very important to randomly choose the component amplitudes which are used to calculate the coefficients $\alpha_i$ in the general ansatz \cref{eq:ansatz6d}. Since we are dealing with a maximally supersymmetric theory one might wonder whether it would not be sufficient to consider e.\,g.~only gluon amplitudes and let supersymmetry take care of all other amplitudes. Indeed, this is a widespread claim within the literature which can be easily disproved. In fact, only eight of the fifteen $\Omega_{ij}$ are linearly independent on gluon amplitudes, compared to eleven on all component amplitudes. Consequently, supersymmetrizing gluon amplitudes as has been done in reference \cite{Dennen:2009vk} for the three, four and five point amplitudes will not yield the correct superamplitude for multiplicities greater than five.
Having said that, it is nevertheless interesting to investigate what such a supersymmetrization of the gluon amplitudes looks like. Therefore we try to find a dual conformal invariant extension of the gluon amplitudes, that is, a solution to \cref{eq:ansatz6d} valid on all gluon amplitudes. At six points we do not have to worry about six-dimensional Levi-Civita tensors and it is not necessary to use chiral self-conjugate building blocks. Instead of the $\Omega_i$ we use the building blocks
\begin{align}\label{eq:OmegaUpDown}
\Omega_{i,j}^u&:=\langle B_{i\,i+1\,i+3}|\widetilde{B}_{i\,i+2\,i+4}]\,,&\Omega_{i,j}^d&:=\langle B_{i\,i-1\,i-3}|\widetilde{B}_{i\,i-2\,i-4}]\,
\end{align}
where the label $j$ indicates that the indices $\{i,i\pm1,i\pm2,i\pm3,i\pm4\}$ in $\Omega_{i,j}^{u/d}$ have to be taken modulo $j$. Whenever the label $j$ is equal to the multiplicity $n$, we will usually drop it.
The $\Omega_{i}^{u/d}$ are related to the chiral self-conjugate $\Omega_i$ by
\begin{equation}\label{eq:relationOmegaUD}
\Omega_i=\tfrac{1}{2}\left(\Omega_i^u-\Omega_{i+4}^d\right)\,.
\end{equation}
The resulting ansatz for the dual conformal extension of the gluon sector is
\begin{align}\label{eq:ansatzf6gluon}
f_6\bigr\rvert_{\text{gluons}}=\frac{i}{x^{2}_{1 3} x^{2}_{2 4} x^{2}_{3 5} x^{2}_{4 6} x^{2}_{5 1} x^{2}_{6 2}x^{2}_{1 4} x^{2}_{2 5} x^{2}_{3 6}}\Bigl(\sum_{i,j}\begin{aligned}[t]&\alpha_{ij}x_{i-1\,j-1}^2\Omega_i^{u}\Omega_j^{u}\\&\quad+\beta_{ij}x_{i-1\,j+1}^2\Omega_i^{u}\Omega_j^{d}\\&\quad\quad+\gamma_{ij}x_{i+1\,j+1}^2\Omega_i^{d}\Omega_j^{d}\Bigr)\end{aligned}
\end{align}
Since the gluon sector is not closed under dual conformal symmetry, the dimensionless coefficients $\alpha_{ij}$, $\beta_{ij}$, $\gamma_{ij}$ are in general rational functions of the Lorentz invariants $x_{kl}^2$. As expected, not all of the $\Omega_i^{u/d}\Omega_j^{u/d}$ are linearly independent on the gluon amplitudes. A good indication that we will find dual conformal covariant solutions to \cref{eq:ansatzf6gluon} is the fact that all two term identities that the $\Omega_i^{u/d}\Omega_j^{u/d}$ fulfill on the gluon amplitudes are in fact dual conformal covariant. On the support of the momentum and supermomentum conserving delta functions we have for example the six identities
\begin{equation}\label{eq:2termIds}
\begin{aligned}
x_{i-1\,i+1}^2\Omega^{u}_i\Omega^{d}_i\bigr\rvert_{\text{gluons}}&=x_{i+2\,i+4}^2\Omega^{u}_{i+3}\Omega^{d}_{i+3}\bigr\rvert_{\text{gluons}}\,,\\
x_{i-1\,i+3}^2\Omega^{u}_i\Omega^{d}_{i+2}\bigr\rvert_{\text{gluons}}&=x_{i+2\,i}^2\Omega^{u}_{i+3}\Omega^{d}_{i+5}\bigr\rvert_{\text{gluons}}\,.
\end{aligned}
\end{equation}
Indeed there are 24 nice three term solutions to \cref{eq:ansatzf6gluon} that are all dual conformal covariant. One of these solutions is
\begin{equation}\label{eq:f6gluon}
f_6\bigr\rvert_{\text{gluons}}=\frac{i}{x^{2}_{1 3} x^{2}_{2 4} x^{2}_{3 5} x^{2}_{4 6} x^{2}_{5 1} x^{2}_{6 2}x^{2}_{1 4} x^{2}_{2 5} x^{2}_{3 6}} \left(x_{24}^2\Omega^d_3 \Omega^u_3 - x_{13}^2\Omega^d_2 \Omega^d_6-x_{26}^2\Omega^u_1 \Omega^u_3 \right)\,.
\end{equation}
Unfortunately, none of the dual conformal extensions of the gluon sector found in this way was equal to the superamplitude. However, they all gave the correct ultra helicity violating (UHV) amplitudes.
\subsubsection{Towards higher multiplicities}\label{section:HigherMultiplicities}
Inspired by the compact representations \cref{eq:f_5symmetric,eq:f_6,eq:f6cyclic} the next logical step is to try to find a nice representation of the seven point amplitude with the ultimate goal of a master formula valid for arbitrary multiplicities. The main difficulty at higher multiplicities is to make a good choice for the basis $\Omega_{n,i}$ used to express the amplitude, compare \cref{eq:ansatz6d}. Going from five to six partons the number of terms in the amplitudes increased roughly by a factor of ten. Hence, the number of terms in the seven point amplitude is expected to be of order 100, making systematic studies of the solutions to \cref{eq:ansatz6d} impossible for multiplicities $n>6$. Furthermore, the generic solution $\alpha_i$ to \cref{eq:ansatz6d} contains complicated rational functions of the $\nu_n=\tfrac{1}{2}n(n-5)$ cross ratios which require a huge calculational effort to be obtained from the BCFW recursion using \cref{eq:ansatzAlpha_i}.
At seven points, a natural starting point is to use a basis constructed from products of the chiral self-conjugate $\Omega_{i\,j\,k\,l\,m}$ defined in \cref{eq:def_Omega}. Hence, the ansatz for the seven point amplitude reads
\begin{equation}\label{eq:ansatz_f7}
f_{7} = \sum_{I, J,K} \alpha_{I J K} \beta_{I J K} \Omega_{I} \Omega_{J} \Omega_{K}\,,
\end{equation}
where the coefficients $\beta_{I J K}$ are functions of the covariants $x_{ij}^2$ compensating the negative inversion weights on the dual points present in $\{I,J,K\}$. The $\beta_{I J K}$ have mass dimension $-22$ and are straightforward to obtain by counting the inversion weights in $\{I,J,K\}$, compare \cref{eq:betaij}. The dimensionless $\alpha_{I J K}$ are rational functions of the seven cross ratios
\begin{equation}
\begin{gathered}
u_{i} = \frac{x^2_{i\, i + 2} x^2_{i+3\, i+6}}{x^2_{i\, i + 3} x^2_{i + 2\, i+6}}\,.
\end{gathered}
\end{equation}
Even if we restrict the basis to products of distinct $\Omega_{I}$ and only consider $\Omega_{I}=\Omega_{i_1\,i_2\,i_3\,i_4\,i_5}$ with distinct indices, we end up with more than $10^4$ basis elements. Solving for $\alpha_{I J K}(\pi_i)$ at different phase space points reveals that approximately $70$ basis elements are required to obtain a representation of the seven point amplitude. Analyzing different choices of linearly independent subsets of basis elements, we found that for none of them the complexity of the coefficients $\alpha_{I J K}$ was sufficiently low to justify the computational effort necessary to determine their analytical form. Due to the astronomical number of different solutions to \cref{eq:ansatz_f7} it is impossible to decide whether or not simple solutions to it exist, or whether the restriction to the building blocks $\Omega_{ijklm}$ needs to be relaxed.
Looking at the representations \cref{eq:f_5symmetric,eq:f_6,eq:f6cyclic} found for the five and six point amplitude we observe the prefactor
\begin{align}
&\frac{1}{\prod x^2_{ij}}\,,&&\text{with}&I\left[\frac{1}{\prod x^2_{ij}}\right]&=\frac{\left(\prod x_i^2\right)^{n-3}}{\prod x^2_{ij}}\,,
\end{align}
containing the product of all $\tfrac{1}{2}n(n-3)$ physical poles. It seems natural to expect this prefactor in a potential master formula for arbitrary multiplicities. With the definition
\begin{equation}\label{eq:defOmegaIJK}
\Omega_{I;J;K}=\tfrac{1}{2}\left(\langle B_I|J|\widetilde{B}_K]-\langle B_K|J|\widetilde{B}_I]\right)
\end{equation}
of the chiral self-conjugate building blocks $\Omega_{I;J;K}$ we can easily write down a nice ansatz valid for arbitrary multiplicities
\begin{align}\label{eq:MasterAnsatz}
f_n=\frac{1}{\prod x^2_{ij}}\sum_{I,J,K,L}\alpha_{I,J,K,L}\;\Omega_{I}\;\prod_{i=1}^{n-5}\Omega_{J_i;K_i;L_i}\,.
\end{align}
Here the sum goes over all multi-indices $I=\{i_1,i_2,i_3,i_4,i_5\}$, $J=\{J_1,\dots,J_{n-5}\}$, $K=\{K_1,\dots,K_{n-5}\}$, $L=\{L_1,\dots,L_{n-5}\}$ with $|J_i|=|L_i|=3$, $|K_i|=2i-1$, where $\{I,J,K,L\}$ is a permutation of $\{\{1\}^{n-4},\dots,\{n\}^{n-4}\}$. By construction the $\alpha_{I,J,K,L}$ are dimensionless and dual conformal invariant. Clearly the representation of the five point amplitude \cref{eq:f_5symmetric} is a solution to \cref{eq:MasterAnsatz}, whereas the representations of the six point amplitude \cref{eq:f_6,eq:f6cyclic} are not solutions to it. We leave it to future work to investigate whether there exist nice solutions to the master ansatz \cref{eq:MasterAnsatz} for multiplicities greater than five.
\subsection{The little group decomposition of the superamplitudes}\label{section:littleGroupDecomposition}
With regard to the MHV decomposition of the massless amplitudes of ${\cal N}=4$ SYM it would be nice to have a similar decomposition of the massless six-dimensional amplitudes of $\mathcal{N} = (1,1)$ SYM into sectors of varying complexity. Here we propose a decomposition of the amplitudes according to the violation of the $SU(2)\times SU(2)$ little group, which is the six-dimensional analog of the MHV-band decomposition introduced for massive amplitudes on the Coulomb branch of
$\mathcal{N} = 4$ SYM in \cite{Craig:2011ws}.
Our starting point is the decomposition of the six-dimensional superamplitude into the component amplitudes $A_{n}$
\begin{equation}\label{eq:decomposition6d}
\mathcal{A}_{n} = \sum_{I,J} \xi_{i_1a_1} \xi_{i_2a_2}\dots \xi_{i_na_n} \tilde{\xi}^{\dot{b}_1}_{j_1} \tilde{\xi}^{\dot{b}_2}_{j_2}\dots\tilde{\xi}^{\dot{b}_n}_{j_n} A_{n}\left(i_1^{a_1},i_2^{a_2},\dots ,i_n^{a_n},j_{1 \dot{b}_1}, j_{2 \dot{b}_2},\dots ,j_{n \dot{b}_n} \right)\,.
\end{equation}
Recall the connection of the six-dimensional Grassmann variables to the non-chiral four-dimensional ones of \cref{section:dimensional_reduction} under dimensional reduction
\begin{align}\label{eq:mapEta}
\xi_{a} &= \left(\tilde{\eta}_{3},\eta^{1} \right)\,,& \tilde{\xi}^{\dot{a}}& = \left(\tilde{\eta}_{2}, -\eta^{4}\right)\,,
\end{align}
which implied the reduction for the supermomenta
\begin{align}
q^{A} &= \left(\begin{array}{c} q_\alpha^1 \\ \tilde{q}^{\dot{\alpha}}_{3} \end{array}\right) \,,&
\tilde{q}_{A} & = \left(\begin{array}{cc} -q^{\alpha\,4} \, , & -\tilde{q}_{\dot{\alpha}\,2}
\end{array}\right)\,.
\end{align}
It is instructive to translate the MHV decomposition of the massless four-dimensional superamplitudes into six-dimensional language. Because of the $SU(4)_R$ symmetry the N${}^p$MHV superamplitude in chiral superspace has the Grassmann dependence
\begin{align}
\text{4d chiral superspace:}&&\mathcal{A}_n^{\text{N${}^p$MHV}}&=\mathcal{O}\left((\eta^1)^{p+2}(\eta^2)^{p+2}(\eta^3)^{p+2}(\eta^4)^{p+2}\right)
\end{align}
According to \cref{eq:relationOfsuperfields}, the chiral superfield amplitude $\mathcal{A}_n(\varPhi_1,\dots,\varPhi_n)$ is related to the non-chiral superfield amplitude $\mathcal{A}_n(\varUpsilon_1,\dots,\varUpsilon_n)$ by the half Fourier transformation
\begin{align}\label{eq:FT}
\mathcal{A}_n( \varUpsilon_1,\dots,\varUpsilon_n) = \prod_{i}\int d\eta_i^{3} d\eta_i^{2} \;e^{ \eta_i^{2}\tilde{\eta}_{i\,2} + \eta_i^{3}\tilde{\eta}_{i\,3}} \mathcal{A}_n( \varPhi_1,\dots,\varPhi_n)\,.
\end{align}
Consequently, the N${}^p$MHV superamplitude in non-chiral superspace has the Grassmann dependence
\begin{align}
\text{4d non-chiral superspace:}&&\mathcal{A}_n^{\text{N${}^p$MHV}}&=\mathcal{O}\left((\eta^1)^{p+2}(\tilde\eta_2)^{n-p-2}(\tilde\eta_3)^{n-p-2}(\eta^4)^{p+2}\right)\,.
\end{align}
With the help of the map \cref{eq:mapEta} between the four-dimensional and six-dimensional Grassmann variables we can deduce which of the six-dimensional component amplitudes $A_{n}\left(i_1^{a_1},\dots ,i_n^{a_n},j_{1 \dot{b}_1}, \dots ,j_{n \dot{b}_n} \right)$, defined in \cref{eq:decomposition6d}, correspond to massless four-dimensional N${}^p$MHV amplitudes
\begin{align}
\text{6d non-chiral superspace:}&&\mathcal{A}_n\bigr\rvert_{\text{N${}^p$MHV}}&=\mathcal{O}\left((\xi_1)^{n-p-2}(\xi_2)^{p+2}(\tilde\xi^{\dot{1}})^{n-p-2}(\tilde\xi^{\dot{2}})^{p+2}\right)\,.
\end{align}
Hence, the $SU(4)_R$ symmetry of the massless chiral superamplitudes in four dimensions leads to a Grassmann dependence of the form $(\xi_1)^{n-a}(\xi_2)^{a}(\tilde\xi^{\dot{1}})^{n-a}(\tilde\xi^{\dot{2}})^{a}$ in six dimensions. From the six-dimensional perspective the Grassmann dependence of the superamplitudes in the massless four-dimensional limit is a consequence of breaking the $SU(2) \times SU(2)$ little group to a $U(1)$ little group in four dimensions because on the four-dimensional subspace the chiral and anti-chiral spinors $\lambda^A$ and $\tilde\lambda_A$ are equal.
In the case of the massive four-dimensional amplitudes the $SU(4)_R$ symmetry is broken and the Grassmann dependence of the corresponding six-dimensional superamplitude is no longer restricted, i.e.~all terms of the form $(\xi_1)^{n-a}(\xi_2)^{a}(\tilde\xi^{\dot{1}})^{n-b}(\tilde\xi^{\dot{2}})^{b}$ appear, except the ones with $a,\,b\,\in\{0,n\}$. We then propose the following little group decomposition of the superamplitudes of
${\cal N}=(1,1)$ SYM
\begin{align}
\mathcal{A}_n&=\sum_{a=1}^{n-1}\sum_{b=1}^{n-1}\mathcal{A}_n^{a\times b}\,,&&\text{with}&\mathcal{A}_n^{a\times b}&=\mathcal{O}\left((\xi_1)^{n-a}(\xi_2)^{a}(\tilde\xi^{\dot{1}})^{n-b}(\tilde\xi^{\dot{2}})^{b}\right)\,.
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{LittleGroup.pdf}
\caption[Little group decomposition of the $\mathcal{N} = (1,1)$ SYM amplitudes.]{Little group decomposition of the $\mathcal{N} = (1,1)$ SYM amplitudes. The general amplitude $\mathcal{A}^{ a \times b}_n$ has the Grassmann dependence $(\xi_1)^{n-a} (\xi_2)^{a} (\tilde{\xi}^{\dot{1}})^{n-b}(\tilde{\xi}^{\dot{2}})^{b}$. In the massless four-dimensional limit only the $\mathcal{A}^{(p+2) \times (p+2)}_n$ with $p = 0,1,\dots,n-4$ are non-zero and give the N${}^p$MHV amplitudes (continuous horizontal lines). Some examples of amplitudes that vanish in the massless four-dimensional limit are represented by dashed lines.}\label{Baender_6D_pic}
\end{figure}
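The bookkeeping behind this decomposition is simple enough to spell out explicitly. The following minimal sketch (the function name is ours) tags every little group sector with its fate in the massless four-dimensional limit, reproducing the pattern of the figure:
\begin{verbatim}
def sectors(n):
    # Little group sectors a x b of the n-point superamplitude.  In the
    # massless four-dimensional limit only the diagonal sectors with
    # a = b = p + 2, p = 0..n-4, survive and give the N^p MHV amplitudes.
    out = []
    for a in range(1, n):
        for b in range(1, n):
            if a == b and 2 <= a <= n - 2:
                out.append((a, b, "N^%dMHV in 4d" % (a - 2)))
            else:
                out.append((a, b, "vanishes in the massless 4d limit"))
    return out
\end{verbatim}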
This decomposition can be further motivated by translating the Grassmann dependence of $\mathcal{A}_n^{a\times b}$ back to chiral superspace using \cref{eq:mapEta,eq:FT}
\begin{align}
\text{6d:}&&&\mathcal{A}_n^{a\times b}&&\qquad\longrightarrow\qquad &&\text{4d:}&&\mathcal{O}\left((\eta^1)^{a}(\eta^3)^{a}(\eta^{2})^{b}(\eta^4)^{b}\right)
\end{align}
Hence the little group decomposition in six dimensions corresponds to breaking the four-dimensional $SU(4)_R$ symmetry to an $SU(2)_R\times SU(2)_R$ symmetry.
For the little group decomposition to be of use, the complexity of the $\mathcal{A}_n^{a\times b}$ should vary with the values of $a$ and $b$. In the massless four-dimensional theory the simplest amplitudes are the MHV amplitudes. In the massive case, gluon amplitudes with helicity configurations of the form $(+\,-\,\dots\,-)$ or $(-\,+\,\dots\,+)$ are no longer zero and belong to the ultra helicity violating (UHV) amplitudes. The UHV amplitudes are the simplest of the massive amplitudes and vanish in the massless limit.
Within the little group decomposition of the six-dimensional superamplitudes the UHV amplitudes are given by
\begin{align}\label{eq:defUHV}
\text{UHV amplitudes:}&&\mathcal{A}^{ 1 \times 1}_n\,,& &\mathcal{A}^{(n - 1) \times 1}_n\,,&&\mathcal{A}^{1 \times (n - 1)}_n\,,&&\mathcal{A}^{(n - 1) \times (n - 1)}_n\,.
\end{align}
Now that the simplest parts of the superamplitude are identified, the numerical BCFW recursion relation can be used to investigate their analytical form.
\subsection{The UHV amplitudes in $\mathcal{N} = (1,1)$ SYM theory}\label{section:UHV}
Since the UHV amplitudes are not closed under the dual conformal symmetry, we cannot expect the coefficients $\alpha_i$ in our general ansatz \cref{eq:ansatz6d} to be dual conformal invariant. In general the coefficients $\alpha_i$ will be rational functions of the $\rho_n=\tfrac{1}{2}n(n-3)$ Lorentz invariants $\{x_{13}^2,x_{24}^2,\dots\}=:\{s_1,s_2,\dots,s_{\rho_n}\}$ and, similarly to \cref{eq:ansatzAlpha_i}, they can be obtained by solving the linear equations derived from the ansatz
\begin{equation}\label{eq:ansatzXij}
\alpha_i=\frac{\displaystyle \sum\limits_{\{n_j\}_k}a_{n_1\dots n_{\rho_n}}\prod\limits_{\sigma=1}^{\rho_n} s_{\sigma}^{n_\sigma}}{\displaystyle \sum\limits_{\{n_j\}_k}b_{n_1\dots n_{\rho_n}}\prod\limits_{\sigma=1}^{\rho_n} s_{\sigma}^{n_\sigma}}\,,
\end{equation}
where $\{n_j\}_k$ are all different distributions of $k$ powers among the Lorentz invariants. In contrast to the dual conformal invariant case, \cref{eq:ansatzAlpha_i}, numerator and denominator need to be homogeneous polynomials of equal degree $k$.
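The index sets $\{n_j\}_k$ are nothing but the weak compositions of $k$ into $\rho_n$ parts; a minimal stars-and-bars sketch generating them (the name \texttt{distributions} is ours):
\begin{verbatim}
from itertools import combinations

def distributions(k, slots):
    # All ways {n_j} to distribute k powers among `slots` invariants,
    # i.e. weak compositions of k of length `slots` (stars and bars).
    for bars in combinations(range(k + slots - 1), slots - 1):
        ext = (-1,) + bars + (k + slots - 1,)
        yield tuple(ext[i + 1] - ext[i] - 1 for i in range(slots))
\end{verbatim}
For the homogeneous ansatz above, one simply uses the degree-$k$ distributions for the numerator and the denominator alike.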
\subsubsection{Six-point case}
To get an idea of the complexity of the UHV amplitudes we turn to the six point case and make the same ansatz as in \cref{eq:ansatzf6gluon} for the gluon sector
\begin{align}\label{eq:ansatzf6UHV}
f_6^{\text{UHV}}=\frac{i}{x^{2}_{1 3} x^{2}_{2 4} x^{2}_{3 5} x^{2}_{4 6} x^{2}_{5 1} x^{2}_{6 2}x^{2}_{1 4} x^{2}_{2 5} x^{2}_{3 6}}\Bigl(\sum_{i,j}\begin{aligned}[t]&\alpha_{ij}x_{i-1\,j-1}^2\Omega_i^{u}\Omega_j^{u}\\&\quad+\beta_{ij}x_{i-1\,j+1}^2\Omega_i^{u}\Omega_j^{d}\\&\quad\quad+\gamma_{ij}x_{i+1\,j+1}^2\Omega_i^{d}\Omega_j^{d}\Bigr)\,,
\end{aligned}
\end{align}
where the $\Omega^{u/d}_{i}=\Omega^{u/d}_{i,n}$ were defined in \cref{eq:OmegaUpDown}.
Looking for solutions to \cref{eq:ansatzf6UHV}, the first observation is that only two of the $\Omega_i^{u/d}\Omega_j^{u/d}$ are linearly independent. Further restricting to either the little group sectors $1\times 1\,\cup\,(n-1)\times (n-1)$ or $1\times (n-1)\,\cup\,(n-1)\times 1$, every single term in \cref{eq:ansatzf6UHV} gives a solution. This is an impressive display of the simplicity of the Grassmann dependence of the UHV amplitudes, as well as a belated justification of the use of the dual conformal covariant building blocks $\Omega_j^{u/d}$. Unfortunately, all the two term solutions to \cref{eq:ansatzf6UHV}
have very complicated coefficients. In order to remedy this, we try to find non-minimal and ideally dual conformal covariant solutions with simple coefficients. Due to \cref{eq:relationOmegaUD} we already know four dual conformal covariant solutions to \cref{eq:ansatzf6UHV}, namely \cref{eq:f_6} and its cyclic rotations as well as \cref{eq:f6cyclic}.
A key observation towards simple dual conformal covariant representations of the UHV amplitudes is that on the UHV amplitudes the basis elements $\Omega_i^{u/d}\Omega_j^{u/d}$ obey the same dual conformal covariant two term identities \eqref{eq:2termIds} as on the gluon amplitudes. Hence it is natural to look for nice three term solutions similar to the ones obtained for the gluon sector. A very basic way to find three term solutions to \cref{eq:ansatzf6UHV} is to fix one of the coefficients to some simple function of the cross ratios, e.\,g.~$\alpha_{13}=g(u_1,u_2,u_3)$, and solve for the remaining coefficients. With regard to the nice representations \eqref{eq:f6gluon} found for the gluons, we start with the simplest possible choices $g=\pm1$. Indeed, for $\alpha_{13}=-1$ we find four simple dual conformal covariant solutions
\begin{equation}\label{eq:f6UHV}
\begin{aligned}
-i \,\left(x^{2}_{1 3} x^{2}_{2 4} x^{2}_{3 5} x^{2}_{4 6} x^{2}_{5 1} x^{2}_{6 2}x^{2}_{1 4} x^{2}_{2 5} x^{2}_{3 6}\right)\,f_6^{\text{UHV}}&=x_{24}^2\Omega^d_3 \Omega^u_3 - x_{13}^2\Omega^d_2 \Omega^d_6-x_{26}^2\Omega^u_1 \Omega^u_3 \\
&=x_{46}^2\Omega^d_3 \Omega^u_1 - x_{15}^2\Omega^d_4 \Omega^d_6-x_{26}^2\Omega^u_1 \Omega^u_3\\
&=x_{13}^2\Omega^d_6 \Omega^u_4 - x_{15}^2\Omega^d_4 \Omega^d_6-x_{26}^2\Omega^u_1 \Omega^u_3\\
&=x_{24}^2\Omega^d_6 \Omega^u_6 - x_{13}^2\Omega^d_2 \Omega^d_6-x_{26}^2\Omega^u_1 \Omega^u_3\,.\end{aligned}
\end{equation}
As it turns out, the 24 three-term solutions that can be obtained in this way exactly match the 24 gluon representations found in \cref{section:A6}, i.\,e.~the dual conformal extension of the UHV sector includes the gluon sector. This observation is highly nontrivial. At this point it is not clear whether this is a special feature of the six point amplitude or a multiplicity independent statement.
\subsubsection{UHV amplitudes with two massive legs at arbitrary multiplicities}\label{section:UHVtwoMass}
Motivated by compact formulae obtained in reference \cite{Craig:2011ws} for massive $\mathcal{N}=4$ SYM amplitudes with two neighboring massive legs, we investigate the UHV sector in the special kinematics where only the first two legs are massive from the four-dimensional point of view. By cyclic permutations of the indices this is straightforward to translate to the case where another pair of consecutive legs is massive. In six-dimensional language this is equivalent to the restriction to phase space points of the form
\begin{align}\label{eq:M1M2}
p^{5}_{1} &= - p^{5}_{2}\,,& p^{6}_{1} &= - p^{6}_{2}&&\text{and}& p^{5}_{i} &= p^{6}_{i} = 0&& \mbox{ for }& i &= 3,\dots,n\,.
\end{align}
Similar to the four-dimensional calculation in reference \cite{Craig:2011ws} we are searching for a formula valid for all multiplicities. Therefore we make the recursive ansatz
\begin{equation}\label{eq:recursionAnsatz}
f_{n}=f_{n-1}\left(\sum_{i}\alpha_i\Omega_{i,n}^u+\beta_i\Omega_{i,n}^d\right)\,,
\end{equation}
where at each recursion step we only use the $2n$ dual conformal covariant building blocks $\Omega_{i,n}^{u/d}$ defined in \cref{eq:OmegaUpDown}. Due to the special kinematics \cref{eq:M1M2} we do not have to worry about six-dimensional Levi-Civita tensors for multiplicities larger than six, hence there is no need for chiral self-conjugate building blocks. The coefficients $\alpha_i$, $\beta_i$ have mass dimension $-6$ and their functional dependence on the Lorentz invariants $x_{ij}^2$ can be obtained by modifying the ansatz \cref{eq:ansatzXij} accordingly. We successively determine the solutions to \cref{eq:recursionAnsatz}, and at each multiplicity we keep all one term solutions and feed them back into the recursive ansatz \cref{eq:recursionAnsatz}. As initial data we take the ten equivalent representations of the full five point amplitude
following from \cref{eq:A5compact}, \cref{eq:selfconjugate} and the cyclic invariance of
the amplitude
\begin{align}
f_5&=- \frac{i}{x^{2}_{13} x^{2}_{24} x^{2}_{35} x^{2}_{41} x^{2}_{52}}\Omega_{i,5}^u\,,&f_5&= \frac{i}{x^{2}_{13} x^{2}_{24} x^{2}_{35} x^{2}_{41} x^{2}_{52}}\Omega_{i,5}^d\,.
\end{align}
Note that the discrete symmetries making the above 10 representations identical only
hold within five-point kinematics. Only the two $f_{n}$ of this set proportional to $\Omega_{1,5}^u$ or $\Omega_{5,5}^d$ yield one term solutions in the recursive construction of $f_6$, and out of the four one term solutions they produce, again only two, namely
\begin{align}
f_6^{\text{UHV}}&=-\frac{i}{x^{2}_{13} x^{2}_{24} x^{2}_{35} x^{2}_{46} x^{2}_{51}x^{2}_{62}}\frac{\Omega_{1,5}^u\Omega_{4,6}^u}{x^{2}_{14}x^{2}_{25}}\\
\intertext{and}
f_6^{\text{UHV}}&=\frac{i}{x^{2}_{13} x^{2}_{24} x^{2}_{35} x^{2}_{46} x^{2}_{51}x^{2}_{62}}\frac{\Omega_{5,5}^d\Omega_{5,6}^u}{x^{2}_{13}x^{2}_{25}} \,,
\end{align}
lead to one term solutions for $f_7$. Interestingly, both solutions are dual conformal covariant with inversion weight one on each dual point, just like the full amplitude.
Both solutions for $f_6^{\text{UHV}}$ nicely evolve through all subsequent recursion steps. Looking at the two representations they yield for the UHV amplitudes of multiplicity seven
\begin{align}
f_7^{\text{UHV}}&=-\frac{i}{x^{2}_{13} x^{2}_{24} x^{2}_{35} x^{2}_{46} x^{2}_{57}x^{2}_{61}x^{2}_{72}}\Omega_{1,5}^u\frac{\Omega_{4,6}^u}{x^{2}_{14}x^{2}_{25}}\frac{\Omega_{5,7}^u}{x^{2}_{15}x^{2}_{26}}\\
\intertext{and}
f_7^{\text{UHV}}&=\frac{i}{x^{2}_{13} x^{2}_{24} x^{2}_{35} x^{2}_{46} x^{2}_{57}x^{2}_{61}x^{2}_{72}}\Omega_{5,5}^d\frac{\Omega_{5,6}^u}{x^{2}_{13}x^{2}_{25}}\frac{\Omega_{6,7}^u}{x^{2}_{13}x^{2}_{26}}\,,
\end{align}
it is straightforward to generalize them to arbitrary multiplicities. We conjecture the formulae
\begin{align}\label{eq:UHV1}
f_n^{\text{UHV}}&=-\frac{i}{\prod x^2_{i\,i+2}}\Omega_{1,5}^u\prod_{i=6}^n\frac{\Omega_{i-2,i}^u}{x^{2}_{1\,i-2}x^{2}_{2\,i-1}}\\
\intertext{and}\label{eq:UHV2}
f_n^{\text{UHV}}&=\frac{i}{\prod x^2_{i\,i+2}}\Omega_{5,5}^d\prod_{i=6}^n\frac{\Omega_{i-1,i}^u}{x^{2}_{13}x^{2}_{2\,i-1}}\,,
\end{align}
to be valid representations of the UHV amplitudes for multiplicities greater than four. Up to multiplicity $n=13$ both formulae have been checked by determining the solutions to the recursive ansatz \cref{eq:recursionAnsatz}, which seems sufficient to us to consider \cref{eq:UHV1,eq:UHV2} proven.
With regard to the three term solutions \eqref{eq:f6UHV} for all gluon and UHV amplitudes on general kinematics, we expect the formulae \cref{eq:UHV1,eq:UHV2} to be valid for other sectors as well. The natural guess is of course that the dual conformal extensions of the UHV amplitudes on the special kinematics \cref{eq:M1M2} produce the correct gluon amplitudes. However, this is not the case. The reason might be that the gluon sector does not undergo the same significant simplifications as the UHV sector if we specialize the kinematics. Fortunately, the dual conformal extensions \cref{eq:UHV1,eq:UHV2} yield an even bigger class of amplitudes. We find the remarkable result that \cref{eq:UHV1} is equal to the superamplitude on all little group sectors of the form $1\times a$, $(n-1)\times a$, whereas \cref{eq:UHV2} is correct for the chiral conjugate little group sectors $a\times 1$, $a\times (n-1)$. We indicate this by writing
\begin{align}\label{eq:master1}
f_n^{1\times a\;(n-1)\times a}&=-\frac{i}{\prod x^2_{i\,i+2}}\Omega_{1,5}^u\prod_{i=6}^n\frac{\Omega_{i-2,i}^u}{x^{2}_{1\,i-2}x^{2}_{2\,i-1}}\\
\intertext{and}\label{eq:master2}
f_n^{a\times 1\;a\times (n-1)}&=\frac{i}{\prod x^2_{i\,i+2}}\Omega_{5,5}^d\prod_{i=6}^n\frac{\Omega_{i-1,i}^u}{x^{2}_{13}x^{2}_{2\,i-1}}\,.
\end{align}
Clearly the chiral conjugate of the formula for $f_n^{1\times a\;(n-1)\times a}$ is an alternative representation of $f_n^{a\times 1\;a\times (n-1)}$ and vice versa.
\section{From massless 4d to massless 6d superamplitudes }\label{section:uplift_huang}
A very exciting question, first discussed in \cite{Huang:2011um}, is whether or not it is possible to obtain the massless tree-level superamplitudes of six-dimensional ${\cal N}=(1,1)$ SYM by uplifting the massless non-chiral superamplitudes of four-dimensional ${\cal N}=4$ SYM. If so, as claimed by the author of \cite{Huang:2011um}, this implies that the massive four-dimensional amplitudes of ${\cal N}=4$ SYM can be obtained from the massless ones. Since the non-chiral superamplitudes of ${\cal N}=4$ SYM are straightforward to obtain using the well-behaved non-chiral BCFW recursion, described in \cref{section:BCFWnonChiral}, such a correspondence could provide an easy way to obtain the tree amplitudes of ${\cal N}=(1,1)$ SYM.
In this section we want to thoroughly investigate the potential uplift of the massless four-dimensional amplitudes and thereby clarify some points in \cite{Huang:2011um}.
\subsection{Dimensional reduction of $\mathcal{N} = (1,1)$ SYM revisited}\label{uplift_huang_diskussion}
As explained in detail in \cref{section:dimensional_reduction}, performing the dimensional reduction of the superamplitudes of $\mathcal{N} = (1,1)$ SYM to massless four dimensions yields the non-chiral superamplitudes of $\mathcal{N} = 4$ SYM. The symmetries of the six-dimensional and four-dimensional superamplitudes have been discussed in detail in \cref{section:symmetries_N=4,generators_max_6d}. The most relevant in this discussion are the discrete symmetry under chiral conjugation and the $R$-symmetry of the four-dimensional superamplitudes. In particular, the invariance under the $R$-symmetry generators $\mathpzc{m}_{\,n m}$ and $\widetilde{\mathpzc{m}}_{\,n' m'}$ implies that all $R$-symmetry indices within a superamplitude are contracted.
With the help of the maps \cref{eq:grassmann_map,eq:MapSpinors6d4d} between the six-dimensional on-shell variables $\{\lambda_i^A$, $\tilde\lambda_{i\,A}$, $\xi_i^a$, $\tilde\xi_i^{\dot{a}}\}$ and the massless four-dimensional on-shell variables $\{\lambda_i^\alpha$, $\tilde\lambda_i^{\dot{\alpha}}$, $\eta_i^m$, $\tilde\eta_i^{m'}\}$ it is straightforward to obtain the projection of every six-dimensional object. Since there is a one-to-one map between the supermomentum conserving delta functions \eqref{eq:projektion_delta}, we drop them from the start and investigate the correspondence
\begin{align}\label{eq:uplift}
f_n^{\text{6d}}&&&\begin{gathered}[c]\underrightarrow{\quad\text{projection}\quad}\\\overleftarrow{\quad\;\;\,\text{uplift?}\;\;\,\quad}\end{gathered} && f_n^{\text{4d}}\,.
\end{align}
The tree-level amplitudes of $\mathcal{N} = (1,1)$ SYM theory consist of Lorentz invariant contractions of momenta $p_i$ and supermomenta $q_i$, $\tilde{q}_i$. The only purely bosonic Lorentz invariants are traces of an even number of momenta $(k_i)_{AB}$, $(k_i)^{AB}$. However, chiral conjugate traces project onto the same four-dimensional trace,
\begin{align}
\begin{aligned}[c]\text{Tr}^{6d}(\Gamma_{+}\slashed{k}_1 \slashed{k}_2 \dots \slashed{k}_{2 n})&=(k_1)_{A_1A_2}\dots(k_{2n})^{A_{2n}A_1}\\\text{Tr}^{6d}(\Gamma_{-}\slashed{k}_1 \slashed{k}_2 \dots \slashed{k}_{2 n})&=(k_1)^{A_1A_2}\dots(k_{2n})_{A_{2n}A_1}\end{aligned}&&\longrightarrow & &\mathop{\mathrm{Tr}}{}^{4d}\left(\slashed{k}_1 \slashed{k}_2 \dots \slashed{k}_{2 n}\right)
\end{align}
where $\slashed{k}_i$ denotes the contraction of the momentum $k_i$ with either the six-dimensional or the four-dimensional gamma matrices and $\Gamma_{\pm}=\tfrac{1}{2}(1\pm\gamma^7)$. Hence, the presence of traces in $f_n^{\text{6d}}$ that are not chiral self-conjugate would already spoil the uplift. The chiral conjugate traces differ by terms containing the six-dimensional Levi-Civita tensor. Since $\mathcal{N} = (1,1)$ SYM is a non-chiral theory, it is symmetric under chiral conjugation $(p_i)_{AB}\leftrightarrow (p_i)^{AB}$, $q_i\leftrightarrow \tilde{q}_i$, and its amplitudes are therefore free of six-dimensional Levi-Civita tensors. In conclusion, the only purely bosonic invariants in $f_n^{\text{6d}}$ are chiral self-conjugate traces, whose projections can be uniquely uplifted from four dimensions:
\begin{align}
\tfrac{1}{2}\mathop{\mathrm{Tr}}{}^{6d}\left(\slashed{k}_1 \slashed{k}_2 \dots \slashed{k}_{2 n}\right) &&\begin{gathered}[c]\underrightarrow{\qquad\vphantom{A}\quad}\\\overleftarrow{\quad \vphantom{A}\qquad}\end{gathered} & & \mathop{\mathrm{Tr}}{}^{4d}\left(\slashed{k}_1 \slashed{k}_2 \dots \slashed{k}_{2 n}\right)
\end{align}
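To make the statement about the Levi-Civita terms explicit, note that $\Gamma_{+}-\Gamma_{-}=\gamma^7$, so the difference of the two chiral traces is $\mathop{\mathrm{Tr}}{}^{6d}(\gamma^{7}\slashed{k}_1\cdots\slashed{k}_{2n})$. By the standard gamma matrix trace identities this vanishes for $2n<6$, while the first non-trivial case is, up to a convention-dependent normalization,
\begin{equation}
\mathop{\mathrm{Tr}}{}^{6d}\left(\gamma^{7}\slashed{k}_1 \slashed{k}_2 \dots \slashed{k}_{6}\right)\propto\epsilon_{\mu_1\cdots\mu_6}\,k_1^{\mu_1}\cdots k_6^{\mu_6}\,,
\end{equation}
which contains exactly the Levi-Civita terms excluded by non-chirality.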
Inserting the definition of the gamma matrices, one may write the four-dimensional trace as the sum of two chiral conjugate traces of four-dimensional Pauli matrices:
%
\begin{equation}
\mathop{\mathrm{Tr}}{}^{4d}\left(\slashed{k}_1 \slashed{k}_2 \dots \slashed{k}_{2 n}\right) =(k_1)_{\alpha_1\dot{\alpha}_2}\dots(k_{2n})^{\dot{\alpha}_{2n}\alpha_1}+(k_1)^{\dot{\alpha}_1\alpha_2}\dots(k_{2n})_{\alpha_{2n}\dot{\alpha}_1}
\end{equation}
%
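As a quick consistency check of this decomposition, consider the lowest case $2n=2$: with the standard normalization $\mathop{\mathrm{Tr}}\left(\sigma^\mu\bar\sigma^\nu\right)=2\eta^{\mu\nu}$ each chiral half contributes $2\,k_1\!\cdot\! k_2$,
\begin{equation}
(k_1)_{\alpha\dot{\alpha}}(k_2)^{\dot{\alpha}\alpha}=(k_1)^{\dot{\alpha}\alpha}(k_2)_{\alpha\dot{\alpha}}=2\,k_1\!\cdot\! k_2\,,
\end{equation}
reproducing the familiar $\mathop{\mathrm{Tr}}{}^{4d}\left(\slashed{k}_1\slashed{k}_2\right)=4\,k_1\!\cdot\! k_2$.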
There are three possible Lorentz invariants containing supermomenta. All of them have a unique projection to four dimensions:
\begin{align}
\langle q_{i}|k_1 \dots k_{2 n}|\tilde{q}_{j}] &&\longrightarrow & & \langle q^{1}_{i}|{k}_1 \dots k_{2 n}|q^{4}_{j} \rangle+ [\tilde{q}_{i 3}|k_1 \dots k_{2 n}|\tilde{q}_{j 2}]\label{eq:type1}\\
\langle q_{i}|k_1 \dots k_{2 n + 1}|q_{j}\rangle &&\longrightarrow && \langle q^{1}_{i}|k_1 \dots k_{2 n + 1}|\tilde{q}_{j 3}] - [\tilde{q}_{i 3}|k_1 \dots k_{2 n + 1}|q_{j}^{1}\rangle\label{eq:type2}\\
[\tilde{q}_{i}|k_1 \dots k_{2 n + 1}|\tilde{q}_{j}] &&\longrightarrow && \langle q^{4}_{i}|k_1 \dots k_{2 n + 1}|\tilde{q}_{j 2}] - [\tilde{q}_{i 2}|k_1 \dots k_{2 n + 1}|q_{j}^{4}\rangle\label{eq:type3}
\end{align}
Non-chirality of the four-dimensional superamplitudes implies their invariance under the exchanges $q^{1}_{i}\leftrightarrow \tilde{q}_{i 3}$ and $q^{4}_{i}\leftrightarrow \tilde{q}_{i 2}$. Since Lorentz invariants of the last two types, \cref{eq:type2,eq:type3}, can only occur pairwise in a six-dimensional superamplitude, it follows that the projection of a six-dimensional superamplitude always has a manifest chiral symmetry in four dimensions. Evidently, none of these three six-dimensional Lorentz invariants leads to a manifest $R$-symmetry in four dimensions. However, any reasonable representation of $f_n^{\text{4d}}$ has a manifest $R$-symmetry. In conclusion, a potential uplift of $f_n^{\text{4d}}$ to six dimensions can only consist of building blocks whose projection to four dimensions is $R$-symmetric. From the investigation of the three types of six-dimensional Lorentz invariants and their projections, \cref{eq:type1,eq:type2,eq:type3}, it immediately follows that there is only one such object:
\begin{equation} \label{eq:correspondence6d4d}
\begin{aligned}
\langle q_{i}|k_1 \dots k_{2 n}|\tilde{q}_{j}]+[\tilde{q}_{i}|k_1 \dots k_{2 n}|q_{j}\rangle &&\begin{gathered}[c]\underrightarrow{\qquad\vphantom{A}\quad}\\\overleftarrow{\quad \vphantom{A}\qquad}\end{gathered} && \langle q^{m}_{i}|{k}_1 \dots k_{2 n}|q_{j\,m} \rangle+ [\tilde{q}_{i m'}|k_1 \dots k_{2 n}|\tilde{q}_{j}^{m'}]
\end{aligned}
\end{equation}
Contrary to the claim in \cite{Huang:2011um}, there is no combination of six-dimensional Lorentz invariants of the second and third type, \cref{eq:type2,eq:type3}, that has an $R$-invariant projection to four dimensions. For further details see \cref{sec:Connection_six_four_Invariants}.
We conclude that if a correspondence of the form \cref{eq:uplift} exists, then the involved representations of $f_n^{\text{6d}}$ and $f_n^{\text{4d}}$ only contain the building blocks \cref{eq:correspondence6d4d}. As will be explained in the next section, for multiplicities larger than five this is a severe constraint on the representations of $f_n^{\text{6d/4d}}$.
\subsection{Uplifting massless superamplitudes from four to six dimensions}\label{section:check_uplift}
We want to discuss the implications of \cref{eq:correspondence6d4d}. At the four-point level $f_4^{\text{4d}}$ is purely bosonic and the uplift is trivial,
\begin{align}
f_4^{\text{4d}}=-\frac{i}{x_{13}^2x_{24}^2} &&\Longrightarrow & &f_4^{\text{6d}}=-\frac{i}{x_{13}^2x_{24}^2}\,.
\end{align}
At five points, any representation of $f_5^{\text{4d}}$ that has a manifest $R$-symmetry and a manifest symmetry under chiral conjugation automatically consists only of the building blocks \cref{eq:correspondence6d4d}. Since any reasonable representation of $f_5^{\text{4d}}$ has a manifest $R$-symmetry, and the chiral symmetry can be made manifest by replacing e.\,g.~the MHV part by the chiral conjugate of the $\overline{\text{MHV}}$ part, any representation of $f_5^{\text{4d}}$ can be uplifted to six dimensions. Uplifting the representation \cref{eq:f_5_4d},
\begin{equation}
f_5^{\text{4d}}=
\frac{i}{x_{1 3}^2 x_{2 4}^4 x_{2 5}^4 x_{3 5}^2 x_{4 1}^2}\left( \langle B_{5 4 2}|\, 1\, 2\, 3\,| B_{5 4 2}\rangle +
[ \tilde B_{5 4 2}|\, 1\, 2\, 3\,| \tilde B_{5 4 2}]\right)
\end{equation}
obtained from the BCFW recursion, yields the following representation of the six-dimensional amplitude
\begin{align}\label{eq:f_5uplift}
f_5^{\text{6d}}&=
\frac{i}{x_{1 3}^2 x_{2 4}^4 x_{2 5}^4 x_{3 5}^2 x_{4 1}^2}\tfrac{1}{2}\left( \langle B_{5 4 2}|\, 1\, 2\, 3\,| \tilde{B}_{5 4 2}] +
[ \tilde B_{5 4 2}|\, 1\, 2\, 3\,| B_{5 4 2}\rangle\right) \\
&=i\frac{\Omega_{542;123;542}}{x_{1 3}^2 x_{2 4}^4 x_{2 5}^4 x_{3 5}^2 x_{4 1}^2}\,,
\end{align}
where the factor of $\tfrac{1}{2}$ originates from the definition \eqref{eq:defBB} and we inserted the definition of $\Omega_{I;J;K}$, \cref{eq:defOmegaIJK}. We checked numerically that \cref{eq:f_5uplift} is indeed equal to the five-point amplitude in six dimensions.
Unfortunately, the uplift already becomes non-trivial at multiplicity six. Let $\{\Omega_i\}$ denote a set of the chiral self-conjugate building blocks \eqref{eq:correspondence6d4d} for the six-dimensional superamplitudes,
%
\begin{equation}
\Omega_i \qquad\begin{gathered}[c]\underrightarrow{\qquad\vphantom{A}\quad}\\\overleftarrow{\quad \vphantom{A}\qquad}\end{gathered} \qquad \omega_i+\tilde{\omega}_i
\end{equation}
where $\omega_i=\mathcal{O}(q^2)$ and $\tilde{\omega}_i=\mathcal{O}(\tilde{q}^2)$ are the chiral conjugates in the projection of $\Omega_i$. As a consequence of \cref{eq:correspondence6d4d}, an upliftable representation of the six-point amplitudes has the form
\begin{equation}\label{eq:f_6uplift}
f_6^{\text{4d}}=\sum_{i,j}\alpha_{ij}(\omega_i+\tilde{\omega}_i)(\omega_j+\tilde{\omega}_j)
\end{equation}
and uplifts to
\begin{equation}
f_6^{\text{6d}}=\sum_{i,j}\alpha_{ij}\Omega_i \Omega_j\,.
\end{equation}
From \cref{eq:f_6uplift} it follows that
\begin{align}
\left(f_6^{\text{4d}}\right)^{\text{MHV}}\!\!\!&=\sum_{i,j}\alpha_{ij}\tilde{\omega}_i\tilde{\omega}_j\,,\,\,\,&\left(f_6^{\text{4d}}\right)^{\text{NMHV}}\!\!\!&=\sum_{i,j}\alpha_{ij}(\omega_i\tilde{\omega}_j+\tilde\omega_i\omega_j)\,,\,\,\,&\left(f_6^{\text{4d}}\right)^{\overline{\text{MHV}}}\!\!\!&=\sum_{i,j}\alpha_{ij}\omega_i\omega_j\,.
\end{align}
Comparing this with the representation \cref{eq:f_6_4d} obtained for $f_6^{\text{4d}}$ from the BCFW recursion, it is apparent that a generic representation of $f_6^{\text{4d}}$ does not have the form \cref{eq:f_6uplift} required for an uplift. In contrast to the five-point case, making the chiral symmetry manifest does not solve the problem, because the minimal helicity violating (minHV) NMHV amplitudes are independent of the MHV and $\overline{\text{MHV}}$ amplitudes. Hence, while it is straightforward to turn a generic representation into the form
\begin{equation}
f_6^{\text{4d}}=\sum_{i,j}\beta_{ij}\omega_i\omega_j+\gamma_{ij}(\omega_i\tilde{\omega}_j+\tilde\omega_i\omega_j)+\beta_{ij}\tilde{\omega}_i\tilde{\omega}_j\,,
\end{equation}
but in general the coefficients $\beta_{ij}$ and $\gamma_{ij}$ are unrelated. This is the key issue that, to our mind, has been overlooked in reference \cite{Huang:2011um}. As a result, finding an arbitrary representation of $f_n^{\text{4d}}$ is not sufficient to obtain the six-dimensional amplitude. In fact, under the assumption that the uplift works, obtaining $f_n^{\text{6d}}$ is equivalent to finding a representation of the form
\begin{equation}\label{eq:f_nuplift}
f_n^{\text{4d}}=\sum_{|I|=n-4}\alpha_{I}\prod_{k=1}^{n-4}(\omega_{i_k}+\tilde{\omega}_{i_k})\,,
\end{equation}
for the four-dimensional amplitude. While such a representation trivially uplifts to
\begin{equation}\label{eq:f_nlifted}
f_n^{\text{6d}}=\sum_{|I|=n-4}\alpha_{I}\prod_{k=1}^{n-4}\Omega_{i_k}\,,
\end{equation}
obtaining it is non-trivial and a rigorous proof that \cref{eq:f_nlifted} is always a valid representation of the six-dimensional superamplitude is still missing. Of course we could use a numerical implementation of the non-chiral BCFW recursion relation to determine a solution to an ansatz of the form \cref{eq:f_nuplift} but this is not easier than determining $f_n^{\text{6d}}$ directly, using the methods described in \cref{section:IdeaBCFW}.
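For concreteness, the ansatz-fitting strategy can be sketched in a few lines of Python. This is a minimal sketch only: the routines \texttt{evaluate\_building\_blocks} and \texttt{evaluate\_bcfw\_amplitude} are hypothetical stand-ins for a numerical implementation of the building blocks and of the non-chiral BCFW recursion, and are not part of any existing library.
\begin{verbatim}
import numpy as np

def solve_ansatz(samples, evaluate_building_blocks, evaluate_bcfw_amplitude):
    """Fit coefficients alpha with sum_i alpha_i B_i(s) = f_n(s).

    Each sample s combines one kinematic point with one choice of
    Grassmann components, so every sample yields one linear equation
    for the coefficients alpha_i (numbers at fixed kinematics).
    """
    A = np.array([evaluate_building_blocks(s) for s in samples])  # N_s x N_B
    b = np.array([evaluate_bcfw_amplitude(s) for s in samples])   # length N_s
    alpha, res, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    # A vanishing residual signals that the ansatz spans the amplitude,
    # while a rank deficit signals identities among the building blocks.
    return alpha, res, rank
\end{verbatim}
In practice one would repeat the fit at several kinematic points and reconstruct the coefficients as rational functions of the $x_{ij}^2$.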
Although it seems safe to say that the uplift is of no practical relevance for the determination of the six-dimensional superamplitudes, it remains very fascinating from a theoretical point of view. It is intriguing that the correct representation of the MHV superamplitude
\begin{equation}\label{eq:f_nMHV}
(f_n^{\text{4d}})^{\text{MHV}}=\sum_{|I|=n-4}\alpha_{I}\prod_{k=1}^{n-4}\tilde{\omega}_{i_k}\,,
\end{equation}%
might be sufficient to obtain the whole six-dimensional superamplitude, or equivalently all massive four-dimensional amplitudes.
One thing that would immediately invalidate the uplift is the existence of identities among the $\omega_{i}+\tilde{\omega}_{i}$ that do not uplift to identities of the $\Omega_i$. Though we do not have a concrete counterexample to the uplift, there are indeed four-dimensional identities of strings of momenta $k_i$ that do not have a six-dimensional counterpart, i.\,e.
\begin{align}\label{eq:identity4d}
\text{4d:}&& (p_ik_1\dots k_{2n}p_i)_{\a}^{\;\;\b} &=\left(\,|i\rangle[i|k_1\dots k_{2n}|i]\langle i|\,\right)_{\a}^{\;\;\b}=-(p_ik_{2n}\dots k_1p_i)_{\a}^{\;\;\b}\\
\intertext{but}
\text{6d:}&& (p_ik_1\dots k_{2n}p_i)_{A}^{\;\;B} &=\left(\,|i_{\dot{a}}][i^{\dot{a}}|k_1\dots k_{2n}|i^b\rangle\langle i_b|\,\right)\!\!{}_{A}^{\;\;B}\neq-(p_ik_{2n}\dots k_1p_i)_{A}^{\;\;B}
\end{align}
At this point we do not see how such identities can be prevented from spoiling the uplift without restricting the allowed four-dimensional building blocks.
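As an illustration, the lowest instance $2n=2$ of the four-dimensional identity \eqref{eq:identity4d} follows directly from the Clifford algebra $\{\slashed{k}_1,\slashed{k}_2\}=2\,k_1\!\cdot\! k_2$ together with $[i\,i]=0$,
\begin{equation}
[i|k_1 k_2|i]+[i|k_2 k_1|i]=2\,k_1\!\cdot\! k_2\,[i\,i]=0\,,
\end{equation}
and for general $2n$ the sign arises because reversing the string flips an odd number, $2n+1$, of antisymmetric spinor products. In six dimensions the elementary invariants carry little group indices and are not antisymmetric under exchange, so no analogous uniform sign is produced.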
Using the numerical implementation of the six-dimensional BCFW recursion, it is possible to check the uplift numerically. The easiest way to do so is to make an ansatz \eqref{eq:ansatz6d} for $f_n^{\text{6d}}$ using only the minimal building blocks $\Omega_{ijkl}$ defined in \cref{eq:BBt} and to determine a solution $\alpha_i(\pi)$ for a massless phase space point with momenta of the form $\{p_i^1,p_i^2,p_i^3,p_i^4,0,0\}$. Since the coefficients are functions of the Lorentz invariants $x_{ij}^2$, they have identical numerical values on the ``massive'' phase space point $\{p_i^1,0,p_i^3,p_i^4,0,p_i^2\}$, and we can check whether the obtained coefficients $\alpha_i(\pi)$ provide a solution for the massive amplitudes as well. In fact, we checked up to multiplicity eight that representations of the massless non-chiral amplitudes containing only the minimal building blocks $\langle B_{ijk}|B_{ilm}\rangle+[ \tilde{B}_{ijk}|\tilde{B}_{ilm}]$ always uplift to six dimensions. Since the eight-point amplitude is already very complicated, there is no reason to believe that the uplift of a representation containing only the minimal building blocks will fail beyond eight points. In the case of more complicated building blocks, the identities \eqref{eq:identity4d} might become an issue even at multiplicities below eight.
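The component swap underlying this check is easy to illustrate numerically. The following toy sketch assumes a diagonal metric $\mathrm{diag}(+,-,-,-,-,-)$ with the first listed component time-like; the momenta are generic rather than null, which suffices to demonstrate that all invariants $x_{ij}^2$ coincide on the two embeddings.
\begin{verbatim}
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0, -1.0, -1.0])

def mink_sq(p):
    return p @ ETA @ p

def region_invariants(momenta):
    """All x_ij^2 = (p_i + ... + p_{j-1})^2 for consecutive regions."""
    n = len(momenta)
    return {(i, j): mink_sq(sum(momenta[i:j]))
            for i in range(n) for j in range(i + 2, n + 1)}

rng = np.random.default_rng(0)
p4d = rng.normal(size=(6, 4))
p4d[-1] = -p4d[:-1].sum(axis=0)       # impose momentum conservation

# massless embedding (p1, p2, p3, p4, 0, 0)
massless = [np.concatenate([p, [0.0, 0.0]]) for p in p4d]
# 'massive' embedding (p1, 0, p3, p4, 0, p2): p2 moves to the sixth slot
massive = [np.array([p[0], 0.0, p[2], p[3], 0.0, p[1]]) for p in p4d]

inv0, inv1 = region_invariants(massless), region_invariants(massive)
assert all(np.isclose(inv0[k], inv1[k]) for k in inv0)
print("all x_ij^2 agree on both embeddings")
\end{verbatim}
Since the swap moves a component between two space-like slots, every $x_{ij}^2$ is manifestly unchanged, which is precisely why the fitted coefficients $\alpha_i(\pi)$ carry over to the massive point.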
\section{Conclusion and outlook}
A central motivation for this work was to take first steps towards a generalization of the
analytic construction of massless QCD amplitudes from $\mathcal{N}=4$ SYM ones of \cite{Dixon:2010ik,Schuster:2013aya,Melia:2013epa} to massive QCD amplitudes by employing ${\cal N}=4$ SYM superamplitudes
on the Coulomb branch. For this we constructed all standard and hidden symmetries of the massless six-dimensional superamplitudes of ${\cal N}=(1,1)$ SYM theory, thereby correcting small mistakes in the proof of the dual conformal symmetry given in \cite{Dennen:2010dh}. We exploited the symmetries of the six-dimensional amplitudes to derive the symmetries of massive tree amplitudes in $\mathcal{N}=4$ SYM theory and showed that the five-dimensional dual conformal symmetry of the massive amplitudes leads to the presence of non-local Yangian-like generators $m^{(1)}$, $p^{(1)}$ associated with the masses and momenta in on-shell superspace. An interesting open question is whether or not there exist level-one supermomenta as well.
Furthermore, we explained how analytical formulae for tree-level superamplitudes of $\mathcal{N}=(1,1)$ SYM can be obtained from a numerical implementation of the BCFW recursion relation. The developed method is very general and can be applied to other theories as well. We used it to derive compact, manifestly dual conformally covariant representations of the five- and six-point superamplitudes. To facilitate the investigation of the six-dimensional superamplitudes we proposed a little group decomposition for them. The little group decomposition is the six-dimensional analog of the MHV-band decomposition in 4d introduced in \cite{Craig:2011ws}. It allows a separation into parts of varying complexity as well as the identification of those pieces of the superamplitude that survive the massless limit to four dimensions. We exploited the little group decomposition to study UHV amplitudes, leading to arbitrary-multiplicity formulae valid for large classes of component amplitudes with two consecutive massive legs.
We demonstrated that within a maximally supersymmetric theory it is not always sufficient to consider only gluon amplitudes and to expect the remaining amplitudes to follow by supersymmetry. Indeed, the supersymmetrization of the six-dimensional gluon amplitudes, as performed in reference \cite{Dennen:2009vk} for the three-, four- and five-point amplitudes, will not necessarily yield the correct superamplitude for multiplicities greater than five. We derived examples of supersymmetric, dual conformally covariant representations of the gluon sector which do not coincide with the superamplitude. Nevertheless, we observed that dual conformal extensions, and consequently supersymmetrizations, of subsets of amplitudes reproduce at least part of the other component amplitudes. It would be interesting to investigate this in more detail in the future, since finding dual conformal extensions of subsets of amplitudes is much simpler than finding the whole superamplitude.
In \cite{Huang:2011um} it was claimed that all superamplitudes of $\mathcal{N}=(1,1)$ SYM can be obtained by uplifting massless tree-level superamplitudes of $\mathcal{N}=4$ SYM in non-chiral superspace. In our work we derived the superconformal and dual superconformal symmetries of the non-chiral superamplitudes and used the non-chiral BCFW recursion to prove the dual conformal symmetry as well as to derive the five- and six-point superamplitudes. We thoroughly investigated the implications of a potential uplift by identifying the correct four- and six-dimensional Lorentz invariants that should appear in such a correspondence. By performing numerical checks we confirmed the uplift of representations containing only a restricted set of dual conformally covariant and chiral self-conjugate building blocks up to multiplicity eight. However, we showed that finding a representation of the massless non-chiral superamplitudes of $\mathcal{N}=4$ SYM that can be uplifted is non-trivial for multiplicities larger than five. One possible obstruction to the uplift is the existence of identities of the four-dimensional building blocks that do not uplift to identities of the corresponding six-dimensional building blocks. We gave examples of such identities that need to be avoided, by restricting the allowed building blocks, in order not to spoil the uplift. Despite being of no practical relevance for the determination of the six-dimensional superamplitudes or the massive four-dimensional amplitudes at this point, it is still very fascinating to note that the correct representation of the non-chiral MHV superamplitudes in four dimensions could be sufficient to obtain all six-dimensional superamplitudes, or equivalently all massive four-dimensional amplitudes on the Coulomb branch of $\mathcal{N}=4$ SYM theory.
\acknowledgments
We thank H.~Elvang and M.~Kiermaier for discussions. JP thanks the Pauli Center for Theoretical
Studies Z\"urich
and the Institute for Theoretical Physics at the ETH Z\"urich for hospitality and
support in the framework of a visiting professorship.